Multi-Processor Job Scheduling in High-Performance Computing (HPC) Systems

Author(s):  
Annu Priya ◽  
Sudip Kumar Sahana

Processor scheduling is one of the thrust areas of computer science. Emerging technologies rely on very large numbers of processors to execute their workloads, from large-scale games and software development tools to quantum computing, and many complex hard real-time problems are now solved through GPU programming. The primary goal of scheduling is to reduce time complexity and manual effort. Several traditional techniques exist for processor scheduling, but their performance degrades when very large numbers of tasks must be processed, and most scheduling problems are NP-hard in nature. GPU scheduling is itself a complex issue, since thousands of threads run in parallel and must be scheduled efficiently. For such large-scale scheduling problems, the performance of state-of-the-art algorithms is poor, whereas evolutionary and genetic-based algorithms are observed to perform better on large-scale combinatorial problems.
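
The evolutionary and genetic-based approach highlighted here can be illustrated with a minimal sketch (an illustration under stated assumptions, not the authors' implementation): each chromosome assigns tasks to processors and is evolved to reduce makespan. The task durations, population size, and mutation rate below are arbitrary placeholder values.

```python
import random

def makespan(assignment, durations, n_procs):
    """Longest per-processor load for a task-to-processor assignment."""
    loads = [0.0] * n_procs
    for task, proc in enumerate(assignment):
        loads[proc] += durations[task]
    return max(loads)

def genetic_schedule(durations, n_procs, pop_size=50, generations=200,
                     mutation_rate=0.1):
    """Evolve task-to-processor assignments that minimize makespan."""
    n_tasks = len(durations)
    pop = [[random.randrange(n_procs) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, durations, n_procs))
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_tasks):               # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] = random.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda a: makespan(a, durations, n_procs))
    return best, makespan(best, durations, n_procs)

# Example: 20 tasks with random durations scheduled on 4 processors.
tasks = [random.uniform(1, 10) for _ in range(20)]
schedule, span = genetic_schedule(tasks, n_procs=4)
print(f"best makespan: {span:.2f}")
```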


Author(s):  
Vincent Breton ◽  
Eddy Caron ◽  
Frederic Desprez ◽  
Gael Le Mahec

As grids become increasingly attractive for solving complex problems with high computational and storage requirements, bioinformatics is starting to be ported to large-scale platforms. The BLAST kernel, one of the main cornerstones of high-performance genomics, was among the first applications ported to such platforms. However, while a simple parallelization was enough for the first proof of concept, use on production platforms required more optimized algorithms. In this chapter, we review existing parallelization and “gridification” approaches, related issues such as data management and replication, and a case study using the DIET middleware over the Grid’5000 experimental platform.
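
The “simple parallelization” the chapter starts from can be pictured as splitting the query set across workers and merging the results. The sketch below is a hypothetical stand-in (the blast_worker function and database name are placeholders); the production systems discussed, such as DIET over Grid’5000, add data management and replication on top of this idea.

```python
from concurrent.futures import ProcessPoolExecutor

def blast_worker(query_chunk, database):
    """Hypothetical stand-in for running BLAST on a chunk of queries."""
    # A real deployment would invoke the BLAST kernel (e.g. via subprocess)
    # against a local replica or fragment of the database.
    return [(q, f"hits-for-{q}") for q in query_chunk]

def parallel_blast(queries, database, n_workers=4):
    """Split queries into chunks, search in parallel, merge the results."""
    chunks = [queries[i::n_workers] for i in range(n_workers)]
    results = []
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for part in pool.map(blast_worker, chunks, [database] * n_workers):
            results.extend(part)
    return results

if __name__ == "__main__":
    hits = parallel_blast([f"seq{i}" for i in range(16)], "nr-fragment-0")
    print(len(hits), "query results")
```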


2013 ◽  
Vol 662 ◽  
pp. 957-960 ◽  
Author(s):  
Jing Liu ◽  
Xing Guo Luo ◽  
Xing Ming Zhang ◽  
Fan Zhang

Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and a flexible computational architecture. The performance of its scheduling system influences the cost benefit of this computing paradigm. To reduce energy consumption and improve profit, a job scheduling model based on the particle swarm optimization (PSO) algorithm is established for cloud computing. Experiments on the open-source cloud computing simulation platform CloudSim, comparing the proposed algorithm against GA and random scheduling algorithms, show that it obtains better solutions with respect to energy cost and profit.
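
A minimal sketch of a PSO-based job scheduler of this general kind is shown below; it is not the authors' CloudSim model, and it optimizes only makespan, leaving out the paper's energy and profit terms. Particle positions are continuous vectors that are decoded to VM assignments; job lengths and VM speeds are illustrative.

```python
import random

def fitness(position, job_lengths, vm_speeds):
    """Makespan of the schedule decoded from a particle position."""
    n_vms = len(vm_speeds)
    finish = [0.0] * n_vms
    for job, x in enumerate(position):
        vm = int(x) % n_vms                      # decode to a VM index
        finish[vm] += job_lengths[job] / vm_speeds[vm]
    return max(finish)

def pso_schedule(job_lengths, vm_speeds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    n_jobs, n_vms = len(job_lengths), len(vm_speeds)
    pos = [[random.uniform(0, n_vms) for _ in range(n_jobs)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_jobs for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: fitness(p, job_lengths, vm_speeds))
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_jobs):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), n_vms - 1e-9)
            if fitness(pos[i], job_lengths, vm_speeds) < fitness(pbest[i], job_lengths, vm_speeds):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=lambda p: fitness(p, job_lengths, vm_speeds))
    return [int(x) % n_vms for x in gbest], fitness(gbest, job_lengths, vm_speeds)

jobs = [random.uniform(100, 1000) for _ in range(20)]   # job lengths (e.g. MI)
vms = [1.0, 1.5, 2.0, 2.5]                              # VM speeds (e.g. MIPS)
assignment, span = pso_schedule(jobs, vms)
print(f"makespan: {span:.1f}")
```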


2020 ◽  
Vol 2020 ◽  
pp. 1-17 ◽  
Author(s):  
Ibrahim Attiya ◽  
Mohamed Abd Elaziz ◽  
Shengwu Xiong

In recent years, cloud computing technology has attracted extensive attention from both academia and industry. Its popularity stems from its ability to deliver global IT services, such as core infrastructure, platforms, and applications, to cloud customers over the web, and it promises on-demand services with new forms of pricing. However, cloud job scheduling remains NP-complete and has become more complicated due to factors such as resource dynamicity and on-demand consumer application requirements. To address this problem, this paper presents a modified Harris hawks optimization (HHO) algorithm based on simulated annealing (SA) for scheduling jobs in the cloud environment. In the proposed HHOSA approach, SA is employed as a local search to improve the convergence rate and the quality of the solutions generated by the standard HHO algorithm. The performance of the HHOSA method is compared with that of state-of-the-art job scheduling algorithms, all implemented on the CloudSim toolkit. Both standard and synthetic workloads are employed to analyze the performance of the proposed HHOSA algorithm. The results demonstrate that HHOSA achieves significant reductions in makespan compared to the standard HHO and other existing scheduling algorithms. Moreover, it converges faster as the search space grows, which makes it appropriate for large-scale scheduling problems.
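
The simulated-annealing local search that HHOSA layers on top of HHO can be sketched on its own, as a simplified stand-in rather than the authors' code: a candidate job-to-VM assignment is perturbed one job at a time, and worse schedules are accepted with a temperature-controlled probability. The cooling schedule and workload figures are placeholder values.

```python
import math
import random

def makespan(assignment, job_lengths, vm_speeds):
    finish = [0.0] * len(vm_speeds)
    for job, vm in enumerate(assignment):
        finish[vm] += job_lengths[job] / vm_speeds[vm]
    return max(finish)

def sa_refine(assignment, job_lengths, vm_speeds,
              t0=10.0, cooling=0.95, steps=500):
    """Simulated-annealing local search over job-to-VM assignments."""
    current = assignment[:]
    best = current[:]
    temp = t0
    for _ in range(steps):
        neighbor = current[:]
        job = random.randrange(len(neighbor))           # move one job
        neighbor[job] = random.randrange(len(vm_speeds))
        delta = (makespan(neighbor, job_lengths, vm_speeds)
                 - makespan(current, job_lengths, vm_speeds))
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbor                          # accept the move
        if makespan(current, job_lengths, vm_speeds) < makespan(best, job_lengths, vm_speeds):
            best = current[:]
        temp *= cooling                                 # cool down
    return best

jobs = [random.uniform(100, 1000) for _ in range(30)]
vms = [1.0, 1.5, 2.0]
start = [random.randrange(len(vms)) for _ in jobs]      # e.g. a candidate from HHO
refined = sa_refine(start, jobs, vms)
print(f"{makespan(start, jobs, vms):.1f} -> {makespan(refined, jobs, vms):.1f}")
```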


2011 ◽  
Vol 474-476 ◽  
pp. 392-397
Author(s):  
Zhi Wei Tang ◽  
Xi Xuan Wu

This article introduces a distributed intelligent surveillance system based on the TMS320DM642. The system platform provides many functions, such as OSD (on-screen display), analog video output, digital video output, hard disk storage, and Ethernet. Users can set rules via the management software, and video input from analog and IP cameras is processed by the DM642 according to these rules; if any event violates the rules, an alarm is raised. The system provides immediate, accurate, and intelligent services for users. To realize complex image processing algorithms on the DM642, we optimize the algorithms for the DSP and propose a series of rapid image processing algorithms. The design emphasizes the feasibility of distributed high-performance processing from both hardware and software aspects, and may easily be applied to other large-scale or hard real-time intelligent information processing systems.


Author(s):  
Reshmi Raveendran ◽  
D. Shanthi Saravanan

With the advent of High Performance Computing (HPC) in large-scale parallel computational environments, better job scheduling and resource allocation techniques are required to deliver Quality of Service (QoS). Job scheduling on large-scale parallel systems has therefore been studied with the goals of minimizing queue time and response time and maximizing overall system utilization. The objective of this paper is to survey recent methods for dynamic resource allocation across multiple computing nodes and the impact of scheduling algorithms. In addition, a quantitative trend-line analysis of dynamic allocation for batch processors is presented. The survey identifies trends in research on dynamic allocation and parallel computing and highlights potential areas for future research and development. The study also proposes the design of an efficient QoS-based dynamic scheduling algorithm. The analysis provides a compelling research platform for optimizing dynamic job scheduling in HPC.


2005 ◽  
Vol 16 (02) ◽  
pp. 145-162 ◽  
Author(s):  
HENRI CASANOVA

The dominant trend in scientific computing today is the establishment of platforms that span multiple institutions to support applications at unprecedented scales. On most distributed computing platforms, a requirement for achieving high performance is the careful scheduling of distributed application components onto the available resources. While scheduling has been an active area of research for many decades, most of the platform models traditionally used in scheduling research, and in particular network models, break down for platforms spanning wide-area networks. In this paper we examine network modeling issues for large-scale platforms from the perspective of scheduling. The main challenge we address is the development of models that are sophisticated enough to be more realistic than those traditionally used in the field, but simple enough that they are still amenable to analysis. In particular, we discuss issues of bandwidth sharing and topology modeling. Also, while these models can be used to define and reason about realistic scheduling problems, we show that they also provide a good basis for fast simulation, which is the typical method to evaluate scheduling algorithms, as demonstrated in our implementation of the SIMGRID simulation framework.
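
The bandwidth-sharing issue raised here is commonly modeled with max-min fairness; the sketch below is a generic progressive-filling computation, not the exact model used in SIMGRID. Flows are fixed at the fair share of the most constrained link first, and the remaining capacity is redistributed among the other flows.

```python
def max_min_fairness(links, flows):
    """Compute max-min fair bandwidth shares.

    links: dict link_name -> capacity
    flows: dict flow_name -> list of links the flow traverses
    """
    remaining = dict(links)
    unfixed = {f: list(path) for f, path in flows.items()}
    allocation = {}
    while unfixed:
        # Fair share each link could still give to its unfixed flows.
        shares = {}
        for link, cap in remaining.items():
            users = [f for f, path in unfixed.items() if link in path]
            if users:
                shares[link] = cap / len(users)
        if not shares:
            break
        bottleneck = min(shares, key=shares.get)
        share = shares[bottleneck]
        # Fix every flow crossing the bottleneck at that share.
        for f in [f for f, path in unfixed.items() if bottleneck in path]:
            allocation[f] = share
            for link in unfixed[f]:
                remaining[link] -= share
            del unfixed[f]
    return allocation

# Two flows share link "L1"; one also crosses the slower link "L2".
links = {"L1": 100.0, "L2": 40.0}
flows = {"f1": ["L1"], "f2": ["L1", "L2"]}
print(max_min_fairness(links, flows))   # f2 is limited by L2; f1 gets the rest of L1
```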


2012 ◽  
Vol 2012 ◽  
pp. 1-18 ◽  
Author(s):  
Xiaocheng Liu ◽  
Bin Chen ◽  
Xiaogang Qiu ◽  
Ying Cai ◽  
Kedi Huang

An increasing number of high-performance computing parallel applications leverage the power of the cloud for parallel processing. How to schedule these applications to improve quality of service is key to successfully hosting parallel applications in the cloud. The large scale of the cloud makes parallel job scheduling more complicated, as even the simple parallel job scheduling problem is NP-complete. In this paper, we propose a parallel job scheduling algorithm named MEASY. MEASY adopts migration and consolidation to enhance the most popular EASY scheduling algorithm. Our extensive experiments on well-known workloads show that our algorithm delivers good quality of service. For two common parallel job scheduling objectives, it improves the average response time by up to 41.1% (23.1% on average) and the average slowdown by up to 82.9% (69.3% on average). Our algorithm is also robust to inaccurate CPU usage estimation and high migration cost. Our approach involves only trivial modifications to EASY and requires no additional techniques; it is practical and effective in the cloud environment.
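
The EASY algorithm that MEASY extends can be sketched in its plain form, without the paper's migration and consolidation extensions: jobs start FCFS, the first blocked job gets a reservation, and later jobs are backfilled only if they do not delay that reservation. The job and processor figures below are illustrative, and several bookkeeping details are simplified.

```python
def easy_backfill(queue, free_procs, running, now):
    """One simplified EASY aggressive-backfilling pass.

    queue:   list of jobs, each a dict with 'procs' and 'runtime'
    running: list of (end_time, procs) for jobs currently executing
    Returns the jobs started in this pass.
    """
    started = []
    # 1. Start jobs FCFS while they fit.
    while queue and queue[0]["procs"] <= free_procs:
        job = queue.pop(0)
        free_procs -= job["procs"]
        running.append((now + job["runtime"], job["procs"]))
        started.append(job)
    if not queue:
        return started
    # 2. Reserve a start time for the first blocked job.
    head = queue[0]
    avail, shadow, extra = free_procs, None, 0
    for end, procs in sorted(running):
        avail += procs
        if avail >= head["procs"]:
            shadow = end                        # earliest time head can start
            extra = avail - head["procs"]       # procs spare at that time
            break
    if shadow is None:                          # head can never fit; stop here
        return started
    # 3. Backfill later jobs that do not delay the reservation.
    for job in list(queue[1:]):
        fits_now = job["procs"] <= free_procs
        harmless = (now + job["runtime"] <= shadow) or (job["procs"] <= extra)
        if fits_now and harmless:
            queue.remove(job)
            free_procs -= job["procs"]
            if now + job["runtime"] > shadow:
                extra -= job["procs"]           # it occupies reserved-era procs
            running.append((now + job["runtime"], job["procs"]))
            started.append(job)
    return started

# The 2-proc job is backfilled ahead of the blocked 8-proc job.
jobs = [{"procs": 4, "runtime": 10}, {"procs": 8, "runtime": 5}, {"procs": 2, "runtime": 3}]
print(easy_backfill(jobs, free_procs=6, running=[(12, 4)], now=0))
```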


Author(s):  
JANI KUNTESH KETAN ◽  
ARPITA SHAH

Grid computing is growing rapidly as a distributed heterogeneous paradigm for utilizing and sharing large-scale resources to solve complex scientific problems. Scheduling is a key topic for achieving high performance in grid environments: it aims to find a suitable allocation of resources for each job, making effective use of processors so as to minimize job tardiness. Scheduling jobs onto resources in grid computing is complicated by the distributed and heterogeneous nature of the resources, and the efficient scheduling of independent jobs in such a heterogeneous environment is an important problem in domains such as grid computing. In general, finding an optimal schedule for such an environment using traditional sequential methods is NP-hard, whereas heuristic approaches provide near-optimal solutions for complex problems. The ant colony algorithm, one such heuristic, is well suited to the grid scheduling environment through its use of stigmergic communication.
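
A generic ant-colony scheduler of the kind described might look like the sketch below (not the authors' specific formulation): each ant assigns jobs to resources guided by pheromone trails and a heuristic favoring faster resources, and the trails are reinforced along the best schedule found. Parameter values and the workload are placeholders.

```python
import random

def makespan(assignment, job_lengths, speeds):
    load = [0.0] * len(speeds)
    for job, res in enumerate(assignment):
        load[res] += job_lengths[job] / speeds[res]
    return max(load)

def aco_schedule(job_lengths, speeds, n_ants=20, iters=100,
                 alpha=1.0, beta=2.0, rho=0.1, q=1.0):
    n_jobs, n_res = len(job_lengths), len(speeds)
    tau = [[1.0] * n_res for _ in range(n_jobs)]          # pheromone: job x resource
    eta = [[speeds[r] / job_lengths[j] for r in range(n_res)]
           for j in range(n_jobs)]                        # heuristic: prefer fast resources
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            assignment = []
            for j in range(n_jobs):
                weights = [(tau[j][r] ** alpha) * (eta[j][r] ** beta)
                           for r in range(n_res)]
                assignment.append(random.choices(range(n_res), weights)[0])
            span = makespan(assignment, job_lengths, speeds)
            if span < best_span:
                best, best_span = assignment, span
        # Evaporate, then deposit pheromone along the best assignment so far.
        for j in range(n_jobs):
            for r in range(n_res):
                tau[j][r] *= (1 - rho)
        for j, r in enumerate(best):
            tau[j][r] += q / best_span
    return best, best_span

jobs = [random.uniform(10, 100) for _ in range(25)]
resources = [1.0, 1.5, 2.0, 3.0]
sched, span = aco_schedule(jobs, resources)
print(f"best makespan: {span:.2f}")
```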


2020 ◽  
Vol 4 (4) ◽  
pp. 664-671
Author(s):  
Gabriella Icasia ◽  
Raras Tyasnurita ◽  
Etria Sepwardhani Purba

The Examination Timetabling Problem is a combinatorial optimization problem that is proven to be NP-hard. On large datasets it becomes complex and time-consuming to solve manually, so heuristics are used to provide reasonably good solutions that meet the problem constraints. In this study, a real-world examination timetabling dataset (the Toronto dataset) is solved using Hill-Climbing and Tabu Search algorithms. Unlike previous approaches in the literature, which treat Tabu Search as a meta-heuristic, we implement Tabu Search within a hyper-heuristic framework. The main objective of this study is to provide a better understanding of applying Hill-Climbing and Tabu Search in hyper-heuristics to solve timetabling problems. The experimental results show that Hill-Climbing and Tabu Search successfully automate the timetabling process, reducing the penalty by 18-65% from the initial solution. We also tested the algorithms over 10,000-100,000 iterations and compared the results with a previous study; most of the solutions generated in this experiment are better than those of the previous study, which also used the Tabu Search algorithm.
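
The search logic behind such a Tabu-Search-driven hill climber can be sketched generically; this is not the Toronto-dataset implementation, and the penalty function and neighborhood move below are toy placeholders. Improving moves are accepted greedily while recently applied moves are held in a tabu list to avoid cycling.

```python
import random
from collections import deque

def tabu_hill_climb(initial, penalty, neighbors, max_iters=10000, tabu_size=50):
    """Hill climbing with a tabu list over move identifiers.

    initial:   starting solution (e.g. an exam -> timeslot mapping)
    penalty:   function solution -> cost to minimize
    neighbors: function solution -> list of (move_id, new_solution)
    """
    current, best = initial, initial
    tabu = deque(maxlen=tabu_size)
    for _ in range(max_iters):
        candidates = [(m, s) for m, s in neighbors(current) if m not in tabu]
        if not candidates:
            break
        move, sol = min(candidates, key=lambda ms: penalty(ms[1]))
        if penalty(sol) <= penalty(current):    # hill-climbing acceptance
            current = sol
            tabu.append(move)
        if penalty(current) < penalty(best):
            best = current
    return best

# Toy usage with placeholder exams/timeslots: the penalty counts clashing pairs.
exams, slots = 10, 4
clashes = {(i, j) for i in range(exams) for j in range(i + 1, exams)
           if random.random() < 0.2}
def penalty(tt):
    return sum(tt[i] == tt[j] for i, j in clashes)
def neighbors(tt):
    out = []
    for _ in range(20):                         # sample 20 random reassignments
        e, s = random.randrange(exams), random.randrange(slots)
        out.append(((e, s), {**tt, e: s}))
    return out

start = {e: random.randrange(slots) for e in range(exams)}
print(penalty(start), "->", penalty(tabu_hill_climb(start, penalty, neighbors)))
```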

