Resource and Performance Tradeoff for Task Scheduling of Parallel Reconfigurable Architectures

2019 ◽  
Vol 29 (02) ◽  
pp. 2050029 ◽  
Author(s):  
Chi-Chou Kao

In this paper, we propose a resource/performance tradeoff algorithm for task scheduling on parallel reconfigurable architectures. First, an optimal schedule is generated under the assumption of unlimited resources. Then, a relaxation algorithm is applied so that the schedule fits the available number of resources while sacrificing as little performance as possible. To demonstrate the performance of the proposed algorithm, we not only compare it with existing methods on standard benchmarks but also implement it on physical systems. The experimental results show that the proposed algorithm satisfies the requirements of systems with limited resources.
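As a rough illustration of the two-phase idea sketched in this abstract (the paper's own algorithm is not reproduced here), the Python snippet below first computes an ASAP schedule assuming unlimited resources and then defers tasks until a per-step resource limit is respected. The task names, unit latencies, and deferral rule are illustrative assumptions.

```python
# Hypothetical sketch of the two-phase idea: an ASAP pass with unlimited
# resources, followed by a relaxation pass that defers tasks whenever the
# per-step resource limit would be exceeded. Tasks are assumed unit-latency.

from collections import defaultdict

def asap_schedule(tasks, deps):
    """Earliest start step for each task assuming unlimited resources."""
    start = {}
    for t in tasks:  # tasks assumed to be topologically ordered
        start[t] = max((start[p] + 1 for p in deps.get(t, [])), default=0)
    return start

def relax_to_resource_limit(tasks, deps, limit):
    """Defer tasks past their ASAP step until no step uses more than `limit` units."""
    start = asap_schedule(tasks, deps)
    usage = defaultdict(int)
    final = {}
    for t in sorted(tasks, key=lambda t: start[t]):
        step = max([start[t]] + [final[p] + 1 for p in deps.get(t, [])])
        while usage[step] >= limit:   # push the task later until a slot is free
            step += 1
        final[t] = step
        usage[step] += 1
    return final

if __name__ == "__main__":
    tasks = ["a", "b", "c", "d"]
    deps = {"c": ["a"], "d": ["a", "b"]}
    print(relax_to_resource_limit(tasks, deps, limit=1))
```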

1994 ◽  
Vol 47 (1) ◽  
pp. 70-88 ◽  
Author(s):  
Dionyssios Trivizas

A realistic runway capacity study for two major airports, namely Frankfurt (EDDF) and Chicago O'Hare (ORD), is presented, assessing the effect of optimal scheduling on runway capacity and air traffic delays. The maximum position shift (MPS) runway scheduling algorithm used in the study was developed by Trivizas at the Massachusetts Institute of Technology (1987).

EDDF is studied in the context of 160 major European airports, with a real traffic sample from 6 July 1990. ORD is studied in the context of 26 major US airports using a large traffic sample from 1 March 1989. Secondary airport traffic has been assigned to the geographically nearest major hub, and time compression has been used to extrapolate to an artificially denser scenario.

The results show that optimal scheduling can bring about capacity improvements of the order of 20 percent, which in turn reduce delays by up to 70 percent. These results are the product of a dynamic traffic management process which has been visually validated by observing animated runway operations and monitor functions.

The study has been conducted with TMSIM, a comprehensive, object-oriented simulation tool that allows one to build an understanding of the structure and functionality of the air traffic control system by modelling its components, their functionality and interactions, and measuring component and system performance. It features interactive route network editing (using menu/mouse techniques), complete route and airport structure modelling, independent flight and ATC objects, 3-D animation, and advanced algorithms for scheduling, routeing, flow management, airspace restructuring (sectorization) and performance (capacity and communications workload) analysis.
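For readers unfamiliar with the maximum position shift idea, the toy Python sketch below enumerates landing sequences in which no aircraft moves more than a fixed number of positions away from its first-come-first-served slot and picks the sequence with the earliest completion time. The separation values and aircraft labels are invented for illustration and are not the study's algorithm or data.

```python
# Toy illustration of constrained position shifting: each aircraft may move at
# most `mps` positions away from its FCFS slot, and the sequence minimising
# total completion time under pairwise separation requirements is chosen by
# brute force. Separation values below are invented for the example.

from itertools import permutations

def best_sequence(arrivals, sep, mps):
    """arrivals: list of (flight, ready_time) in FCFS order.
    sep[a][b]: minimum spacing when b lands immediately after a."""
    fcfs_pos = {f: i for i, (f, _) in enumerate(arrivals)}
    ready = dict(arrivals)
    best, best_end = None, float("inf")
    for order in permutations(ready):
        if any(abs(i - fcfs_pos[f]) > mps for i, f in enumerate(order)):
            continue  # violates the maximum position shift constraint
        t, prev = 0.0, None
        for f in order:
            t = max(ready[f], t + (sep[prev][f] if prev else 0.0))
            prev = f
        if t < best_end:
            best, best_end = order, t
    return best, best_end

if __name__ == "__main__":
    arrivals = [("H1", 0), ("L1", 1), ("H2", 2)]  # FCFS order
    sep = {"H1": {"L1": 5, "H2": 2},
           "L1": {"H1": 2, "H2": 2},
           "H2": {"L1": 5, "H1": 2}}
    print(best_sequence(arrivals, sep, mps=1))
```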


2017 ◽  
Vol 13 (06) ◽  
pp. 4
Author(s):  
Zhizhong Liu ◽  
Jingxuan Qin ◽  
Weiping Peng ◽  
Hao Chao

For the classical optimization problem of task scheduling in cloud computing, this paper proposes a novel resource scheduling algorithm based on the Social Learning Optimization algorithm (SLO). SLO is a recent swarm intelligence algorithm that simulates the evolution of human intelligence and offers a strong optimization mechanism and good optimization performance. After analyzing the characteristics of the task scheduling problem, this paper proposes two learning operators for task scheduling in cloud computing; by introducing the Small Position Value (SPV) method, these two operators, which are continuous in nature, are adapted to the discrete task scheduling problem, and the improved SLO is then employed to solve the cloud resource scheduling problem. Finally, the performance of the improved SLO is compared with existing work on the CloudSim platform. Experimental results show that the proposed approach has better global optimization ability and a faster convergence speed.
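The SPV step can be pictured with the short Python sketch below, which ranks a continuous position vector and maps the ranks to virtual machines. The modulo mapping and the makespan fitness are common conventions assumed here, not necessarily the paper's exact formulation.

```python
# Hedged sketch of the Small Position Value (SPV) idea: a continuous position
# vector (one value per task) is turned into a discrete task-to-VM assignment
# by ranking the values. The rank-to-VM modulo mapping and the makespan fitness
# are illustrative assumptions, not taken from the paper.

import random

def spv_decode(position, num_vms):
    """Rank each task by its continuous value, then map rank -> VM index."""
    ranked = sorted(range(len(position)), key=lambda i: position[i])
    assignment = [0] * len(position)
    for rank, task in enumerate(ranked):
        assignment[task] = rank % num_vms
    return assignment

def makespan(assignment, task_lengths, vm_speeds):
    """Simple fitness: finish time of the busiest VM."""
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        loads[vm] += task_lengths[task] / vm_speeds[vm]
    return max(loads)

if __name__ == "__main__":
    random.seed(1)
    position = [random.uniform(-1, 1) for _ in range(6)]   # continuous individual
    assignment = spv_decode(position, num_vms=3)
    print(assignment, makespan(assignment, [4, 2, 6, 3, 5, 1], [1.0, 1.5, 2.0]))
```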


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1874
Author(s):  
Yao Zhao ◽  
Jian Dong ◽  
Hongwei Liu ◽  
Jin Wu ◽  
Yanxin Liu

Directed acyclic graph (DAG)-aware task scheduling algorithms have been studied extensively in recent years, and these algorithms have achieved significant performance improvements in data-parallel analytics platforms. However, current DAG-aware task scheduling algorithms, among which HEFT and GRAPHENE are notable, pay little attention to the cache management policy, which plays a vital role in in-memory data-parallel systems such as Spark. Cache management policies that are designed for Spark exhibit poor performance under DAG-aware task scheduling algorithms, which leads to cache misses and performance degradation. In this study, we propose a new cache management policy known as Long-Running Stage Set First (LSF), which makes full use of task dependencies to optimize cache management performance under DAG-aware scheduling algorithms. LSF calculates the caching and prefetching priorities of resilient distributed datasets according to their unprocessed workloads and their significance in parallel scheduling, which are key factors in DAG-aware scheduling algorithms. Moreover, we present a cache-aware task scheduling algorithm based on LSF to reduce resource fragmentation in computing. Experiments demonstrate that, compared to DAG-aware scheduling algorithms with LRU and MRD, the same algorithms with LSF improve the job completion time (JCT) by up to 42% and 30%, respectively. The proposed cache-aware scheduling algorithm also achieves about a 12% reduction in the average JCT compared to GRAPHENE with LSF.
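A hypothetical Python sketch of a priority-driven cache policy in the spirit of LSF is given below: each cached dataset is scored by the estimated unprocessed work of the stages that still read it, and eviction removes the lowest-scoring entries first. The scoring formula and field names are assumptions rather than the paper's definitions.

```python
# Hypothetical priority-based cache policy: score each cached dataset by the
# remaining runtime of its unfinished dependent stages per MB of cache space,
# and evict the lowest-scoring datasets until the cache fits. The formula and
# field names are illustrative assumptions, not the paper's LSF definition.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CachedDataset:
    name: str
    size_mb: float
    # Estimated remaining runtime (s) of each unfinished stage reading this dataset.
    pending_stage_runtimes: List[float] = field(default_factory=list)

def priority(ds: CachedDataset) -> float:
    """More remaining dependent work per MB -> keep in cache longer."""
    return sum(ds.pending_stage_runtimes) / max(ds.size_mb, 1.0)

def evict_until_fits(cache: List[CachedDataset], capacity_mb: float) -> List[CachedDataset]:
    """Drop the lowest-priority datasets until the cache fits in capacity_mb."""
    cache = sorted(cache, key=priority, reverse=True)
    while cache and sum(d.size_mb for d in cache) > capacity_mb:
        cache.pop()  # lowest priority is last after the descending sort
    return cache

if __name__ == "__main__":
    cache = [
        CachedDataset("rdd_a", 800, [120.0, 60.0]),
        CachedDataset("rdd_b", 300, [10.0]),
        CachedDataset("rdd_c", 500, [300.0]),
    ]
    print([d.name for d in evict_until_fits(cache, capacity_mb=1200)])
```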


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks awaiting processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing, which aims to achieve a well-balanced load across virtual machines in order to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach shows a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue, as evaluated using the WorkflowSim simulator in Java.
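A minimal Python sketch of the load-balancing idea, assuming a simple threshold-based notion of overload and a numeric task priority, is shown below; it illustrates the general honey-bee-inspired migration pattern rather than the authors' implementation.

```python
# Minimal sketch: tasks are moved from overloaded VMs to underloaded ones,
# lowest-priority tasks first (larger number = lower priority here), so that
# high-priority tasks keep their place and queue waiting time stays low.
# Thresholds and data structures are illustrative assumptions.

def balance(vm_tasks, threshold):
    """vm_tasks: {vm: [(task, length, priority), ...]}; threshold: target load per VM."""
    load = {vm: sum(t[1] for t in tasks) for vm, tasks in vm_tasks.items()}
    overloaded = [vm for vm in vm_tasks if load[vm] > threshold]
    for vm in overloaded:
        # Move the lowest-priority tasks away first, like bees leaving a
        # depleted food source for a richer one.
        for task in sorted(vm_tasks[vm], key=lambda t: -t[2]):
            if load[vm] <= threshold:
                break
            target = min(load, key=load.get)        # least-loaded VM
            if target == vm or load[target] + task[1] > threshold:
                continue
            vm_tasks[vm].remove(task)
            vm_tasks[target].append(task)
            load[vm] -= task[1]
            load[target] += task[1]
    return vm_tasks

if __name__ == "__main__":
    vms = {"vm1": [("t1", 8, 1), ("t2", 6, 3), ("t3", 5, 2)],
           "vm2": [("t4", 2, 1)],
           "vm3": []}
    print(balance(vms, threshold=10))
```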

