IMMEDIATE/BATCH MODE SCHEDULING ALGORITHMS FOR GRID COMPUTING: A REVIEW

Author(s):  
J. Y. Maipan-uku ◽  
I. Rabiu ◽  
Amit Mishra

Immediate/on-line and batch mode heuristics are two methods used for scheduling in the computational grid environment. In the former, a task is mapped onto a resource as soon as it arrives at the scheduler; in the latter, tasks are not mapped onto resources as they arrive but are instead collected into a set that is examined for mapping at prescheduled times called mapping events. This paper reviews the literature on the Minimum Execution Time (MET) and Minimum Completion Time (MCT) algorithms of on-line mode heuristics, with greater emphasis on the Min-Min and Max-Min algorithms of batch mode heuristics, focusing on their basic concepts, approaches, techniques, and open problems.
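To make the batch-mode contrast concrete, the following Python sketch implements the textbook Min-Min and Max-Min mappings over an ETC (expected time to compute) matrix. The function and variable names are illustrative, not drawn from any of the surveyed papers.

```python
# Illustrative sketch of the Min-Min and Max-Min batch-mode heuristics.
# Assumes an ETC matrix: etc[t][m] is the estimated execution time of
# task t on machine m. Names and data layout are illustrative only.

def batch_schedule(etc, use_max_min=False):
    """Map a batch of tasks to machines; returns {task: machine}."""
    num_tasks, num_machines = len(etc), len(etc[0])
    ready = [0.0] * num_machines          # machine ready (availability) times
    unmapped = set(range(num_tasks))
    mapping = {}
    while unmapped:
        # For each unmapped task, find its minimum completion time (MCT).
        best = {}                          # task -> (completion time, machine)
        for t in unmapped:
            ct, m = min((ready[j] + etc[t][j], j) for j in range(num_machines))
            best[t] = (ct, m)
        # Min-Min commits the task with the smallest MCT first;
        # Max-Min commits the task with the largest MCT first.
        pick = max if use_max_min else min
        t = pick(best, key=lambda k: best[k][0])
        ct, m = best[t]
        mapping[t] = m
        ready[m] = ct                      # machine is busy until this task ends
        unmapped.remove(t)
    return mapping

etc = [[4, 6], [3, 8], [9, 2]]                # 3 tasks, 2 machines
print(batch_schedule(etc))                    # Min-Min mapping
print(batch_schedule(etc, use_max_min=True))  # Max-Min mapping
```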

2021 ◽  
Vol 50 (1) ◽  
pp. 5-12
Author(s):  
Hani Alquhayz ◽  
Mahdi Jemmali

This paper focuses on maximizing the minimum completion time on identical parallel processors; the objective of this maximization is to ensure a fair distribution of load. Consider a set of jobs to be assigned to several identical parallel processors. This problem is known to be NP-hard. The research work of this paper is based essentially on comparing the proposed heuristics with others cited in the literature. Our heuristics are developed using a randomization method and the iterative use of the knapsack problem to solve the studied problem. The heuristics are assessed over the numerous instances reported in the experimental results. The results show that the knapsack-based heuristic achieves performance nearly identical to the best heuristic from the literature, but with a better running time.
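For orientation, here is a minimal greedy baseline for the same machine-covering objective (maximize the minimum completion time over identical processors). It is a plain LPT-style sketch, not the paper's randomization- or knapsack-based heuristics.

```python
# Greedy LPT-style baseline for machine covering: assign the longest
# remaining job to the currently least-loaded processor. Illustrative
# only; the paper's randomization/knapsack heuristics are not shown.
import heapq

def greedy_max_min(jobs, m):
    """Assign jobs (processing times) to m identical processors."""
    loads = [(0.0, i) for i in range(m)]    # (current load, processor id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for p in sorted(jobs, reverse=True):    # longest jobs first
        load, i = heapq.heappop(loads)      # least-loaded processor
        assignment[i].append(p)
        heapq.heappush(loads, (load + p, i))
    c_min = min(load for load, _ in loads)  # objective: minimum completion time
    return assignment, c_min

jobs = [7, 5, 4, 3, 3, 2]
print(greedy_max_min(jobs, 3))              # C_min = 7 for this instance
```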


Author(s):  
Nurcin Celik ◽  
Esfandyar Mazhari ◽  
John Canby ◽  
Omid Kazemi ◽  
Parag Sarfare ◽  
...  

Simulating large-scale systems usually entails extensive computational power and lengthy execution times. The goal of this research is to reduce the execution time of large-scale simulations without sacrificing their accuracy by automatically partitioning a monolithic model into multiple pieces and executing them in a distributed computing environment. While this partitioning allows us to distribute the required computational power to multiple computers, it creates a new challenge of synchronizing the partitioned models. In this article, a partitioning methodology based on a modified Prim’s algorithm is proposed to minimize the overall simulation execution time, considering 1) internal computation in each of the partitioned models and 2) time synchronization between them. In addition, the authors seek to find the most advantageous number of partitioned models from the monolithic model by evaluating the trade-off between reduced computation and increased time synchronization requirements. Epoch-based synchronization is employed to synchronize the logical times of the partitioned simulations, where an appropriate time interval is determined based on off-line simulation analyses. A computational grid framework is employed to execute the simulations partitioned by the proposed methodology. The experimental results reveal that the proposed approach reduces simulation execution time significantly while maintaining accuracy, as compared with the monolithic simulation execution approach.
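As background for the partitioning step, the sketch below shows a standard Prim's algorithm over a weighted model graph. In the article's setting, node weights would represent internal computation and edge weights synchronization traffic; the authors' modification, which balances both costs across partitions, is only noted in the comments and is not reproduced here.

```python
# Standard Prim's algorithm, shown to ground the "modified Prim"
# partitioning idea. The article's actual modification (growing
# partitions while trading off computation vs. synchronization
# cost) is not reproduced here.
import heapq

def prim_mst(graph, start):
    """graph: {node: {neighbor: edge_weight}}; returns MST edges."""
    visited = {start}
    frontier = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(frontier)
    mst = []
    while frontier and len(visited) < len(graph):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        for nxt, w2 in graph[v].items():
            if nxt not in visited:
                heapq.heappush(frontier, (w2, v, nxt))
    return mst

g = {"A": {"B": 2, "C": 3}, "B": {"A": 2, "C": 1}, "C": {"A": 3, "B": 1}}
print(prim_mst(g, "A"))   # [('A', 'B', 2), ('B', 'C', 1)]
```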


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahfooz Alam ◽  
Mahak ◽  
Raza Abbas Haidri ◽  
Dileep Kumar Yadav

Purpose: Cloud users can access services at any time from anywhere in the world. On average, Google now processes more than 40,000 searches every second, which is approximately 3.5 billion searches per day. Diverse and vast amounts of data are generated by next-generation information technologies such as cryptocurrency, the internet of things and big data. Executing such applications requires an efficient scheduling algorithm that considers quality-of-service parameters such as utilization, makespan and response time. Therefore, this paper aims to propose a novel Efficient Static Task Allocation (ESTA) algorithm, which optimizes average utilization.

Design/methodology/approach: Cloud computing provides resources such as virtual machines, networks, storage, etc. over the internet and follows the pay-per-use billing model. To achieve efficient task allocation, the scheduling problem is tackled through efficient task distribution on the resources. The ESTA algorithm is based on the minimum completion time approach: it intelligently maps a batch of independent tasks (cloudlets) onto heterogeneous virtual machines and optimizes their utilization in infrastructure-as-a-service cloud computing.

Findings: To evaluate the performance of ESTA, a simulation study compares it with Min-Min, load balancing strategy with migration cost, longest job in the fastest resource-shortest job in the fastest resource, sufferage, minimum completion time (MCT), minimum execution time and opportunistic load balancing, in terms of makespan, utilization and response time.

Originality/value: The simulation results reveal that the ESTA algorithm consistently performs better under varying batches of independent cloudlets and numbers of virtual machines.
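Since ESTA is described as building on the minimum completion time approach, the following Python sketch shows a plain MCT mapping of a batch of cloudlets onto VMs. Variable names (exec_time, vm_ready) are illustrative, and ESTA's utilization-optimizing refinements are not shown.

```python
# Minimal sketch of minimum completion time (MCT) mapping: each
# cloudlet is assigned to the VM that would finish it earliest.
# Illustrative only; not the authors' ESTA algorithm.

def mct_map(exec_time):
    """exec_time[t][v]: runtime of cloudlet t on VM v; returns mapping, makespan."""
    num_vms = len(exec_time[0])
    vm_ready = [0.0] * num_vms            # when each VM becomes free
    mapping = []
    for t, row in enumerate(exec_time):
        # pick the VM with the smallest completion time for cloudlet t
        v = min(range(num_vms), key=lambda j: vm_ready[j] + row[j])
        vm_ready[v] += row[v]
        mapping.append((t, v))
    makespan = max(vm_ready)
    return mapping, makespan

exec_time = [[5, 9], [4, 2], [6, 3]]      # 3 cloudlets, 2 VMs
print(mct_map(exec_time))                 # ([(0, 0), (1, 1), (2, 1)], 5.0)
```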

