Multi-Dependency and Time Based Resource Scheduling Algorithm for Scientific Applications in Cloud Computing

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1320
Author(s):  
Vijay Prakash ◽  
Seema Bawa ◽  
Lalit Garg

Workflow scheduling is one of the significant issues for scientific applications, alongside virtual machine migration, database management, security, performance, fault tolerance, and server consolidation. In this paper, existing time-based scheduling algorithms, such as first come first serve (FCFS), min–min, max–min, and minimum completion time (MCT), along with the dependency-based scheduling algorithm MaxChild, have been considered. These time-based scheduling algorithms compare only the burst time of tasks: based on the burst time, the schedulers place the sub-tasks of the application on suitable virtual machines according to the scheduling criteria. During this process, little attention is paid to proper utilization of the resources. A novel dependency- and time-based scheduling algorithm is proposed that considers the parent-to-child (P2C) node dependencies, the child-to-parent node dependencies, and the time of the different tasks in the workflows. The proposed P2C algorithm emphasizes proper utilization of the resources and overcomes the limitations of these time-based schedulers. Scientific applications such as CyberShake, Montage, Epigenomics, Inspiral, and SIPHT are represented as workflows, in which the tasks are the nodes and the relationships between the tasks are the dependencies. All the results have been validated in a simulation environment created with the WorkflowSim simulator for the cloud. The proposed approach outperforms the mentioned time- and dependency-based scheduling algorithms in terms of total execution time by efficiently utilizing the resources.
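For orientation, here is a minimal Python sketch of the min-min baseline that the paper compares against; the task and VM structures are illustrative assumptions, not WorkflowSim's actual API, and dependency handling (the P2C contribution) is deliberately omitted.

```python
# Illustrative sketch of the min-min baseline scheduler; data structures
# are hypothetical, not WorkflowSim's API.

def min_min_schedule(tasks, vms):
    """Assign each task to a VM using the min-min heuristic.

    tasks: dict task id -> burst time (length in instructions)
    vms:   dict vm id -> MIPS rating of the machine
    Returns a mapping task id -> vm id.
    """
    ready = {vm: 0.0 for vm in vms}          # time at which each VM frees up
    unscheduled = dict(tasks)
    assignment = {}
    while unscheduled:
        # For every task, find its minimum completion time over all VMs,
        # then pick the task whose minimum is smallest (min-min).
        best = None  # (completion_time, task, vm)
        for t, burst in unscheduled.items():
            for vm, mips in vms.items():
                ct = ready[vm] + burst / mips
                if best is None or ct < best[0]:
                    best = (ct, t, vm)
        ct, t, vm = best
        assignment[t] = vm
        ready[vm] = ct
        del unscheduled[t]
    return assignment

print(min_min_schedule({"t1": 400, "t2": 100, "t3": 250}, {"vm1": 100, "vm2": 50}))
```

Note how the heuristic tends to pile work onto the fastest machine; this is exactly the kind of resource under-utilization that dependency-aware schedulers such as P2C aim to correct.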

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahfooz Alam ◽  
Mahak ◽  
Raza Abbas Haidri ◽  
Dileep Kumar Yadav

Purpose – Cloud users can access services at any time from anywhere in the world. On average, Google now processes more than 40,000 searches every second, approximately 3.5 billion searches per day. Diverse and vast amounts of data are generated by next-generation information technologies such as cryptocurrency, the internet of things and big data. Executing such applications requires an efficient scheduling algorithm that considers quality-of-service parameters such as utilization, makespan and response time. Therefore, this paper aims to propose a novel Efficient Static Task Allocation (ESTA) algorithm, which optimizes average utilization.

Design/methodology/approach – Cloud computing provides resources such as virtual machines, network and storage over the internet and follows the pay-per-use billing model. Efficient task allocation requires the scheduling problem to be tackled through efficient distribution of tasks over the resources. The ESTA algorithm is based on the minimum completion time approach: it intelligently maps a batch of independent tasks (cloudlets) onto heterogeneous virtual machines and optimizes their utilization in infrastructure-as-a-service cloud computing.

Findings – To evaluate the performance of ESTA, the simulation study is compared with Min-Min, load balancing strategy with migration cost, longest job in the fastest resource–shortest job in the fastest resource, sufferage, minimum completion time (MCT), minimum execution time and opportunistic load balancing in terms of makespan, utilization and response time.

Originality/value – The simulation results reveal that the ESTA algorithm consistently performs better under varying batches of independent cloudlets and numbers of virtual machines.
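Since the abstract does not give ESTA's internals, the following is only a hedged sketch of the minimum-completion-time idea it builds on: each cloudlet in the batch goes to the VM that would finish it earliest, and average utilization falls out of the per-VM busy times. All names and structures are illustrative.

```python
# Hedged sketch of the MCT approach underlying ESTA; not the authors'
# actual implementation.

def mct_allocate(cloudlets, vm_mips):
    """cloudlets: list of (id, length); vm_mips: dict vm id -> MIPS."""
    ready = {vm: 0.0 for vm in vm_mips}       # per-VM ready time
    plan = {}
    for cid, length in cloudlets:
        # completion time on a VM = its ready time + execution time there
        vm = min(vm_mips, key=lambda v: ready[v] + length / vm_mips[v])
        ready[vm] += length / vm_mips[vm]
        plan[cid] = vm
    makespan = max(ready.values())
    # average utilization = mean busy fraction of the VMs over the makespan
    utilization = sum(ready.values()) / (len(ready) * makespan)
    return plan, makespan, utilization

plan, makespan, util = mct_allocate(
    [("c1", 300), ("c2", 500), ("c3", 200)], {"vm1": 100, "vm2": 50})
print(plan, makespan, util)
```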


Resource allocation policies play a key role in determining the performance of the cloud. Service providers in cloud computing have to serve many users simultaneously, so allocating cloudlets to appropriate virtual machines is becoming one of the challenging issues of cloud computing. Many algorithms have been proposed to allocate cloudlets to virtual machines. In this paper, we represent the cloudlet allocation problem as a job assignment problem and propose a Hungarian algorithm based solution for allocating cloudlets to virtual machines. The main objective is to minimize the total execution time of the cloudlets. The proposed algorithm is implemented in the CloudSim-3.03 simulator. We compare the simulation results of the proposed algorithm with the existing First Come First Serve (FCFS) scheduling policy and the Min-Min scheduling algorithm. The proposed algorithm performs better than the above-mentioned algorithms in terms of total execution time and makespan (the finishing time of the last cloudlet).
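As a sketch of the assignment formulation (not the authors' CloudSim-3.03 code), the Hungarian method can be exercised via SciPy's linear_sum_assignment on a cost matrix of execution times; the numbers below are invented for illustration.

```python
# Cloudlet-to-VM assignment as a job assignment problem solved with the
# Hungarian method via SciPy; cost values are illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = execution time of cloudlet i on VM j
cost = np.array([
    [14.0,  5.0,  8.0],
    [ 2.0, 12.0,  6.0],
    [ 7.0,  8.0,  3.0],
])
rows, cols = linear_sum_assignment(cost)      # minimizes total execution time
for cloudlet, vm in zip(rows, cols):
    print(f"cloudlet {cloudlet} -> vm {vm} ({cost[cloudlet, vm]} s)")
print("total execution time:", cost[rows, cols].sum())
```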


2019 ◽  
Vol 20 (2) ◽  
pp. 299-316
Author(s):  
Mandeep Kaur ◽  
Rajni Mohana

A large number of users are shifting to the cloud for their various needs, so the number of applications on the public cloud is increasing day by day, and handling the public cloud is becoming unmanageable compared to its counterparts. Though fog technology has reduced the load on centralized cloud resources to a remarkable extent, the load handled at the cloud end is still significantly high. Geographic partitioning of the public cloud can resolve these issues by adding manageability and efficiency: dividing the public cloud into smaller partitions opens ways to manage resources and requests better. However, partitioned clouds introduce different ends for the submission and operation of tasks and virtual machines. This paper addresses these complexities. The proposed work focuses on load balancing in a partitioned public cloud by combining centralized and decentralized approaches, assuming the presence of a fog layer. A load balancer entity performs decentralized load balancing at the partitions, and a controller entity balances the overall load across partitions at the centralized level. In the proposed approach, jobs are segregated first: all jobs that can be handled locally by fog resources are not forwarded to the cloud layer but are processed locally by decentralized fog resources. Selection of an appropriate virtual machine (VM) for the filtered set of jobs forwarded to the cloud environment is done in three steps, as sketched below: first, select the partition with the maximum available resource capacity; then, find the node with the maximum available resources within the selected partition; finally, choose the VM with the minimum execution time for the task. Results are compared with First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms implemented in the same setup, i.e., a partitioned cloud, comparing the waiting time, finish time and actual run time of tasks. Initial experimentation shows that, in most cases, the proposed approach produces better results than the FCFS algorithm, although the SJF algorithm still produces better results. To reduce the number of unhandled jobs, a new load state is introduced that checks load beyond the conventional load states. The major objective of this approach is to reduce the need for runtime virtual machine migration and the wastage of resources that may occur due to predefined threshold values. The implementation is done using CloudSim.
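A minimal sketch of the three-step VM selection described above; the partition, node and VM record layouts are hypothetical, since the abstract does not specify them.

```python
# Three-step VM selection for jobs forwarded to the partitioned cloud;
# record fields ("available_capacity", "mips", ...) are assumed names.

def select_vm(partitions, task_length):
    # Step 1: partition with the maximum available resource capacity
    part = max(partitions, key=lambda p: p["available_capacity"])
    # Step 2: node with the maximum available resources in that partition
    node = max(part["nodes"], key=lambda n: n["available_resources"])
    # Step 3: VM on that node with the minimum execution time for the task
    vm = min(node["vms"], key=lambda v: task_length / v["mips"])
    return part["id"], node["id"], vm["id"]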


The use of cloud computing and its resources for the execution of scientific workflows is in rapidly increasing demand. Scientific applications are generally large in scale; even a single scientific workflow includes a large number of complex tasks. Execution of these tasks can be made successful only by deploying them in cloud virtual machines, because only a cloud environment can provide such a large number of computing assets. In the cloud, every processing resource is given as a virtual machine, and any scientific workflow deployed in the cloud needs a large number of virtual machines, so a huge amount of computational energy is spent by the virtual machines to execute multifaceted scientific workflows. Hence arises the need to utilize cloud resources in an energy-efficient way. However, if the virtual machines are scheduled in an energy-efficient manner, the makespan of the workflow increases, and makespan is an important parameter for completing the workflow within the deadline. Executing scientific workflows energy-efficiently with reduced makespan has therefore become a major issue among researchers, and it remains a very challenging task to execute a scientific workflow within the given deadline. To address these issues, a new energy-aware workflow scheduling algorithm is proposed and designed with improved makespan for the execution of different scientific applications in a cloud environment.
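The abstract gives no algorithmic details, so the following is only a toy model of the energy/makespan tradeoff it describes, under the simplistic assumption that energy = power × time: a slower, lower-power VM saves energy but stretches execution time toward the deadline.

```python
# Toy energy/makespan tradeoff; the power model and VM figures are
# invented assumptions, not the paper's method.

def energy_and_time(length, mips, power_watts):
    t = length / mips                 # execution time on this VM
    return power_watts * t, t         # energy = power * time (simplistic)

def pick_vm(length, vms, deadline):
    """Choose the lowest-energy VM whose execution time meets the deadline."""
    feasible = [(energy_and_time(length, m, p), vm)
                for vm, (m, p) in vms.items()
                if length / m <= deadline]
    assert feasible, "no VM meets the deadline"
    return min(feasible)[1]

vms = {"fast": (1000, 200.0), "slow": (400, 60.0)}   # vm -> (MIPS, watts)
print(pick_vm(50000, vms, deadline=140.0))           # -> 'slow' (125 s, 7500 J)
```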


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing, which aims to achieve a well-balanced load across virtual machines for maximizing throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach shows a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue, using the WorkflowSim simulator in Java.
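A hedged sketch of the honey-bee idea described above, under assumptions of our own (thresholds around the mean load, priority 0 meaning highest): tasks removed from overloaded VMs act like scout bees and are placed on underloaded VMs, preferring hosts with fewer high-priority tasks so waiting time stays low.

```python
# Honey-bee-inspired rebalancing sketch; data layout and threshold rule
# are assumptions, not the paper's exact algorithm.

def balance(vms, threshold):
    """vms: dict vm id -> list of (priority, length); priority 0 = highest."""
    load = {v: sum(l for _, l in ts) for v, ts in vms.items()}
    avg = sum(load.values()) / len(load)
    overloaded = [v for v in vms if load[v] > avg * (1 + threshold)]
    underloaded = [v for v in vms if load[v] < avg * (1 - threshold)]
    for src in overloaded:
        while load[src] > avg and underloaded:
            prio, length = vms[src].pop()            # candidate "bee"
            # destination: underloaded VM with the fewest high-priority tasks
            dst = min(underloaded,
                      key=lambda v: sum(1 for p, _ in vms[v] if p == 0))
            vms[dst].append((prio, length))
            load[src] -= length
            load[dst] += length
            if load[dst] >= avg * (1 - threshold):   # no longer underloaded
                underloaded.remove(dst)
    return vms
```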


Author(s):  
Satyasrikanth Palle ◽  
Shivashankar

Objective: The demand for cellular-based multimedia services is growing day by day; to fulfill this demand, present-day cellular networks need to be upgraded to support a high volume of calls along with high data accessibility. Traffic analysis and huge network size make scheduling the available bandwidth between different users a very challenging issue for network operators. The proposed work presents a novel QoS-aware multi-path scheduling algorithm for smooth call admission control (CAC) in wireless mobile networks. The performance of the proposed algorithm is assessed and compared with existing scheduling algorithms. The simulation results show that the proposed algorithm outperforms existing CAC algorithms in terms of throughput and delay: the CAC algorithm with scheduling increases end-to-end throughput and decreases end-to-end delay.

Methods: The key idea of the proposed work is to adopt the spatial reuse concept of wireless sensor networks in mobile cellular networks. Spatial reusability enhances channel reuse when node pairs are far apart. While a source and an intermediate node communicate with each other, the other nodes on the discovered path would otherwise remain idle without utilizing the channel; if those nodes can instead communicate in parallel, the end-to-end throughput can be improved with acceptable delay. Incorporating link scheduling algorithms into this concept further enhances end-to-end throughput within the turnaround time, so this work combines spatial reuse with a link scheduling algorithm. The proposed algorithm ensures that a connection gets the required bandwidth at each mobile node on its path by scheduling the required slots to meet the QoS requirements. The CAC module at the base station not only considers the bandwidth requirement of the mobile connections but also confirms that the constraints on system delay and jitter are met.

Result: To verify the feasibility and effectiveness of the proposed work, the simulation results clearly show the throughput improvement with call admission control. The number of dropped calls is significantly lower and the number of successful calls higher with CAC: the percentage of dropped calls is reduced by 9% and successful calls are improved by 91%. The simulation also considers the time constraint and shows the ratio of dropped calls: the total time taken to forward the packets and the ratio of dropped calls are lower than without CAC. On the whole, CAC with scheduling outperforms existing scheduling algorithms.

Conclusion: In this work we have proposed a novel QoS-aware scheduling algorithm that provides QoS in wireless cellular networks using call admission control (CAC). The simulation results show that the end-to-end throughput increases by 91% when CAC is used. The proposed algorithm is also compared with existing link scheduling algorithms. The results reveal that CAC with scheduling can be used in mobile cellular networks to reduce the packet drop ratio while sending packets within acceptable delay.
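The following is an illustrative sketch (not the authors' implementation) of the admission test described above: a call is admitted only if its delay bound holds and every node on its path can schedule the slots its bandwidth requirement needs.

```python
# Hypothetical CAC admission check; node records and slot accounting are
# assumptions made for illustration.

def admit_call(path_nodes, required_slots, max_delay, est_delay):
    """path_nodes: list of dicts with a 'free_slots' count per node."""
    if est_delay > max_delay:                     # delay/jitter constraint
        return False
    if any(n["free_slots"] < required_slots for n in path_nodes):
        return False                              # bandwidth not available
    for n in path_nodes:                          # reserve slots on each hop
        n["free_slots"] -= required_slots
    return True
```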


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1400
Author(s):  
Muhammad Adnan ◽  
Jawaid Iqbal ◽  
Abdul Waheed ◽  
Noor Ul Amin ◽  
Mahdi Zareei ◽  
...  

Modern vehicles are equipped with various sensors, onboard units, and devices such as the Application Unit (AU) that support routing and communication. In VANETs, traffic management and Quality of Service (QoS) are the main research dimensions to be considered while designing VANET architectures. To cope with the QoS issues faced by VANETs, we design an efficient SDN-based architecture focused on the QoS of VANETs. In this paper, QoS is achieved by a priority-based scheduling algorithm in which traffic flow messages are prioritized in a safety queue and a non-safety queue. In the safety queue, messages are prioritized based on deadline and size using the New Deadline and Size of data (NDS) method with constrained location and deadline, whereas the non-safety queue is served on a First Come First Serve (FCFS) basis. For the simulation of the proposed scheduling algorithm, we use the well-known cloud computing framework CloudSim. The simulation results show that safety messages perform better than non-safety messages in terms of execution time.
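A sketch of the two-queue prioritization described above; the exact NDS weighting of deadline and size is an assumption, since the abstract does not give the formula. Non-safety messages are served strictly FCFS.

```python
# Safety/non-safety message queues; the (deadline, size) priority key is
# an assumed stand-in for the NDS method.
import heapq
import itertools
from collections import deque

safety, non_safety = [], deque()
_tie = itertools.count()                          # tie-breaker for equal keys

def enqueue(msg):
    if msg["safety"]:
        # assumed NDS-style key: earlier deadline first, smaller size next
        heapq.heappush(safety, (msg["deadline"], msg["size"], next(_tie), msg))
    else:
        non_safety.append(msg)                    # FCFS for non-safety

def dequeue():
    if safety:                                    # safety traffic has priority
        return heapq.heappop(safety)[-1]
    return non_safety.popleft() if non_safety else None
```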


Author(s):  
Takeshi D. Itoh ◽  
Takaaki Horinouchi ◽  
Hiroki Uchida ◽  
Koichi Takahashi ◽  
Haruka Ozaki

In automated laboratories consisting of multiple different types of instruments, scheduling algorithms are useful for determining the optimal allocations of instruments to minimize the time required to complete experimental procedures. However, previous studies on scheduling algorithms for laboratory automation have not emphasized the time constraints by mutual boundaries (TCMBs) among operations, which is important in procedures involving live cells or unstable biomolecules. Here, we define the “scheduling for laboratory automation in biology” (S-LAB) problem as a scheduling problem for automated laboratories in which operations with TCMBs are performed by multiple different instruments. We formulate an S-LAB problem as a mixed-integer programming (MIP) problem and propose a scheduling method using the branch-and-bound algorithm. Simulations show that our method can find the optimal schedules of S-LAB problems that minimize overall execution time while satisfying the TCMBs. Furthermore, we propose the use of our scheduling method for the simulation-based design of job definitions and laboratory configurations.
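To make the MIP formulation concrete, here is a toy model in the spirit of the S-LAB problem, solved with PuLP's default CBC branch-and-bound solver; the two operations, their durations, and the 2-minute TCMB bound are invented for illustration.

```python
# Toy S-LAB-style MIP: schedule operation A then B with a TCMB between
# them, choosing between two instruments for B. All numbers are assumed.
from pulp import LpProblem, LpMinimize, LpVariable, value

prob = LpProblem("s_lab_toy", LpMinimize)
sA = LpVariable("start_A", lowBound=0)        # e.g. dispense cells, 5 min
sB = LpVariable("start_B", lowBound=0)        # e.g. incubation step
fast = LpVariable("use_fast_incubator", cat="Binary")
makespan = LpVariable("makespan", lowBound=0)

dB = 3 * fast + 6 * (1 - fast)                # duration depends on instrument

# TCMB: B must start no earlier than A ends and within 2 min of A ending,
# modeling a live-cell sample that cannot wait on the bench.
prob += sB >= sA + 5
prob += sB <= sA + 5 + 2
prob += makespan >= sB + dB
prob += makespan                              # minimize overall execution time

prob.solve()
print(value(fast), value(sB), value(makespan))   # 1.0 5.0 8.0
```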


2014 ◽  
Vol 519-520 ◽  
pp. 108-113 ◽  
Author(s):  
Jun Chen ◽  
Bo Li ◽  
Er Fei Wang

This paper studies resource reservation mechanisms in parallel computing grids and proposes a scheduling model and algorithms that support strict parallel resource reservation requests. Two important parallel scheduling algorithms, FCFS and EASY backfilling, are analyzed, and four parallel scheduling algorithms supporting resource reservation are given. Simulation results for the four algorithms are studied in terms of resource utilization, job bounded slowdown factor and the success rate of advance reservation (AR) jobs. The results show that the EASY backfill + first-fit algorithm can ensure the QoS of AR jobs while maintaining good performance for non-AR jobs.
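For reference, a simplified sketch of EASY backfilling, one of the algorithms analyzed above: the head-of-queue job gets a reservation, and later jobs may jump ahead only if they would finish before that reserved start time. The data layout is an assumption; a full implementation would also let backfilled jobs use CPUs not needed at the reservation time.

```python
# Conservative EASY backfilling sketch; job/running record formats are
# assumed for illustration.

def easy_backfill(queue, free_cpus, now, running):
    """queue: list of (job, cpus, runtime); running: list of (end, cpus)."""
    if not queue:
        return []
    started = []
    head, h_cpus, h_rt = queue[0]
    if h_cpus <= free_cpus:
        started.append(head); free_cpus -= h_cpus; queue.pop(0)
    else:
        # reservation: earliest time enough CPUs free up for the head job
        avail, shadow = free_cpus, now
        for end, cpus in sorted(running):
            avail += cpus
            if avail >= h_cpus:
                shadow = end
                break
        # backfill: later jobs that fit now and end before the reservation
        for job, cpus, rt in list(queue[1:]):
            if cpus <= free_cpus and now + rt <= shadow:
                started.append(job); free_cpus -= cpus
                queue.remove((job, cpus, rt))
    return started
```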


2014 ◽  
Vol 1046 ◽  
pp. 508-511
Author(s):  
Jian Rong Zhu ◽  
Yi Zhuang ◽  
Jing Li ◽  
Wei Zhu

Reducing energy consumption while improving the utility of the datacenter is one of the key technologies in the cloud computing environment. In this paper, we use energy consumption and datacenter utility as objective functions to set up a virtual machine scheduling model based on multi-objective optimization (VMSA-MOP), and design a virtual machine scheduling algorithm based on NSGA-II to solve the model. Experimental results show that, compared with other virtual machine scheduling algorithms, our algorithm can obtain relatively optimal scheduling results.
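NSGA-II itself involves non-dominated sorting, crowding distance, and genetic operators; the sketch below shows only its core step, extracting the non-dominated (Pareto) front over the two objectives named above: energy consumption (minimize) and datacenter utility (maximize). The candidate placements are invented numbers.

```python
# Pareto-front extraction, the core of NSGA-II's non-dominated sorting;
# candidate values are illustrative, not experimental data.

def dominates(a, b):
    """a, b: (energy, utility). a dominates b if it is no worse in both
    objectives and strictly better in at least one."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# each tuple: (energy kWh, utility score) of a candidate VM placement
candidates = [(120, 0.81), (100, 0.75), (150, 0.90), (110, 0.81), (100, 0.60)]
print(pareto_front(candidates))   # [(100, 0.75), (150, 0.90), (110, 0.81)]
```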

