Genetic-Based Task Scheduling Algorithm with Dynamic Virtual Machine Generation in Cloud Computing

2021 ◽  
pp. 165-174
Author(s):  
Ahmed A. A. Gad-Elrab ◽  
Tamer A.A. Alzohairy ◽  
Kamal R. Raslan ◽  
Farouk A. Emara

Recently, cloud computing has become the most common platform in the computing world. Scheduling is one of the most important mechanisms for managing cloud resources: it distributes user tasks among datacenters, hosts, and virtual machines (VMs), and is an NP-complete problem. Most existing mechanisms are heuristic and meta-heuristic methods that address only part of the scheduling problem and do not consider the dynamic creation of VMs based on the resources required by a user task and the capabilities of the available hosts. To deal with this dynamic behavior, this paper introduces a new mechanism that uses a genetic algorithm (GA) to establish a flexible scheduler that adapts the number of VMs to the resources required by user tasks and the resources available on hosts. Simulation results show that the proposed algorithm can distribute any number of user tasks across the available resources and achieves better performance than existing algorithms in terms of response time, makespan, FlowTime, throughput, and resource utilization.
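
The abstract does not give the GA encoding, so the following is a minimal Python sketch of the idea under stated assumptions: the VM count is derived from host capacity and total task demand (dynamic_vm_count is a hypothetical helper), chromosomes are task-to-VM index vectors, and makespan is the fitness. It is illustrative, not the authors' implementation.

```python
# Minimal GA sketch: assign tasks to a dynamically sized pool of VMs.
# The VM-count rule, fitness, and GA parameters are illustrative assumptions.
import random

def dynamic_vm_count(task_lengths, host_mips, vm_mips=1000):
    # Assumption: create as many VMs as the hosts can hold, capped by demand.
    max_by_hosts = sum(h // vm_mips for h in host_mips)
    max_by_demand = max(1, len(task_lengths) // 2)
    return max(1, min(max_by_hosts, max_by_demand))

def makespan(assignment, task_lengths, n_vms, vm_mips=1000):
    load = [0.0] * n_vms
    for task, vm in enumerate(assignment):
        load[vm] += task_lengths[task] / vm_mips
    return max(load)

def ga_schedule(task_lengths, host_mips, pop=30, gens=100):
    n_vms = dynamic_vm_count(task_lengths, host_mips)
    n = len(task_lengths)
    popu = [[random.randrange(n_vms) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda a: makespan(a, task_lengths, n_vms))
        elite = popu[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if random.random() < 0.1:          # mutation
                child[random.randrange(n)] = random.randrange(n_vms)
            children.append(child)
        popu = elite + children
    best = min(popu, key=lambda a: makespan(a, task_lengths, n_vms))
    return best, n_vms, makespan(best, task_lengths, n_vms)

if __name__ == "__main__":
    tasks = [random.randint(500, 5000) for _ in range(40)]
    hosts = [8000, 8000, 16000]
    plan, vms, ms = ga_schedule(tasks, hosts)
    print(f"{vms} VMs, makespan ~{ms:.2f}s")
```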

Author(s):  
Ge Weiqing ◽  
Cui Yanru

Background: To make up for the shortcomings of the traditional algorithm, the Min-Min and Max-Min algorithms are combined with the traditional genetic algorithm. Methods: This paper proposes a new cloud computing task scheduling algorithm that uses Min-Min and Max-Min to generate the initial population and selects task completion time and load balancing as dual fitness functions, which improves the quality of the initial population, the search ability of the algorithm, and its convergence speed. Results: The simulation results show that the algorithm is superior to the traditional genetic algorithm and is an effective cloud computing task scheduling algorithm. Conclusion: Finally, this paper discusses the possibility of fusing the two improved algorithms and completes a preliminary fusion, but the simulation results of the fused algorithm are not yet ideal and need further study.
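
As a rough illustration of seeding a GA population with Min-Min/Max-Min schedules and scoring individuals with a combined completion-time/load-balance fitness, here is a hedged Python sketch; the heuristics are simplified greedy variants and the fitness weights are assumptions, not the paper's formulation.

```python
# Sketch: seed the initial GA population with simplified Min-Min and Max-Min
# schedules, then score with a weighted makespan + load-imbalance fitness.
import random, statistics

def heuristic_schedule(tasks, n_vms, pick_max_first):
    # Simplified: Min-Min processes the smallest task first, Max-Min the largest,
    # always placing it on the VM with the earliest projected finish time.
    ready = [0.0] * n_vms
    order = sorted(range(len(tasks)), key=lambda i: tasks[i], reverse=pick_max_first)
    assign = [0] * len(tasks)
    for t in order:
        vm = min(range(n_vms), key=lambda v: ready[v] + tasks[t])
        assign[t] = vm
        ready[vm] += tasks[t]
    return assign

def fitness(assign, tasks, n_vms, w=0.5):
    load = [0.0] * n_vms
    for t, vm in enumerate(assign):
        load[vm] += tasks[t]
    return w * max(load) + (1 - w) * statistics.pstdev(load)   # lower is better

def initial_population(tasks, n_vms, size=20):
    pop = [heuristic_schedule(tasks, n_vms, False),   # Min-Min seed
           heuristic_schedule(tasks, n_vms, True)]    # Max-Min seed
    while len(pop) < size:
        pop.append([random.randrange(n_vms) for _ in tasks])
    return pop

tasks = [random.randint(100, 1000) for _ in range(30)]
pop = initial_population(tasks, n_vms=5)
print(sorted(fitness(a, tasks, 5) for a in pop)[:3])
```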


2019 ◽  
Vol 2019 (1) ◽  
pp. 41-48 ◽  
Author(s):  
Karunakaran V

Due to the diversity of services with respect to technology and resources, it is challenging to choose virtual machines (VMs) from various data centres with varied objectives such as cost minimization, reduced energy consumption, and optimal response time in a cloud Infrastructure as a Service (IaaS) environment. The solutions available in the market are computationally exhaustive and aggregate multiple objectives into a single trade-off, which adversely affects solution quality. This paper describes a hybrid algorithm that facilitates VM selection for scheduling applications, based on Gravitational Search and the Non-dominated Sorting Genetic Algorithm (GSA and NSGA). The efficiency of the proposed algorithm is verified by simulation results.
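
Only the NSGA side of such a hybrid is easy to sketch without the paper's details; the snippet below shows non-dominated sorting of candidate VMs over assumed objectives (cost, energy, response time). The GSA search step is omitted and the objective values are illustrative.

```python
# Sketch of the non-dominated sorting step: candidate VMs are ranked into
# Pareto fronts over (cost, energy, response time), all to be minimized.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(candidates):
    remaining = dict(candidates)            # name -> (cost, energy, response)
    fronts = []
    while remaining:
        front = [n for n, obj in remaining.items()
                 if not any(dominates(o, obj) for m, o in remaining.items() if m != n)]
        fronts.append(front)
        for n in front:
            del remaining[n]
    return fronts

vms = {"vm1": (0.10, 120, 0.8), "vm2": (0.08, 150, 0.9),
       "vm3": (0.12, 100, 0.7), "vm4": (0.15, 160, 1.2)}
print(pareto_fronts(vms))   # the first list is the non-dominated set
```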


Processes ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 1514
Author(s):  
Aroosa Mubeen ◽  
Muhammad Ibrahim ◽  
Nargis Bibi ◽  
Mohammad Baz ◽  
Habib Hamam ◽  
...  

Many task scheduling approaches, such as GA and ACO, have been proposed and have improved the performance of cloud data centers with respect to various scheduling parameters. The task scheduling problem is NP-hard because the number of candidate solutions grows exponentially with the problem size, e.g., the number of tasks and the number of computing resources. Thus, it is always challenging to obtain a completely optimal schedule for the user tasks. In this research, we propose an adaptive load-balanced task scheduling (ALTS) approach for cloud computing. The proposed algorithm maps all incoming tasks to the available VMs in a load-balanced way to reduce the makespan, maximize resource utilization, and adaptively minimize SLA violations. Its performance is evaluated and compared with the state-of-the-art ACO, GA, and GAACO task scheduling approaches with respect to average resource utilization (ARUR), makespan, and SLA violation. The proposed approach shows significant improvements in makespan, SLA violation, and resource utilization over the compared approaches.
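
A minimal sketch of a load-balanced mapper in the spirit of ALTS, assuming tasks arrive as (length, deadline) pairs and that an SLA violation simply means missing the deadline; the data model and ordering are assumptions, not the paper's definitions.

```python
# Illustrative load-balanced mapping: earliest-deadline-first task order,
# least-loaded VM placement, with utilization and SLA-violation counters.
from dataclasses import dataclass

@dataclass
class VM:
    mips: float
    busy_until: float = 0.0

def schedule(tasks, vms):
    # tasks: list of (length_in_MI, deadline_seconds)
    violations = 0
    for length, deadline in sorted(tasks, key=lambda t: t[1]):   # earliest deadline first
        vm = min(vms, key=lambda v: v.busy_until)                # least-loaded VM
        finish = vm.busy_until + length / vm.mips
        if finish > deadline:
            violations += 1
        vm.busy_until = finish
    makespan = max(v.busy_until for v in vms)
    utilization = sum(v.busy_until for v in vms) / (len(vms) * makespan)
    return makespan, utilization, violations

vms = [VM(1000), VM(1500), VM(2000)]
tasks = [(4000, 10), (2500, 6), (6000, 15), (1200, 3), (3000, 8)]
print(schedule(tasks, vms))
```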


2018 ◽  
Vol 7 (1) ◽  
pp. 16-19
Author(s):  
Anupama Gupta ◽  
Kulveer Kaur ◽  
Rajvir Kaur

Cloud computing is an architecture in which cloudlets are executed by virtual machines. The most suitable virtual machines are selected on the basis of execution time and failure rate. Due to virtual machine overloading, execution time and energy consumption increase steadily. In this paper, a BFO-based technique is applied in which the weight of each virtual machine is calculated and the virtual machine with the maximum weight is selected as the target to which the cloudlet will be migrated. The performance of the proposed algorithm is tested by implementing it in CloudSim and analyzing it in terms of execution time and energy consumption.
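
The abstract does not spell out the weighting formula, so the sketch below only illustrates the weight-and-migrate step with an assumed weight built from execution time, failure rate, and load; the full BFO search loop is not reproduced.

```python
# Sketch: compute a weight per VM and migrate the cloudlet from an overloaded
# VM to the highest-weight candidate. The weight formula is an assumption.

def vm_weight(exec_time, failure_rate, load):
    # Lower execution time, failure rate, and load => higher weight.
    return 1.0 / (1e-9 + exec_time * (1.0 + failure_rate) * (1.0 + load))

def pick_migration_target(vms, overloaded):
    candidates = {n: s for n, s in vms.items() if n != overloaded}
    return max(candidates, key=lambda n: vm_weight(*candidates[n]))

vms = {  # name: (avg execution time, failure rate, current load)
    "vm1": (2.5, 0.05, 0.9),
    "vm2": (1.8, 0.02, 0.4),
    "vm3": (2.1, 0.10, 0.3),
}
print(pick_migration_target(vms, overloaded="vm1"))   # -> vm2
```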


Cloud computing delivers computing resources such as software and hardware as services to users over a network. A central idea of cloud computing is shared, large-scale storage. In cloud computing, user jobs are prepared and executed with appropriate resources to successfully deliver the services, and a large number of task allocation techniques are used to accomplish task planning. To improve task scheduling, we propose an efficient task scheduling algorithm; optimization techniques are well known for solving NP-hard problems. In the proposed technique, user tasks are stored in queues: each task is assigned a priority and allocated suitable resources, and newly arriving tasks are examined and placed in an on-demand priority queue. The output of the on-demand queue is given to the MWOA. The algorithm is shown to handle the optimization problem and outperform current algorithms, and the proposed method reduces the number of iterations required. It is compared with various scheduling algorithms such as the genetic algorithm, ant colony optimization, standard grey wolf optimization, and particle swarm optimization. The test results indicate the better efficiency of the MWOA in terms of makespan and energy consumption.
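
As an illustration of the queue front-end described here (not of MWOA itself), the sketch below holds tasks in an on-demand priority queue and releases batches to a stand-in optimizer; priorities, batch size, and the optimizer stub are assumptions.

```python
# Sketch: an on-demand priority queue feeding task batches to an optimizer.
# optimizer_stub is a placeholder for MWOA, not the paper's algorithm.
import heapq, random

class OnDemandQueue:
    def __init__(self):
        self._heap, self._count = [], 0
    def push(self, priority, task):
        # Counter breaks ties so tasks with equal priority stay in arrival order.
        heapq.heappush(self._heap, (priority, self._count, task))
        self._count += 1
    def pop_batch(self, k):
        return [heapq.heappop(self._heap)[2] for _ in range(min(k, len(self._heap)))]

def optimizer_stub(batch, n_vms):
    # Placeholder: random task-to-VM assignment for the released batch.
    return [random.randrange(n_vms) for _ in batch]

q = OnDemandQueue()
for i, (prio, length) in enumerate([(2, 900), (1, 400), (3, 1500), (1, 700)]):
    q.push(prio, {"id": i, "length": length})
batch = q.pop_batch(3)              # highest priority (lowest number) first
print([t["id"] for t in batch], optimizer_stub(batch, n_vms=4))
```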


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Shuzhen Wan ◽  
Lixin Qi

An important problem in cloud computing is scheduling tasks to virtual machines so as to meet cost and time demands while maintaining Quality of Service (QoS). Allocating tasks to cloud resources is difficult due to the uncertainty of consumers’ future requirements and the diversity of providers’ resources. Previous studies, whether on modeling or on scheduling approaches, no longer offer a satisfactory solution. In this paper, we establish a resource allocation framework and propose a novel task scheduling algorithm, an improved coral reef optimization (ICRO), to deal with this problem. In ICRO, the better-offspring and multicrossover strategies increase the convergence speed and improve the quality of solutions. In addition, a novel load balance-aware mutation enhances the load balance among virtual machines and adjusts the number of resources provided to users. Experimental results show that, compared with other algorithms, ICRO can significantly reduce the makespan and cost of the schedule while maintaining better load balance in the system.
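
The full coral reef loop is beyond an abstract-level sketch, but the load balance-aware mutation can be illustrated: assuming a task-to-VM vector representation and known task lengths, one task is moved from the most loaded VM to the least loaded one.

```python
# Sketch of a load-balance-aware mutation on a task-to-VM assignment vector.
# Representation and task lengths are assumptions; the full ICRO loop is omitted.
import random

def vm_loads(assign, lengths, n_vms):
    load = [0.0] * n_vms
    for t, vm in enumerate(assign):
        load[vm] += lengths[t]
    return load

def load_balance_mutation(assign, lengths, n_vms):
    load = vm_loads(assign, lengths, n_vms)
    src = load.index(max(load))                 # most loaded VM
    dst = load.index(min(load))                 # least loaded VM
    movable = [t for t, vm in enumerate(assign) if vm == src]
    if movable and src != dst:
        child = list(assign)
        child[random.choice(movable)] = dst     # move one task across
        return child
    return list(assign)

lengths = [800, 300, 1200, 500, 700, 400]
assign = [0, 0, 0, 1, 2, 2]
print(vm_loads(assign, lengths, 3), "->",
      vm_loads(load_balance_mutation(assign, lengths, 3), lengths, 3))
```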


Cloud computing is a computing paradigm in which services are provided by service providers on a pay-per-use basis. Task scheduling, i.e., allocating tasks to available resources to achieve better system performance, is a challenging issue in cloud computing. Here we propose a heuristic algorithm that schedules tasks on the given resources while satisfying the QoS of the system, taking the priority and deadline of tasks as parameters. Our algorithm is compared with existing algorithms such as EDF and TLD; it provides better makespan, increases throughput, and utilizes resources better than the existing algorithms.
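
A hedged sketch of one way priority and deadline could combine: tasks are sorted by priority, then deadline, and each is placed on the VM that finishes it earliest. The tuple layout and tie-breaking are assumptions, not the authors' heuristic.

```python
# Sketch: order tasks by (priority, deadline) and place each on the VM with
# the earliest projected finish time, counting missed deadlines.
def priority_deadline_schedule(tasks, vm_speeds):
    # tasks: (name, priority [lower = more urgent], deadline, length in MI)
    free_at = [0.0] * len(vm_speeds)
    plan, missed = [], 0
    for name, prio, deadline, length in sorted(tasks, key=lambda t: (t[1], t[2])):
        finish = [free_at[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
        v = finish.index(min(finish))
        free_at[v] = finish[v]
        missed += finish[v] > deadline
        plan.append((name, v, round(finish[v], 2)))
    return plan, missed

tasks = [("t1", 1, 5.0, 3000), ("t2", 2, 8.0, 4000), ("t3", 1, 4.0, 1500)]
print(priority_deadline_schedule(tasks, vm_speeds=[1000, 2000]))
```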


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing, which aims to achieve a well-balanced load across virtual machines to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We compare the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective compared with existing algorithms: there is a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue, using the WorkflowSim simulator in Java.
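
As a rough illustration of the honey-bee idea, the sketch below resubmits tasks from overloaded VMs to underloaded ones, moving the highest-priority waiting task first so it waits less; the load threshold and priority convention are assumptions.

```python
# Sketch: tasks leaving overloaded VMs are resubmitted to underloaded VMs,
# highest-priority task first. Thresholds and the load measure are assumed.
def balance(vms, threshold):
    # vms: name -> list of (task, priority [higher = more urgent], length)
    load = {n: sum(l for _, _, l in ts) for n, ts in vms.items()}
    avg = sum(load.values()) / len(vms)
    over = [n for n in vms if load[n] > avg * (1 + threshold)]
    under = [n for n in vms if load[n] < avg * (1 - threshold)]
    moves = []
    for src in over:
        while load[src] > avg and under:
            task = max(vms[src], key=lambda t: t[1])    # move highest priority first
            dst = min(under, key=lambda n: load[n])     # to the least loaded VM
            vms[src].remove(task); vms[dst].append(task)
            load[src] -= task[2]; load[dst] += task[2]
            moves.append((task[0], src, dst))
    return moves

vms = {"vm1": [("a", 1, 400), ("b", 3, 600), ("c", 2, 500)],
       "vm2": [("d", 1, 200)], "vm3": []}
print(balance(vms, threshold=0.2))
```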


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1400
Author(s):  
Muhammad Adnan ◽  
Jawaid Iqbal ◽  
Abdul Waheed ◽  
Noor Ul Amin ◽  
Mahdi Zareei ◽  
...  

Modern vehicles are equipped with various sensors, onboard units, and devices such as the Application Unit (AU) that support routing and communication. In VANETs, traffic management and Quality of Service (QoS) are the main research dimensions to be considered when designing VANET architectures. To cope with the QoS issues faced by VANETs, we design an efficient SDN-based architecture focused on the QoS of VANETs. In this paper, QoS is achieved by a priority-based scheduling algorithm in which traffic flow messages are prioritized in a safety queue and a non-safety queue. In the safety queue, messages are prioritized based on deadline and size using the New Deadline and Size of data (NDS) method with constrained location and deadline. In contrast, the non-safety queue is served on a First Come First Serve (FCFS) basis. For the simulation of the proposed scheduling algorithm, we use the well-known cloud computing framework CloudSim. The simulation results show that safety messages perform better than non-safety messages in terms of execution time.
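
A small sketch of the two-queue dispatcher described here, under assumptions: safety messages are ordered by deadline then size (one reading of NDS), non-safety messages are served FCFS, and safety messages always dispatch first.

```python
# Sketch: safety queue ordered by (deadline, size), non-safety queue FCFS,
# with safety traffic always dispatched ahead of non-safety traffic.
import heapq
from collections import deque

class MessageScheduler:
    def __init__(self):
        self.safety, self.non_safety, self._n = [], deque(), 0
    def submit(self, msg, safety, deadline=None, size=None):
        if safety:
            # Counter breaks ties so equal (deadline, size) keeps arrival order.
            heapq.heappush(self.safety, (deadline, size, self._n, msg))
            self._n += 1
        else:
            self.non_safety.append(msg)
    def dispatch(self):
        if self.safety:
            return heapq.heappop(self.safety)[3]
        return self.non_safety.popleft() if self.non_safety else None

s = MessageScheduler()
s.submit("weather update", safety=False)
s.submit("collision warning", safety=True, deadline=0.1, size=64)
s.submit("hazard ahead", safety=True, deadline=0.5, size=128)
print(s.dispatch(), "|", s.dispatch(), "|", s.dispatch())
# -> collision warning | hazard ahead | weather update
```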

