Locust Inspired Algorithm for Cloudlet Scheduling in Cloud Computing Environments

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7308
Author(s):  
Mohammed Alaa Ala’anzy ◽  
Mohamed Othman ◽  
Zurina Mohd Hanapi ◽  
Mohamed A. Alrshah

Cloud computing is an emerging paradigm that offers flexible and seamless services for users based on their needs, including user budget savings. However, the involvement of a vast number of cloud users has made the scheduling of users’ tasks (i.e., cloudlets) a challenging issue in selecting suitable data centres, servers (hosts), and virtual machines (VMs). Cloudlet scheduling is an NP-complete problem that can be solved using various meta-heuristic algorithms, which are quite popular due to their effectiveness. Massive user tasks and rapid growth in cloud resources have become increasingly complex challenges; therefore, an efficient algorithm is necessary for allocating cloudlets to attain better execution times, resource utilisation, and waiting times. This paper proposes a locust-inspired cloudlet scheduling algorithm to reduce the average makespan and waiting time and to boost VM and server utilisation. The CloudSim toolkit was used to evaluate our algorithm’s efficiency, and the obtained results revealed that our algorithm outperforms other state-of-the-art nature-inspired algorithms, improving the average makespan, waiting time, and resource utilisation.
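
To make the scheduling objective concrete, the following sketch shows a greedy cloudlet-to-VM mapping of the kind such nature-inspired schedulers are measured against; it is a minimal illustration with assumed toy workloads, not the locust-inspired algorithm described above.

```python
# Minimal, illustrative sketch (not the locust-inspired algorithm itself):
# assign each cloudlet to the VM whose projected finish time stays smallest,
# a common greedy baseline that nature-inspired schedulers aim to beat.
def greedy_cloudlet_schedule(cloudlet_lengths, vm_mips):
    finish_time = [0.0] * len(vm_mips)   # projected completion time per VM
    assignment = []                      # cloudlet index -> VM index
    for length in cloudlet_lengths:
        best_vm = min(range(len(vm_mips)),
                      key=lambda v: finish_time[v] + length / vm_mips[v])
        finish_time[best_vm] += length / vm_mips[best_vm]
        assignment.append(best_vm)
    return assignment, max(finish_time)  # mapping and resulting makespan

# Toy example: 6 cloudlets (in million instructions) on 3 VMs with different MIPS
print(greedy_cloudlet_schedule([4000, 1000, 2500, 3000, 1500, 500],
                               [1000, 500, 250]))
```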

Author(s):  
Jun Zhao ◽  
Tingyu Sheng

Background: The Open Cloud Computing Alliance (OCCA) strives for more Cloud Computing Service Providers (CCSPs) to join the alliance. OCCA only requires a CCSP to provide virtual computing resources and does not care about the underlying implementation, which allows open-source cloud computing to scale further and operate more efficiently. Due to differences in service modes and service categories, the cloud computing platforms formed by CCSPs are heterogeneous. How to execute tasks across platforms and ensure the quality of migration are the key issues for sharing the OCCA platform. Methods: A domain-based Mobile Agent technology is introduced. User tasks are encapsulated into Mobile Agent packets by the domain client, which realizes the migration of user tasks from one platform to another and makes interoperation between OCCA virtual machines possible. To better ensure OCCA service quality, a five-layer logical model of R-OCCA with high commercial availability is proposed, which defines the service content of each layer and gives the settings of key parameters. This paper introduces the architectural composition and operational mechanism of the model, carries out a qualitative analysis of it, and establishes an experimental prototype to verify the feasibility of the model on a virtual machine platform. Results: Experiments show that it is feasible to implement a Cloud Computing Alliance among cloud computing platforms through Mobile Agents under existing technical conditions. Conclusion: To better guarantee the quality of OCCA service, a five-layer R-OCCA logical model with strong commercial availability is proposed. The service content of each layer is defined and the key parameters are given. The rationality of the model’s settings is explained in terms of CCSP income, its feasibility is analyzed, its architectural composition and operational mechanisms are introduced, and its performance is analyzed.
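
As an illustration of the migration mechanism only, the sketch below wraps a user task in a self-describing agent structure that records its QoS requirements and migration history as it moves between CCSP platforms; the packet fields and function names are hypothetical, not part of the R-OCCA specification.

```python
# Hypothetical illustration of the mobile-agent idea (field and function names
# are assumptions, not the R-OCCA specification): a domain client wraps a user
# task into a self-describing packet that can be handed between CCSP platforms.
from dataclasses import dataclass, field

@dataclass
class AgentPacket:
    task_id: str
    payload: bytes                              # serialized task code/data
    current_platform: str
    qos: dict = field(default_factory=dict)     # e.g. deadline, min bandwidth
    hops: list = field(default_factory=list)    # migration history

def migrate(packet: AgentPacket, target_platform: str) -> AgentPacket:
    """Record the hop and hand the packet to the target platform's runtime."""
    packet.hops.append(packet.current_platform)
    packet.current_platform = target_platform
    # A real system would now transfer the packet and resume the task remotely.
    return packet

pkt = AgentPacket("task-42", b"...", "platform-A", qos={"deadline_s": 30})
print(migrate(pkt, "platform-B"))
```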


2021 ◽  
Vol 17 (3) ◽  
pp. 197-218
Author(s):  
Karima Saidi ◽  
Ouassila Hioual ◽  
Abderrahim Siam

In this paper, we address the issue of resource allocation in a cloud computing environment. Since the need for cloud resources has led to the rapid growth of data centers and the waste of idle resources, high power consumption has emerged. Therefore, we develop an approach that reduces energy consumption: decision-making for adequately matching tasks and virtual machines (VMs), together with their consolidation, minimizes this consumption. The aim of the proposed approach is energy efficiency. It consists of two processes; the first maps user tasks to VMs, whereas the second maps virtual machines to the best locations (physical machines). This paper focuses on the latter and develops a model using a deep neural network and the ELECTRE methods, supported by a K-nearest neighbor classifier. The experiments show that our model produces promising results compared with other works in the literature. The model also scales well, which improves learning and thus helps achieve our objectives.
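
As a rough illustration of the KNN-supported placement step, the sketch below assigns a new VM to the physical-machine class chosen by a majority vote over its nearest historical placements; the features and data are assumptions, not the paper's DNN/ELECTRE model.

```python
# Rough sketch of the KNN-supported placement step (features and data are
# assumptions, not the paper's DNN/ELECTRE model): a new VM is placed on the
# physical-machine class chosen by a majority vote over past placements.
import math
from collections import Counter

def knn_place(vm_features, history, k=3):
    """history: list of (features, pm_class) pairs from earlier good placements."""
    nearest = sorted(history, key=lambda h: math.dist(vm_features, h[0]))[:k]
    votes = Counter(pm for _, pm in nearest)
    return votes.most_common(1)[0][0]

# Toy history of (vCPUs, memory in GB) -> host class
history = [((2, 4.0), "pm-low-power"), ((8, 16.0), "pm-high-perf"),
           ((1, 2.0), "pm-low-power"), ((16, 32.0), "pm-high-perf")]
print(knn_place((4, 8.0), history))   # -> "pm-low-power" with this toy history
```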


Author(s):  
Leila Helali ◽  
Mohamed Nazih Omri

Since its emergence, cloud computing has continued to evolve thanks to its ability to present computing as consumable, pay-per-use services and the possibilities of resource scaling that it offers according to clients’ needs. Models and appropriate schemes for resource scaling through consolidation services have been considerably investigated, mainly at the infrastructure level, to optimize costs and energy consumption. Consolidation efforts at the SaaS level remain very restrained, especially when proprietary software is involved. In order to fill this gap and provide software licenses elastically with regard to economic and energy-aware considerations in the context of distributed cloud computing systems, this work deals with dynamic software consolidation in commercial cloud data centers (DS3C). Our solution is based on heuristic algorithms and reallocates software licenses at runtime by determining the optimal amount of resources required for their execution and freeing unused machines. Simulation results showed the efficiency of our solution, with 68.85% energy savings and 80.01% cost savings. It freed up to 75% of physical machines and 76.5% of virtual machines and proved its scalability in terms of average execution time while varying the number of software packages and the number of licenses alternately.
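
A minimal sketch of the consolidation idea follows, assuming a simple first-fit-decreasing heuristic rather than the DS3C algorithm itself: packing licence workloads onto as few machines as possible frees the remaining machines entirely.

```python
# Minimal sketch assuming a first-fit-decreasing heuristic (not the paper's
# DS3C algorithm): licence workloads are packed onto as few machines as
# possible so that the machines left empty can be released or switched off.
def consolidate(workloads, machine_capacity):
    """workloads: normalised CPU demand per licence; returns per-machine packing."""
    machines = []   # each entry is the list of workloads placed on one machine
    for w in sorted(workloads, reverse=True):
        for m in machines:
            if sum(m) + w <= machine_capacity:
                m.append(w)
                break
        else:
            machines.append([w])    # open a new machine only when necessary
    return machines

placement = consolidate([0.6, 0.2, 0.5, 0.3, 0.4, 0.1], machine_capacity=1.0)
print(len(placement), placement)    # 3 machines for 6 licences -> hosts freed
```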


2021 ◽  
pp. 165-174
Author(s):  
Ahmed A. A. Gad-Elrab ◽  
Tamer A.A. Alzohairy ◽  
Kamal R. Raslan ◽  
Farouk A. Emara

Recently, cloud computing has become the most common platform in the computing world. Scheduling is one of the most important mechanisms for managing cloud resources: a scheduling mechanism distributes user tasks among data centers, hosts, and virtual machines (VMs), and the problem is NP-complete. Most existing mechanisms are heuristic or meta-heuristic methods developed to address part of the scheduling problem, and they do not consider the dynamic creation of VMs based on the resources required by a user task and the capabilities of the available hosts. To deal with this dynamic behavior, this paper introduces a new mechanism that uses a genetic algorithm (GA) to establish a flexible scheduler that can adapt the number of VMs to the resources required by user tasks and the resources available on hosts. Simulation results show that the proposed algorithm can distribute any number of user tasks over the available resources and achieves better performance than existing algorithms in terms of response time, makespan, flowtime, throughput, and resource utilization.
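
The sketch below illustrates a generic GA for task-to-VM mapping over a fixed pool of VMs; the dynamic creation of VMs described above is not modelled, and the encoding, operators, and parameters are illustrative assumptions.

```python
# Generic GA sketch for mapping tasks onto a fixed pool of VMs (encoding,
# operators, and parameters are illustrative assumptions): a chromosome assigns
# each task to a VM index, and selection favours schedules with lower makespan.
import random

def makespan(chrom, lengths, vm_mips):
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(chrom):
        finish[vm] += lengths[task] / vm_mips[vm]
    return max(finish)

def ga_schedule(lengths, vm_mips, pop=30, gens=100, mut=0.1):
    n, m = len(lengths), len(vm_mips)
    population = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: makespan(c, lengths, vm_mips))
        parents = population[:pop // 2]                  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)                 # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                    # random-reset mutation
                child[random.randrange(n)] = random.randrange(m)
            children.append(child)
        population = parents + children
    best = min(population, key=lambda c: makespan(c, lengths, vm_mips))
    return best, makespan(best, lengths, vm_mips)

print(ga_schedule([900, 400, 700, 300, 500, 800], [1000, 500, 250]))
```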


Cloud computing is a vibrant on-demand service of today’s era in which computing resources can be used without direct active management by the user and economies of scale can be achieved. Since cloud computing is delivered as Everything as a Service, highly scalable and reliable mechanisms are needed to distribute the load evenly across VMs. Innumerable cloudlet mapping policies have been presented in research articles to achieve high performance, better QoS, and minimized task execution time, but most are conventional approaches; no unconventional, realistic scheduling algorithm is available that can schedule tasks in a heterogeneous manner. Cloudlet scheduling is thus a crucial aspect of cloud computing that has to be improved by combining different parameters. This paper aims to improve the effectiveness of task scheduling using a nature-inspired Particle Swarm Optimization (PSO) strategy. A nature-inspired load balancing mechanism is proposed that optimizes makespan and throughput in environments with varying numbers of cloudlets and virtual machines, compared with other conventional approaches. The proposed EPSO algorithm is compared with scheduling policies, namely FCFS, Round Robin (RR), and Shortest Job First (SJF), and achieves nearly twice the throughput percentage and a minimized makespan in two different environments. The CloudSim toolkit and some open-source cloud packages were used to simulate the various scheduling components; the experimental results were tested and simulated on the Java-based CloudSim framework.
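
The following is a hedged sketch of the PSO idea for cloudlet mapping, not the proposed EPSO: each particle keeps one continuous value per cloudlet that is rounded into a VM index, and the swarm is pulled toward personal and global bests scored by makespan.

```python
# Hedged sketch of PSO for cloudlet-to-VM mapping (not the proposed EPSO):
# each particle keeps one continuous value per cloudlet, rounded into a VM
# index, and velocities pull particles toward personal and global bests.
import random

def decode(pos, m):
    return [min(int(p), m - 1) for p in pos]

def fitness(pos, lengths, vm_mips):
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(decode(pos, len(vm_mips))):
        finish[vm] += lengths[task] / vm_mips[vm]
    return max(finish)                                   # makespan to minimise

def pso_schedule(lengths, vm_mips, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    n, m = len(lengths), len(vm_mips)
    X = [[random.uniform(0, m) for _ in range(n)] for _ in range(particles)]
    V = [[0.0] * n for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=lambda x: fitness(x, lengths, vm_mips))[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(n):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], 0.0), m - 1e-9)
            if fitness(X[i], lengths, vm_mips) < fitness(pbest[i], lengths, vm_mips):
                pbest[i] = X[i][:]
                if fitness(X[i], lengths, vm_mips) < fitness(gbest, lengths, vm_mips):
                    gbest = X[i][:]
    return decode(gbest, m), fitness(gbest, lengths, vm_mips)

print(pso_schedule([4000, 1000, 2500, 3000, 1500, 500], [1000, 500, 250]))
```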


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4071-4075

Cloud computing is defined as resources that can be delivered to or accessed by a local host from a remote server via the Internet. Cloud providers typically use a "pay-as-you-go" model. The evolution of cloud computing has shaped the modern environment thanks to the abundance and advancement of computing and communication infrastructure. As user requests are served and system responses are generated, load is assigned within the cloud, and servers may become over- or under-loaded. Heavy load creates power consumption and energy management problems, which can cause system failures and lead to data loss. Therefore, an efficient load balancing method is necessary to overcome these problems. The objective of this work is to develop a metaheuristic load balancing algorithm that migrates load across multiple servers, combined with machine learning techniques, to increase cloud resource utilization and minimize task makespan. Using an unsupervised machine learning technique, it is possible to predict the correct response time and waiting time of the servers by acquiring prior knowledge about the virtual machines and their clusters. This work also calculates the accuracy rate of the Round-Robin load balancing algorithm and compares it with the proposed algorithm. In this way, response time and waiting time are minimized, resource utilization is increased, and task makespan is reduced.
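
As an illustration of the unsupervised step, the sketch below clusters VMs by load with a simple k-means (an assumption, not necessarily the paper's model) and routes new work to a machine from the lightest cluster.

```python
# Illustration of the unsupervised step, assuming a simple k-means over VM load
# features (not necessarily the paper's model): VMs are grouped by load and new
# work is routed to a machine from the lightest cluster.
import math
import random

def kmeans(points, k=2, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: math.dist(p, centroids[c]))].append(p)
        centroids = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

# (cpu_utilisation, queued_tasks) per VM -- toy data
vm_load = [(0.9, 12), (0.85, 10), (0.2, 1), (0.3, 2), (0.25, 3)]
groups = kmeans(vm_load, k=2)
lightest = min((g for g in groups if g), key=lambda g: sum(p[0] for p in g) / len(g))
print("route the next task to one of:", lightest)
```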


2021 ◽  
Vol 13 (2) ◽  
pp. 38-51
Author(s):  
Nasim Soltani Soulegan ◽  
Behrang Barekatain ◽  
Behzad Soleimani Neysiani

Cloud computing is considered a pattern for distributed and heterogeneous computing that draws on many resources, with requests aiming to share those resources. Recently, cloud computing has been ranked among the best technologies globally, and it must be scheduled favorably to maximize providers’ profit and improve service quality for their customers. Scheduling specifies how users’ requests are assigned to virtual machines, and it plays a vital role in the efficiency and capability of the system. Its objective is to maximize throughput and complete jobs in minimum time and to the highest standard. Scheduling jobs in heterogeneous distributed systems is an NP-hard problem that cannot be solved in polynomial time for real-time scheduling. The time complexity grows exponentially with the number of jobs, and this problem has a considerable effect on the quality of cloud services and on providers’ efficiency. Optimizing scheduling-related parameters using heuristic and meta-heuristic algorithms can reduce the search-space complexity and execution time. This study presents a fitness function to minimize time and cost parameters. The proposed method uses a multi-objective weighted genetic algorithm that considers six basic parameters: utility, task execution cost, response time, wait time, makespan, and throughput, to provide comprehensive optimization. The proposed approach improved response and wait times, throughput, makespan, and utility by 16, 9, 7, and 8 percent, respectively, at the cost of only one cost unit, which is negligible. As a result, both providers and users experience better services. Statistical tests show that the achieved improvement holds for 94% of the experiments.
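
The sketch below shows how such a weighted multi-objective fitness function is commonly assembled; the weights, normalisation, and metric names are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of a weighted multi-objective fitness in the spirit described above;
# the weights, normalisation, and metric names are illustrative assumptions,
# not the paper's exact formulation. Lower-is-better metrics are inverted so
# that a higher score always means a better schedule.
def fitness(metrics, weights):
    """Both dicts are keyed by: utility, cost, response, wait, makespan, throughput."""
    score = 0.0
    for name, value in metrics.items():
        if name in ("cost", "response", "wait", "makespan"):   # minimise these
            score += weights[name] * (1.0 / (1.0 + value))
        else:                                                  # maximise these
            score += weights[name] * value
    return score

weights = {"utility": 0.2, "cost": 0.2, "response": 0.15,
           "wait": 0.15, "makespan": 0.2, "throughput": 0.1}
candidate = {"utility": 0.8, "cost": 12.0, "response": 3.5,
             "wait": 1.2, "makespan": 40.0, "throughput": 0.9}
print(fitness(candidate, weights))   # higher is better when comparing candidates
```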


Author(s):  
Mohammed Yousuf Uddin ◽  
Hikmat Awad Abdeljaber ◽  
Tariq Ahamed Ahanger

Cloud computing is developing as a platform for next-generation systems where users pay for cloud facilities as they use them, like any other utility. A cloud environment involves a set of virtual machines that share the same computation facility and storage. Due to the rapid rise in demand for cloud computing services, researchers have developed and experimented with several algorithms to enhance the task scheduling process of the machines, thereby offering users solutions that process the maximum number of tasks with minimal utilization of resources. Task scheduling denotes a set of policies to regulate the tasks processed by a system, and virtual machine scheduling is essential for effective operation in a distributed environment. To achieve efficient task scheduling of virtual machines, this study proposes a hybrid algorithm that integrates two prominent heuristic algorithms, the BAT algorithm and Ant Colony Optimization (ACO), to optimize the virtual machine scheduling process. The performance evaluation of the three algorithms (BAT, ACO, and the hybrid) reveals that the hybrid algorithm performs better than the other two.
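
The skeleton below is a heavily simplified, hypothetical illustration of how a BAT/ACO hybrid can be wired together, not the authors' algorithm: a bat-style random search proposes a seed mapping, which biases the pheromone matrix that ants then refine.

```python
# Heavily simplified, hypothetical hybrid skeleton (not the authors' exact
# BAT/ACO hybrid): a bat-style random search proposes a seed task-to-VM
# mapping, which biases the pheromone matrix that ants then refine.
import random

def makespan(assign, lengths, vm_mips):
    finish = [0.0] * len(vm_mips)
    for t, v in enumerate(assign):
        finish[v] += lengths[t] / vm_mips[v]
    return max(finish)

def bat_seed(lengths, vm_mips, trials=200):
    """Bat-style exploration reduced to random perturbations of the best-so-far."""
    n, m = len(lengths), len(vm_mips)
    best = [random.randrange(m) for _ in range(n)]
    for _ in range(trials):
        cand = best[:]
        cand[random.randrange(n)] = random.randrange(m)       # local move
        if makespan(cand, lengths, vm_mips) < makespan(best, lengths, vm_mips):
            best = cand
    return best

def aco_refine(seed, lengths, vm_mips, ants=20, iters=50, evap=0.1):
    n, m = len(lengths), len(vm_mips)
    tau = [[2.0 if seed[t] == v else 1.0 for v in range(m)] for t in range(n)]
    best, best_val = seed, makespan(seed, lengths, vm_mips)
    for _ in range(iters):
        for _ in range(ants):
            assign = [random.choices(range(m), weights=tau[t])[0] for t in range(n)]
            val = makespan(assign, lengths, vm_mips)
            if val < best_val:
                best, best_val = assign, val
        for t in range(n):                   # evaporate, then reinforce the best path
            tau[t] = [(1 - evap) * w for w in tau[t]]
            tau[t][best[t]] += 1.0
    return best, best_val

lengths, vm_mips = [900, 400, 700, 300, 500, 800], [1000, 500, 250]
print(aco_refine(bat_seed(lengths, vm_mips), lengths, vm_mips))
```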


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey-bee-behavior-inspired load balancing, which aims to achieve a well-balanced load across virtual machines to maximize throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective compared with existing algorithms. Our approach shows a significant improvement in average execution time and a reduction in task waiting time in the queue, using the WorkflowSim simulator in Java.
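
A simplified sketch of the balancing idea follows; it is illustrative only, not the proposed honey-bee algorithm: tasks shed by overloaded VMs behave like scout bees and settle on the least-loaded VMs until queues even out.

```python
# Simplified sketch of the balancing idea (illustrative only, not the proposed
# algorithm): tasks shed by overloaded VMs behave like scout bees and settle on
# the least-loaded VMs until the queues even out.
def balance(vm_queues):
    avg = sum(len(q) for q in vm_queues) / len(vm_queues)
    displaced = []
    for q in vm_queues:
        while len(q) > avg + 1:              # overloaded VM releases surplus tasks
            displaced.append(q.pop())
    for task in displaced:                   # each "bee" settles on the lightest VM
        min(vm_queues, key=len).append(task)
    return vm_queues

queues = [["t1", "t2", "t3", "t4", "t5"], ["t6"], []]
print(balance(queues))   # queues end up holding 3, 2, and 1 tasks respectively
```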


Author(s):  
Ramandeep Kaur

A lot of research has been done in the field of cloud computing. For effective performance, a variety of algorithms have been proposed. The role of virtualization is significant, and its performance depends on VM migration and allocation. A considerable amount of energy is consumed in the cloud; therefore, numerous algorithms are required for saving energy and enhancing efficiency. In the proposed work, a green algorithm is considered together with a meta-heuristic algorithm, ABC (Artificial Bee Colony). Every server has to perform different or identical functions. A cloud computing infrastructure can be modelled as a set of physical machines (servers/hosts) PM1, PM2, PM3, …, PMn. The resources of the cloud infrastructure are used through virtualization technology, which allows several VMs to be created on a physical server or host and therefore lessens the amount of hardware and enhances resource utilization. The computing resources/nodes in the cloud are used through virtual machines. To address this problem, data centre resources have to be managed in a resource-effective manner to drive Green Cloud computing, which is proposed in this work using the virtual machine concept with ABC and a Neural Network optimization algorithm. The simulations have been carried out in the CloudSim environment, and parameters such as SLA violations, energy consumption, and VM migrations are evaluated and compared with existing techniques.
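
To make the energy argument concrete, the sketch below uses a linear host power model that is common in green-cloud studies (an assumption, not this paper's exact model): consolidating the same load onto fewer hosts and switching idle hosts off lowers total power.

```python
# Linear host power model commonly used in green-cloud studies (an assumption,
# not this paper's exact model): each active host draws idle power plus a
# utilisation-proportional share, so packing VMs onto fewer hosts and switching
# the rest off reduces total consumption.
def host_power(utilisation, p_idle=100.0, p_max=250.0):
    return p_idle + (p_max - p_idle) * utilisation   # watts for one active host

def total_power(host_utilisations):
    # hosts at zero utilisation are assumed to be switched off
    return sum(host_power(u) for u in host_utilisations if u > 0)

spread = [0.25, 0.25, 0.25, 0.25]    # the same VMs spread over four hosts
packed = [0.9, 0.1, 0.0, 0.0]        # consolidated onto two hosts
print(total_power(spread), total_power(packed))   # 550.0 W vs 350.0 W
```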

