Hybrid Scheduling Strategy in Cloud Computing based on Optimization Algorithms

Author(s):  
Komal ◽
Gaurav Goel ◽  
Milanpreet Kaur

As a platform for offering on-demand services, cloud computing has grown in relevance and appeal, with services billed on a pay-per-use model. A cloud service provider's primary goal is to use resources efficiently, reducing execution time, cost, and other factors while increasing profit. Effective scheduling therefore remains a key issue in cloud computing, and the problem is NP-complete. Researchers have previously proposed several optimization techniques to address it, but more work is needed in this area. This paper provides an overview of a strategy for effective task scheduling based on a hybrid heuristic approach for both regular and larger workloads. The previous method handles jobs adequately, but its performance degrades as task size grows. The proposed scheduling method employs two distinct techniques to select a suitable VM for a given job. First, it enhances the LJFP method by employing OSIG, an upgraded version of the genetic algorithm, to choose solutions with improved fitness factors, crossover, and mutation operators. This selection returns the best machines, and PSO then chooses one for the specific job, based on several factors including expected execution time, current load, and energy usage. The proposed algorithm's performance is assessed in two distinct cloud scenarios with various VMs and tasks, measuring overall execution time and energy usage. The proposed algorithm outperforms existing techniques in both scenarios in terms of energy usage and average execution time.
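As a rough illustration of the selection criterion described above, the sketch below scores candidate VMs by a weighted combination of expected execution time, current load, and energy use, and picks the minimum. The weights, field names, and the simple energy estimate are illustrative assumptions; this stands in for the paper's OSIG/PSO pipeline rather than reproducing it.

```python
# Minimal sketch of VM scoring by time, load, and energy (assumed formula).
from dataclasses import dataclass

@dataclass
class VM:
    mips: float          # processing capacity
    load: float          # current utilization in [0, 1]
    power_watts: float   # average power draw

def fitness(task_length: float, vm: VM,
            w_time: float = 0.5, w_load: float = 0.3,
            w_energy: float = 0.2) -> float:
    """Lower is better: weighted sum of time, load, and energy estimates."""
    exec_time = task_length / vm.mips      # expected execution time
    energy = vm.power_watts * exec_time    # crude energy estimate
    return w_time * exec_time + w_load * vm.load + w_energy * energy

def select_vm(task_length: float, vms: list[VM]) -> VM:
    # In the paper, OSIG shortlists machines and PSO picks one;
    # here we simply take the fitness minimum over all candidates.
    return min(vms, key=lambda vm: fitness(task_length, vm))

vms = [VM(1000, 0.7, 120), VM(2500, 0.4, 180), VM(500, 0.1, 60)]
print(select_vm(20000, vms))
```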

Author(s):  
Shikha Chaudhary ◽  
Saroj Hiranwal ◽  
C. P. Gupta

In cloud computing, a huge pool of resources is available and shared through the internet. Scheduling is a core technique that determines the performance of a cloud computing system: its goal is to allocate each task to an appropriate machine so as to achieve one or more QoS objectives. Finding the suitable resource among the pool of resources to achieve this goal is an NP-complete problem. A class of algorithms called nature-inspired algorithms has emerged to find near-optimal solutions. In this paper we provide a survey and a comparative analysis of existing nature-inspired scheduling algorithms based on the genetic algorithm and ant colony optimization.


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Yueguo Luo ◽  
Zhongyang Xiong ◽  
Guanghua Zhang

Tissue P systems are a class of computing models inspired by intercellular communication, in which rules are applied in a nondeterministic, maximally parallel manner. Ordinarily, every rule in such a system is assumed to take the same execution time; from a biochemical point of view, however, the execution time of reactions is hard to control. In this work, we construct a uniform and efficient solution to the SAT problem with tissue P systems in a time-free way for the first time. With the P systems constructed from the sizes of instances, the execution time of the rules has no influence on the computation results. As a result, we prove that such systems are highly effective for an NP-complete problem even in a time-free manner, using communication rules of length at most 3.


2019 ◽  
Vol 19 (5-6) ◽  
pp. 773-789 ◽  
Author(s):  
GONZAGUE YERNAUX ◽  
WIM VANHOOF

Anti-unification refers to the process of generalizing two (or more) goals into a single, more general goal that captures some of the structure common to all initial goals. One is typically interested in computing what is often called a most specific generalization, that is, a generalization that captures a maximal amount of shared structure. In this work we address the problem of anti-unification in CLP, where goals can be seen as unordered sets of atoms and/or constraints. We show that while the concept of a most specific generalization can easily be defined in this context, computing it becomes an NP-complete problem. We subsequently introduce a generalization algorithm that computes a well-defined abstraction whose computation can be bounded by a polynomial execution time. Initial experiments show that even a naive implementation of our algorithm produces acceptable generalizations efficiently.
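For intuition, here is a minimal sketch of classical first-order anti-unification (Plotkin's least general generalization). Note that the paper's setting is harder: CLP goals are unordered sets of atoms and constraints, which is exactly what makes computing a most specific generalization NP-complete. The term encoding below is an assumption for illustration.

```python
# Terms are tuples ('functor', arg1, ...) or leaf strings.
def lgg(s, t, subst, counter):
    # Identical terms generalize to themselves.
    if s == t:
        return s
    # Same functor and arity: generalize argument-wise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, subst, counter)
                               for a, b in zip(s[1:], t[1:]))
    # Otherwise introduce (or reuse) a variable for the pair (s, t),
    # so repeated disagreements map to the same variable.
    if (s, t) not in subst:
        counter[0] += 1
        subst[(s, t)] = f"X{counter[0]}"
    return subst[(s, t)]

# lgg of p(f(a), a) and p(f(b), b) is p(f(X1), X1): shared structure is kept.
print(lgg(("p", ("f", "a"), "a"), ("p", ("f", "b"), "b"), {}, [0]))
```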


Resource allocation policies play a key role in determining the performance of the cloud. Service providers in cloud computing have to serve many users simultaneously, so allocating cloudlets to appropriate virtual machines has become one of the challenging issues of cloud computing, and many algorithms have been proposed for it. In this paper, we represent the cloudlet allocation problem as a job assignment problem and propose a Hungarian-algorithm-based solution for allocating cloudlets to virtual machines, with the objective of minimizing the total execution time of the cloudlets. The proposed algorithm is implemented in the CloudSim 3.03 simulator, and its simulation results are compared with the existing first-come-first-serve (FCFS) scheduling policy and the Min-Min scheduling algorithm. The proposed algorithm performs better than the above-mentioned algorithms in terms of total execution time and makespan (the finishing time of the last cloudlet).
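A minimal sketch of the assignment formulation: given an execution-time matrix time[i][j] for cloudlet i on VM j (the numbers below are illustrative, not from the paper), the Hungarian method returns the one-to-one assignment minimizing total time. SciPy's linear_sum_assignment implements it.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# time[i][j]: estimated execution time of cloudlet i on VM j,
# e.g. cloudlet length divided by VM MIPS (illustrative values).
time = np.array([
    [14.0,  5.0,  8.0],
    [ 2.0, 12.0,  6.0],
    [ 7.0,  8.0,  3.0],
])

cloudlets, vms = linear_sum_assignment(time)   # Hungarian algorithm
for c, v in zip(cloudlets, vms):
    print(f"cloudlet {c} -> VM {v} ({time[c, v]} s)")
print("total execution time:", time[cloudlets, vms].sum())
```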


2018 ◽  
Vol 7 (1) ◽  
pp. 16-19
Author(s):  
Anupama Gupta ◽  
Kulveer Kaur ◽  
Rajvir Kaur

Cloud computing is an architecture in which cloudlets are executed by virtual machines, the most suitable of which are selected on the basis of execution time and failure rate. When a virtual machine is overloaded, execution time and energy consumption increase steadily. In this paper, a BFO-based technique is applied in which a weight is calculated for each virtual machine, and the cloudlet is migrated to the virtual machine with the maximum weight. The performance of the proposed algorithm is tested by implementing it in CloudSim and analyzing it in terms of execution time and energy consumption.
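A minimal sketch of the weight-based migration choice. In the paper the weights come out of bacterial foraging optimization (BFO), which is not reproduced here; the stand-in formula below simply rewards spare capacity and penalizes load and failure rate.

```python
# Illustrative weight formula (assumption), not the paper's BFO-derived weights.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mips: float          # capacity
    load: float          # current utilization in [0, 1]
    failure_rate: float  # observed failures per unit time

def weight(vm: VM) -> float:
    # Higher is better: more spare capacity, fewer failures.
    return vm.mips * (1.0 - vm.load) / (1.0 + vm.failure_rate)

def migration_target(vms: list[VM]) -> VM:
    # The cloudlet is migrated to the maximum-weight VM.
    return max(vms, key=weight)

vms = [VM("vm0", 1000, 0.9, 0.02),
       VM("vm1",  800, 0.3, 0.10),
       VM("vm2", 1200, 0.5, 0.01)]
print(migration_target(vms).name)
```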


2019 ◽  
Vol 11 (2) ◽  
pp. 34
Author(s):  
Flávia Pisani ◽  
Vanderson Martins do Rosario ◽  
Edson Borin

In this article, we work toward an answer to the question "is it worth processing a data stream on the device that collected it, or should we send it somewhere else?". As is often the case in computer science, the response is "it depends". To find out when it is more profitable to stay on the device (which is part of the fog) or to go to a different one (for example, a device in the cloud), we propose two models that help the user evaluate the cost of performing a certain computation on the fog or sending all the data to be handled by the cloud. In our generic mathematical model, the user can define a cost type (e.g., number of instructions, execution time, energy consumption) and plug in values to analyze test cases. As filters have a very important role in the future of the Internet of Things and can be implemented as lightweight programs capable of running on resource-constrained devices, this kind of procedure is the main focus of our study. Furthermore, our visual model guides the user in their decision by aiding the visualization of the proposed linear equations and their slopes, which allows them to determine whether fog or cloud computing is more profitable for their specific scenario. We validated our models by analyzing four benchmark instances (two applications, each using two different sets of parameters) executed on five datasets, with execution time and energy consumption as the cost types.
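A minimal sketch of the kind of linear cost comparison the models support, with illustrative coefficients (not the authors' measured values): fog cost grows purely per record, while cloud cost adds a per-record transfer term and a fixed setup term, so the two lines cross at a break-even stream size.

```python
# Illustrative linear cost model; the coefficients are assumptions.
def fog_cost(n_records, cost_per_record_fog=5.0):
    # All work stays on the collecting device.
    return cost_per_record_fog * n_records

def cloud_cost(n_records, transfer_per_record=3.0,
               cost_per_record_cloud=1.0, fixed_setup=100.0):
    # Pay once for setup, then transfer + (cheaper) processing per record.
    return fixed_setup + (transfer_per_record + cost_per_record_cloud) * n_records

for n in (10, 100, 1000):
    f, c = fog_cost(n), cloud_cost(n)
    print(f"n={n:5d}  fog={f:8.1f}  cloud={c:8.1f}  ->  "
          f"{'fog' if f <= c else 'cloud'}")
```

With these coefficients the break-even point is n = 100 records: below it the fog line lies under the cloud line, above it the cloud's cheaper per-record slope wins, which is exactly the comparison the visual model lets the user read off the plotted slopes.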


Author(s):  
Amjad Gawanmeh ◽  
Ahmad Alomari ◽  
Alain April ◽  
Ali Alwadi ◽  
Sazia Parvin

The era of cloud computing allows provided services to be scaled up instantly to massive capacities without investing in any new on-site infrastructure. Interest in this type of service has therefore grown, particularly among medium-scale entities that can afford to completely outsource their data center and infrastructure. In addition, large companies may wish to support a wide range of load capacities, including peak ones, but building larger internal data centers for that purpose carries very high costs; cloud services can serve these companies according to their need, whether at peak load or low load. Resource sharing and provisioning is therefore considered one of the most challenging problems in cloud-based services, since these services have become more numerous and dynamic. Assigning tasks and service requests to available resources has become a persistent problem in cloud computing, given the large number of variables and the increasing variety of services, demands, and requirements. Scheduling services on a limited number of resources has been studied since the evolution of cloud computing, yet several areas remain open for improvement due to the large number of optimization variables; in general, scheduling services on available resources is NP-complete, so several heuristic-based methods have been proposed to enhance the efficiency of cloud systems.

This chapter formalizes the problem of scheduling multiple tasks for a single user and for multiple users, and then presents a proposed solution for each case. First, an algorithm is presented and evaluated that computes an optimum schedule allocating a number of subtasks to a given number of resources; the algorithm was shown to be linear in the number of users. Then, an algorithm is presented for the problem of multiple users' allocations, each with multiple subtasks, designed using the single-user allocation algorithm as a selection function. Since this problem is known to be NP-complete, heuristic-based methods are usually used to provide better solutions; therefore, a green evolutionary algorithm is proposed to address resource allocation with a large number of users. The algorithm produces allocation schedules with better utility, while its execution time remains linear in the various parameters. The results obtained in this work show that it surpasses one of the most efficient algorithms presented in this regard, which was based on game theory. Further, this method works with no restrictions on the problem parameters, as opposed to game-theoretic methods that require certain restrictions on the cost vector or completion-time matrix. On the other hand, the main limitation of the proposed algorithm is that it applies only to the scheduling of multiple tasks with one price vector and one execution-time vector. Scheduling multiple users, each with subtasks that have their own price and execution-time vectors, is a very complex problem beyond the scope of this work and will be addressed in future work.
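As a loose illustration of the single-user step, the sketch below greedily places each subtask on the resource where it would finish earliest; this is plain list scheduling under assumed length and speed vectors, not the chapter's algorithm.

```python
# Greedy list-scheduling sketch (assumed stand-in for the single-user step).
def allocate(subtask_lengths, resource_speeds):
    """Return (makespan, assignment) mapping subtask index -> resource index."""
    free_at = [0.0] * len(resource_speeds)   # next free time per resource
    assignment = {}
    # Placing longer subtasks first tends to balance load better.
    for i in sorted(range(len(subtask_lengths)),
                    key=lambda i: -subtask_lengths[i]):
        # Pick the resource on which this subtask would finish earliest.
        r = min(range(len(resource_speeds)),
                key=lambda r: free_at[r] + subtask_lengths[i] / resource_speeds[r])
        free_at[r] += subtask_lengths[i] / resource_speeds[r]
        assignment[i] = r
    return max(free_at), assignment

# Four subtasks, two resources (the second twice as fast).
print(allocate([40, 10, 30, 20], [1.0, 2.0]))
```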


2021 ◽  
Vol 10 (4) ◽  
pp. 2320-2326
Author(s):  
Yasameen A. Ghani Alyouzbaki ◽  
Muaayed F. Al-Rawi

The cloud is a framework in which communication connects virtual machines, data centers, hosts, and brokers, with the broker searching for a highly reliable virtual machine on which to execute each cloudlet. Vulnerabilities can occur in the network, causing the framework to become overburdened. This article introduces a strategy to increase the fault tolerance of the framework. The proposed approach builds on the ant colony optimization (ACO) algorithm, which chooses the better virtual machine to which the cloudlet should be migrated so as to reduce execution time and energy consumption. The efficiency of the proposed approach is simulated in terms of execution time and energy consumption and examined with CloudSim. The article opens with a detailed description of cloud computing and of green cloud computing with its models; the introduction also discusses the virtual machine (VM) in more depth, which allows cloud service providers to supervise cloud resources competently while dispensing with the need for human oversight. The article then reviews and discusses related work, explains the novel proposed ACO-based load balancing technique, and concludes that the execution time and energy consumption of the proposed technique are better than those of the three-threshold energy saving algorithm (TESA) commonly used in cloud load balancing.
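An ACO-flavored sketch of the VM choice (illustrative parameters and update rule, not the paper's exact formulation): an ant picks a VM with probability proportional to pheromone^alpha times a heuristic^beta that favors fast machines; pheromone then evaporates everywhere and is reinforced on the chosen VM in proportion to its quality.

```python
import random

def choose_vm(pheromone, exec_time, alpha=1.0, beta=2.0):
    # Probability proportional to pheromone^alpha * (1/time)^beta.
    scores = [pheromone[j] ** alpha * (1.0 / exec_time[j]) ** beta
              for j in range(len(pheromone))]
    return random.choices(range(len(pheromone)), weights=scores)[0]

def update_pheromone(pheromone, chosen, exec_time, rho=0.1):
    # Evaporate everywhere, deposit on the chosen VM.
    for j in range(len(pheromone)):
        pheromone[j] *= (1.0 - rho)
    pheromone[chosen] += 1.0 / exec_time[chosen]

exec_time = [12.0, 5.0, 8.0]          # estimated cloudlet time per VM
pheromone = [1.0] * len(exec_time)
for _ in range(50):                   # a few ant iterations
    j = choose_vm(pheromone, exec_time)
    update_pheromone(pheromone, j, exec_time)
print("preferred VM:", max(range(len(exec_time)), key=lambda j: pheromone[j]))
```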


Algorithms ◽  
2022 ◽  
Vol 15 (1) ◽  
pp. 22
Author(s):  
Virginia Niculescu ◽  
Robert Manuel Ştefănică

General crossword grid generation is considered an NP-complete problem, and in theory it could be a good candidate for use in cryptography algorithms. In this article, we propose a new algorithm for generating perfect crossword grids (with no black boxes) that relies on trie data structures, which are very important for reducing the time needed to find solutions and also offer good opportunities for parallelisation. The algorithm uses a special trie representation and is very efficient, and through parallelisation its performance improves to a level where solutions are obtained extremely fast. The experiments were conducted using a dictionary of almost 700,000 words, and solutions were obtained with the parallelised version in execution times on the order of minutes. We demonstrate here that finding a perfect crossword grid can be done faster than previously estimated if tries are used as supporting data structures together with parallelisation. Still, if the size of the dictionary is increased substantially (e.g., considering a set of dictionaries for different languages, not only one), or the problem is generalised to a 3D or multidimensional space, then it could still be investigated for possible use in cryptography.
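A minimal sketch of the supporting data structure: while filling a grid, every partially built word must remain a prefix of some dictionary word, and a trie answers that check in time proportional to the prefix length, which is what makes aggressive pruning of the backtracking search cheap. The grid search itself is omitted here.

```python
class Trie:
    def __init__(self):
        self.children = {}
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def has_prefix(self, prefix):
        # True if some dictionary word starts with this prefix.
        node = self
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

trie = Trie()
for w in ("ale", "ant", "tea", "ten"):
    trie.insert(w)
# Before committing a row, check every partial column is still completable.
print(trie.has_prefix("te"), trie.has_prefix("tx"))   # True False
```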


2020 ◽  
Vol 16 (2) ◽  
pp. 65-81 ◽  
Author(s):  
Shadab Siddiqui ◽  
Manuj Darbari ◽  
Diwakar Yagyasen

Load balancing is a major research discipline in cloud computing, where services are provided to users on a pay-as-you-go basis. Although many algorithms have been proposed for load balancing, performance is still an issue. The authors propose a new hybrid algorithm, H_FAC, to optimize performance in cloud computing. The hybrid technique combines cuckoo search with the firefly algorithm from swarm intelligence; the benefit of hybridization is that the strength of one algorithm overcomes the shortcomings of the other. A blockchain ID-based signature technique is used to ensure the authenticity of the cloud service provider. Experimental results show that H_FAC significantly reduces standard deviation and execution time and improves throughput, thereby optimizing performance. The hybrid algorithm is also compared with other algorithms such as ant colony optimization, artificial bee colony, round robin, FCFS, and modified throttled. This approach helps users obtain resources from authentic resource providers with reduced execution time.
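A minimal sketch of the hybridization idea (not the authors' H_FAC): a firefly-style attraction step moves candidate solutions toward the current best, while a cuckoo-style step abandons the worst fraction and replaces them with fresh random solutions, helping the search escape local optima. The objective here is a placeholder load-imbalance measure; solutions are abstract real vectors.

```python
import random

def objective(x):
    # Placeholder: variance-like imbalance of a hypothetical load vector.
    mean = sum(x) / len(x)
    return sum((v - mean) ** 2 for v in x)

def hybrid_step(pop, beta=0.5, abandon_frac=0.25):
    pop.sort(key=objective)                 # best (brightest) first
    best = pop[0]
    # Firefly move: each solution drifts toward the best one, with noise.
    for x in pop[1:]:
        for d in range(len(x)):
            x[d] += beta * (best[d] - x[d]) + random.gauss(0, 0.1)
    # Cuckoo move: abandon the worst fraction, replace with fresh solutions.
    n_new = int(len(pop) * abandon_frac)
    for i in range(len(pop) - n_new, len(pop)):
        pop[i] = [random.uniform(0, 10) for _ in range(len(pop[i]))]
    return pop

pop = [[random.uniform(0, 10) for _ in range(4)] for _ in range(8)]
for _ in range(100):
    pop = hybrid_step(pop)
print(min(objective(x) for x in pop))
```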

