Job Scheduling in Cloud Computing Using a Modified Harris Hawks Optimization and Simulated Annealing Algorithm

2020 ◽  
Vol 2020 ◽  
pp. 1-17 ◽  
Author(s):  
Ibrahim Attiya ◽  
Mohamed Abd Elaziz ◽  
Shengwu Xiong

In recent years, cloud computing technology has attracted extensive attention from both academia and industry. The popularity of cloud computing originates from its ability to deliver global IT services, such as core infrastructure, platforms, and applications, to cloud customers over the web. Furthermore, it promises on-demand services with new pricing models. However, cloud job scheduling is still NP-complete and has become more complicated due to factors such as resource dynamicity and on-demand consumer application requirements. To address this challenge, this paper presents a modified Harris hawks optimization (HHO) algorithm based on simulated annealing (SA) for scheduling jobs in the cloud environment. In the proposed HHOSA approach, SA is employed as a local search algorithm to improve the rate of convergence and the quality of the solutions generated by the standard HHO algorithm. The performance of the HHOSA method is compared with that of state-of-the-art job scheduling algorithms by implementing them all on the CloudSim toolkit. Both standard and synthetic workloads are employed to analyze the performance of the proposed HHOSA algorithm. The obtained results demonstrate that HHOSA achieves significant reductions in makespan compared with the standard HHO and other existing scheduling algorithms. Moreover, it converges faster as the search space grows, which makes it appropriate for large-scale scheduling problems.
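
To make the HHO + SA hybridization concrete, here is a minimal Python sketch of the SA refinement stage applied to a job-to-VM assignment under a makespan objective. The encoding, the single-job reassignment neighborhood, and all parameters (t0, cooling, steps) are illustrative assumptions, not the authors' CloudSim implementation; in the paper's loop this refinement would be applied to solutions produced by HHO.

```python
import math
import random

def makespan(assignment, job_lengths, vm_speeds):
    """Makespan = completion time of the most loaded VM."""
    loads = [0.0] * len(vm_speeds)
    for job, vm in enumerate(assignment):
        loads[vm] += job_lengths[job] / vm_speeds[vm]
    return max(loads)

def sa_refine(assignment, job_lengths, vm_speeds,
              t0=100.0, cooling=0.95, steps=500):
    """SA local search around a solution produced by HHO: reassign one
    random job per step; accept worse moves with prob. exp(-delta/T)."""
    current = list(assignment)
    cur_cost = makespan(current, job_lengths, vm_speeds)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(steps):
        cand = list(current)
        cand[random.randrange(len(cand))] = random.randrange(len(vm_speeds))
        cost = makespan(cand, job_lengths, vm_speeds)
        if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / t):
            current, cur_cost = cand, cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= cooling
    return best, best_cost
```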

Author(s):  
Meenakshi Garg ◽  
Gaurav Dhiman

In recent years, cloud computing technology has gained a great deal of interest from both academia and industry. Cloud computing's success stems from its ability to offer global IT services, such as core infrastructure, platforms, and applications, to cloud customers over the web. It also promises on-demand services and new pricing models. However, cloud job scheduling is still NP-complete and has become more difficult due to factors such as resource dynamicity and on-demand customer application requirements. To fill this void, this chapter presents the seagull optimization algorithm (SOA) for scheduling jobs in the cloud environment. The efficiency of the SOA approach is compared with that of state-of-the-art job scheduling algorithms by implementing them all in the CloudSim toolkit.


2011 ◽  
Vol 186 ◽  
pp. 636-639 ◽  
Author(s):  
Yan Cao ◽  
Jiang Du

Job-shop scheduling is one of the core research topics of the Manufacturing Execution System (MES). It is significant for improving the utilization of enterprise resources, enhancing product quality, shortening delivery periods, reducing product cost, and strengthening enterprise competitiveness in a market economy. To solve this problem, the Simulated Annealing (SA) algorithm is improved for the large-scale combinatorial problem of job-shop scheduling. To make the SA algorithm more effective on job-shop scheduling problems, the elements of the improved SA algorithm that affect its computation speed and convergence are discussed: the solution encoding scheme, scheduling scheme generation, initial temperature selection, temperature update function, Markov chain length, and termination rule. Finally, the improved SA algorithm is validated on a job-shop scheduling problem with 10 workpieces and 10 machines.
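
The abstract enumerates the SA design choices (encoding, initial temperature, cooling function, Markov chain length, termination rule). The sketch below shows where each of these knobs lives, using the common operation-based encoding for job-shop problems; the swap neighborhood and all parameter values are assumptions for illustration, not the paper's settings.

```python
import math
import random

def decode_makespan(seq, proc):
    """Operation-based encoding: seq lists job ids, and the k-th
    occurrence of job j denotes job j's k-th operation. proc[j][k] is a
    (machine, duration) pair. Returns the semi-active schedule makespan."""
    op_idx = [0] * len(proc)
    job_ready = [0.0] * len(proc)
    mach_ready = {}
    for j in seq:
        machine, dur = proc[j][op_idx[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0.0))
        job_ready[j] = start + dur
        mach_ready[machine] = start + dur
        op_idx[j] += 1
    return max(job_ready)

def anneal(seq, proc, t0=50.0, alpha=0.9, chain_len=100, t_min=0.01):
    """SA over operation sequences: swap two positions (keeps the
    encoding valid), geometric cooling t <- alpha * t, a fixed-length
    Markov chain per temperature, and a minimum-temperature end rule."""
    current, cur_cost = list(seq), decode_makespan(seq, proc)
    best, best_cost = list(current), cur_cost
    t = t0
    while t > t_min:
        for _ in range(chain_len):
            i, k = random.sample(range(len(current)), 2)
            cand = list(current)
            cand[i], cand[k] = cand[k], cand[i]
            cost = decode_makespan(cand, proc)
            if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / t):
                current, cur_cost = cand, cost
                if cur_cost < best_cost:
                    best, best_cost = list(current), cur_cost
        t *= alpha
    return best, best_cost
```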


Author(s):  
Hang Dong ◽  
Boshi Wang ◽  
Bo Qiao ◽  
Wenqian Xing ◽  
Chuan Luo ◽  
...  

Capacity management has always been a great challenge for cloud platforms due to massive, heterogeneous on-demand instances running at different times. To better plan capacity for the whole platform, a class of cloud computing instances has been released to collect computing demands beforehand. With such instances, users may submit jobs to run for a pre-specified uninterrupted duration within a flexible range of time in the future, at a discount compared to normal on-demand instances. Proactively scheduling those pre-collected job requests while considering the capacity status of the platform can greatly help balance computing workloads over time. In this work, we formulate the scheduling problem for these pre-collected job requests under uncertain available capacity as a Prediction + Optimization problem with uncertainty in the constraints, and propose an effective algorithm called Controlling under Uncertain Constraints (CUC), where the predicted capacity guides the optimization of job scheduling and the job scheduling results are leveraged to improve the prediction of capacity through Bayesian optimization. The proposed formulation and solution are broadly applicable to proactive scheduling problems in cloud computing. Our extensive experiments on three public, industrial datasets show that CUC has great potential for supporting high reliability in cloud platforms.
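
As a rough illustration of the Prediction + Optimization framing, the sketch below places pre-collected jobs against a forecast capacity profile. The greedy earliest-fit placement, the job tuple layout, and the omission of the Bayesian-optimization feedback loop (which in CUC refines the capacity prediction from scheduling outcomes) are all simplifying assumptions.

```python
def schedule_jobs(jobs, capacity_forecast):
    """Greedy placement sketch. Each job is (duration, demand,
    (earliest_start, latest_end)); place it at the earliest start inside
    its window where the forecast capacity covers its demand for the
    whole duration. Assumes the forecast spans every job's window."""
    plan = []
    free = list(capacity_forecast)  # remaining capacity per time slot
    # Earliest-deadline-first order: tightest windows get placed first.
    for duration, demand, (earliest, latest) in sorted(jobs, key=lambda j: j[2][1]):
        for start in range(earliest, latest - duration + 1):
            if all(free[t] >= demand for t in range(start, start + duration)):
                for t in range(start, start + duration):
                    free[t] -= demand
                plan.append((start, duration, demand))
                break
    return plan
```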


Author(s):  
FENG JIN ◽  
SHI-JI SONG ◽  
CHENG WU

The beam search algorithm, an adaptation of the branch and bound method, is regarded as one of the effective approaches to combinatorial optimization problems. In this paper, a new beam search algorithm for the large-scale permutation flow shop scheduling problem (FSP) is proposed. A new branching scheme is presented and compared with the traditional branching scheme; with the new scheme, the number of partial schedules in the search tree can be greatly reduced. Partial schedules are evaluated globally using a simple simulated annealing algorithm. Numerical experiments show that the proposed algorithm finds good solutions to large-scale FSPs in a short time.
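
A minimal beam search over job permutations for the permutation flow shop is sketched below. Here partial schedules are ranked by their partial makespan as a simple proxy, whereas the paper evaluates nodes globally with a simulated annealing pass; the beam width is an arbitrary illustrative value.

```python
def flowshop_makespan(perm, p):
    """Makespan of a (partial) permutation: p[j][m] is job j's
    processing time on machine m."""
    c = [0.0] * len(p[0])
    for j in perm:
        c[0] += p[j][0]
        for m in range(1, len(c)):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def beam_search(p, beam_width=10):
    """Level by level, extend each partial permutation by every
    unscheduled job, then keep only the best beam_width children."""
    n = len(p)
    beam = [((), frozenset(range(n)))]
    for _ in range(n):
        children = []
        for perm, remaining in beam:
            for j in remaining:
                child = perm + (j,)
                children.append((flowshop_makespan(child, p), child, remaining - {j}))
        children.sort(key=lambda x: x[0])
        beam = [(perm, rem) for _, perm, rem in children[:beam_width]]
    best = min(beam, key=lambda b: flowshop_makespan(b[0], p))[0]
    return best, flowshop_makespan(best, p)
```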


2018 ◽  
Vol 29 (1) ◽  
pp. 540-553 ◽  
Author(s):  
Iyad Abu Doush ◽  
Mohammed Azmi Al-Betar ◽  
Mohammed A. Awadallah ◽  
Abdelaziz I. Hammouri ◽  
Ra’ed M. Al-Khatib ◽  
...  

The patient admission scheduling (PAS) problem is an optimization problem in which patients are automatically assigned to beds for a specific period of time while satisfying their medical requirements and preferences. In this paper, we present a novel solution to the PAS problem using the harmony search (HS) algorithm. We tailor HS to the PAS problem by distributing patients to beds randomly in the harmony memory (HM) while respecting all hard constraints. The proposed algorithm uses five neighborhood strategies in the pitch adjustment stage, which increases the diversity of the generated solutions by exploring more of the search space. The PAS standard benchmark datasets are used in the evaluation. Initially, a sensitivity analysis of the HS algorithm is conducted to show the effect of its control parameters on HS performance. The proposed method is also compared with nine methods: non-linear great deluge (NLGD), simulated annealing with hyper-heuristic (HH-SA), improved with equal hyper-heuristic (HH-IE), simulated annealing (SA), tabu search (TS), simple random simulated annealing with dynamic heuristic (DHS-SA), simple random improvement with dynamic heuristic (DHS-OI), simple random great deluge with dynamic heuristic (DHS-GD), and biogeography-based optimization (BBO). The proposed HS algorithm produces competitive results when compared with these methods. This shows that the proposed HS is an efficient alternative for the PAS problem and can be applied to many large-scale scheduling problems.
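
For orientation, a harmony search skeleton is sketched below. It is a solution-level simplification of HS (real HS improvises each decision variable separately), and the PAS-specific hard-constraint handling and the five neighborhood strategies are abstracted into caller-supplied functions, so this is an assumption-laden sketch rather than the authors' method.

```python
import random

def harmony_search(feasible_random, neighbors, cost,
                   hms=10, hmcr=0.9, par=0.3, iterations=1000):
    """HS skeleton: with probability hmcr draw from the harmony memory
    and, with probability par, pitch-adjust via one of several
    neighborhood moves (e.g. move-patient, swap-beds); otherwise
    improvise a random feasible solution. The worst stored harmony is
    replaced whenever the new one beats it."""
    memory = [feasible_random() for _ in range(hms)]
    for _ in range(iterations):
        if random.random() < hmcr:
            new = random.choice(memory)
            if random.random() < par:
                new = random.choice(neighbors)(new)
        else:
            new = feasible_random()
        worst = max(memory, key=cost)
        if cost(new) < cost(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=cost)
```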


Author(s):  
Valentin Tablan ◽  
Ian Roberts ◽  
Hamish Cunningham ◽  
Kalina Bontcheva

Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents GATECloud.net, a new cloud-based platform for large-scale NLP research. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security, and fault tolerance. We also include a cost–benefit analysis and usage evaluation.


2021 ◽  
pp. 08-25
Author(s):  
Mustafa El .. ◽  
Aaras Y Y.Kraidi

The crowd-creation space is a manifestation of the development of innovation theory to a certain stage. With the creation of crowd-creation spaces, optimizing their resource allocation has become a research hotspot, and the emergence of cloud computing provides a new way to approach the problem. Common cloud computing resource allocation algorithms include genetic algorithms, simulated annealing algorithms, and ant colony algorithms, but each has obvious shortcomings that limit its use for the optimal resource allocation of crowd-creation space computing. Based on this, this paper proposes an algorithm for optimizing resource allocation for crowd-creation space computing in the cloud environment that combines a genetic algorithm with an ant colony algorithm and further incorporates mechanisms from the simulated annealing algorithm. The result is an improved genetic ant colony algorithm (HGAACO). The feasibility of the algorithm is verified through experiments. The experimental results show that with 20 tasks, the task allocation time is 93 ms for the ant colony algorithm, 90 ms for the genetic ant colony algorithm, and 74 ms for the improved algorithm proposed in this paper, a clear improvement. The proposed algorithm has reference value for solving the resource allocation optimization problem in crowd-creation space computing.
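
A toy version of such a GA/ACO hybrid with an SA-style acceptance rule is sketched below for task-to-VM allocation. The pheromone model, the mutation step, and all parameters are illustrative assumptions, not the HGAACO design from the paper.

```python
import math
import random

def hybrid_allocate(task_times, n_vms, ants=20, iters=100,
                    evap=0.1, t0=10.0, alpha=0.95):
    """Ants build assignments biased by pheromone (ACO), a mutation
    perturbs the iteration champion (GA), and worsening replacements of
    the incumbent are accepted with prob. exp(-delta / T) (SA)."""
    n = len(task_times)
    tau = [[1.0] * n_vms for _ in range(n)]  # pheromone per (task, vm)

    def cost(assign):  # makespan of an assignment
        loads = [0.0] * n_vms
        for t, v in enumerate(assign):
            loads[v] += task_times[t]
        return max(loads)

    def build():  # pheromone-biased construction
        return [random.choices(range(n_vms), weights=tau[t])[0]
                for t in range(n)]

    current = build()
    best = list(current)
    temp = t0
    for _ in range(iters):
        champ = min((build() for _ in range(ants)), key=cost)
        mutant = list(champ)                       # GA-style mutation
        mutant[random.randrange(n)] = random.randrange(n_vms)
        champ = min(champ, mutant, key=cost)
        delta = cost(champ) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = champ                        # SA acceptance
        if cost(current) < cost(best):
            best = list(current)
        for t, v in enumerate(current):            # pheromone update
            for u in range(n_vms):
                tau[t][u] *= (1 - evap)
            tau[t][v] += 1.0 / cost(current)
        temp *= alpha
    return best, cost(best)
```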


2018 ◽  
Author(s):  
Christopher McComb ◽  
Jonathan Cagan ◽  
Kenneth Kotovsky

Although insights uncovered by design cognition are often utilized to develop the methods used by human designers, using such insights to inform computational methodologies also has the potential to improve the performance of design algorithms. This paper uses insights from research on design cognition and design teams to inform a better simulated annealing search algorithm. Simulated annealing has already been established as a model of individual problem solving. This paper introduces the Heterogeneous Simulated Annealing Team (HSAT) algorithm, a multi-agent simulated annealing algorithm. Each agent controls an adaptive annealing schedule, allowing the team to develop heterogeneous search strategies. Such diversity is a natural part of engineering design and boosts performance in other multi-agent algorithms. Further, interaction between agents in HSAT is structured to mimic interaction between members of a design team. Performance is compared against several other simulated annealing algorithms, a random search algorithm, and a gradient-based algorithm. Compared to these algorithms, the team-based HSAT algorithm returns better average results with lower variance.
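
The sketch below captures the multi-agent structure: each agent runs simulated annealing with its own schedule, and after each round the agents adopt the team's best-known solution. Note that HSAT's schedules are adaptive (agents adjust them during search), whereas this sketch only draws heterogeneous fixed schedules, and the interaction pattern is a crude stand-in for the paper's team-inspired one.

```python
import math
import random

def hsat_sketch(init, neighbor, cost, n_agents=4, rounds=50, steps=20):
    """Multi-agent SA: agents anneal independently with heterogeneous
    temperatures and cooling rates, then synchronize on the team best."""
    agents = [{"x": init(),
               "t": random.uniform(5.0, 50.0),      # heterogeneous start temp
               "alpha": random.uniform(0.90, 0.99)}  # heterogeneous cooling
              for _ in range(n_agents)]
    team_best = min((a["x"] for a in agents), key=cost)
    for _ in range(rounds):
        for a in agents:
            for _ in range(steps):
                cand = neighbor(a["x"])
                delta = cost(cand) - cost(a["x"])
                if delta <= 0 or random.random() < math.exp(-delta / a["t"]):
                    a["x"] = cand
            a["t"] *= a["alpha"]
        team_best = min([team_best] + [a["x"] for a in agents], key=cost)
        for a in agents:  # interaction: share the best solution teamwide
            a["x"] = team_best
    return team_best
```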


2018 ◽  
Vol 5 (2) ◽  
pp. 138-147
Author(s):  
Eka Nur Afifah ◽  
Alamsyah Alamsyah ◽  
Endang Sugiharti

Scheduling is one of the important parts of the production planning process. One factor that influences a smooth production process is the raw material supply, and sugarcane supply, as the main raw material for sugar production, is the most important component. The algorithm used in this study is the Simulated Annealing (SA) algorithm. SA's ability to accept a worse or no-better solution within a certain time distinguishes it from other local search algorithms. The aim of this study is to apply the SA algorithm to scheduling the sugarcane harvest so that the daily harvest does not differ much from the factory's mill capacity. The data used were 60 records from sugarcane farms ready to cut, with a mill capacity of 1660 tons. Over a 19-day harvest, the SA algorithm yielded 33043.76 tons versus 27089.47 tons in the factory's actual results. Over several experiments, the average daily harvest scheduled by the SA algorithm was 1651.63 tons versus 1354.47 tons in actual operation, so the SA schedule's daily average differs little from the factory's mill capacity. Truck usage scheduled by the SA algorithm averaged 119 trucks per day versus 156 trucks in actual operation. With the same harvest period, the SA algorithm yielded a larger harvest while using fewer trucks. It can therefore be concluded that the SA algorithm makes sugarcane harvest scheduling more optimal than the method currently applied by the factory.
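
The property the abstract highlights, accepting a worse solution with some probability, is the Metropolis rule at the core of SA. A minimal form, with the temperature parameter assumed to follow whatever cooling schedule the surrounding search uses:

```python
import math
import random

def accept(delta, temperature):
    """Metropolis rule: always accept an improving move (delta <= 0);
    accept a worsening move with probability exp(-delta / T), so worse
    harvest schedules can still be explored while T is high."""
    return delta <= 0 or random.random() < math.exp(-delta / temperature)
```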


2021 ◽  
Vol 28 (2) ◽  
pp. 101-109

Software testing is an important stage in the software development process and is key to ensuring software quality and improving software reliability. Software fault localization is the most important part of software testing. In this paper, the fault localization problem is modeled as a combinatorial optimization problem, using the function call path as the starting point. A heuristic search based on a hybrid genetic simulated annealing algorithm is used to locate software defects. Experimental results show that this fault localization method, which combines a genetic algorithm, a simulated annealing algorithm, and function correlation analysis, performs well on both single-fault and multi-fault localization. It greatly reduces the required test case coverage and the testers' burden, and improves the effectiveness of fault localization.

