Discrete Island-Based Cuckoo Search with Highly Disruptive Polynomial Mutation and Opposition-Based Learning Strategy for Scheduling of Workflow Applications in Cloud Environments

Author(s):  
Noor Aldeen Alawad ◽  
Bilal H. Abed-alguni
2017 ◽  
Vol 29 (4) ◽  
pp. 1103-1123 ◽  
Author(s):  
Qixiang Liao ◽  
Shudao Zhou ◽  
Hanqing Shi ◽  
Weilai Shi

To address the shortcomings of the traditional and improved cuckoo search (CS) algorithms, we propose a dynamic adaptive cuckoo search with crossover operator (DACS-CO) algorithm. Normally, the parameters of the CS algorithm are kept constant or adapted by an empirical equation, which may reduce the algorithm's efficiency. To solve this problem, a feedback control scheme for the algorithm parameters is adopted in cuckoo search; Rechenberg's 1/5 criterion, combined with a learning strategy, is used to evaluate the evolution process. In addition, the standard CS algorithm exchanges no information between individuals. To accelerate the search and overcome premature convergence, a multiple-point random crossover operator is merged into the CS algorithm to exchange information between individuals and improve the diversification and intensification of the population. The performance of the proposed hybrid algorithm is investigated on different nonlinear systems, with the numerical results demonstrating that the method can estimate parameters accurately and efficiently. Finally, we compare the results with the standard CS algorithm, the orthogonal learning cuckoo search algorithm (OLCS), an adaptive and simulated annealing operation with the cuckoo search algorithm (ACS-SA), a genetic algorithm (GA), a particle swarm optimization algorithm (PSO), and a genetic simulated annealing algorithm (GA-SA). Our simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
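The feedback control scheme rests on Rechenberg's 1/5 success criterion: if more than one fifth of recent mutations improve the parent, the mutation step is enlarged, otherwise shrunk. A minimal sketch of that rule (the function name and the shrink factor c = 0.85 are illustrative conventions, not taken from the paper):

```python
def one_fifth_rule(step, success_rate, c=0.85):
    """Rechenberg's 1/5 success criterion: widen the search step when
    mutations succeed too often, narrow it when they succeed too rarely."""
    if success_rate > 0.2:
        return step / c  # successes are cheap -> take bolder steps
    if success_rate < 0.2:
        return step * c  # successes are rare -> search more locally
    return step
```

A DACS-CO-style controller would feed the observed success rate of recent generations into such a rule instead of keeping the CS step size constant.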


Processes ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 1551
Author(s):  
Shuang Wang ◽  
Heming Jia ◽  
Laith Abualigah ◽  
Qingxin Liu ◽  
Rong Zheng

Aquila Optimizer (AO) and Harris Hawks Optimizer (HHO) are recently proposed meta-heuristic optimization algorithms. AO possesses strong global exploration capability but insufficient local exploitation ability, whereas HHO exploits well but its exploration capability is far from satisfactory. Considering the characteristics of these two algorithms, this paper proposes an improved hybrid of AO and HHO, named IHAOHHO, combined with a nonlinear escaping energy parameter and a random opposition-based learning strategy to improve searching performance. Firstly, combining the salient features of AO and HHO retains valuable exploration and exploitation capabilities. Secondly, random opposition-based learning (ROBL) is added in the exploitation phase to improve local-optima avoidance. Finally, the nonlinear escaping energy parameter is utilized to better balance the exploration and exploitation phases of IHAOHHO. These two strategies effectively enhance the exploration and exploitation of the proposed algorithm. To verify the optimization performance, IHAOHHO is comprehensively analyzed on 23 standard benchmark functions. Moreover, the practicability of IHAOHHO is also highlighted by four industrial engineering design problems. Compared with the original AO and HHO and five state-of-the-art algorithms, the results show that IHAOHHO has superior performance and promising prospects.
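Random opposition-based learning replaces the fixed reflection through the interval centre with a randomly scaled one, x̂ = lb + ub − r·x with r ~ U(0, 1) drawn per dimension. A hedged sketch of the commonly published form (the paper's exact variant may differ):

```python
import random

def random_opposition(x, lb, ub):
    """Random opposition-based learning (ROBL): for each dimension,
    produce lb + ub - r * x with a fresh uniform random factor r."""
    return [l + u - random.random() * xi
            for xi, l, u in zip(x, lb, ub)]
```

In the exploitation phase, the opposite point is evaluated alongside the original and the fitter of the two is kept, which helps the search jump out of local optima.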


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yongli Liu ◽  
Zhonghui Wang ◽  
Hao Chao

Traditional fuzzy clustering is sensitive to initialization and ignores differences in importance between features, so its performance is not satisfactory. To improve clustering robustness and accuracy, this paper proposes a feature-weighted fuzzy clustering algorithm based on multistrategy grey wolf optimization. The algorithm not only improves clustering accuracy by assigning each feature a different weight according to its importance, but also more easily reaches the global optimal solution and avoids the impact of initialization by applying multistrategy grey wolf optimization. This multistrategy optimization has three components: a population diversity initialization strategy, a nonlinear adjustment strategy for the convergence factor, and a generalized opposition-based learning strategy. They enhance population diversity, better balance exploration and exploitation, and further strengthen global search capability, respectively. To evaluate clustering performance, UCI datasets are selected for experiments. Experimental results show that the algorithm achieves higher accuracy and stronger robustness.
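Feature weighting enters a fuzzy c-means style objective through the distance term: each feature's squared deviation from the cluster centre is scaled by that feature's learned weight. A minimal sketch (the exponent beta and this exact form follow the common weighted-FCM formulation, which the paper may refine):

```python
def weighted_distance_sq(x, centre, weights, beta=2.0):
    """Feature-weighted squared distance: important features (large
    weight) dominate the clustering distance, noisy ones are damped."""
    return sum((w ** beta) * (xi - ci) ** 2
               for xi, ci, w in zip(x, centre, weights))
```

Setting a feature's weight to zero removes it from the distance entirely, which is how irrelevant features stop distorting cluster assignments.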


Author(s):  
Sen Zhang ◽  
Qifang Luo ◽  
Yongquan Zhou

To overcome the poor population diversity and slow convergence rate of the grey wolf optimizer (GWO), this paper introduces an elite opposition-based learning strategy and the simplex method into GWO and proposes a hybrid grey wolf optimizer using elite opposition (EOGWO). The diversity of the grey wolf population is increased and its exploration ability improved. Experimental results on 13 standard benchmark functions indicate that the proposed algorithm has strong global and local search ability, a quick convergence rate, and high accuracy. EOGWO is also effective and feasible in both low-dimensional and high-dimensional cases. Compared to particle swarm optimization with chaotic search (CLSPSO), the gravitational search algorithm (GSA), the flower pollination algorithm (FPA), cuckoo search (CS), and the bat algorithm (BA), the proposed algorithm shows better optimization performance and robustness.
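Elite opposition-based learning reflects solutions through the dynamic interval spanned by the current elite group rather than the fixed search bounds, so opposites stay near the promising region. A sketch of the commonly used formulation x̂ = k·(da + db) − x (function and variable names are illustrative):

```python
import random

def elite_opposition(population, elites):
    """Elite opposition-based learning (EOBL): reflect every solution
    through the per-dimension bounds [da, db] of the elite individuals."""
    dim = len(population[0])
    da = [min(e[d] for e in elites) for d in range(dim)]
    db = [max(e[d] for e in elites) for d in range(dim)]
    k = random.random()  # shared random reflection coefficient
    return [[k * (a + b) - xi for xi, a, b in zip(x, da, db)]
            for x in population]
```

The opposite population is evaluated together with the original one, and the fittest individuals of the combined pool survive to the next iteration.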


2019 ◽  
Vol 2019 ◽  
pp. 1-24 ◽  
Author(s):  
Tongyi Zheng ◽  
Weili Luo

Lightning attachment procedure optimization (LAPO) is a new global optimization algorithm inspired by the attachment procedure of lightning in nature. However, like other metaheuristic algorithms, LAPO has its own disadvantages. To obtain better global searching ability, this paper proposes an enhanced version of LAPO called ELAPO. A quasi-opposition-based learning strategy is incorporated to improve both exploration and exploitation by considering an estimate and its opposite simultaneously. Moreover, a dimensional search enhancement strategy is proposed to intensify the exploitation ability of the algorithm. Thirty-two benchmark functions, including unimodal, multimodal, and CEC 2014 functions, are used to test the effectiveness of the proposed algorithm. Numerical results indicate that ELAPO provides better or competitive performance compared with the basic LAPO and five other state-of-the-art optimization algorithms.
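Quasi-opposition-based learning samples a point uniformly between the interval centre and the fully opposite point, which on average lies closer to an unknown optimum than the pure opposite does. A hedged sketch (the per-dimension form below is the standard QOBL definition, not necessarily ELAPO's exact update):

```python
import random

def quasi_opposite(x, lb, ub):
    """Quasi-opposition-based learning (QOBL): per dimension, draw a point
    uniformly between the interval centre m and the opposite lb + ub - x."""
    out = []
    for xi, l, u in zip(x, lb, ub):
        m = (l + u) / 2.0
        opp = l + u - xi
        out.append(random.uniform(min(m, opp), max(m, opp)))
    return out
```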


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1316
Author(s):  
Kanhua Yu ◽  
Lili Liu ◽  
Zhe Chen

The slime mould algorithm (SMA) is a new meta-heuristic algorithm that can be widely applied to practical engineering problems. In this paper, an improved slime mould algorithm (ESMA) is proposed to estimate the water demand of Nanchang City. Firstly, an opposition-based learning strategy and an elite chaotic searching strategy are used to improve the SMA. Comparing the ESMA with other intelligent optimization algorithms on 23 benchmark test functions verifies that the ESMA offers fast convergence, high convergence precision, and strong robustness. Secondly, based on historical water consumption data and the local economic structure of Nanchang, four estimation models are established: linear, exponential, logarithmic, and hybrid. Taking the water consumption of Nanchang City from 2004 to 2019 as an example, the estimation models are optimized using the ESMA to determine the model parameters and are then tested. The simulation results show that all four models obtain good prediction accuracy, that the ESMA works best on the hybrid prediction model, and that the prediction accuracy reaches 97.705%. Finally, the water consumption of Nanchang in 2020–2024 is forecasted.
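Of the four estimation models, the exponential one takes the form y = a·e^(b·t). In the paper the parameters are tuned by the ESMA; as a stand-in, a closed-form log-linear least-squares fit recovers a and b for that model (the synthetic data and names below are illustrative, not the Nanchang series):

```python
import math

def fit_exponential(ts, ys):
    """Fit y = a * exp(b * t) by ordinary least squares on log(y),
    a closed-form stand-in for the paper's ESMA-driven fitting."""
    n = len(ts)
    logs = [math.log(y) for y in ys]
    tbar = sum(ts) / n
    lbar = sum(logs) / n
    b = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
         / sum((t - tbar) ** 2 for t in ts))
    a = math.exp(lbar - b * tbar)
    return a, b

# Synthetic demand series growing ~3% per year from a base of 5 units,
# indexed like the 16 years 2004-2019.
ts = list(range(16))
ys = [5.0 * math.exp(0.03 * t) for t in ts]
a, b = fit_exponential(ts, ys)
```

A metaheuristic such as the ESMA becomes necessary for the hybrid model, where the parameters no longer admit a closed-form solution.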


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Liang Liang

In the last two decades, swarm intelligence optimization algorithms have been widely studied and applied to multiobjective optimization problems. In multiobjective optimization, reproduction operations and the balance of convergence and diversity are two crucial issues. The imperialist competitive algorithm (ICA) and the sine cosine algorithm (SCA) are two promising algorithms for single-objective optimization problems, but research on them in multiobjective optimization is scarce. In this paper, a fusion multiobjective empire split algorithm (FMOESA) is proposed. First, an initialization operation based on an opposition-based learning strategy is employed to generate a good initial population. A new offspring reproduction operator combining ICA and SCA is introduced. Besides, a novel power evaluation mechanism is proposed to assess individual performance, taking into account both the convergence and the diversity of the population. Experimental studies on several benchmark problems show that FMOESA is competitive with state-of-the-art algorithms. Given its good performance and nice properties, the proposed algorithm could be an alternative tool for multiobjective optimization problems.
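Opposition-based initialization doubles the initial sample by pairing each random individual with its opposite, then keeps the fitter half. A minimal single-objective sketch (the function names and the minimization convention are illustrative; FMOESA applies the idea in a multiobjective setting):

```python
import random

def obl_initialise(n, lb, ub, fitness):
    """Opposition-based initialization: draw n random solutions, form
    their opposites lb + ub - x, and keep the n best of the 2n points."""
    pop = [[random.uniform(l, u) for l, u in zip(lb, ub)]
           for _ in range(n)]
    opp = [[l + u - xi for xi, l, u in zip(x, lb, ub)] for x in pop]
    return sorted(pop + opp, key=fitness)[:n]
```

Evaluating both a point and its opposite costs one extra fitness call per individual but, on average, starts the search closer to the optimum than purely random sampling.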


2021 ◽  
Author(s):  
Bilal H. Abed-alguni ◽  
David Paul

Abstract The Island Cuckoo Search (iCSPM) algorithm is a new variation of Cuckoo Search (CS) that uses the island model and the Highly Disruptive Polynomial (HDP) mutation for solving a broad range of optimization problems. This article introduces an improved iCSPM algorithm called iCSPM with elite opposition-based learning and multiple mutation methods (iCSPM2). iCSPM2 has three main characteristics. Firstly, it separates candidate solutions into a number of islands (sub-populations) and then divides the islands equally among four improved versions of CS: CS via Lévy flights (CS1) [1], CS with HDP mutation (CS10) [2], CS with Jaya mutation (CSJ), and CS with pitch adjustment mutation (CS11) [2]. Secondly, it uses Elite Opposition-Based Learning (EOBL) to improve its convergence rate and exploration ability. Finally, it uses the Smallest Position Value (SPV) rule with scheduling problems to convert continuous candidate solutions into discrete ones. A set of 15 popular benchmark functions was used to compare the performance of iCSPM2 to that of the original iCSPM algorithm under different experimental scenarios. Results indicate that iCSPM2 exhibits improved performance over iCSPM. However, a sensitivity analysis of iCSPM and iCSPM2 indicates that their convergence behavior is sensitive to the island model parameters. Further, the single-objective IEEE CEC 2014 functions were used to evaluate and compare the performance of iCSPM2 to four well-known swarm optimization algorithms: DGWO [3], L-SHADE [4], MHDA [5], and FWA-DM [6]. The overall experimental and statistical results suggest that iCSPM2 performs better than these four algorithms. iCSPM2's performance was also compared to two powerful discrete optimization algorithms (GAIbH [7] and MASC [8]) using a set of Taillard's benchmark instances for the permutation flow shop scheduling problem. The results indicate that iCSPM2 performs better than GAIbH and MASC. The source code of iCSPM2 is publicly available at https://github.com/bilalh2021/iCSPM2
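The SPV rule maps a continuous position vector to a job permutation by ranking its components in ascending order, which is how continuous metaheuristics like iCSPM2 are applied to flow shop scheduling. A minimal sketch:

```python
def spv(position):
    """Smallest Position Value rule: the job whose coordinate is smallest
    is scheduled first, the next smallest second, and so on."""
    return sorted(range(len(position)), key=lambda j: position[j])

# e.g. the continuous solution [0.7, -1.2, 0.1, 2.5]
# yields the job order [1, 2, 0, 3]
```

Because the mapping ignores the magnitudes and uses only the ordering, the optimizer can keep operating in continuous space while the scheduler always receives a valid permutation.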


2017 ◽  
Vol 22 (24) ◽  
pp. 8353-8378 ◽  
Author(s):  
Mostafa Ghobaei-Arani ◽  
Ali Asghar Rahmanian ◽  
Mohammad Sadegh Aslanpour ◽  
Seyed Ebrahim Dashti
