A Tent Marine Predators Algorithm with Estimation Distribution Algorithm and Gaussian Random Walk for Continuous Optimization Problems

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Chang-Jian Sun ◽  
Fang Gao

The marine predators algorithm (MPA) is a novel population-based optimization method that has been widely used in real-world optimization applications. However, MPA can easily fall into a local optimum because of the lack of population diversity in the late stage of optimization. To overcome this shortcoming, this paper proposes an MPA variant that hybridizes an estimation of distribution algorithm (EDA) and a Gaussian random walk strategy, namely HEGMPA. The initial population is constructed using cubic mapping to enhance the diversity of individuals in the population. Then, the EDA is incorporated into MPA to modify the evolutionary direction using population distribution information, thus improving the convergence performance of the algorithm. In addition, a Gaussian random walk strategy based on the medium solution is used to help the algorithm escape stagnation. The proposed algorithm is verified by simulation on the CEC2014 test suite. Simulation results show that HEGMPA is more competitive than the comparison algorithms, with significant improvements in convergence accuracy and convergence speed.
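
To illustrate the chaotic initialization step described above, here is a minimal sketch of cubic-map population initialization; the map constant rho, the rescaling onto [lb, ub], and the function name cubic_map_init are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def cubic_map_init(pop_size, dim, lb, ub, rho=2.595, seed=0):
    """Hedged sketch of chaotic (cubic-map) population initialization.

    One common form of the cubic map, x_{k+1} = rho * x_k * (1 - x_k**2),
    stays inside (0, 1) for rho ~= 2.595; the chaotic sequence is then
    rescaled to the scalar search bounds [lb, ub].
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, size=dim)   # avoid the fixed point at 0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = rho * x * (1.0 - x**2)        # cubic chaotic iteration
        pop[i] = lb + x * (ub - lb)       # rescale to the search space
    return pop
```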

2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and to increase population diversity. By judging the spatial concentration of the swarm and the fitness variance, particles that have fallen into a local optimum are adaptively adjusted so that the global optimum can still be reached. At the same time, the global factors are adjusted dynamically according to the fitness of the current particle. Four typical function optimization problems are used in the simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
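
As a hedged illustration of the fitness-variance criterion mentioned above (the abstract does not state the authors' exact normalization; the scale factor below is one common choice):

```python
import numpy as np

def fitness_variance(fitness):
    """Sketch of the population fitness variance used to detect premature
    convergence: a small value suggests the swarm has clustered around one
    (possibly local) optimum, which triggers the adaptive adjustment."""
    f = np.asarray(fitness, dtype=float)
    f_avg = f.mean()
    dev = np.abs(f - f_avg)
    scale = max(dev.max(), 1.0)          # normalization factor (assumed)
    return float(np.sum(((f - f_avg) / scale) ** 2))
```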


2020 ◽  
Vol 28 (1) ◽  
pp. 55-85
Author(s):  
Bo Song ◽  
Victor O.K. Li

Infinite population models are important tools for studying the population dynamics of evolutionary algorithms. They describe how the distributions of populations change between consecutive generations. In general, infinite population models are derived from Markov chains by exploiting symmetries between individuals in the population and analyzing the limit as the population size goes to infinity. In this article, we study the theoretical foundations of infinite population models of evolutionary algorithms on continuous optimization problems. First, we show that the convergence proofs in a widely cited study were in fact problematic and incomplete. We further show that the modeling assumption of exchangeability of individuals cannot yield the transition equation. Then, in order to analyze infinite population models, we build an analytical framework based on convergence in distribution of random elements that take values in the metric space of infinite sequences. The framework is concise and mathematically rigorous. It also provides an infrastructure for studying the convergence of the stacking of operators and of iterating the algorithm, which previous studies failed to address. Finally, we use the framework to prove the convergence of infinite population models for the mutation operator and the [Formula: see text]-ary recombination operator. We show that these operators can provide accurate predictions for real population dynamics as the population size goes to infinity, provided that the initial population is independently and identically distributed.
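
As a sketch of the kind of transition equation such infinite population models yield (assuming an i.i.d. parent population with distribution p_t and a mutation kernel m(y|x) applied independently to each individual; this is illustrative notation, not the article's exact formulation):

```latex
p_{t+1}(y) \;=\; \int_{\mathbb{R}^n} m(y \mid x)\, p_t(x)\, \mathrm{d}x
```

That is, in the infinite-population limit the next-generation distribution is predicted to be the push-forward of p_t under the mutation operator.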


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 884
Author(s):  
Petr Stodola ◽  
Karel Michenka ◽  
Jan Nohel ◽  
Marian Rybanský

The dynamic traveling salesman problem (DTSP) falls under the category of combinatorial dynamic optimization problems. The DTSP is composed of a primary TSP sub-problem and a series of TSP iterations; each iteration is created by changing the previous iteration. In this article, a novel hybrid metaheuristic algorithm is proposed for the DTSP. This algorithm combines two metaheuristic principles, specifically ant colony optimization (ACO) and simulated annealing (SA). Moreover, the algorithm exploits knowledge about the dynamic changes by transferring the information gathered in previous iterations in the form of a pheromone matrix. The significance of the hybridization, as well as of the use of knowledge about the dynamic environment, is examined and validated on benchmark instances including small, medium, and large DTSP instances. The results are compared with four other state-of-the-art metaheuristic approaches, with the conclusion that they are significantly outperformed by the proposed algorithm. Furthermore, the behavior of the algorithm is analyzed from various points of view (including, for example, convergence speed to the local optimum, the progress of population diversity during optimization, and time dependence and computational complexity).
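
A minimal sketch of the pheromone-transfer idea between DTSP iterations; the retention factor, the reset rule for changed cities, and the function name are assumptions for illustration, not the paper's actual transfer mechanism.

```python
import numpy as np

def transfer_pheromone(tau_prev, changed_cities, tau_init=1.0, keep=0.9):
    """Hedged sketch: pheromone learned on the previous DTSP iteration is
    partially kept, while entries touching cities that changed are reset
    toward the initial level so the colony can re-explore them."""
    tau = keep * tau_prev + (1.0 - keep) * tau_init
    for c in changed_cities:
        tau[c, :] = tau_init      # reset row and column of a modified city
        tau[:, c] = tau_init
    return tau
```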


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Guoming Du ◽  
Yangbo Chen ◽  
Wei Sun

Complex nonlinear optimization problems are involved in optimal spatial search, such as the location-allocation problems that occur in multidimensional geographic space. Such search problems are generally difficult to solve with traditional methods. The bat algorithm (BA) is an effective method for solving optimization problems. However, the standard BA is easily trapped in local optima. The main cause of premature convergence is the loss of diversity in the population. The niche technique is an effective method to maintain population diversity, to enhance the exploration of new search domains, and to avoid premature convergence. In this paper, a geographic information system- (GIS-) based niche hybrid bat algorithm (NHBA) is proposed for solving optimal spatial search problems. The NHBA is able to avoid premature convergence and obtain globally optimal values. The GIS technique provides robust support for processing a substantial amount of geographical data. A case in Fangcun District, Guangzhou City, China, is used to test the NHBA. The comparative experiments illustrate that the BA, GA, FA, PSO, and NHBA algorithms outperform the brute-force algorithm in terms of computational efficiency, and the optimal solutions are more easily obtained with NHBA than with BA, GA, FA, and PSO. Moreover, the precision of NHBA is higher and its convergence is faster than those of the other algorithms under the same conditions.
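
As one classical realization of a niche technique, here is a hedged fitness-sharing sketch; the NHBA's actual niche operator and parameters are not specified in the abstract.

```python
import numpy as np

def shared_fitness(pop, fitness, sigma_share=0.1, alpha=1.0):
    """Hedged sketch of fitness sharing: the fitness of individuals crowded
    within the same niche radius is penalized, which preserves diversity
    and discourages premature convergence. Maximization is assumed."""
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)          # >= 1 because each point shares with itself
    return fitness / niche_count
```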


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1637
Author(s):  
Mohammad H. Nadimi-Shahraki ◽  
Ali Fatahi ◽  
Hoda Zamani ◽  
Seyedali Mirjalili ◽  
Laith Abualigah

The moth-flame optimization (MFO) algorithm, inspired by the transverse orientation of moths toward light sources, is an effective approach to solving global optimization problems. However, the MFO algorithm suffers from issues such as premature convergence, low population diversity, local optima entrapment, and imbalance between exploration and exploitation. In this study, therefore, an improved moth-flame optimization (I-MFO) algorithm is proposed to cope with the canonical MFO's issues by locating moths trapped in local optima via a memory defined for each moth. The trapped moths tend to escape from the local optima by taking advantage of the adapted wandering around search (AWAS) strategy. The efficiency of the proposed I-MFO is evaluated on the CEC 2018 benchmark functions and compared against other well-known metaheuristic algorithms. Moreover, the obtained results are statistically analyzed by the Friedman test on 30, 50, and 100 dimensions. Finally, the ability of the I-MFO algorithm to find the best optimal solutions for mechanical engineering problems is evaluated on three problems from the latest test suite, CEC 2020. The experimental and statistical results demonstrate that the proposed I-MFO is significantly superior to the contender algorithms and that it successfully remedies the shortcomings of the canonical MFO.
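
A hedged sketch of a per-moth memory used to flag entrapment; the stall threshold, the data layout, and the exact trigger for the AWAS move are assumptions for illustration, not the paper's mechanism.

```python
def update_memory(best_fitness, stall, new_fitness, idx, max_stall=10):
    """Sketch: best_fitness[idx] stores moth idx's best value so far and
    stall[idx] counts generations without improvement (minimization).
    Returns True when the moth is considered trapped in a local optimum,
    i.e. when the randomized wandering-around move would be triggered."""
    if new_fitness < best_fitness[idx]:   # improvement found
        best_fitness[idx] = new_fitness
        stall[idx] = 0
    else:
        stall[idx] += 1
    return stall[idx] >= max_stall
```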


2012 ◽  
Vol 616-618 ◽  
pp. 2064-2067
Author(s):  
Yong Gang Che ◽  
Chun Yu Xiao ◽  
Chao Hai Kang ◽  
Ying Ying Li ◽  
Li Ying Gong

To solve the primary problems of genetic algorithms, such as slow convergence speed, poor local search capability, and easy prematurity, the immune mechanism is introduced into the genetic algorithm; population diversity is thus better maintained, and the phenomena of premature convergence and oscillation are reduced. To further compensate for the defects of the immune genetic algorithm, the Hénon chaotic map is introduced on this basis to make the generated initial population uniformly distributed in the solution space, which reduces data redundancy and improves the quality of evolution. The proposed chaotic immune genetic algorithm is applied to the optimization of complex functions and compared with the genetic algorithm and the immune genetic algorithm; the simulation experiments demonstrate the feasibility and effectiveness of the proposed algorithm.
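
A minimal sketch of Hénon-map initialization, assuming the classical map parameters a = 1.4, b = 0.3 and a min-max rescaling onto scalar search bounds; the paper's exact mapping into the solution space may differ.

```python
import numpy as np

def henon_init(pop_size, dim, lb, ub, a=1.4, b=0.3):
    """Hedged sketch: the classical Henon map
        x_{n+1} = 1 - a*x_n**2 + y_n,   y_{n+1} = b*x_n
    produces a bounded chaotic sequence that is rescaled onto [lb, ub]."""
    x, y = 0.1, 0.1
    seq = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x, y = 1.0 - a * x * x + y, b * x
            seq[i, j] = x
    seq = (seq - seq.min()) / (seq.max() - seq.min() + 1e-12)
    return lb + seq * (ub - lb)
```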


2021 ◽  
Vol 2021 ◽  
pp. 1-23
Author(s):  
Xiaodan Liang ◽  
Dong Wu ◽  
Yang Liu ◽  
Maowei He ◽  
Liling Sun

In the past few decades, metaheuristic algorithms (MAs) have developed tremendously and have been successfully applied in many fields. In recent years, a large number of new MAs have been proposed. The slime mould algorithm (SMA) is a novel swarm-based intelligence optimization algorithm. SMA solves the optimization problem by imitating the foraging and movement behavior of slime mould, and it can effectively obtain a promising global optimal solution. However, it still suffers from shortcomings such as unstable convergence speed, imprecise search accuracy, and an inability to identify local optimal solutions when faced with complicated optimization problems. With the purpose of overcoming these shortcomings of SMA, this paper proposes a multistrategy-enhanced version of SMA called ESMA. The three enhanced strategies are a chaotic initialization strategy (CIS), an orthogonal learning strategy (OLS), and a boundary reset strategy (BRS). The CIS is used to generate a diverse initial population in the early stage of ESMA, which can increase the convergence speed of the algorithm and the quality of the final solution. Then, the OLS is used to discover useful information from the best solutions and offer a potential search direction, which enhances the local search ability and raises the convergence rate. Finally, the BRS is used to correct individual positions, which ensures population diversity and enhances the overall search capabilities of ESMA. The performance of ESMA was validated on the 30 IEEE CEC2014 functions and three IIR model identification problems, and compared with nine other well-regarded, state-of-the-art algorithms. Simulation results and analysis show that ESMA has superior performance and that the three strategies involved in ESMA significantly improve the performance of the basic SMA.
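
As an illustration of the boundary reset strategy (BRS), here is a hedged sketch assuming scalar bounds and a uniform re-draw for out-of-range components; the exact reset rule used by ESMA is not given in the abstract.

```python
import numpy as np

def boundary_reset(pos, lb, ub, rng=None):
    """Sketch: components that leave the feasible region are re-drawn
    uniformly inside [lb, ub] instead of being clipped to the border,
    which helps keep the population diverse."""
    rng = np.random.default_rng() if rng is None else rng
    pos = pos.copy()
    out = (pos < lb) | (pos > ub)         # boolean mask of violated components
    pos[out] = rng.uniform(lb, ub, size=out.sum())
    return pos
```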


2021 ◽  
Vol 11 (24) ◽  
pp. 11996
Author(s):  
Yingtong Lu ◽  
Yaofei Ma ◽  
Jiangyun Wang

The effectiveness of the Wolf Pack Algorithm (WPA) on high-dimensional discrete optimization problems has been verified in previous studies; however, it usually takes too long to obtain the best solution. This paper proposes the Multi-Population Parallel Wolf Pack Algorithm (MPPWPA), in which the size of the wolf population is reduced by dividing the population into multiple sub-populations that are optimized independently at the same time. Using the approximate average division method, the population is divided into multiple equal mass sub-populations, whose better individuals constitute an elite sub-population. Through this elite-mass population distribution, the better individuals are optimized twice, by the elite sub-population and by the mass sub-populations, which can accelerate convergence. In order to maintain population diversity, population pretreatment is proposed: the sub-populations migrate according to a constant migration probability, and the migration of sub-populations is equivalent to re-dividing the merged population. Finally, the proposed algorithm is run on a synchronous parallel system. Through simulation experiments on the task assignment of a UAV swarm in three scenarios whose solution-space dimensions are 8, 30, and 150, the MPPWPA is verified as effective in improving optimization performance.
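
A hedged sketch of the elite-mass division; round-robin dealing of the sorted population is used here to approximate the "approximate average division method", and the elite fraction and function name are illustrative assumptions.

```python
import numpy as np

def divide_population(pop, fitness, n_groups, elite_frac=0.2):
    """Sketch: individuals are sorted by fitness (minimization assumed),
    the best elite_frac form an elite sub-population, and the remainder
    are dealt round-robin into n_groups mass sub-populations of roughly
    equal size and similar average fitness."""
    order = np.argsort(fitness)
    n_elite = max(1, int(elite_frac * len(pop)))
    elite = pop[order[:n_elite]]
    mass = [pop[order[n_elite + k::n_groups]] for k in range(n_groups)]
    return elite, mass
```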


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254239
Author(s):  
Xuan Chen ◽  
Feng Cheng ◽  
Cong Liu ◽  
Long Cheng ◽  
Yin Mao

The Wolf Pack Algorithm (WPA) is a swarm intelligence algorithm that simulates the food-searching process of wolves. It is widely used in various engineering optimization problems due to its global convergence and computational robustness. However, the algorithm has some weaknesses, such as a low convergence speed and a tendency to fall into local optima. To tackle these problems, we introduce an improved approach called OGL-WPA in this work, based on opposition-based learning and a genetic algorithm with Lévy flight. Specifically, in OGL-WPA, the population of wolves is initialized by opposition-based learning to maintain the diversity of the initial population during the global search. Meanwhile, the leader wolf is selected by a genetic algorithm to avoid falling into a local optimum, and the round-up behavior is optimized by Lévy flight to coordinate the global exploration and local exploitation capabilities. We present the detailed design of our algorithm and compare it with several other nature-inspired metaheuristic algorithms using various classical test functions. The experimental results show that the proposed algorithm has better global and local search capability, especially in the presence of multi-peak and high-dimensional functions.
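
A minimal sketch of opposition-based initialization as commonly defined (the opposite of x is lb + ub - x); OGL-WPA's exact OBL variant and selection rule may differ.

```python
import numpy as np

def opposition_based_init(pop_size, dim, lb, ub, fitness_fn, rng=None):
    """Sketch: for each random wolf x its opposite point lb + ub - x is also
    evaluated, and the better half of the combined set forms the initial
    population (minimization assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(lb, ub, size=(pop_size, dim))
    opp = lb + ub - x                               # opposite points
    both = np.vstack([x, opp])
    fit = np.apply_along_axis(fitness_fn, 1, both)
    return both[np.argsort(fit)[:pop_size]]
```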

