Improved Monarch Butterfly Optimization Algorithm Based on Opposition-Based Learning and Random Local Perturbation

Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-20 ◽  
Author(s):  
Lin Sun ◽  
Suisui Chen ◽  
Jiucheng Xu ◽  
Yun Tian

Many optimization problems have become increasingly complex, which has driven research on improving optimization algorithms. The monarch butterfly optimization (MBO) algorithm has proven to be an effective tool for solving various kinds of optimization problems. However, the search strategy of the basic MBO algorithm easily falls into local optima, causing premature convergence and poor performance on many complex optimization problems. To address these issues, this paper develops a novel MBO algorithm based on opposition-based learning (OBL) and random local perturbation (RLP). Firstly, the OBL method is introduced to generate an opposition-based population from the original population. By comparing the opposition-based population with the original population, the better individuals are selected and passed to the next generation; this process efficiently prevents MBO from falling into a local optimum. Secondly, a new RLP operator is defined and introduced to improve the migration operator. This operation shares the information of excellent individuals and helps guide poor individuals toward the optimal solution. A greedy strategy replaces the elitist strategy, which eliminates the elitist parameter of the basic MBO, removes a sorting operation, and enhances computational efficiency. Finally, an OBL- and RLP-based improved MBO (OPMBO) algorithm is developed together with its complexity analysis, experiments are performed on a series of benchmark functions of different dimensions, and OPMBO is applied to clustering optimization on several public data sets. Experimental results demonstrate that the proposed algorithm achieves strong optimization performance compared with several state-of-the-art algorithms in most of the test cases.
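As an illustration of the OBL step described above, the following minimal Python sketch (not the authors' OPMBO code; the bounds and fitness function are placeholders) builds the opposition-based population as lower + upper - x and keeps the better individual of each original/opposite pair.

```python
import numpy as np

def obl_selection(population, fitness_fn, lower, upper):
    """Minimal opposition-based learning sketch (illustrative only):
    form the opposite of each individual as lower + upper - x, then keep
    the better of each original/opposite pair (minimisation assumed)."""
    opposite = lower + upper - population                 # opposition-based population
    fit_orig = np.apply_along_axis(fitness_fn, 1, population)
    fit_opp = np.apply_along_axis(fitness_fn, 1, opposite)
    keep_orig = fit_orig <= fit_opp                       # smaller fitness is better
    return np.where(keep_orig[:, None], population, opposite)

# toy usage on the sphere function with assumed bounds [-5, 5]
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(20, 10))
pop = obl_selection(pop, lambda x: float(np.sum(x * x)), -5.0, 5.0)
```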

2017 ◽  
Vol 2017 ◽  
pp. 1-23 ◽  
Author(s):  
Yuting Lu ◽  
Yongquan Zhou ◽  
Xiuli Wu

In this paper, a novel hybrid lightning search algorithm-simplex method (LSA-SM) is proposed to address the shortcomings of the lightning search algorithm (LSA), namely premature convergence and low computational accuracy, and it is applied to function optimization and constrained engineering design optimization problems. The improvement adds two major optimization strategies. The simplex method (SM) iteratively optimizes the current worst step leaders to prevent the population from searching only at the edge of the space, thus improving the convergence accuracy and rate of the algorithm. Elite opposition-based learning (EOBL) increases the diversity of the population to keep the algorithm from falling into local optima. LSA-SM is tested on 18 benchmark functions and five constrained engineering design problems. The results show that LSA-SM has higher computational accuracy, a faster convergence rate, and stronger stability than other algorithms and can effectively solve real-world constrained nonlinear optimization problems.
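The simplex-method component can be pictured as a worst-point reflection step. The sketch below is an illustrative centroid reflection with a greedy acceptance test (the reflection coefficient alpha is an assumption), not the exact LSA-SM update.

```python
import numpy as np

def reflect_worst(points, fitness_fn, alpha=1.0):
    """Illustrative simplex-style step: reflect the worst point through the
    centroid of the remaining points and keep the reflection only if it
    improves the fitness (minimisation assumed)."""
    fits = np.array([fitness_fn(p) for p in points])
    worst = int(np.argmax(fits))
    centroid = np.mean(np.delete(points, worst, axis=0), axis=0)
    reflected = centroid + alpha * (centroid - points[worst])
    if fitness_fn(reflected) < fits[worst]:
        points[worst] = reflected
    return points
```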


2018 ◽  
Vol 6 (3) ◽  
pp. 354-367 ◽  
Author(s):  
Abdelmonem M. Ibrahim ◽  
Mohamed A. Tawhid

Abstract In this study, we propose a new hybrid algorithm consisting of two meta-heuristic algorithms: Differential Evolution (DE) and Monarch Butterfly Optimization (MBO). This hybrid is called DEMBO. Both meta-heuristic algorithms are typically used to solve nonlinear systems and unconstrained optimization problems. DE is a common metaheuristic algorithm that searches large areas of the candidate space; unfortunately, it often requires a large number of function evaluations to reach the optimal solution. MBO, for its part, copes well with time-consuming fitness functions but tends to become trapped in local minima. To overcome these disadvantages, we combine DE with MBO and propose DEMBO, which obtains optimal solutions for the majority of nonlinear systems as well as unconstrained optimization problems. We apply the proposed algorithm, DEMBO, to nine different unconstrained optimization problems and eight well-known nonlinear systems. Our results, compared with other existing algorithms in the literature, show that DEMBO gives the best results for the majority of the nonlinear systems and unconstrained optimization problems, demonstrating the efficiency of our hybrid algorithm relative to the known algorithms. Highlights: This paper proposes a new hybridization of differential evolution and monarch butterfly optimization. It solves systems of nonlinear equations and unconstrained optimization problems. The efficiency and effectiveness of the algorithm are demonstrated. Experimental results show the superiority of the algorithm over state-of-the-art methods.
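For the DE half of such a hybrid, a standard DE/rand/1/bin generation step looks roughly as follows; this is a generic sketch (the F and CR values are assumptions), not the DEMBO implementation itself.

```python
import numpy as np

def de_rand_1_bin(population, fitness_fn, f=0.5, cr=0.9, rng=None):
    """Generic DE/rand/1/bin step with greedy selection (minimisation)."""
    rng = rng or np.random.default_rng()
    n, d = population.shape
    new_pop = population.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = population[a] + f * (population[b] - population[c])
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True        # ensure at least one gene comes from the mutant
        trial = np.where(cross, mutant, population[i])
        if fitness_fn(trial) <= fitness_fn(population[i]):
            new_pop[i] = trial
    return new_pop
```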


Mathematics ◽  
2019 ◽  
Vol 7 (11) ◽  
pp. 1056 ◽  
Author(s):  
Feng ◽  
Yu ◽  
Wang

As a significant subset of the family of discrete optimization problems, the 0-1 knapsack problem (0-1 KP) has received considerable attention from researchers. Monarch butterfly optimization (MBO) is a recent metaheuristic algorithm inspired by the migration behavior of monarch butterflies; the original MBO was proposed for continuous optimization problems. This paper presents a novel monarch butterfly optimization with a global position updating operator (GMBO), which addresses the 0-1 KP, a well-known NP-complete problem. The global position updating operator is incorporated to help all the monarch butterflies move rapidly towards the global best position. Moreover, a dichotomy (binary) encoding scheme is adopted to represent monarch butterflies for solving the 0-1 KP. In addition, a specific two-stage repair operator is used to repair infeasible solutions and further optimize feasible solutions. Finally, Orthogonal Design (OD) is employed to find the most suitable parameters. Two sets of low-dimensional 0-1 KP instances and three kinds of 15 high-dimensional 0-1 KP instances are used to verify the ability of the proposed GMBO. An extensive comparative study of GMBO with five classical and two state-of-the-art algorithms is carried out. The experimental results clearly indicate that GMBO achieves better solutions on almost all the 0-1 KP instances and significantly outperforms the other algorithms.
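A two-stage repair operator for the 0-1 KP can be sketched as below: first drop low-density items until the weight constraint holds, then greedily re-add items that still fit. This is an illustrative greedy variant under assumed inputs, not necessarily the exact operator used in GMBO.

```python
import numpy as np

def repair_and_optimize(x, weights, profits, capacity):
    """Two-stage repair sketch for a binary knapsack solution x (0/1 array)."""
    x = x.copy()
    density = profits / weights              # profit per unit weight
    order = np.argsort(density)              # ascending density
    # stage 1 (repair): remove the least dense selected items until feasible
    for i in order:
        if weights @ x <= capacity:
            break
        if x[i] == 1:
            x[i] = 0
    # stage 2 (optimize): add the densest unselected items that still fit
    for i in order[::-1]:
        if x[i] == 0 and weights @ x + weights[i] <= capacity:
            x[i] = 1
    return x
```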


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254239 ◽  
Author(s):  
Xuan Chen ◽  
Feng Cheng ◽  
Cong Liu ◽  
Long Cheng ◽  
Yin Mao

The Wolf Pack Algorithm (WPA) is a swarm intelligence algorithm that simulates the food-searching process of wolves. It is widely used in various engineering optimization problems due to its global convergence and computational robustness. However, the algorithm has some weaknesses, such as low convergence speed and a tendency to fall into local optima. To tackle these problems, we introduce an improved approach called OGL-WPA, based on opposition-based learning and a genetic algorithm with Lévy flight. Specifically, in OGL-WPA the population of wolves is initialized by opposition-based learning to maintain the diversity of the initial population during global search. Meanwhile, the leader wolf is selected by a genetic algorithm to avoid falling into local optima, and the round-up behavior is optimized by Lévy flight to balance the global exploration and local exploitation capabilities. We present the detailed design of our algorithm and compare it with other nature-inspired metaheuristic algorithms on various classical test functions. The experimental results show that the proposed algorithm has better global and local search capability, especially on multi-peak and high-dimensional functions.
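The Lévy-flight perturbation used in the round-up phase is typically generated with Mantegna's algorithm; the sketch below shows that step only (beta = 1.5 is an assumed, commonly used exponent), not the full OGL-WPA update.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Generate a Lévy-distributed step of length `dim` via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)    # heavy-tailed step sizes
```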


2018 ◽  
Author(s):  
Cácio L. N. A. Bezerra ◽  
Fábio G. B. C. Costa ◽  
Lucas V. Bazante ◽  
Pedro V. M. Carvalho ◽  
...  

The Flower Pollination Algorithm (FPA) has been widely used to solve optimization problems. However, it suffers from stagnation in local optima, and several approaches have been proposed to deal with this problem. To improve the performance of the FPA, this paper presents a new variant that combines FPA with two variants of Opposition-Based Learning (OBL), namely Quasi OBL (QOBL) and Elite OBL (EOBL). To evaluate this proposal, 10 benchmark functions were used. In addition, the proposed algorithm was compared with the original FPA and three variants: FA–EOBL, SBFPA and DE–FPA. The proposed approach achieved significant results.
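The QOBL component can be illustrated as follows: the quasi-opposite point is sampled uniformly between the domain centre and the opposite point. This is a generic sketch with placeholder bounds, not the paper's implementation.

```python
import numpy as np

def quasi_opposite(x, lower, upper, rng=None):
    """Quasi-opposition (QOBL) sketch: draw a point uniformly between the
    domain centre and the opposite point lower + upper - x."""
    rng = rng or np.random.default_rng()
    centre = (lower + upper) / 2.0
    opposite = lower + upper - x
    lo = np.minimum(centre, opposite)
    hi = np.maximum(centre, opposite)
    return rng.uniform(lo, hi)
```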


2017 ◽  
Vol 2017 ◽  
pp. 1-20 ◽  
Author(s):  
Chiwen Qu ◽  
Shi’an Zhao ◽  
Yanming Fu ◽  
Wei He

Chicken swarm optimization is a new bio-inspired intelligent algorithm that simulates a chicken swarm searching for food in nature. The basic algorithm is prone to falling into local optima and has a slow convergence rate. To address these deficiencies, an improved chicken swarm optimization algorithm based on elite opposition-based learning is proposed. In the cock swarm, random search based on an adaptive t distribution replaces the Gaussian-based search so as to balance the global exploration and local exploitation abilities of the algorithm. In the hen swarm, elite opposition-based learning is introduced to promote population diversity. A dimension-by-dimension greedy search is applied to the best individual of the swarm to improve optimization precision. According to the test results on 18 standard test functions and 2 engineering structure optimization problems, this algorithm achieves better optimization precision and speed than the basic chicken swarm algorithm and other intelligent optimization algorithms.
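The adaptive t-distribution search in the cock swarm can be sketched as a perturbation whose degrees of freedom equal the current iteration index (an assumption commonly made in such variants), so early iterations are heavy-tailed (exploration) and later ones approach Gaussian behaviour (exploitation). This is an illustrative sketch, not the paper's exact update rule.

```python
import numpy as np

def t_perturb(x, iteration, rng=None):
    """Perturb position x with an adaptive Student-t step.
    `iteration` (>= 1) is used as the degrees of freedom, so the
    distribution tightens towards a Gaussian as the search proceeds."""
    rng = rng or np.random.default_rng()
    return x * (1.0 + rng.standard_t(df=iteration, size=x.shape))
```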

