OWMA: An improved self-regulatory woodpecker mating algorithm using opposition-based learning and allocation of local memory for solving optimization problems

2021
Vol 40 (1)
pp. 919-946
Author(s):
Morteza Karimzadeh Parizi
Farshid Keynia
Amid Khatibi Bardsiri

The success of metaheuristic algorithms depends on an efficient balance between the exploration and exploitation phases. Any optimization algorithm requires a combination of diverse exploration and proper exploitation to avoid local optima. This paper proposes a new improved version of the Woodpecker Mating Algorithm (WMA) based on opposition-based learning, known as OWMA, which aims to develop the exploration and exploitation capacities and establish a simultaneous balance between these two phases. The improvement consists of three major mechanisms. The first is a new Distance Opposition-based Learning (DOBL) mechanism for improving exploration, diversity, and convergence. The second is the allocation of a local memory of personal experiences to each search agent for developing the exploitation capacity. The third is a self-regulatory and dynamic method for setting the Hα parameter to improve the performance of the Running Away (RA) function. The ability of the proposed algorithm to solve 23 benchmark mathematical functions was evaluated and compared to that of a series of the latest and most popular metaheuristic methods from the research literature. The proposed algorithm is also used as a Multi-Layer Perceptron (MLP) neural network trainer to solve classification problems on four biomedical datasets and three function approximation datasets. In addition, the OWMA algorithm was evaluated on five real-world constrained optimization problems. The simulation results demonstrate the superior and promising performance of the proposed algorithm in the majority of evaluations and in solving very complicated optimization problems.
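
A minimal Python sketch of classical opposition-based learning, the idea that the DOBL mechanism builds on; the paper's distance-based variant is not reproduced here, and the sphere objective and bounds are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    # Simple test objective (assumed for illustration): sum of squares.
    return np.sum(x ** 2)

def obl_step(population, lb, ub, objective):
    """Classical OBL: for each candidate x, form the opposite point
    lb + ub - x and keep whichever of the pair has the lower objective."""
    opposite = lb + ub - population
    new_pop = []
    for x, x_opp in zip(population, opposite):
        new_pop.append(x if objective(x) <= objective(x_opp) else x_opp)
    return np.array(new_pop)

# Usage: apply one OBL step to a random population in [-10, 10]^5.
rng = np.random.default_rng(0)
lb, ub = -10.0, 10.0
pop = rng.uniform(lb, ub, size=(20, 5))
pop = obl_step(pop, lb, ub, sphere)
```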

Processes
2021
Vol 9 (9)
pp. 1551
Author(s):
Shuang Wang
Heming Jia
Laith Abualigah
Qingxin Liu
Rong Zheng

Aquila Optimizer (AO) and Harris Hawks Optimizer (HHO) are recently proposed meta-heuristic optimization algorithms. AO possesses strong global exploration capability but insufficient local exploitation ability, whereas the exploitation phase of HHO is quite good but its exploration capability is far from satisfactory. Considering the characteristics of these two algorithms, this paper proposes an improved hybrid of AO and HHO, named IHAOHHO, which combines a nonlinear escaping energy parameter with a random opposition-based learning strategy to improve searching performance. Firstly, combining the salient features of AO and HHO retains valuable exploration and exploitation capabilities. Secondly, random opposition-based learning (ROBL) is added in the exploitation phase to improve local optima avoidance. Finally, the nonlinear escaping energy parameter is utilized to better balance the exploration and exploitation phases of IHAOHHO. These two strategies effectively enhance the exploration and exploitation of the proposed algorithm. To verify its optimization performance, IHAOHHO is comprehensively analyzed on 23 standard benchmark functions. Moreover, the practicability of IHAOHHO is also highlighted by four industrial engineering design problems. Compared with the original AO and HHO and five state-of-the-art algorithms, the results show that IHAOHHO has superior performance and promising prospects.
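
A brief sketch of the two add-ons described above. The exact nonlinear decay schedule and the ROBL formula used in IHAOHHO are not given here, so the forms below are illustrative assumptions rather than the paper's equations.

```python
import numpy as np

def escaping_energy(t, T, rng):
    """HHO-style escaping energy; the (1 - (t/T)^2) decay is an assumed
    nonlinear variant of the original linear 2*E0*(1 - t/T) schedule."""
    E0 = 2.0 * rng.random() - 1.0          # initial energy in [-1, 1]
    return 2.0 * E0 * (1.0 - (t / T) ** 2)

def random_opposition(x, lb, ub, rng):
    """Random opposition-based learning: reflect x with a random factor."""
    return lb + ub - rng.random(x.shape) * x

# Usage: energy value at iteration 30 of 100.
rng = np.random.default_rng(1)
E = escaping_energy(t=30, T=100, rng=rng)
```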


2021
Vol 18 (6)
pp. 7076-7109
Author(s):
Shuang Wang
Heming Jia
Qingxin Liu
Rong Zheng
...

This paper introduces an improved hybrid Aquila Optimizer (AO) and Harris Hawks Optimization (HHO) algorithm, namely IHAOHHO, to enhance searching performance on global optimization problems. In IHAOHHO, the valuable exploration and exploitation capabilities of AO and HHO are retained first, and then representative-based hunting (RH) and opposition-based learning (OBL) strategies are added in the exploration and exploitation phases to effectively improve the diversity of the search space and the local optima avoidance capability of the algorithm, respectively. To verify the optimization performance and practicability, the proposed algorithm is comprehensively analyzed on standard and CEC2017 benchmark functions and three engineering design problems. The experimental results show that the proposed IHAOHHO has superior global search performance and faster convergence speed compared to the basic AO and HHO and selected state-of-the-art meta-heuristic algorithms.
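
A schematic sketch, not the paper's pseudocode, of how such a hybrid loop can be organized: an AO-like move while exploring, an HHO-like move while exploiting, and OBL applied to each candidate. All update rules and the 50/50 phase split are simplified placeholders.

```python
import numpy as np

def hybrid_step(pop, best, t, T, lb, ub, objective, rng):
    """One iteration of an assumed AO/HHO-style hybrid with OBL."""
    new_pop = np.empty_like(pop)
    for i, x in enumerate(pop):
        if t / T < 0.5:                                  # exploration phase
            # AO-like move: jump toward a blend of the best and the mean
            cand = best + rng.normal(0, 1, x.shape) * (pop.mean(0) - x)
        else:                                            # exploitation phase
            # HHO-like move: local perturbation around the best solution
            cand = best - rng.random(x.shape) * np.abs(best - x)
        cand = np.clip(cand, lb, ub)
        # opposition-based learning: also evaluate the reflected point
        opp = np.clip(lb + ub - cand, lb, ub)
        new_pop[i] = cand if objective(cand) <= objective(opp) else opp
    return new_pop
```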


Entropy
2021
Vol 23 (12)
pp. 1637
Author(s):
Mohammad H. Nadimi-Shahraki
Ali Fatahi
Hoda Zamani
Seyedali Mirjalili
Laith Abualigah

The moth-flame optimization (MFO) algorithm, inspired by the transverse orientation of moths toward a light source, is an effective approach to solving global optimization problems. However, the MFO algorithm suffers from issues such as premature convergence, low population diversity, local optima entrapment, and an imbalance between exploration and exploitation. In this study, therefore, an improved moth-flame optimization (I-MFO) algorithm is proposed to cope with the canonical MFO's issues by locating moths trapped in local optima via a memory defined for each moth. The trapped moths then escape from the local optima by taking advantage of the adapted wandering around search (AWAS) strategy. The efficiency of the proposed I-MFO is evaluated on the CEC 2018 benchmark functions and compared against other well-known metaheuristic algorithms. Moreover, the obtained results are statistically analyzed by the Friedman test on 30, 50, and 100 dimensions. Finally, the ability of the I-MFO algorithm to find the best optimal solutions for mechanical engineering problems is evaluated with three problems from the latest test suite, CEC 2020. The experimental and statistical results demonstrate that the proposed I-MFO is significantly superior to the contender algorithms and successfully overcomes the shortcomings of the canonical MFO.
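
For context, a minimal sketch of the canonical MFO spiral update that I-MFO builds on; the AWAS escape strategy and per-moth memory from the paper are not reproduced here, and the choice b = 1 and t drawn from [-1, 1] follow the basic MFO description.

```python
import numpy as np

def mfo_spiral_update(moth, flame, b=1.0, rng=None):
    """Move a moth toward its flame along a logarithmic spiral:
    S = D * exp(b*t) * cos(2*pi*t) + flame, with D = |flame - moth|."""
    if rng is None:
        rng = np.random.default_rng()
    t = rng.uniform(-1.0, 1.0, size=moth.shape)   # spiral parameter
    D = np.abs(flame - moth)                      # distance to the flame
    return D * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flame

# Usage: one spiral step of a moth toward its assigned flame.
moth = np.array([2.0, -1.0, 0.5])
flame = np.array([0.0, 0.0, 0.0])
new_moth = mfo_spiral_update(moth, flame)
```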


Author(s):  
Ali Kaveh
Majid Ilchi Ghazaan
Arash Asadi

The Water Strider Algorithm (WSA) is a new metaheuristic method inspired by the life cycle of water striders. This study attempts to enhance the performance of the WSA in order to improve solution accuracy, reliability, and convergence speed. The new method, called the improved water strider algorithm (IWSA), is tested on benchmark mathematical functions and some structural optimization problems. In the proposed algorithm, the standard WSA is augmented with an opposition-based learning method for the initial population as well as a mutation technique borrowed from the genetic algorithm. By employing Generalized Space Transformation Search (GSTS) as the opposition-based learning method, more promising regions of the search space are explored, so the precision of the results is enhanced. Adding a mutation to the WSA helps the method escape from local optima, which is essential for engineering design problems as well as complex mathematical optimization problems. First, the viability of IWSA is demonstrated by optimizing benchmark mathematical functions, and then it is applied to three skeletal structures to investigate its efficiency in structural design problems. IWSA is compared to the standard WSA and some other state-of-the-art metaheuristic algorithms. The results show the competence and robustness of IWSA as an optimization algorithm for mathematical functions as well as in the field of structural optimization.
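
A sketch of the two ingredients added to WSA: opposition-based initialization and a GA-style mutation. GSTS itself is more general than the plain opposition used below, so this is an illustrative simplification under assumed bounds and mutation rate.

```python
import numpy as np

def opposition_init(n, dim, lb, ub, objective, rng):
    """Generate a random population plus its opposite and keep the n best."""
    pop = rng.uniform(lb, ub, size=(n, dim))
    both = np.vstack([pop, lb + ub - pop])
    fitness = np.apply_along_axis(objective, 1, both)
    return both[np.argsort(fitness)[:n]]

def ga_mutation(x, lb, ub, rate, rng):
    """Reset each gene to a random value in [lb, ub] with probability `rate`."""
    mask = rng.random(x.shape) < rate
    return np.where(mask, rng.uniform(lb, ub, size=x.shape), x)

# Usage: opposition-seeded population, then mutate one member.
rng = np.random.default_rng(2)
pop = opposition_init(20, 5, -5.0, 5.0, lambda v: np.sum(v ** 2), rng)
mutant = ga_mutation(pop[0], -5.0, 5.0, rate=0.1, rng=rng)
```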


2016
Vol 2016
pp. 1-13
Author(s):
Xian Shan
Kang Liu
Pei-Liang Sun

The Bat Algorithm (BA) is a swarm intelligence algorithm that has been intensively applied to solve academic and real-life optimization problems. However, due to a poor balance between exploration and exploitation, BA sometimes fails to find the global optimum and is easily trapped in local optima. In order to overcome this premature convergence and improve the local searching ability of the Bat Algorithm, we propose an improved BA called OBMLBA. In the proposed algorithm, a modified search equation that draws on more information from the search experience is introduced to generate candidate solutions, and a Lévy flight random walk is incorporated into BA to avoid entrapment in local optima. Furthermore, the concept of opposition-based learning (OBL) is embedded in BA to enhance diversity and convergence capability. To evaluate the performance of the proposed approach, 16 benchmark functions have been employed. The experimental results demonstrate the effectiveness and efficiency of OBMLBA for global optimization problems. Comparisons with other BA variants and state-of-the-art algorithms show that the proposed approach significantly improves the performance of BA. The performance of the proposed algorithm on large-scale and real-world optimization problems is not discussed in this paper and will be studied in future work.
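
A minimal sketch of a Lévy-flight random walk generated with Mantegna's algorithm, the kind of step OBMLBA incorporates to escape local optima; the step scale 0.01 and beta = 1.5 are common defaults, not values taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Lévy-distributed step using Mantegna's algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Usage: perturb a solution with a small Lévy-distributed jump.
x = np.zeros(5)
x_new = x + 0.01 * levy_step(len(x))
```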


2019
Vol 6 (3)
pp. 243-259
Author(s):
Seyed Mostafa Bozorgi
Samaneh Yazdani

The whale optimization algorithm (WOA) is a new bio-inspired meta-heuristic algorithm based on the social hunting behavior of humpback whales. WOA suffers from premature convergence, which causes it to become trapped in local optima. In order to overcome this limitation of WOA, in this paper WOA is hybridized with differential evolution (DE), which has good exploration ability for function optimization problems. The proposed method is named Improved WOA (IWOA). It combines the exploitation of WOA with the exploration of DE and therefore provides promising candidate solutions. In addition, IWOA+, an extended form of IWOA, is presented in this paper. IWOA+ utilizes re-initialization and an adaptive parameter that controls the whole search process to obtain better solutions. IWOA and IWOA+ are validated on a set of 25 benchmark functions and compared with PSO, DE, BBO, DE/BBO, PSO/GSA, SCA, MFO, and WOA. Furthermore, the effects of dimensionality and population size on the performance of the proposed algorithms are studied. The results demonstrate that IWOA and IWOA+ outperform the other algorithms in terms of the quality of the final solution and convergence rate. Highlights: The exploration ability of WOA is improved by hybridizing it with DE's mutation. A new adaptive strategy is utilized for balancing the exploration and exploitation abilities. Re-initialization is used to increase the diversity of the population. Two improvements are presented for WOA through balancing its exploration and exploitation. The results show that the proposed algorithms can improve the performance of WOA significantly.
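
A minimal sketch of the DE/rand/1 mutation that IWOA borrows from differential evolution to boost exploration; F = 0.5 is a common default, and the specific DE variant used in the paper is an assumption here.

```python
import numpy as np

def de_rand_1_mutation(pop, i, F=0.5, rng=None):
    """Build a mutant for individual i from three other random members:
    v = x_r1 + F * (x_r2 - x_r3)."""
    if rng is None:
        rng = np.random.default_rng()
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

# Usage: mutant vector for the first member of a random population.
rng = np.random.default_rng(3)
pop = rng.uniform(-5.0, 5.0, size=(10, 4))
mutant = de_rand_1_mutation(pop, 0, rng=rng)
```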


Author(s):  
Ross H. Miller
Brian R. Umberger
Graham E. Caldwell

Neuromuscular control of complex, multi-segment movements is often investigated with a modeling and computer simulation approach. Because a given movement can be completed using many different coordination choices, an optimization framework is often used to determine the muscle activity patterns that best accomplish the movement task. This framework is well suited to simulating movements whose performance objectives are easily specified by mathematical functions, such as vertical jumping for maximum height. However, these motion simulations tend to be difficult optimization problems because they contain many local optima that add complexity to the solution domain of the objective function [1–3].


Author(s):  
Prachi Agrawal
Talari Ganesh
Ali Wagdy Mohamed

This article proposes a novel binary version of the recently developed Gaining-Sharing Knowledge-based optimization algorithm (GSK) to solve binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The binary version of GSK, named the novel binary Gaining-Sharing Knowledge-based optimization algorithm (NBGSK), depends mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively to solve problems in binary space. Moreover, to enhance the performance of NBGSK and prevent solutions from becoming trapped in local optima, NBGSK with population size reduction (PR-NBGSK) is introduced. It decreases the population size gradually with a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances with small and large dimensions, which shows that NBGSK and PR-NBGSK are more efficient and effective in terms of convergence, robustness, and accuracy.
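
A minimal sketch of a linear population-size-reduction schedule of the kind PR-NBGSK describes; the endpoints n_init and n_min are assumed values for illustration.

```python
def linear_population_size(t, T, n_init, n_min):
    """Shrink the population linearly from n_init at t = 0 to n_min at t = T."""
    return round(n_init + (n_min - n_init) * t / T)

# Usage: population sizes at evenly spaced checkpoints of a 100-iteration run.
sizes = [linear_population_size(t, 100, n_init=100, n_min=12)
         for t in range(0, 101, 10)]
```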


2021
Author(s):
Moritz Mühlenthaler
Alexander Raß
Manuel Schmitt
Rolf Wanka

Meta-heuristics are powerful tools for solving optimization problems whose structural properties are unknown or cannot be exploited algorithmically. We propose such a meta-heuristic for a large class of optimization problems over discrete domains based on the particle swarm optimization (PSO) paradigm. We provide a comprehensive formal analysis of the performance of this algorithm on certain "easy" reference problems in a black-box setting, namely the sorting problem and the problem OneMax. In our analysis we use a Markov model of the proposed algorithm to obtain upper and lower bounds on its expected optimization time. Our bounds are essentially tight with respect to the Markov model. We show that for a suitable choice of algorithm parameters the expected optimization time is comparable to that of known algorithms and, furthermore, that for other parameter regimes the algorithm behaves less greedily and more exploratively, which can be desirable in practice in order to escape local optima. Our analysis provides precise insight into the tradeoff between optimization time and exploration. To obtain our results we introduce the notion of indistinguishability of states of a Markov chain and provide bounds on the solution of a recurrence equation with non-constant coefficients by integration.
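
For readers unfamiliar with it, OneMax (one of the two reference problems above) simply counts the ones in a bit string; the sketch below states the objective, not the analyzed PSO variant.

```python
def onemax(bits):
    # Fitness of a bit string is the number of ones; the optimum is all ones.
    return sum(bits)

# Example: a discrete optimizer maximizing onemax seeks the all-ones string.
assert onemax([1, 0, 1, 1]) == 3
```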

