An Improved Moth-Flame Optimization Algorithm with Adaptation Mechanism to Solve Numerical and Mechanical Engineering Problems

Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1637
Author(s):  
Mohammad H. Nadimi-Shahraki ◽  
Ali Fatahi ◽  
Hoda Zamani ◽  
Seyedali Mirjalili ◽  
Laith Abualigah

The moth-flame optimization (MFO) algorithm, inspired by the transverse orientation of moths toward a light source, is an effective approach to solving global optimization problems. However, the MFO algorithm suffers from issues such as premature convergence, low population diversity, local optima entrapment, and an imbalance between exploration and exploitation. In this study, therefore, an improved moth-flame optimization (I-MFO) algorithm is proposed to cope with the canonical MFO's issues by locating moths trapped in local optima via a memory defined for each moth. The trapped moths tend to escape from the local optima by taking advantage of the adapted wandering around search (AWAS) strategy. The efficiency of the proposed I-MFO is evaluated on the CEC 2018 benchmark functions and compared against other well-known metaheuristic algorithms. Moreover, the obtained results are statistically analyzed by the Friedman test on 30, 50, and 100 dimensions. Finally, the ability of the I-MFO algorithm to find the best optimal solutions for mechanical engineering problems is evaluated with three problems from the latest test suite, CEC 2020. The experimental and statistical results demonstrate that the proposed I-MFO is significantly superior to the contender algorithms and successfully remedies the shortcomings of the canonical MFO.
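For context, the canonical MFO position update that I-MFO builds on is the well-known logarithmic-spiral move of a moth around its assigned flame. The sketch below (Python/numpy, a minimal illustration only; the memory and AWAS components of I-MFO are paper-specific and not reproduced) shows that baseline step; the function name and parameters are illustrative.

```python
import numpy as np

def mfo_spiral_step(moth, flame, iteration, max_iter, b=1.0):
    """One canonical MFO update: a moth spirals around its flame.

    moth, flame : 1-D numpy arrays of equal length (current positions)
    b           : constant defining the shape of the logarithmic spiral
    """
    # The convergence constant r decreases linearly from -1 to -2 over the run,
    # shrinking the spiral and shifting the search from exploration to exploitation.
    r = -1.0 - iteration / max_iter
    t = (r - 1.0) * np.random.rand(moth.size) + 1.0   # t drawn from [r, 1]
    distance = np.abs(flame - moth)                   # D_i = |F_j - M_i|
    return distance * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flame
```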

2022 ◽  
Vol 19 (3) ◽  
pp. 2240-2285
Author(s):  
Shihong Yin ◽  
Qifang Luo ◽  
Yanlian Du ◽  
Yongquan Zhou ◽  
...  

The slime mould algorithm (SMA) is a recently proposed metaheuristic algorithm inspired by the oscillations of slime mould. Like other algorithms, SMA has some disadvantages, such as an insufficient balance between exploration and exploitation and a tendency to fall into local optima. In this paper, an improved SMA based on a dominant swarm with adaptive t-distribution mutation (DTSMA) is proposed. In DTSMA, the dominant swarm is used to improve the SMA's convergence speed, and the adaptive t-distribution mutation is used to balance the exploration and exploitation abilities. In addition, a new exploitation mechanism is hybridized to increase population diversity. The performance of DTSMA is verified on the CEC2019 functions and eight engineering design problems. The results show that DTSMA performs best on the CEC2019 functions; for the engineering problems, DTSMA obtains better results than SMA and many algorithms in the literature when the constraints are satisfied. Furthermore, DTSMA is used to solve the inverse kinematics problem for a 7-DOF robot manipulator. The overall results show that DTSMA has strong optimization ability. Therefore, DTSMA is a promising metaheuristic optimizer for global optimization problems.
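The abstract describes the adaptive t-distribution mutation only at a high level. A common formulation in the literature, sketched below under that assumption (not necessarily the paper's exact operator), perturbs a solution with Student-t noise whose degrees of freedom grow with the iteration count, so early mutations are heavy-tailed and exploratory while late mutations approach Gaussian, local refinement.

```python
import numpy as np

def t_distribution_mutation(x, iteration):
    """Adaptive t-distribution mutation (generic form, assumed).

    Degrees of freedom = current iteration: df = 1 is Cauchy-like (wide jumps),
    while a large df approaches a standard normal (small, local perturbations).
    """
    df = max(iteration, 1)
    return x + x * np.random.standard_t(df, size=x.shape)
```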


2021 ◽  
Vol 2021 ◽  
pp. 1-22
Author(s):  
An-Di Tang ◽  
Shang-Qin Tang ◽  
Tong Han ◽  
Huan Zhou ◽  
Lei Xie

The slime mould algorithm (SMA) is a population-based metaheuristic algorithm inspired by the phenomenon of slime mould oscillation. The SMA is competitive compared to other algorithms but still suffers from an imbalance between exploitation and exploration and easily falls into local optima. To address these shortcomings, an improved variant of SMA named MSMA is proposed in this paper. Firstly, a chaotic opposition-based learning strategy is used to enhance population diversity. Secondly, two adaptive parameter control strategies are proposed to balance exploitation and exploration. Finally, a spiral search strategy is used to help SMA escape from local optima. The superiority of MSMA is verified on 13 multidimensional test functions and 10 fixed-dimension test functions. In addition, two engineering optimization problems are used to verify the potential of MSMA to solve real-world optimization problems. The simulation results show that the proposed MSMA outperforms other comparative algorithms in terms of convergence accuracy, convergence speed, and stability.
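As a rough illustration of the chaotic opposition-based learning idea (the exact chaotic map and pairing rule used in MSMA are not given in the abstract and are assumptions here), the sketch below seeds candidates with a logistic chaotic sequence and keeps the better of each point and its opposite point.

```python
import numpy as np

def chaotic_opposition_init(pop_size, dim, lb, ub, objective):
    """Chaotic opposition-based initialization (generic sketch, assumed logistic map)."""
    # Logistic chaotic sequence in (0, 1), seeded away from the map's fixed points.
    seq = np.empty(pop_size * dim)
    c = 0.7
    for i in range(seq.size):
        c = 4.0 * c * (1.0 - c)
        seq[i] = c
    pop = lb + seq.reshape(pop_size, dim) * (ub - lb)   # chaotic candidates
    opp = lb + ub - pop                                  # opposite candidates
    both = np.vstack((pop, opp))
    fitness = np.apply_along_axis(objective, 1, both)
    return both[np.argsort(fitness)[:pop_size]]          # keep the fitter half

# Example use with a sphere objective:
# population = chaotic_opposition_init(30, 10, -100.0, 100.0, lambda x: np.sum(x**2))
```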


Processes ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 2276
Author(s):  
Mohammad H. Nadimi-Shahraki ◽  
Ali Fatahi ◽  
Hoda Zamani ◽  
Seyedali Mirjalili ◽  
Laith Abualigah ◽  
...  

Moth–flame optimization (MFO) is a prominent swarm intelligence algorithm that demonstrates sufficient efficiency in tackling various optimization tasks. However, MFO cannot provide competitive results for complex optimization problems. The algorithm sinks into local optima due to rapidly dropping population diversity and poor exploration. Hence, in this article, a migration-based moth–flame optimization (M-MFO) algorithm is proposed to address these issues. In M-MFO, the main focus is on improving the position of unlucky moths by migrating them stochastically in the early iterations using a random migration (RM) operator, maintaining solution diversification by storing new qualified solutions separately in a guiding archive, and, finally, exploiting around the positions saved in the guiding archive using a guided migration (GM) operator. The dimensionally aware switch between these two operators guarantees the convergence of the population toward promising zones. The proposed M-MFO was evaluated on the CEC 2018 benchmark suite in dimension 30 and compared against seven well-known variants of MFO, including LMFO, WCMFO, CMFO, CLSGMFO, LGCMFO, SMFO, and ODSFMFO. Then, the top four latest high-performing variants were considered for the main experiments with different dimensions: 30, 50, and 100. The experimental evaluations proved that M-MFO provides sufficient exploration ability and maintains population diversity by employing the migration strategy and the guiding archive. In addition, the statistical results analyzed by the Friedman test proved that M-MFO demonstrates competitive performance compared to the contender algorithms used in the experiments.
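The RM and GM operators are specific to M-MFO and are only described behaviorally above; as a loose illustration of that behavior (not the paper's operators), the sketch below relocates stagnating moths either to random positions (RM-like) or to perturbed copies of archived solutions (GM-like). All names, thresholds, and the 50/50 switch are assumptions.

```python
import numpy as np

def migrate(positions, stagnation, archive, lb, ub, limit=5, sigma=0.1):
    """Loose illustration of migration for stagnating moths (assumed mechanics).

    positions  : (n, d) moth positions
    stagnation : (n,) iterations since each moth last improved
    archive    : (m, d) archive of previously found good solutions
    """
    n, d = positions.shape
    for i in np.where(stagnation >= limit)[0]:
        if archive.shape[0] == 0 or np.random.rand() < 0.5:
            # RM-like move: restart the moth at a random point in the box.
            positions[i] = lb + np.random.rand(d) * (ub - lb)
        else:
            # GM-like move: exploit around a randomly chosen archived solution.
            guide = archive[np.random.randint(archive.shape[0])]
            positions[i] = np.clip(guide + sigma * (ub - lb) * np.random.randn(d), lb, ub)
    return positions
```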


2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and increase population diversity. Particles of groups that have fallen into a local optimum are adaptively adjusted in order to reach the global optimum, by judging the groups' spatial concentration and fitness variance. At the same time, the global factors are adjusted dynamically according to the fitness of the current particle. Four typical function optimization problems are used in the simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
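The fitness-variance criterion for detecting a swarm stuck in a local optimum is a standard diagnostic in the PSO literature; a generic version is sketched below (the normalization rule and threshold are common choices, not taken from this paper).

```python
import numpy as np

def is_stagnant(fitness, threshold=1e-3):
    """Detect premature convergence from the swarm's normalized fitness variance.

    A small variance means the particles' fitness values have clustered,
    which usually indicates the swarm has collapsed into a (possibly local) optimum.
    """
    f_avg = np.mean(fitness)
    spread = np.max(np.abs(fitness - f_avg))
    f_norm = spread if spread > 1.0 else 1.0      # common normalization choice
    variance = np.mean(((fitness - f_avg) / f_norm) ** 2)
    return variance < threshold
```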


2022 ◽  
Vol 19 (1) ◽  
pp. 473-512
Author(s):  
Rong Zheng ◽  
Heming Jia ◽  
Laith Abualigah ◽  
Qingxin Liu ◽  
...  

The arithmetic optimization algorithm (AOA) is a newly proposed metaheuristic method inspired by the arithmetic operators in mathematics. However, the AOA has the weakness of insufficient exploration capability and is likely to fall into local optima. To improve the search quality of the original AOA, this paper presents an improved AOA (IAOA) integrated with a proposed forced switching mechanism (FSM). The enhanced algorithm uses the random math optimizer probability (RMOP) to increase population diversity for better global search. The forced switching mechanism is then introduced into the AOA to help the search agents jump out of local optima. When the search agents cannot find better positions within a certain number of iterations, the proposed FSM makes them perform exploratory behavior, so cases of entrapment in local optima can be avoided effectively. The proposed IAOA is extensively tested on twenty-three classical benchmark functions and ten CEC2020 test functions and compared with the AOA and other well-known optimization algorithms. The experimental results show that the proposed algorithm is superior to the other comparative algorithms on most of the test functions. Furthermore, the test results on two training problems of multi-layer perceptrons (MLP) and three classical engineering design problems also indicate that the proposed IAOA is highly effective when dealing with real-world problems.
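The abstract describes the FSM only behaviorally: agents that fail to improve for a set number of iterations are forced back into exploration. The sketch below is one plausible reading of that behavior (names, the improvement limit, and the step sizes are assumptions), not the paper's exact formulation.

```python
import numpy as np

def forced_switching(position, best, counter, lb, ub, limit=20):
    """One plausible reading of a forced switching mechanism (FSM).

    counter: iterations since this agent last improved its own best fitness.
    If the counter exceeds the limit, the agent abandons exploitation and
    performs an exploratory move; otherwise it keeps refining near the best.
    """
    dim = position.size
    if counter >= limit:
        # Exploratory jump to a random point in the search box, reset the counter.
        return lb + np.random.rand(dim) * (ub - lb), 0
    # Exploitative move around the best-known solution.
    step = 0.01 * (ub - lb) * np.random.randn(dim)
    return np.clip(best + step, lb, ub), counter
```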


Author(s):  
Jiarui Zhou ◽  
Junshan Yang ◽  
Ling Lin ◽  
Zexuan Zhu ◽  
Zhen Ji

Particle swarm optimization (PSO) is a swarm intelligence algorithm well known for its simplicity and high efficiency on various problems. Conventional PSO suffers from premature convergence due to its rapid convergence speed and lack of population diversity, so it easily gets trapped in local optima. For this reason, improvements are made to detect stagnation during the optimization and reactivate the swarm to search towards the global optimum. This chapter imposes the reflecting bound-handling scheme and von Neumann topology on PSO to increase the population diversity. A novel crown jewel defense (CJD) strategy is introduced to restart the swarm when it is trapped in a local optimum region. The resultant algorithm named LCJDPSO-rfl is tested on a group of unimodal and multimodal benchmark functions with rotation and shifting. Experimental results suggest that the LCJDPSO-rfl outperforms state-of-the-art PSO variants on most of the functions.
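Reflecting bound handling is a standard PSO ingredient: a particle that leaves the search box is mirrored back across the violated boundary and the corresponding velocity component is reversed. A minimal sketch (the velocity-reversal detail is the common convention, assumed here):

```python
import numpy as np

def reflect_bounds(position, velocity, lb, ub):
    """Reflecting bound handling: mirror out-of-range coordinates back inside
    the box and reverse the corresponding velocity components."""
    below, above = position < lb, position > ub
    position = np.where(below, 2.0 * lb - position, position)
    position = np.where(above, 2.0 * ub - position, position)
    velocity = np.where(below | above, -velocity, velocity)
    # A single reflection suffices when the overshoot is smaller than the box
    # width; clip as a safeguard against extreme steps.
    return np.clip(position, lb, ub), velocity
```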


Author(s):  
Xiaohui Yuan ◽  
Zhihuan Chen ◽  
Yanbin Yuan ◽  
Yuehua Huang ◽  
Xiaopan Zhang

A novel strength Pareto gravitational search algorithm (SPGSA) is proposed to solve multi-objective optimization problems. The SPGSA utilizes the strength Pareto concept to assign fitness values to agents and uses a fine-grained elitism selection mechanism to keep the population diverse. Furthermore, recombination operators are modeled in this approach to decrease the possibility of trapping in local optima. Experiments are conducted on a series of benchmark problems characterized by difficulties in local optimality, nonuniformity, and nonconvexity. The results show that the proposed SPGSA performs better in comparison with other related works. On the other hand, the effectiveness of two subtle measures added to the GSA is verified, i.e., the fine-grained elitism selection and the use of the SBX and PMO operators. Simulation results show that these measures not only improve the convergence ability of the original GSA but also preserve population diversity adequately, which enables the SPGSA to keep a desirable balance between exploitation and exploration and thus accelerate convergence toward the true Pareto-optimal front.
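The SBX (simulated binary crossover) operator named above has a standard formulation; the sketch below follows it under the usual distribution-index parameterization (polynomial mutation, PMO, is analogous and omitted). The function name and default index are illustrative.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=15.0):
    """Simulated binary crossover (standard form) on two real-coded parents.

    eta is the distribution index: larger values keep children closer to parents.
    """
    u = np.random.rand(p1.size)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```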


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 884
Author(s):  
Petr Stodola ◽  
Karel Michenka ◽  
Jan Nohel ◽  
Marian Rybanský

The dynamic traveling salesman problem (DTSP) falls under the category of combinatorial dynamic optimization problems. The DTSP is composed of a primary TSP sub-problem and a series of TSP iterations, each created by changing the previous iteration. In this article, a novel hybrid metaheuristic algorithm is proposed for the DTSP. This algorithm combines two metaheuristic principles, specifically ant colony optimization (ACO) and simulated annealing (SA). Moreover, the algorithm exploits knowledge about the dynamic changes by transferring the information gathered in previous iterations in the form of a pheromone matrix. The significance of the hybridization, as well as the use of knowledge about the dynamic environment, is examined and validated on benchmark instances including small, medium, and large DTSP problems. The results are compared to four other state-of-the-art metaheuristic approaches, with the conclusion that they are significantly outperformed by the proposed algorithm. Furthermore, the behavior of the algorithm is analyzed from various points of view (including, for example, convergence speed to a local optimum, the progress of population diversity during optimization, and time dependence and computational complexity).
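The SA side of such a hybrid typically reduces to the Metropolis acceptance rule applied to tours constructed by the ants; a generic version is sketched below (the geometric cooling schedule shown in the comment is an assumption, not taken from this paper).

```python
import math
import random

def sa_accept(current_length, candidate_length, temperature):
    """Metropolis acceptance: always take a shorter tour, and accept a longer
    one with probability exp(-delta / T) so the search can escape local optima."""
    delta = candidate_length - current_length
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

# A typical (assumed) geometric cooling step between DTSP iterations:
# temperature *= 0.98
```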


Author(s):  
Shoubao Su ◽  
Zhaorui Zhai ◽  
Chishe Wang ◽  
Kaimeng Ding

The traditional fractional-order particle swarm optimization (FOPSO) algorithm depends on the fractional order [Formula: see text] and easily falls into local optima. To overcome these disadvantages, a novel perspective on the PID gain tuning procedure is proposed by combining a time factor with FOPSO, i.e., a new fractional-order particle swarm optimization called TFFV-PSO, which reduces the dependence on the fractional order and enhances the ability of particles to escape from local optima. According to its influence on the performance of the algorithm, the time factor is varied with population diversity parameters to balance the exploration and exploitation capabilities of the particle swarm, thereby adjusting the convergence speed of the algorithm and yielding better convergence performance. The improved method is tested on several benchmark functions and applied to tune PID controller parameters. The experimental results and the comparison with previous methods show that the proposed TFFV-PSO provides an adequate convergence speed and satisfying accuracy, as well as even better robustness.
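For readers unfamiliar with FOPSO, its velocity update replaces the inertia term with a truncated fractional derivative of the velocity history. The sketch below uses the commonly cited four-term Grünwald–Letnikov truncation; the paper's time-factor modification is not reproduced, and the function name and defaults are illustrative.

```python
import numpy as np

def fopso_velocity(v_hist, x, pbest, gbest, alpha=0.6, c1=1.5, c2=1.5):
    """Fractional-order PSO velocity update (common four-term truncation, assumed).

    v_hist : list of the last four velocity vectors, most recent first.
    alpha  : fractional order controlling how strongly past velocities persist.
    """
    memory = (alpha * v_hist[0]
              + 0.5 * alpha * (1 - alpha) * v_hist[1]
              + (1.0 / 6.0) * alpha * (1 - alpha) * (2 - alpha) * v_hist[2]
              + (1.0 / 24.0) * alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * v_hist[3])
    r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
    return memory + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```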


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Guoming Du ◽  
Yangbo Chen ◽  
Wei Sun

Complex nonlinear optimization problems are involved in optimal spatial search, such as the location allocation problems that occur in multidimensional geographic space. Such search problems are generally difficult to solve using traditional methods. The bat algorithm (BA) is an effective method for solving optimization problems. However, the solution of the standard BA is easily trapped in local optima. The main cause of premature convergence is the loss of diversity in the population. The niche technique is an effective method to maintain population diversity, enhance the exploration of new search domains, and avoid premature convergence. In this paper, a geographic information system (GIS)-based niche hybrid bat algorithm (NHBA) is proposed for solving the optimal spatial search. The NHBA is able to avoid premature convergence and obtain globally optimal values. The GIS technique provides robust support for processing a substantial amount of geographical data. A case in Fangcun District, Guangzhou City, China, is used to test the NHBA. The comparative experiments illustrate that the BA, GA, FA, PSO, and NHBA algorithms outperform the brute-force algorithm in terms of computational efficiency, and the optimal solutions are more easily obtained with NHBA than with BA, GA, FA, and PSO. Moreover, the precision of NHBA is higher and its convergence is faster than those of the other algorithms under the same conditions.
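The niche technique and GIS coupling are specific to NHBA, but the underlying bat-algorithm move is standard: a frequency-scaled pulse toward the best solution, with an occasional local random walk scaled by loudness. A minimal sketch of that baseline follows (the 0.5 pulse-rate test and step scale are simplifications, not the paper's settings).

```python
import numpy as np

def bat_step(x, v, best, loudness, f_min=0.0, f_max=2.0):
    """One standard bat-algorithm move (simplified baseline, not NHBA itself)."""
    freq = f_min + (f_max - f_min) * np.random.rand()   # random pulse frequency
    v = v + (x - best) * freq                           # velocity toward the best bat
    x_new = x + v
    if np.random.rand() > 0.5:                          # simplified pulse-rate test
        # Local random walk around the best solution, scaled by loudness.
        x_new = best + 0.001 * loudness * np.random.randn(x.size)
    return x_new, v
```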

