hybrid mutation operator
Recently Published Documents


TOTAL DOCUMENTS: 9 (FIVE YEARS: 3)
H-INDEX: 5 (FIVE YEARS: 1)

PLoS ONE, 2021, Vol. 16 (4), pp. e0250951
Author(s): Xuxu Zhong, Meijun Duan, Xiao Zhang, Peng Cheng

Differential evolution (DE) is favored by scholars for its simplicity and efficiency, but its ability to balance exploration and exploitation needs to be enhanced. In this paper, a hybrid differential evolution with the gaining-sharing knowledge algorithm (GSK) and Harris hawks optimization (HHO), abbreviated as DEGH, is proposed. Its main contributions are as follows. First, a hybrid mutation operator is constructed in DEGH, in which the two-phase strategy of GSK, the classical DE mutation operator "rand/1" and the soft besiege rule of HHO are used and improved, forming a double-insurance mechanism for the balance between exploration and exploitation. Second, a novel crossover probability self-adaptation strategy is proposed to strengthen the internal relation among the mutation, crossover and selection of DE. On this basis, the crossover probability and scaling factor jointly affect the evolution of each individual, enabling the proposed algorithm to better adapt to various optimization problems. In addition, DEGH is compared with eight state-of-the-art DE algorithms on 32 benchmark functions. Experimental results show that the proposed DEGH algorithm is significantly superior to the compared algorithms.
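As a rough illustration of the kind of operator this abstract describes, the sketch below blends a classical "rand/1" donor with an HHO-like soft-besiege pull toward the best individual and a GSK-like two-phase weighting that shifts from exploration to exploitation over the run. The specific weights, the jump-strength form and the `hybrid_mutation` helper are assumptions for illustration, not DEGH's published formulas.

```python
# Minimal sketch of a DE-style hybrid mutation (assumed forms, not the paper's exact rules)
import numpy as np

def hybrid_mutation(pop, best, F, progress, rng):
    """pop: (NP, D) population, best: (D,) best-so-far vector,
    F: scaling factor, progress: fraction of the run already spent in [0, 1]."""
    NP, D = pop.shape
    donors = np.empty_like(pop)
    for i in range(NP):
        # classical DE "rand/1": three mutually distinct random individuals
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        rand1 = pop[r1] + F * (pop[r2] - pop[r3])
        # HHO-like soft besiege: pull toward the best individual with a random
        # "jump strength" (illustrative form)
        jump = 2.0 * (1.0 - rng.random())
        besiege = (best - pop[i]) * (1.0 + jump * rng.random())
        # GSK-like two-phase weighting: explore early, exploit late (assumption)
        w = progress
        donors[i] = (1.0 - w) * rand1 + w * (pop[i] + F * besiege)
    return donors

# toy usage on a random population
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))
best = pop[np.argmin(np.sum(pop**2, axis=1))]
donors = hybrid_mutation(pop, best, F=0.5, progress=0.3, rng=rng)
```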


Author(s): Esra'a Alkafaween, Ahmad B. A. Hassanat

The genetic algorithm (GA) is an efficient tool for solving optimization problems by evolving solutions, as it mimics the Darwinian theory of natural evolution. The mutation operator is one of the key success factors in GA, as it is considered the exploration operator of GA. Various mutation operators exist to solve hard combinatorial problems such as the TSP. In this paper, we propose a hybrid mutation operator called "IRGIBNNM"; this mutation is a combination of two existing mutations: a knowledge-based mutation and a random-based mutation. We also improve the existing "select best mutation" strategy using the proposed mutation. We conducted several experiments on twelve benchmark symmetric traveling salesman problem (STSP) instances. The results of our experiments show the efficiency of the proposed mutation, particularly when it is used together with other mutations.
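A minimal sketch of a hybrid TSP mutation in this spirit follows: a random-based inversion step composed with a knowledge-based step that reinserts a randomly chosen city beside its nearest neighbour. The composition and the helper names (`inversion`, `nearest_neighbour_insert`, `irgibnnm_like`) are illustrative assumptions; the paper's exact IRGIBNNM operator may differ.

```python
# Sketch of a hybrid (random-based + knowledge-based) TSP mutation, assumed composition
import random

def inversion(tour, rng):
    """Random-based part: reverse a random sub-segment of the tour."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def nearest_neighbour_insert(tour, dist, rng):
    """Knowledge-based part: reinsert a random city beside its nearest neighbour."""
    city = rng.choice(tour)
    rest = [c for c in tour if c != city]
    nn = min(rest, key=lambda c: dist[city][c])
    pos = rest.index(nn) + 1
    return rest[:pos] + [city] + rest[pos:]

def irgibnnm_like(tour, dist, rng):
    return nearest_neighbour_insert(inversion(tour, rng), dist, rng)

# toy usage with a random symmetric distance matrix
rng = random.Random(1)
n = 8
dist = [[0 if i == j else rng.randint(1, 20) for j in range(n)] for i in range(n)]
for i in range(n):
    for j in range(i):
        dist[i][j] = dist[j][i]
mutated = irgibnnm_like(list(range(n)), dist, rng)
```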


2011, Vol. 15 (10), pp. 2041-2055
Author(s): Karthik Sindhya, Sauli Ruuska, Tomi Haanpää, Kaisa Miettinen

2010, Vol. 171-172, pp. 555-560
Author(s): Peng Chen, Xin Li, Shi Bin Ren, Wen Hua Li

In this paper, a novel immune selection operator for multi-objective optimization is presented, based on an antibody concentration strategy with a sufficiency vector distance, together with a scale-variable hybrid mutation operator that introduces the variable-scale method into the hybrid mutation. The algorithm is applied to the optimization design of a passive power filter: the filter structure is introduced first, and the optimization objective and constraints are then formulated. With this objective and these constraints, the algorithm is run on an experimental circuit and the experiment is carried out. Finally, a comparison with a genetic algorithm and with nonlinear programming shows that the proposed algorithm has better self-regulation performance and converges quickly to the global optimum. These results demonstrate that the algorithm is effective and practicable for multi-objective optimization design.
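The sketch below illustrates one plausible reading of a concentration-based immune selection step: concentration is estimated from pairwise vector distances, and antibodies with high affinity but low concentration (i.e. lying in sparsely populated regions) receive larger selection probabilities. The distance threshold, the weighting `alpha` and the `immune_select` helper are assumptions, since the abstract does not give the exact formulas.

```python
# Sketch of an immune selection step driven by distance-based antibody concentration (assumed formulas)
import numpy as np

def immune_select(antibodies, affinity, threshold=0.5, alpha=0.7, rng=None):
    """antibodies: (N, D) decision vectors, affinity: (N,) positive fitness-like scores."""
    rng = rng or np.random.default_rng()
    N = len(antibodies)
    # concentration of antibody i = fraction of antibodies within `threshold` of it
    d = np.linalg.norm(antibodies[:, None, :] - antibodies[None, :, :], axis=-1)
    concentration = (d < threshold).sum(axis=1) / N
    # selection probability mixes normalized affinity with low concentration
    aff = affinity / affinity.sum()
    sparsity = 1.0 - concentration + 1e-12
    conc = sparsity / sparsity.sum()
    prob = alpha * aff + (1.0 - alpha) * conc
    return antibodies[rng.choice(N, size=N, p=prob / prob.sum())]

# toy usage: select from a random population with a toy affinity
rng = np.random.default_rng(2)
pop = rng.uniform(0, 1, size=(30, 4))
fit = 1.0 / (1.0 + np.sum(pop**2, axis=1))
selected = immune_select(pop, fit, rng=rng)
```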


Author(s): Jianyong Chen, Qiuzhen Lin, Qingbin Hu

In this paper, a novel clonal algorithm applied to multiobjective optimization (NCMO) is presented, designed around improved search operators: a dynamic mutation probability, a dynamic simulated binary crossover (D-SBX) operator, and a hybrid mutation operator combining Gaussian and polynomial mutations (GP-HM). The main idea of these operators is to perform more coarse-grained search at the initial stage in order to speed up convergence toward the Pareto-optimal front. Once the solutions get close to the Pareto-optimal front, more fine-grained search is performed in order to reduce the gaps between the solutions and the Pareto-optimal front. For this purpose, a cooling schedule is adopted that gradually reduces the parameters to a minimal threshold, so as to keep a desirable balance between fine-grained and coarse-grained search. By this means, the exploratory capabilities of NCMO are enhanced. Simulation results show that NCMO has remarkable performance when compared with various state-of-the-art multiobjective optimization algorithms developed recently.
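As a rough illustration of a GP-HM-style operator with a cooling schedule, the sketch below lets Gaussian mutation dominate early generations (coarse-grained search) and polynomial mutation dominate later ones (fine-grained search), with a step size decayed toward a minimal threshold. The switch rule, decay factor and distribution index are assumptions, not the paper's settings; only the general Gaussian/polynomial mutation forms are standard.

```python
# Sketch of a Gaussian + polynomial hybrid mutation with a cooling schedule (assumed parameters)
import numpy as np

def gp_hybrid_mutation(x, lo, hi, gen, max_gen, rng,
                       sigma0=0.3, sigma_min=0.01, eta_m=20.0):
    x = x.copy()
    # cooling schedule: geometric decay of the Gaussian step size toward a minimal threshold
    sigma = max(sigma_min, sigma0 * (0.99 ** gen))
    # probability of the coarse Gaussian move shrinks as the run proceeds
    p_gauss = 1.0 - gen / max_gen
    for i in range(len(x)):
        if rng.random() < p_gauss:
            # coarse-grained Gaussian perturbation
            x[i] += sigma * (hi[i] - lo[i]) * rng.normal()
        else:
            # fine-grained, bounded polynomial mutation (Deb's formulation)
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            x[i] += delta * (hi[i] - lo[i])
    return np.clip(x, lo, hi)

# toy usage: mutate one individual mid-run on [0, 1]^5
rng = np.random.default_rng(3)
lo, hi = np.zeros(5), np.ones(5)
child = gp_hybrid_mutation(rng.uniform(0, 1, 5), lo, hi, gen=40, max_gen=200, rng=rng)
```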

