Hybrid Dynamical Evolutionary Algorithm and Time Complexity Analysis

2012 ◽  
Vol 263-266 ◽  
pp. 2344-2348
Author(s):  
Hui Ying Li ◽  
Yi Lai Zhang ◽  
Xing Xu

The dynamical evolutionary algorithm (DEA) is a new evolutionary algorithm based on the theory of statistical mechanics; however, DEA converges slowly and often converges to local optima on some function optimization problems. In this paper, a hybrid dynamical evolutionary algorithm (HDEA) with multi-parent crossover and differential evolution mutation is proposed to accelerate convergence and escape suboptimal solutions more easily. Moreover, the population of HDEA is initialized by chaos. To confirm the effectiveness of the algorithm, HDEA is applied to typical numerical function minimization problems. The computational complexity of HDEA is analyzed, and the experimental results show that HDEA outperforms DEA in convergence speed and precision, even though the two algorithms have similar time complexity.
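Since the abstract only names HDEA's ingredients, the sketch below illustrates how the three components are commonly written in Python: chaotic initialization, multi-parent crossover, and differential evolution mutation. The specific choices here (logistic map, coefficients drawn from [-0.5, 1.5], DE/rand/1, all parameter values) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def chaotic_init(pop_size, dim, lower, upper, mu=4.0):
    """Chaotic population initialization via the logistic map (one common choice)."""
    x = np.random.uniform(0.1, 0.9, size=(pop_size, dim))
    for _ in range(50):                       # iterate the map to spread points out
        x = mu * x * (1.0 - x)
    return lower + x * (upper - lower)        # scale chaos values into the search box

def multi_parent_crossover(parents):
    """Recombine several parents with random coefficients that sum to one."""
    m = len(parents)
    w = np.random.uniform(-0.5, 1.5, size=m)
    while abs(w.sum()) < 1e-6:                # guard against an ill-conditioned normalization
        w = np.random.uniform(-0.5, 1.5, size=m)
    w /= w.sum()
    return np.dot(w, np.asarray(parents))

def de_mutation(pop, i, F=0.5):
    """DE/rand/1 mutation: a base vector plus a scaled difference of two others."""
    idx = np.random.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    r1, r2, r3 = pop[idx]
    return r1 + F * (r2 - r3)

# Illustrative use on a 10-dimensional problem:
pop = chaotic_init(pop_size=20, dim=10, lower=-5.0, upper=5.0)
child = multi_parent_crossover(pop[:3])
mutant = de_mutation(pop, i=0)
```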

2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Author(s):  
Lhassane Idoumghar ◽  
Mahmoud Melkemi ◽  
René Schott ◽  
Maha Idrissi Aouad

The paper presents a novel hybrid evolutionary algorithm that combines the Particle Swarm Optimization (PSO) and Simulated Annealing (SA) algorithms. When PSO reaches a locally optimal solution, all particles gather around it, and escaping from this local optimum becomes difficult. To avoid premature convergence of PSO, we present a new hybrid evolutionary algorithm, called HPSO-SA, based on the idea that PSO ensures fast convergence while SA brings the search out of local optima thanks to its strong local-search ability. The proposed HPSO-SA algorithm is validated on ten standard benchmark multimodal functions, for which we obtained significant improvements. The results are compared with those obtained by existing hybrid PSO-SA algorithms. We also provide two versions of HPSO-SA (sequential and distributed) for minimizing the energy consumption of embedded-system memories. The two versions of HPSO-SA reduce the energy consumption in memories by 76% to 98% compared to Tabu Search (TS). Moreover, the distributed version of HPSO-SA provides execution-time savings of about 73% to 84% on a cluster of 4 PCs.
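The core idea of such a hybrid can be sketched compactly: a standard PSO velocity/position update followed by a Metropolis-style acceptance test with a geometric cooling schedule, so that occasionally a worse personal best is kept and the swarm can leave a local optimum. The parameter values, the cooling rule and the sphere test function below are assumptions for illustration, not the published HPSO-SA code.

```python
import numpy as np

def hpso_sa(f, dim, n_particles=30, iters=500, bounds=(-5.0, 5.0),
            w=0.7, c1=1.5, c2=1.5, temp=1.0, cooling=0.97):
    """Sketch: PSO updates combined with an SA acceptance test on personal bests."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        for i, xi in enumerate(x):
            fi = f(xi)
            delta = fi - pbest_val[i]
            # SA rule: always accept improvements, accept worse moves with
            # probability exp(-delta / temp) to keep exploring.
            if delta < 0 or np.random.rand() < np.exp(-delta / temp):
                pbest[i], pbest_val[i] = xi, fi
        g = pbest[pbest_val.argmin()].copy()
        temp *= cooling                      # geometric cooling schedule
    return g, f(g)

# Example on the sphere function (assumed test problem):
best, val = hpso_sa(lambda z: float(np.sum(z ** 2)), dim=10)
```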


2021 ◽  
Vol 6 (4 (114)) ◽  
pp. 6-14
Author(s):  
Maan Afathi

The main purpose of using a hybrid evolutionary algorithm is to reach optimal values and achieve goals that traditional methods cannot; because different evolutionary computations exist, each with different advantages and capabilities, researchers integrate more than one algorithm into a hybrid form to increase the abilities these algorithms have when working alone. In this paper, we propose a new hybrid algorithm combining a genetic algorithm (GA) and particle swarm optimization (PSO) with a fuzzy logic control (FLC) approach for function optimization. Fuzzy logic is applied to switch dynamically between the evolutionary algorithms, in an attempt to improve algorithm performance. The HEF hybrid evolutionary algorithm is compared to GA, PSO, GAPSO, and PSOGA. The comparison uses a variety of measurement functions: in addition to strongly convex functions, these functions can be uniformly distributed or not, and are valuable for evaluating our approach. Iteration counts of 500, 1000, and 1500 were used for each function. The HEF algorithm's efficiency was tested on four functions. The new algorithm often finds the best solution; HEF gave the best result in 75 % of all tests. This method is superior to conventional methods in terms of efficiency.
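As a rough illustration of how fuzzy logic can route the search to one operator or the other, the sketch below uses triangular membership functions over normalized swarm diversity and stagnation to decide whether the next step should be a GA or a PSO step. The membership functions and rule base are illustrative assumptions, not the paper's FLC.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def choose_operator(diversity, stagnation):
    """Toy rule base: low diversity or long stagnation favors the GA step
    (recombination and mutation restore diversity); otherwise the PSO step
    is favored for its fast exploitation. Inputs are normalized to [0, 1]."""
    low_div   = tri(diversity, -0.01, 0.0, 0.5)
    high_div  = tri(diversity, 0.3, 1.0, 1.01)
    long_stag = tri(stagnation, 0.3, 1.0, 1.01)
    ga_degree  = max(low_div, long_stag)      # aggregate the "use GA" rules by max
    pso_degree = high_div
    return "GA" if ga_degree >= pso_degree else "PSO"

# Example: a stagnating, low-diversity swarm is routed to the GA operators.
print(choose_operator(diversity=0.1, stagnation=0.8))   # -> "GA"
```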


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Li Mao ◽  
Yu Mao ◽  
Changxi Zhou ◽  
Chaofeng Li ◽  
Xiao Wei ◽  
...  

The artificial bee colony (ABC) algorithm performs well at discovering optimal solutions to difficult optimization problems, but it has weak local-search ability and easily plunges into local optima. In this paper, we introduce the chemotactic behavior of Bacterial Foraging Optimization into the employed bees and adopt the particle swarm optimization principle of moving particles toward the best solutions to improve the global search ability of the onlooker bees, obtaining a hybrid artificial bee colony (HABC) algorithm. To reach a global optimum efficiently, HABC is made to converge rapidly in the early stages of the search while the search range contracts dynamically during the late stages. Our experimental results on 16 benchmark functions from CEC 2014 show that HABC achieves significant improvements in accuracy and convergence rate compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms.
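The two modifications named above can be sketched as small building blocks that would plug into a standard ABC loop (the scout phase and the dynamic contraction schedule are omitted). Step sizes, swim length and the sphere test function are assumed for illustration; this is not the authors' HABC implementation.

```python
import numpy as np

def chemotaxis_step(x, f, step=0.1, swims=4):
    """BFO-style chemotaxis for an employed bee: tumble to a random unit
    direction, then keep swimming along it while the objective improves."""
    d = np.random.uniform(-1.0, 1.0, size=x.shape)
    d /= np.linalg.norm(d)
    best, best_val = x, f(x)
    for _ in range(swims):
        cand = best + step * d
        cand_val = f(cand)
        if cand_val < best_val:
            best, best_val = cand, cand_val
        else:
            break
    return best

def onlooker_step(x, gbest, c=1.5):
    """PSO-inspired onlooker move: drift the food source toward the best
    solution found so far."""
    return x + c * np.random.rand(*x.shape) * (gbest - x)

# Example with the sphere function (assumed test problem):
sphere = lambda z: float(np.sum(z ** 2))
x = np.random.uniform(-5.0, 5.0, size=10)
x = chemotaxis_step(x, sphere)
x = onlooker_step(x, gbest=np.zeros(10))
```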


1998 ◽  
Vol 6 (2) ◽  
pp. 185-196 ◽  
Author(s):  
Stefan Droste ◽  
Thomas Jansen ◽  
Ingo Wegener

Evolutionary algorithms (EAs) are heuristic randomized algorithms which, in many impressive experiments, have been shown to behave quite well for optimization problems of various kinds. In this paper a rigorous theoretical complexity analysis of the (1 + 1) evolutionary algorithm for separable functions with Boolean inputs is given. Different mutation rates are compared, and the use of the crossover operator is investigated. The main contribution is not the result that the expected run time of the (1 + 1) evolutionary algorithm is Θ(n ln n) for separable functions with n variables but the methods by which this result can be proven rigorously.
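The (1 + 1) EA itself is simple enough to state directly: flip each bit independently with probability 1/n and keep the offspring whenever it is not worse. The sketch below applies it to OneMax, a separable function on which the expected runtime is indeed Θ(n ln n); the iteration limit and the choice of OneMax are assumptions for illustration, not part of the paper.

```python
import random

def one_plus_one_ea(f, n, max_iters=100_000, p=None):
    """(1+1) EA on bit strings: standard bit-flip mutation with rate p
    (default 1/n) and elitist acceptance of offspring that are not worse."""
    p = p if p is not None else 1.0 / n
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        y = [b ^ 1 if random.random() < p else b for b in x]
        fy = f(y)
        if fy >= fx:                         # maximization; ties are accepted
            x, fx = y, fy
    return x, fx

# OneMax, perhaps the simplest separable function, as an assumed example:
best, val = one_plus_one_ea(lambda s: sum(s), n=50)
```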


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
J. J. Jamian ◽  
M. N. Abdullah ◽  
H. Mokhlis ◽  
M. W. Mustafa ◽  
A. H. A. Bakar

The Particle Swarm Optimization (PSO) algorithm is a popular optimization method that is widely used in various applications due to its simplicity and its ability to obtain optimal results. However, ordinary PSO may become trapped at a local optimum, especially in high-dimensional problems. To overcome this problem, an efficient Global Particle Swarm Optimization (GPSO) algorithm is proposed in this paper, based on a new update strategy for the particle position. This is done by sharing information about particle positions between the dimensions (variables) at every iteration. The strategy enhances the exploration capability of the GPSO algorithm so that it can find the global optimum and avoid being trapped at local optima. The proposed GPSO algorithm is validated on 12 benchmark mathematical functions and compared with three different types of PSO techniques. The performance of the algorithm is measured by solution quality, convergence characteristics, and robustness over 50 trials. The simulation results show that the new update strategy in GPSO helps reach a better optimum solution with the smallest standard deviation compared to the other techniques. It can be concluded that the proposed GPSO method is a superior technique for solving high-dimensional numerical function optimization problems.
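The abstract does not give the exact dimension-sharing rule, so it is not reproduced here; instead, the sketch below shows the 50-trial quality/robustness protocol it describes, with a placeholder random-search optimizer standing in for GPSO. The function names, evaluation budget and sphere test function are assumptions.

```python
import numpy as np

def random_search(f, dim, evals=2000, bounds=(-100.0, 100.0)):
    """Placeholder optimizer standing in for GPSO (not the paper's method)."""
    pts = np.random.uniform(*bounds, size=(evals, dim))
    vals = np.apply_along_axis(f, 1, pts)
    return vals.min()

def robustness_report(optimizer, f, dim, trials=50):
    """Run independent trials and report mean/std of the best value found,
    mirroring the 50-trial quality/robustness protocol described above."""
    results = np.array([optimizer(f, dim) for _ in range(trials)])
    return results.mean(), results.std()

sphere = lambda z: float(np.sum(z ** 2))
mean_best, std_best = robustness_report(random_search, sphere, dim=12)
```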


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 922 ◽  
Author(s):  
Yuji Du ◽  
Fanfan Xu

As a meta-heuristic algorithm, particle swarm optimization (PSO) has the advantages of a simple principle, few required parameters, easy implementation and strong adaptability. However, it easily falls into a local optimum in the early stage of iteration. To address this shortcoming, this paper presents a hybrid multi-step probability selection particle swarm optimization with sine chaotic inertial weight and symmetric tangent chaotic acceleration coefficients (MPSPSO-ST), which strengthens the overall performance of PSO to a large extent. Firstly, we propose a hybrid multi-step probability selection update mechanism (MPSPSO), which uses a multi-step process and roulette-wheel selection to improve performance. To achieve a good balance between global and local search capability and further enhance the method, we also design a sine chaotic inertial weight and symmetric tangent chaotic acceleration coefficients, inspired by chaos mechanisms and trigonometric functions, which are integrated into the MPSPSO-ST algorithm. This strategy preserves the diversity of the swarm and discourages premature convergence. To evaluate the effectiveness of the MPSPSO-ST algorithm, we conducted extensive experiments with 20 classic benchmark functions. The experimental results show that the MPSPSO-ST algorithm has faster convergence, higher optimization accuracy and better robustness, is competitive in solving numerical optimization problems, and outperforms many classical PSO variants and well-known optimization algorithms.
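The abstract does not give the chaotic formulas, so the sketch below uses a sine map, a tangent-shaped symmetric split of the acceleration budget, and a plain roulette-wheel helper purely as illustrative stand-ins for the sine chaotic inertia weight, the symmetric tangent chaotic acceleration coefficients, and the multi-step probability selection. None of these expressions should be read as the paper's definitions.

```python
import math, random

def sine_map(z, a=1.0):
    """Sine chaotic map z_{k+1} = a*sin(pi*z_k); one common chaotic generator."""
    return a * math.sin(math.pi * z)

def chaotic_inertia(iteration, max_iter, z, w_max=0.9, w_min=0.4):
    """Illustrative chaotic inertia weight: a linearly decreasing base value
    perturbed by the sine-map state; returns the weight and the next state."""
    base = w_max - (w_max - w_min) * iteration / max_iter
    return base * (0.5 + 0.5 * z), sine_map(z)

def symmetric_acceleration(z, c_sum=4.0):
    """Illustrative symmetric acceleration coefficients: split a fixed budget
    c1 + c2 = c_sum using a tangent-shaped perturbation of the chaos state."""
    t = 0.5 + 0.25 * math.tanh(math.tan(math.pi * (z - 0.5)) / 10.0)
    return c_sum * t, c_sum * (1.0 - t)

def roulette_select(fitnesses):
    """Roulette-wheel selection over candidate moves; assumes nonnegative
    objective values and favors smaller ones (minimization)."""
    weights = [1.0 / (1e-12 + fi) for fi in fitnesses]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for i, wgt in enumerate(weights):
        acc += wgt
        if acc >= r:
            return i
    return len(weights) - 1

# Example: evolve the chaotic parameters over a few iterations.
z = 0.7
for k in range(3):
    w, z = chaotic_inertia(k, 100, z)
    c1, c2 = symmetric_acceleration(z)
```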

