Particle Swarm and Bacterial Foraging Inspired Hybrid Artificial Bee Colony Algorithm for Numerical Function Optimization

2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Li Mao ◽  
Yu Mao ◽  
Changxi Zhou ◽  
Chaofeng Li ◽  
Xiao Wei ◽  
...  

The artificial bee colony (ABC) algorithm performs well at discovering optimal solutions to difficult optimization problems, but it has weak local search ability and easily plunges into local optima. In this paper, we introduce the chemotactic behavior of Bacterial Foraging Optimization into the employed bees and adopt the particle swarm optimization principle of moving particles toward the best solutions to improve the global search ability of the onlooker bees, obtaining a hybrid artificial bee colony (HABC) algorithm. To reach a global optimum efficiently, the HABC algorithm converges rapidly in the early stages of the search, while the search range contracts dynamically in the later stages. Experimental results on 16 benchmark functions from CEC 2014 show that HABC achieves significant improvements in accuracy and convergence rate compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms.
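A minimal Python sketch of the hybrid idea described in this abstract, assuming a BFO-style chemotactic tumble for the employed bees, a PSO-style pull toward the best food source for the onlookers, and a linearly contracting step size; the function habc_sketch and all parameter values are illustrative, not the paper's exact update rules.

```python
import numpy as np

def habc_sketch(f, dim=10, n_food=20, max_iter=200, bounds=(-5.12, 5.12)):
    """Hedged sketch of a PSO/BFO-flavoured hybrid ABC (illustrative only)."""
    lo, hi = bounds
    foods = np.random.uniform(lo, hi, (n_food, dim))
    fits = np.array([f(x) for x in foods])
    best = foods[fits.argmin()].copy()

    for t in range(max_iter):
        step = 0.1 * (hi - lo) * (1 - t / max_iter)   # search range contracts over time

        # Employed bees: BFO-style chemotactic tumble, keep only improving moves
        for i in range(n_food):
            direction = np.random.uniform(-1, 1, dim)
            direction /= np.linalg.norm(direction)
            cand = np.clip(foods[i] + step * direction, lo, hi)
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i] = cand, fc

        # Onlooker bees: fitness-proportional choice, then a PSO-like pull toward the best source
        probs = fits.max() - fits + 1e-12
        probs /= probs.sum()
        for _ in range(n_food):
            i = np.random.choice(n_food, p=probs)
            cand = np.clip(foods[i] + np.random.rand(dim) * (best - foods[i]), lo, hi)
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i] = cand, fc

        if fits.min() < f(best):
            best = foods[fits.argmin()].copy()
    return best, f(best)

# Example: minimize the sphere function
best_x, best_val = habc_sketch(lambda x: float(np.sum(x ** 2)))
```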

2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
J. J. Jamian ◽  
M. N. Abdullah ◽  
H. Mokhlis ◽  
M. W. Mustafa ◽  
A. H. A. Bakar

The Particle Swarm Optimization (PSO) algorithm is a popular optimization method that is widely used in various applications because of its simplicity and its capability of obtaining optimal results. However, ordinary PSO may become trapped at a local optimum, especially in high-dimensional problems. To overcome this, an efficient Global Particle Swarm Optimization (GPSO) algorithm is proposed in this paper, based on a new particle-position update strategy. The strategy shares particle-position information between dimensions (variables) at each iteration, which enhances the exploration capability of the GPSO algorithm so that it can locate the global optimum and avoid being trapped at local optima. The proposed GPSO algorithm is validated on 12 benchmark mathematical functions and compared with three other PSO variants. Performance is measured by solution quality, convergence characteristics, and robustness over 50 trials. The simulation results show that the new update strategy in GPSO yields better solutions with the smallest standard deviation among the compared techniques. It can be concluded that the proposed GPSO method is a superior technique for solving high-dimensional numerical function optimization problems.
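A hedged sketch of the cross-dimension sharing idea, read here as occasionally letting a particle's coordinate borrow the global-best value of another randomly chosen dimension on top of the standard PSO update; gpso_sketch, share_prob, and the parameter values are assumptions, not the authors' exact formulation.

```python
import numpy as np

def gpso_sketch(f, dim=30, n_particles=40, iters=500, bounds=(-100, 100),
                w=0.7, c1=1.5, c2=1.5, share_prob=0.1):
    """Standard PSO plus a cross-dimension information-sharing step (illustrative only)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)

        # Cross-dimension sharing: with probability share_prob, a coordinate
        # takes the global-best value of another randomly chosen dimension.
        mask = np.random.rand(n_particles, dim) < share_prob
        donor_dims = np.random.randint(0, dim, (n_particles, dim))
        x = np.where(mask, g[donor_dims], x)

        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```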


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 922 ◽  
Author(s):  
Yuji Du ◽  
Fanfan Xu

As a meta-heuristic algorithm, particle swarm optimization (PSO) has the advantages of a simple principle, few required parameters, easy implementation and strong adaptability. However, it easily falls into a local optimum in the early stage of iteration. To address this shortcoming, this paper presents a hybrid multi-step probability selection particle swarm optimization with sine chaotic inertial weight and symmetric tangent chaotic acceleration coefficients (MPSPSO-ST), which strengthens the overall performance of PSO to a large extent. First, we propose a hybrid multi-step probability selection update mechanism (MPSPSO), which uses a multi-step process and roulette wheel selection to improve performance. To achieve a good balance between global and local search capability and further enhance the method, we also design a sine chaotic inertial weight and symmetric tangent chaotic acceleration coefficients, inspired by chaos mechanisms and trigonometric functions, which are integrated into the MPSPSO-ST algorithm. This strategy preserves swarm diversity and discourages premature convergence. To evaluate the effectiveness of the MPSPSO-ST algorithm, we conducted extensive experiments on 20 classic benchmark functions. The experimental results show that MPSPSO-ST has faster convergence, higher optimization accuracy and better robustness; it is competitive in solving numerical optimization problems and outperforms many classical PSO variants and well-known optimization algorithms.
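A hedged sketch of how chaotic parameter schedules of this kind might be generated: a sine map drives the inertia weight, and a tangent-shaped envelope makes the two acceleration coefficients vary symmetrically. The specific maps, ranges, and the function chaotic_schedules are assumptions, since the paper's exact formulas are not reproduced in the abstract.

```python
import numpy as np

def chaotic_schedules(iters, w_range=(0.4, 0.9), c_range=(0.5, 2.5), z0=0.7):
    """Per-iteration inertia weight and acceleration coefficients (illustrative only)."""
    w, c1, c2 = np.empty(iters), np.empty(iters), np.empty(iters)
    z = z0
    for t in range(iters):
        z = abs(np.sin(np.pi * z))                    # sine chaotic map on (0, 1]
        shrink = 1.0 - t / iters                      # linearly shrinking envelope
        w[t] = w_range[0] + (w_range[1] - w_range[0]) * z * shrink
        # Symmetric tangent-shaped coefficients: c1 decays while c2 grows,
        # shifting the swarm from exploration toward exploitation.
        tan_term = np.tan(np.pi / 4 * t / iters) * z  # rises roughly from 0 toward 1
        c1[t] = c_range[1] - (c_range[1] - c_range[0]) * tan_term
        c2[t] = c_range[0] + (c_range[1] - c_range[0]) * tan_term
    return w, c1, c2
```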


2015 ◽  
Vol 26 (10) ◽  
pp. 1550109 ◽  
Author(s):  
Zakaria N. Alqattan ◽  
Rosni Abdullah

The Artificial Bee Colony (ABC) algorithm is a swarm intelligence algorithm introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other optimization search algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, ABC's local search process and its bee-movement (solution improvement) equation still have weaknesses: ABC is good at avoiding entrapment in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. The HPABC algorithm was tested experimentally on numerical benchmark functions. The results show that HPABC outperforms ABC in most of the experiments (75% better in accuracy and over 3 times faster).
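A hedged sketch of the kind of modification described: the standard ABC neighbour equation v_ij = x_ij + phi_ij (x_ij - x_kj) is replaced by a PSO-like movement toward a personal best and the global best. The function hpabc_update and its weights are illustrative, not the paper's exact equation.

```python
import numpy as np

def hpabc_update(x_i, pbest_i, gbest, w=0.7, c1=1.5, c2=1.5):
    """Candidate-solution update driven by PSO-style particle movement (illustrative only)."""
    r1 = np.random.rand(*x_i.shape)
    r2 = np.random.rand(*x_i.shape)
    # Pull the food source toward its own best-known position and the global best,
    # instead of toward a randomly selected neighbour as in standard ABC.
    step = c1 * r1 * (pbest_i - x_i) + c2 * r2 * (gbest - x_i)
    return x_i + w * step
```

As in standard ABC, such a candidate would replace x_i only if it improves the objective (greedy selection), so the modified move changes where the algorithm looks, not how it accepts solutions.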


2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and to increase population diversity. Particles in groups that have fallen into a local optimum are adaptively adjusted, by judging the groups' spatial concentration and fitness variance, so that the global optimum can be reached. At the same time, the global factors are adjusted dynamically according to the current particle fitness. Four typical function optimization problems are used in simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust and accurate.
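A hedged sketch of a fitness-variance and spatial-concentration check of the kind mentioned above, commonly used to decide when to re-scatter particles that have converged prematurely; the normalisation and thresholds in fitness_variance and needs_perturbation are assumptions, not the paper's definitions.

```python
import numpy as np

def fitness_variance(fitnesses):
    """Normalised fitness variance: small values indicate the swarm has clustered."""
    f = np.asarray(fitnesses, dtype=float)
    dev = f - f.mean()
    scale = max(1.0, float(np.abs(dev).max()))   # normalising factor (assumption)
    return float(np.sum((dev / scale) ** 2))

def needs_perturbation(fitnesses, positions, var_tol=1e-3, spread_tol=1e-2):
    """True when both fitness variance and spatial spread are small, i.e. the
    swarm is likely concentrated around a local optimum and should be perturbed."""
    spread = float(np.mean(np.std(np.asarray(positions, dtype=float), axis=0)))
    return fitness_variance(fitnesses) < var_tol and spread < spread_tol
```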


2019 ◽  
Vol 2019 ◽  
pp. 1-18 ◽  
Author(s):  
Xingwang Huang ◽  
Chaopeng Li ◽  
Yunming Pu ◽  
Bingyan He

The quantum-behaved bat algorithm with mean best position directed (QMBA) is a novel variant of the bat algorithm (BA) with good performance. However, QMBA generates all stochastic coefficients from a uniform probability distribution, which provides only a relatively small search range, so it still suffers a certain degree of premature convergence. To help bats escape from local optima, this article proposes a Gaussian quantum bat algorithm with mean best position directed (GQMBA), which uses a Gaussian probability distribution to generate the random number sequences. Using a Gaussian distribution instead of a uniform distribution to generate the random coefficients is an effective technique for avoiding premature convergence. In this article, the combination of QMBA and the Gaussian probability distribution is applied to numerical function optimization. Nineteen benchmark functions are employed, and GQMBA is compared with other algorithms to evaluate its accuracy and performance. The experimental results show that, in most cases, the proposed GQMBA algorithm provides better search performance.
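A hedged sketch of a mean-best-directed, quantum-behaved position update in which the stochastic spread coefficient is drawn from a Gaussian rather than a uniform distribution; gqmba_position_update follows the generic QPSO-style form as an assumption and is not the paper's exact equation.

```python
import numpy as np

def gqmba_position_update(x, local_attractor, mean_best, beta=0.75, rng=None):
    """Move each bat around its local attractor with a Gaussian-scaled spread
    proportional to its distance from the mean best position (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.abs(rng.normal(0.0, 1.0, size=np.shape(x)))     # Gaussian coefficient (vs. uniform in QMBA)
    sign = np.where(rng.random(np.shape(x)) < 0.5, 1.0, -1.0)
    return local_attractor + sign * beta * np.abs(mean_best - x) * g
```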

