Adaptive Multiswarm Comprehensive Learning Particle Swarm Optimization

Information ◽  
2018 ◽  
Vol 9 (7) ◽  
pp. 173 ◽  
Author(s):  
Xiang Yu ◽  
Claudio Estevez

Multiswarm comprehensive learning particle swarm optimization (MSCLPSO) is a multiobjective metaheuristic recently proposed by the authors. MSCLPSO uses multiple swarms of particles and externally stores elitists, i.e., the nondominated solutions found so far. MSCLPSO can approximate the true Pareto front in a single run; however, it requires a large number of generations to converge, because each swarm optimizes only its associated objective and does not learn from any search experience outside the swarm. In this paper, we propose an adaptive particle velocity update strategy for MSCLPSO to improve search efficiency. Based on whether the elitists are indifferent or complex on each dimension, each particle adaptively determines whether to learn only from some particle in the same swarm, or additionally from the difference of some pair of elitists, for the velocity update on that dimension, trying to achieve a tradeoff between optimizing the associated objective and exploring diverse regions of the Pareto set. Experimental results on various two-objective and three-objective benchmark optimization problems with different dimensional complexity characteristics demonstrate that the adaptive particle velocity update strategy improves the search performance of MSCLPSO significantly and helps MSCLPSO locate the true Pareto front more quickly and obtain better-distributed nondominated solutions over the entire Pareto front.
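
A minimal sketch of the per-dimension adaptive velocity update described above, under assumed names and a spread-based test: a dimension on which the elitists essentially agree is treated as indifferent and learned only from an in-swarm exemplar, while a dimension on which they disagree additionally learns from the difference of a random pair of elitists. This is an illustration of the idea, not the authors' exact rule.

```python
import numpy as np

def adaptive_velocity(v, x, exemplar, elitists, w=0.4, c=1.5, c_e=1.0, tol=1e-3):
    """Per-dimension velocity update (illustrative): learn from an in-swarm
    exemplar and, on 'complex' dimensions where the elitists disagree, also
    from the difference of a random pair of elitists."""
    rng = np.random.default_rng()
    elitists = np.asarray(elitists, dtype=float)
    v_new = np.empty_like(v, dtype=float)
    for d in range(len(v)):
        v_new[d] = w * v[d] + c * rng.random() * (exemplar[d] - x[d])
        spread = elitists[:, d].max() - elitists[:, d].min()
        if spread > tol and len(elitists) >= 2:  # 'complex' dimension
            a, b = rng.choice(len(elitists), size=2, replace=False)
            v_new[d] += c_e * rng.random() * (elitists[a, d] - elitists[b, d])
    return v_new
```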

2010 ◽  
Vol 1 (1) ◽  
pp. 42-63 ◽  
Author(s):  
Wen Fung Leong ◽  
Gary G. Yen

In this article, the authors propose a particle swarm optimization (PSO) algorithm for constrained optimization. The proposed PSO adopts a multiobjective approach to constraint handling. Procedures to update the feasible and infeasible personal bests are designed to encourage finding feasible regions and convergence toward the Pareto front. In addition, the infeasible nondominated solutions are stored in the global best archive to exploit the hidden information for guiding the particles toward feasible regions. Furthermore, the number of feasible personal bests in the personal best memory and the scalar constraint violations of the personal bests and global bests are used to adapt the acceleration constants in the PSO flight equations. The purpose is to find more feasible particles and search for better solutions during the process. A mutation procedure is applied to encourage global search and fine-tune local search. The simulation results indicate that the proposed constrained PSO is highly competitive, achieving promising performance.
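
As a rough, hedged sketch of the ideas above, the snippet below shows a scalar constraint violation and one possible way the acceleration constants could be adapted from the share of feasible personal bests; the formulas and names are assumptions for illustration, not the authors' equations.

```python
import numpy as np

def scalar_violation(g):
    """Scalar constraint violation: sum of positive parts of g_i(x) <= 0."""
    return float(np.sum(np.maximum(np.asarray(g, dtype=float), 0.0)))

def adapt_acceleration(n_feasible_pbest, swarm_size, c_min=1.5, c_max=2.5):
    """Illustrative adaptation: the larger the share of feasible personal
    bests, the more the cognitive constant c1 is emphasised; otherwise the
    social constant c2, which pulls particles toward the (possibly
    infeasible) nondominated guides, is emphasised."""
    ratio = n_feasible_pbest / swarm_size
    c1 = c_min + (c_max - c_min) * ratio
    c2 = c_max - (c_max - c_min) * ratio
    return c1, c2
```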


Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 827 ◽  
Author(s):  
E. J. Solteiro Pires ◽  
J. A. Tenreiro Machado ◽  
P. B. de Moura Oliveira

Particle swarm optimization (PSO) is a search algorithm inspired by the collective behavior of flocking birds and schooling fish. The algorithm is widely adopted for solving optimization problems involving a single objective. PSO progress is usually measured by the fitness of the best particle and the average fitness of the particles. When several objectives are considered, the PSO may incorporate distinct strategies to preserve nondominated solutions across iterations. The performance of multiobjective PSO (MOPSO) is usually evaluated by considering the resulting swarm at the end of the algorithm. In this paper, two indices based on the Shannon entropy are presented to study the swarm's dynamic evolution during MOPSO execution. The results show that both indices are useful for analyzing the diversity and convergence of multiobjective algorithms.
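
A small sketch of an entropy-based diversity index of the kind described: the swarm's objective vectors are binned over a grid and the Shannon entropy of the resulting distribution is computed. The gridding scheme and parameters are illustrative assumptions, not the paper's two indices.

```python
import numpy as np

def swarm_entropy(objectives, bins=10):
    """Shannon entropy of the swarm's distribution over objective-space
    cells; higher values indicate a more evenly spread (diverse) swarm."""
    objectives = np.asarray(objectives, dtype=float)
    hist, _ = np.histogramdd(objectives, bins=bins)  # count particles per cell
    p = hist.ravel() / hist.sum()
    p = p[p > 0.0]
    return float(-np.sum(p * np.log2(p)))
```

Tracking such a value generation by generation makes it possible to see whether diversity collapses before the front has been covered.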


2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Xueying Lv ◽  
Yitian Wang ◽  
Junyi Deng ◽  
Guanyu Zhang ◽  
Liu Zhang

In this study, an improved eliminate particle swarm optimization (IEPSO) based on the last-eliminated principle is proposed to solve optimization problems in engineering design. During optimization, the IEPSO enhances information communication among populations and maintains population diversity to overcome the limitations of classical optimization algorithms in solving multiparameter, strongly coupled, and nonlinear engineering optimization problems; these limitations include premature convergence and a tendency to fall easily into local optima. The parameters involved in the introduced "local-global information sharing" term are analyzed, and a principle for selecting them for good performance is determined. The performance of the IEPSO and classical optimization algorithms is then tested on multiple sets of classical functions to verify the global search performance of the IEPSO. The simulation results are compared and analyzed against those of improved classical optimization algorithms to verify the superior performance of the IEPSO algorithm.
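
The extra "local-global information sharing" term can be pictured, very roughly, as a third component in the velocity update. The coefficient c3 and the (gbest - lbest) form below are assumptions for illustration, since the paper analyses how such a parameter should be chosen rather than fixing it here.

```python
import numpy as np

def iepso_velocity(v, x, pbest, gbest, lbest, w=0.7, c1=1.5, c2=1.5, c3=1.0):
    """Velocity update with an extra 'local-global information sharing'
    term (illustrative form, not the paper's exact equation)."""
    r1, r2, r3 = np.random.rand(3)
    return (w * v
            + c1 * r1 * (pbest - x)      # cognitive component
            + c2 * r2 * (gbest - x)      # social component
            + c3 * r3 * (gbest - lbest)) # local-global information sharing
```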


2018 ◽  
Vol 232 ◽  
pp. 03039
Author(s):  
Taowei Chen ◽  
Yiming Yu ◽  
Kun Zhao

The particle swarm optimization (PSO) algorithm has been widely applied to solving multi-objective optimization problems (MOPs) since it was proposed. However, PSO algorithms update the velocity of each particle using a single search strategy, which may make it difficult to obtain an approximate Pareto front for complex MOPs. In this paper, inspired by the theory of P systems, a multi-objective particle swarm optimization algorithm based on the framework of a membrane system (PMOPSO) is proposed to solve MOPs. Following the hierarchical structure, objects, and rules of a P system, the PSO approach is used in the elementary membranes to execute multiple search strategies, while non-dominated sorting and crowding distance are used in the skin membrane, via evolutionary rules, to improve convergence speed and maintain population diversity. Compared with other multi-objective optimization algorithms, including MOPSO, dMOPSO, SMPSO, MMOPSO, MOEA/D, SPEA2, PESA2, and NSGAII, on a series of benchmark functions, the experimental results indicate that the proposed algorithm is not only feasible and effective but also converges better to the true Pareto front.
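
Within the skin membrane, archive maintenance of the kind mentioned (non-dominated sorting plus crowding distance) can be illustrated by the standard crowding-distance computation below; this is the textbook formulation, not necessarily the paper's exact evolutionary rules.

```python
import numpy as np

def crowding_distance(front):
    """Standard crowding distance for one nondominated front
    (rows are objective vectors); larger values mean less crowded."""
    front = np.asarray(front, dtype=float)
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf  # always keep boundary points
        span = front[order[-1], j] - front[order[0], j]
        if span == 0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1], j] - front[order[k - 1], j]) / span
    return dist
```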


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Di Zhou ◽  
Yajun Li ◽  
Bin Jiang ◽  
Jun Wang

Due to its fast convergence and population-based nature, particle swarm optimization (PSO) has been widely applied to multiobjective optimization problems (MOPs). However, the classical PSO has been proven not to be a global search algorithm, so multiobjective PSO-based algorithms may fail to converge to the global optima. In this paper, making full use of the global convergence property of quantum-behaved particle swarm optimization (QPSO), a novel multiobjective QPSO algorithm based on the ring model is proposed. Based on the ring model, the position-update strategy is improved to address MOPs. A novel communication mechanism between particles effectively slows the decline of swarm diversity. Moreover, the searching ability is further improved by adjusting the position of the local attractor. Experimental results show that the proposed algorithm is highly competitive in both convergence and diversity when solving MOPs, and the advantage becomes even more pronounced as the number of objectives increases.
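
A generic sketch of a QPSO position update with a ring neighborhood, in which each particle's local attractor mixes its personal best with the best personal best among its two ring neighbors; variable names and the contraction-expansion coefficient beta are illustrative, and the paper's improved position-update strategy for MOPs is not reproduced here.

```python
import numpy as np

def qpso_ring_update(X, pbest, pbest_fit, beta=0.75):
    """One QPSO position update with a ring topology (minimization).
    X, pbest: (n, d) arrays; pbest_fit: length-n fitness of personal bests."""
    n, d = X.shape
    mbest = pbest.mean(axis=0)                    # mean best position
    X_new = np.empty_like(X)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]     # ring neighborhood
        lbest = pbest[min(neigh, key=lambda k: pbest_fit[k])]
        phi = np.random.rand(d)
        p = phi * pbest[i] + (1 - phi) * lbest    # local attractor
        u = np.random.uniform(1e-12, 1.0, size=d)
        sign = np.where(np.random.rand(d) < 0.5, -1.0, 1.0)
        X_new[i] = p + sign * beta * np.abs(mbest - X[i]) * np.log(1.0 / u)
    return X_new
```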


2014 ◽  
Vol 687-691 ◽  
pp. 1420-1424
Author(s):  
Hai Tao Han ◽  
Wan Feng Ji ◽  
Yao Qing Zhang ◽  
De Peng Sha

Optimization problems impose two main requirements: finding the global minimum and achieving fast convergence. As heuristic and swarm-intelligence algorithms, both particle swarm optimization and the genetic algorithm are widely used in vehicle path planning because of their favorable search performance. This paper analyzes the characteristics, similarities, and differences of the two algorithms and carries out simulation experiments under the same operational environment and threat state space. The results show that particle swarm optimization is superior to the genetic algorithm in search speed and convergence.


2017 ◽  
Vol 40 (6) ◽  
pp. 2039-2053 ◽  
Author(s):  
Jaouher Chrouta ◽  
Abderrahmen Zaafouri ◽  
Mohamed Jemli

In this paper, a new methodology to develop an Optimal Fuzzy model (OptiFel) using an improved Multi-swarm Particle Swarm Optimization (MsPSO) algorithm is proposed, with a new adaptive inertia weight based on grey relational analysis. Since the classical MsPSO suffers from premature convergence and can be trapped in local optima, which significantly affects model accuracy, a modified MsPSO algorithm is presented here. The most important advantage of the proposed algorithm is that fewer parameters need to be adjusted, the main one being the inertia weight; controlling this parameter facilitates convergence and prevents an explosion of the swarm. The performance of the proposed algorithm is evaluated using standard tests and indicators reported in the specialized literature. The proposed Grey MsPSO is first applied to the optimization of six benchmark functions and compared with nine other particle swarm optimization variants. To demonstrate the higher search performance of the proposed algorithm, the comparison is then made via two performance measures: the standard deviation and the central processing unit (CPU) time. To further validate the generalization ability of the improved OptiFel approach, the proposed algorithm is then applied to the Box–Jenkins gas furnace system and to an irrigation station prototype. A comparative study based on the mean square error is then performed between the proposed approach and other existing methods. As a result, the improved Grey MsPSO proves well suited to finding an optimal model for real processes with high accuracy and strong generalization ability.
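
As a hedged illustration of a grey-relational, per-particle inertia weight, the sketch below computes a grey relational grade between a particle and the global best and maps it to an inertia weight; both the use of the grade and the linear mapping are assumptions, not the paper's exact adaptation law.

```python
import numpy as np

def grey_relational_grade(x, ref, rho=0.5):
    """Grey relational grade between a particle and a reference (e.g. gbest)."""
    delta = np.abs(np.asarray(ref, dtype=float) - np.asarray(x, dtype=float))
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0.0:                              # identical sequences
        return 1.0
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return float(coeff.mean())

def adaptive_inertia(x, gbest, w_min=0.4, w_max=0.9):
    """Particles grey-similar to the global best get a smaller inertia weight
    (exploitation); dissimilar ones get a larger weight (exploration)."""
    return w_max - (w_max - w_min) * grey_relational_grade(x, gbest)
```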


Author(s):  
JINGXUAN WEI ◽  
YUPING WANG

In this paper, an infeasible-elitist-based particle swarm optimization is proposed for solving constrained optimization problems. Firstly, an infeasible elitist preservation strategy is proposed, which keeps some infeasible solutions with smaller rank values at the early stage of evolution regardless of how large their constraint violations are, and keeps some infeasible solutions with smaller constraint violations and rank values at the later stage of evolution. In this manner, the true Pareto front is found more easily. Secondly, in order to find a set of diverse and uniformly distributed Pareto-optimal solutions, a new crowding distance function is designed. It assigns large function values not only to particles located in sparse regions of the objective space but also to crowded particles located near the boundary of the Pareto front. Thirdly, a new mutation operator with two phases is proposed. In the first phase, the particles whose constraint violations are less than a threshold value are used to compute a total force, which is then used as a mutation direction, helping to find better solutions along that direction. To guarantee convergence of the algorithm, the second phase of mutation is proposed. Finally, the convergence of the algorithm is proved. The comparative study shows that the proposed algorithm generates widespread and uniformly distributed Pareto fronts and outperforms the compared algorithms.
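
One possible reading of the infeasible elitist preservation strategy is sketched below: early generations keep infeasible solutions by rank alone, while later generations order them by constraint violation first. The switching point, tie-breaking, and tolerance are assumptions made for illustration, not the authors' exact scheme.

```python
def select_infeasible_elitists(pop, ranks, violations, gen, max_gen, k, eps=1e-6):
    """Keep up to k infeasible elitists: by rank early in the run,
    by (violation, rank) later in the run."""
    idx = [i for i, v in enumerate(violations) if v > eps]  # infeasible only
    if gen < max_gen // 2:
        idx.sort(key=lambda i: ranks[i])                    # early stage
    else:
        idx.sort(key=lambda i: (violations[i], ranks[i]))   # later stage
    return [pop[i] for i in idx[:k]]
```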


2013 ◽  
Vol 22 (03) ◽  
pp. 1350015 ◽  
Author(s):  
HEMING XU ◽  
YINGLIN WANG ◽  
XIN XU

For multiobjective particle swarm optimization (MOPSO), two particles may be incomparable, i.e., not dominated by each other. The personal best and the global best for a particle then become less optimal, and convergence slows. Even worse, an archive of limited size cannot cover the entire region dominated by the Pareto front; the uncovered region can contain unidentifiable nondominated solutions that are not optimal, so the precision the algorithm achieves reaches a plateau. We therefore propose the dimensional update, i.e., evaluating the particle's fitness after updating each variable of its position. Considering the impact of each variable separately decreases the occurrence of incomparable relations and thus improves performance. Experimental results validate the efficiency of our algorithm.
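
A hedged sketch of the dimensional-update idea: the position is changed one variable at a time, the particle is re-evaluated after each change, and the change is kept unless the previous objective vector dominates the new one. The acceptance rule is an illustrative reading, not necessarily the authors' exact test.

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization)."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def dimensional_update(x, v, evaluate):
    """Update the position one variable at a time, re-evaluating after each
    change and keeping it unless the previous objectives dominate the new ones."""
    x = np.asarray(x, dtype=float).copy()
    f = evaluate(x)
    for d in range(len(x)):
        trial = x.copy()
        trial[d] += v[d]
        f_trial = evaluate(trial)
        if not dominates(f, f_trial):   # accept unless strictly worse
            x, f = trial, f_trial
    return x, f
```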


Information ◽  
2019 ◽  
Vol 10 (1) ◽  
pp. 22
Author(s):  
Shouwen Chen

Motivated by concepts in quantum mechanics and particle swarm optimization (PSO), quantum-behaved particle swarm optimization (QPSO) was proposed as a variant of PSO with better global search ability. In this paper, a QPSO with a weighted mean personal best position and an adaptive local attractor (ALA-QPSO) is proposed to simultaneously enhance the search performance of QPSO and achieve good global optimization ability. In ALA-QPSO, the weighted mean personal best position is obtained by distinguishing the differing contributions of particles with different fitness, and the adaptive local attractor is calculated using the sum of squared deviations of the particles' fitness values as the coefficient of the linear combination of the particle's best known position and the entire swarm's best known position. The proposed ALA-QPSO algorithm is tested on twelve benchmark functions and compared with the basic Artificial Bee Colony algorithm and four other QPSO variants. Experimental results show that ALA-QPSO performs better than the compared methods on all of the benchmark functions in terms of global search capability and convergence rate.
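
The two ingredients named above can be pictured with the hedged sketch below: a weighted mean personal best position in which better particles get larger weights, and a local attractor whose mixing coefficient is driven by the spread of the swarm's fitness values. The linear ranking weights and the phi mapping are illustrative stand-ins, not the paper's formulas.

```python
import numpy as np

def weighted_mean_best(pbest, pbest_fit):
    """Weighted mean personal best position: better (smaller) fitness
    receives a larger weight (linear ranking, for illustration)."""
    pbest, pbest_fit = np.asarray(pbest, float), np.asarray(pbest_fit, float)
    n = len(pbest_fit)
    w = np.empty(n)
    w[np.argsort(pbest_fit)] = np.arange(n, 0, -1)  # best particle gets weight n
    w /= w.sum()
    return (w[:, None] * pbest).sum(axis=0)

def adaptive_attractor(pbest_i, gbest, pbest_fit):
    """Local attractor as a convex combination of pbest and gbest whose mixing
    coefficient grows with the spread (sum of squared deviations) of the
    swarm's fitness values."""
    pbest_fit = np.asarray(pbest_fit, dtype=float)
    dev = np.sum((pbest_fit - pbest_fit.mean()) ** 2)
    phi = dev / (1.0 + dev)            # in [0, 1): large spread -> lean on pbest
    return phi * np.asarray(pbest_i, float) + (1.0 - phi) * np.asarray(gbest, float)
```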

