A cooperative particle swarm optimizer with stochastic movements for computationally expensive numerical optimization problems

2016 ◽  
Vol 13 ◽  
pp. 68-82 ◽  
Author(s):  
Thi Thuy Ngo ◽  
Ali Sadollah ◽  
Joong Hoon Kim

2015 ◽  
Vol 24 (05) ◽  
pp. 1550017 ◽  
Author(s):  
Aderemi Oluyinka Adewumi ◽  
Akugbe Martins Arasomwan

This paper presents an improved particle swarm optimization (PSO) technique for global optimization. Many variants of the technique have been proposed in the literature; however, most share two limiting features, namely a static search space and static velocity limits, which restrict their flexibility in obtaining optimal solutions for many optimization problems. Furthermore, premature convergence persists in many variants despite the introduction of additional parameters such as the inertia weight and the extra computation they require. This paper proposes an improved PSO algorithm without an inertia weight. At each iteration, the proposed algorithm dynamically adjusts the search space and velocity limits of the swarm: it picks the highest and lowest values across all dimensions of all particles, takes their absolute values, and uses the larger of the two to define the search range and velocity limits for the next iteration. The efficiency and performance of the proposed algorithm were demonstrated on popular benchmark global optimization problems of low and high dimension. The results show better convergence speed and precision, stability, robustness, and global search ability when compared with six recent variants of the original algorithm.
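
As a rough illustration of the dynamic range rule described above, the following NumPy sketch rebuilds the search range and velocity limit from the swarm at every iteration; the initial ±100 range, the acceleration constants of 2.0, and the sphere test function are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def dynamic_range_pso(f, dim, n_particles=30, iters=200, seed=0):
    # Minimal sketch of a PSO without an inertia weight whose search range and
    # velocity limit are re-derived from the swarm itself at every iteration.
    rng = np.random.default_rng(seed)
    bound = 100.0                                  # assumed initial symmetric range
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    c1 = c2 = 2.0                                  # assumed acceleration constants

    for _ in range(iters):
        # The larger of |max| and |min| over every particle and dimension defines
        # both the position range and the velocity limit for this iteration.
        bound = max(abs(x.max()), abs(x.min()))
        r1, r2 = rng.random((2, n_particles, dim))
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # no inertia weight
        v = np.clip(v, -bound, bound)
        x = np.clip(x + v, -bound, bound)

        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

# Example: 30-dimensional sphere function
best_x, best_f = dynamic_range_pso(lambda z: float(np.sum(z**2)), dim=30)
```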


Author(s):  
T. O. Ting ◽  
H. C. Ting ◽  
T. S. Lee

In this work, a hybrid Taguchi-Particle Swarm Optimization (TPSO) is proposed to solve global numerical optimization problems with continuous and discrete variables. The hybrid algorithm combines the well-known particle swarm optimization algorithm with the established Taguchi method, an important tool for robust design. This paper presents the improvements obtained despite the simplicity of the hybridization process: the Taguchi method is run only once in every PSO iteration and therefore adds little computational cost, while creating a more diversified population that helps avoid premature convergence. The proposed method is applied to 13 benchmark problems. The results show marked improvements over the standard PSO algorithm on high-dimensional benchmark functions involving continuous and discrete variables.
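
A rough picture of the per-iteration Taguchi step can be given with a two-level L8 orthogonal array; in the sketch below the two "levels" of each dimension come from two candidate solutions, and a per-factor effect analysis assembles a predicted best combination. The array choice, the pairing, and the minimization-based factor analysis are assumptions for illustration, not the exact TPSO operator.

```python
import numpy as np

# Two-level L8 orthogonal array (8 trials x up to 7 factors), levels in {0, 1}.
L8 = np.array([[0,0,0,0,0,0,0],
               [0,0,0,1,1,1,1],
               [0,1,1,0,0,1,1],
               [0,1,1,1,1,0,0],
               [1,0,1,0,1,0,1],
               [1,0,1,1,0,1,0],
               [1,1,0,0,1,1,0],
               [1,1,0,1,0,0,1]])

def taguchi_combine(f, p1, p2):
    """Sketch of one Taguchi step over two candidate solutions: each dimension
    is a two-level factor whose levels come from p1 and p2; the L8 array fixes
    the trials and a per-factor effect analysis (lower mean objective wins)
    assembles a predicted best combination."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    dim = p1.size
    assert dim <= L8.shape[1], "this sketch handles at most 7 dimensions"
    levels = np.stack([p1, p2])                     # levels[0] = p1, levels[1] = p2
    trials = levels[L8[:, :dim], np.arange(dim)]    # 8 trial vectors
    y = np.apply_along_axis(f, 1, trials)           # objective values (minimization)
    best = np.empty(dim)
    for d in range(dim):
        best[d] = p1[d] if y[L8[:, d] == 0].mean() <= y[L8[:, d] == 1].mean() else p2[d]
    return best
```

In a TPSO-style loop, such a step might be applied once per iteration, for example to the global best and a randomly chosen particle, keeping the combined vector only if it improves the objective; the seven-dimension limit comes from the chosen array, and larger orthogonal arrays would be needed in practice.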


2015 ◽  
pp. 1246-1276
Author(s):  
Wen Fung Leong ◽  
Yali Wu ◽  
Gary G. Yen

Generally, constraint-handling techniques are designed for evolutionary algorithms to solve Constrained Multiobjective Optimization Problems (CMOPs). Most Multiobjective Particle Swarm Optimization (MOPSO) designs adopt these existing constraint-handling techniques to deal with CMOPs. In this chapter, the authors present a constrained MOPSO in which information about the particles' feasibility and infeasibility status is used to guide the particles toward feasible solutions and to improve the quality of the solutions found. The personal best archive is updated based on the particles' Pareto ranks and constraint violations, and an infeasible global best archive stores infeasible nondominated solutions. The acceleration constants are adjusted according to the feasibility status of the personal bests and the selected global bests, and the feasibility status of the personal bests is also used to estimate the mutation rate in the mutation procedure. The simulation results indicate that the proposed constrained MOPSO is highly competitive in solving selected benchmark problems.
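
The feasibility-driven adaptations mentioned in the abstract can be illustrated with two small helpers; the specific rules and constants below (c_low, c_high, p_min, p_max) are illustrative assumptions rather than the chapter's actual formulas.

```python
def adaptive_acceleration(pbest_feasible: bool, gbest_feasible: bool,
                          c_low: float = 1.5, c_high: float = 2.5):
    # Assumed rule: strengthen the pull toward a feasible guide and weaken the
    # pull toward an infeasible one.
    c1 = c_high if pbest_feasible else c_low   # cognitive component
    c2 = c_high if gbest_feasible else c_low   # social component
    return c1, c2

def mutation_rate(n_feasible_pbests: int, swarm_size: int,
                  p_min: float = 0.01, p_max: float = 0.5):
    # Assumed rule: mutate aggressively while few personal bests are feasible,
    # easing off as the feasible fraction of the swarm grows.
    feasible_fraction = n_feasible_pbests / swarm_size
    return p_max - (p_max - p_min) * feasible_fraction
```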


Mathematics ◽  
2019 ◽  
Vol 7 (6) ◽  
pp. 521 ◽  
Author(s):  
Fanrong Kong ◽  
Jianhui Jiang ◽  
Yan Huang

As a powerful optimization tool, particle swarm optimizers have been widely applied in many optimization areas and have drawn much attention. For large-scale optimization problems, however, these algorithms struggle to reach satisfactory results because they lack the ability to maintain diversity. In this paper, an adaptive multi-swarm particle swarm optimizer is proposed that adaptively divides the swarm into several sub-swarms and employs a competition mechanism to select exemplars. On the one hand, this increases the diversity of the exemplars, which helps the swarm preserve its exploration ability; on the other hand, the number of sub-swarms adaptively decreases from a large value to a small one, which helps the algorithm strike a suitable balance between exploitation and exploration. Comparisons against several peer algorithms on the CEC 2013 large-scale optimization benchmark suite were conducted to validate the proposal, and the experimental results demonstrate that the proposed algorithm is effective and competitive for large-scale optimization problems.
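
A hedged sketch of how the sub-swarm division, the pairwise competition, and the shrinking sub-swarm count could fit together is given below; the loser-learns-from-winner rule with a sub-swarm-mean term and the linear schedule follow the general competitive-swarm pattern and are assumptions, not the paper's exact operators.

```python
import numpy as np

def subswarm_count(it, max_iter, k_max=20, k_min=2):
    # Assumed schedule: shrink the number of sub-swarms linearly over the run.
    return max(k_min, round(k_max - (k_max - k_min) * it / max_iter))

def adaptive_multiswarm_step(x, v, fitness, n_subswarms, rng, phi=0.1):
    # One update step: particles are regrouped at random into sub-swarms, members
    # of each sub-swarm compete in pairs, and every loser learns from its winner
    # (the exemplar) and from the sub-swarm mean.
    n, dim = x.shape
    groups = np.array_split(rng.permutation(n), n_subswarms)
    for g in groups:
        if len(g) < 2:
            continue
        mean_g = x[g].mean(axis=0)
        for a, b in zip(g[0::2], g[1::2]):
            winner, loser = (a, b) if fitness[a] <= fitness[b] else (b, a)
            r1, r2, r3 = rng.random((3, dim))
            v[loser] = (r1 * v[loser]
                        + r2 * (x[winner] - x[loser])
                        + phi * r3 * (mean_g - x[loser]))
            x[loser] = x[loser] + v[loser]
    return x, v
```

After each step the losers' fitness would be re-evaluated before the next regrouping, with subswarm_count supplying n_subswarms from the current iteration.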


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 414 ◽  
Author(s):  
Weian Guo ◽  
Lei Zhu ◽  
Lei Wang ◽  
Qidi Wu ◽  
Fanrong Kong

Diversity maintenance is crucial to the performance of a particle swarm optimizer (PSO). However, the particle update mechanism in conventional PSO maintains diversity poorly, which often results in premature convergence or stagnation of exploration in the search space. To enhance diversity maintenance in PSO, many works adjust the distances among particles. Such operators, however, mean that diversity maintenance and fitness evaluation are conducted in the same distance-based space, which brings a new challenge in trading off convergence speed against diversity preservation. In this paper, a novel PSO is proposed that employs a competitive strategy to drive the convergence operator and an entropy measure to manage diversity maintenance. The proposed algorithm was applied to the CEC 2013 large-scale optimization benchmark suite, and the results demonstrate that it is feasible and competitive for large-scale optimization problems.
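
A simple way to realize an entropy-based diversity measure, kept separate from the distance-based space used for fitness, is sketched below; the histogram-based per-dimension entropy is an assumption, and the paper's exact measurement may differ.

```python
import numpy as np

def swarm_entropy(x, n_bins=10, eps=1e-12):
    # Diversity measure: per-dimension Shannon entropy of the particle
    # distribution over n_bins histogram cells, averaged over dimensions.
    # High entropy = particles spread out; low entropy = crowding.
    n, dim = x.shape
    h = 0.0
    for d in range(dim):
        counts, _ = np.histogram(x[:, d], bins=n_bins)
        p = counts / n
        h += -np.sum(p * np.log(p + eps))
    return h / dim
```

In such a scheme, a competitive (loser-learns-from-winner) update drives convergence, while a drop of the entropy below a threshold can trigger diversity-preserving moves.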

