Parallel and Cooperative Particle Swarm Optimizer for Multimodal Problems

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Geng Zhang ◽  
Yangmin Li

Although the original particle swarm optimizer (PSO) and its variants show some effectiveness in solving optimization problems, they can easily get trapped in a local optimum, especially on complex multimodal problems. To address this issue, this paper puts forward a novel method called the parallel and cooperative particle swarm optimizer (PCPSO). When the elements of the D-dimensional function vector X = [x1, x2, …, xd, …, xD] do not interact, the cooperative particle swarm optimizer (CPSO) can be applied directly. Building on this, PCPSO is presented to solve real problems, in which the dimensions cannot be split into several lower-dimensional search spaces because the elements do interact; PCPSO therefore exploits the cooperation of two parallel CPSO algorithms through orthogonal experimental design (OED) learning. First, the CPSO algorithm is used to generate two locally optimal vectors separately; then OED learns the merits of these two vectors and creates a better combination of them to guide further search. Experimental studies on a set of test functions show that PCPSO exhibits better robustness and converges much closer to the global optimum than several peer algorithms.
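A minimal sketch of the combination step described above, using a greedy dimension-by-dimension stand-in for OED learning rather than the authors' exact orthogonal-array formulation; the objective function and vectors are hypothetical placeholders.

```python
import numpy as np

def sphere(x):
    """Hypothetical minimization objective used only for illustration."""
    return float(np.sum(x ** 2))

def combine_vectors(v1, v2, f):
    """Greedy stand-in for OED learning: starting from the better of the two
    locally optimal vectors, keep each coordinate of the other vector only if
    substituting it lowers the objective value."""
    base, other = (v1, v2) if f(v1) <= f(v2) else (v2, v1)
    combined = base.copy()
    for d in range(len(combined)):
        trial = combined.copy()
        trial[d] = other[d]
        if f(trial) < f(combined):   # keep the better "level" for this factor
            combined = trial
    return combined

# Usage: combine two locally optimal vectors produced by two parallel CPSO runs.
v1 = np.array([0.5, -1.2, 0.1])
v2 = np.array([-0.1, 0.3, 0.8])
print(combine_vectors(v1, v2, sphere))
```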

2017 ◽  
Vol 23 (8) ◽  
pp. 985-1001 ◽  
Author(s):  
Ali MORTAZAVI ◽  
Vedat TOĞAN ◽  
Ayhan NUHOĞLU

This study investigates the performance of the integrated particle swarm optimizer (iPSO) algorithm in the layout and sizing optimization of truss structures. The iPSO enhances the standard PSO algorithm by employing both the concept of the weighted particle and an improved fly-back method to handle optimization constraints. The performance of the algorithm is tested on a series of well-known truss structure weight minimization problems involving mixed design search spaces (i.e., with both discrete and continuous variables) and various types of constraints (i.e., nodal displacements, element stresses, and buckling criteria). The results demonstrate the validity of the proposed approach in dealing with combined layout and size optimization problems.
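A hedged sketch of the fly-back idea for constraint handling in PSO; the retreat factor and the feasibility test below are illustrative assumptions, not the paper's exact improved fly-back rule.

```python
import numpy as np

def flyback_step(x, v, x_last_feasible, is_feasible, shrink=0.5):
    """One position update with a simple fly-back constraint handler:
    if the tentative position violates a constraint, retreat toward the
    particle's last feasible position instead of accepting the infeasible point."""
    x_new = x + v
    if is_feasible(x_new):
        return x_new
    retreat = x_last_feasible + shrink * (x_new - x_last_feasible)
    return retreat if is_feasible(retreat) else x_last_feasible

# Example feasibility test: box constraints plus a simple resource-style limit.
is_ok = lambda p: bool(np.all(np.abs(p) <= 10.0) and np.sum(p) <= 5.0)
x = np.array([4.0, 0.5])
v = np.array([8.0, 0.0])
print(flyback_step(x, v, x, is_ok))
```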


2012 ◽  
Vol 433-440 ◽  
pp. 4526-4529 ◽  
Author(s):  
Hua Li Wu ◽  
Jin Hua Wu ◽  
Ai Li Liu

PSO has been widely used for continuous optimization problems, but research and application in the discrete domain remain limited. By redefining the position and velocity of particles and the related operations, a discrete particle swarm algorithm can be constructed. Because PSO has weak local search capability and easily converges to a local optimum, it is combined with simulated annealing, which can accept some worse solutions under the control of a certain probability; the resulting hybrid discrete PSO is then applied successfully to the traveling salesman problem. The simulation results show that the hybrid discrete PSO obtains better optimization results, which validates the effectiveness of the method.
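A minimal sketch of grafting the simulated-annealing acceptance rule onto a swap-based discrete move for the TSP; the tour representation, swap move, and cooling schedule are illustrative assumptions rather than the paper's exact operators.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def sa_accept(delta, temperature):
    """Metropolis rule: always accept improvements; accept worse tours
    with probability exp(-delta / T)."""
    return delta < 0 or random.random() < math.exp(-delta / temperature)

def hybrid_step(tour, dist, temperature):
    """One discrete 'velocity' move (a random city swap) filtered by SA acceptance."""
    i, j = random.sample(range(len(tour)), 2)
    candidate = tour[:]
    candidate[i], candidate[j] = candidate[j], candidate[i]
    delta = tour_length(candidate, dist) - tour_length(tour, dist)
    return candidate if sa_accept(delta, temperature) else tour

# Tiny 4-city example with a symmetric distance matrix.
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
tour, T = [0, 1, 2, 3], 10.0
for _ in range(100):
    tour = hybrid_step(tour, dist, T)
    T *= 0.95          # geometric cooling
print(tour, tour_length(tour, dist))
```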


2021 ◽  
Vol 2021 ◽  
pp. 1-24
Author(s):  
Sami Zdiri ◽  
Jaouher Chrouta ◽  
Abderrahmen Zaafouri

In this study, a modified version of the multiswarm particle swarm optimization algorithm (MsPSO) is proposed. The classical MsPSO algorithm suffers from premature stagnation because of limited particle diversity; as a result, it easily slips into a local optimum. To overcome this weakness, this work presents a heterogeneous multiswarm PSO algorithm based on adaptive inertia weight strategies, called A-MsPSO. The main advantages of MsPSO are that it is simple to use and that there are few settings to tune. In the MsPSO method, the inertia weight is a key parameter that considerably affects convergence, exploration, and exploitation. In this manuscript, an adaptive inertia weight is adopted to improve the global search ability of the classical MsPSO algorithm. Its performance rests on exploration, defined as an algorithm's capacity to search through a variety of search spaces, while also supporting exploitation, the ability to search a small region closely and refine the candidate solution. If a swarm discovers a new global best location during an iteration, the inertia weight is increased and exploration in that direction is enhanced. Standard tests and indicators from the specialized literature are used to show the efficiency of the proposed algorithm. Furthermore, comparisons between A-MsPSO and six other common PSO algorithms show that our proposal has highly promising performance for handling various types of optimization problems, leading to both greater solution accuracy and shorter solution times.
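A hedged sketch of a success-driven inertia-weight adaptation of the kind described above; the step size, bounds, and velocity-update constants are assumptions, not the exact A-MsPSO rule.

```python
import numpy as np

def adapt_inertia(w, improved_gbest, w_min=0.4, w_max=0.9, step=0.05):
    """Raise the inertia weight of a sub-swarm that just improved its global
    best (strengthening exploration in that direction); otherwise let it decay
    toward w_min to favor exploitation."""
    w = w + step if improved_gbest else w - step
    return min(w_max, max(w_min, w))

def velocity_update(v, x, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """Standard PSO velocity update showing where the adaptive w enters."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

w = adapt_inertia(0.7, improved_gbest=True)   # sub-swarm just found a new gbest
print(w)                                      # -> 0.75
```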


2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and increase population diversity. Particles in groups that have fallen into a local optimum are adaptively adjusted toward the global optimum by judging the spatial concentration of the groups and the fitness variance. At the same time, the global factors are adjusted dynamically according to the current particle fitness. Four typical function optimization problems are used in the simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
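One way to realize the stagnation test sketched above is to monitor a normalized fitness variance of a group and re-seed particles when it collapses; the normalization, threshold, and re-seeding rule below are assumptions for illustration only.

```python
import numpy as np

def fitness_variance(fitnesses):
    """Normalized fitness variance: a value near zero indicates the group has
    clustered around a single (possibly local) optimum."""
    f = np.asarray(fitnesses, dtype=float)
    dev = f - f.mean()
    scale = max(1.0, float(np.max(np.abs(dev))))   # normalization factor
    return float(np.mean((dev / scale) ** 2))

def reseed_if_stagnant(positions, fitnesses, bounds, threshold=1e-3, rng=None):
    """Randomly re-initialize half of the group when the fitness variance
    drops below the threshold, restoring diversity."""
    rng = rng or np.random.default_rng()
    if fitness_variance(fitnesses) >= threshold:
        return positions
    lo, hi = bounds
    n, d = positions.shape
    idx = rng.choice(n, size=n // 2, replace=False)
    positions = positions.copy()
    positions[idx] = rng.uniform(lo, hi, size=(len(idx), d))
    return positions

pos = np.full((6, 2), 1.0)                 # a fully clustered group
fit = (pos ** 2).sum(axis=1)
print(reseed_if_stagnant(pos, fit, bounds=(-5.0, 5.0)))
```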


2015 ◽  
Vol 24 (05) ◽  
pp. 1550017 ◽  
Author(s):  
Aderemi Oluyinka Adewumi ◽  
Akugbe Martins Arasomwan

This paper presents an improved particle swarm optimization (PSO) technique for global optimization. Many variants of the technique have been proposed in the literature; however, two features characterize many of them, namely a static search space and static velocity limits, which restrict their flexibility in obtaining optimal solutions for many optimization problems. Furthermore, the problem of premature convergence persists in many variants despite the introduction of additional parameters such as the inertia weight and extra computational effort. This paper proposes an improved PSO algorithm without an inertia weight. The proposed algorithm dynamically adjusts the search space and velocity limits for the swarm in each iteration by picking the highest and lowest values among all the dimensions of the particles, computing their absolute values, and then using the larger of the two to define a new search range and new velocity limits for the next iteration. The efficiency and performance of the proposed algorithm were demonstrated on popular benchmark global optimization problems with low and high dimensions. The results show better convergence speed, precision, stability, robustness, and global search ability compared with six recent variants of the original algorithm.
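A minimal sketch of the range update described above: take the largest and smallest coordinate values over the whole swarm, compare their absolute values, and let the larger one define a symmetric search range and velocity limit for the next iteration; the exact clamping details are assumptions.

```python
import numpy as np

def update_limits(positions):
    """Derive the next iteration's search range and velocity limit from the
    swarm itself: the larger of |highest coordinate| and |lowest coordinate|
    over all particles and dimensions defines a symmetric bound [-r, r]."""
    highest = float(np.max(positions))
    lowest = float(np.min(positions))
    r = max(abs(highest), abs(lowest))
    return -r, r          # the same r can also cap the velocity magnitude

positions = np.array([[1.5, -3.2], [0.7, 2.8], [-4.1, 0.2]])
lo, hi = update_limits(positions)
print(lo, hi)   # -4.1 4.1
```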


Mekatronika ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 35-43
Author(s):  
K. M. Ang ◽  
Z. S. Yeap ◽  
C. E. Chow ◽  
W. Cheng ◽  
W. H. Lim

Different variants of particle swarm optimization (PSO) algorithms have been introduced in recent years with various improvements to tackle different types of optimization problems more robustly. However, the conventional initialization scheme tends to generate an initial population with relatively inferior solutions due to its random-guess mechanism. In this paper, a PSO variant known as modified PSO with a chaotic initialization scheme is introduced to solve unconstrained global optimization problems more effectively by generating a more promising initial population. Experimental studies are conducted to assess and compare the optimization performance of the proposed algorithm with four existing well-established PSO variants on seven test functions. The proposed algorithm is observed to outperform its competitors in solving the selected test problems.
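A minimal sketch of a chaotic initialization scheme based on the logistic map; the map parameter, seed, and scaling into the search interval are common choices and not necessarily the paper's exact scheme.

```python
import numpy as np

def chaotic_init(n_particles, dim, lower, upper, seed=0.7, mu=4.0):
    """Generate the initial population with the logistic map z <- mu*z*(1-z)
    instead of uniform random numbers, then scale each iterate into
    [lower, upper]."""
    z = seed
    pop = np.empty((n_particles, dim))
    for i in range(n_particles):
        for d in range(dim):
            z = mu * z * (1.0 - z)            # chaotic iterate in (0, 1)
            pop[i, d] = lower + z * (upper - lower)
    return pop

print(chaotic_init(3, 2, -5.0, 5.0))
```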


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Li Mao ◽  
Yu Mao ◽  
Changxi Zhou ◽  
Chaofeng Li ◽  
Xiao Wei ◽  
...  

The artificial bee colony (ABC) algorithm performs well in discovering optimal solutions to difficult optimization problems, but it has weak local search ability and easily falls into a local optimum. In this paper, we introduce the chemotactic behavior of Bacterial Foraging Optimization into the employed bees and adopt the particle swarm optimization principle of moving particles toward the best solutions to improve the global search ability of the onlooker bees, obtaining a hybrid artificial bee colony (HABC) algorithm. To reach a global optimal solution efficiently, the HABC algorithm converges rapidly in the early stages of the search process, and its search range contracts dynamically during the later stages. Our experimental results on 16 benchmark functions of CEC 2014 show that HABC achieves significant improvements in accuracy and convergence rate compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms.
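A hedged sketch of moving onlooker bees toward the best solution in PSO style; the coefficients and the neighbor interaction are assumptions, and the employed bees' BFO chemotaxis step is omitted.

```python
import numpy as np

def onlooker_move(x, x_best, x_neighbor, rng=None, c=1.5):
    """PSO-inspired onlooker update: keep ABC's random interaction with a
    neighboring food source while adding an attraction toward the best
    solution found so far."""
    rng = rng or np.random.default_rng()
    phi = rng.uniform(-1.0, 1.0, size=x.shape)    # classic ABC perturbation term
    r = rng.uniform(0.0, 1.0, size=x.shape)       # PSO-like attraction weight
    return x + phi * (x - x_neighbor) + c * r * (x_best - x)

x = np.array([1.0, 2.0])
best = np.array([0.0, 0.0])
neighbor = np.array([1.5, 1.0])
print(onlooker_move(x, best, neighbor))
```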


Author(s):  
Jiarui Zhou ◽  
Junshan Yang ◽  
Ling Lin ◽  
Zexuan Zhu ◽  
Zhen Ji

Particle swarm optimization (PSO) is a swarm intelligence algorithm well known for its simplicity and high efficiency on various problems. Conventional PSO suffers from premature convergence due to its rapid convergence speed and lack of population diversity, so it easily gets trapped in local optima. For this reason, improvements are made to detect stagnation during the optimization and reactivate the swarm to search toward the global optimum. This chapter imposes the reflecting bound-handling scheme and the von Neumann topology on PSO to increase population diversity. A novel crown jewel defense (CJD) strategy is introduced to restart the swarm when it is trapped in a local optimum region. The resulting algorithm, named LCJDPSO-rfl, is tested on a group of unimodal and multimodal benchmark functions with rotation and shifting. Experimental results suggest that LCJDPSO-rfl outperforms state-of-the-art PSO variants on most of the functions.
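A minimal sketch of a reflecting bound-handling rule of the kind named above; reversing the corresponding velocity components is a common companion choice and is an assumption here, not necessarily the chapter's exact scheme.

```python
import numpy as np

def reflect(x, v, lower, upper):
    """Reflect out-of-bound coordinates back into [lower, upper] and reverse
    the corresponding velocity components."""
    x, v = x.copy(), v.copy()
    over = x > upper
    under = x < lower
    x[over] = 2.0 * upper - x[over]
    x[under] = 2.0 * lower - x[under]
    v[over | under] *= -1.0
    return x, v

x = np.array([5.6, -0.3])
v = np.array([1.2, -0.5])
print(reflect(x, v, lower=0.0, upper=5.0))   # positions folded back inside the box
```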


2015 ◽  
pp. 1246-1276
Author(s):  
Wen Fung Leong ◽  
Yali Wu ◽  
Gary G. Yen

Generally, constraint-handling techniques are designed for evolutionary algorithms to solve Constrained Multiobjective Optimization Problems (CMOPs). Most Multiobjective Particle Swarm Optimization (MOPSO) designs adopt these existing constraint-handling techniques to deal with CMOPs. In this chapter, the authors present a constrained MOPSO in which information about the particles' feasibility and infeasibility status is used effectively to guide the particles toward feasible solutions and to improve the quality of the optimal solutions found. The updating of the personal best archive is based on the particles' Pareto ranks and their constraint violations. An infeasible global best archive is adopted to store infeasible nondominated solutions. The acceleration constants are adjusted depending on the feasibility status of the personal bests and the selected global bests, and the personal bests' feasibility statuses are also used to estimate the mutation rate in the mutation procedure. The simulation results indicate that the proposed constrained MOPSO is highly competitive in solving the selected benchmark problems.
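A hedged sketch of adjusting the acceleration constants according to the feasibility status of the personal best and the selected global best; the specific coefficient values are illustrative assumptions, not the authors' rule.

```python
def acceleration_constants(pbest_feasible, gbest_feasible,
                           c_low=1.0, c_high=2.5):
    """Weight attraction toward feasible guides more strongly: a feasible
    guide receives the high coefficient, an infeasible one the low coefficient."""
    c1 = c_high if pbest_feasible else c_low    # cognitive term
    c2 = c_high if gbest_feasible else c_low    # social term
    return c1, c2

print(acceleration_constants(True, False))      # -> (2.5, 1.0)
```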


Mathematics ◽  
2019 ◽  
Vol 7 (6) ◽  
pp. 521 ◽  
Author(s):  
Fanrong Kong ◽  
Jianhui Jiang ◽  
Yan Huang

As a powerful optimization tool, particle swarm optimizers have been widely applied to many different optimization areas and have drawn much attention. However, for large-scale optimization problems, such algorithms struggle to reach satisfactory results because they lack the ability to maintain diversity. In this paper, an adaptive multi-swarm particle swarm optimizer is proposed that adaptively divides the swarm into several sub-swarms and employs a competition mechanism to select exemplars. In this way, on the one hand, the diversity of exemplars increases, which helps the swarm preserve its exploitation ability; on the other hand, the number of sub-swarms adaptively decreases from a large value to a small one, which helps the algorithm strike a suitable balance between exploitation and exploration. Comparisons against several peer algorithms on the CEC 2013 large-scale optimization benchmark suite demonstrate that the proposed algorithm is effective and competitive in addressing large-scale optimization problems.
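A hedged sketch of the competition mechanism: the swarm is split into sub-swarms, and within each sub-swarm the winner of a pairwise fitness competition serves as the exemplar toward which the loser learns; the division and learning rules below are simplified assumptions, not the exact proposed algorithm.

```python
import numpy as np

def competitive_subswarm_step(positions, fitnesses, n_subswarms, rng=None, lr=0.5):
    """Split the swarm into sub-swarms, run pairwise competitions inside each,
    and move every loser toward its winner (the exemplar)."""
    rng = rng or np.random.default_rng()
    order = rng.permutation(len(positions))
    groups = np.array_split(order, n_subswarms)
    new_positions = positions.copy()
    for g in groups:
        g = rng.permutation(g)
        for a, b in zip(g[0::2], g[1::2]):        # pairwise competitions
            winner, loser = (a, b) if fitnesses[a] < fitnesses[b] else (b, a)
            r = rng.uniform(0.0, 1.0, positions.shape[1])
            new_positions[loser] += lr * r * (positions[winner] - positions[loser])
    return new_positions

pos = np.random.default_rng(1).uniform(-5, 5, size=(12, 3))
fit = (pos ** 2).sum(axis=1)                      # sphere fitness, minimization
print(competitive_subswarm_step(pos, fit, n_subswarms=3).shape)
```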

