An Improved Particle Swarm Optimization with Gaussian Disturbance

2018 ◽  
Vol 232 ◽  
pp. 03015
Author(s):  
Changjun Wen ◽  
Changlian Liu ◽  
Heng Zhang ◽  
Hongliang Wang

The particle swarm optimization (PSO) algorithm is a widely used tool for solving optimization problems in engineering. However, PSO is prone to falling into local optima and suffers from slow convergence and low convergence precision. In view of these shortcomings, a particle swarm optimization with Gaussian disturbance (GDPSO) is proposed. By introducing a Gaussian disturbance into the self-cognition and social-cognition parts of the algorithm, the method improves the convergence speed and precision of the algorithm and strengthens its ability to escape local optimal solutions. After the several evolutionary modes of the GDPSO algorithm are analyzed, the algorithm is evaluated in simulation on the Griewank function. The experimental results show that the convergence speed and optimization precision of GDPSO are better than those of PSO.
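A minimal sketch of the kind of velocity update the abstract describes, with a Gaussian disturbance added to the self-cognition and social-cognition terms; the disturbance placement, its zero-mean form, and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gdpso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, sigma=0.1):
    """One velocity update with Gaussian disturbance on both cognition terms."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    g1 = np.random.normal(0.0, sigma, size=x.shape)  # disturbance on self-cognition
    g2 = np.random.normal(0.0, sigma, size=x.shape)  # disturbance on social cognition
    cognitive = c1 * r1 * (pbest + g1 - x)           # pull toward (perturbed) personal best
    social = c2 * r2 * (gbest + g2 - x)              # pull toward (perturbed) global best
    return w * v + cognitive + social
```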

Author(s):  
Shengyu Pei

How to solve constrained optimization problems constitutes an important part of research on optimization. In this paper, a hybrid immune clonal particle swarm optimization multi-objective algorithm is proposed to solve constrained optimization problems. In the proposed algorithm, the population is first initialized using good point set theory. Then, differential evolution is adopted to refine the local optimal solution of each particle, with an immune clonal strategy incorporated to improve each particle. As a final step, a sub-swarm is used to enhance the position and velocity of individual particles. The new algorithm has been tested on 24 standard test functions and three engineering optimization problems, and the results show that it performs well in terms of both robustness and convergence.
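The abstract states that differential evolution refines each particle's personal best; below is a sketch of one such refinement step (DE/rand/1 with binomial crossover). The DE variant, parameter values, and the choice to operate on the personal-best archive are assumptions for illustration.

```python
import numpy as np

def de_refine(pbests, i, f=0.5, cr=0.9, rng=None):
    """Build a DE/rand/1 trial vector for particle i's personal best."""
    rng = rng or np.random.default_rng()
    n, dim = pbests.shape
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
    mutant = pbests[r1] + f * (pbests[r2] - pbests[r3])   # differential mutation
    cross = rng.random(dim) < cr                          # binomial crossover mask
    cross[rng.integers(dim)] = True                       # keep at least one mutant gene
    trial = np.where(cross, mutant, pbests[i])
    return trial  # caller keeps it only if its fitness improves
```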


2019 ◽  
Vol 10 (1) ◽  
pp. 107-131
Author(s):  
Anuj Chandila ◽  
Shailesh Tiwari ◽  
K. K. Mishra ◽  
Akash Punhani

This article describes optimization as a process of finding the best solution among all available solutions to a problem. Many randomized algorithms have been designed to identify optimal solutions in optimization problems. Among these, evolutionary programming, evolution strategies, genetic algorithms, particle swarm optimization, and genetic programming are widely accepted for optimization problems. Although a number of randomized algorithms are available in the literature for solving optimization problems, their design objectives are the same. Each algorithm has been designed to meet certain goals, such as minimizing the total number of fitness evaluations needed to capture nearly optimal solutions, capturing diverse optimal solutions in multimodal problems when needed, and avoiding local optimal solutions in multimodal problems. This article discusses a novel optimization algorithm named the Environmental Adaption Method (EAM) for solving optimization problems. EAM is designed to reduce the overall processing time for retrieving the optimal solution of a problem, to improve the quality of solutions, and particularly to avoid being trapped in local optima. The results of the proposed algorithm are compared with the latest versions of existing algorithms, such as particle swarm optimization (PSO-TVAC) and differential evolution (SADE), on benchmark functions, and the proposed algorithm proves its effectiveness over the existing algorithms in all the cases considered.


2013 ◽  
Vol 333-335 ◽  
pp. 1374-1378
Author(s):  
Shu Xia Dong ◽  
Liang Tang

To address the tendency of basic particle swarm optimization to fall into local optima when dealing with multimodal problems, a dynamic neighborhood particle swarm optimization with an external archive (EA-DPSO) is proposed. The Ring, All, and Von Neumann topologies are adopted, the particles' historical optimal positions are dynamically refined, and these positions are then stored in the external archive. Based on the characteristics of the particles in the external archive, an effective extraction mechanism is designed to choose learning samples. Three multimodal peak problems are chosen as simulation functions, and the results show that EA-DPSO can effectively jump out of local optimal solutions. It can therefore be seen as an effective algorithm for solving multimodal problems.
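As a concrete illustration of one of the neighborhood structures named above, the sketch below selects the learning sample for a particle under a ring topology, where each particle learns from the best personal best among itself and its two ring neighbors; the archive-based extraction mechanism itself is not reproduced here.

```python
def ring_neighbor_best(pbest_fitness, i):
    """Index of the best personal best in particle i's ring neighborhood (minimization)."""
    n = len(pbest_fitness)
    neighborhood = [(i - 1) % n, i, (i + 1) % n]  # left neighbor, self, right neighbor
    return min(neighborhood, key=lambda j: pbest_fitness[j])
```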


2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and increase population diversity. Particles in groups that have fallen into a local optimum are adaptively adjusted in order to reach the global optimum, by judging the spatial concentration of the groups and the fitness variance. At the same time, the global factors are adjusted dynamically according to the current particle fitness. Four typical function optimization problems are used in the simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
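The fitness-variance test mentioned above is commonly computed as in the sketch below; the normalization and the convergence threshold used here are standard choices and assumptions, since the paper's exact values are not given in the abstract.

```python
import numpy as np

def fitness_variance(fitness):
    """Normalized fitness variance: small values indicate a clustered (possibly premature) swarm."""
    fitness = np.asarray(fitness, dtype=float)
    f_avg = fitness.mean()
    dev = np.abs(fitness - f_avg)
    f_norm = dev.max() if dev.max() > 1.0 else 1.0   # common normalization factor
    return np.sum(((fitness - f_avg) / f_norm) ** 2)

def seems_converged(fitness, threshold=1e-3):
    return fitness_variance(fitness) < threshold      # trigger adaptive adjustment
```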


2019 ◽  
Vol 61 (4) ◽  
pp. 177-185
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß

Abstract A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we give a comparison of known upper and lower bounds of expected runtimes and briefly discuss the techniques used to obtain these bounds.


2016 ◽  
Vol 11 (1) ◽  
pp. 58-67 ◽  
Author(s):  
S Sarathambekai ◽  
K Umamaheswari

Discrete particle swarm optimization is one of the most recently developed population-based meta-heuristic optimization algorithms in swarm intelligence and can be applied to any discrete optimization problem. This article presents a discrete particle swarm optimization algorithm to efficiently schedule tasks in heterogeneous multiprocessor systems. All optimization algorithms share a common algorithmic step, namely population initialization. It plays a significant role because it can affect both the convergence speed and the quality of the final solution. Random initialization is the method most commonly used in evolutionary algorithms to generate the initial population. Good-quality initial solutions can help the algorithm locate the optimal solution, whereas poor ones may prevent it from finding the optimum at all. Intelligence should therefore be incorporated into generating the initial population in order to avoid premature convergence. This article presents a discrete particle swarm optimization algorithm that incorporates an opposition-based technique to generate the initial population and a greedy algorithm to balance the load of the processors. Makespan, flow time, and reliability cost are three different measures used to evaluate the efficiency of the proposed discrete particle swarm optimization algorithm for scheduling independent tasks in distributed systems. Computational simulations based on a set of benchmark instances are performed to assess the performance of the proposed algorithm.
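Opposition-based initialization, as named in the abstract, is typically implemented by pairing each random candidate with its opposite point and keeping the fitter candidates; the sketch below follows that standard scheme, with the bounds and the fitness callable as assumed inputs.

```python
import numpy as np

def obl_init(pop_size, dim, lo, hi, fitness, rng=None):
    """Opposition-based initial population: keep the best of {random, opposite} candidates."""
    rng = rng or np.random.default_rng()
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    opp = lo + hi - pop                         # opposite point within [lo, hi]
    both = np.vstack([pop, opp])
    scores = np.apply_along_axis(fitness, 1, both)
    keep = np.argsort(scores)[:pop_size]        # minimization: keep the pop_size best
    return both[keep]
```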


2019 ◽  
Vol 2019 ◽  
pp. 1-22 ◽  
Author(s):  
Hao Li ◽  
Hongbin Jin ◽  
Hanzhong Wang ◽  
Yanyan Ma

For the first time, the Holonic Particle Swarm Optimization (HPSO) algorithm applied multiagent theory to the improvement of the PSO algorithm and achieved good results. To further improve the performance of the algorithm, this paper proposes an improved Adaptive Holonic Particle Swarm Optimization (AHPSO) algorithm. First, a brief review of the HPSO algorithm is carried out; HPSO can be further studied in three respects: the grouping strategy, the setting of the number of iterations, and the state-switching criterion. The HPSO algorithm uses an approximately uniform grouping strategy, which is the simplest option but does not consider the connections between particles; if particles with larger or smaller differences are grouped together at different search stages, search efficiency improves. Therefore, this paper proposes a grouping strategy based on information entropy and hierarchical clustering and combines the two grouping strategies with corresponding search methods. The performance of the HPSO algorithm depends on the setting of the number of iterations: if it is too small, it is difficult to find the optimum, and if it is too large, computing resources are wasted. Therefore, this paper constructs an adaptive termination condition that causes the particles to terminate spontaneously after convergence. The HPSO algorithm performs only a single conversion from extensive search to exact search and still risks falling into local optima. This paper proposes a state-switching condition to improve the probability that the algorithm jumps out of a local optimum. Finally, AHPSO and HPSO are compared using 22 groups of standard test functions. AHPSO converges faster than HPSO, and when HPSO is run for the number of iterations at which AHPSO converges, a large gap remains between HPSO and the optimal solution; i.e., AHPSO achieves better algorithmic efficiency without the number of iterations having to be set.
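An adaptive termination condition of the kind described above can be as simple as stopping once the global best stops improving; the patience window and tolerance below are illustrative assumptions, not the paper's actual criterion.

```python
def should_terminate(gbest_history, patience=30, eps=1e-8):
    """Stop when the global best (minimization) improved by less than eps over `patience` iterations."""
    if len(gbest_history) <= patience:
        return False
    window = gbest_history[-(patience + 1):]
    return (window[0] - window[-1]) < eps
```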


2011 ◽  
Vol 383-390 ◽  
pp. 7208-7213
Author(s):  
De Kun Tan

To overcome the premature convergence of standard Particle Swarm Optimization (SPSO), Quantum-behaved Particle Swarm Optimization (QPSO) is presented to solve engineering constrained optimization problems. QPSO is a novel PSO algorithm model formulated in terms of quantum mechanics. The model is based on the Delta potential well, and the particle is assumed to behave like a quantum particle. Because the particle does not follow a definite trajectory, it has more randomness than a particle with a fixed path in PSO; thus QPSO escapes local optima more easily and has a greater capability to find the global optimal solution. During iterative optimization, an exterior point method is used to handle particles that violate the constraints. Furthermore, QPSO is verified on two instances of engineering constrained optimization and compared with other intelligent algorithms; the experimental results indicate that the algorithm performs better in terms of accuracy and robustness.
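For reference, the widely used QPSO position update derived from the Delta potential well model looks like the sketch below, where mbest is the mean of all personal bests and beta is the contraction-expansion coefficient; the value of beta and its schedule are assumptions here.

```python
import numpy as np

def qpso_update(x, pbest, gbest, mbest, beta=0.75, rng=None):
    """One QPSO position update: sample around the local attractor via the Delta well."""
    rng = rng or np.random.default_rng()
    phi = rng.random(x.shape)
    p = phi * pbest + (1.0 - phi) * gbest            # per-dimension local attractor
    u = rng.random(x.shape)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```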

