Population Statistics for Particle Swarm Optimization on Problems Subject to Noise

2021
Author(s): Juan Rada-Vilela
Particle Swarm Optimization (PSO) is a metaheuristic in which a swarm of particles explores the search space of an optimization problem to find good solutions. However, if the problem is subject to noise, the quality of the resulting solutions deteriorates significantly. The literature has attributed such deterioration to particles suffering from inaccurate memories and from the incorrect selection of their neighborhood best solutions. In both cases, the incorporation of noise mitigation mechanisms has improved the quality of the results, but the analyses beyond such improvements often lack empirical evidence supporting their claims in terms other than the quality of the results. Furthermore, there is no evidence showing the extent to which inaccurate memories and incorrect selection affect the particles in the swarm. Therefore, the performance of PSO on noisy optimization problems remains largely unexplored. The overall goal of this thesis is to study the effect of noise on PSO beyond the known deterioration of its results, in order to develop more efficient noise mitigation mechanisms.

Based on how the noise mitigation mechanisms allocate function evaluations, we distinguish three groups of PSO algorithms: single-evaluation algorithms, which sacrifice the accuracy of the objective values in favor of performing more iterations; resampling-based algorithms, which sacrifice performing more iterations in favor of better estimates of the objective values; and hybrids, which merge methods from the previous two. With an empirical approach, we study and analyze the performance of existing and new PSO algorithms from each group on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. Throughout the search process, we compute a set of 16 population statistics that measure different characteristics of the swarms and provide useful information that we utilize to design better PSO algorithms. Our study identifies and defines deception, blindness and disorientation as three conditions from which particles suffer in noisy optimization problems. The population statistics for different PSO algorithms reveal that particles often suffer from large proportions of deception, blindness and disorientation, and show that reducing these three conditions would lead to better results. The sensitivity of PSO to noisy optimization problems is confirmed, highlighting the importance of noise mitigation mechanisms.

The population statistics for single-evaluation PSO algorithms show that the commonly used evaporation mechanism produces too much disorientation, leading to divergent behaviour and to the worst results within the group. Two better algorithms are designed: the first utilizes probabilistic updates to reduce disorientation, and the second computes a centroid solution as the neighborhood best solution to reduce deception. The population statistics for resampling-based PSO algorithms show that basic resampling still leads to large proportions of deception and blindness, and its results are the worst within the group. Two better algorithms are designed to reduce deception and blindness: the first provides better estimates of the personal best solutions, and the second provides even better estimates of a few solutions from which the neighborhood best solutions are selected. However, an existing PSO algorithm is the best within the group, as it strives to asymptotically minimize deception by sequentially reducing both blindness and disorientation. The population statistics for hybrid PSO algorithms show that they provide the best results thanks to a combined reduction of deception, blindness and disorientation. Amongst the hybrids, we find a promising algorithm whose simplicity, flexibility and quality of results question the importance of overly complex methods designed to minimize deception. Overall, our research presents a thorough study to design, evaluate and tune PSO algorithms that address optimization problems subject to noise.
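To make the single-evaluation versus resampling trade-off concrete, the following is a minimal sketch, not the thesis's exact algorithms: a solution is evaluated under multiplicative Gaussian noise either once or by averaging over a small resampling budget. The function names and the sphere objective are illustrative assumptions.

```python
import numpy as np

def noisy_eval(f, x, noise_sigma=0.1, rng=None):
    """Single noisy evaluation under multiplicative Gaussian noise:
    the true objective value is corrupted by a factor drawn from N(1, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    return f(x) * rng.normal(1.0, noise_sigma)

def resampled_eval(f, x, budget=10, noise_sigma=0.1, rng=None):
    """Resampling-based estimate: average several noisy evaluations to reduce
    the variance of the estimate, at the cost of extra function evaluations."""
    rng = np.random.default_rng() if rng is None else rng
    samples = [noisy_eval(f, x, noise_sigma, rng) for _ in range(budget)]
    return float(np.mean(samples))

# Usage: one noisy evaluation versus a resampled estimate on a sphere objective.
sphere = lambda z: float(np.sum(np.asarray(z) ** 2))
x = np.ones(10)
print(noisy_eval(sphere, x), resampled_eval(sphere, x, budget=20))
```

A single-evaluation algorithm would instead spend those extra evaluations on more iterations, which is precisely the trade-off the thesis examines through its population statistics.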


Author(s): Mohammed Ajuji, Aliyu Abubakar, Useni Emmanuel Datti

Nature-inspired algorithms are popular tools for solving optimization problems. However, there is no guarantee that an optimal solution can be obtained with an arbitrarily chosen algorithm, so the problem is often addressed by trial and error across different optimization algorithms. This paper therefore analyzes the time complexity and efficacy of three nature-inspired algorithms: Artificial Bee Colony, the Bat Algorithm, and Particle Swarm Optimization. For each algorithm, experiments were repeated over several runs and iteration counts, and a comparative analysis was carried out. The results show that Artificial Bee Colony outperformed the other algorithms in solution quality, Particle Swarm Optimization was the most time-efficient, and Artificial Bee Colony exhibited the worst running time.
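As a rough illustration of how such a runtime-versus-quality comparison can be set up, here is a sketch of a timing harness; the optimizer functions in the usage comment are hypothetical placeholders, not the paper's implementations.

```python
import time
import numpy as np

def sphere(x):
    """Simple benchmark objective used only for illustration."""
    return float(np.sum(np.asarray(x) ** 2))

def time_and_quality(optimizer, objective, runs=10, **kwargs):
    """Run an optimizer several times; report mean runtime and mean best objective value."""
    times, bests = [], []
    for _ in range(runs):
        start = time.perf_counter()
        best_value = optimizer(objective, **kwargs)  # assumed to return the best value found
        times.append(time.perf_counter() - start)
        bests.append(best_value)
    return float(np.mean(times)), float(np.mean(bests))

# Usage (abc_optimize, bat_optimize and pso_optimize are hypothetical implementations):
# for name, opt in [("ABC", abc_optimize), ("BA", bat_optimize), ("PSO", pso_optimize)]:
#     t, q = time_and_quality(opt, sphere, runs=10, dim=30, iterations=500)
#     print(f"{name}: mean time {t:.3f}s, mean best {q:.4g}")
```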


2021, Vol. 257, pp. 01036
Author(s): Fengming Zhang, Lingyan Que, Xinxin Zhang, Fumian Wang, Bing Wang

Large-scale grid connection of distributed generation brings substantial economic and environmental benefits, but it also threatens the stability of the grid. To enable the grid to absorb a higher proportion of distributed generation, the location and capacity of the distributed generation connected to the grid must be optimized. Firstly, an uncertainty analysis model of wind speed, illumination intensity, and grid load is established. Secondly, a distributed generation siting and sizing model is constructed with the lowest annual comprehensive cost as the objective function. Then, a novel fractional particle swarm optimization algorithm is proposed, and its performance on complex optimization problems is tested. Finally, simulation results on the IEEE 33-bus system verify the rationality of the established model and the effectiveness of the proposed algorithm.
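For context, a minimal sketch of one fractional-order velocity update found in the PSO literature: the usual inertia term is replaced by a truncated Grünwald-Letnikov memory over the last few velocities. This formulation is an assumption for illustration and is not necessarily the novel algorithm proposed in this paper.

```python
import numpy as np

def fractional_velocity(v_hist, x, pbest, gbest, alpha=0.6, c1=1.5, c2=1.5, rng=None):
    """Velocity update where the inertia term is replaced by a truncated
    Grunwald-Letnikov fractional derivative over the last four velocities.
    v_hist is a list of past velocity arrays, most recent first."""
    rng = np.random.default_rng() if rng is None else rng
    coeffs = [alpha,
              alpha * (1 - alpha) / 2,
              alpha * (1 - alpha) * (2 - alpha) / 6,
              alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) / 24]
    memory = sum(c * v for c, v in zip(coeffs, v_hist))
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return memory + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Usage with a history of up to four past velocities (most recent first):
# v_new = fractional_velocity([v_t, v_t1, v_t2, v_t3], x, pbest, gbest, alpha=0.6)
```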


2012, Vol. 479-481, pp. 344-347
Author(s): Zhuo Li, Xue Luo Qu

Particle Swarm Optimization (PSO) is an artificial intelligence technique proposed by Eberhart and Kennedy that belongs to the family of swarm intelligence methods. It is a population-based stochastic optimization method inspired by the social behavior of bird flocks. Over the past decades, many researchers have worked to improve the original PSO for various problems, and there remains considerable room for further development. This paper reviews the progress of PSO research to date and recent achievements in its application to large-scale optimization problems.
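Since the paper reviews PSO as originally proposed by Eberhart and Kennedy, a minimal sketch of the canonical global-best PSO loop may help fix the idea. The parameter values below are common defaults from the wider literature, not values taken from this review.

```python
import numpy as np

def pso(objective, dim=30, swarm_size=40, iterations=500,
        w=0.7298, c1=1.49618, c2=1.49618, bounds=(-100.0, 100.0), seed=0):
    """Canonical global-best PSO: each particle is pulled toward its personal
    best and the swarm's global best, with an inertia-weighted velocity."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (swarm_size, dim))      # positions
    v = np.zeros((swarm_size, dim))                 # velocities
    pbest = x.copy()                                # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()          # global best position
    for _ in range(iterations):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(np.min(pbest_val))

# Usage on a sphere objective:
# best_position, best_value = pso(lambda z: float(np.sum(z ** 2)))
```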

