Population statistics for particle swarm optimization: Resampling methods in noisy optimization problems

2014 ◽  
Vol 17 ◽  
pp. 37-59 ◽  
Author(s):  
Juan Rada-Vilela ◽  
Mark Johnston ◽  
Mengjie Zhang

2021 ◽  
Author(s):  
Juan Rada-Vilela

Particle Swarm Optimization (PSO) is a metaheuristic in which a swarm of particles explores the search space of an optimization problem to find good solutions. However, if the problem is subject to noise, the quality of the resulting solutions deteriorates significantly. The literature has attributed this deterioration to particles suffering from inaccurate memories and from the incorrect selection of their neighborhood best solutions. In both cases, the incorporation of noise mitigation mechanisms has improved the quality of the results, but the analyses of such improvements often lack empirical evidence to support their claims in terms other than the quality of the results. Furthermore, there is no evidence showing the extent to which inaccurate memories and incorrect selection affect the particles in the swarm. Therefore, the performance of PSO on noisy optimization problems remains largely unexplored. The overall goal of this thesis is to study the effect of noise on PSO beyond the known deterioration of its results in order to develop more efficient noise mitigation mechanisms. Based on how the noise mitigation mechanisms allocate function evaluations, we distinguish three groups of PSO algorithms: single-evaluation, which sacrifice the accuracy of the objective values in favor of performing more iterations; resampling-based, which sacrifice performing more iterations in favor of better estimates of the objective values; and hybrids, which merge methods from the previous two. With an empirical approach, we study and analyze the performance of existing and new PSO algorithms from each group on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. Throughout the search process, we compute a set of 16 population statistics that measure different characteristics of the swarms and provide useful information that we utilize to design better PSO algorithms. Our study identifies and defines deception, blindness and disorientation as three conditions from which particles suffer in noisy optimization problems. The population statistics for different PSO algorithms reveal that particles often suffer from large proportions of deception, blindness and disorientation, and show that reducing these three conditions would lead to better results. The sensitivity of PSO to noisy optimization problems is confirmed, highlighting the importance of noise mitigation mechanisms. The population statistics for single-evaluation PSO algorithms show that the commonly used evaporation mechanism produces too much disorientation, leading to divergent behaviour and to the worst results within the group. Two better algorithms are designed: the first utilizes probabilistic updates to reduce disorientation, and the second computes a centroid solution as the neighborhood best solution to reduce deception. The population statistics for resampling-based PSO algorithms show that basic resampling still leads to large proportions of deception and blindness, and its results are the worst within the group. Two better algorithms are designed to reduce deception and blindness: the first provides better estimates of the personal best solutions, and the second provides even better estimates of a few solutions from which the neighborhood best solutions are selected. However, an existing PSO algorithm is the best within the group, as it strives to asymptotically minimize deception by sequentially reducing both blindness and disorientation.
The population statistics for hybrid PSO algorithms show that they provide the best results thanks to a combined reduction of deception, blindness and disorientation. Amongst the hybrids, we find a promising algorithm whose simplicity, flexibility and quality of results question the importance of overly complex methods designed to minimize deception. Overall, our research presents a thorough study to design, evaluate and tune PSO algorithms to address optimization problems subject to noise.
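
As a rough illustration of the resampling-based group described above, the following sketch (not one of the thesis's algorithms) re-evaluates every position a fixed number of times under multiplicative Gaussian noise and uses the sample mean as the objective estimate; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def noisy_eval(f, x, sigma):
    """Multiplicative Gaussian noise model: the true value is scaled by N(1, sigma^2)."""
    return f(x) * np.random.normal(1.0, sigma)

def resampling_pso(f, dim, n_particles=30, iters=200, budget=5, sigma=0.3,
                   w=0.7298, c1=1.4962, c2=1.4962, bounds=(-100.0, 100.0)):
    """Minimal resampling-based PSO: each position is evaluated `budget` times
    and the mean is used as the estimate, trading iterations for accuracy."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    estimate = lambda xi: np.mean([noisy_eval(f, xi, sigma) for _ in range(budget)])
    pbest, pbest_val = x.copy(), np.array([estimate(xi) for xi in x])
    g = np.argmin(pbest_val)                      # global (star) topology
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([estimate(xi) for xi in x])
        improved = vals < pbest_val               # memories can still be deceived by noise
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

# Example: noisy sphere function in 10 dimensions
best, best_val = resampling_pso(lambda x: float(np.sum(x**2)), dim=10)
```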


2021 ◽  
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß ◽  
Manuel Schmitt ◽  
Rolf Wanka

Meta-heuristics are powerful tools for solving optimization problems whose structural properties are unknown or cannot be exploited algorithmically. We propose such a meta-heuristic for a large class of optimization problems over discrete domains based on the particle swarm optimization (PSO) paradigm. We provide a comprehensive formal analysis of the performance of this algorithm on certain “easy” reference problems in a black-box setting, namely the sorting problem and the problem OneMax. In our analysis we use a Markov model of the proposed algorithm to obtain upper and lower bounds on its expected optimization time. Our bounds are essentially tight with respect to the Markov model. We show that for a suitable choice of algorithm parameters the expected optimization time is comparable to that of known algorithms and, furthermore, that for other parameter regimes the algorithm behaves less greedily and more exploratively, which can be desirable in practice in order to escape local optima. Our analysis provides precise insight into the tradeoff between optimization time and exploration. To obtain our results, we introduce the notion of indistinguishability of states of a Markov chain and provide bounds on the solution of a recurrence equation with non-constant coefficients by integration.
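
For context on the black-box setting analyzed in the paper, the sketch below runs a generic binary PSO (sigmoid-thresholded velocities) on the OneMax benchmark; it is not the authors' discrete PSO, and all parameter values are illustrative assumptions.

```python
import numpy as np

def onemax(bits):
    """OneMax: maximize the number of ones in a bit string."""
    return int(np.sum(bits))

def binary_pso_onemax(n=50, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Generic binary PSO on OneMax: velocities are mapped to bit probabilities."""
    rng = np.random.default_rng()
    x = rng.integers(0, 2, (n_particles, n))
    v = np.zeros((n_particles, n))
    pbest, pbest_val = x.copy(), np.array([onemax(xi) for xi in x])
    g = np.argmax(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, n)), rng.random((n_particles, n))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        prob = 1.0 / (1.0 + np.exp(-v))          # sigmoid maps velocity to bit probability
        x = (rng.random((n_particles, n)) < prob).astype(int)
        vals = np.array([onemax(xi) for xi in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = np.argmax(pbest_val)
        if pbest_val[g] > gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest_val  # number of ones in the best bit string found

print(binary_pso_onemax())
```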


Author(s):  
Malek Sarhani ◽  
Stefan Voß

Bio-inspired optimization aims at adapting observed natural behavioral patterns and social phenomena towards efficiently solving complex optimization problems, and is nowadays gaining much attention. However, researchers have recently highlighted an inconsistency between the needs of the field and the actual trend: while it is important to design innovative contributions, a common trend in bio-inspired optimization is to re-iterate existing knowledge in a different form. The aim of this paper is to fill this gap. More precisely, we first highlight new examples of this problem by considering and describing the concepts of chunking and cooperative learning. Second, using particle swarm optimization (PSO), we present a novel bridge between these two notions adapted to the problem of feature selection, as sketched below. In the experiments, we investigate the practical importance of our approach while exploring both its strengths and limitations. The results indicate that the approach is mainly suitable for large datasets, and that further research is needed to improve its computational efficiency and to ensure the independence of the sub-problems defined using chunking.
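
As a toy illustration of the chunking idea only, the sketch below splits the feature indices into independent sub-problems and scores each chunk separately; a simple correlation ranking stands in for the PSO-based selector described in the paper, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def chunked_feature_selection(X, y, chunk_size=10, keep_per_chunk=3):
    """Chunking for feature selection: split the feature indices into
    sub-problems (chunks) and solve each chunk independently. Here each chunk
    is solved greedily by absolute correlation with the target; a PSO-based
    selector could be plugged in per chunk instead."""
    n_features = X.shape[1]
    chunks = [range(i, min(i + chunk_size, n_features))
              for i in range(0, n_features, chunk_size)]
    selected = []
    for chunk in chunks:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in chunk]
        order = np.argsort(scores)[::-1][:keep_per_chunk]
        selected.extend(np.array(list(chunk))[order].tolist())
    return sorted(selected)

# Example with random data (100 samples, 40 features)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))
y = X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=0.1, size=100)
print(chunked_feature_selection(X, y))
```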


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4613
Author(s):  
Shah Fahad ◽  
Shiyou Yang ◽  
Rehan Ali Khan ◽  
Shafiullah Khan ◽  
Shoaib Ahmed Khan

Electromagnetic design problems are generally formulated as nonlinear programming problems with multimodal objective functions and continuous variables. These can be solved by either a deterministic or a stochastic optimization algorithm. Recently, many intelligent optimization algorithms, such as particle swarm optimization (PSO), the genetic algorithm (GA) and the artificial bee colony (ABC), have been proposed and applied to electromagnetic design problems with promising results. However, there is no universal algorithm that can be used to solve all engineering design problems. In this paper, a stochastic smart quantum particle swarm optimization (SQPSO) algorithm is introduced. In the proposed SQPSO, a smart particle and a memory archive are adopted instead of mutation operations in order to tackle premature convergence and improve the global search ability. Moreover, to enhance the exploration ability, a new set of random numbers and control parameters is introduced. Experimental results validate that the adopted control policy achieves a good balance between exploration and exploitation. Finally, the SQPSO has been tested on well-known optimization benchmark functions and applied to electromagnetic TEAM workshop problem 22. The simulation results show that the proposed algorithm converges faster than the other algorithms.
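
The proposed SQPSO builds on the quantum-behaved PSO update; the sketch below shows only that standard update (mean best position, local attractor and logarithmic sampling), not the paper's smart particle or memory archive, and its parameter values are illustrative assumptions.

```python
import numpy as np

def qpso(f, dim, n_particles=30, iters=500, beta=0.75, bounds=(-10.0, 10.0)):
    """Standard quantum-behaved PSO (QPSO) update; SQPSO adds a smart particle,
    a memory archive and new control parameters on top of this scheme."""
    lo, hi = bounds
    rng = np.random.default_rng()
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(xi) for xi in x])
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                      # mean of all personal bests
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1.0 - phi) * gbest           # local attractor per particle
        u = 1.0 - rng.random((n_particles, dim))        # uniform in (0, 1]
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lo, hi)
        vals = np.array([f(xi) for xi in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

# Example: Rastrigin benchmark in 10 dimensions
rastrigin = lambda x: float(10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
print(qpso(rastrigin, dim=10))
```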


2021 ◽  
Vol 11 (2) ◽  
pp. 839
Author(s):  
Shaofei Sun ◽  
Hongxin Zhang ◽  
Xiaotong Cui ◽  
Liang Dong ◽  
Muhammad Saad Khan ◽  
...  

This paper focuses on electromagnetic information security in communication systems. Classical correlation electromagnetic analysis (CEMA) is known as a powerful way to recover a cryptographic algorithm's key. In the classical method, only one byte of the key is used at a time while the other bytes are treated as noise, which not only reduces efficiency but also wastes information. In order to take full advantage of the available information, multiple bytes of the key are used. We transform the key into a multidimensional form, where each byte of the key is considered a dimension. The search for the correct key is thus transformed into the problem of optimizing the correlation coefficients of key candidates. The particle swarm optimization (PSO) algorithm is particularly well suited to optimization problems with high dimensionality and complex structure. In this paper, we applied the PSO algorithm to CEMA to solve this multidimensional problem, and we also added a mutation operator to the optimization algorithm to improve the results. We thus propose a multibyte correlation electromagnetic analysis based on particle swarm optimization. We verified our method on a universal test board designed for research and development on hardware security, on which we implemented the Advanced Encryption Standard (AES) cryptographic algorithm. Experimental results show that our method outperforms the classical method, achieving approximately a 13.72% improvement for the corresponding case.
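
A minimal sketch of the kind of fitness function such a PSO could maximize is given below: it correlates a Hamming-weight hypothesis of the first-round AES S-box output with the recorded traces and sums the per-byte peaks. The array names (`AES_SBOX`, `plaintexts`, `traces`) are assumed inputs from the measurement setup, and the exact leakage model used in the paper may differ.

```python
import numpy as np

# Assumed to be provided by the measurement setup (hypothetical names):
#   AES_SBOX    : 256-entry AES S-box lookup table (np.ndarray of uint8)
#   plaintexts  : shape (n_traces, 16), plaintext bytes sent to the device
#   traces      : shape (n_traces, n_samples), recorded EM traces

HW = np.array([bin(b).count("1") for b in range(256)])   # Hamming-weight table

def cema_fitness(key_candidate, plaintexts, traces, AES_SBOX):
    """Fitness of a 16-byte key candidate: sum over key bytes of the maximum
    absolute correlation between the hypothesized Hamming-weight leakage of the
    first-round S-box output and the EM traces. A PSO maximizes this value."""
    key = np.clip(np.rint(key_candidate), 0, 255).astype(np.uint8)  # continuous -> bytes
    t = traces - traces.mean(axis=0)                                # center each sample point
    total = 0.0
    for byte in range(16):
        hyp = HW[AES_SBOX[plaintexts[:, byte] ^ key[byte]]].astype(float)
        hyp -= hyp.mean()
        # Pearson correlation of the hypothesis with every sample point
        corr = (hyp @ t) / (np.linalg.norm(hyp) * np.linalg.norm(t, axis=0) + 1e-12)
        total += np.abs(corr).max()
    return total  # higher is better; feed -total to a minimizing PSO
```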

