Opposition-Based Barebones Particle Swarm for Constrained Nonlinear Optimization Problems

2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Hui Wang

This paper presents a modified barebones particle swarm optimization (OBPSO) to solve constrained nonlinear optimization problems. The proposed approach OBPSO combines barebones particle swarm optimization (BPSO) and opposition-based learning (OBL) to improve the quality of solutions. A novel boundary search strategy is used to approach the boundary between the feasible and infeasible search region. Moreover, an adaptive penalty method is employed to handle constraints. To verify the performance of OBPSO, a set of well-known constrained benchmark functions is used in the experiments. Simulation results show that our approach achieves a promising performance.
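The two ingredients the abstract names can be sketched briefly. In barebones PSO, particles carry no velocities; each new position is drawn from a Gaussian centred between the personal best and the global best. Opposition-based learning evaluates the mirror image of a candidate within the search bounds. A minimal sketch, assuming simple box bounds (the function names and the Sphere objective are illustrative, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Illustrative benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def opposite(x, lo, hi):
    """Opposition-based learning: mirror a candidate inside the bounds [lo, hi]."""
    return lo + hi - x

def bbpso_step(pbest, gbest):
    """Barebones PSO move: sample the new position from a Gaussian whose mean
    lies midway between the personal best and the global best, with standard
    deviation equal to their coordinate-wise distance (no velocity term)."""
    mean = (pbest + gbest) / 2.0
    std = np.abs(pbest - gbest)
    return rng.normal(mean, std)
```

In an OBL-enhanced loop, one would evaluate both a candidate and its opposite and keep whichever scores better, which is what "improving the quality of solutions" refers to here.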

2013 ◽  
Vol 409-410 ◽  
pp. 1611-1614
Author(s):  
Lei Chen

Particle swarm optimization (PSO) is a global optimization algorithm inspired by bird flocking and fish schooling. PSO has shown good search ability on many complex optimization problems, but premature convergence remains its main weakness. This paper proposes a novel hybrid PSO (NHPSO) that employs two strategies: dynamic step length (DSL) and opposition-based learning (OBL). DSL helps enhance the local search ability of PSO, and OBL is beneficial for improving the quality of candidate solutions. To verify the performance of NHPSO, we test it on several benchmark functions. The simulation results demonstrate the effectiveness and efficiency of our approach.
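The abstract does not give the DSL formula, but the usual idea is a step length that shrinks as the run progresses, so early moves explore and late moves refine. A hedged sketch, assuming a linear decay schedule and a simple improve-or-keep local move (both are illustrative assumptions, not the paper's exact scheme):

```python
import random

def dynamic_step(iteration, max_iter, s0=1.0, s_min=0.01):
    """Linearly shrink the step length from s0 down to s_min over the run."""
    frac = iteration / max_iter
    return s0 - (s0 - s_min) * frac

def local_search(x, f, step, lo, hi):
    """One DSL-style local move: perturb each coordinate by at most +/- step,
    clamp to the bounds, and keep the trial only if it improves the objective."""
    trial = [min(hi, max(lo, xi + random.uniform(-step, step))) for xi in x]
    return trial if f(trial) < f(x) else x
```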


Author(s):  
Mohammed Ajuji ◽  
Aliyu Abubakar ◽  
Useni Emmanuel Datti

Nature-inspired algorithms are popular tools for solving optimization problems. However, there is no guarantee that a randomly selected algorithm will find the optimal solution, so in practice the choice is often made by trial and error across different optimizers. The study in this paper therefore analyzes the time complexity and efficacy of several nature-inspired algorithms: Artificial Bee Colony, Bat Algorithm, and Particle Swarm Optimization. Each algorithm was run several times over a fixed number of iterations, and a comparative analysis was made. The results show that Artificial Bee Colony outperformed the other algorithms in terms of solution quality, while Particle Swarm Optimization was the most time efficient; Artificial Bee Colony exhibited the worst-case behaviour in terms of time complexity.
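The methodology described (repeated runs, comparing both solution quality and wall-clock time) can be captured by a small harness. This is a generic sketch of such an experiment, not the authors' actual code; the function names are illustrative:

```python
import time

def time_algorithm(optimizer, objective, runs=5):
    """Crude empirical comparison: run an optimizer several times on the same
    objective and report the best value found and the average wall-clock time."""
    bests, elapsed = [], []
    for _ in range(runs):
        t0 = time.perf_counter()
        bests.append(optimizer(objective))
        elapsed.append(time.perf_counter() - t0)
    return min(bests), sum(elapsed) / runs
```

Running this for each of ABC, BA, and PSO on the same benchmark set yields exactly the two axes the paper compares: quality of the best solution and time efficiency.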


2021 ◽  
Author(s):  
Juan Rada-Vilela

<p>Particle Swarm Optimization (PSO) is a metaheuristic where a swarm of particles explores the search space of an optimization problem to find good solutions. However, if the problem is subject to noise, the quality of the resulting solutions significantly deteriorates. The literature has attributed such a deterioration to particles suffering from inaccurate memories and from the incorrect selection of their neighborhood best solutions. For both cases, the incorporation of noise mitigation mechanisms has improved the quality of the results, but the analyses beyond such improvements often fall short of empirical evidence supporting their claims in terms other than the quality of the results. Furthermore, there is not even evidence showing the extent to which inaccurate memories and incorrect selection affect the particles in the swarm. Therefore, the performance of PSO on noisy optimization problems remains largely unexplored. The overall goal of this thesis is to study the effect of noise on PSO beyond the known deterioration of its results in order to develop more efficient noise mitigation mechanisms. Based on the allocation of function evaluations by the noise mitigation mechanisms, we distinguish three groups of PSO algorithms as: single-evaluation, which sacrifice the accuracy of the objective values over performing more iterations; resampling-based, which sacrifice performing more iterations over better estimating the objective values; and hybrids, which merge methods from the previous two. With an empirical approach, we study and analyze the performance of existing and new PSO algorithms from each group on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. Throughout the search process, we compute a set of 16 population statistics that measure different characteristics of the swarms and provide useful information that we utilize to design better PSO algorithms. 
Our study identifies and defines deception, blindness and disorientation as three conditions from which particles suffer in noisy optimization problems. The population statistics for different PSO algorithms reveal that particles often suffer from large proportions of deception, blindness and disorientation, and show that reducing these three conditions would lead to better results. The sensitivity of PSO to noisy optimization problems is confirmed and highlights the importance of noise mitigation mechanisms. The population statistics for single-evaluation PSO algorithms show that the commonly used evaporation mechanism produces too much disorientation, leading to divergent behaviour and to the worst results within the group. Two better algorithms are designed: the first utilizes probabilistic updates to reduce disorientation, and the second computes a centroid solution as the neighborhood best solution to reduce deception. The population statistics for resampling-based PSO algorithms show that basic resampling still leads to large proportions of deception and blindness, and its results are the worst within the group. Two better algorithms are designed to reduce deception and blindness. The first provides better estimates of the personal best solutions, and the second provides even better estimates of a few solutions from which the neighborhood best solutions are selected. However, an existing PSO algorithm is the best within the group as it strives to asymptotically minimize deception by sequentially reducing both blindness and disorientation. The population statistics for hybrid PSO algorithms show that they provide the best results thanks to a combined reduction of deception, blindness and disorientation. Amongst the hybrids, we find a promising algorithm whose simplicity, flexibility and quality of results question the importance of overly complex methods designed to minimize deception. 
Overall, our research presents a thorough study to design, evaluate and tune PSO algorithms to address optimization problems subject to noise.</p>
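The noise model and the resampling-based mitigation the thesis groups algorithms by can be sketched concisely: multiplicative Gaussian noise scales each measurement, and resampling averages several independent measurements to reduce the variance of the estimate at the cost of extra function evaluations. A minimal sketch, assuming this standard formulation (the wrapper names are illustrative):

```python
import random

def noisy(f, sigma=0.1):
    """Multiplicative Gaussian noise model: each measurement is the true
    objective value scaled by a factor drawn around 1 with spread sigma."""
    def g(x):
        return f(x) * (1.0 + random.gauss(0.0, sigma))
    return g

def resampled(f_noisy, n=20):
    """Resampling-based mitigation: estimate the true objective by averaging
    n independent noisy evaluations, trading n extra function evaluations
    per call for a lower-variance estimate."""
    def g(x):
        return sum(f_noisy(x) for _ in range(n)) / n
    return g
```

This trade-off is precisely the one the thesis frames: single-evaluation methods spend the saved evaluations on more iterations, while resampling-based methods spend them on better estimates.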


2020 ◽  
Vol 18 (2) ◽  
pp. 14
Author(s):  
Afonso Celso de Castro Lemonge ◽  
Patrícia Habib Hallak ◽  
José Pedro Gonçalves Carvalho

This article presents an Evolutionary Algorithm (EA) based on the behaviour of particle swarms (Particle Swarm Optimization, PSO), adapted to solve constrained structural optimization problems. PSO is easy to implement and competitive with other nature-inspired population-based algorithms. The article analyzes structural optimization problems for trusses subject to constraints on their natural vibration frequencies. To handle these constraints, an adaptive penalty technique (Adaptive Penalty Method, APM) is incorporated into PSO; this technique has proven robust and efficient when applied to constrained optimization problems. The proposed algorithm is validated through computational experiments on structural optimization problems widely discussed in the literature.
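The APM referenced here sets a separate penalty coefficient per constraint from population statistics: each constraint's weight scales with the population's average objective value and with that constraint's share of the squared average violations. A minimal sketch of that adaptive form (after Barbosa and Lemonge's APM; the exact update schedule in the paper may differ):

```python
def apm_coefficients(avg_f, avg_violations):
    """APM-style penalty coefficients: k_j = |<f>| * <v_j> / sum_l <v_l>^2,
    so constraints the population violates most are penalized hardest."""
    denom = sum(v * v for v in avg_violations)
    if denom == 0.0:
        return [0.0] * len(avg_violations)  # whole population feasible
    return [abs(avg_f) * v / denom for v in avg_violations]

def penalized_fitness(f_value, violations, coeffs):
    """Fitness of an infeasible candidate: objective plus weighted violations."""
    return f_value + sum(k * v for k, v in zip(coeffs, violations))
```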


2014 ◽  
Vol 945-949 ◽  
pp. 607-613
Author(s):  
Ling Liu ◽  
Pei Zhou ◽  
Jun Luo ◽  
Zan Pi

This paper focuses on an improved particle swarm optimization (IPSO) used to solve nonlinear optimization problems for a steering trapezoid mechanism. First, the nonlinear optimization model of the steering trapezoid mechanism is established: the objective function is the sum of the absolute differences between the actual and theoretical rotational angles of the anterolateral steering wheel, and the bottom angle and steering arm length of the mechanism are selected as design variables. An improved particle swarm optimization algorithm is then proposed that introduces overflow-handling functions to deal with the complicated nonlinear constraints. Finally, the IPSO is implemented in code and the steering trapezoid parameters for different vehicle models are optimized; the numerical results show that the errors between the ideal and the optimized objective values are minimal. A performance comparison of different intelligent algorithms indicates that the proposed algorithm is superior to particle swarm optimization based on simulated annealing (SA-PSO) and traditional particle swarm optimization (TPSO) in convergence speed and computational cost, but slightly inferior to SA-PSO in accuracy.
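The "overflow handling" referenced here is, in generic PSO terms, what happens when an updated coordinate leaves its feasible interval. A common remedy is to clamp it back to the nearest bound inside the standard update; a hedged sketch of that generic pattern (not the paper's specific functions, whose details are not given in the abstract):

```python
import random

def pso_update(x, v, pbest, gbest, lo, hi, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO update with bound handling: velocities follow the
    usual inertia + cognitive + social rule, and any coordinate that
    overflows the interval [lo, hi] is pulled back to the nearest bound."""
    new_x, new_v = [], []
    for i in range(len(x)):
        r1, r2 = random.random(), random.random()
        vi = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest[i] - x[i])
        new_v.append(vi)
        new_x.append(min(hi, max(lo, x[i] + vi)))  # overflow handling
    return new_x, new_v
```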


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Xiao-peng Wei ◽  
Jian-xia Zhang ◽  
Dong-sheng Zhou ◽  
Qiang Zhang

We propose an improved multiswarm particle swarm optimization with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm optimization (PSO) in order to balance exploration and exploitation, and to enhance the capacity for global search on nonlinear optimization problems. First, the best particle guides the other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO, together with a diversity analysis of the proposed algorithm explained on the Sphere function. Finally, we tested the performance of the proposed algorithm on six standard test functions and an engineering problem. Compared with some other algorithms, the results show that BMPSO performed better on both the test functions and the engineering problem; furthermore, it can be applied to other nonlinear optimization problems.
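The core "transfer of the best particle" step can be sketched as follows: the globally best particle across all subswarms replaces each subswarm's worst member, so every subswarm is pulled toward the most promising region. This is an illustrative sketch of that transfer step under a minimization convention, not the paper's full BMPSO:

```python
def transfer_best(swarms, fitness):
    """Best-particle transfer between subswarms: copy the globally best
    particle into every subswarm in place of that subswarm's worst member
    (if the best is strictly better), guiding all subswarms together."""
    best = min((p for s in swarms for p in s), key=fitness)
    for s in swarms:
        worst_i = max(range(len(s)), key=lambda i: fitness(s[i]))
        if fitness(s[worst_i]) > fitness(best):
            s[worst_i] = list(best)  # copy, so subswarms stay independent
    return swarms
```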

