Application of Quantum-Behaved Particle Swarm Optimization in Engineering Constrained Optimization Problems

2011 ◽  
Vol 383-390 ◽  
pp. 7208-7213
Author(s):  
De Kun Tan

To overcome the premature-convergence shortcoming of standard Particle Swarm Optimization (SPSO), Quantum-behaved Particle Swarm Optimization (QPSO) is applied to engineering constrained optimization problems. QPSO is a PSO variant formulated in terms of quantum mechanics: the model is based on a Delta potential well, and each particle is assumed to behave like a quantum particle. Because a particle has no deterministic trajectory, it moves more randomly than a particle following a fixed path in PSO; QPSO therefore escapes local optima more easily and has a stronger ability to find the global optimum. During the iterative optimization, the exterior point method is used to handle particles that violate the constraints. The QPSO is verified on two engineering constrained optimization instances and compared with other intelligent algorithms; the experimental results indicate that it performs better in terms of accuracy and robustness.
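
The quantum-behaved position update alluded to above can be sketched as follows. This is a minimal illustration of the standard Delta-potential-well update rule, not the authors' exact implementation; the contraction-expansion coefficient `beta` and all names are assumptions.

```python
import numpy as np

def qpso_step(positions, pbest, gbest, beta=0.75, rng=np.random.default_rng()):
    """One QPSO update: sample new positions around a per-particle attractor.

    positions, pbest : (n_particles, dim) arrays
    gbest            : (dim,) array, best position found so far
    """
    n, dim = positions.shape
    mbest = pbest.mean(axis=0)                       # mean of all personal bests
    phi = rng.random((n, dim))
    p = phi * pbest + (1.0 - phi) * gbest            # local attractor per particle
    u = 1.0 - rng.random((n, dim))                   # in (0, 1], keeps log finite
    sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
    # No deterministic trajectory: positions are drawn around the attractor with
    # a spread proportional to the distance from the mean best position.
    return p + sign * beta * np.abs(mbest - positions) * np.log(1.0 / u)
```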

2019 ◽  
Vol 61 (4) ◽  
pp. 177-185
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß

Abstract A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we give a comparison of known upper and lower bounds of expected runtimes and briefly discuss the techniques used to obtain these bounds.


2014 ◽  
Vol 926-930 ◽  
pp. 3338-3341
Author(s):  
Hong Mei Ni ◽  
Zhian Yi ◽  
Jin Yue Liu

Chaos is a non-linear phenomenon that exists widely in nature. Because of its ease of implementation and its special ability to avoid becoming trapped in local optima, chaos has become a novel optimization technique, and chaos-based search algorithms have attracted intense interest. Many real-world optimization problems are dynamic, with the global optimum and local optima changing over time. Particle swarm optimization (PSO) has performed well at finding and tracking optima in static environments, but when applied to dynamic multi-objective problems it suffers from premature convergence, slow convergence, and related issues. To address these problems, a hybrid PSO algorithm based on a chaos algorithm is proposed. The hybrid algorithm not only retains efficient parallelism but also increases the diversity of the population thanks to the chaos component. Simulation results show that the new algorithm outperforms the traditional PSO algorithm, exhibiting stronger adaptability and convergence and solving the moving peaks benchmark better.
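
The chaos ingredient can be illustrated with a logistic map that re-seeds part of the swarm to preserve diversity; this is a hedged sketch under the assumption of a logistic-map generator, and the reseed fraction and iteration count are illustrative, not the authors' settings.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    """One chaotic iteration; with mu = 4 the map is fully chaotic on (0, 1)."""
    return mu * z * (1.0 - z)

def chaotic_reseed(positions, lower, upper, fraction=0.2, rng=np.random.default_rng()):
    """Replace a fraction of particles (here the last rows) with chaos-generated points."""
    n, dim = positions.shape
    k = max(1, int(fraction * n))
    z = rng.uniform(0.01, 0.99, size=(k, dim))    # seeds strictly inside (0, 1)
    for _ in range(50):                           # iterate the map to de-correlate the sequence
        z = logistic_map(z)
    positions[-k:] = lower + z * (upper - lower)  # map chaotic values into the search box
    return positions
```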


2020 ◽  
Vol 53 (4) ◽  
pp. 559-566
Author(s):  
Lakhdar Kaddouri ◽  
Amel B.H. Adamou-Mitiche ◽  
Lahcene Mitiche

Particle Swarm Optimization (PSO) is an evolutionary algorithm widely used for optimization problems. It is characterized by fast convergence, which can cause the algorithm to stagnate in local optima. In the present paper, a new Multi-PSO algorithm for the design of two-dimensional infinite impulse response (IIR) filters is built. It is based on the standard PSO and uses a new initialization strategy that relies on two types of swarms: a principal swarm and auxiliary swarms. To improve the performance of the algorithm, the search space is divided into several areas, which provides better coverage and leads to better exploration of each zone separately. This addresses the problem of fast convergence in standard PSO. The results obtained demonstrate the effectiveness of the Multi-PSO algorithm in optimizing the filter coefficients.
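
The zone-splitting idea can be sketched as below: each coefficient interval is divided into equal sub-ranges and one auxiliary swarm is initialized inside each sub-range. This only illustrates the partitioned initialization; the principal/auxiliary interaction and the number of zones used by the authors are not reproduced here.

```python
import numpy as np

def init_zoned_swarms(n_zones, particles_per_zone, dim, lower, upper,
                      rng=np.random.default_rng()):
    """Return one randomly initialized swarm per zone, each confined to its sub-interval."""
    edges = np.linspace(lower, upper, n_zones + 1)   # zone boundaries along each coordinate
    return [rng.uniform(edges[z], edges[z + 1], size=(particles_per_zone, dim))
            for z in range(n_zones)]
```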


2016 ◽  
Vol 2016 ◽  
pp. 1-19 ◽  
Author(s):  
Biwei Tang ◽  
Zhanxia Zhu ◽  
Jianjun Luo

This paper develops a particle swarm optimization (PSO) based framework for constrained optimization problems (COPs). Aiming to enhance the performance of PSO, a modified PSO algorithm, named SASPSO 2011, is proposed by adding a newly developed self-adaptive strategy to the standard particle swarm optimization 2011 (SPSO 2011) algorithm. Since the convergence of PSO is of great importance and significantly influences its performance, this paper first theoretically investigates the convergence of SASPSO 2011. Then, a parameter selection principle guaranteeing the convergence of SASPSO 2011 is provided. Subsequently, a SASPSO 2011-based framework is established to solve COPs. To increase the diversity of solutions and reduce optimization difficulty, the adaptive relaxation method, combined with the feasibility-based rule, is applied to handle the constraints of COPs and evaluate candidate solutions in the developed framework. Finally, the proposed method is verified on 4 benchmark test functions and 2 real-world engineering problems against six PSO variants and several well-known methods from the literature. Simulation results confirm that the proposed method is highly competitive in terms of solution quality and can be considered a viable alternative for solving COPs.
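
The feasibility-based rule combined with adaptive relaxation can be sketched as a pairwise comparison in the spirit of Deb's rules; the shrinking of the violation tolerance `eps` over iterations is only hinted at, and the function below is an assumption-laden illustration rather than the paper's exact evaluation scheme.

```python
def better(f_a, viol_a, f_b, viol_b, eps=0.0):
    """Return True if candidate A is preferred over candidate B.

    f_*    : objective values (minimization)
    viol_* : total constraint violations (0 means feasible)
    eps    : relaxed feasibility tolerance, typically shrunk toward 0 over iterations
    """
    feas_a, feas_b = viol_a <= eps, viol_b <= eps
    if feas_a and feas_b:
        return f_a < f_b        # both (eps-)feasible: compare objectives
    if feas_a != feas_b:
        return feas_a           # feasible beats infeasible
    return viol_a < viol_b      # both infeasible: smaller violation wins
```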


2014 ◽  
Vol 2014 ◽  
pp. 1-19 ◽  
Author(s):  
Zahra Beheshti ◽  
Siti Mariyam Shamsuddin ◽  
Sarina Sulaiman

In recent years, particle swarm optimization (PSO) has been extensively applied to various optimization problems because of its structural and implementation simplicity. However, PSO can become trapped in local optima or converge slowly when solving complex multimodal problems. To address these issues, an improved PSO scheme called fusion global-local-topology particle swarm optimization (FGLT-PSO) is proposed in this study. The algorithm employs both global and local topologies in PSO to jump out of local optima. FGLT-PSO is evaluated using twenty (20) unimodal and multimodal nonlinear benchmark functions, and its performance is compared with several well-known PSO algorithms. The experimental results show that the proposed method improves the performance of the PSO algorithm in terms of solution accuracy and convergence speed.
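
One way the fusion of global and local topologies can be read is as a velocity update with both a ring-neighbourhood best and a swarm-wide best term; the coefficients below are illustrative assumptions, not the FGLT-PSO settings.

```python
import numpy as np

def fused_velocity(v, x, pbest, lbest, gbest, w=0.7, c1=1.5, c2=1.5, c3=1.5,
                   rng=np.random.default_rng()):
    """Velocity update pulled by personal, local-topology and global-topology bests."""
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    return (w * v
            + c1 * r1 * (pbest - x)    # cognitive term
            + c2 * r2 * (lbest - x)    # local (ring neighbourhood) social term
            + c3 * r3 * (gbest - x))   # global social term
```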


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
Guohua Wu ◽  
Witold Pedrycz ◽  
Manhao Ma ◽  
Dishan Qiu ◽  
Haifeng Li ◽  
...  

Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps a particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping-out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping-out strategy is adaptive in nature. Experimental simulations on representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.


Author(s):  
Shengyu Pei

How to solve constrained optimization problems constitutes an important part of the research on optimization problems. In this paper, a hybrid immune clonal particle swarm optimization multi-objective algorithm is proposed to solve constrained optimization problems. In the proposed algorithm, the population is first initialized using good point set theory. Then, differential evolution is adopted to improve the local optimal solution of each particle, and an immune clonal strategy is incorporated to further improve each particle. As a final step, a sub-swarm is used to enhance the position and velocity of each particle. The new algorithm has been tested on 24 standard test functions and three engineering optimization problems, and the results show that it has good performance in terms of both robustness and convergence.
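
The differential-evolution refinement step mentioned above can be sketched with classic DE/rand/1 mutation and binomial crossover; F, CR and the selection step (left to the caller) are assumptions, not the paper's exact operators.

```python
import numpy as np

def de_refine(pbest, F=0.5, CR=0.9, rng=np.random.default_rng()):
    """Produce one DE trial vector per personal best (requires at least 4 particles)."""
    n, dim = pbest.shape
    trials = pbest.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = pbest[a] + F * (pbest[b] - pbest[c])   # DE/rand/1 mutation
        cross = rng.random(dim) < CR                    # binomial crossover mask
        cross[rng.integers(dim)] = True                 # keep at least one mutant gene
        trials[i] = np.where(cross, mutant, pbest[i])
    return trials
```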


2019 ◽  
Vol 10 (1) ◽  
pp. 107-131
Author(s):  
Anuj Chandila ◽  
Shailesh Tiwari ◽  
K. K. Mishra ◽  
Akash Punhani

This article describes optimization as a process of finding the best solution among all available solutions to a problem. Many randomized algorithms have been designed to identify optimal solutions in optimization problems; among them, evolutionary programming, evolutionary strategy, genetic algorithms, particle swarm optimization, and genetic programming are widely accepted. Although a number of randomized algorithms are available in the literature for solving optimization problems, their design objectives are the same. Each algorithm has been designed to meet certain goals, such as minimizing the total number of fitness evaluations needed to capture nearly optimal solutions, capturing diverse optimal solutions in multimodal problems when needed, and avoiding local optima in multimodal problems. This article discusses a novel optimization algorithm named the Environmental Adaption Method (EAM) for solving optimization problems. EAM is designed to reduce the overall processing time for retrieving the optimal solution of a problem, to improve the quality of solutions, and particularly to avoid being trapped in local optima. The results of the proposed algorithm are compared with the latest versions of existing algorithms, such as particle swarm optimization (PSO-TVAC) and differential evolution (SADE), on benchmark functions, and the proposed algorithm proves its effectiveness over the existing algorithms in all the cases considered.


2013 ◽  
Vol 321-324 ◽  
pp. 2183-2186
Author(s):  
Zheng Bo Li

Particle Swarm Optimization (PSO) is a swarm intelligence algorithm that finds the global optimum through competition and collaboration among particles in a complex search space. The basic PSO algorithm converges slowly in the later stages of evolution and easily falls into local minima. To address these shortcomings, this paper presents a multi-learning particle swarm optimization algorithm in which each particle simultaneously follows its own best solution found so far, the best solution of a randomly chosen particle, and the global best of the whole swarm in its dimension-wise velocity update; the algorithm also applies boundary-position handling to the updates and small-scale perturbations to the global best position, in order to enhance its ability to escape from local optima. Test results on several typical functions show that the improved particle swarm algorithm significantly improves global search ability and effectively avoids premature convergence. The algorithm also markedly improves robustness with respect to the initial position in the search space when seeking the global optimum of high-dimensional optimization problems, making it suitable for solving similar problems, and the computed results can meet the requirements of practical engineering.
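
A possible reading of the multi-learning velocity update is sketched below: per dimension, a particle learns from its own best, from the best of a randomly chosen particle, and from the global best, while the global best receives a small Gaussian perturbation. The coefficients, the perturbation scale, and the exact combination are assumptions for illustration only.

```python
import numpy as np

def multi_learning_step(v, x, pbest, gbest, w=0.6, c=1.3, sigma=0.01,
                        rng=np.random.default_rng()):
    """One velocity/position update with three exemplars plus a perturbed global best."""
    n, dim = x.shape
    rand_idx = rng.integers(n, size=(n, dim))               # random exemplar per dimension
    rand_pbest = pbest[rand_idx, np.arange(dim)]            # that exemplar's best, dimension-wise
    v_new = (w * v
             + c * rng.random((n, dim)) * (pbest - x)       # own personal best
             + c * rng.random((n, dim)) * (rand_pbest - x)  # random particle's best
             + c * rng.random((n, dim)) * (gbest - x))      # global best of the swarm
    gbest_perturbed = gbest + sigma * rng.standard_normal(dim)  # small-scale perturbation
    return v_new, x + v_new, gbest_perturbed
```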

