Movement Strategies for Multi-Objective Particle Swarm Optimization

2010 ◽  
Vol 1 (3) ◽  
pp. 59-79 ◽  
Author(s):  
S. Nguyen ◽  
V. Kachitvichyanukul

Particle Swarm Optimization (PSO) is one of the most effective metaheuristic algorithms, with many successful real-world applications. The key to the success of PSO is its movement behavior, which allows the swarm to explore the search space effectively. Unfortunately, the original PSO algorithm is only suitable for single-objective optimization problems. In this paper, three movement strategies for multi-objective PSO (MOPSO) are discussed, and popular test problems are used to confirm their effectiveness. In addition, the algorithms are applied to engineering design and portfolio optimization problems. Results show that the algorithms are effective with both direct and indirect encoding schemes.
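For context, the movement behavior this abstract refers to is the canonical single-objective velocity/position update, shown below as a minimal sketch; the coefficient values and function name are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """One velocity/position update for a swarm of shape (n_particles, dim)."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    # inertia + cognitive pull toward personal bests + social pull toward the global best
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, bounds[0], bounds[1])
    return x, v
```

Multi-objective variants typically replace the single global best with a leader selected from an external archive of non-dominated solutions.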


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Seyedali Mirjalili ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which the multi-objective Particle Swarm Optimization system (MOPSO) may be interesting despite its limitations. MOPSO has been widely used for static and dynamic multi-objective optimization problems, but its potential for multi-task optimization has not been fully explored. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization algorithm (DMFPSO) for multi-task optimization. The new system has a distributed architecture over a set of sub-swarms that are dynamically constructed based on the number of optimization tasks and on each particle's skill factor. DMFPSO is designed to handle the convergence and diversity concepts separately. It uses the Beta function to provide two optimized profiles with dynamic switching behaviour: the first profile, Beta-1, is used for exploration, steering the search toward potential solutions, while the second, Beta-2, is used to enhance convergence. The system is tested on 36 benchmarks provided by the CEC'2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are performed using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, the proposal achieves the best results on most of the tested problems.
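As a loose illustration of the two ideas in this abstract, the sketch below groups particles into per-task sub-swarms by skill factor and draws a movement coefficient from one of two Beta profiles depending on the search phase. The shape parameters, the mapping of profiles to phases, and the switching rule are assumptions for illustration only, not the DMFPSO design.

```python
import numpy as np

def build_subswarms(skill_factors, n_tasks):
    """skill_factors[i] is the index of the task particle i performs best on."""
    return {t: np.where(skill_factors == t)[0] for t in range(n_tasks)}

def beta_coefficient(phase, rng):
    # Two differently skewed Beta profiles; which shape serves the exploration
    # phase and which the convergence phase is an illustrative assumption.
    a, b = (2.0, 5.0) if phase == "explore" else (5.0, 2.0)
    return rng.beta(a, b)

rng = np.random.default_rng(0)
subswarms = build_subswarms(np.array([0, 1, 0, 2, 1]), n_tasks=3)
w = beta_coefficient("explore", rng)
```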


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO) algorithm. However, the LDIW-PSO algorithm is known to suffer from premature convergence on complex (multipeak) optimization problems because particles lack sufficient momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient when its parameters are properly set. First, an experiment was conducted on commonly used benchmark global optimization problems to obtain a percentage of the search-space limits from which to compute the particle velocity limits in LDIW-PSO. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors that have previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
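A minimal sketch of the two quantities the study tunes, the linearly decreasing inertia weight and a velocity limit taken as a percentage of the search-space range, is given below; w_max = 0.9, w_min = 0.4, and pct = 0.1 are common literature defaults used here only as placeholders for the experimentally determined values.

```python
import numpy as np

def ldiw(t, t_max, w_max=0.9, w_min=0.4):
    """Inertia weight decreasing linearly from w_max to w_min over t_max iterations."""
    return w_max - (w_max - w_min) * t / t_max

def velocity_limit(lower, upper, pct=0.1):
    """V_max in each dimension as a fraction of the search-space width."""
    return pct * (np.asarray(upper) - np.asarray(lower))

# e.g. clamp velocities after each update: v = np.clip(v, -v_max, v_max)
```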


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Waqas Haider Bangyal ◽  
Abdul Hameed ◽  
Wael Alosaimi ◽  
Hashem Alyami

The particle swarm optimization (PSO) algorithm is a population-based, intelligent stochastic search technique inspired by the way a swarm collectively searches for food. PSO is widely used to solve diverse optimization problems. Population initialization is a critical factor in the PSO algorithm, as it considerably influences diversity and convergence during the search. Quasirandom (low-discrepancy) sequences are more useful for initializing the population than a plain random distribution, since they improve diversity and convergence. In this paper, PSO is extended with a new initialization technique based on the WELL generator used together with low-discrepancy sequences; the resulting algorithm, aimed at optimization problems in large-dimensional search spaces, is termed WE-PSO. The proposed solution has been verified on fifteen well-known unimodal and multimodal benchmark test problems extensively used in the literature. Moreover, the performance of WE-PSO is compared with standard PSO and with two other initialization approaches, Sobol-based PSO (SO-PSO) and Halton-based PSO (H-PSO). The findings indicate that WE-PSO outperforms the standard techniques on multimodal problems, validating the efficacy and effectiveness of the approach. In addition, the proposed approach is applied to artificial neural network (ANN) training and compared with the standard backpropagation algorithm, standard PSO, H-PSO, and SO-PSO. The proposed technique achieves higher accuracy scores and outperforms the traditional methods. The results also provide insight into how the proposed initialization technique strongly affects the quality of the cost function, convergence, and diversity.
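The sketch below contrasts population initialization from low-discrepancy sequences with a plain uniform random draw, using SciPy's quasi-Monte Carlo module for the Sobol and Halton baselines mentioned above. SciPy does not ship a WELL generator, so the WELL-based initialization of WE-PSO itself is not reproduced here; the function name and bounds are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def init_population(n, dim, lower, upper, kind="sobol", seed=0):
    """Initial particle positions of shape (n, dim) scaled to [lower, upper]."""
    if kind == "sobol":
        sample = qmc.Sobol(d=dim, scramble=True, seed=seed).random(n)
    elif kind == "halton":
        sample = qmc.Halton(d=dim, scramble=True, seed=seed).random(n)
    else:  # plain uniform random initialization for comparison
        sample = np.random.default_rng(seed).random((n, dim))
    return qmc.scale(sample, lower, upper)

# e.g. init_population(32, 10, lower=[-100] * 10, upper=[100] * 10, kind="halton")
```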


2020 ◽  
Author(s):  
Ahlem Aboud ◽  
Raja Fdhila ◽  
Amir Hussain ◽  
Adel Alimi

Distributed-architecture Particle Swarm Optimization is very useful for static optimization but has not yet been explored for solving complex dynamic multi-objective optimization problems. This study proposes a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm with two optimization levels. At the first level, all solutions are optimized in the same search space; the second level is based on a distributed architecture that uses the Pareto ranking operator for dynamic multi-swarm subdivision. The proposed approach adopts a dynamic handling strategy that uses a set of detectors to track changes in the objective functions caused by the problem's time-varying parameters at each level. To ensure timely adaptation during the optimization process, a dynamic response strategy re-evaluates all non-improved solutions, while the worst particles are replaced with newly generated ones. The convergence and diversity performance of the DPb-MOPSO algorithm is demonstrated through the Friedman analysis of variance, and the Lyapunov theorem is used to prove stability over the Inverted Generational Distance (IGD) and Hypervolume Difference (HVD) metrics. Compared to other evolutionary algorithms, the novel DPb-MOPSO is shown to be the most robust for solving complex problems over a range of changes in both the Pareto Optimal Set and the Pareto Optimal Front.
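A minimal sketch of the detector idea described here: a few sentinel solutions are re-evaluated each iteration and a change is flagged when their objective values drift. The tolerance and the way detectors are chosen are assumptions for illustration, not the DPb-MOPSO settings.

```python
import numpy as np

def environment_changed(detectors, evaluate, cached_values, tol=1e-9):
    """Re-evaluate detector solutions and compare with their cached objective vectors."""
    current = np.array([evaluate(d) for d in detectors])
    return not np.allclose(current, cached_values, atol=tol)

# When a change is flagged, the response strategy described above re-evaluates
# all non-improved solutions and replaces the worst particles with new ones.
```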


2018 ◽  
Vol 232 ◽  
pp. 03039
Author(s):  
Taowei Chen ◽  
Yiming Yu ◽  
Kun Zhao

The particle swarm optimization (PSO) algorithm has been widely applied to multi-objective optimization problems (MOPs) since it was proposed. However, PSO algorithms typically update the velocity of each particle using a single search strategy, which can make it difficult to obtain an approximate Pareto front for complex MOPs. In this paper, inspired by the theory of P systems, a multi-objective particle swarm optimization algorithm based on the framework of a membrane system (PMOPSO) is proposed to solve MOPs. Following the hierarchical structure, objects, and rules of a P system, the PSO approach is used in the elementary membranes to execute multiple search strategies, while non-dominated sorting and crowding distance are used in the skin membrane, through evolutionary rules, to improve convergence speed and maintain population diversity. Compared with other multi-objective optimization algorithms, including MOPSO, dMOPSO, SMPSO, MMOPSO, MOEA/D, SPEA2, PESA2, and NSGAII, on a series of benchmark functions, the experimental results indicate that the proposed algorithm is not only feasible and effective but also converges better to the true Pareto front.
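For reference, a sketch of the standard crowding-distance computation that the skin membrane uses to maintain diversity among non-dominated solutions (the generic NSGA-II-style formulation, not the paper's own code):

```python
import numpy as np

def crowding_distance(objectives):
    """objectives: array of shape (n_solutions, n_objectives) for one non-dominated front."""
    n, m = objectives.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])
        span = objectives[order[-1], j] - objectives[order[0], j]
        dist[order[0]] = dist[order[-1]] = np.inf  # keep boundary solutions
        if span > 0:
            dist[order[1:-1]] += (objectives[order[2:], j] -
                                  objectives[order[:-2], j]) / span
    return dist
```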


2020 ◽  
Vol 53 (4) ◽  
pp. 559-566
Author(s):  
Lakhdar Kaddouri ◽  
Amel B.H. Adamou-Mitiche ◽  
Lahcene Mitiche

Particle Swarm Optimization (PSO) is an evolutionary algorithm widely used in optimization problems. It is characterized by fast convergence, which can cause the algorithm to stagnate in local optima. In the present paper, a new Multi-PSO algorithm for the design of two-dimensional infinite impulse response (IIR) filters is built. It is based on the standard PSO and uses a new initialization strategy that relies on two types of swarms: a principal swarm and auxiliary swarms. To improve the performance of the algorithm, the search space is divided into several areas, which allows better coverage and leads to better exploration of each zone separately. This addresses the premature-convergence problem of the standard PSO. The results obtained demonstrate the effectiveness of the Multi-PSO algorithm in optimizing the filter coefficients.
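The sketch below illustrates one simple way to realize the zone-based initialization described above: the domain is split into equal zones along one coordinate and one auxiliary swarm is initialized inside each zone, while a principal swarm covers the whole space. The zone shape, swarm sizes, and choice of splitting axis are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def init_zone_swarms(lower, upper, n_zones, swarm_size, dim, seed=0):
    """Return a principal swarm plus n_zones auxiliary swarms, each of shape (swarm_size, dim)."""
    rng = np.random.default_rng(seed)
    principal = rng.uniform(lower, upper, size=(swarm_size, dim))
    edges = np.linspace(lower, upper, n_zones + 1)   # split along the first coordinate
    auxiliaries = []
    for z in range(n_zones):
        swarm = rng.uniform(lower, upper, size=(swarm_size, dim))
        swarm[:, 0] = rng.uniform(edges[z], edges[z + 1], size=swarm_size)
        auxiliaries.append(swarm)
    return principal, auxiliaries
```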


Author(s):  
Dhafar Al-Ani ◽  
Saeid Habibi

Real-world problems are often complex and frequently take the form of constrained optimization problems (COPs). This has led to growing interest in optimization techniques in which more than one objective function is optimized simultaneously. Accordingly, at the end of the multi-objective optimization process there is more than one solution to consider, which enables a trade-off among high-quality solutions and gives the decision-maker the option to choose a solution based on qualitative preferences. Particle Swarm Optimization (PSO) algorithms are increasingly being used to find accurate and robust solutions to NP-hard and constrained optimization problems with multi-objective mathematical representations. PSO is currently used in many real-world applications, including (but not limited to) medical diagnosis, image processing, speech recognition, chemical reactors, weather forecasting, system identification, reactive power control, the stock exchange market, and economic power generation. In this paper, a new version of multi-objective PSO combined with Differential Evolution (MOPSO-DE) is proposed to solve constrained optimization problems. The proposed MOPSO-DE scheme incorporates a new leader-updating mechanism, invoked when the system is at risk of converging to premature solutions, together with a parallel-islands mechanism and adaptive mutation, and integrates these with DE to update the particles' best positions in the search space. A series of experiments is conducted using 12 well-known benchmark test problems from the 2006 IEEE Congress on Evolutionary Computation (CEC2006) to verify the feasibility, performance, and effectiveness of the proposed MOPSO-DE algorithm. The simulation results show that MOPSO-DE is highly competitive and is able to obtain the optimal solutions for all test problems.
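As a sketch of the DE ingredient in such a hybrid, the snippet below applies a DE/rand/1 mutation with binomial crossover to a population of personal-best positions. The control parameters F and CR, and the decision to apply the operator to the personal bests, are illustrative assumptions about the coupling, not the authors' exact operator.

```python
import numpy as np

def de_rand_1_bin(pbest, F=0.5, CR=0.9, seed=0):
    """DE/rand/1/bin trial vectors from a (n, dim) array of personal-best positions (n >= 4)."""
    rng = np.random.default_rng(seed)
    n, dim = pbest.shape
    trials = pbest.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        mutant = pbest[r1] + F * (pbest[r2] - pbest[r3])
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True   # guarantee at least one mutated component
        trials[i, cross] = mutant[cross]
    return trials
```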


2013 ◽  
Vol 373-375 ◽  
pp. 1131-1134
Author(s):  
Wei Yi Qian ◽  
Guang Lei Liu

We propose a modified particle swarm optimization (PSO) algorithm, named SPSO, for global optimization problems. In SPSO, we introduce a crossover operator in order to increase the diversity of the swarm. The crossover operator is constructed by forming a simplex; it is applied when the diversity of the swarm falls below a threshold (denoted hlow) and continues until the diversity reaches the required value (hhigh). Six test problems are used for the numerical study. Numerical results indicate that the proposed algorithm outperforms some existing PSO variants.
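A minimal sketch of the diversity-triggered mechanism described above: swarm diversity is measured as the mean distance to the swarm centroid, and when it drops below hlow a simplex-style operator (here, reflecting the worst of three parents through the centroid of the other two) is applied until diversity recovers to hhigh. The diversity measure and reflection coefficient are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def swarm_diversity(x):
    """Mean Euclidean distance of the particles (rows of x) to the swarm centroid."""
    return np.linalg.norm(x - x.mean(axis=0), axis=1).mean()

def simplex_crossover(parents, fitness, alpha=1.0):
    """Reflect the worst of three parents through the centroid of the two better ones."""
    worst = np.argmax(fitness)                  # assuming minimization
    others = np.delete(parents, worst, axis=0)
    centroid = others.mean(axis=0)
    return centroid + alpha * (centroid - parents[worst])
```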

