An Improved Multi-Objective Particle Swarm Optimization Algorithm Based on Adaptive Local Search

2017 ◽  
Vol 8 (2) ◽  
pp. 1-29 ◽  
Author(s):  
Swapnil Prakash Kapse ◽  
Shankar Krishnapillai

This paper demonstrates a novel local search approach based on an adaptive (time-variant) search-space index, improving both the exploration ability and the diversity of multi-objective Particle Swarm Optimization. The new strategy searches for neighbourhood particles within a range that gradually increases with iterations. Particles are updated according to the rules of basic PSO, and the non-dominated particles are archived and subjected to an evolutionary update. To improve diversity, the archive is truncated based on a crowding-distance parameter. The leader is chosen among the candidates in the archive through another local search. Simulation results show that the new scheme achieves better convergence and diversity than NSGA-II, CMPSO, and SMPSO reported in the literature. Finally, the proposed algorithm is used to solve machine-design-based engineering problems from the literature and is compared with existing algorithms.
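
A minimal sketch of the time-variant neighbourhood idea described above, assuming the neighbourhood size grows linearly with the iteration count and that closeness is measured by Euclidean distance; the growth rule and the cap `k_max` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def adaptive_neighbourhood(positions, i, iteration, max_iter, k_max=10):
    """Return indices of neighbours of particle `i`, taken from a search
    range that gradually increases with the iteration count."""
    # neighbourhood size: starts small, grows linearly up to k_max
    k = max(1, int(np.ceil(k_max * (iteration + 1) / max_iter)))
    dists = np.linalg.norm(positions - positions[i], axis=1)
    # k nearest particles, excluding particle i itself
    return np.argsort(dists)[1:k + 1]

# usage: neighbours of particle 0 at iteration 30 of 100
positions = np.random.rand(20, 2)
print(adaptive_neighbourhood(positions, 0, iteration=30, max_iter=100))
```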

2018 ◽  
Vol 9 (4) ◽  
pp. 71-96 ◽  
Author(s):  
Swapnil Prakash Kapse ◽  
Shankar Krishnapillai

This article demonstrates the implementation of a novel local search approach based on a Utopia-point-guided search, which improves the exploration ability of multi-objective Particle Swarm Optimization. The strategy searches for the best particles by seeking solutions closer to the Utopia point, thus improving convergence to the Pareto-optimal front. The elite non-dominated particles are stored in an archive that is updated at every iteration using a least-crowding-distance criterion. The leader is chosen among the candidates in the archive using the same guided search. Simulation results on many benchmark tests show that the new algorithm gives better convergence and diversity than several existing algorithms such as NSGA-II, CMOPSO, SMPSO, PSNS, DE+MOPSO and AMALGAM. Finally, the proposed algorithm is used to solve mechanical-design-based multi-objective optimization problems from the literature, where it shows the same advantages.
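
A minimal sketch of Utopia-point-guided leader selection, assuming the Utopia point is taken as the vector of per-objective minima over the archive and that distances are measured in normalised objective space (both are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def select_leader_utopia(archive_objs):
    """Pick the archive member closest (in normalised objective space)
    to the Utopia point, i.e. the vector of per-objective minima."""
    objs = np.asarray(archive_objs, dtype=float)
    utopia = objs.min(axis=0)                  # best value of each objective
    span = objs.max(axis=0) - utopia
    span[span == 0.0] = 1.0                    # avoid division by zero
    dist = np.linalg.norm((objs - utopia) / span, axis=1)
    return int(np.argmin(dist))                # index of the chosen leader

# usage: the middle member is closest to the Utopia point (0.1, 0.1)
print(select_leader_utopia([[0.1, 0.9], [0.4, 0.4], [0.9, 0.1]]))  # -> 1
```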


Water ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1334 ◽
Author(s):  
Mohamed R. Torkomany ◽  
Hassan Shokry Hassan ◽  
Amin Shoukry ◽  
Ahmed M. Abdelrazek ◽  
Mohamed Elkholy

The scarcity of water resources nowadays puts pressure on researchers to develop strategies that make the best use of the currently available resources. One such strategy is ensuring that reliable and near-optimum designs of water distribution systems (WDSs) are achieved. Designing WDSs is a discrete combinatorial NP-hard optimization problem, and its complexity increases when more objectives are added. Among the many existing evolutionary algorithms, a new hybrid fast-convergent multi-objective particle swarm optimization (MOPSO) algorithm is developed to increase the convergence and diversity rates of the resulting non-dominated solutions, in terms of network capital cost and reliability, within a minimized computational budget. Several strategies are introduced in the developed algorithm: self-adaptive PSO parameters, regeneration-on-collision, adaptive population size, and hypervolume-based selection of repository members. A local search method is also coupled to both the original MOPSO algorithm and the newly developed one. Both algorithms are applied to medium and large benchmark problems. The results of the new algorithm coupled with the local search are superior to those of the original algorithm in terms of different performance metrics in the medium-sized network. In contrast, the new algorithm without the local search performed better in the large network.
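
A minimal sketch of hypervolume-based repository selection for the two-objective case, assuming both objectives are posed for minimisation (e.g. capital cost and a reliability deficit): each member's exclusive hypervolume contribution is computed, and the least-contributing members are the first candidates for removal when the repository overflows. The reference point and the 2-D-only formula are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hv_contributions_2d(front, ref_point):
    """Exclusive hypervolume contribution of each point on a 2-D
    non-dominated front (both objectives minimised)."""
    front = np.asarray(front, dtype=float)
    order = np.argsort(front[:, 0])            # sort by the first objective
    f = front[order]
    contrib = np.empty(len(f))
    for i in range(len(f)):
        right = f[i + 1, 0] if i + 1 < len(f) else ref_point[0]
        upper = f[i - 1, 1] if i > 0 else ref_point[1]
        contrib[i] = (right - f[i, 0]) * (upper - f[i, 1])
    out = np.empty(len(front))
    out[order] = contrib                       # map back to the input order
    return out

# usage: exclusive contributions for a small three-point front
print(hv_contributions_2d([[1.0, 5.0], [2.0, 4.5], [4.0, 1.0]], ref_point=(6.0, 6.0)))
```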


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Seyedali Mirjalili ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which multi-objective Particle Swarm Optimization (MOPSO) may be of interest despite its limitations. MOPSO has been widely used in static and dynamic multi-objective optimization problems, while its potential for multi-task optimization has not been fully explored. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization algorithm (DMFPSO) for multi-task optimization. The new system has a distributed architecture over a set of sub-swarms that are dynamically constructed based on the number of optimization tasks, with particles assigned through their skill factors. DMFPSO is designed to handle the convergence and diversity concepts separately. It uses the Beta function to provide two optimized profiles with a dynamic switching behaviour: the first profile, Beta-1, is used for exploration, driving the search toward potential solutions, while the second, Beta-2, is used to enhance convergence. The system is tested on 36 benchmarks provided by the CEC'2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are made using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, the proposal obtains the best results on most of the tested problems.
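
A minimal sketch of the two-profile Beta idea, assuming one Beta shape skewed toward larger coefficient values for exploratory moves and another skewed toward smaller values for convergence, with a simple progress-based switch; the shape parameters and switching rule are guesses for illustration, not the DMFPSO settings:

```python
import numpy as np

def beta_coefficient(progress, explore=(5.0, 2.0), converge=(2.0, 5.0),
                     switch=0.5, rng=None):
    """Draw a step coefficient from one of two Beta profiles:
    the 'explore' profile early in the run (progress in [0, 1)),
    the 'converge' profile after the switching point."""
    rng = rng or np.random.default_rng()
    a, b = explore if progress < switch else converge
    return rng.beta(a, b)

# usage: typically larger steps early (exploration), smaller steps late (convergence)
print(beta_coefficient(0.1), beta_coefficient(0.9))
```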


2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740073 ◽  
Author(s):  
Song Huang ◽  
Yan Wang ◽  
Zhicheng Ji

Multi-objective optimization problems (MOPs) frequently need to be solved in the real world. In this paper, a multi-objective particle swarm optimization algorithm based on a Pareto set and an aggregation approach is proposed to deal with MOPs. Firstly, velocities and positions are updated as in basic PSO. Then, a global-best set is defined in the particle swarm optimizer to preserve the Pareto-based set obtained by the population. Specifically, a hybrid updating strategy based on the Pareto set and the aggregation approach is introduced to update the global-best set, and a local search is carried out on it. Thirdly, personal-best positions are updated in a decomposition way, and the global-best position is selected from the global-best set. Finally, ZDT and DTLZ instances are selected to evaluate the performance of the proposed algorithm (MULPSO), and the results show its validity for MOPs.
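
A minimal sketch of a decomposition-style personal-best update, assuming a Tchebycheff aggregation with a per-particle weight vector and a global ideal point; the specific aggregation function is an assumption, since the abstract only states that personal bests are updated "in a decomposition way":

```python
import numpy as np

def tchebycheff(objs, weight, ideal):
    """Tchebycheff aggregation of an objective vector for one weight vector."""
    return np.max(weight * np.abs(np.asarray(objs, dtype=float) - ideal))

def update_pbest(pbest_objs, new_objs, weight, ideal):
    """Keep whichever of the two solutions has the smaller aggregated value."""
    if tchebycheff(new_objs, weight, ideal) < tchebycheff(pbest_objs, weight, ideal):
        return new_objs
    return pbest_objs

# usage: a particle with weight (0.5, 0.5) and ideal point (0, 0)
print(update_pbest([0.6, 0.4], [0.3, 0.5], np.array([0.5, 0.5]), np.zeros(2)))
```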


2020 ◽  
Author(s):  
Ahlem Aboud ◽  
Raja Fdhila ◽  
Amir Hussain ◽  
Adel Alimi

Distributed-architecture-based Particle Swarm Optimization is very useful for static optimization but has not yet been explored for solving complex dynamic multi-objective optimization problems. This study proposes a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm with two optimization levels. In the first level, all solutions are optimized in the same search space, while the second level is based on a distributed architecture that uses the Pareto ranking operator for dynamic multi-swarm subdivision. The proposed approach adopts a dynamic handling strategy that uses a set of detectors to keep track of changes in the objective functions caused by the problem's time-varying parameters at each level. To ensure timely adaptation during the optimization process, a dynamic response strategy re-evaluates all non-improved solutions, while the worst particles are replaced with newly generated ones. The convergence and diversity performance of the DPb-MOPSO algorithm is assessed through a Friedman Analysis of Variance, and the Lyapunov theorem is used for stability analysis over the Inverted Generational Distance (IGD) and Hypervolume Difference (HVD) metrics. Compared to other evolutionary algorithms, the novel DPb-MOPSO is shown to be the most robust for solving complex problems over a range of changes in both the Pareto Optimal Set and the Pareto Optimal Front.
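
A minimal sketch of the detector idea described above: a small set of detector solutions is re-evaluated each iteration, and a change in the time-varying objective function is flagged when any cached value no longer matches. The tolerance and the choice of detector set are illustrative assumptions, not the DPb-MOPSO implementation:

```python
import numpy as np

def environment_changed(detectors, objective_fn, cached_objs, tol=1e-9):
    """Re-evaluate detector solutions and flag a change when any freshly
    computed objective vector differs from its cached value."""
    for x, old in zip(detectors, cached_objs):
        new = np.asarray(objective_fn(x), dtype=float)
        if np.any(np.abs(new - np.asarray(old, dtype=float)) > tol):
            return True     # environment changed: trigger the response strategy
    return False

# usage: a toy time-varying objective (here, shifted between checks)
f_t0 = lambda x: [np.sum(x**2), np.sum((x - 1.0)**2)]
f_t1 = lambda x: [np.sum((x - 0.5)**2), np.sum((x - 1.0)**2)]
detectors = [np.zeros(3), np.ones(3)]
cached = [f_t0(x) for x in detectors]
print(environment_changed(detectors, f_t1, cached))  # -> True
```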


2014 ◽  
Vol 2014 ◽  
pp. 1-23 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of selections varies with the dimension of the optimization problem, and each selected particle donates the value stored at a randomly selected dimension of its personal best. After the potential particle position is constructed, a local search is performed in its neighbourhood and the result is compared with the current swarm global best position, which it replaces if found to be better; otherwise no replacement is made. Numerical simulations on well-studied benchmark problems with low and high dimensions were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants, two of which implement different local search techniques while the other two do not. The results show that the improved algorithms obtain better-quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants.
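
A minimal sketch of the collective construction step described above, assuming the candidate position is filled one dimension at a time, with each randomly selected particle donating the value stored at a randomly selected dimension of its personal best; how donated values map onto the candidate's dimensions is an interpretation of the text, not the authors' exact procedure:

```python
import numpy as np

def construct_candidate(pbest_positions, rng=None):
    """Collectively build a candidate position: for each dimension of the
    problem, a randomly chosen particle donates the value at a randomly
    chosen dimension of its personal best."""
    rng = rng or np.random.default_rng()
    pbest = np.asarray(pbest_positions, dtype=float)
    n_particles, dim = pbest.shape
    candidate = np.empty(dim)
    for d in range(dim):
        donor = rng.integers(n_particles)      # randomly selected particle
        donor_dim = rng.integers(dim)          # its randomly selected dimension
        candidate[d] = pbest[donor, donor_dim]
    return candidate

# usage: build one candidate from a swarm of 10 personal bests in 5 dimensions
print(construct_candidate(np.random.rand(10, 5)))
```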

