Dynamic Neighborhood-Based Particle Swarm Optimization for Multimodal Problems

2020
Vol 2020
pp. 1-13
Author(s):
Xu-Tao Zhang
Biao Xu
Wei Zhang
Jun Zhang
Xin-fang Ji

Many real-world black-box optimization problems can be classified as multimodal optimization problems. Neighborhood information plays an important role in improving the performance of an evolutionary algorithm on such problems. In view of this, we propose a particle swarm optimization algorithm based on dynamic neighborhoods to solve multimodal optimization problems. In this paper, a dynamic ε-neighborhood selection mechanism is first defined to balance the exploration and exploitation of the algorithm. Then, based on the information provided by the neighborhoods, four different particle position updating strategies are designed to further support the algorithm's exploration and exploitation of the search space. Finally, the proposed algorithm is compared with 7 state-of-the-art multimodal algorithms on 8 benchmark instances. The experimental results reveal that the proposed algorithm is superior to the compared ones and is an effective method for tackling multimodal optimization problems.
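
As a rough illustration of the ε-neighborhood idea, the sketch below forms each particle's neighborhood from the swarm members lying within a distance ε that shrinks over the run, so early iterations favour exploration and later ones exploitation. The function names, the Euclidean distance, and the linear ε schedule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def epsilon_neighbors(positions, i, eps):
    """Indices of particles within Euclidean distance eps of particle i (assumed metric)."""
    dist = np.linalg.norm(positions - positions[i], axis=1)
    return np.where(dist <= eps)[0]

def neighborhood_best(positions, fitness, i, eps):
    """Best position inside particle i's eps-neighborhood (minimization assumed)."""
    idx = epsilon_neighbors(positions, i, eps)
    return positions[idx[np.argmin(fitness[idx])]]

def eps_schedule(t, t_max, eps_max=1.0, eps_min=0.05):
    """Illustrative linear shrinkage: large eps early (exploration), small eps late (exploitation)."""
    return eps_max - (eps_max - eps_min) * t / t_max
```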

2021
Author(s):
Ahlem Aboud
Nizar Rokbani
Seyedali Mirjalili
Abdulrahman M. Qahtani
Omar Almutiry
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which the Multi-Objective Particle Swarm Optimization (MOPSO) framework may be of interest despite its limitations. MOPSO has been widely used for static and dynamic multi-objective optimization problems, while its potential for multi-task optimization has not been fully explored. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization algorithm (DMFPSO) for multi-task optimization. The new system has a distributed architecture built on a set of sub-swarms that are dynamically constructed based on the number of optimization tasks and each particle's skill factor. DMFPSO is designed to handle the convergence and diversity concepts separately. It uses the Beta function to provide two optimized profiles with a dynamic switching behaviour: the first profile, Beta-1, is used for exploration, which aims to drive the search toward potential solutions, while the second, Beta-2, is used for convergence enhancement. The new system is tested on 36 benchmarks provided by the CEC'2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are made using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, the proposal achieves the best results on most of the tested problems.
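
The abstract does not give the exact Beta-1/Beta-2 profiles, so the following is only a minimal sketch of how two Beta distributions could supply a step coefficient with a dynamic switch between an exploration-oriented profile and a convergence-oriented one; the shape parameters and the half-run switching rule are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_profile(explore=True):
    """Draw a step coefficient from one of two Beta distributions.
    Shape parameters are illustrative placeholders: Beta(5, 2) is skewed toward 1
    (larger, exploratory steps), Beta(2, 5) toward 0 (smaller, convergence-oriented steps)."""
    a, b = (5.0, 2.0) if explore else (2.0, 5.0)
    return rng.beta(a, b)

def step_coefficient(t, t_max):
    """Assumed switching rule: explore during the first half of the run, then converge."""
    return beta_profile(explore=(t < t_max / 2))
```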


2013
Vol 2013
pp. 1-12
Author(s):
Martins Akugbe Arasomwan
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence on complex (multipeak) optimization problems because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to obtain a percentage of the search-space limits for computing the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which had previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
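
For reference, a minimal sketch of the LDIW-PSO velocity update is given below: the inertia weight decreases linearly from a start to an end value, and velocities are clamped to limits derived from the search-space range. The values 0.9/0.4 and c1 = c2 = 2.0 are commonly cited defaults, not necessarily the settings obtained in the paper.

```python
import numpy as np

def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    """Linear decreasing inertia weight: w_start at t = 0 down to w_end at t = t_max."""
    return w_start - (w_start - w_end) * t / t_max

def velocity_update(v, x, pbest, gbest, t, t_max, c1=2.0, c2=2.0, v_max=None, rng=None):
    """Standard LDIW-PSO velocity update; v_max would be set as a percentage of the
    search-space limits, as the paper investigates (the percentage itself is not assumed here)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = ldiw(t, t_max) * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if v_max is not None:
        v_new = np.clip(v_new, -v_max, v_max)  # clamp velocities to the assumed limits
    return v_new
```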


2014
Vol 1044-1045
pp. 1418-1423
Author(s):  
Pasura Aungkulanon

The machining optimization problem aims to optimize machining conditions, which are important for economical settings. Effective methods for solving such problems with a finite sequence of instructions can be categorized into two groups: exact optimization algorithms and meta-heuristic algorithms. A well-known meta-heuristic approach, the Harmony Search Algorithm, was compared with Particle Swarm Optimization. We implemented and analysed both algorithms on unconstrained problems under different conditions, including single-peak, multi-peak, and curved-ridge optimization functions, as well as a machining optimization problem. The computational results demonstrated that the proposed Particle Swarm Optimization produced better outcomes in terms of the mean and variance of process yields.


Author(s):  
Jenn-Long Liu

Particle swarm optimization (PSO) is a promising evolutionary approach in which each particle moves over the search space with a velocity that is adjusted according to the flying experience of the particle and its neighbors, so that the particle flies toward better and better search areas over the course of the search process. Although PSO is effective in solving global optimization problems, several crucial user-input parameters, such as the cognitive and social learning rates, affect the performance of the algorithm, since the search process of a PSO algorithm is nonlinear and complex. Consequently, a PSO with well-selected parameter settings can achieve good performance. This work develops an evolving PSO that uses Clerc's PSO to evaluate the fitness of the objective function and a genetic algorithm (GA) to evolve the optimal design parameters for use in the PSO. The crucial design parameters studied herein include the cognitive and social learning rates as well as the constriction factor of Clerc's PSO. Several benchmark cases are run to derive a set of optimal parameters via the evolving PSO. Furthermore, the resulting parameters are applied to the engineering optimization of a pressure vessel design.
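
A brief sketch of Clerc's constriction-factor update, the variant whose parameters the evolving PSO tunes, is shown below; c1 = c2 = 2.05 are the commonly quoted defaults rather than the GA-evolved values reported in the work.

```python
import math
import numpy as np

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc's constriction coefficient chi, defined for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def clerc_velocity_update(v, x, pbest, gbest, c1=2.05, c2=2.05, rng=None):
    """Constriction-factor PSO velocity update (Clerc & Kennedy form)."""
    rng = rng or np.random.default_rng()
    chi = constriction_factor(c1, c2)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```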


2015
Vol 2015
pp. 1-15
Author(s):
Oussama Ait Sahed
Kamel Kara
Mohamed Laid Hadjili

A fuzzy predictive controller using a particle swarm optimization (PSO) approach is proposed. The aim is to develop an efficient algorithm that can handle a relatively complex optimization problem with minimal computational time. This is achieved by using a reduced population size and a small number of iterations. In this algorithm, instead of using a uniform distribution as in the conventional PSO algorithm, the initial particle positions are distributed according to a normal distribution within the area around the best position. The radius limiting this area is adaptively changed according to the tracking error values. Moreover, the choice of the initial best position is based on prior knowledge about the search-space landscape and the fact that, in most practical applications, the changes in the dynamic optimization problem are gradual. The efficiency of the proposed control algorithm is evaluated on the control of a model of a 4 × 4 Multi-Input Multi-Output industrial boiler. This model is nonlinear with strong interactions between its inputs and outputs, exhibits nonminimum-phase behaviour, and contains instabilities and time delays. The obtained results are compared with those of control algorithms based on the conventional PSO and the linear approach.
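
A minimal sketch of the normal-distribution initialization described above might look as follows; the names, the use of the standard deviation as the "radius", and the clipping to bounds are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def init_swarm_around_best(best_pos, radius, n_particles, bounds, rng=None):
    """Sample initial particle positions from a normal distribution centred on the
    previous best position; 'radius' (used here as the standard deviation) would be
    adapted from the tracking error in the controller."""
    rng = rng or np.random.default_rng()
    low, high = bounds
    pos = rng.normal(loc=best_pos, scale=radius, size=(n_particles, best_pos.size))
    return np.clip(pos, low, high)  # keep particles inside the search space
```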


2014
Vol 989-994
pp. 2621-2624
Author(s):
Shao Song Wan
Jian Cao
Qun Song Zhu

To resolve these problems, we put forward a new design for an intelligent lock based mainly on wireless sensor network technology. Particle swarm optimization (PSO) is a recently proposed intelligent algorithm motivated by swarm intelligence. PSO has been shown to perform well on many benchmark and real-world optimization problems, but it easily falls into local optima when solving complex multimodal problems. To avoid local optima, the algorithm renews the population and enhances its diversity by using the density calculation from immune theory and by adjusting a new chaos sequence. The paper gives the circuit diagram of the single-chip-based hardware components and describes the software design. The experimental results show that the immune genetic algorithm based on chaos theory can find the optimization result and evidently improves the convergence speed and stability.
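
The abstract does not specify which chaotic map generates the chaos sequence; the logistic map below is a common choice and is shown purely as an assumed illustration of how such a sequence could be produced for diversity enhancement.

```python
def logistic_chaos_sequence(x0=0.37, length=50, mu=4.0):
    """Chaotic sequence from the logistic map x_{n+1} = mu * x_n * (1 - x_n).
    x0 should avoid values such as 0, 0.25, 0.5, 0.75 and 1, which collapse
    onto fixed points or short cycles."""
    seq, x = [], x0
    for _ in range(length):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq
```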


2020
Author(s):  
Ahlem Aboud
Raja Fdhila
Amir Hussain
Adel Alimi

Distributed-architecture-based Particle Swarm Optimization is very useful for static optimization but has not yet been explored for solving complex dynamic multi-objective optimization problems. This study proposes a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm with two optimization levels. In the first level, all solutions are optimized in the same search space; the second level is based on a distributed architecture that uses the Pareto ranking operator for dynamic multi-swarm subdivision. The proposed approach adopts a dynamic handling strategy that uses a set of detectors to track changes in the objective functions caused by the problem's time-varying parameters at each level. To ensure timely adaptation during the optimization process, a dynamic response strategy re-evaluates all non-improved solutions, while the worst particles are replaced with newly generated ones. The convergence and diversity performance of the DPb-MOPSO algorithm is assessed through Friedman analysis of variance, and the Lyapunov theorem is used to prove stability with respect to the Inverted Generational Distance (IGD) and Hypervolume Difference (HVD) metrics. Compared to other evolutionary algorithms, the novel DPb-MOPSO is shown to be the most robust for solving complex problems over a range of changes in both the Pareto Optimal Set and the Pareto Optimal Front.
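
As a rough sketch of the detector-based change handling described above, the snippet below re-evaluates a fixed set of detector solutions and flags a change when any objective value drifts from its cached value; the function names, tolerance, and single-objective signature are illustrative assumptions.

```python
import numpy as np

def environment_changed(detectors, objective, cached_values, tol=1e-9):
    """Re-evaluate detector solutions and compare against cached objective values;
    any difference beyond tol signals that the time-varying problem has changed."""
    new_values = np.array([objective(d) for d in detectors])
    return bool(np.any(np.abs(new_values - cached_values) > tol))
```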


2013
Vol 2013
pp. 1-12
Author(s):
Jingzheng Yao
Duanfeng Han

Barebones particle swarm optimization (BPSO) is a new PSO variant that has shown good performance on many optimization problems. However, similar to the standard PSO, BPSO also suffers from premature convergence when solving complex optimization problems. In order to improve the performance of BPSO, this paper proposes a new BPSO variant called BPSO with neighborhood search (NSBPSO) to achieve a tradeoff between exploration and exploitation during the search process. Experiments are conducted on twelve benchmark functions and a real-world ship design problem. Simulation results demonstrate that our approach outperforms the standard PSO, BPSO, and six other improved PSO algorithms.
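
For context, the barebones PSO position update referenced above can be sketched as follows: each coordinate is sampled from a Gaussian whose mean and spread are derived from the personal and global best positions, so no velocity or inertia parameters are needed. The neighborhood-search extension of NSBPSO itself is not reproduced here.

```python
import numpy as np

def bbpso_position(pbest, gbest, rng=None):
    """Barebones PSO sampling: mean (pbest + gbest) / 2, std |pbest - gbest| per coordinate."""
    rng = rng or np.random.default_rng()
    return rng.normal((pbest + gbest) / 2.0, np.abs(pbest - gbest))
```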

