Repulsive Self-Adaptive Acceleration Particle Swarm Optimization Approach

2014 ◽  
Vol 4 (3) ◽  
pp. 189-204 ◽  
Author(s):  
Simone A. Ludwig

Adaptive Particle Swarm Optimization (PSO) variants have become popular in recent years. The main idea of these adaptive PSO variants is that they change their search behavior during the optimization process based on information gathered during the run. Adaptive PSO variants have been shown to solve a wide range of difficult optimization problems efficiently and effectively. In this paper we propose a Repulsive Self-adaptive Acceleration PSO (RSAPSO) variant that adaptively optimizes the velocity weights of every particle at every iteration. The velocity weights include the acceleration constants as well as the inertia weight, which are responsible for the balance between exploration and exploitation. Our proposed RSAPSO variant optimizes the velocity weights that are then used to search for the optimal solution of the problem (e.g., a benchmark function). We compare RSAPSO to four known adaptive PSO variants (decreasing weight PSO, time-varying acceleration coefficients PSO, guaranteed convergence PSO, and attractive and repulsive PSO) on twenty benchmark problems. The results show that RSAPSO achieves better results than the known PSO variants on difficult optimization problems that require large numbers of function evaluations.
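
The velocity weights mentioned above appear in the standard PSO update. The minimal sketch below writes that update with the inertia weight and acceleration constants held per particle, which is the quantity RSAPSO adapts; the repulsive mechanism and the self-adaptive rule that tunes these weights are specific to the paper and are not reproduced here.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w, c1, c2, bounds):
    """One PSO update with per-particle velocity weights.

    pos, vel, pbest : arrays of shape (n_particles, dim)
    gbest           : array of shape (dim,)
    w, c1, c2       : arrays of shape (n_particles, 1); an RSAPSO-style
                      variant would adapt these between iterations.
    """
    n, dim = pos.shape
    r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[0], bounds[1])
    return pos, vel
```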

2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
An Liu ◽  
Erwie Zahara ◽  
Ming-Ta Yang

Ordinary differential equations usefully describe the behavior of a wide range of dynamic physical systems. The particle swarm optimization (PSO) method has been considered an effective tool for solving engineering optimization problems involving ordinary differential equations. This paper proposes a modified hybrid Nelder-Mead simplex search and particle swarm optimization (M-NM-PSO) method for solving parameter estimation problems. The M-NM-PSO method improves on the PSO method and the conventional NM-PSO method through faster convergence and better objective function values. Three well-known cases are studied, and the solutions of the M-NM-PSO method are compared with those of other methods published in the literature. The results demonstrate that the proposed M-NM-PSO method yields better estimates than the genetic algorithm, the modified genetic algorithm (real-coded GA (RCGA)), the conventional particle swarm optimization (PSO) method, and the conventional NM-PSO method.
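
As a rough illustration of how a Nelder-Mead/PSO hybrid can be organised, the sketch below runs a plain global-best PSO loop and polishes the incumbent best point with SciPy's Nelder-Mead simplex once per iteration. The weight values, the polish budget, and the interleaving are illustrative assumptions and do not reproduce the M-NM-PSO scheme itself.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_nm_pso(f, bounds, n_particles=30, iters=100, seed=0):
    """Generic Nelder-Mead + PSO hybrid: swarm exploration plus a simplex
    polish of the best point each iteration (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pos = rng.uniform(lo, hi, (n_particles, lo.size))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([f(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fvals = np.array([f(p) for p in pos])
        better = fvals < pbest_f
        pbest[better], pbest_f[better] = pos[better], fvals[better]
        # Nelder-Mead polish of the current best: the hybridization step.
        idx = pbest_f.argmin()
        res = minimize(f, pbest[idx], method="Nelder-Mead",
                       options={"maxiter": 50})
        if res.fun < pbest_f[idx]:
            pbest[idx] = np.clip(res.x, lo, hi)
            pbest_f[idx] = f(pbest[idx])   # re-evaluate after clipping
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, f(gbest)
```

For example, `hybrid_nm_pso(lambda x: float(np.sum(x**2)), ([-5.0] * 5, [5.0] * 5))` should return a point close to the origin.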


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Seyedali Mirjalili ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which the multi-objective Particle Swarm Optimization system (MOPSO) may be interesting despite its limitations. MOPSO has been widely used in static and dynamic multi-objective optimization problems, while its potential for multi-task optimization has not been fully explored. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization algorithm (DMFPSO) for multi-task optimization. The new system has a distributed architecture over a set of sub-swarms that are dynamically constructed based on the number of optimization tasks, with particles assigned according to their skill factors. DMFPSO is designed to handle convergence and diversity separately. It uses the Beta function to provide two optimized profiles with a dynamic switching behaviour. The first profile, Beta-1, is used for exploration, which aims to explore the search space toward potential solutions, while the second, Beta-2, is used to enhance convergence. The system is tested on the 36 benchmarks provided by the CEC'2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are made using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, this proposal has the best results on most tested problems.
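
The distributed, skill-factor-based architecture can be pictured with a small helper that partitions the swarm into per-task sub-swarms. The particle representation below (a dict carrying a 'skill_factor' field) is a hypothetical simplification; the paper's data structures, and the Beta-1/Beta-2 switching logic built on top of the sub-swarms, are not reproduced here.

```python
from collections import defaultdict

def group_by_skill_factor(particles):
    """Partition particles into per-task sub-swarms using their skill factor,
    i.e. the task each particle is currently assigned to (illustrative
    representation only)."""
    sub_swarms = defaultdict(list)
    for p in particles:
        sub_swarms[p["skill_factor"]].append(p)
    return dict(sub_swarms)

# Example: three particles spread over two tasks.
swarm = [{"id": 0, "skill_factor": 0},
         {"id": 1, "skill_factor": 1},
         {"id": 2, "skill_factor": 0}]
print(group_by_skill_factor(swarm))  # {0: [particles 0 and 2], 1: [particle 1]}
```

On top of such a grouping, the two Beta profiles described in the abstract would then switch each sub-swarm between exploration and convergence behaviour.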


Author(s):  
Jiten Makadia ◽  
C.D. Sankhavara

Swarm intelligence algorithms such as PSO (Particle Swarm Optimization), ACO (Ant Colony Optimization), ABC (Artificial Bee Colony), and Glow-worm Swarm Optimization have been utilized by researchers to solve optimization problems. This work presents the application of a novel modified EHO (Elephant Herding Optimization) to the cost optimization of a shell and tube heat exchanger. A comparison of the results obtained by EHO on two benchmark problems shows that they are superior to those obtained with the genetic algorithm and particle swarm optimization. The overall cost reductions are 13.3% and 9.68% for the two benchmark problems, respectively, compared to PSO. The results indicate that EHO can be effectively utilized for solving real-life optimization problems.
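
For context, the sketch below shows one iteration of the canonical EHO operators (clan updating followed by the separating operator) as they are commonly published. The authors' modification and the shell-and-tube cost model are not reproduced, and the α and β values are illustrative.

```python
import numpy as np

def eho_step(clans, fitness, alpha=0.5, beta=0.1, bounds=(-10.0, 10.0), rng=None):
    """One iteration of canonical Elephant Herding Optimization.

    clans   : list of arrays, each of shape (clan_size, dim)
    fitness : callable mapping a 1-D position to a scalar cost (minimized)
    """
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    new_clans = []
    for clan in clans:
        costs = np.array([fitness(x) for x in clan])
        matriarch = clan[costs.argmin()]
        center = clan.mean(axis=0)
        # Clan-updating operator: move each elephant toward the matriarch.
        updated = clan + alpha * (matriarch - clan) * rng.random(clan.shape)
        # The matriarch is reset to a scaled clan center (x = beta * center).
        updated[costs.argmin()] = beta * center
        # Separating operator: the worst elephant is replaced at random.
        updated[costs.argmax()] = rng.uniform(lo, hi, clan.shape[1])
        new_clans.append(np.clip(updated, lo, hi))
    return new_clans
```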


Author(s):  
T. O. Ting ◽  
H. C. Ting ◽  
T. S. Lee

In this work, a hybrid Taguchi-Particle Swarm Optimization (TPSO) is proposed to solve global numerical optimization problems with continuous and discrete variables. This hybrid algorithm combines the well-known Particle Swarm Optimization algorithm with the established Taguchi method, which has long been an important tool for robust design. This paper presents the improvements obtained despite the simplicity of the hybridization process. The Taguchi method is run only once in every PSO iteration and therefore adds no significant computational cost. The method creates a more diversified population, which also helps avoid premature convergence. The proposed method is effectively applied to 13 benchmark problems. The results show drastic improvements over the standard PSO algorithm on high-dimensional benchmark functions with continuous and discrete variables.
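
To make the hybridisation idea concrete, the snippet below sketches a greatly simplified factor-wise recombination of two particles: each dimension is treated as a two-level factor and the level that lowers the cost is kept. A faithful Taguchi step would evaluate level combinations through an orthogonal array and select levels by signal-to-noise ratio; this greedy stand-in only shows where such a step would sit inside the PSO loop.

```python
import numpy as np

def taguchi_like_recombination(f, x_a, x_b):
    """Simplified stand-in for the Taguchi step: for each dimension (factor),
    keep the value from x_a or x_b that yields the lower cost when swapped in
    one at a time. Not a true orthogonal-array experiment."""
    child = x_a.copy()
    best = f(child)
    for d in range(child.size):
        trial = child.copy()
        trial[d] = x_b[d]
        cost = f(trial)
        if cost < best:
            child, best = trial, cost
    return child, best

# In a TPSO-style hybrid, a recombination of this kind would be applied once
# per PSO iteration to two good particles and the result injected back into
# the swarm to diversify it.
```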


2020 ◽  
Vol 2020 ◽  
pp. 1-26
Author(s):  
Wusi Yang ◽  
Li Chen ◽  
Yi Wang ◽  
Maosheng Zhang

The recently proposed multiobjective particle swarm optimization algorithm based on a competition mechanism cannot effectively deal with many-objective optimization problems; it suffers from relatively poor convergence and diversity and long runtimes. In this paper, a novel multi/many-objective particle swarm optimization algorithm based on a competition mechanism is proposed, which maintains population diversity using the maximum and minimum angles between ordinary and extreme individuals. The recently proposed θ-dominance is adopted to further enhance the performance of the algorithm. The proposed algorithm is evaluated on the standard benchmark problems DTLZ, WFG, and UF1-9 and compared with four recently proposed multiobjective particle swarm optimization algorithms and four state-of-the-art many-objective evolutionary optimization algorithms. The experimental results indicate that the proposed algorithm has better convergence and diversity, and its performance is superior to the other compared algorithms on most test instances.
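
The angle-based diversity idea can be illustrated with a helper that computes pairwise angles between objective vectors after translating them by the ideal point. The formulation below is an assumption for illustration; how the algorithm distinguishes ordinary from extreme individuals and uses the maximum and minimum angles follows the paper.

```python
import numpy as np

def objective_angles(objs, ideal=None):
    """Pairwise angles (in radians) between individuals' objective vectors,
    measured after translation by the ideal point (component-wise minimum
    by default)."""
    objs = np.asarray(objs, dtype=float)
    ideal = objs.min(axis=0) if ideal is None else np.asarray(ideal, dtype=float)
    v = objs - ideal
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    unit = v / np.where(norms == 0.0, 1.0, norms)
    cosines = np.clip(unit @ unit.T, -1.0, 1.0)
    return np.arccos(cosines)
```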


2008 ◽  
Author(s):  
Nelendran Pillay

Linear control systems can be easily tuned using classical tuning techniques such as the Ziegler-Nichols and Cohen-Coon tuning formulae. Empirical studies have found that these conventional tuning methods give unsatisfactory control performance when used for processes subject to the destabilizing effects of strong nonlinearities. For this reason, control practitioners often prefer to tune nonlinear systems by trial and error or intuition. A need therefore exists for a tuning technique applicable to the wide range of control loops that do not respond satisfactorily to conventional tuning. Emerging technologies such as Swarm Intelligence (SI) have been utilized to solve many nonlinear engineering problems. Particle Swarm Optimization (PSO), developed by Eberhart and Kennedy (1995), is a sub-field of SI and was inspired by swarming patterns occurring in nature, such as flocking birds. Each individual exchanges its previous experience, so knowledge of the best position attained by any individual becomes globally known. In this study, identifying the PID controller parameters is treated as an optimization problem, and the PSO technique is used to determine them. A wide range of typical process models commonly encountered in industry is used to assess the efficacy of the PSO methodology. Comparisons are made between the PSO technique and other conventional methods using simulations and real-time control.
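
A minimal sketch of the PSO-based PID tuning idea follows: the swarm searches over (Kp, Ki, Kd) and the cost is the integral of absolute error (IAE) of a simulated unit-step response. The first-order plant, the Euler discretisation, the gain bounds, and the IAE criterion are illustrative assumptions; the study itself evaluates a range of typical industrial process models as well as real-time control.

```python
import numpy as np

def iae_cost(gains, dt=0.05, t_end=20.0):
    """IAE of a PID loop around an assumed first-order plant G(s) = 1/(5s + 1),
    simulated with forward Euler (illustrative plant and criterion)."""
    kp, ki, kd = gains
    y, integ, prev_err, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                      # unit-step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / 5.0           # first-order plant dynamics
        iae += abs(err) * dt
        prev_err = err
        if abs(y) > 1e6:                   # penalise gain sets that go unstable
            return 1e6
    return iae

def pso_tune_pid(n_particles=20, iters=60, seed=1):
    """Plain global-best PSO over (Kp, Ki, Kd) minimising iae_cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.zeros(3), np.array([10.0, 5.0, 5.0])
    pos = rng.uniform(lo, hi, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([iae_cost(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fvals = np.array([iae_cost(p) for p in pos])
        better = fvals < pbest_f
        pbest[better], pbest_f[better] = pos[better], fvals[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```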


2014 ◽  
Vol 2014 ◽  
pp. 1-23 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of selections varies with the dimension of the optimization problem, and each selected particle donates the value held at a randomly selected dimension of its personal best. After the potential particle position is constructed, a local search is performed around its neighbourhood and the result is compared with the current swarm global best position. It replaces the global best particle position if it is found to be better; otherwise no replacement is made. Using well-studied benchmark problems of low and high dimension, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four PSO variants, two of which implement different local search techniques while the other two do not. Results show that the improved algorithms obtain better-quality solutions and demonstrate better convergence speed, precision, stability, robustness, and global-local search ability than the competing variants.
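
The construction step described above, where each dimension of the candidate is donated by a randomly chosen particle from a randomly chosen dimension of its personal best, can be sketched directly. The perturbation-based neighbourhood search in the second helper is an illustrative assumption; the paper's own local search around the candidate may differ.

```python
import numpy as np

def construct_candidate(pbest, rng):
    """Build a candidate position dimension by dimension: a randomly chosen
    particle donates the value its personal best holds in a randomly chosen
    dimension."""
    n, dim = pbest.shape
    candidate = np.empty(dim)
    for d in range(dim):
        donor = rng.integers(n)          # randomly selected particle
        donor_dim = rng.integers(dim)    # its randomly selected dimension
        candidate[d] = pbest[donor, donor_dim]
    return candidate

def maybe_replace_gbest(f, pbest, gbest, rng, step=0.1, trials=10):
    """Hill-climb from the constructed candidate and replace gbest only if a
    better point is found (illustrative perturbation scheme)."""
    best_x = construct_candidate(pbest, rng)
    best_f = f(best_x)
    for _ in range(trials):
        trial = best_x + rng.normal(0.0, step, best_x.size)
        cost = f(trial)
        if cost < best_f:
            best_x, best_f = trial, cost
    return (best_x, best_f) if best_f < f(gbest) else (gbest, f(gbest))
```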


Mathematics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 146 ◽  
Author(s):  
Ying Sun ◽  
Yuelin Gao ◽  
Xudong Shi

It is generally known that the balance between convergence and diversity is a key issue in solving multi-objective optimization problems. Thus, a chaotic multi-objective particle swarm optimization approach incorporating clone immunity (CICMOPSO) is proposed in this paper. First, points in a non-dominated solution set are mapped to a parallel-cell coordinate system. Then, the status of the particles is evaluated by the Pareto entropy and difference entropy, and the algorithm parameters are adjusted using this feedback information. Because the local-search ability of the particle swarm still needs improvement in the late stage of the algorithm, logistic mapping and the neighboring immune operator are used to maintain and update the external archive. Experimental test results show that the convergence and diversity of the algorithm are improved.
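
The chaotic component can be illustrated with the logistic map, a common choice for generating chaotic sequences in such hybrids. The mutation helper showing how a chaos value could perturb an archive member is an illustrative assumption; the neighbouring immune operator and the parallel-cell coordinate bookkeeping are specific to CICMOPSO and are not reproduced here.

```python
import numpy as np

def logistic_sequence(n, x0=0.7, mu=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = mu * x_k * (1 - x_k);
    with mu = 4 the iterates fill (0, 1) chaotically for almost all x0."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * x * (1.0 - x)
        xs[k] = x
    return xs

def chaotic_mutation(solution, lower, upper, chaos_value, scale=0.1):
    """Perturb one archive member toward a chaos-mapped point inside the box
    [lower, upper] (illustrative operator, not the paper's exact scheme)."""
    target = lower + chaos_value * (upper - lower)
    return np.clip(solution + scale * (target - solution), lower, upper)
```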

