A Multimodal Improved Particle Swarm Optimization for High Dimensional Problems in Electromagnetic Devices

Energies ◽  
2021 ◽  
Vol 14 (24) ◽  
pp. 8575
Author(s):  
Rehan Ali Khan ◽  
Shiyou Yang ◽  
Shafiullah Khan ◽  
Shah Fahad ◽  
Kalimullah

Particle Swarm Optimization (PSO) is a swarm-intelligence metaheuristic inspired by the natural behavior of bird flocking and fish schooling. Compared with other traditional methods, the PSO model is widely recognized as simple and easy to implement. However, traditional PSO suffers from two primary issues: premature convergence and loss of diversity. These problems arise in the later stages of the evolutionary process when dealing with high-dimensional, complex electromagnetic inverse problems. To address these issues, we propose an Improved PSO (IPSO) that employs a dynamic control parameter together with an adaptive mutation mechanism. The novel adaptive mutation operator prevents loss of diversity during the optimization process, while the dynamic factor balances exploration and exploitation in the search domain. The experimental outcomes, obtained by solving complicated and extremely high-dimensional optimization problems, were also validated on a superconducting magnetic energy storage (SMES) device. According to the numerical and experimental analysis, the IPSO delivers a better optimal solution than the other methods considered, particularly in the early stages of the evolutionary process.
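To make the mechanism concrete, the following is a minimal Python sketch of a PSO loop with a linearly decreasing inertia weight (one simple form of dynamic control parameter) and a diversity-driven Gaussian mutation. It illustrates the general idea only, not the authors' exact IPSO; the coefficient values, the mutation-probability schedule, and the diversity measure are all assumptions.

```python
import numpy as np

def ipso_sketch(f, dim, n_particles=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal PSO with a linearly decreasing inertia weight (a simple
    'dynamic control parameter') and an adaptive Gaussian mutation that
    fires more often as the swarm loses diversity. Illustrative only."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()

    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                # dynamic inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)

        # adaptive mutation: probability grows as swarm diversity shrinks
        diversity = np.mean(np.linalg.norm(x - x.mean(axis=0), axis=1))
        p_mut = 0.05 + 0.3 * np.exp(-diversity)
        mask = rng.random(n_particles) < p_mut
        x[mask] += rng.normal(0.0, 0.1 * (ub - lb), (mask.sum(), dim))
        x = np.clip(x, lb, ub)

        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Example: minimize the sphere function in 30 dimensions
best_x, best_f = ipso_sketch(lambda z: float(np.sum(z * z)), dim=30)
```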

2019 ◽  
Vol 61 (4) ◽  
pp. 177-185
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß

A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we give a comparison of known upper and lower bounds of expected runtimes and briefly discuss the techniques used to obtain these bounds.


2016 ◽  
Vol 11 (1) ◽  
pp. 58-67 ◽  
Author(s):  
S Sarathambekai ◽  
K Umamaheswari

Discrete particle swarm optimization is one of the most recently developed population-based meta-heuristic optimization algorithms in swarm intelligence, and it can be applied to any discrete optimization problem. This article presents a discrete particle swarm optimization algorithm to efficiently schedule tasks in heterogeneous multiprocessor systems. All optimization algorithms share a common algorithmic step, namely population initialization. It plays a significant role because it affects both the convergence speed and the quality of the final solution. Random initialization is the most commonly used method in the majority of evolutionary algorithms for generating the initial population. Good-quality initial solutions can help the algorithm locate the optimal solution, whereas poor ones may prevent it from doing so. Intelligence should therefore be incorporated into the generation of the initial population in order to avoid premature convergence. This article presents a discrete particle swarm optimization algorithm that incorporates an opposition-based technique to generate the initial population and a greedy algorithm to balance the load of the processors. Makespan, flow time, and reliability cost are the three measures used to evaluate the efficiency of the proposed discrete particle swarm optimization algorithm for scheduling independent tasks in distributed systems. Computational simulations based on a set of benchmark instances are carried out to assess the performance of the proposed algorithm.
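As a point of reference, the sketch below shows one common form of opposition-based initialization: sample a random population, mirror each candidate to its opposite point lb + ub - x, and keep the best half of the combined set. The random-key encoding and the placeholder objective are purely illustrative assumptions; the paper's discrete task encoding and greedy load balancing are not reproduced here.

```python
import numpy as np

def opposition_based_init(n_particles, dim, lb, ub, fitness, seed=0):
    """Opposition-based initialization (a common variant, not necessarily the
    authors' exact scheme): generate a random population, compute the opposite
    of each candidate as lb + ub - x, and keep the best half of the
    combined set."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n_particles, dim))
    opposite = lb + ub - pop                      # opposite points
    combined = np.vstack([pop, opposite])
    scores = np.apply_along_axis(fitness, 1, combined)
    best_idx = np.argsort(scores)[:n_particles]   # keep the fittest half
    return combined[best_idx]

# Hypothetical usage: 20 particles for a 10-task schedule encoded as
# continuous random keys in [0, 1] (placeholder objective, smaller is better).
init_pop = opposition_based_init(
    20, 10, lb=0.0, ub=1.0,
    fitness=lambda keys: float(np.sum(keys))
)
```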


2014 ◽  
Vol 1049-1050 ◽  
pp. 1690-1693 ◽  
Author(s):  
Juan Li

Traditional evolutionary algorithms converge slowly on path optimization problems, and the path they compute is often not the shortest one. To address this disadvantage, a mutation particle swarm optimization algorithm is proposed. The algorithm introduces an adaptive mutation strategy, which accelerates the search for the global optimal solution. Experiments on seven examples from a standard database show that the algorithm is more efficient.


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Xiaobing Yu ◽  
Jie Cao ◽  
Haiyan Shan ◽  
Li Zhu ◽  
Jun Guo

Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, and they have been widely applied in many scientific and engineering fields. Unfortunately, both of them can easily become trapped in local optima and lack the ability to jump out of them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balanced parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around local optima. HPSO-DE enjoys the advantages of both PSO and DE and maintains the diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balanced parameter is discussed in detail.
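The sketch below illustrates the general shape of such a hybrid: in each generation every particle is moved either by a PSO velocity update or by a DE/rand/1 mutation with binomial crossover, the choice being governed by a balance parameter, followed by greedy selection. Parameter names and values are assumptions, and the clustering-triggered adaptive mutation described in the abstract is omitted; this is not the paper's exact HPSO-DE.

```python
import numpy as np

def hybrid_step(x, v, pbest, gbest, f, balance=0.5,
                w=0.7, c=1.5, F=0.5, CR=0.9, rng=None):
    """One generation of a PSO/DE hybrid (a sketch of the general idea only):
    each particle is updated either by a PSO velocity step or by DE/rand/1
    mutation plus binomial crossover, then accepted greedily."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    new_x = x.copy()
    for i in range(n):
        if rng.random() < balance:                       # PSO branch
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c * r1 * (pbest[i] - x[i]) + c * r2 * (gbest - x[i])
            new_x[i] = x[i] + v[i]
        else:                                            # DE/rand/1 branch
            a, b, c_ = rng.choice(n, 3, replace=False)
            mutant = x[a] + F * (x[b] - x[c_])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True              # ensure one gene crosses
            new_x[i] = np.where(cross, mutant, x[i])
        if f(new_x[i]) < f(x[i]):                        # greedy selection
            x[i] = new_x[i]
    return x, v                                          # pbest/gbest updated afterwards
```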


2011 ◽  
Vol 383-390 ◽  
pp. 7208-7213
Author(s):  
De Kun Tan

To overcome the shortcoming of standard Particle Swarm Optimization (SPSO), namely premature convergence, Quantum-behaved Particle Swarm Optimization (QPSO) is presented to solve engineering constrained optimization problems. QPSO is a novel PSO model formulated in terms of quantum mechanics. The model is based on a delta potential well, and the particles are assumed to behave like quanta. Because a particle does not follow a fixed trajectory, it has more randomness than a particle moving along a deterministic path in PSO; thus QPSO escapes local optima more easily and has a greater capability of finding the global optimal solution. During the iterative optimization, the exterior point method is used to handle particles that violate the constraints. Furthermore, the QPSO is compared with other intelligent algorithms on two instances of engineering constrained optimization; the experimental results indicate that the algorithm performs better in terms of accuracy and robustness.
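For context, the widely used QPSO position update (due to Sun et al.) samples each particle around a local attractor drawn between its personal best and the global best, with a spread proportional to the distance from the swarm's mean best position. The sketch below implements that standard rule; the contraction-expansion coefficient beta and the constraint-handling details of this paper's variant are not reproduced.

```python
import numpy as np

def qpso_position_update(x, pbest, gbest, beta=0.75, rng=None):
    """Standard QPSO position update (Sun et al.'s formulation, shown as a
    sketch of the general scheme rather than this paper's exact variant).
    There is no velocity: each particle is sampled around a local attractor
    drawn between its personal best and the global best."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    mbest = pbest.mean(axis=0)                        # mean of personal bests
    phi = rng.random((n, dim))
    attractor = phi * pbest + (1.0 - phi) * gbest     # local attractor p_i
    u = np.clip(rng.random((n, dim)), 1e-12, 1.0)     # avoid log(1/0)
    sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
    return attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```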


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
J. J. Jamian ◽  
M. N. Abdullah ◽  
H. Mokhlis ◽  
M. W. Mustafa ◽  
A. H. A. Bakar

The Particle Swarm Optimization (PSO) algorithm is a popular optimization method that is widely used in various applications due to its simplicity and its capability to obtain optimal results. However, ordinary PSOs may become trapped at a local optimum, especially in high-dimensional problems. To overcome this problem, an efficient Global Particle Swarm Optimization (GPSO) algorithm is proposed in this paper, based on a new update strategy for the particle position in which information about particle positions is shared between the dimensions (variables) at every iteration. This strategy enhances the exploration capability of the GPSO algorithm, helping it determine the global optimum and avoid being trapped at local optima. The proposed GPSO algorithm is validated on 12 benchmark mathematical functions and compared with three different types of PSO techniques. The performance of the algorithm is measured in terms of solution quality, convergence characteristics, and robustness over 50 trials. The simulation results show that the new update strategy in GPSO yields a better optimum solution with the smallest standard deviation compared to the other techniques. It can be concluded that the proposed GPSO method is a superior technique for solving high-dimensional numerical function optimization problems.
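The abstract does not spell out the update rule, so the sketch below is only one plausible reading of "sharing position information between dimensions" (entirely an assumption, not the paper's GPSO rule): with a small probability, a particle overwrites one dimension with the global best's value taken from a different, randomly chosen dimension.

```python
import numpy as np

def dimension_sharing_update(x, gbest, share_prob=0.2, rng=None):
    """One plausible interpretation (an assumption, not the paper's exact GPSO
    rule) of cross-dimension information sharing: with probability share_prob,
    a particle copies into dimension d the global best's value taken from a
    different, randomly chosen dimension."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    for i in range(n):
        for d in range(dim):
            if rng.random() < share_prob:
                d_other = rng.integers(dim)           # donor dimension
                x[i, d] = gbest[d_other]              # cross-dimension sharing
    return x
```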


2018 ◽  
Vol 232 ◽  
pp. 03015
Author(s):  
Changjun Wen ◽  
Changlian Liu ◽  
Heng Zhang ◽  
Hongliang Wang

The particle swarm optimization (PSO) algorithm is a widely used tool for solving optimization problems in engineering. However, PSO is prone to falling into local optima and suffers from slow convergence speed and low convergence precision. In view of these shortcomings, a particle swarm optimization with Gaussian disturbance (GDPSO) is proposed. By introducing a Gaussian disturbance into the self-cognition and social-cognition parts of the algorithm, this method improves the convergence speed and precision and strengthens the algorithm's ability to escape local optimal solutions. After the several evolutionary modes of the GDPSO algorithm are analyzed, the algorithm is simulated on the Griewank function. The experimental results show that the convergence speed and optimization precision of GDPSO are better than those of PSO.
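A minimal sketch of the disturbed velocity update is given below: zero-mean Gaussian noise is added to the personal-best and global-best terms before the usual cognitive and social contributions are formed. The noise scale sigma and the coefficient values are assumptions, and the exact placement of the disturbance in the paper's GDPSO may differ.

```python
import numpy as np

def gdpso_velocity(v, x, pbest, gbest, w=0.72, c1=1.49, c2=1.49,
                   sigma=0.1, rng=None):
    """Velocity update with Gaussian disturbance added to the cognitive and
    social terms (a sketch of the idea; the paper's exact GDPSO formulation
    may place or scale the noise differently)."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    g1 = rng.normal(0.0, sigma, (n, dim))      # disturbance on personal best
    g2 = rng.normal(0.0, sigma, (n, dim))      # disturbance on global best
    cognitive = c1 * r1 * (pbest + g1 - x)     # self-cognition with disturbance
    social = c2 * r2 * (gbest + g2 - x)        # social cognition with disturbance
    return w * v + cognitive + social
```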


Author(s):  
Shengyu Pei

How to solve constrained optimization problems constitutes an important part of the research on optimization. In this paper, a hybrid immune clonal particle swarm optimization multi-objective algorithm is proposed to solve constrained optimization problems. In the proposed algorithm, the population is first initialized using the theory of good point sets. Then, differential evolution is adopted to improve the local optimal solution of each particle, with an immune clonal strategy incorporated to further improve each particle. Finally, a sub-swarm is used to enhance the position and velocity of each individual particle. The new algorithm has been tested on 24 standard test functions and three engineering optimization problems, and the results show that it has good performance in terms of both robustness and convergence.
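For the initialization step, the sketch below uses one common good-point-set construction, based on the fractional parts of 2·cos(2πk/p) for a prime p ≥ 2·dim + 3; the paper may use a different good-point generator, and the DE, immune clonal, and sub-swarm stages are not shown.

```python
import numpy as np

def good_point_set_init(n_points, dim, lb, ub):
    """Population initialization via a good point set (one common construction;
    the paper's exact generator may differ). The resulting points are spread
    more uniformly over the box than plain random sampling."""
    # smallest prime p >= 2*dim + 3
    p = 2 * dim + 3
    while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
        p += 1
    k = np.arange(1, dim + 1)
    r = np.mod(2.0 * np.cos(2.0 * np.pi * k / p), 1.0)      # the "good point"
    i = np.arange(1, n_points + 1).reshape(-1, 1)
    unit = np.mod(i * r, 1.0)                               # points in [0, 1)^dim
    return lb + unit * (ub - lb)

# e.g. 30 particles in the 10-dimensional box [-5, 5]^10
pop = good_point_set_init(30, 10, lb=-5.0, ub=5.0)
```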


Author(s):  
Anuj Chandila ◽  
Shailesh Tiwari ◽  
K. K. Mishra ◽  
Akash Punhani

This article describes optimization as the process of finding the best solution among all available solutions to a problem. Many randomized algorithms have been designed to identify optimal solutions in optimization problems. Among these, evolutionary programming, evolution strategies, genetic algorithms, particle swarm optimization, and genetic programming are widely accepted for optimization problems. Although a number of randomized algorithms are available in the literature for solving optimization problems, their design objectives are the same. Each algorithm has been designed to meet certain goals, such as minimizing the total number of fitness evaluations needed to capture a nearly optimal solution, capturing diverse optimal solutions in multimodal problems when needed, and avoiding local optima in multimodal problems. This article discusses a novel optimization algorithm named the Environmental Adaption Method (EAM) for solving optimization problems. EAM is designed to reduce the overall processing time for retrieving the optimal solution of a problem, to improve the quality of solutions, and particularly to avoid being trapped in local optima. The results of the proposed algorithm are compared with the latest versions of existing algorithms such as particle swarm optimization (PSO-TVAC) and differential evolution (SADE) on benchmark functions, and the proposed algorithm proves its effectiveness over the existing algorithms in all the cases considered.

