Particle Swarm Optimization

Author(s):  
Alaa Tharwat ◽  
Tarek Gaber ◽  
Aboul Ella Hassanien ◽  
Basem E. Elnaghi

Optimization algorithms are needed to solve many problems, such as parameter tuning. Particle Swarm Optimization (PSO) is one such algorithm, whose aim is to search for the optimal solution in the search space. This paper presents the basic background needed to understand and implement the PSO algorithm. It starts with basic definitions of the PSO algorithm and explains how the particles move through the search space to find an optimal or near-optimal solution. A numerical example illustrates how the particles move in a convex optimization problem, and another numerical example shows how PSO can become trapped in a local minimum. Two experiments demonstrate how PSO searches for optimal parameters in one-dimensional and two-dimensional spaces to solve machine learning problems.
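For readers implementing PSO from such a description, the following is a minimal sketch of a standard global-best PSO with an inertia weight, applied to a convex 2-D sphere function; the coefficients, bounds, and objective are illustrative choices, not values taken from the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO for minimization over a box-bounded space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds                                      # each an array of shape (dim,)
    dim = lo.shape[0]
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
    v = np.zeros((n_particles, dim))                     # velocities
    pbest = x.copy()                                     # personal best positions
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()               # global best position
    g_val = pbest_val.min()
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < g_val:
            g, g_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
    return g, g_val

# Convex example: the 2-D sphere function.
best, val = pso(lambda z: float(np.sum(z ** 2)),
                (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(best, val)
```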

2019 ◽  
Vol 7 (5) ◽  
pp. 36-44
Author(s):  
Satish Gajawada ◽  
Hassan Mustafa

The soul is eternal and exists even after the death of a person or animal. The main idea captured in this work is that the soul continues to exist and takes a different body after death. The primary goal of this work is to introduce a new field titled "Artificial Soul Optimization (ASO)"; the term is coined in this paper. All optimization algorithms proposed on the basis of artificial souls fall under the ASO field. In Particle Swarm Optimization and Artificial Human Optimization, the basic entities in the search space are artificial birds and artificial humans, respectively; in Artificial Soul Optimization, the basic entities are artificial souls. In this work, the ASO field concepts are added to the Particle Swarm Optimization (PSO) algorithm to create a new hybrid algorithm titled "Soul Particle Swarm Optimization (SoPSO)". The proposed SoPSO algorithm is applied to various benchmark functions, and the results obtained are compared with the PSO algorithm. The world's first hybrid PSO algorithm based on artificial souls is created in this work.


2013 ◽  
Vol 321-324 ◽  
pp. 2183-2186
Author(s):  
Zheng Bo Li

Particle Swarm Optimization (PSO) is a swarm intelligence algorithm that finds the global optimum in a complex search space through competition and collaboration between particles. The basic PSO algorithm converges slowly in the late stage of evolution and easily falls into local minima. To address these shortcomings, this paper presents a multi-learning particle swarm optimization algorithm in which each particle's per-dimension velocity update simultaneously follows the particle's own best solution, a randomly chosen particle's best solution, and the best solution of the whole group; this is combined with boundary-position optimization updates and small-scale perturbations of the global best position in order to enhance the algorithm's ability to escape from local optima. Test results on several typical functions show that the improved particle swarm algorithm significantly improves global search ability and can effectively avoid the premature convergence problem. The algorithm's robustness with respect to the position of the global optimum in the search space is markedly improved for high-dimensional optimization problems, so it is suitable for solving similar problems, and its results can meet the requirements of practical engineering.
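The abstract does not specify the exact update equations, so the following is only one possible reading of the multi-learning idea: each dimension of a particle's velocity learns from either its own best position, a randomly chosen particle's best, or the global best. All names and coefficients are illustrative.

```python
import numpy as np

def multi_learning_velocity(v, x, pbest, gbest, w=0.7, c=1.5, rng=None):
    """Per-dimension velocity update in which each dimension follows either the
    particle's own best, a randomly chosen particle's best, or the swarm's
    global best (an illustrative interpretation, not the paper's exact rule)."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    rand_idx = rng.integers(0, n, size=(n, dim))         # random exemplar index per dimension
    choice = rng.integers(0, 3, size=(n, dim))           # 0: own best, 1: random best, 2: global best
    exemplar = np.where(choice == 0, pbest,
               np.where(choice == 1, pbest[rand_idx, np.arange(dim)], gbest))
    return w * v + c * rng.random((n, dim)) * (exemplar - x)
```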


2020 ◽  
Vol 10 (1) ◽  
pp. 56-64 ◽  
Author(s):  
Neeti Kashyap ◽  
A. Charan Kumari ◽  
Rita Chhikara

Abstract Web service compositions are valuable for structuring innovative applications for different Internet-based business solutions, since existing services can be reused by other applications via the web. Because many available services offer similar functionality, suitable Service Composition (SC) is required. There is a set of candidates for each service in an SC, from which a suitable candidate service is picked based on certain criteria; Quality of Service (QoS) is one such criterion. One of the most important capabilities offered by services in Internet of Things (IoT) based systems is dynamic composability. In this paper, two metaheuristic algorithms, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are used to tackle QoS-based service composition problems. QoS has become a critical issue in the management of web services because of the immense number of services that provide similar functionality yet with different characteristics. QoS in service composition comprises different non-functional factors, such as service cost, execution time, availability, throughput, and reliability. Choosing an appropriate SC for IoT-based applications that optimizes the QoS parameters while fulfilling the user's requirements has become a critical issue, and it is addressed in this paper. To obtain results via simulation, the PSO algorithm is used to solve the SC problem in IoT and is then assessed and contrasted with GA. Experimental results demonstrate that GA can improve the quality of solutions for the SC problem in IoT, helps identify the optimal solution, and shows preferable outcomes over PSO.
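As a rough illustration of how such a QoS-based composition problem can be encoded for GA or PSO, the sketch below aggregates cost, execution time, availability, and reliability of one chosen candidate per abstract service into a single fitness value; the additive/multiplicative aggregation and the weights are assumptions, not the paper's actual model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    cost: float
    time: float
    availability: float   # in (0, 1]
    reliability: float    # in (0, 1]

def qos_fitness(selection: List[int], candidates: List[List[Candidate]],
                weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Aggregate QoS of one composition: cost and execution time add up along the
    workflow, availability and reliability multiply; lower fitness is better
    (illustrative weighting, not the paper's model)."""
    chosen = [candidates[i][j] for i, j in enumerate(selection)]
    cost = sum(c.cost for c in chosen)
    time = sum(c.time for c in chosen)
    avail, rel = 1.0, 1.0
    for c in chosen:
        avail *= c.availability
        rel *= c.reliability
    w_cost, w_time, w_avail, w_rel = weights
    return w_cost * cost + w_time * time + w_avail * (1 - avail) + w_rel * (1 - rel)
```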


Author(s):  
T. O. Ting

In this chapter, the main objective of maximizing the Material Removal Rate (MRR) in the drilling process is pursued. The model describing the drilling process is adopted from the authors' previous work. With this model in hand, a novel algorithm known as the Weightless Swarm Algorithm (WSA) is employed to maximize MRR subject to several constraints. Results show that WSA can find solutions effectively: constraints are handled properly, no violations occur, and the results obtained are feasible and valid. The results are then compared with previous results obtained by the Particle Swarm Optimization (PSO) algorithm. From this comparison, it is difficult to conclude which algorithm performs better; in general, however, WSA is more stable than PSO, as indicated by lower standard deviations in most of the cases tested. In addition, the simplicity of WSA offers clear advantages: its single parameter makes tuning easy and thereby enables the algorithm to perform to its fullest.
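The chapter's drilling model and the exact WSA update are not given in this abstract, so the sketch below shows only a generic penalty-based way of handling constraints when maximizing MRR with a swarm optimizer; the placeholder model, parameter names (feed rate f, spindle speed n), and limits are hypothetical.

```python
def penalized_objective(mrr, constraints, penalty=1e6):
    """Turn constrained maximization of MRR into unconstrained minimization:
    minimize -MRR plus a large penalty for every violated constraint
    (a common constraint-handling device, not necessarily the chapter's)."""
    violation = sum(max(0.0, g) for g in constraints)    # g(x) <= 0 means feasible
    return -mrr + penalty * violation

# Hypothetical drilling parameters: feed rate f and spindle speed n.
def evaluate(f, n):
    mrr = f * n * 0.8                                    # placeholder model, not the chapter's
    constraints = [f - 0.5, n - 3000.0]                  # placeholder limits f <= 0.5, n <= 3000
    return penalized_objective(mrr, constraints)

print(evaluate(0.3, 2500.0))
```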


Author(s):  
Ying Tan

Particle swarm optimization algorithms inspired by immunity-clonal strategies are presented for their rapid convergence, easy implementation, and optimization ability compared to the conventional PSO algorithm. A novel PSO algorithm, the clonal particle swarm optimization (CPSO) algorithm, is proposed based on the clonal principle of the natural immune system. By cloning the best individual of successive generations, CPSO enlarges the area near the promising candidate solution and accelerates the evolution of the swarm, leading to better optimization capability and faster convergence than conventional PSO. As a variant, an advance-and-retreat strategy is incorporated to find nearby minima in an enlarged solution space, greatly accelerating CPSO before the next clonal operation. A black hole model is also established for easy implementation and good performance. Detailed descriptions of the CPSO algorithm and its variants are elaborated. Extensive experiments on 15 benchmark test functions demonstrate that the proposed CPSO algorithms speed up the evolution procedure and improve global optimization performance. Finally, an application of the proposed PSO algorithms to spam detection is presented in comparison with three other methods.
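A minimal sketch of how a clonal step could be grafted onto PSO, under the assumption that clones of the current best particle are perturbed locally and replace the worst particles; the clone count and perturbation scale are illustrative, not the paper's settings.

```python
import numpy as np

def clonal_step(x, fitness, n_clones=5, sigma=0.1, rng=None):
    """Clone the current best particle, perturb the clones around it, and replace
    the worst particles with the clones (an illustrative reading of the clonal
    principle, not the paper's exact operator)."""
    rng = rng or np.random.default_rng()
    best_idx = np.argmin(fitness)
    clones = x[best_idx] + sigma * rng.standard_normal((n_clones, x.shape[1]))
    worst = np.argsort(fitness)[-n_clones:]              # indices of the worst particles
    x[worst] = clones                                    # intensify the search near the best
    return x
```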


2019 ◽  
Vol 61 (4) ◽  
pp. 177-185
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß

Abstract A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we give a comparison of known upper and lower bounds of expected runtimes and briefly discuss the techniques used to obtain these bounds.


Mathematics ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 357 ◽  
Author(s):  
Shu-Kai S. Fan ◽  
Chih-Hung Jen

Particle swarm optimization (PSO) is a population-based optimization technique that has been applied extensively to a wide range of engineering problems. This paper proposes a variation of the original PSO algorithm for unconstrained optimization, dubbed the enhanced partial search particle swarm optimizer (EPS-PSO), which uses the idea of cooperative multiple swarms in an attempt to improve the convergence and efficiency of the original PSO algorithm. The cooperative searching strategy is specifically devised to prevent the particles from being trapped in local optimal solutions and to locate the global optimal solution efficiently. The effectiveness of the proposed algorithm is verified through a simulation study in which the EPS-PSO algorithm is compared to a variety of existing "cooperative" PSO algorithms on noted benchmark functions.
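The cooperative mechanism itself is not detailed in this abstract; the sketch below assumes one common pattern in which several swarms search independently and periodically migrate the overall best position into each swarm, which is only an illustration of the general idea, not the EPS-PSO strategy.

```python
import numpy as np

def cooperative_exchange(swarms, objective, exchange_every, it):
    """Every `exchange_every` iterations, replace each swarm's worst particle with
    the best position found by any swarm (an illustrative cooperation scheme)."""
    if it % exchange_every != 0:
        return swarms
    # Find the single best position across all swarms.
    all_best = min(((objective(p), p) for s in swarms for p in s),
                   key=lambda t: t[0])[1]
    for s in swarms:
        worst = max(range(len(s)), key=lambda i: objective(s[i]))
        s[worst] = all_best.copy()                        # migrate the elite into every swarm
    return swarms
```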


2019 ◽  
Vol 2019 ◽  
pp. 1-22 ◽  
Author(s):  
Hao Li ◽  
Hongbin Jin ◽  
Hanzhong Wang ◽  
Yanyan Ma

For the first time, the Holonic Particle Swarm Optimization (HPSO) algorithm applied multiagent theory to the improvement of the PSO algorithm and achieved good results. To further improve the performance of the algorithm, this paper proposes an improved Adaptive Holonic Particle Swarm Optimization (AHPSO) algorithm. First, a brief review of the HPSO algorithm is carried out; it can be further studied in three respects: the grouping strategy, the setting of the number of iterations, and the state-switching criterion. The HPSO algorithm uses an approximately uniform grouping strategy that is the simplest option but does not consider the connections between particles, whereas grouping particles with larger or smaller differences together in different search stages improves search efficiency. Therefore, this paper proposes a grouping strategy based on information entropy and system clustering and combines the two grouping strategies with corresponding search methods. The performance of the HPSO algorithm depends on the setting of the number of iterations: if it is too small, it is difficult to find the optimum, and if it is too large, computing resources are wasted. Therefore, this paper constructs an adaptive termination condition that causes the particles to terminate spontaneously after convergence. The HPSO algorithm performs only a single conversion from extensive search to exact search and can still fall into a local optimum, so this paper proposes a state-switching condition to improve the probability that the algorithm jumps out of the local optimum. Finally, AHPSO and HPSO are compared using 22 groups of standard test functions. AHPSO converges faster than HPSO: when HPSO is run for the number of iterations that AHPSO needs to converge, a large gap remains between HPSO and the optimal solution. Thus, AHPSO achieves better algorithmic efficiency without requiring the number of iterations to be set in advance.
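As an illustration of what an adaptive termination condition might look like, the helper below stops the search once the global best has not improved by more than a tolerance for a fixed number of consecutive iterations; the paper's actual criterion may differ, and the parameter names are assumptions.

```python
class AdaptiveStop:
    """Stop when the global best has not improved by more than `tol`
    for `patience` consecutive iterations (one possible reading of an
    adaptive termination condition, not the paper's exact rule)."""
    def __init__(self, patience=50, tol=1e-8):
        self.patience, self.tol = patience, tol
        self.best = float("inf")
        self.stale = 0

    def update(self, gbest_val: float) -> bool:
        if gbest_val < self.best - self.tol:
            self.best, self.stale = gbest_val, 0          # improvement: reset the counter
        else:
            self.stale += 1                                # no meaningful improvement
        return self.stale >= self.patience                 # True means "terminate now"
```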


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, owing to a lack of momentum for particles to do exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is very efficient if its parameters are properly set. First, an experiment was conducted to obtain a percentage of the search-space limits for computing the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
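The two quantities the paper tunes, the linearly decreasing inertia weight and the velocity limit expressed as a percentage of the search-space range, can be written compactly as follows; the default values shown are common choices in the literature, not necessarily those obtained in the paper's experiments.

```python
def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    """Linear decreasing inertia weight: w_start at iteration 0, w_end at t_max."""
    return w_start - (w_start - w_end) * t / t_max

def velocity_limit(lower, upper, fraction=0.25):
    """Velocity limit per dimension as a fraction of the search-space range;
    the fraction is the quantity tuned experimentally (0.25 is illustrative)."""
    return fraction * (upper - lower)
```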


2017 ◽  
Vol 29 (1) ◽  
pp. 127-142
Author(s):  
Rkia Fajr ◽  
Abdelaziz Bouroumi

Abstract This paper introduces a new variant of the particle swarm optimization (PSO) algorithm, designed for global optimization of multidimensional functions. The goal of this variant, called ImPSO, is to improve the exploration and exploitation abilities of the algorithm by introducing a new operation into the iterative search process. The use of this operation is governed by a stochastic rule that ensures either the exploration of new regions of the search space or the exploitation of good intermediate solutions. The proposed method is inspired by collaborative human learning and uses as a starting point a basic PSO variant with constriction factor and velocity clamping. Simulation results showing the ability of ImPSO to locate the global optima of multidimensional functions are presented for 10 well-known benchmark functions from CEC-2013 and CEC-2005. These results are compared with the PSO variant used as the starting point, three other PSO variants, one of which is based on human learning strategies, and three alternative evolutionary computing methods.
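The starting-point variant, PSO with a constriction factor and velocity clamping, corresponds to the standard Clerc-Kennedy update sketched below; the new ImPSO operation itself is not reproduced here because the abstract does not specify it.

```python
import math
import numpy as np

def constriction_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05, v_max=None, rng=None):
    """Velocity update with the Clerc-Kennedy constriction factor plus optional
    velocity clamping; c1 + c2 must exceed 4 for the factor to be defined."""
    rng = rng or np.random.default_rng()
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))   # ~0.7298 for phi = 4.1
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    if v_max is not None:
        v = np.clip(v, -v_max, v_max)                               # velocity clamping
    return v
```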

