Comparison of Centralized and Distributed Intelligent Particle Multi-Swarm Optimization on Search Performance

2021 ◽  
Vol 10 (1) ◽  
pp. 1
Author(s):  
Hiroshi Sho

In recent years, particle swarm optimization (PSO) technology has been expanding remarkably. In particular, the development of particle multi-swarm optimization (PMSO) has attracted attention, and it is expected to handle complex optimization problems. In this paper, we propose two PMSO search methods for pattern classification: centralized and distributed intelligent particle multi-swarm optimization (CIPMSO and DIPMSO). The crucial idea is how to handle the given parity problems with these two search methods. To obtain high performance and high efficiency from PMSO technology, many computer experiments are carried out on the 2-bit, 3-bit, and 4-bit parity problems under different search situations. The experimental results are analyzed and compared, and the search performance and characteristics of both CIPMSO and DIPMSO are clarified. The resulting information and know-how should further improve the search efficiency and applicability of PMSO technology.
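The abstract does not specify the CIPMSO and DIPMSO update rules, so the following is only a hypothetical NumPy sketch of the distinction suggested by the names: a centralized variant shares one global guide across all sub-swarms, while a distributed variant lets each sub-swarm follow its own best. The function names and the inertia/acceleration constants are illustrative, not taken from the paper.

```python
import numpy as np

def velocity_update(v, x, pbest, guide, w=0.729, c1=1.49445, c2=1.49445):
    # standard PSO velocity update toward a personal best and a guide position
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (guide - x)

def step_multiswarm(swarms, centralized=True):
    # swarms: list of dicts with 'x', 'v', 'pbest' arrays and 'best' = (position, fitness)
    if centralized:
        # centralized: one shared guide, the best position over all sub-swarms
        guide = min((s['best'] for s in swarms), key=lambda b: b[1])[0]
    for s in swarms:
        # distributed: each sub-swarm follows only its own best
        g = guide if centralized else s['best'][0]
        s['v'] = velocity_update(s['v'], s['x'], s['pbest'], g)
        s['x'] = s['x'] + s['v']
```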

2020 ◽  
Vol 9 (1) ◽  
pp. 54
Author(s):  
Hiroshi Sho

The purpose of this study is to clarify the search performance of differential evolution (DE) and particle swarm optimization (PSO) technologies, so that the specificity of the search methods used can be understood intuitively. To achieve this, several search methods of both kinds are implemented in this paper: DE/rand/1, DE/rand/2, DE/best/1, DE/best/2, the PSO, PSOIW, and CPSO. Many computer experiments are carried out on four given benchmark problems. Through analysis of the obtained experimental data, the detailed search performance and characteristics of these methods are observed and compared. From the obtained results, it is found that DE/best/1 and the PSO have relatively better search performance. The findings and know-how can provide important references and key hints for encouraging the development and improvement of both DE and PSO technologies in the near future. As applicative examples, the PSO is used to handle typical 2-bit and 3-bit parity problems for pattern classification.
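The four DE strategies named above have standard textbook mutation forms; a minimal NumPy sketch is given below (the scale factor F and the index handling are illustrative, and a full implementation would also exclude the target index from the random draws).

```python
import numpy as np

def de_mutation(pop, best_idx, strategy="rand/1", F=0.5):
    # pop: (n, d) population array; n must be at least 5 for the random index draw
    r = np.random.choice(len(pop), size=5, replace=False)
    best = pop[best_idx]
    if strategy == "rand/1":
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
    if strategy == "rand/2":
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + F * (pop[r[3]] - pop[r[4]])
    if strategy == "best/1":
        return best + F * (pop[r[0]] - pop[r[1]])
    if strategy == "best/2":
        return best + F * (pop[r[0]] - pop[r[1]]) + F * (pop[r[2]] - pop[r[3]])
    raise ValueError(f"unknown strategy: {strategy}")
```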


Author(s):  
Malek Sarhani ◽  
Stefan Voß

Bio-inspired optimization aims at adapting observed natural behavioral patterns and social phenomena towards efficiently solving complex optimization problems, and is nowadays gaining much attention. However, researchers have recently highlighted an inconsistency between the needs of the field and the actual trend: while it is important to design innovative contributions, a common trend in bio-inspired optimization is to re-iterate existing knowledge in a different form. The aim of this paper is to fill this gap. More precisely, we start by highlighting new examples of this problem by considering and describing the concepts of chunking and cooperative learning. Second, by considering particle swarm optimization (PSO), we present a novel bridge between these two notions adapted to the problem of feature selection. In the experiments, we investigate the practical importance of our approach while exploring both its strengths and limitations. The results indicate that the approach is mainly suitable for large datasets, and that further research is needed to improve its computational efficiency and to ensure the independence of the sub-problems defined using chunking.
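The abstract does not give the algorithm itself, so the snippet below is only a hypothetical illustration of the chunking idea for feature selection: the feature indices are split into disjoint sub-problems that cooperating sub-swarms could each optimize, after which the partial decisions are recombined. All names are illustrative.

```python
import numpy as np

def chunk_features(n_features, n_chunks):
    # split feature indices into roughly equal, disjoint chunks (sub-problems)
    return np.array_split(np.arange(n_features), n_chunks)

def assemble_mask(chunks, sub_masks):
    # combine each sub-swarm's binary decision on its chunk into one feature mask
    mask = np.zeros(sum(len(c) for c in chunks), dtype=bool)
    for c, m in zip(chunks, sub_masks):
        mask[c] = m
    return mask
```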


2015 ◽  
Vol 741 ◽  
pp. 359-362
Author(s):  
Hong Wei Zhao ◽  
Li Wei Tian

The basic Artificial Fish Swarm (AFS) algorithm is a new type of heuristic swarm intelligence algorithm, but its optimization results are difficult to bring to very high precision because of the randomness of the artificial fish behavior. This paper presents an extended AFS algorithm, the Cooperative Artificial Fish Swarm (CAFS), which significantly improves on the original AFS in solving complex optimization problems. In this work, CAFS is first used to optimize six widely used benchmark functions, and the comparative results produced by CAFS and Particle Swarm Optimization (PSO) are studied. Second, K-medoids and the CAFS algorithm are used for data clustering on several benchmark data sets. The simulation results show that the proposed CAFS outperforms the other two algorithms in terms of accuracy, robustness, and convergence speed.
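The abstract does not give the clustering encoding or fitness, so the following is only a plausible sketch of a clustering objective (total distance of points to their nearest medoid) that a swarm method such as CAFS could minimize.

```python
import numpy as np

def clustering_cost(data, medoids):
    # data: (n, d) points; medoids: (k, d) candidate medoid positions
    dists = np.linalg.norm(data[:, None, :] - medoids[None, :, :], axis=2)
    return dists.min(axis=1).sum()  # sum of each point's distance to its nearest medoid
```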


Algorithms ◽  
2019 ◽  
Vol 12 (7) ◽  
pp. 138
Author(s):  
Zheng Ji ◽  
Xu Cai ◽  
Xuyang Lou

This paper presents a quantum-behaved neurodynamic swarm optimization approach to solve nonconvex optimization problems with inequality constraints. First, the general constrained optimization problem is addressed and a high-performance feedback neural network for solving convex nonlinear programming problems is introduced. The convergence of the proposed neural network is also proved. Then, combined with the quantum-behaved particle swarm method, a quantum-behaved neurodynamic swarm optimization (QNSO) approach is presented. Finally, the performance of the proposed QNSO algorithm is evaluated through two function tests and three applications: a hollow transmission shaft, heat exchangers, and a crank–rocker mechanism. Numerical simulations are also provided to verify the advantages of our method.
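For reference, the standard quantum-behaved PSO position update that such an approach builds on can be sketched as follows; the neurodynamic (neural-network) component of QNSO is not reproduced here, and beta is the usual contraction–expansion coefficient.

```python
import numpy as np

def qpso_update(x, pbest, gbest, beta=0.75):
    # x, pbest: (n, d) arrays; gbest: (d,) array
    n, d = x.shape
    mbest = pbest.mean(axis=0)                 # mean of all personal bests
    phi = np.random.rand(n, d)
    p = phi * pbest + (1.0 - phi) * gbest      # per-particle local attractor
    u = np.random.uniform(1e-12, 1.0, size=(n, d))
    sign = np.where(np.random.rand(n, d) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```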


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Jingzheng Yao ◽  
Duanfeng Han

Barebones particle swarm optimization (BPSO) is a new PSO variant, which has shown a good performance on many optimization problems. However, similar to the standard PSO, BPSO also suffers from premature convergence when solving complex optimization problems. In order to improve the performance of BPSO, this paper proposes a new BPSO variant called BPSO with neighborhood search (NSBPSO) to achieve a tradeoff between exploration and exploitation during the search process. Experiments are conducted on twelve benchmark functions and a real-world problem of ship design. Simulation results demonstrate that our approach outperforms the standard PSO, BPSO, and six other improved PSO algorithms.
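The standard barebones PSO position sampling that NSBPSO extends is well known and can be sketched as below; the neighborhood-search operators themselves are not specified in the abstract and are omitted.

```python
import numpy as np

def bpso_sample(pbest, gbest):
    # each coordinate is drawn from a Gaussian centred midway between the
    # personal best and the global best, with their distance as the std. dev.
    mu = 0.5 * (pbest + gbest)
    sigma = np.abs(pbest - gbest)
    return np.random.normal(mu, sigma)
```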


2011 ◽  
Vol 181-182 ◽  
pp. 937-942
Author(s):  
Bo Liu ◽  
Hong Xia Pan

Particle swarm optimization (PSO) is widely used to solve complex optimization problems. However, classical PSO may be trapped in local optima and fail to converge to the global optimum. In this paper, the concepts of self particles and random particles are introduced into classical PSO to maintain particle diversity. All particles are divided into standard particles, self particles, and random particles according to a specified proportion. The features of the proposed algorithm are analyzed, and several test functions are used in a simulation study. Experimental results show that the proposed PDPSO algorithm can escape from local minima and significantly enhance convergence precision.
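The abstract does not define the three particle roles precisely, so the snippet below is only one hypothetical reading of the partition step: particles are randomly assigned as standard, self, or random in a fixed proportion (the role-specific update rules would then differ, e.g. self particles ignoring the global best and random particles being re-sampled).

```python
import numpy as np

def partition_particles(n, p_self=0.2, p_random=0.1):
    # assign each of n particles a role according to the given proportions
    roles = np.array(["standard"] * n, dtype=object)
    idx = np.random.permutation(n)
    n_self, n_rand = int(p_self * n), int(p_random * n)
    roles[idx[:n_self]] = "self"
    roles[idx[n_self:n_self + n_rand]] = "random"
    return roles
```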


2008 ◽  
Vol 2008 ◽  
pp. 1-4 ◽  
Author(s):  
Munish Rattan ◽  
Manjeet Singh Patterh ◽  
B. S. Sohi

Particle swarm optimization (PSO) is a new, high-performance evolutionary technique which has recently been used for optimization problems in antennas and electromagnetics. It is a global optimization technique, like the genetic algorithm (GA), but has lower computational cost than GA. In this paper, PSO is used to optimize the gain, impedance, and bandwidth of a Yagi-Uda array. To evaluate the performance of designs, the method-of-moments code NEC2 is used. The results are comparable to those obtained using GA.
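The paper's actual objective function and its coupling to NEC2 are not described in the abstract; the following is only a hypothetical weighted-sum fitness showing how gain, impedance match, and bandwidth could be folded into a single value for PSO to minimize.

```python
def yagi_fitness(gain_dbi, vswr, bandwidth_mhz, w_gain=1.0, w_vswr=1.0, w_bw=0.5):
    # higher gain and bandwidth are better, lower VSWR (impedance mismatch) is better,
    # so the terms to maximize are negated because PSO here minimizes the fitness
    return -w_gain * gain_dbi + w_vswr * vswr - w_bw * bandwidth_mhz
```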


Information ◽  
2018 ◽  
Vol 9 (7) ◽  
pp. 173 ◽  
Author(s):  
Xiang Yu ◽  
Claudio Estevez

Multiswarm comprehensive learning particle swarm optimization (MSCLPSO) is a multiobjective metaheuristic recently proposed by the authors. MSCLPSO uses multiple swarms of particles and externally stores elitists that are nondominated solutions found so far. MSCLPSO can approximate the true Pareto front in one single run; however, it requires a large number of generations to converge, because each swarm only optimizes the associated objective and does not learn from any search experience outside the swarm. In this paper, we propose an adaptive particle velocity update strategy for MSCLPSO to improve the search efficiency. Based on whether the elitists are indifferent or complex on each dimension, each particle adaptively determines whether to just learn from some particle in the same swarm, or additionally from the difference of some pair of elitists for the velocity update on that dimension, trying to achieve a tradeoff between optimizing the associated objective and exploring diverse regions of the Pareto set. Experimental results on various two-objective and three-objective benchmark optimization problems with different dimensional complexity characteristics demonstrate that the adaptive particle velocity update strategy improves the search performance of MSCLPSO significantly and is able to help MSCLPSO locate the true Pareto front more quickly and obtain better distributed nondominated solutions over the entire Pareto front.
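A greatly simplified, hedged sketch of the dimension-wise choice described above: on each dimension the particle learns from an exemplar in its own swarm and, where the dimension is judged complex, additionally from the difference of a random pair of elitists. The actual adaptation rule of MSCLPSO is more involved; all parameter names here are illustrative.

```python
import numpy as np

def adaptive_velocity(v, x, exemplar, elitists, complex_dim, w=0.4, c=1.5, ce=1.0):
    # v, x, exemplar: (d,) arrays; elitists: (m, d) array with m >= 2;
    # complex_dim: (d,) boolean array flagging dimensions treated as complex
    d = x.shape[0]
    v_new = w * v + c * np.random.rand(d) * (exemplar - x)
    i, j = np.random.choice(len(elitists), size=2, replace=False)
    diff = elitists[i] - elitists[j]
    # add the elitist-difference term only on dimensions flagged as complex
    return v_new + np.where(complex_dim, ce * np.random.rand(d) * diff, 0.0)
```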


2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Xueying Lv ◽  
Yitian Wang ◽  
Junyi Deng ◽  
Guanyu Zhang ◽  
Liu Zhang

In this study, an improved eliminate particle swarm optimization (IEPSO) is proposed on the basis of the last-eliminated principle to solve optimization problems in engineering design. During optimization, the IEPSO enhances information communication among populations and maintains population diversity to overcome the limitations of classical optimization algorithms in solving multiparameter, strongly coupled, and nonlinear engineering optimization problems. These limitations include premature convergence and the tendency to fall easily into local optima. The parameters involved in the introduced “local-global information sharing” term are analyzed, and the principle of parameter selection for good performance is determined. The performance of the IEPSO and of classical optimization algorithms is then tested on multiple sets of classical functions to verify the global search performance of the IEPSO. The simulation test results are compared and analyzed against those of improved classical optimization algorithms to verify the superior performance of the IEPSO algorithm.
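The abstract does not spell out the last-eliminated principle, so the following is only a speculative sketch of one common reading: after each generation, a fraction of the worst particles is eliminated and re-initialized in the search space. The fraction and bounds are illustrative parameters.

```python
import numpy as np

def eliminate_worst(positions, fitness, frac=0.1, low=-1.0, high=1.0):
    # re-initialize the worst frac of particles (assuming minimization)
    n, d = positions.shape
    k = max(1, int(frac * n))
    worst = np.argsort(fitness)[-k:]
    positions[worst] = np.random.uniform(low, high, size=(k, d))
    return positions
```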


2014 ◽  
Vol 687-691 ◽  
pp. 1420-1424
Author(s):  
Hai Tao Han ◽  
Wan Feng Ji ◽  
Yao Qing Zhang ◽  
De Peng Sha

Optimization problems impose two main requirements: finding the global minimum and achieving fast convergence. As heuristic and swarm-intelligence algorithms, particle swarm optimization and the genetic algorithm are both widely used in vehicle path planning because of their favorable search performance. This paper analyzes the characteristics, similarities, and differences of the two algorithms, and carries out simulation experiments under the same operational environment and threat state space. The results show that particle swarm optimization is superior to the genetic algorithm in search speed and convergence.

