An Efficient Particle Swarm Optimization with Multidimensional Mean Learning

Author(s):  
Wei Li ◽  
Xiang Meng ◽  
Ying Huang ◽  
Junhui Yang

The particle swarm optimization (PSO) algorithm is a stochastic, population-based optimization algorithm. Its traditional learning strategy updates each particle's position using the particle's own historical best experience and its neighborhood's best experience to find the optimal solution of the problem. However, this learning strategy is ineffective when dealing with highly complex problems. In this paper, a particle swarm optimization algorithm based on a multidimensional mean learning strategy is proposed. In this algorithm, an opposition-based learning strategy is utilized to initialize the population and enhance the exploitation capability. Furthermore, the historical best positions of all the particles are reconstructed in a vertical crossover manner based on the mean information of multiple optimal dimensions to generate guiding particles. Additionally, an improved inertia weight is used to further guide the particle movements and balance the algorithm's capabilities for global exploration and local exploitation. The proposed algorithm is tested on 12 benchmark functions and compared with several well-known PSO algorithms. The experimental results show that the proposed algorithm obtains more competitive solutions than the other PSO algorithms when solving high-dimensional complex problems.
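As an illustration of the two ingredients just described, the following minimal Python sketch shows opposition-based initialization and a mean-based guiding particle built from the top personal-best positions; the function names, the choice of `k`, and the exact crossover rule are illustrative assumptions rather than the authors' precise formulation.

```python
import numpy as np

def opposition_init(pop_size, dim, lower, upper, objective, rng):
    """Opposition-based initialization: generate random particles, form their
    opposite points, and keep the best pop_size of the combined set."""
    x = rng.uniform(lower, upper, size=(pop_size, dim))
    x_opp = lower + upper - x                      # opposite points
    merged = np.vstack([x, x_opp])
    fitness = np.apply_along_axis(objective, 1, merged)
    return merged[np.argsort(fitness)[:pop_size]]

def mean_guiding_particle(pbest, pbest_fitness, k=5):
    """Guiding particle built dimension-by-dimension ("vertical crossover")
    as the mean of the k best personal-best positions."""
    top = pbest[np.argsort(pbest_fitness)[:k]]
    return top.mean(axis=0)
```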

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 597
Author(s):  
Kun Miao ◽  
Qian Feng ◽  
Wei Kuang

The particle swarm optimization (PSO) algorithm is a widely used swarm-based, nature-inspired optimization algorithm. However, it suffers from search stagnation when trapped in a sub-optimal solution. This paper proposes a novel hybrid algorithm (SDPSO) to improve its performance in local searches. The algorithm merges two strategies, static exploitation (SE, a velocity-updating strategy built on an inertia-free velocity) and the direction search (DS) of the Rosenbrock method, into the original PSO. With this hybrid, extensive exploration is still maintained by PSO, while the SE is responsible for locating a small region and the DS then intensifies the search within it. The SDPSO algorithm was implemented and tested on unconstrained benchmark problems (CEC2014) and some constrained engineering design problems. The performance of SDPSO is compared with that of other optimization algorithms, and the results show that SDPSO achieves competitive performance.
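The two strategies can be sketched roughly as follows. This is a simplified Python illustration: the inertia-free update form, the coordinate-wise direction search, and all parameter values are assumptions, and the full Rosenbrock method additionally rotates its search directions.

```python
import numpy as np

def inertia_free_step(x, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    """Static-exploitation style move: the velocity is rebuilt each step from
    the attraction terms only, with no inertia component."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return x + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def direction_search(x, f, step=0.1, shrink=0.5, n_rounds=20):
    """Simplified coordinate-wise direction search around x: probe each axis
    in both directions, keep improving moves, and shrink the step when a
    full round brings no improvement."""
    best, fbest = np.array(x, dtype=float), f(x)
    for _ in range(n_rounds):
        improved = False
        for d in range(best.size):
            for s in (step, -step):
                trial = best.copy()
                trial[d] += s
                ft = f(trial)
                if ft < fbest:
                    best, fbest, improved = trial, ft, True
        if not improved:
            step *= shrink
    return best, fbest
```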


2021 ◽  
Vol 11 (2) ◽  
pp. 839
Author(s):  
Shaofei Sun ◽  
Hongxin Zhang ◽  
Xiaotong Cui ◽  
Liang Dong ◽  
Muhammad Saad Khan ◽  
...  

This paper focuses on electromagnetic information security in communication systems. Classical correlation electromagnetic analysis (CEMA) is known as a powerful way to recover a cryptographic algorithm's key. In the classical method, only one byte of the key is used while the other bytes are treated as noise, which not only reduces efficiency but also wastes information. In order to take full advantage of the available information, multiple bytes of the key are used. We transform the key into a multidimensional form, with each byte of the key considered as one dimension. The problem of searching for the right key is thus transformed into the problem of optimizing the correlation coefficients of key candidates. The particle swarm optimization (PSO) algorithm is particularly well suited to optimization problems with high dimensionality and complex structure. In this paper, we applied the PSO algorithm to CEMA to solve this multidimensional problem and added a mutation operator to the optimization algorithm to improve the results. We therefore propose a multibyte correlation electromagnetic analysis based on particle swarm optimization. We verified our method on a universal test board designed for research and development on hardware security, on which we implemented the Advanced Encryption Standard (AES) cryptographic algorithm. Experimental results show that our method outperforms the classical method, achieving approximately a 13.72% improvement for the corresponding case.
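A rough sketch of the kind of fitness function such a multibyte search optimizes is given below; the Hamming-weight leakage model, the S-box lookup, and the one-sample-per-byte trace layout are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming-weight table

def cema_fitness(key_bytes, plaintexts, traces, sbox):
    """Score a multibyte key candidate: sum of the absolute Pearson
    correlations between a Hamming-weight leakage model and the measured
    EM samples, one correlation term per key byte."""
    key_bytes = np.asarray(key_bytes, dtype=np.uint8)
    score = 0.0
    for i, k in enumerate(key_bytes):
        model = HW[sbox[plaintexts[:, i] ^ k]]   # hypothesised leakage
        score += abs(np.corrcoef(model, traces[:, i])[0, 1])
    return score
```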


2020 ◽  
Vol 10 (1) ◽  
pp. 56-64 ◽  
Author(s):  
Neeti Kashyap ◽  
A. Charan Kumari ◽  
Rita Chhikara

Abstract: Web service composition is valuable for structuring innovative applications for different Internet-based business solutions. Existing services can be reused by other applications via the web. Because many available services offer similar functionality, suitable Service Composition (SC) is required. For each service in an SC there is a set of candidates, from which a suitable candidate service is picked based on certain criteria. Quality of service (QoS) is one of the criteria used to select the appropriate service. One of the most important capabilities offered by services in Internet of Things (IoT) based systems is dynamic composability. In this paper, two metaheuristic algorithms, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are utilized to tackle QoS-based service composition problems. QoS has become a critical issue in the management of web services because of the large number of services that provide similar functionality with different characteristics. Quality of service in service composition comprises different non-functional factors such as service cost, execution time, availability, throughput, and reliability. Choosing an appropriate SC for IoT-based applications that optimizes the QoS parameters while fulfilling the user's requirements is the critical issue addressed in this paper. In simulation, the PSO algorithm is used to solve the SC problem in IoT and is assessed against GA. Experimental results demonstrate that GA improves the quality of solutions for the SC problem in IoT, helps identify the optimal solution, and shows preferable outcomes over PSO.
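To make the selection criterion concrete, the sketch below shows one common way to aggregate the QoS attributes of a candidate composition into a single fitness value; the aggregation rules and weight names are illustrative assumptions, not the paper's exact formulation.

```python
def qos_fitness(composition, weights):
    """Aggregate QoS of a candidate composition (one concrete service per
    abstract task): cost and execution time are summed, availability and
    reliability are multiplied, throughput is the bottleneck value."""
    cost = sum(s["cost"] for s in composition)
    time = sum(s["time"] for s in composition)
    availability = reliability = 1.0
    for s in composition:
        availability *= s["availability"]
        reliability *= s["reliability"]
    throughput = min(s["throughput"] for s in composition)
    return (weights["availability"] * availability
            + weights["reliability"] * reliability
            + weights["throughput"] * throughput
            - weights["cost"] * cost
            - weights["time"] * time)
```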


2012 ◽  
Vol 182-183 ◽  
pp. 1953-1957
Author(s):  
Zhao Xia Wu ◽  
Shu Qiang Chen ◽  
Jun Wei Wang ◽  
Li Fu Wang

When parameters are measured with fiber Bragg gratings (FBGs) in practice, some parameters are hard to measure directly; they severely influence the reflective spectrum of the FBG and make the characteristic information harder to extract. Therefore, a particle swarm optimization algorithm is proposed for analyzing the uniform-force reflective spectrum of an FBG. Based on the uniform-force sensing theory of FBGs and the particle swarm optimization algorithm, an objective function was established, and the corresponding experiment and simulation were constructed. The characteristic information in the reflective spectrum of the FBG was then extracted. Experimental data showed that particle swarm optimization extracts the characteristic information effectively and easily, and offers advantages such as high accuracy, stability, and a fast convergence rate, making it useful for high-precision measurement with FBG sensors.
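The objective function used in such a spectral fit can be sketched as follows; `simulate_spectrum` stands for a forward model of the FBG reflection spectrum (for example a coupled-mode or transfer-matrix computation) and is an assumed placeholder, as is the mean-squared-error form.

```python
import numpy as np

def spectrum_error(params, wavelengths, measured, simulate_spectrum):
    """Objective for the spectral fit: mean-squared error between the measured
    FBG reflection spectrum and the spectrum simulated from the candidate
    grating parameters; PSO minimises this over the parameter vector."""
    predicted = simulate_spectrum(params, wavelengths)
    return float(np.mean((predicted - np.asarray(measured)) ** 2))
```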


2019 ◽  
Vol 61 (4) ◽  
pp. 177-185
Author(s):  
Moritz Mühlenthaler ◽  
Alexander Raß

Abstract A discrete particle swarm optimization (PSO) algorithm is a randomized search heuristic for discrete optimization problems. A fundamental question about randomized search heuristics is how long it takes, in expectation, until an optimal solution is found. We give an overview of recent developments related to this question for discrete PSO algorithms. In particular, we give a comparison of known upper and lower bounds of expected runtimes and briefly discuss the techniques used to obtain these bounds.


Mathematics ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 357 ◽  
Author(s):  
Shu-Kai S. Fan ◽  
Chih-Hung Jen

Particle swarm optimization (PSO) is a population-based optimization technique that has been applied extensively to a wide range of engineering problems. This paper proposes a variation of the original PSO algorithm for unconstrained optimization, dubbed the enhanced partial search particle swarm optimizer (EPS-PSO), which uses the idea of cooperative multiple swarms in an attempt to improve the convergence and efficiency of the original PSO algorithm. The cooperative searching strategy is specifically devised to prevent the particles from being trapped in local optimal solutions and to locate the global optimal solution efficiently. The effectiveness of the proposed algorithm is verified through a simulation study in which the EPS-PSO algorithm is compared to a variety of existing "cooperative" PSO algorithms on well-known benchmark functions.
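For readers unfamiliar with the cooperative idea, the sketch below shows a generic multi-swarm PSO loop in which several sub-swarms search in parallel and share a single global best; it illustrates only the cooperative framework, not the specific partial-search rule of EPS-PSO, and all parameter values are assumptions.

```python
import numpy as np

def multi_swarm_pso(objective, dim, bounds, n_swarms=3, swarm_size=10,
                    iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic cooperative multi-swarm PSO: several sub-swarms explore in
    parallel and are coupled through one shared global best each iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_swarms, swarm_size, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pfit = np.apply_along_axis(objective, 2, x)
    for _ in range(iters):
        gbest = pbest.reshape(-1, dim)[pfit.argmin()]   # shared across swarms
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = np.apply_along_axis(objective, 2, x)
        improved = fit < pfit
        pbest[improved], pfit[improved] = x[improved], fit[improved]
    return pbest.reshape(-1, dim)[pfit.argmin()], pfit.min()
```

For example, `multi_swarm_pso(lambda z: float(np.sum(z**2)), dim=10, bounds=(-5.0, 5.0))` minimizes the sphere function with three cooperating sub-swarms.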


2016 ◽  
Vol 11 (1) ◽  
pp. 58-67 ◽  
Author(s):  
S Sarathambekai ◽  
K Umamaheswari

Discrete particle swarm optimization is one of the most recently developed population-based meta-heuristic optimization algorithms in swarm intelligence and can be applied to discrete optimization problems. This article presents a discrete particle swarm optimization algorithm to efficiently schedule tasks in heterogeneous multiprocessor systems. All optimization algorithms share a common algorithmic step, namely population initialization. It plays a significant role because it can affect the convergence speed and the quality of the final solution. Random initialization is the most commonly used method in the majority of evolutionary algorithms to generate the initial population. Good initial solutions can help the algorithm locate the optimal solution, whereas poor ones may prevent it from doing so. Intelligence should therefore be incorporated into the generation of the initial population to avoid premature convergence. This article presents a discrete particle swarm optimization algorithm that incorporates an opposition-based technique to generate the initial population and a greedy algorithm to balance the load of the processors. Makespan, flow time, and reliability cost are the three measures used to evaluate the efficiency of the proposed discrete particle swarm optimization algorithm for scheduling independent tasks in distributed systems. Computational simulations on a set of benchmark instances assess the performance of the proposed algorithm.
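Two pieces of such a pipeline can be sketched briefly: evaluating makespan and flow time for a task-to-processor assignment, and forming the discrete "opposite" of an assignment for opposition-based initialization. The ETC-matrix layout and the per-processor sequential execution model are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

def evaluate_schedule(assignment, etc):
    """Makespan and flow time of a task-to-processor assignment, where
    etc[i, j] is the expected time to compute task i on processor j and
    tasks run sequentially, in list order, on their assigned processor."""
    loads = np.zeros(etc.shape[1])
    flowtime = 0.0
    for task, proc in enumerate(assignment):
        loads[proc] += etc[task, proc]
        flowtime += loads[proc]          # completion time of this task
    return loads.max(), flowtime         # makespan, flow time

def opposite_assignment(assignment, n_procs):
    """Discrete analogue of an opposite point: processor p maps to
    n_procs - 1 - p, used to diversify the initial population."""
    return (n_procs - 1) - np.asarray(assignment)
```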


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Xiaofeng Lv ◽  
Deyun Zhou ◽  
Ling Ma ◽  
Yuyuan Zhang ◽  
Yongchuan Tang

The fault rate of equipment increases significantly with its service life, especially for multiple faults. Typically, Bayesian theory is used to construct the fault model, and an intelligent algorithm is used to solve it. A Lagrangian relaxation algorithm can be adopted to solve multiple-fault diagnosis models, but the mathematical derivation is complex, and the updating method for the Lagrangian multipliers is limited and may fall into a local optimum. The particle swarm optimization (PSO) algorithm is a global search algorithm. In this paper, an improved Lagrange-particle swarm optimization algorithm is proposed in which the Lagrangian multipliers are updated by the PSO algorithm for global searching. The difference between the upper and lower bounds is used to construct the fitness function of the PSO. The multiple-fault diagnosis model can then be solved by the improved Lagrange-particle swarm optimization algorithm. An experiment on a case study of sensor-data-based multiple fault diagnosis verifies the effectiveness and robustness of the proposed method.
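A compact sketch of the core idea, PSO search over the multiplier space with a duality-gap fitness, is given below; the update form, the non-negativity projection, and the `upper_bound`/`lower_bound` callables are assumptions standing in for the problem-specific bounds of the fault-diagnosis model.

```python
import numpy as np

def pso_multiplier_step(lam, v, pbest, gbest, w=0.6, c1=1.8, c2=1.8, rng=None):
    """One PSO step in multiplier space: the Lagrangian multipliers play the
    role of particle positions and are projected back to non-negative values."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(lam.shape), rng.random(lam.shape)
    v = w * v + c1 * r1 * (pbest - lam) + c2 * r2 * (gbest - lam)
    lam = np.maximum(lam + v, 0.0)
    return lam, v

def duality_gap(lam, upper_bound, lower_bound):
    """Fitness of a multiplier vector: gap between the best feasible solution
    (upper bound) and the Lagrangian lower bound at these multipliers."""
    return upper_bound(lam) - lower_bound(lam)
```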


2019 ◽  
Vol 2019 ◽  
pp. 1-22 ◽  
Author(s):  
Hao Li ◽  
Hongbin Jin ◽  
Hanzhong Wang ◽  
Yanyan Ma

The Holonic Particle Swarm Optimization (HPSO) algorithm was the first to apply multiagent theory to the improvement of the PSO algorithm, and it achieved good results. In order to further improve performance, this paper proposes an improved Adaptive Holonic Particle Swarm Optimization (AHPSO) algorithm. First, a brief review of the HPSO algorithm is carried out, showing that it can be further improved in three aspects: the grouping strategy, the setting of the number of iterations, and the state-switching criterion. The HPSO algorithm uses an approximately uniform grouping strategy, which is the simplest option but does not consider the connections between particles; if particles with larger or smaller differences are grouped together at different search stages, search efficiency improves. Therefore, this paper proposes a grouping strategy based on information entropy and system clustering and combines the two grouping strategies with corresponding search methods. The performance of the HPSO algorithm also depends on the preset number of iterations: if it is too small, it is difficult to reach the optimum, and if it is too large, computing resources are wasted. Therefore, this paper constructs an adaptive termination condition that causes the particles to terminate spontaneously after convergence. Finally, the HPSO algorithm performs only one conversion from extensive search to exact search and still has the potential to fall into a local optimum, so this paper proposes a state-switching condition to improve the probability that the algorithm jumps out of local optima. AHPSO and HPSO are compared on 22 groups of standard test functions. AHPSO converges faster than HPSO; when HPSO is run for the number of iterations at which AHPSO converges, a large gap remains between HPSO and the optimal solution, i.e., AHPSO achieves better algorithmic efficiency without requiring the number of iterations to be set in advance.
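The adaptive termination idea can be sketched as a simple stopping rule that fires once the best fitness has stopped improving and the swarm has collapsed; the thresholds and the diversity measure below are illustrative choices, not the exact condition constructed in the paper.

```python
import numpy as np

def should_terminate(best_history, swarm, window=20, tol=1e-8, diversity_eps=1e-6):
    """Adaptive stopping rule: terminate once the best fitness has improved by
    less than tol over the last `window` iterations and the swarm's positional
    diversity (mean per-dimension standard deviation) has collapsed."""
    if len(best_history) < window:
        return False
    stalled = (best_history[-window] - best_history[-1]) < tol
    diversity = float(np.mean(np.std(swarm, axis=0)))
    return stalled and diversity < diversity_eps
```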


2013 ◽  
Vol 427-429 ◽  
pp. 1710-1713
Author(s):  
Xiang Tian ◽  
Yue Lin Gao

This paper introduces the principles and characteristics of the Particle Swarm Optimization (PSO) algorithm and addresses its main shortcoming, a tendency to become trapped in local minima, by proposing a new improved adaptive hybrid particle swarm optimization algorithm. The algorithm adopts a dynamically changing inertia weight and variable learning factors, based on the mechanism of natural selection. Numerical results on classical test functions illustrate that this hybrid algorithm improves the global searching ability and the success rate.
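A common concrete realisation of a dynamically changing inertia weight and variable learning factors is a linear schedule over the iterations, sketched below; the exact schedules and bounds used in the paper may differ.

```python
def adaptive_coefficients(t, t_max, w_max=0.9, w_min=0.4,
                          c1_init=2.5, c1_final=0.5,
                          c2_init=0.5, c2_final=2.5):
    """Linearly time-varying inertia weight and learning factors: the inertia
    weight and cognitive factor decrease while the social factor increases as
    the search proceeds, shifting from exploration to exploitation."""
    frac = t / float(t_max)
    w = w_max - (w_max - w_min) * frac
    c1 = c1_init + (c1_final - c1_init) * frac
    c2 = c2_init + (c2_final - c2_init) * frac
    return w, c1, c2
```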

