Optimizing High-Dimensional Functions with an Efficient Particle Swarm Optimization Algorithm

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Guoliang Li ◽  
Jinhong Sun ◽  
Mohammad N.A. Rana ◽  
Yinglei Song ◽  
Chunmei Liu ◽  
...  

The optimization of high-dimensional functions is an important problem in both science and engineering. Particle swarm optimization is a technique often used for computing the global optimum of a multivariable function. In this paper, we develop a new particle swarm optimization algorithm that can accurately compute the optimal value of a high-dimensional function. The iteration process of the algorithm comprises a number of large iteration steps, each of which consists of two stages. In the first stage, an expansion procedure is used to explore the high-dimensional variable space effectively. In the second stage, the traditional particle swarm optimization algorithm is employed to compute the global optimal value of the function. After a large iteration step is completed, a translation step is applied to each particle in the swarm to start a new large iteration step. Based on this technique, the variable space of a function can be extensively explored. Our analysis and testing results on high-dimensional benchmark functions show that this algorithm achieves optimization results with significantly improved accuracy compared with traditional particle swarm optimization algorithms and a few other state-of-the-art optimization algorithms based on particle swarm optimization.
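As a rough illustration of the structure described above (not the authors' code), the sketch below wraps a conventional PSO loop inside "large iteration steps": an expansion stage pushes particles away from the swarm centre, a standard PSO stage refines them, and a random translation step restarts the next large iteration. The expansion factor, translation range, function names, and the sphere test function are assumptions made for the example.

```python
import numpy as np

def sphere(x):
    # Stand-in benchmark objective; replace with any high-dimensional test function.
    return np.sum(x ** 2)

def pso_with_expansion(f, dim=30, swarm=40, large_steps=5, inner_iters=200,
                       w=0.729, c1=1.49445, c2=1.49445, bound=100.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-bound, bound, (swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest, pbest_val = pos.copy(), np.apply_along_axis(f, 1, pos)
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()

    for _ in range(large_steps):
        # Stage 1 (assumed form): expand particles away from the swarm centre
        # so a wider region of the high-dimensional space is explored.
        centre = pos.mean(axis=0)
        pos = centre + 2.0 * (pos - centre)          # expansion factor is a guess
        np.clip(pos, -bound, bound, out=pos)

        # Stage 2: conventional PSO iterations.
        for _ in range(inner_iters):
            r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
            pos = np.clip(pos + vel, -bound, bound)
            vals = np.apply_along_axis(f, 1, pos)
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            if pbest_val.min() < g_val:
                g_val = pbest_val.min()
                g = pbest[np.argmin(pbest_val)].copy()

        # Translation step (assumed form): shift each particle by a random
        # offset before the next large iteration starts.
        pos = np.clip(pos + rng.uniform(-0.1 * bound, 0.1 * bound, pos.shape),
                      -bound, bound)
    return g, g_val

best_x, best_f = pso_with_expansion(sphere)
print(best_f)
```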

2020 ◽  
Vol 2020 ◽  
pp. 1-26
Author(s):  
Wusi Yang ◽  
Li Chen ◽  
Yi Wang ◽  
Maosheng Zhang

The recently proposed multiobjective particle swarm optimization algorithm based on a competition mechanism cannot effectively deal with many-objective optimization problems; it suffers from relatively poor convergence and diversity and from long computing runtime. In this paper, a novel multi/many-objective particle swarm optimization algorithm based on a competition mechanism is proposed, which maintains population diversity through the maximum and minimum angles between ordinary and extreme individuals. The recently proposed θ-dominance is adopted to further enhance the performance of the algorithm. The proposed algorithm is evaluated on the standard benchmark problems DTLZ, WFG, and UF1-9 and compared with four recently proposed multiobjective particle swarm optimization algorithms and four state-of-the-art many-objective evolutionary optimization algorithms. The experimental results indicate that the proposed algorithm has better convergence and diversity, and its performance is superior to the comparison algorithms on most test instances.
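The abstract states that diversity is maintained via the maximum and minimum angles between ordinary and extreme individuals. The minimal sketch below shows one possible angle-based selection rule (our interpretation, not the paper's actual operator): candidates whose minimum angle, in objective space, to the extreme individuals and to already-selected members is largest are kept.

```python
import numpy as np

def vector_angle(a, b, eps=1e-12):
    # Angle (radians) between two objective vectors.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def select_by_max_min_angle(objs, extreme_idx, k):
    """Pick k 'ordinary' members whose minimum angle to the extreme
    individuals (and to already chosen members) is largest."""
    chosen = list(extreme_idx)
    candidates = [i for i in range(len(objs)) if i not in chosen]
    while candidates and len(chosen) < k + len(extreme_idx):
        best, best_angle = None, -1.0
        for i in candidates:
            min_angle = min(vector_angle(objs[i], objs[j]) for j in chosen)
            if min_angle > best_angle:
                best, best_angle = i, min_angle
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Toy usage: 3-objective values for 6 individuals; indices 0 and 1 are extremes.
objs = np.array([[1.0, 0.1, 0.1], [0.1, 1.0, 0.1], [0.5, 0.5, 0.2],
                 [0.4, 0.4, 0.4], [0.2, 0.3, 0.9], [0.6, 0.2, 0.6]])
print(select_by_max_min_angle(objs, extreme_idx=[0, 1], k=2))
```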


Author(s):  
Goran Klepac

Developed predictive models, especially those based on probabilistic concepts, can be very complex because of the numerous potential combinatory states they must cover. That complexity can cause uncertainty about which factors should take which values to achieve the optimal value of the output. An example of this problem arises in a Bayesian network with numerous potential states and interactions, when we would like to find the values of its nodes that maximize the probability of a specific output node. This chapter presents a novel concept based on the particle swarm optimization algorithm for finding optimal values within developed probabilistic models.
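A minimal sketch of the idea, under our own assumptions rather than the chapter's implementation: a hypothetical `output_probability(states)` callback stands in for Bayesian-network inference and returns the probability of the target output node for a given assignment of the controllable nodes, while the discrete node states are encoded as rounded continuous particle positions.

```python
import numpy as np

def optimise_node_states(output_probability, cardinalities, swarm=30,
                         iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO over discrete node assignments. `cardinalities` lists the number
    of states of each controllable node; the callback is assumed to return
    P(target output node | the decoded evidence states)."""
    rng = np.random.default_rng(seed)
    card = np.array(cardinalities, dtype=float)
    pos = rng.uniform(0, card, (swarm, len(card)))      # continuous encoding
    vel = np.zeros_like(pos)

    def decode(x):
        return np.minimum(np.floor(x), card - 1).astype(int)

    pbest = pos.copy()
    pbest_val = np.array([output_probability(decode(p)) for p in pos])
    g = pbest[np.argmax(pbest_val)].copy()
    g_val = pbest_val.max()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, card - 1e-9)
        vals = np.array([output_probability(decode(p)) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        if pbest_val.max() > g_val:
            g_val = pbest_val.max()
            g = pbest[np.argmax(pbest_val)].copy()
    return decode(g), g_val

# Toy usage with a made-up scoring function standing in for network inference.
dummy = lambda states: float(np.mean(states == 1))
states, p = optimise_node_states(dummy, cardinalities=[2, 3, 2, 4])
print(states, p)
```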


2013 ◽  
Vol 325-326 ◽  
pp. 1628-1631 ◽  
Author(s):  
Hong Zhou ◽  
Ke Luo

To address the problems that the K-medoids algorithm easily falls into local optima and that the basic particle swarm algorithm is prone to premature convergence, this paper incorporates the idea of Simulated Annealing (SA) and proposes a novel K-medoids clustering algorithm based on particle swarm optimization with simulated annealing. The new algorithm combines the fast optimization ability of particle swarm optimization with the probabilistic jumping property of SA, retains the ease of implementation of the particle swarm algorithm, and improves its ability to escape local extreme points. The experimental results show that the new algorithm improves convergence speed and accuracy, and that its clustering effect is better than that of the original K-medoids algorithm.
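A compact sketch of how such a hybrid might look (an illustration under our own assumptions, not the paper's implementation): particles encode candidate medoid indices, and a simulated-annealing acceptance rule occasionally keeps a worse personal best so the swarm can jump out of local optima.

```python
import numpy as np

def kmedoids_cost(data, medoid_idx):
    # Sum of distances from each point to its nearest medoid.
    d = np.linalg.norm(data[:, None, :] - data[medoid_idx][None, :, :], axis=2)
    return d.min(axis=1).sum()

def pso_sa_kmedoids(data, k, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                    temp=1.0, cooling=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    pos = rng.uniform(0, n, (swarm, k))
    vel = np.zeros_like(pos)

    def decode(x):
        # Round to medoid indices; duplicates may collapse in this toy encoding.
        return np.unique(np.clip(x.astype(int), 0, n - 1))

    pbest = pos.copy()
    pbest_val = np.array([kmedoids_cost(data, decode(p)) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 0, n - 1)
        vals = np.array([kmedoids_cost(data, decode(p)) for p in pos])
        for i in range(swarm):
            delta = vals[i] - pbest_val[i]
            # SA rule: always accept improvements, sometimes accept worse moves.
            if delta < 0 or rng.random() < np.exp(-delta / max(temp, 1e-12)):
                pbest[i], pbest_val[i] = pos[i], vals[i]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[np.argmin(pbest_val)].copy()
        temp *= cooling
    return decode(g), g_val

data = np.random.default_rng(1).normal(size=(60, 2))
print(pso_sa_kmedoids(data, k=3))
```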


2021 ◽  
Vol 11 (19) ◽  
pp. 9254
Author(s):  
Lingren Kong ◽  
Jianzhong Wang ◽  
Peng Zhao

Dynamic weapon target assignment (DWTA) is an effective method for solving the multi-stage battlefield fire optimization problem, and it reflects actual combat scenarios better than static weapon target assignment (SWTA). In this paper, a meaningful and effective DWTA model is established that contains two practical and conflicting objectives, namely, maximizing combat benefit and minimizing weapon cost. Moreover, the model contains limited-resource constraints, feasibility constraints, and fire transfer constraints. The presence of multiple objectives and multiple constraints makes DWTA more complicated. To solve this problem, an improved multiobjective particle swarm optimization algorithm (IMOPSO) is proposed in this paper. Different learning strategies are adopted for the dominated and non-dominated solutions, so that the algorithm can learn and evolve in a targeted manner. To address the algorithm's tendency to fall into local optima, this paper proposes a search strategy based on simulated binary crossover (SBX) and polynomial mutation (PM), which enables elitist information to be shared within the external archive and enhances the exploratory capabilities of IMOPSO. In addition, a dynamic archive maintenance strategy is applied to improve the diversity of non-dominated solutions. Finally, the algorithm is compared with three state-of-the-art multiobjective optimization algorithms on both benchmark functions and the DWTA model established in this article. Experimental results show that IMOPSO has better convergence and distribution than the other three multiobjective optimization algorithms and has obvious advantages in solving multiobjective DWTA problems.
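SBX and polynomial mutation are standard real-coded variation operators; the sketch below implements their textbook forms (the distribution indices and bounds are example values, and how IMOPSO applies them to the external archive is not reproduced here).

```python
import numpy as np

def sbx_crossover(p1, p2, eta=15, rng=None):
    """Simulated binary crossover on two real-valued parent vectors."""
    rng = rng or np.random.default_rng()
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, low, high, eta=20, prob=None, rng=None):
    """Polynomial mutation; each gene mutates with probability `prob`."""
    rng = rng or np.random.default_rng()
    prob = 1.0 / len(x) if prob is None else prob
    y = x.copy()
    for i in range(len(x)):
        if rng.random() < prob:
            u = rng.random()
            delta = ((2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5
                     else 1 - (2 * (1 - u)) ** (1 / (eta + 1)))
            y[i] = np.clip(y[i] + delta * (high - low), low, high)
    return y

# Example: perturb two archive members to generate candidate elite solutions.
rng = np.random.default_rng(1)
a, b = rng.random(5), rng.random(5)
c1, c2 = sbx_crossover(a, b, rng=rng)
print(polynomial_mutation(c1, low=0.0, high=1.0, rng=rng))
```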


2014 ◽  
Vol 571-572 ◽  
pp. 191-195
Author(s):  
Lin Ping Su ◽  
Zhao Wang ◽  
Zheng Guan Huang ◽  
Hao Li

With the great development of computer technology and bionics since the 1950s, particle swarm optimization (PSO) emerged. Particle swarm optimization mimics the group behaviors of biological populations in nature and has the following advantages over classic optimization algorithms: it is a global optimization process and does not depend on the initial state; it can be applied widely without prior knowledge of the optimization problem; the ideas and implementation of PSO are quite simple, its steps are standardized, and it is easy to integrate with other algorithms; and, being based on swarm intelligence theory, it has very good potential for parallelism. A characteristic of particle swarm optimization is that fitness values are used to exchange information within the population and to guide the population toward the optimal solution. Therefore, a large number of fitness evaluations must be performed in swarm intelligence optimization algorithms in order to find the optimal solution or an approximation of it. However, when the calculation of the fitness is complex, the time cost of this kind of algorithm becomes too large. What is more, the fitness of real-world optimization problems is often difficult to calculate. To address this problem, an Efficient Particle Swarm Optimization Algorithm Based on Affinity Propagation (EAPSO) is proposed in this paper.
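One plausible reading of the affinity-propagation idea, sketched under our own assumptions rather than the paper's description: cluster the swarm with scikit-learn's AffinityPropagation, evaluate the expensive fitness only at each cluster's exemplar, and reuse that value for the remaining members of the cluster.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def estimated_fitness(positions, fitness_fn):
    """Cluster particle positions and evaluate the costly fitness only at the
    exemplars; other particles inherit their exemplar's value."""
    ap = AffinityPropagation(random_state=0).fit(positions)
    exemplar_idx = ap.cluster_centers_indices_
    exemplar_fit = {i: fitness_fn(positions[i]) for i in exemplar_idx}
    labels = ap.labels_
    return np.array([exemplar_fit[exemplar_idx[labels[i]]]
                     for i in range(len(positions))])

# Toy usage with a cheap stand-in objective.
rng = np.random.default_rng(0)
swarm = rng.uniform(-5, 5, (30, 5))
print(estimated_fitness(swarm, lambda x: np.sum(x ** 2))[:5])
```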


2013 ◽  
Vol 401-403 ◽  
pp. 1328-1335 ◽  
Author(s):  
Yu Feng Yu ◽  
Guo Li ◽  
Chen Xu

The particle swarm optimization (PSO) algorithm has the ability to perform global optimization, but it often suffers from premature convergence, especially on high-dimensional multimodal functions. To overcome premature convergence and improve the global optimization performance of PSO, this paper proposes an improved particle swarm optimization algorithm, called IPSO. Simulation results on eight unimodal/multimodal benchmark functions demonstrate that IPSO is superior to SPSO in enhancing global convergence and avoiding premature convergence, on both unimodal and multimodal high-dimensional (100 real-valued variables) functions.

