A Biological Immune Mechanism-Based Quantum PSO Algorithm and Its Application in Back Analysis for Seepage Parameters

2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Jiacheng Tan ◽  
Liqun Xu ◽  
Kailai Zhang ◽  
Chao Yang

Back analysis for seepage parameters is a classic issue in hydraulic engineering seepage calculations. Considering the characteristics of inversion problems, including high dimensionality, numerous local optima, poor convergence performance, and excessive calculation time, a biological immune mechanism-based quantum particle swarm optimization (IQPSO) algorithm was proposed to solve the inversion problem. By introducing a concentration regulation strategy to improve population diversity and a vaccination strategy to accelerate the convergence rate, the modified algorithm overcomes the shortcoming of traditional PSO, which easily falls into a local optimum. Furthermore, a simple multicore parallel computation strategy was applied to reduce computation time. The effectiveness and practicability of IQPSO were evaluated by numerical experiments. In this paper, taking one concrete face rock-fill dam (CFRD) as a case, a back analysis for seepage parameters was accomplished with the proposed optimization algorithm, and the steady seepage field of the dam was analysed by the finite element method (FEM). Compared with immune PSO and quantum PSO, the proposed algorithm had better global search ability, convergence performance, and computation speed. The optimized back analysis obtained the permeability coefficients of the CFRD with high accuracy.
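The abstract gives no code, so the following minimal Python sketch only illustrates the general shape of a quantum-behaved PSO loop with a simple concentration-based diversity step. The objective function, thresholds, and the re-seeding rule are illustrative assumptions; the paper's vaccination strategy and FEM-based seepage objective are omitted.

```python
import numpy as np

def sphere(x):
    # Stand-in objective; in the paper the objective is an FEM seepage simulation.
    return float(np.sum(x ** 2))

def iqpso_sketch(f, dim=10, n=30, iters=200, bounds=(-5.0, 5.0), conc_thresh=0.5, seed=0):
    """Hypothetical sketch: quantum-behaved PSO with a simple concentration-based
    diversity step. Not the authors' exact IQPSO (vaccination is omitted)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    pbest, pfit = x.copy(), np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pfit)]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters                   # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)                     # mean of personal bests
        phi = rng.random((n, dim))
        p = phi * pbest + (1.0 - phi) * gbest          # per-particle local attractor
        u = 1.0 - rng.random((n, dim))                 # in (0, 1], keeps the log finite
        sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lo, hi)
        fit = np.array([f(xi) for xi in x])
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        gbest = pbest[np.argmin(pfit)]
        # Concentration regulation (assumed form): if too many particles crowd the
        # current best fitness level, re-seed the worst of them to restore diversity.
        conc = np.mean(np.abs(fit - fit.min()) < 1e-3 * (1.0 + abs(fit.min())))
        if conc > conc_thresh:
            worst = np.argsort(fit)[-n // 5:]
            x[worst] = rng.uniform(lo, hi, (len(worst), dim))
    return gbest, float(pfit.min())

# Example: best, value = iqpso_sketch(sphere)
```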

Author(s):  
Shoubao Su ◽  
Zhaorui Zhai ◽  
Chishe Wang ◽  
Kaimeng Ding

The traditional fractional-order particle swarm optimization (FOPSO) algorithm depends strongly on the choice of the fractional order and easily falls into local optima. To overcome these disadvantages, a new perspective on the PID gain tuning procedure is proposed by combining a time factor with FOPSO, yielding a new fractional-order particle swarm optimization algorithm called TFFV-PSO, which reduces the dependence on the fractional order and enhances the ability of particles to escape from local optima. Because of its influence on the performance of the algorithm, the time factor is varied with a population diversity measure to balance the exploration and exploitation capabilities of the swarm and thereby adjust the convergence speed of the algorithm, leading to better convergence performance. The improved method is tested on several benchmark functions and applied to tune PID controller parameters. The experimental results and a comparison with previous methods show that the proposed TFFV-PSO provides an adequate convergence speed, satisfactory accuracy, and better robustness.
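As a rough illustration of the two ingredients named in the abstract, the sketch below combines a fractional-order velocity memory (Grünwald-Letnikov-style coefficients) with a time factor h that is varied with a crude population-diversity measure. The schedule for h, the diversity measure, and all constants are assumptions, not the published TFFV-PSO.

```python
import numpy as np

def frac_coeffs(alpha, k=4):
    # First k memory coefficients of the Grünwald-Letnikov fractional derivative:
    # alpha, alpha(1-alpha)/2, alpha(1-alpha)(2-alpha)/6, ...
    c = [alpha]
    for j in range(1, k):
        c.append(c[-1] * (j - alpha) / (j + 1))
    return np.array(c)

def tffv_pso_sketch(f, dim=5, n=20, iters=150, alpha=0.6, bounds=(-10.0, 10.0), seed=1):
    """Illustrative sketch, not the published TFFV-PSO: fractional-order velocity
    memory plus a time factor h that shrinks when swarm diversity is low."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v_hist = [np.zeros((n, dim)) for _ in range(4)]     # last four velocities (memory)
    pbest, pfit = x.copy(), np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pfit)]
    a, c1, c2 = frac_coeffs(alpha), 1.5, 1.5
    for _ in range(iters):
        diversity = x.std(axis=0).mean() / (hi - lo)    # crude population-diversity measure
        h = 0.5 + diversity                             # assumed time-factor schedule
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = sum(a[j] * v_hist[j] for j in range(4)) \
            + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + h * v, lo, hi)
        v_hist = [v] + v_hist[:3]
        fit = np.array([f(xi) for xi in x])
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        gbest = pbest[np.argmin(pfit)]
    return gbest, float(pfit.min())
```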


2010 ◽  
Vol 20-23 ◽  
pp. 1280-1285
Author(s):  
Jian Xiang Wei ◽  
Yue Hong Sun

The particle swarm optimization (PSO) algorithm is a population-based search strategy that has exhibited good performance on well-known numerical test problems. However, it easily becomes trapped in a local optimum because population diversity deteriorates during the evolution. To overcome this shortcoming of PSO, this paper proposes an improved PSO based on the symmetry of the distribution of particle positions. Analysis of particle movement in high-dimensional space shows that the more symmetric the particle distribution, the higher the probability that the algorithm converges to the global optimum. A novel population diversity function is put forward, and an adjustment mechanism is incorporated into the basic PSO. The steps of the proposed algorithm are given in detail. Experimental results on two typical benchmark functions show that the improved PSO has better convergence precision than the basic PSO.
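The abstract does not define the diversity function, so the following is only a plausible sketch of what a symmetry-based diversity measure and a corrective adjustment might look like; the measure, the reflection rule, and the trigger threshold are assumptions.

```python
import numpy as np

def symmetry_diversity(positions):
    """Toy symmetry-based diversity measure (the paper's exact definition is not
    given in the abstract): per-dimension distance of the swarm centroid from the
    mid-point of the occupied region, normalized by the span; 0 = fully symmetric."""
    centroid = positions.mean(axis=0)
    mid = 0.5 * (positions.min(axis=0) + positions.max(axis=0))
    span = positions.max(axis=0) - positions.min(axis=0) + 1e-12
    return float(np.mean(np.abs(centroid - mid) / span))

def rebalance(positions, rng, frac=0.2):
    """Assumed adjustment step: reflect a random fraction of particles through the
    centroid so the distribution becomes more symmetric again."""
    n = len(positions)
    idx = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    positions[idx] = 2.0 * positions.mean(axis=0) - positions[idx]
    return positions

# Inside a PSO loop one might call:  if symmetry_diversity(x) > 0.2: x = rebalance(x, rng)
```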


2012 ◽  
Vol 605-607 ◽  
pp. 2442-2446
Author(s):  
Xin Ran Li ◽  
Yan Xia Jin

To alleviate the premature convergence problem of the particle swarm algorithm, this article puts forward an improved PSO based on quantum behavior, the CMQPSO algorithm. The new algorithm first uses a Tent map to initialize the particle swarm and performs a chaotic search around each particle, strengthening the diversity of the search. Secondly, an effective test for early stagnation is embedded in the algorithm; once premature convergence is detected, the algorithm applies a structural mutation that lets particles jump out of the local optimum, reducing invalid iterations. Tests on classical benchmark functions show that the improved algorithm is superior to the classical PSO algorithm and the quantum-behaved PSO algorithm.
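Tent-map initialization is a standard ingredient; the sketch below shows one common form of it together with a simple fitness-variance stagnation test. The map parameter and the variance tolerance are assumptions, not the paper's values.

```python
import numpy as np

def tent_map_init(n_particles, dim, bounds, mu=2.0, seed=0):
    """Sketch of Tent-map chaotic initialization (one common variant; the paper's
    exact map parameters are not stated in the abstract)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    z = rng.uniform(0.01, 0.99, dim)                     # avoid the map's fixed points
    swarm = np.empty((n_particles, dim))
    for i in range(n_particles):
        z = np.where(z < 0.5, mu * z, mu * (1.0 - z))    # tent map iteration
        swarm[i] = lo + z * (hi - lo)                    # scale the chaotic sequence onto the search space
    return swarm

def is_premature(fitness, tol=1e-6):
    # Simple stagnation test (assumed form): the swarm's fitness variance has collapsed.
    return float(np.var(fitness)) < tol
```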


2013 ◽  
Vol 284-287 ◽  
pp. 2411-2415
Author(s):  
Chien Chun Kung ◽  
Kuei Yi Chen

This paper presents a technique for designing a PSO guidance algorithm for the nonlinear and dynamic pursuit-evasion optimization problem. In the PSO guidance algorithm, the particle positions of the swarm are initialized randomly within the guidance command solution space. Treating the particle positions as guidance commands, the missile's behavior is predicted and recorded by solving point-mass equations of motion over a defined short-range period. Taking the relative distance as the objective, the fitness of each particle is evaluated accordingly. As the PSO algorithm proceeds, these guidance commands migrate toward a local optimum until the global optimum is reached. This paper implements the PSO guidance algorithm in two pursuit-evasion scenarios, and the simulation results show that the proposed design technique is able to generate a missile guidance law with satisfactory performance in execution time, terminal miss distance, time of interception, and robust pursuit capability.
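To make the fitness evaluation concrete, the toy function below applies one candidate guidance command (a lateral acceleration) to a 2-D point-mass missile model over a short horizon and returns the closest approach to a constant-velocity target, which plays the role of the relative-distance objective. The state layout, horizon, and target model are illustrative assumptions, not the paper's equations of motion.

```python
import numpy as np

def predicted_miss(cmd_accel, missile, target, horizon=2.0, dt=0.05):
    """Toy fitness for one particle: apply a candidate lateral-acceleration command
    over a short horizon with 2-D point-mass kinematics and return the closest
    approach to the target (the relative-distance objective)."""
    mx, my, v, heading = missile                # missile position, speed, heading [rad]
    tx, ty, tvx, tvy = target                   # target position and velocity
    closest = np.hypot(tx - mx, ty - my)
    for _ in range(int(horizon / dt)):
        heading += cmd_accel / v * dt           # lateral acceleration turns the velocity vector
        mx += v * np.cos(heading) * dt
        my += v * np.sin(heading) * dt
        tx += tvx * dt
        ty += tvy * dt
        closest = min(closest, np.hypot(tx - mx, ty - my))
    return closest                              # PSO minimizes this over cmd_accel
```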


2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and to increase population diversity. Particles that have fallen into a local optimum are adaptively adjusted so that the swarm can reach the global optimum, based on a judgment of the group's spatial concentration and the variance of the fitness values. At the same time, the global factor is adjusted dynamically according to the fitness of the current particle. Four typical function optimization problems are used in simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
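A minimal sketch of the stagnation judgment described above might look as follows: the swarm is considered stuck when the particles are spatially concentrated and the fitness variance has collapsed, after which some particles are adjusted. The thresholds and the re-seeding rule are assumptions, not the paper's.

```python
import numpy as np

def swarm_stagnated(positions, fitness, space_tol=0.05, fit_tol=1e-4):
    """Assumed test: the swarm is judged to be stuck in a local optimum when the
    particles are spatially concentrated AND the fitness variance has collapsed."""
    spatial_spread = positions.std(axis=0).mean()
    return spatial_spread < space_tol and np.var(fitness) < fit_tol

def reseed_worst(positions, fitness, bounds, rng, frac=0.3):
    # Assumed adjustment: re-randomize the worst fraction of particles to restore diversity.
    lo, hi = bounds
    k = max(1, int(frac * len(positions)))
    idx = np.argsort(fitness)[-k:]
    positions[idx] = rng.uniform(lo, hi, (k, positions.shape[1]))
    return positions
```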


Author(s):  
Jiarui Zhou ◽  
Junshan Yang ◽  
Ling Lin ◽  
Zexuan Zhu ◽  
Zhen Ji

Particle swarm optimization (PSO) is a swarm intelligence algorithm well known for its simplicity and high efficiency on various problems. Conventional PSO suffers from premature convergence due to the rapid convergence speed and lack of population diversity. It is easy to get trapped in local optima. For this reason, improvements are made to detect stagnation during the optimization and reactivate the swarm to search towards the global optimum. This chapter imposes the reflecting bound-handling scheme and von Neumann topology on PSO to increase the population diversity. A novel crown jewel defense (CJD) strategy is introduced to restart the swarm when it is trapped in a local optimum region. The resultant algorithm named LCJDPSO-rfl is tested on a group of unimodal and multimodal benchmark functions with rotation and shifting. Experimental results suggest that the LCJDPSO-rfl outperforms state-of-the-art PSO variants on most of the functions.
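Two of the named ingredients are standard and easy to sketch: reflecting bound handling and the von Neumann neighborhood. The crown jewel defense restart itself is not reproduced here, and the helpers below are generic formulations rather than necessarily the chapter's exact ones.

```python
import numpy as np

def reflect(position, velocity, lo, hi):
    """Reflecting bound handling: a particle that leaves the box is mirrored back
    inside and the offending velocity component is reversed (a common formulation;
    the chapter's exact scheme may differ)."""
    over, under = position > hi, position < lo
    position = np.where(over, 2.0 * hi - position, position)
    position = np.where(under, 2.0 * lo - position, position)
    velocity = np.where(over | under, -velocity, velocity)
    return np.clip(position, lo, hi), velocity

def von_neumann_neighbors(i, rows, cols):
    """Indices of the four lattice neighbours (up, down, left, right) of particle i
    when the swarm is arranged on a rows x cols torus (von Neumann topology)."""
    r, c = divmod(i, cols)
    return [((r - 1) % rows) * cols + c, ((r + 1) % rows) * cols + c,
            r * cols + (c - 1) % cols, r * cols + (c + 1) % cols]
```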


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 738 ◽  
Author(s):  
Łukasz Strąk ◽  
Rafał Skinderowicz ◽  
Urszula Boryczka ◽  
Arkadiusz Nowakowski

This paper presents a discrete particle swarm optimization (DPSO) algorithm with heterogeneous (non-uniform) parameter values for solving the dynamic traveling salesman problem (DTSP). The DTSP can be modeled as a sequence of static sub-problems, each of which is an instance of the TSP. In the proposed DPSO algorithm, the information gathered while solving a sub-problem is retained in the form of a pheromone matrix and used by the algorithm while solving the next sub-problem. We present a method for automatically setting the values of the key DPSO parameters (except for the parameters directly related to the computation time and the size of a problem). We show that the diversity of parameter values has a positive effect on the quality of the generated results, and that the population in the proposed algorithm maintains a higher level of entropy. We compare the performance of the proposed heterogeneous DPSO with two ant colony optimization (ACO) algorithms. The proposed algorithm outperforms the base DPSO and is competitive with the ACO.
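A minimal sketch of the heterogeneous-parameter idea and the pheromone carry-over between sub-problems is given below; the parameter names, ranges, and the retention rule are illustrative assumptions rather than the paper's automatic-setting method.

```python
import numpy as np

def heterogeneous_params(n_particles, rng):
    """Sketch of the heterogeneous-parameter idea: each particle gets its own
    parameter values drawn from plausible ranges (names and ranges are assumed)."""
    return [{"c1": rng.uniform(0.5, 2.5),
             "c2": rng.uniform(0.5, 2.5),
             "inertia": rng.uniform(0.3, 0.9)} for _ in range(n_particles)]

def carry_over_pheromone(pheromone, keep=0.5):
    """Assumed carry-over between DTSP sub-problems: the old pheromone matrix is
    partially retained (blended toward its mean level) instead of being reset, so
    knowledge from the previous, similar instance biases the next search."""
    return keep * pheromone + (1.0 - keep) * pheromone.mean()
```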


2019 ◽  
Vol 18 (03) ◽  
pp. 833-866 ◽  
Author(s):  
Mi Li ◽  
Huan Chen ◽  
Xiaodong Wang ◽  
Ning Zhong ◽  
Shengfu Lu

The particle swarm optimization (PSO) algorithm is simple to implement and converges quickly, but it easily falls into a local optimum: on the one hand, it lacks the ability to balance global exploration and local exploitation of the population, and on the other hand, the population lacks diversity. To solve these problems, this paper proposes an improved adaptive inertia weight particle swarm optimization (AIWPSO) algorithm. The AIWPSO algorithm includes two strategies: (1) an inertia weight adjustment method based on the optimal fitness value of individual particles, so that different particles have different inertia weights; this increases the diversity of inertia weights and helps balance global exploration and local exploitation. (2) A mutation threshold is used to determine which particles need to be mutated; this compensates for the imprecision of purely random mutation and effectively increases the diversity of the population. To evaluate the performance of the proposed AIWPSO algorithm, benchmark functions are used for testing. The results show that AIWPSO achieves satisfactory results compared with other PSO algorithms, indicating that the AIWPSO algorithm balances the abilities of global exploration and local exploitation while increasing the diversity of the population, thereby significantly improving the optimization ability of the PSO algorithm.
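The exact formulas are not given in the abstract; the sketch below shows one ranking-based way to give each particle its own inertia weight from its personal-best fitness, plus one plausible reading of the mutation threshold. Both are assumptions, not the published AIWPSO equations.

```python
import numpy as np

def adaptive_inertia(pbest_fitness, w_min=0.4, w_max=0.9):
    """Per-particle inertia weight from the particle's own best fitness (a common
    ranking-style variant, not necessarily the paper's formula): better particles
    get a smaller weight (exploitation), worse particles a larger one (exploration)."""
    f = np.asarray(pbest_fitness, dtype=float)
    rank = (f - f.min()) / (f.max() - f.min() + 1e-12)   # 0 = best, 1 = worst
    return w_min + (w_max - w_min) * rank

def mutate_crowded(positions, gbest, threshold, bounds, rng, sigma=0.1):
    """One plausible reading of the mutation threshold: particles closer to the
    global best than `threshold` are Gaussian-mutated so the swarm spreads out."""
    lo, hi = bounds
    close = np.linalg.norm(positions - gbest, axis=1) < threshold
    positions[close] += rng.normal(0.0, sigma * (hi - lo), (int(close.sum()), positions.shape[1]))
    return np.clip(positions, lo, hi)
```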


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 884
Author(s):  
Petr Stodola ◽  
Karel Michenka ◽  
Jan Nohel ◽  
Marian Rybanský

The dynamic traveling salesman problem (DTSP) falls under the category of combinatorial dynamic optimization problems. The DTSP is composed of a primary TSP sub-problem and a series of TSP iterations; each iteration is created by changing the previous iteration. In this article, a novel hybrid metaheuristic algorithm is proposed for the DTSP. This algorithm combines two metaheuristic principles, specifically ant colony optimization (ACO) and simulated annealing (SA). Moreover, the algorithm exploits knowledge about the dynamic changes by transferring the information gathered in previous iterations in the form of a pheromone matrix. The significance of the hybridization, as well as the use of knowledge about the dynamic environment, is examined and validated on benchmark instances including small, medium, and large DTSP problems. The results are compared with four other state-of-the-art metaheuristic approaches, with the conclusion that they are significantly outperformed by the proposed algorithm. Furthermore, the behavior of the algorithm is analyzed from various points of view (including, for example, convergence speed to the local optimum, progress of population diversity during optimization, and time dependence and computational complexity).
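The core of the hybridization can be illustrated by a simulated-annealing acceptance rule applied to tours produced by the ACO component. Where exactly SA is applied inside the hybrid, and the cooling schedule, are not specified in the abstract, so the snippet below is only a generic sketch.

```python
import math
import random

def sa_accept(candidate_len, current_len, temperature):
    """Simulated-annealing acceptance rule applied to a tour produced by the ACO
    component: always accept an improvement, accept a worse tour with Boltzmann
    probability exp(-(increase)/T)."""
    if candidate_len < current_len:
        return True
    return random.random() < math.exp((current_len - candidate_len) / max(temperature, 1e-12))

def cool(temperature, rate=0.98):
    # Assumed geometric cooling schedule.
    return temperature * rate
```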


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Lei Zhao ◽  
Zhicheng Jia ◽  
Lei Chen ◽  
Yanju Guo

The backtracking search algorithm (BSA) is a relatively new evolutionary algorithm with good optimization performance, like other population-based algorithms. However, BSA is deficient in convergence speed and convergence precision. To address this problem, this article proposes an improved BSA named COBSA. Inspired by the particle swarm optimization (PSO) algorithm, a population control factor is added to the variation equation to improve the convergence speed of BSA and give the algorithm a better ability to escape local optima. In addition, inspired by the differential evolution (DE) algorithm, a novel evolutionary equation is proposed, based on the idea that disadvantaged individuals search only around the best individual from the previous iteration, to enhance local search ability. Simulation experiments on a set of 18 benchmark functions show that, in general, COBSA displays an obvious advantage in convergence speed and convergence precision compared with BSA and the comparison algorithms.
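The abstract does not give the modified equations, so the following mutation step is only a guess at their spirit: the standard BSA term F * (oldP - P) is combined with an iteration-dependent control factor that pulls the population toward the best individual, PSO-style. The schedule and the attraction term are assumptions.

```python
import numpy as np

def cobsa_mutation(pop, old_pop, gbest, t, max_iter, rng):
    """Illustrative variation step in the spirit of COBSA (exact equation not given
    in the abstract): BSA's F * (old_pop - pop) plus an assumed attraction toward
    the best individual that grows with the iteration counter."""
    F = 3.0 * rng.standard_normal()          # BSA's original random scale factor
    k = t / max_iter                         # assumed control-factor schedule in [0, 1]
    mutant = pop + F * (old_pop - pop) + k * rng.random(pop.shape) * (gbest - pop)
    return mutant                            # boundary control would follow as in BSA
```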

