Hybrid Multi-Population and Adaptive Search Range Strategy With Particle Swarm Optimization for Multimodal Optimization

2021 ◽  
Vol 12 (4) ◽  
pp. 146-168
Author(s):  
Shiqi Wang ◽  
Zepeng Shen ◽  
Yao Peng

This paper proposes an algorithm named hybrid multi-population and adaptive search range strategy with particle swarm optimization (ARPSO) for solving multimodal optimization problems. The main idea is to divide the global search space into multiple sub-populations that search in parallel and independently. To increase diversity, each sub-population continuously adapts its search area according to whether local optimal solutions exist in its search space and to the position of the global optimal solution, and in each iteration the best solution found in that area is retained. To accelerate convergence, at both the global and local levels, the corresponding search space shrinks toward the global or local optimal solution once it is found. Experiments show that ARPSO has unique advantages on multi-dimensional problems, especially those with a single global optimal solution but multiple local optimal solutions.
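The two range operations the strategy rests on — partitioning the global range among sub-populations, and shrinking a range toward a solution once one is found — can be sketched as follows (function names and the shrink factor are illustrative, not taken from the paper):

```python
def split_range(lo, hi, k):
    """Partition [lo, hi] into k equal sub-ranges, one per sub-population."""
    step = (hi - lo) / k
    return [(lo + i * step, lo + (i + 1) * step) for i in range(k)]

def shrink_bounds(lo, hi, best, factor=0.5):
    """Contract [lo, hi] toward a found optimum `best`, keeping it inside."""
    return best - (best - lo) * factor, best + (hi - best) * factor
```

Repeated shrinking concentrates a sub-population's budget around its incumbent optimum, which is what accelerates convergence once a promising region is identified.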

PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260231
Author(s):  
Yufeng Meng ◽  
Jianhua He ◽  
Shichu Luo ◽  
Siqi Tao ◽  
Jiancheng Xu

Focusing on the tendency of particle swarm optimization (PSO) to fall into local optima when solving for Nash equilibria of games, as well as its slow convergence on higher-order game payoff matrices, this paper proposes an improved Predator-Prey particle swarm optimization (IPP-PSO) algorithm based on the Predator-Prey particle swarm optimization (PP-PSO) algorithm. First, convergence is improved by a better distribution of the initial predators and prey. Improving the inertia weight of both predator and prey mitigates premature convergence. Improving the particle velocity formula addresses both local optima and slowed convergence rates. Increasing the pathfinder weight raises population diversity and improves the algorithm's global search ability. Then, by solving the Nash equilibrium of both a zero-sum game and a non-zero-sum game, the convergence speed and global optimization performance of the original PSO, PP-PSO and IPP-PSO are compared. Simulation results demonstrate that the improved Predator-Prey algorithm is convergent and effective, and that IPP-PSO converges significantly faster than the other two algorithms. In the simulation, PSO does not converge to the global optimal solution; PP-PSO approximately converges after about 40 iterations, while IPP-PSO does so after about 20. Furthermore, IPP-PSO is superior to the other two algorithms in global optimization ability and accuracy.
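In predator-prey PSO variants, the prey velocity adds a repulsion ("fear") term to the two standard PSO attraction terms; a one-dimensional sketch under common formulations (the coefficients and the exponential fear profile are illustrative, not this paper's exact formula):

```python
import math
import random

def prey_velocity(v, x, pbest, gbest, predator,
                  w=0.7, c1=1.5, c2=1.5, c3=2.0, a=1.0, b=1.0):
    """Prey velocity in a predator-prey PSO: the usual inertia, cognitive and
    social terms, plus a repulsion term that decays exponentially with the
    distance to the predator and points away from it."""
    r1, r2, r3 = (random.random() for _ in range(3))
    d = x - predator
    fear = a * math.exp(-b * abs(d)) * (1.0 if d >= 0 else -1.0)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * fear)
```

The fear term is what pushes prey out of regions the predator occupies, which is the mechanism such variants use to escape local optima.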


Author(s):  
Loc Nguyen

Particle swarm optimization (PSO) is an effective algorithm for optimization problems in which the derivative of the target function does not exist or is difficult to determine. Because PSO has many parameters and variants, I propose a general framework of PSO, called GPSO, which aggregates the important parameters and generalizes the important variants so that researchers can customize PSO easily. Moreover, the two main properties of PSO are exploration and exploitation. Exploration aims to avoid premature convergence so as to reach the global optimal solution, whereas exploitation aims to make PSO converge as fast as possible. These two aspects are equally important, so GPSO also aims to balance exploration and exploitation. GPSO is expected to support users in tuning parameters both to avoid premature convergence and to achieve fast convergence.
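One common knob for trading exploration against exploitation — the kind of parameter a framework like GPSO would expose — is a linearly decreasing inertia weight (the 0.9 to 0.4 schedule below is the conventional default from the PSO literature, not a value taken from this paper):

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decay the inertia weight from w_start (favouring exploration)
    to w_end (favouring exploitation) over t_max iterations."""
    return w_start - (w_start - w_end) * t / t_max
```

A high early weight lets particles overshoot and sample widely; the low late weight damps velocities so the swarm settles into the best region found.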


Author(s):  
Alaa Tharwat ◽  
Tarek Gaber ◽  
Aboul Ella Hassanien ◽  
Basem E. Elnaghi

Optimization algorithms are necessary to solve many problems, such as parameter tuning. Particle Swarm Optimization (PSO) is one such algorithm; its aim is to search for the optimal solution in the search space. This paper presents the basic background needed to understand and implement the PSO algorithm. It starts with the basic definitions of PSO and how the particles are moved through the search space to find an optimal or near-optimal solution. A numerical example illustrates how the particles move in a convex optimization problem, and another shows how PSO can become trapped in a local minimum. Two experiments show how PSO searches for optimal parameters in one-dimensional and two-dimensional spaces to solve machine learning problems.
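The particle movement described above follows the canonical PSO velocity and position updates; a minimal, self-contained sketch (the parameter defaults are conventional values, not taken from the paper):

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over the box [bounds[0], bounds[1]]^dim with canonical PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))  # clamp to box
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a convex function such as the sphere, this converges quickly; on a multimodal function the same code can stall at a local minimum, which is exactly the failure mode the paper's second example illustrates.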


2021 ◽  
Vol 258 ◽  
pp. 06052
Author(s):  
Olga Purchina ◽  
Anna Poluyan ◽  
Dmitry Fugarov

The main aim of this research is the development of effective methods and algorithms, based on hybridizing the functioning principles of the immune system with evolutionary search, to find the global optimal solution of optimisation problems. Artificial immune algorithms are diverse, extremely reliable and implicitly parallel. The integration of modified evolutionary algorithms and immune algorithms is proposed for solving this problem. There is no exact method that efficiently solves ill-defined optimisation problems in polynomial time. However, by finding near-optimal solutions in reasonable time, the hybrid immune algorithm (HIA) can offer multiple solutions that represent a compromise between several goals. Quite few studies have focused on optimising more than one goal, and even fewer have explicitly considered the diversity of solutions, which plays a fundamental role in the good performance of any evolutionary computation method.
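The abstract does not detail the HIA itself; as background, the clonal-selection principle most artificial immune algorithms build on can be sketched generically (a CLONALG-style step over real-valued antibodies — not the authors' algorithm; names, rates and clone counts are illustrative):

```python
import random

def clonal_selection_step(pop, affinity, n_clones=5, base_mut=0.5):
    """One clonal-selection step: clone each antibody, hypermutate the clones
    with strength inversely related to normalised affinity, and keep the best
    of each family (including the original, so affinity never decreases)."""
    best_aff = max(affinity(x) for x in pop)
    new_pop = []
    for x in pop:
        rate = base_mut * (1.0 - affinity(x) / best_aff) + 0.01
        clones = [x + random.gauss(0.0, rate) for _ in range(n_clones)]
        new_pop.append(max(clones + [x], key=affinity))
    return new_pop
```

Because weak antibodies mutate strongly and strong ones barely at all, the population stays diverse while its best members are refined — the implicit parallelism the abstract mentions.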


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 1004
Author(s):  
Mojtaba Borza ◽  
Azmin Sham Rambely

Optimizing the sum of linear fractional functions over a set of linear inequalities (S-LFP) has been considered by many researchers because a number of real-world problems are modelled mathematically as S-LFP problems. Solving an S-LFP is not easy in practice, since the problem may have several local optimal solutions, which makes its structure complex. To our knowledge, existing methods for S-LFP are iterative algorithms based on branch and bound, which require high computational cost and time. In this paper, we present a non-iterative, straightforward method with lower computational expense. In the method, a new S-LFP is constructed from the membership functions of the objectives multiplied by suitable weights. This new problem is then converted into a linear programming problem (LPP) using variable transformations. It is proven that the optimal solution of the LPP is the global optimal solution of the S-LFP. Numerical examples are given to illustrate the method.
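The variable transformation used is in the spirit of the classical Charnes-Cooper transformation, which for a single linear ratio converts the fractional program

```latex
\max \; \frac{c^{\top}x + \alpha}{d^{\top}x + \beta}
\quad \text{s.t.} \quad Ax \le b, \qquad d^{\top}x + \beta > 0
```

via the substitution $t = 1/(d^{\top}x + \beta)$, $y = t\,x$ into the linear program

```latex
\max \; c^{\top}y + \alpha t
\quad \text{s.t.} \quad Ay - bt \le 0, \qquad d^{\top}y + \beta t = 1, \qquad t \ge 0 .
```

(The paper handles a weighted sum of such ratios through membership functions, so its transformation differs in detail; the single-ratio case above only shows the underlying linearization idea.)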


Author(s):  
Bhupinder Singh ◽  
Priyanka Anand

Butterfly optimization algorithm (BOA) is an interesting bio-inspired algorithm whose nature-inspired simulation model is based on the food-foraging behavior of butterflies. A common problem with BOA is that in the early stages of the simulation process it may converge to sub-optimal solutions due to loss of diversity in its population. The sensory modality is the critical parameter responsible for searching for new solutions in nearby regions. In this work, an adaptive butterfly optimization algorithm (ABOA) is proposed in which the sensory modality of BOA is changed during the optimization process in order to achieve better results than traditional BOA. The proposed ABOA is tested against seventeen standard benchmark functions, and its performance is compared against existing standard optimization algorithms, namely the artificial bee colony, the firefly algorithm and the standard butterfly optimization algorithm. The results indicate that the proposed adaptive BOA, with its improved parameter-calculation mechanism, produces superior results in terms of convergence and efficient attainment of the global optimal solution.
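For context, in standard BOA each butterfly emits a fragrance f = c · I^a, where c is the sensory modality this paper adapts and I is the stimulus intensity (tied to fitness). A sketch using the defaults common in the BOA literature (the paper's exact adaptation rule may differ):

```python
def fragrance(intensity, c=0.01, a=0.1):
    """BOA fragrance: f = c * I^a, with c the sensory modality and a the
    power exponent controlling how strongly fitness differences are felt."""
    return c * intensity ** a

def update_sensory_modality(c, t_max):
    """Adaptive variants grow c slightly each iteration; this is the update
    commonly used in the BOA literature."""
    return c + 0.025 / (c * t_max)
```

Growing c over time strengthens the fragrance signal, gradually shifting butterflies from independent local moves toward following better neighbours — the diversity-preserving effect the abstract attributes to adapting the sensory modality.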


Author(s):  
José André Brito ◽  
Leonardo de Lima ◽  
Pedro Henrique González ◽  
Breno Oliveira ◽  
Nelson Maculan

The problem of finding an optimal sample stratification has been extensively studied in the literature. In this paper, we propose a heuristic optimization method for the univariate optimum stratification problem that minimizes the sample size for a given precision level. The method is based on the variable neighborhood search metaheuristic, combined with an exact method. Numerical experiments were performed on a dataset of 24 instances, and the results of the proposed algorithm were compared with two very well-known methods from the literature; our algorithm outperformed them in 94% of the considered cases. In addition, we developed an enumeration algorithm to find the global optimal solution for some populations and scenarios, which enabled us to validate our metaheuristic method. Furthermore, we find that our algorithm obtained the global optimal solutions in the vast majority of cases.
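The "minimum sample size for a given precision" objective is typically evaluated, for each candidate stratification, with the Neyman-allocation sample-size formula; a sketch of that standard textbook criterion (not necessarily the authors' exact one):

```python
def neyman_sample_size(strata, target_var):
    """Minimum total sample size under Neyman allocation that achieves the
    target variance of the estimated mean.
    strata: list of (N_h, S_h) pairs -- stratum size and stratum std dev."""
    N = sum(Nh for Nh, _ in strata)
    num = sum((Nh / N) * Sh for Nh, Sh in strata) ** 2        # (sum W_h S_h)^2
    den = target_var + sum((Nh / N) * Sh ** 2 for Nh, Sh in strata) / N
    return num / den
```

A search over stratum boundaries (by VNS, enumeration, or anything else) then simply minimizes this quantity; moving a boundary changes the stratum standard deviations S_h and hence the required n.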


2020 ◽  
Vol 8 (5) ◽  
pp. 3686-3692

When the supply of items needs urgent or earliest delivery to the destinations, Time Minimization Transportation Problems (TMTPs) are indispensable. Traditionally these problems have been solved using exact techniques; however, (meta)heuristic techniques have provided a great breakthrough in search-space exploration. Particle Swarm Optimization (PSO) is one such metaheuristic that has been applied to a wide variety of continuous optimization problems; for discrete problems, either the mathematical model of the problem or the solution procedure has had to be changed. In this paper, PSO is modified to incorporate the discrete nature of the variables and the non-linearity of the objective function. The proposed PSO is tested on problems available in the literature and obtains the optimal solutions efficiently. Its exhaustive search capability is established by obtaining alternate optimal solutions and combinations of allocated cells exceeding (m + n − 1) in number. The proposed solution technique therefore provides an effective alternative to analytical techniques for decision making in logistic systems.
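The non-linear objective referred to above is the bottleneck time of a shipment plan: not a cost sum, but the largest transit time among cells that actually carry flow. A minimal evaluation sketch (names are illustrative):

```python
def shipment_time(times, alloc):
    """TMTP objective: the longest transit time among cells with a positive
    allocation. times[i][j] is the time from source i to destination j;
    alloc[i][j] is the quantity shipped on that route."""
    return max(times[i][j]
               for i in range(len(times))
               for j in range(len(times[0]))
               if alloc[i][j] > 0)
```

Because only the maximum occupied time matters, many allocations share the same objective value, which is why the modified PSO can surface multiple alternate optima.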


2010 ◽  
Vol 139-141 ◽  
pp. 1779-1784
Author(s):  
Quan Wang ◽  
Jin Chao Liu ◽  
Pan Wang ◽  
Juan Ying Qin

Many researchers have indicated that the standard genetic algorithm suffers from a dilemma: premature convergence or non-convergence. Most have focused on finding better search strategies and designing various new heuristic methods, which has seemed effective. From another point of view, we can transform the search space with a same-state mapping; a special genetic algorithm applied to the new search space can then achieve better performance. Thus, a new genetic algorithm based on optimal solution orientation (OSOGA) is presented in this paper. The algorithm is divided into an "optimal solution orientation" phase and a phase of highly accurate search in the local domain of the global optimal solution. Theoretical analysis and experiments indicate that OSOGA can find the "optimal" sub-domain effectively and, cooperating with a local search algorithm, can achieve high-precision solutions with limited computing resources.


Author(s):  
Anuj Chandila ◽  
Shailesh Tiwari ◽  
K. K. Mishra ◽  
Akash Punhani

This article describes how optimization is a process of finding the best solution among all available solutions to a problem. Many randomized algorithms have been designed to identify optimal solutions in optimization problems; among these, evolutionary programming, evolutionary strategies, genetic algorithms, particle swarm optimization and genetic programming are widely accepted. Although a number of randomized algorithms are available in the literature for solving optimization problems, their design objectives are the same. Each algorithm is designed to meet certain goals, such as minimizing the total number of fitness evaluations needed to capture nearly optimal solutions, capturing diverse optimal solutions in multimodal problems when needed, and avoiding local optima in multimodal problems. This article discusses a novel optimization algorithm named the Environmental Adaption Method (EAM) for solving optimization problems. EAM is designed to reduce the overall processing time for retrieving the optimal solution of the problem, to improve the quality of solutions, and particularly to avoid being trapped in local optima. The results of the proposed algorithm are compared with the latest versions of existing algorithms, namely particle swarm optimization (PSO-TVAC) and differential evolution (SADE), on benchmark functions, and the proposed algorithm proves its effectiveness over the existing algorithms in all the cases considered.

