Comparison of Different Bat Initialization Techniques for Global Optimization Problems

2021 ◽  
Vol 12 (1) ◽  
pp. 157-184
Author(s):  
Waqas Haider Bangyal ◽  
Jamil Ahmad ◽  
Hafiz Tayyab Rauf

Bat algorithm (BA) is a population-based stochastic search technique that has been widely used to solve diverse kinds of optimization problems. Population initialization is an ongoing research problem in evolutionary computing algorithms. Appropriate population initialization helps the algorithm explore the swarm search space effectively. BA suffers from premature convergence when searching for the true global optimum. Low-discrepancy sequences are somewhat less random than pseudo-random numbers; however, they are more effective for computational approaches. In this work, new population initialization approaches based on the Halton (BA-HA), Sobol (BA-SO), and Torus (BA-TO) sequences are proposed, which help the bats avoid premature convergence. The proposed approaches are examined on standard benchmark functions, and simulation results are compared with those of the standard BA initialized with a uniform distribution. The results show that a substantial enhancement in the performance of the standard BA can be attained by replacing pseudo-random number sequences with low-discrepancy sequences.
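
The BA-HA and BA-SO variants described above swap the uniform initializer for low-discrepancy points. A minimal sketch of such an initialization, assuming SciPy's `scipy.stats.qmc` module and a generic bound box (the paper's exact BA parameters and its Torus generator are not reproduced here):

```python
import numpy as np
from scipy.stats import qmc

def init_population(n_bats, dim, lower, upper, method="halton", seed=0):
    """Generate an initial bat population from a low-discrepancy sequence.

    method: "halton", "sobol", or "uniform" (the standard BA baseline).
    """
    if method == "halton":
        sample = qmc.Halton(d=dim, seed=seed).random(n_bats)
    elif method == "sobol":
        sample = qmc.Sobol(d=dim, scramble=True, seed=seed).random(n_bats)
    else:  # uniform pseudo-random baseline
        sample = np.random.default_rng(seed).random((n_bats, dim))
    # Map the unit-hypercube points onto the problem's search box.
    return qmc.scale(sample, lower, upper)

# Example: 32 bats in a 10-D search space on [-100, 100]^10.
pop = init_population(32, 10, [-100.0] * 10, [100.0] * 10, method="sobol")
```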

2021 ◽  
Vol 11 (3) ◽  
pp. 1286 ◽  
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ali Dehghani ◽  
Om P. Malik ◽  
Ruben Morales-Menendez ◽  
...  

Nature-inspired, population-based optimization algorithms are among the most powerful tools for solving optimization problems. These algorithms find a solution to a problem by stochastically searching the search space. Their central design ideas are derived from various natural phenomena, the behavior and living conditions of living organisms, laws of physics, etc. A new population-based optimization algorithm called the Binary Spring Search Algorithm (BSSA) is introduced to solve optimization problems. BSSA is based on a simulation of Hooke's law for a classical system of weights and springs. In this proposal, the population comprises weights that are connected by unique springs. The mathematical model of the proposed algorithm is presented and used to obtain solutions to optimization problems. The results were thoroughly validated on different unimodal and multimodal functions; additionally, the BSSA was compared with high-performance algorithms: the binary grasshopper optimization algorithm, binary dragonfly algorithm, binary bat algorithm, binary gravitational search algorithm, binary particle swarm optimization, and binary genetic algorithm. The results show the superiority of the BSSA, and the results of the Friedman test corroborate that the BSSA is more competitive.
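
To illustrate the Hooke's-law idea only, here is a rough, continuous-space sketch in which each weight is pulled toward better solutions by springs whose stiffness is a hypothetical function of fitness; the actual BSSA operates on binary vectors, and its exact force, mass, and spring-constant definitions are not reproduced here:

```python
import numpy as np

def spring_step(positions, fitness, k_max=1.0, dt=0.1):
    """One illustrative Hooke's-law step (F = k * displacement).
    The stiffness rule used below is an assumption for illustration,
    not the BSSA's published formulation.
    """
    n, _ = positions.shape
    # Normalise fitness so that better (lower) values give a stronger pull.
    pull = (fitness.max() - fitness) / (np.ptp(fitness) + 1e-12)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i != j:
                stiffness = k_max * pull[j]                      # spring constant toward j
                forces[i] += stiffness * (positions[j] - positions[i])  # Hooke's law
    return positions + dt * forces                               # move along the net force
```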


2015 ◽  
Vol 24 (05) ◽  
pp. 1550017 ◽  
Author(s):  
Aderemi Oluyinka Adewumi ◽  
Akugbe Martins Arasomwan

This paper presents an improved particle swarm optimization (PSO) technique for global optimization. Many variants of the technique have been proposed in the literature. However, many of these variants share two major features, namely static search-space and velocity limits, which restrict their flexibility in obtaining optimal solutions for many optimization problems. Furthermore, the problem of premature convergence persists in many variants despite the introduction of additional parameters such as inertia weight and extra computational effort. This paper proposes an improved PSO algorithm without inertia weight. The proposed algorithm dynamically adjusts the search space and velocity limits for the swarm in each iteration by picking the highest and lowest values among all the dimensions of the particles, computing their absolute values, and then using the larger of the two to define a new search range and velocity limits for the next iteration. The efficiency and performance of the proposed algorithm were demonstrated on popular benchmark global optimization problems with low and high dimensions. Results obtained demonstrate better convergence speed, precision, stability, robustness, and global search ability when compared with six recent variants of the original algorithm.
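
A minimal sketch of the dynamic limit rule described above, assuming a NumPy array of particle positions; the fraction of the range used for the velocity cap is an assumption, since the abstract does not state it:

```python
import numpy as np

def update_limits(positions, v_frac=1.0):
    """Recompute symmetric search-space and velocity limits from the swarm:
    take the largest and smallest values over all particles and dimensions,
    and use the larger absolute value to define the new symmetric range.
    v_frac (velocity cap as a fraction of the range) is illustrative only.
    """
    hi, lo = positions.max(), positions.min()
    bound = max(abs(hi), abs(lo))
    search_range = (-bound, bound)       # new search range for the next iteration
    v_max = v_frac * bound               # new velocity limit
    return search_range, v_max
```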


2018 ◽  
Vol 8 (10) ◽  
pp. 1945 ◽  
Author(s):  
Tarik Eltaeib ◽  
Ausif Mahmood

Differential evolution (DE) has been extensively used in optimization studies since its development in 1995 because of its reputation as an effective global optimizer. DE is a population-based metaheuristic technique that evolves numerical vectors to solve optimization problems. DE strategies have a significant impact on DE performance and play a vital role in achieving stochastic global optimization. However, DE is highly dependent on its control parameters, and in practice the fine-tuning of these parameters is not always easy. Here, we discuss the improvements and developments that have been made to DE algorithms. In particular, we present a state-of-the-art survey of the literature on DE and its recent advances, such as the development of adaptive, self-adaptive, and hybrid techniques.
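
For reference, a compact sketch of one generation of the classic DE/rand/1/bin strategy, the baseline that such surveys build on; F and CR are the control parameters whose tuning is highlighted as difficult:

```python
import numpy as np

def de_rand_1_bin(pop, fitness, func, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin for a minimization problem."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])      # mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                   # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])         # binomial crossover
        f_trial = func(trial)
        if f_trial <= fitness[i]:                       # greedy selection
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```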


2014 ◽  
Vol 5 (4) ◽  
pp. 1-25 ◽  
Author(s):  
Shahryar Rahnamayan ◽  
Jude Jesuthasan ◽  
Farid Bourennani ◽  
Greg F. Naterer ◽  
Hojjat Salehinejad

The capabilities of evolutionary algorithms (EAs) in solving nonlinear and non-convex optimization problems are significant. Differential evolution (DE) is an effective population-based EA that has emerged as very competitive. Since its inception in 1995, multiple variants of DE have been proposed with higher performance. Among these DE variants, opposition-based differential evolution (ODE) established a novel concept in which individuals must compete with their opposites in order to enter the next generation. The generation of opposite points is based on the current extreme points (i.e., maximum and minimum) in the search space. This paper develops a new scheme that utilizes the centroid point of the population to calculate opposite individuals, thereby modifying the classical scheme of an opposite point. Incorporating this new scheme into DE leads to an enhanced ODE identified as centroid opposition-based differential evolution (CODE). The accuracy of the CODE algorithm is comprehensively evaluated on well-known complex benchmark functions and compared with the performance of conventional DE, ODE, and other state-of-the-art algorithms. The results for CODE are found to be promising.
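
A short sketch contrasting the two opposition schemes described above. The classical ODE reflection through the current per-dimension extremes is standard; the centroid-based form shown here, reflection through the population centroid M as 2M - x, is an assumed formulation for illustration:

```python
import numpy as np

def classical_opposites(pop):
    """Classical ODE scheme: reflect each individual through the current
    per-dimension extreme points (min and max over the population)."""
    a, b = pop.min(axis=0), pop.max(axis=0)
    return a + b - pop

def centroid_opposites(pop):
    """Centroid-based scheme, sketched as reflection through the population
    centroid M (assumed form: x_opp = 2*M - x)."""
    m = pop.mean(axis=0)
    return 2.0 * m - pop
```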


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 1004
Author(s):  
Marco Antonio Florenzano Mollinetti ◽  
Bernardo Bentes Gatto ◽  
Mário Tasso Ribeiro Serra Neto ◽  
Takahito Kuno

Artificial Bee Colony (ABC) is a swarm intelligence optimization algorithm well known for its versatility. Its selection of decision variables to update is purely stochastic, which raises several issues for the local search capability of the ABC. To address these issues, a self-adaptive decision variable selection mechanism is proposed with the goal of balancing the degree of exploration and exploitation throughout the execution of the algorithm. This mechanism, named Adaptive Decision Variable Matrix (A-DVM), represents both stochastic and deterministic parameter selection in a binary matrix and regulates the extent to which each selection is employed based on an estimate of the sparsity of the solutions in the search space. The influence of the proposed approach on the performance and robustness of the original algorithm is validated by experiments on 15 highly multimodal benchmark optimization problems. Numerical comparisons on those problems are made against the ABC, its variants, and prominent population-based algorithms (e.g., Particle Swarm Optimization and Differential Evolution). Results show an improvement in the performance of the algorithms with the A-DVM on the most challenging instances.
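
A sketch of the standard ABC neighbour move, restricted to the dimensions flagged in a boolean mask (conceptually one row of a binary decision-variable matrix). In plain ABC the mask contains a single randomly chosen dimension; how A-DVM actually builds and regulates the mask from the sparsity estimate is not reproduced here:

```python
import numpy as np

def abc_candidate(pop, i, mask, rng=None):
    """Standard ABC neighbour move v_ij = x_ij + phi * (x_ij - x_kj),
    applied only to the dimensions where `mask` (boolean vector) is True.
    """
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    k = rng.choice([j for j in range(n) if j != i])     # random neighbour food source
    phi = rng.uniform(-1.0, 1.0, d)
    candidate = pop[i].copy()
    candidate[mask] = pop[i, mask] + phi[mask] * (pop[i, mask] - pop[k, mask])
    return candidate
```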


2018 ◽  
Vol 51 (2) ◽  
pp. 265-285 ◽  
Author(s):  
Abdulbaset Saad ◽  
Zuomin Dong ◽  
Brad Buckham ◽  
Curran Crawford ◽  
Adel Younis ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Guoming Du ◽  
Yangbo Chen ◽  
Wei Sun

Optimal spatial search, such as the location-allocation problems that arise in multidimensional geographic space, involves complex nonlinear optimization problems that are generally difficult to solve with traditional methods. The bat algorithm (BA) is an effective method for solving optimization problems. However, the standard BA is easily trapped in local optima; the main cause of this premature convergence is the loss of diversity in the population. The niche technique is an effective method to maintain population diversity, enhance the exploration of new search domains, and avoid premature convergence. In this paper, a geographic information system- (GIS-) based niche hybrid bat algorithm (NHBA) is proposed for solving the optimal spatial search problem. The NHBA is able to avoid premature convergence and obtain the global optimal values, while the GIS technique provides robust support for processing a substantial amount of geographical data. A case in Fangcun District, Guangzhou City, China, is used to test the NHBA. The comparative experiments illustrate that the BA, GA, FA, PSO, and NHBA algorithms outperform the brute-force algorithm in terms of computational efficiency, and the optimal solutions are more easily obtained with NHBA than with BA, GA, FA, and PSO. Moreover, the precision of NHBA is higher and its convergence is faster than those of the other algorithms under the same conditions.
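
For context, a compact sketch of the core update of the standard BA that NHBA builds on: f_i = f_min + (f_max - f_min) * beta, v_i <- v_i + (x_i - x*) * f_i, x_i <- x_i + v_i. The loudness/pulse-rate handling and the niche and GIS components of NHBA are omitted:

```python
import numpy as np

def bat_step(pos, vel, best, f_min=0.0, f_max=2.0, rng=None):
    """Core frequency/velocity/position update of the standard bat algorithm.
    `best` is the current global-best position x*.
    """
    rng = rng or np.random.default_rng()
    n = pos.shape[0]
    beta = rng.random((n, 1))
    freq = f_min + (f_max - f_min) * beta   # pulse frequency per bat
    vel = vel + (pos - best) * freq         # velocity update toward x*
    return pos + vel, vel                   # position update
```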


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, owing to a lack of sufficient momentum for particles to perform exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits for computing the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors that have previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
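
A minimal sketch of one LDIW-PSO iteration: the inertia weight decreases linearly from w_max to w_min over the run, and velocities are clamped to a fraction of the search-space width. The fraction v_frac is illustrative; determining that percentage experimentally is precisely what the paper does:

```python
import numpy as np

def ldiw_pso_step(x, v, pbest, gbest, t, T, x_min, x_max,
                  w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, v_frac=0.1, rng=None):
    """One iteration of PSO with a linearly decreasing inertia weight."""
    rng = rng or np.random.default_rng()
    w = w_max - (w_max - w_min) * t / T                  # linear decreasing inertia weight
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_max = v_frac * (x_max - x_min)                     # velocity limit from search range
    v = np.clip(v, -v_max, v_max)
    return np.clip(x + v, x_min, x_max), v
```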


2014 ◽  
Vol 1065-1069 ◽  
pp. 3438-3441
Author(s):  
Guo Jun Li

The harmony search (HS) algorithm is a population-based algorithm that imitates the process of musical improvisation. It has its own strengths and shortcomings; one shortcoming is that it is easily trapped in local optima. In this paper, a hybrid harmony search algorithm (HHS) is proposed based on concepts from swarm intelligence. HHS employs a local search method to replace the pitch-adjusting operation and uses an elitist preservation strategy to modify the selection operation. Experimental results demonstrate that the proposed method performs much better than HS and its improved variants (IHS, GHS, and NGHS).
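
A sketch of the standard HS improvisation step that HHS modifies: memory consideration with rate HMCR, pitch adjustment with rate PAR and bandwidth bw, otherwise a random value. HHS would replace the pitch-adjustment branch with its local search, which is not reproduced here; `lower` and `upper` are assumed to be per-dimension bound arrays:

```python
import numpy as np

def improvise(hm, lower, upper, hmcr=0.9, par=0.3, bw=0.01, rng=None):
    """Improvise one new harmony from the harmony memory `hm` (shape HMS x d)."""
    rng = rng or np.random.default_rng()
    hms, d = hm.shape
    new = np.empty(d)
    for j in range(d):
        if rng.random() < hmcr:
            new[j] = hm[rng.integers(hms), j]                              # memory consideration
            if rng.random() < par:                                         # pitch adjustment
                new[j] += bw * rng.uniform(-1.0, 1.0) * (upper[j] - lower[j])
        else:
            new[j] = rng.uniform(lower[j], upper[j])                       # random selection
    return np.clip(new, lower, upper)
```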


2021 ◽  
Vol 11 (16) ◽  
pp. 7591
Author(s):  
Waqas Haider Bangyal ◽  
Kashif Nisar ◽  
Ag. Asri Bin Ag. Ibrahim ◽  
Muhammad Reazul Haque ◽  
Joel J. P. C. Rodrigues ◽  
...  

Metaheuristic algorithms have been widely used to solve diverse kinds of optimization problems, and for such problems population initialization plays a significant role: it can influence how quickly the algorithm converges to an efficient optimal solution. Recognizing the importance of diversity, several researchers have worked on improving the performance of metaheuristic algorithms, and population initialization is a vital factor in metaheuristics such as PSO and DE. Instead of a uniform random distribution for population initialization, quasirandom sequences are more useful for improving diversity and convergence. This study presents three low-discrepancy sequences, the WELL sequence, the Knuth sequence, and the Torus sequence, for initializing the population in the search space. This paper also gives a comprehensive survey of PSO and DE initialization approaches based on the family of quasirandom sequences, such as the Sobol sequence, the Halton sequence, and the uniform random distribution. The proposed methods for PSO (TO-PSO, KN-PSO, and WE-PSO) and DE (DE-TO, DE-WE, and DE-KN) have been examined on well-known benchmark test problems and on the training of artificial neural networks. The findings show the promising performance of the family of low-discrepancy sequences over uniform random numbers. For a fair comparison, the low-discrepancy-sequence approaches for PSO and DE are compared with other families of low-discrepancy sequences and with uniform random numbers, and they yield superior results. The experimental results show that low-discrepancy-sequence-based initialization performs considerably better than uniform random initialization. Moreover, the outcome of this work offers insight into how the proposed techniques impact convergence and diversity. It is anticipated that this comparative simulation survey of low-discrepancy sequences will be helpful to researchers studying metaheuristic algorithms in detail.
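
To illustrate one of the initializers discussed, here is a sketch of a Torus (Kronecker) sequence built from fractional parts of n * sqrt(p_i), with p_i the i-th prime; this is one common construction, and the exact generator used by the authors may differ:

```python
import numpy as np

def first_primes(d):
    """Return the first d primes (simple trial division; fine for small d)."""
    primes, n = [], 2
    while len(primes) < d:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return np.array(primes, dtype=float)

def torus_init(n_points, dim, lower, upper):
    """Population initialization from a Torus sequence: u_n = frac(n * sqrt(p))."""
    n = np.arange(1, n_points + 1).reshape(-1, 1)
    u = np.mod(n * np.sqrt(first_primes(dim)), 1.0)      # low-discrepancy points in [0, 1)^dim
    return np.asarray(lower) + u * (np.asarray(upper) - np.asarray(lower))

# Example: 40 particles in a 30-D space on [-5.12, 5.12]^30 for PSO or DE.
swarm = torus_init(40, 30, np.full(30, -5.12), np.full(30, 5.12))
```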

