DISTURBANCE CHAOTIC ANT SWARM

2011 ◽  
Vol 21 (09) ◽  
pp. 2597-2622 ◽  
Author(s):  
Fangzhen Ge ◽  
Zhen Wei ◽  
Yang Lu ◽  
Lixiang Li ◽  
Yixian Yang

Chaotic Ant Swarm (CAS) is an optimization algorithm based on swarm intelligence theory, which has been applied to find the global optimum solution in a search space. However, it often loses its effectiveness and advantages when applied to large and complex problems, e.g. those with high dimensions. To resolve the problems of high computational complexity and low solution accuracy in CAS, we propose a Disturbance Chaotic Ant Swarm (DCAS) algorithm that significantly improves the performance of the original algorithm. The aim of this paper is achieved through three strategies: modifying the method of updating an ant's best position, modifying the neighbor selection method, and establishing a self-adaptive disturbance strategy. The global convergence of the DCAS algorithm is proved in this paper. Extensive computational simulations and comparisons are carried out to validate the performance of DCAS on two sets of benchmark functions with up to 1000 dimensions. The results show clearly that DCAS substantially enhances the performance of the CAS paradigm in terms of computational complexity, global optimality, solution accuracy and algorithm reliability for complex high-dimensional optimization problems.
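As a rough illustration of the third strategy, the sketch below applies a shrinking random disturbance around an ant's best position. The linear decay schedule, the 0.1 scale factor, and the function names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def disturb_best(best_pos, lower, upper, t, t_max, rng=np.random.default_rng()):
    """Self-adaptive disturbance sketch: perturb an ant's best position
    within a radius that shrinks as the search progresses (assumption:
    linear decay; the paper's adaptive rule may differ)."""
    radius = 0.1 * (1.0 - t / t_max) * (upper - lower)
    candidate = best_pos + rng.uniform(-radius, radius, size=best_pos.shape)
    return np.clip(candidate, lower, upper)  # keep the ant inside the bounds
```

Early in the run the disturbance is wide enough to escape local optima; near t_max it degenerates into a fine-tuning step.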

2021 ◽  
Vol 2021 ◽  
pp. 1-25
Author(s):  
Yuxian Duan ◽  
Changyun Liu ◽  
Song Li ◽  
Xiangke Guo ◽  
Chunlin Yang

Elephant herding optimization (EHO) has received widespread attention due to its few control parameters and simple operation but still suffers from slow convergence and low solution accuracy. In this paper, an improved algorithm to resolve these shortcomings, called Gaussian perturbation specular reflection learning and golden-sine-mechanism-based EHO (SRGS-EHO), is proposed. First, specular reflection learning is introduced into the algorithm to enhance the diversity and ergodicity of the initial population and improve the convergence speed. Meanwhile, Gaussian perturbation is used to further increase the diversity of the initial population. Second, the golden sine mechanism is introduced to improve the way the position of the patriarch in each clan is updated, which makes the best-positioned individual in each generation move toward the global optimum and enhances the global exploration and local exploitation abilities of the algorithm. To evaluate the effectiveness of the proposed algorithm, tests are performed on 23 benchmark functions. In addition, Wilcoxon rank-sum tests and Friedman tests at the 5% significance level are invoked to compare it with eight other metaheuristic algorithms, and a sensitivity analysis of the parameters and experiments on the different modifications are set up. To further validate the effectiveness of the enhanced algorithm, SRGS-EHO is also applied to solve two classic engineering problems with constrained search spaces (the pressure-vessel design problem and the tension/compression spring design problem). The results show that the algorithm can be applied to solve problems encountered in real production.
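The golden sine mechanism can be sketched as follows; this follows the standard Gold-SA update, so the coefficient handling in SRGS-EHO proper may differ in detail.

```python
import numpy as np

TAU = (np.sqrt(5) - 1) / 2  # golden ratio coefficient

def golden_sine_update(patriarch, global_best, rng=np.random.default_rng()):
    """Move a clan's patriarch toward the global best with a golden-sine step.
    Sketch based on the generic Gold-SA formulation; in the full algorithm
    x1 and x2 are narrowed by golden-section search as iterations proceed."""
    r1 = rng.uniform(0.0, 2.0 * np.pi)  # controls the step length
    r2 = rng.uniform(0.0, np.pi)        # controls the step direction
    x1 = -np.pi + 2.0 * np.pi * (1.0 - TAU)
    x2 = -np.pi + 2.0 * np.pi * TAU
    return (patriarch * np.abs(np.sin(r1))
            - r2 * np.sin(r1) * np.abs(x1 * global_best - x2 * patriarch))
```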


2015 ◽  
Vol 24 (05) ◽  
pp. 1550017 ◽  
Author(s):  
Aderemi Oluyinka Adewumi ◽  
Akugbe Martins Arasomwan

This paper presents an improved particle swarm optimization (PSO) technique for global optimization. Many variants of the technique have been proposed in the literature. However, two features characterize many of these variants, namely static search-space and velocity limits, which restrict their flexibility in obtaining optimal solutions for many optimization problems. Furthermore, the problem of premature convergence persists in many variants despite the introduction of additional parameters such as inertia weight and extra computation ability. This paper proposes an improved PSO algorithm without inertia weight. The proposed algorithm dynamically adjusts the search space and velocity limits for the swarm in each iteration by picking the highest and lowest values among all the dimensions of the particles, calculating their absolute values, and then using the larger of the two values to define a new search range and velocity limits for the next iteration. The efficiency and performance of the proposed algorithm were demonstrated using popular benchmark global optimization problems with low and high dimensions. The results demonstrate better convergence speed and precision, stability, robustness and global search ability when compared with six recent variants of the original algorithm.
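The range-update rule described above is simple enough to state in a few lines; the sketch below assumes positions are stored as a NumPy array of shape (particles, dimensions).

```python
import numpy as np

def update_limits(positions):
    """Recompute symmetric search-space and velocity limits for the next
    iteration, as described above: take the highest and lowest values over
    all particles and dimensions, and let the larger absolute value define
    the new symmetric range."""
    r = max(abs(positions.max()), abs(positions.min()))
    return -r, r  # also reused as the velocity limits for the next iteration
```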


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Octavio Camarena ◽  
Erik Cuevas ◽  
Marco Pérez-Cisneros ◽  
Fernando Fausto ◽  
Adrián González ◽  
...  

The Locust Search (LS) algorithm is a swarm-based optimization method inspired by the natural behavior of the desert locust. LS includes two distinctive nature-inspired search mechanisms, namely the solitary-phase and social-phase operators. These search schemes allow LS to overcome some of the difficulties that commonly affect other similar methods, such as premature convergence and lack of diversity in solutions. Recently, computer vision experiments in insect tracking have led to the development of more accurate locust motion models than those produced by simple behavioral observation. The most distinctive characteristic of these new models is the use of probabilities to emulate the locust decision process. In this paper, a modification to the original LS algorithm, referred to as LS-II, is proposed to better handle global optimization problems. In LS-II, the locust motion model of the original algorithm is modified to incorporate the main characteristics of the new biological formulations. As a result, LS-II improves on the original capacities for exploration and exploitation of the search space. To test its performance, the proposed LS-II method is compared against several state-of-the-art evolutionary methods on a set of benchmark functions and engineering problems. Experimental results demonstrate the superior performance of the proposed approach in terms of solution quality and robustness.
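A minimal sketch of a probability-driven phase choice is given below; the probability value and both operator bodies are simplified placeholders, since LS-II derives its decision probabilities from the biological motion models discussed above.

```python
import numpy as np

def locust_step(locust, swarm_best, p_solitary=0.5, rng=np.random.default_rng()):
    """One illustrative LS-II-style move: probabilistically choose between
    the solitary-phase and social-phase operators (both simplified here)."""
    if rng.random() < p_solitary:
        # solitary phase: independent random walk around the current position
        return locust + rng.normal(0.0, 0.1, size=locust.shape)
    # social phase: attraction toward the best-known position in the swarm
    return locust + rng.uniform() * (swarm_best - locust)
```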


2021 ◽  
Vol 27 (11) ◽  
pp. 563-574
Author(s):  
V. V. Kureychik ◽  
S. I. Rodzin

Computational models of bio heuristics based on physical and cognitive processes are presented. Such characteristics of bio heuristics (including evolutionary and swarm bio heuristics) as the rate of convergence, computational complexity, the required amount of memory, the configuration of the algorithm parameters, and the difficulty of software implementation are compared. The balance between the convergence rate of bio heuristics and the diversification of the search space for solutions to optimization problems is estimated. Experimental results are presented for the problem of placing Peco graphs in a lattice with the minimum total length of the graph edges.


2016 ◽  
pp. 450-475
Author(s):  
Dipti Singh ◽  
Kusum Deep

Due to their wide applicability and easy implementation, genetic algorithms (GAs) are preferred over other techniques for solving many optimization problems. When a local search (LS) is included in a genetic algorithm, the result is known as a memetic algorithm (MA). In this chapter, a new variant of a single-meme memetic algorithm is proposed to improve the efficiency of GA. Though GAs are efficient at finding the global optimum of nonlinear optimization problems, they usually converge slowly and sometimes suffer from premature convergence. On the other hand, LS algorithms are fast but are poor global searchers. To exploit the good qualities of both techniques, they are combined in a way that reaps the maximum benefit of both approaches: the population of individuals evolves using the GA, and LS is then applied to obtain the optimal solution. To validate our claims, the method is tested on five benchmark problems of dimensions 10, 30 and 50, and a comparison between GA and MA is made.
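A minimal single-meme loop of the kind described can be sketched as follows; truncation selection, blend crossover, Gaussian mutation and a hill-climbing meme are stand-ins for the chapter's specific operators.

```python
import numpy as np

def memetic_optimize(f, dim, bounds, pop_size=50, generations=200,
                     ls_steps=30, rng=np.random.default_rng()):
    """Evolve a population with GA-style operators, then refine the best
    individual with a local search (the 'meme'). Sketch only."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for _ in range(generations):
        fit = np.apply_along_axis(f, 1, pop)
        parents = pop[np.argsort(fit)[:pop_size // 2]]   # truncation selection
        mates = parents[rng.permutation(len(parents))]
        w = rng.uniform(size=parents.shape)
        children = w * parents + (1 - w) * mates          # blend crossover
        children += rng.normal(0, 0.05 * (hi - lo), children.shape)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    best = min(pop, key=f)                                # best GA individual
    best_val = f(best)
    for _ in range(ls_steps):                             # local refinement
        trial = np.clip(best + rng.normal(0, 0.01 * (hi - lo), dim), lo, hi)
        trial_val = f(trial)
        if trial_val < best_val:
            best, best_val = trial, trial_val
    return best
```

For instance, memetic_optimize(lambda x: float(np.sum(x ** 2)), dim=10, bounds=(-5.0, 5.0)) should return a point close to the origin.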


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 1004
Author(s):  
Marco Antonio Florenzano Mollinetti ◽  
Bernardo Bentes Gatto ◽  
Mário Tasso Ribeiro Serra Neto ◽  
Takahito Kuno

Artificial Bee Colony (ABC) is a Swarm Intelligence optimization algorithm well known for its versatility. The selection of decision variables to update is purely stochastic, which incurs several issues for the local search capability of the ABC. To address these issues, a self-adaptive decision variable selection mechanism is proposed with the goal of balancing the degree of exploration and exploitation throughout the execution of the algorithm. This mechanism, named Adaptive Decision Variable Matrix (A-DVM), represents both stochastic and deterministic parameter selection in a binary matrix and regulates the extent to which each selection is employed based on an estimate of the sparsity of the solutions in the search space. The influence of the proposed approach on the performance and robustness of the original algorithm is validated by experiments on 15 highly multimodal benchmark optimization problems. A numerical comparison on those problems is made against the ABC, its variants, and prominent population-based algorithms (e.g., Particle Swarm Optimization and Differential Evolution). Results show an improvement in the performance of the algorithms with A-DVM on the most challenging instances.
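One way to read the A-DVM idea is sketched below: a binary matrix marks which variables each bee updates, and a sparsity estimate in [0, 1] interpolates between deterministic (round-robin) and stochastic selection. The matrix construction and the selection rule here are assumptions; the paper's versions differ in detail.

```python
import numpy as np

def adaptive_dv_matrix(n_bees, dim, n_select, sparsity,
                       rng=np.random.default_rng()):
    """Build a binary selection matrix, A-DVM style: each row marks the
    n_select decision variables its bee will update. Rows switch between
    stochastic and deterministic selection according to the sparsity
    estimate (illustrative rule, not the paper's exact one)."""
    mask = np.zeros((n_bees, dim), dtype=bool)
    for i in range(n_bees):
        if rng.random() < sparsity:                      # sparse region: explore
            cols = rng.choice(dim, n_select, replace=False)
        else:                                            # dense region: exploit
            cols = (i + np.arange(n_select)) % dim       # round-robin pick
        mask[i, cols] = True
    return mask
```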


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Alireza Rowhanimanesh ◽  
Sohrab Efati

Evolutionary methods are well-known techniques for solving nonlinear constrained optimization problems. Due to the exploration power of evolution-based optimizers, the population usually converges to a region around the global optimum after several generations. Although this convergence can be efficiently used to reduce the search space, in most existing optimization methods the search still continues over the original space, and considerable time is wasted searching ineffective regions. This paper proposes a simple and general approach based on search space reduction to improve the exploitation power of existing evolutionary methods without adding any significant computational complexity. After a number of generations, when enough exploration has been performed, the search space is reduced to a small subspace around the best individual, and the search then continues over this reduced space. If the space reduction parameters (red_gen and red_factor) are adjusted properly, the reduced space will include the global optimum. The proposed scheme can help existing evolutionary methods find better near-optimal solutions in a shorter time. To demonstrate the power of the new approach, it is applied to a set of benchmark constrained optimization problems and the results are compared with a previous work in the literature.
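Using the parameters named above, the reduction step itself is only a few lines; the clipping to the original bounds is an added safety assumption.

```python
import numpy as np

def reduce_space(best, lower, upper, red_factor):
    """After red_gen generations, shrink the search box to a subspace of
    width red_factor times the original, centered on the best individual
    and clipped to the original bounds (sketch of the scheme above)."""
    half = red_factor * (upper - lower) / 2.0
    return np.maximum(best - half, lower), np.minimum(best + half, upper)
```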


2013 ◽  
Vol 421 ◽  
pp. 507-511 ◽  
Author(s):  
Nurezayana Zainal ◽  
Azlan Mohd Zain ◽  
Nor Haizan Mohamed Radzi ◽  
Amirmudin Udin

The Glowworm Swarm Optimization (GSO) algorithm is a derivative-free metaheuristic that mimics the glow behavior of glowworms and can efficiently capture all the maxima of a multimodal function. Nevertheless, it has several weaknesses in locating the global optimum solution, for instance low calculation accuracy, easily falling into local optima, a low rate of convergence success, and slow convergence speed. This paper reviews a new method of swarm intelligence for solving optimization problems using GSO. Recently, the GSO algorithm has been used to simultaneously find solutions of multimodal function optimization problems in various fields of today's industry, such as science, engineering, networking and robotics. From the review, we conclude that the basic GSO algorithm, GSO with modifications or improvements, and GSO with hybridization have all been considered by previous researchers to solve optimization problems. However, based on the literature review, many researchers applied the basic GSO algorithm in their work rather than the other variants.


Author(s):  
Liqun Wang ◽  
Songqing Shan ◽  
G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) technique which systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
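A heavily simplified view of one MPS sampling step is sketched below; the quadratic-regression detection step is omitted, and the acceptance rule shown is only one way to bias samples toward the mode while keeping the whole box reachable.

```python
import numpy as np

def mps_sampling_step(f, samples, lower, upper, n_new=20,
                      rng=np.random.default_rng()):
    """Draw candidates uniformly over the box (lower/upper are 1-D arrays),
    then keep n_new of them with probability proportional to inverted
    objective values, so new points cluster near the current function mode
    while every region keeps nonzero sampling probability."""
    cand = rng.uniform(lower, upper, (10 * n_new, len(lower)))
    vals = np.apply_along_axis(f, 1, cand)
    weights = vals.max() - vals + 1e-12          # low f -> high weight
    idx = rng.choice(len(cand), size=n_new, replace=False,
                     p=weights / weights.sum())
    return np.vstack([samples, cand[idx]])
```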


Author(s):  
Ken Ferens ◽  
Darcy Cook ◽  
Witold Kinsner

This paper proposes the application of chaos to large search space problems, and suggests that this represents the next evolutionary step in the development of adaptive and intelligent systems towards cognitive machines and systems. Three different versions of chaotic simulated annealing (XSA) were applied to combinatorial optimization problems in multiprocessor task allocation. Chaotic walks in the solution space were taken to search for the global optimum or "good enough" task-to-processor allocation solutions. Chaotic variables were generated to set the number of perturbations made in each iteration of an XSA algorithm. In addition, the parameters of a chaotic variable generator were adjusted to create different chaotic distributions with which to search the solution space. The results show that the convergence rate of the XSA algorithm is faster than that of simulated annealing when the solutions are far apart in the solution space. In particular, the XSA algorithms found simulated annealing's best result on average about four times faster than simulated annealing.
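A minimal version of the chaos-driven perturbation count can be sketched as follows; the logistic map with r = 4 and the geometric cooling schedule are assumptions standing in for the three XSA variants.

```python
import math
import random

def xsa_minimize(cost, neighbor, x0, t0=1.0, cooling=0.95, iters=1000):
    """Chaotic simulated annealing sketch: a logistic-map variable sets how
    many perturbations are applied per iteration (assumed mapping z -> up
    to 10 perturbations; the paper's generators are tuned differently)."""
    x, fx, t, z = x0, cost(x0), t0, 0.7   # z seeded away from fixed points
    best, fbest = x, fx
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)           # logistic map iteration
        trial = x
        for _ in range(1 + int(z * 10)):  # chaos-driven perturbation count
            trial = neighbor(trial)
        ft = cost(trial)
        if ft < fx or random.random() < math.exp((fx - ft) / t):
            x, fx = trial, ft
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling
    return best
```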

