A Competitive Memory Paradigm for Multimodal Optimization Driven by Clustering and Chaos

Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 934
Author(s):  
Jorge Gálvez ◽  
Erik Cuevas ◽  
Krishna Gopal Dhal

Evolutionary Computation Methods (ECMs) are stochastic search methods proposed to solve complex optimization problems for which classical optimization methods are not suitable. Most ECMs aim to find the global optimum of a given function. From a practical point of view, however, finding the global optimum in engineering may not always be useful, since it may represent solutions that are not physically, mechanically, or even structurally realizable. The evolutionary operators of ECMs are commonly not designed to efficiently register multiple optima in a single run. Under such circumstances, certain mechanisms must be incorporated to allow ECMs to maintain and register multiple optima at each generation of a single run. On the other hand, the concept of dominance found in animal behavior indicates the level of social interaction between two animals in terms of aggressiveness. Such aggressiveness keeps two or more individuals as distant as possible from one another, where the most dominant individual prevails as the other withdraws. In this paper, the concept of dominance is computationally abstracted as a data structure called “competitive memory” to incorporate multimodal capabilities into the evolutionary operators of the recently proposed Cluster-Chaotic-Optimization (CCO). Under CCO, the competitive memory is implemented as a memory mechanism to efficiently register and maintain all possible optimal values within a single execution of the algorithm. The performance of the proposed method is numerically compared against several multimodal schemes over a set of benchmark functions. The experimental study suggests that the proposed approach outperforms its competitors in terms of robustness, quality, and precision.
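The abstract does not give implementation details, but the dominance idea maps naturally onto a simple data structure. The following Python sketch shows one plausible reading: stored solutions compete with any newcomer that falls within a niche radius, and the fitter individual prevails. The `radius` niching parameter and the maximization convention are assumptions, not the paper's actual formulation.

```python
import numpy as np

class CompetitiveMemory:
    # Dominance-based memory: a newcomer competes with any stored
    # solution closer than `radius`; the fitter individual prevails.
    def __init__(self, radius):
        self.radius = radius
        self.solutions = []          # list of (position, fitness) pairs

    def register(self, x, fitness):
        x = np.asarray(x, dtype=float)
        for i, (xi, fi) in enumerate(self.solutions):
            if np.linalg.norm(x - xi) < self.radius:
                if fitness > fi:     # the dominant individual prevails
                    self.solutions[i] = (x, fitness)
                return
        self.solutions.append((x, fitness))   # new niche registered

# Register every individual of each generation; at the end,
# memory.solutions holds all optima found in a single run.
memory = CompetitiveMemory(radius=1.0)
for x in np.random.uniform(-5, 5, size=(200, 2)):
    memory.register(x, fitness=-float(np.sum(x**2)))
print(len(memory.solutions), "niches registered")
```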

2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization and have proved more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the convergence rate of the algorithm to the global optimum remains asymptotic. To accelerate convergence, a hybrid approach is proposed that combines the nonlinear simplex method (Nelder-Mead) with an adaptive scheme controlling when the local search is applied, and the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of solution quality and convergence.
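A minimal sketch of such a hybrid follows, assuming a simple (1, λ) ES and a fixed-interval trigger in place of the paper's adaptive control scheme; the trigger rule, population size, and step size are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_es_nm(f, dim, lam=20, gens=100, sigma=0.5, local_every=10):
    # (1, lambda) evolution strategy with periodic Nelder-Mead refinement.
    parent = np.random.uniform(-5, 5, dim)
    best, best_f = parent.copy(), f(parent)
    for g in range(gens):
        offspring = parent + sigma * np.random.randn(lam, dim)
        vals = np.apply_along_axis(f, 1, offspring)
        i = np.argmin(vals)
        parent = offspring[i]
        if vals[i] < best_f:
            best, best_f = offspring[i].copy(), vals[i]
        if (g + 1) % local_every == 0:
            # Local refinement via the nonlinear simplex method.
            res = minimize(f, best, method="Nelder-Mead",
                           options={"maxiter": 50 * dim})
            if res.fun < best_f:
                best, best_f = res.x, res.fun
    return best, best_f

print(hybrid_es_nm(lambda x: float(np.sum(x**2)), dim=5))
```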


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Feng Qian ◽  
Mohammad Reza Mahmoudi ◽  
Hamïd Parvïn ◽  
Kim-Hung Pho ◽  
Bui Anh Tuan

Conventional optimization methods are not efficient enough to solve many naturally complicated optimization problems. Nature-inspired metaheuristic algorithms can therefore be used as a new kind of solver for these types of problems. In this paper, an optimization algorithm is proposed that is capable of estimating the expected quality of different locations and of tuning its exploration-exploitation balance according to an individual's location. A novel particle swarm optimization algorithm is presented that implements conditioned learning behavior, so that the particles are led to perform a natural conditioning behavior on an unconditioned motive. In the problem space, particles are classified into several categories: a particle lying within a low-diversity category tends to move towards its best personal experience, whereas a particle in a high-diversity category tends to move towards the best position of that category. The idea of birds' sensitivity to their flying space is also used: particles speed up in undesirable spaces so as to leave them as soon as possible, while in desirable spaces their velocity is reduced so that they have more time to explore the environment. In the proposed algorithm, the birds' instinctive behavior is implemented to construct an initial population randomly or chaotically. Experiments comparing the proposed algorithm with state-of-the-art methods show that it is among the most efficient and appropriate algorithms for solving static optimization problems.
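One hedged reading of the category mechanism in code: k-means clustering stands in for the paper's (unspecified) categorization, and the median per-category diversity serves as the low/high threshold. The velocity speed-up/slow-down rule is omitted, and every constant here is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def conditioned_pso(f, dim, n=30, k=3, iters=100, w=0.7, accel=1.5):
    X = np.random.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    pbest, pbest_f = X.copy(), np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        labels = KMeans(n_clusters=k, n_init=5).fit(X).labels_
        div = np.empty(k)
        cat_best = np.empty((k, dim))
        for c in range(k):
            members = X[labels == c]
            # Diversity: mean distance to the category centroid.
            div[c] = np.mean(np.linalg.norm(members - members.mean(axis=0),
                                            axis=1))
            sel = labels == c
            cat_best[c] = pbest[sel][np.argmin(pbest_f[sel])]
        high_div = div > np.median(div)
        for i in range(n):
            # Low-diversity category: pull toward the personal best;
            # high-diversity category: pull toward the category's best.
            target = cat_best[labels[i]] if high_div[labels[i]] else pbest[i]
            V[i] = w * V[i] + accel * np.random.rand(dim) * (target - X[i])
            X[i] = X[i] + V[i]
            fi = f(X[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i].copy(), fi
    g = np.argmin(pbest_f)
    return pbest[g], pbest_f[g]

print(conditioned_pso(lambda x: float(np.sum(x**2)), dim=4))
```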


Author(s):  
Adel A. Younis ◽  
George H. Cheng ◽  
G. Gary Wang ◽  
Zuomin Dong

Metamodel-based design optimization (MBDO) algorithms have attracted considerable interest in recent years due to their capability of dealing with complex optimization problems involving computationally expensive objective and constraint functions and local optima. Conventional unimodal-based optimization algorithms and stochastic global optimization algorithms either frequently miss the global optimum or require unacceptable computation time. In this work, a generic testbed/platform for evaluating various MBDO algorithms is introduced. The purpose of the platform is to facilitate quantitative comparison of different MBDO algorithms using standard test problems, test procedures, and test outputs, and to make the testing and improvement of new algorithms more efficient. The platform consists of a comprehensive test function database containing about 100 benchmark functions and engineering problems. The testbed accepts any optimization algorithm for testing, requiring only minor modifications to meet its interface requirements. The testbed is useful for comparing the performance of competing algorithms through execution of the same problems, allowing researchers and practitioners to test and choose the most suitable optimization tool for their specific needs. It also helps to increase confidence in, and the reliability of, newly developed MBDO tools. Many new MBDO algorithms, including Mode Pursuing Sampling (MPS), Pareto Set Pursuing (PSP), and Space Exploration and Unimodal Region Elimination (SEUMRE), were tested in this work to demonstrate its functionality and benefits.
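The abstract does not describe the platform's interface, but the idea of a plug-in harness that runs any optimizer on shared problems with shared metrics can be sketched as follows. The class name, interface, and recorded metrics are assumptions for illustration only.

```python
import time
import numpy as np

class MBDOTestbed:
    # Hypothetical plug-in harness: any optimizer exposed as
    # optimizer(f, bounds) -> (x_best, f_best) can be benchmarked
    # on identical problems with identical metrics.
    def __init__(self):
        self.problems = {}

    def add_problem(self, name, f, bounds):
        self.problems[name] = (f, bounds)

    def run(self, optimizer, name):
        f, bounds = self.problems[name]
        calls = [0]
        def counted(x):          # count expensive function evaluations
            calls[0] += 1
            return f(x)
        t0 = time.time()
        x_best, f_best = optimizer(counted, bounds)
        return {"problem": name, "f_best": f_best,
                "evals": calls[0], "seconds": round(time.time() - t0, 4)}

def random_search(f, bounds, n=1000):
    # Baseline optimizer used to exercise the harness.
    lo, hi = np.asarray(bounds, float).T
    X = np.random.uniform(lo, hi, (n, len(lo)))
    vals = np.apply_along_axis(f, 1, X)
    i = np.argmin(vals)
    return X[i], vals[i]

bed = MBDOTestbed()
bed.add_problem("sphere", lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)
print(bed.run(random_search, "sphere"))
```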


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Alireza Rowhanimanesh ◽  
Sohrab Efati

Evolutionary methods are well-known techniques for solving nonlinear constrained optimization problems. Due to the exploration power of evolution-based optimizers, the population usually converges to a region around the global optimum after several generations. Although this convergence could be exploited to reduce the search space, most existing optimization methods continue searching over the original space, wasting considerable time on ineffective regions. This paper proposes a simple and general approach based on search space reduction to improve the exploitation power of existing evolutionary methods without adding significant computational complexity. After a number of generations, once enough exploration has been performed, the search space is reduced to a small subspace around the best individual, and the search then continues over this reduced space. If the space reduction parameters (red_gen and red_factor) are adjusted properly, the reduced space will include the global optimum. The proposed scheme can help existing evolutionary methods find better near-optimal solutions in a shorter time. To demonstrate the power of the new approach, it is applied to a set of benchmark constrained optimization problems, and the results are compared with previous work in the literature.
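The reduction step itself is simple enough to state directly. Below is a minimal sketch of one direct reading: after red_gen generations, the bounding box shrinks by red_factor around the best individual. The mutation-only random walk stands in for whatever underlying evolutionary method is used; it and all constants are illustrative assumptions.

```python
import numpy as np

def reduce_space(lower, upper, x_best, red_factor):
    # Shrink each dimension's range by red_factor and center the new
    # box on the best individual, clipped to the original bounds.
    span = (upper - lower) * red_factor / 2.0
    return np.maximum(lower, x_best - span), np.minimum(upper, x_best + span)

def evolve_with_reduction(f, lower, upper, red_gen=50, red_factor=0.2,
                          gens=150, pop=40):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    X = np.random.uniform(lower, upper, (pop, len(lower)))
    best, best_f = None, np.inf
    for g in range(gens):
        vals = np.apply_along_axis(f, 1, X)
        i = np.argmin(vals)
        if vals[i] < best_f:
            best, best_f = X[i].copy(), vals[i]
        if g == red_gen:
            # Enough exploration performed: continue in the reduced box.
            lower, upper = reduce_space(lower, upper, best, red_factor)
            X = np.random.uniform(lower, upper, (pop, len(lower)))
        else:
            # Stand-in for the underlying evolutionary method.
            X = np.clip(X + 0.1 * (upper - lower) * np.random.randn(*X.shape),
                        lower, upper)
    return best, best_f

print(evolve_with_reduction(lambda x: float(np.sum(x**2)), [-5] * 5, [5] * 5))
```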


Author(s):  
Liqun Wang ◽  
Songqing Shan ◽  
G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) technique that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, the method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
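A loose sketch of the sampling idea follows: candidates are drawn over the whole box, then resampled preferentially around low objective values, so sampling concentrates near the mode while still covering the space. The inverse-value weighting and perturbation scale are assumptions, and the paper's quadratic-regression detection step is omitted.

```python
import numpy as np

def mode_pursuing_sampling(f, lower, upper, n_iter=20, n_new=20, n_init=200):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = len(lower)
    X = np.random.uniform(lower, upper, (n_init, dim))
    vals = np.apply_along_axis(f, 1, X)
    for _ in range(n_iter):
        # Weight points inversely to objective value (minimization),
        # so seeds concentrate around the current mode.
        w = vals.max() - vals + 1e-12
        seeds = X[np.random.choice(len(X), n_new, p=w / w.sum())]
        # Mix local perturbations with uniform space-filling samples.
        local = seeds + 0.05 * (upper - lower) * np.random.randn(n_new, dim)
        uniform = np.random.uniform(lower, upper, (n_new, dim))
        new = np.clip(np.vstack([local, uniform]), lower, upper)
        X = np.vstack([X, new])
        vals = np.concatenate([vals, np.apply_along_axis(f, 1, new)])
    i = np.argmin(vals)
    return X[i], vals[i]

print(mode_pursuing_sampling(lambda x: float(np.sum(x**2)), [-5] * 4, [5] * 4))
```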


2011 ◽  
Vol 181-182 ◽  
pp. 937-942
Author(s):  
Bo Liu ◽  
Hong Xia Pan

Particle swarm optimization (PSO) is widely used to solve complex optimization problems. However, classical PSO may become trapped in local optima and fail to converge to the global optimum. In this paper, the concepts of self particles and random particles are introduced into classical PSO to maintain particle diversity. All particles are divided into standard particles, self particles, and random particles according to a specified proportion. The features of the proposed algorithm are analyzed, and several test functions are evaluated in a simulation study. Experimental results show that the proposed PDPSO algorithm can escape from local minima and significantly enhance convergence precision.
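A hedged sketch of the three-role idea, assuming a 60/20/20 split (the abstract does not give the proportion), that self particles drop the social term, and that random particles are re-initialized each iteration:

```python
import numpy as np

def pdpso(f, dim, n=30, iters=150, ratios=(0.6, 0.2, 0.2),
          w=0.7, c1=1.5, c2=1.5):
    X = np.random.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    pbest, pbest_f = X.copy(), np.apply_along_axis(f, 1, X)
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    n_std, n_self = int(ratios[0] * n), int(ratios[1] * n)
    for _ in range(iters):
        order = np.random.permutation(n)   # reassign roles each iteration
        for rank, i in enumerate(order):
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            if rank < n_std:               # standard PSO particle
                V[i] = (w * V[i] + c1 * r1 * (pbest[i] - X[i])
                        + c2 * r2 * (gbest - X[i]))
                X[i] += V[i]
            elif rank < n_std + n_self:    # self particle: no social term
                V[i] = w * V[i] + c1 * r1 * (pbest[i] - X[i])
                X[i] += V[i]
            else:                          # random particle: diversity
                X[i], V[i] = np.random.uniform(-5, 5, dim), np.zeros(dim)
            fi = f(X[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i].copy(), fi
                if fi < gbest_f:
                    gbest, gbest_f = X[i].copy(), fi
    return gbest, gbest_f

print(pdpso(lambda x: float(np.sum(x**2)), dim=5))
```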


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Nian-Ze Hu ◽  
Han-Lin Li ◽  
Jung-Fa Tsai

Packing optimization problems seek the best way of placing a given set of rectangular boxes within a rectangular box of minimum volume. Current packing optimization methods either find it difficult to obtain an optimal solution or require too many extra 0-1 variables in the solution process. This study develops a novel method to convert the nonlinear objective function of a packing program into an increasing function of a single variable with two fixed parameters. The original packing program then becomes a linear program that promises to reach a global optimum. This linear program is decomposed into several subproblems by specifying various parameter values, which can be solved simultaneously by a distributed computation algorithm. A reference solution obtained by applying a genetic algorithm serves as an upper bound on the optimal solution and is used to reduce the search region.
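Stripped of the packing model, the decomposition pattern is a parameter sweep over independently solvable linear subproblems, pruned by the reference bound. The toy sketch below shows only that pattern: the two fixed parameters, the LP itself, and the GA-derived bound are all stand-ins, not the paper's formulation.

```python
import itertools
from multiprocessing import Pool
from scipy.optimize import linprog

GA_UPPER_BOUND = 25.0  # stand-in for the genetic-algorithm reference value

def solve_subproblem(params):
    # Toy LP for fixed parameters (p1, p2): minimize p1*x + p2*y
    # subject to x + y >= 10, 0 <= x, y <= 10. A real packing
    # subproblem would be far larger; only the pattern matters here.
    p1, p2 = params
    res = linprog(c=[p1, p2], A_ub=[[-1.0, -1.0]], b_ub=[-10.0],
                  bounds=[(0, 10), (0, 10)], method="highs")
    return params, res.fun

if __name__ == "__main__":
    grid = list(itertools.product([1.0, 2.0, 3.0], repeat=2))
    with Pool() as pool:                 # solve subproblems in parallel
        results = pool.map(solve_subproblem, grid)
    # Discard subproblems that cannot beat the GA-derived upper bound.
    candidates = [(p, v) for p, v in results if v <= GA_UPPER_BOUND]
    print(min(candidates, key=lambda t: t[1]))
```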


2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Gaige Wang ◽  
Lihong Guo ◽  
Amir Hossein Gandomi ◽  
Lihua Cao ◽  
Amir Hossein Alavi ◽  
...  

To improve the performance of the krill herd (KH) algorithm, a Lévy-flight krill herd (LKH) algorithm is proposed in this paper for solving optimization tasks within limited computing time. The improvement consists of adding a new local Lévy-flight (LLF) operator to the krill-updating process in order to improve the algorithm's efficiency and reliability when coping with global numerical optimization problems. The LLF operator encourages exploitation and makes the krill individuals search the space carefully at the end of the search. An elitism scheme is also applied to preserve the best krill during the update process. Fourteen standard benchmark functions are used to verify the effects of these improvements, and the results illustrate that, in most cases, the performance of this novel metaheuristic LKH method is superior to, or at least highly competitive with, the standard KH and other population-based optimization methods. In particular, the new method can accelerate global convergence towards the true global optimum while preserving the main features of the basic KH.
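Lévy-flight operators are usually implemented with Mantegna's algorithm, sketched below with beta = 1.5 (a common choice; the paper's constants and the exact form of the local move around the best krill are assumptions):

```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for heavy-tailed Levy-stable steps.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = np.random.randn(dim) * sigma
    v = np.random.randn(dim)
    return u / np.abs(v) ** (1 / beta)

def local_levy_update(krill, best, scale=0.01):
    # Hypothetical local move: a heavy-tailed perturbation relative to
    # the distance from the best individual, so late-stage exploitation
    # can still make the occasional long escape jump.
    return krill + scale * levy_step(len(krill)) * (krill - best)

krill = np.random.uniform(-5, 5, 10)
print(local_levy_update(krill, best=np.zeros(10)))
```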


2012 ◽  
Vol 2012 ◽  
pp. 1-36 ◽  
Author(s):  
Jui-Yu Wu

This work presents a hybrid real-coded genetic algorithm with particle swarm optimization (RGA-PSO) and a hybrid artificial immune algorithm with PSO (AIA-PSO) for solving 13 constrained global optimization (CGO) problems, comprising six nonlinear programming and seven generalized polynomial programming problems. External RGA and AIA approaches are used to optimize the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of an internal PSO algorithm, and the CGO problems are then solved using the internal PSO. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated on the 13 CGO problems, and the numerical results are compared with those of published individual GA and AIA approaches. Experimental results indicate that the proposed RGA-PSO and AIA-PSO algorithms converge to global optimum solutions of the CGO problems, that the optimum parameter settings of the internal PSO algorithm can be obtained through the external RGA and AIA approaches, and that the proposed algorithms outperform some published individual GA and AIA approaches. The proposed RGA-PSO and AIA-PSO algorithms are therefore highly promising stochastic global optimization methods for solving CGO problems.
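The nested structure (an external evolutionary tuner wrapped around an internal PSO) can be sketched as follows. The unconstrained inner PSO, the mutation-only outer GA, and all parameter ranges are simplifying assumptions; the paper's internal PSO also handles constraints via a penalty parameter, omitted here.

```python
import numpy as np

def inner_pso(f, dim, w, c1, c2, n=20, iters=60):
    # Plain PSO governed by externally tuned parameters (w, c1, c2).
    X = np.random.uniform(-5, 5, (n, dim))
    V = np.zeros((n, dim))
    pbest, pbest_f = X.copy(), np.apply_along_axis(f, 1, X)
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        vals = np.apply_along_axis(f, 1, X)
        better = vals < pbest_f
        pbest[better], pbest_f[better] = X[better], vals[better]
        if pbest_f.min() < gbest_f:
            g = np.argmin(pbest_f)
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest_f

def external_rga(f, dim, pop=10, gens=10):
    # Mutation-only real-coded GA over PSO parameter vectors (w, c1, c2);
    # the fitness of a parameter vector is the result of one inner run.
    P = np.random.uniform([0.3, 0.5, 0.5], [0.9, 2.5, 2.5], (pop, 3))
    for _ in range(gens):
        fit = np.array([inner_pso(f, dim, *p) for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]
        children = np.clip(elite + 0.1 * np.random.randn(*elite.shape),
                           0.1, 3.0)
        P = np.vstack([elite, children])
    fit = np.array([inner_pso(f, dim, *p) for p in P])
    return P[np.argmin(fit)]

print(external_rga(lambda x: float(np.sum(x**2)), dim=5))
```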


2019 ◽  
Vol 8 (1) ◽  
pp. 17-21
Author(s):  
Nika Topuria ◽  
Omar Kikvidze

Non-deterministic algorithms are widely used nowadays for solving multi-variable optimization problems. The Genetic Algorithm belongs to a group of stochastic biomimicry algorithms; it allows us to achieve optimal or near-optimal results on large optimization problems in exceptionally short time compared to standard optimization methods. A major advantage of the Genetic Algorithm is its ability to fuse genes, to mutate, and to perform selection based on a fitness parameter. These mechanisms protect the search from being trapped in local optima, to which most deterministic algorithms are prone. In this paper we experimentally show the advantage of Genetic Algorithms over traditional optimization methods by solving a complex optimization problem.
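A minimal real-coded GA showing the three mechanisms the abstract names (gene fusion via crossover, mutation, and fitness-based selection); the operators and constants are illustrative assumptions, demonstrated here on the Rastrigin function:

```python
import numpy as np

def genetic_algorithm(f, dim, pop=50, gens=200, pm=0.1, lo=-5.0, hi=5.0):
    P = np.random.uniform(lo, hi, (pop, dim))
    best, best_f = None, np.inf
    for _ in range(gens):
        vals = np.apply_along_axis(f, 1, P)
        i = np.argmin(vals)
        if vals[i] < best_f:
            best, best_f = P[i].copy(), vals[i]
        # Tournament selection driven by the fitness parameter.
        idx = np.random.randint(pop, size=(pop, 2))
        winners = np.where(vals[idx[:, 0]] < vals[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = P[winners]
        # Crossover: fuse the genes of consecutive parent pairs.
        alpha = np.random.rand(pop, dim)
        P = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Mutation: random gene resets help escape local optima.
        mask = np.random.rand(pop, dim) < pm
        P[mask] = np.random.uniform(lo, hi, mask.sum())
    return best, best_f

rastrigin = lambda x: 10 * len(x) + float(np.sum(x**2
                                          - 10 * np.cos(2 * np.pi * x)))
print(genetic_algorithm(rastrigin, dim=5))
```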

