Properties of Gray and Binary Representations

2004 ◽  
Vol 12 (1) ◽  
pp. 47-76 ◽  
Author(s):  
Jonathan Rowe ◽  
Darrell Whitley ◽  
Laura Barbulescu ◽  
Jean-Paul Watson

Representations are formalized as encodings that map the search space to the vertex set of a graph. We define the notion of bit-equivalent encodings and show that for such encodings the corresponding Walsh coefficients are conserved. We focus on Gray codes as particular types of encoding and present a review of properties related to the use of Gray codes. Gray codes are widely used in conjunction with genetic algorithms and bit-climbing algorithms for parameter optimization problems. We present new convergence proofs for a special class of unimodal functions; the proofs show that a steepest-ascent bit climber using any reflected Gray code representation reaches the global optimum in a number of steps that is linear with respect to the encoding size. There are in fact many different Gray codes. Shifting is defined as a mechanism for dynamically switching from one Gray code representation to another in order to escape local optima. Theoretical results that substantially improve our understanding of Gray codes and the shifting mechanism are presented. New proofs also shed light on the number of unique Gray code neighborhoods accessible via shifting and on how the neighborhood structure changes during shifting. We show that shifting can improve the performance of both a local search algorithm and one of the best genetic algorithms currently available.
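To make the encoding concrete, the following minimal Python sketch shows reflected Gray encoding and decoding, together with the shifting idea: offsetting the integer space by a constant before Gray-encoding changes which points are Hamming-1 neighbors. The function names and the offset-based formulation are illustrative assumptions, not the paper's exact construction.

```python
def gray_encode(i: int) -> int:
    """Standard reflected Gray code of a non-negative integer."""
    return i ^ (i >> 1)

def gray_decode(g: int) -> int:
    """Inverse of gray_encode."""
    i = g
    while g:
        g >>= 1
        i ^= g
    return i

def shifted_neighbors(x: int, n_bits: int, shift: int):
    """Hamming-1 neighbors of point x under a Gray code whose integer
    space has been shifted by a constant (mod 2^n_bits); different
    shifts induce different neighborhoods, which is what lets a bit
    climber escape a local optimum by switching representations."""
    mask = (1 << n_bits) - 1
    g = gray_encode((x + shift) & mask)
    for b in range(n_bits):
        yield (gray_decode(g ^ (1 << b)) - shift) & mask

# Different shifts give a local search different escape routes:
print(sorted(shifted_neighbors(5, 4, 0)))   # [2, 4, 6, 10]
print(sorted(shifted_neighbors(5, 4, 3)))   # [4, 6, 8, 12]
```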

2013 ◽  
Vol 21 (3) ◽  
pp. 471-495 ◽  
Author(s):  
Carlos Echegoyen ◽  
Alexander Mendiburu ◽  
Roberto Santana ◽  
Jose A. Lozano

Understanding the relationship between a search algorithm and the space of problems is a fundamental issue in the optimization field. In this paper, we lay the foundations to elaborate taxonomies of problems under estimation of distribution algorithms (EDAs). By using an infinite population model and assuming that the selection operator is based on the rank of the solutions, we group optimization problems according to the behavior of the EDA. Through the definition of an equivalence relation between functions, it is possible to partition the space of problems into equivalence classes in which the algorithm has the same behavior. We show that only the probabilistic model is able to generate different partitions of the set of possible problems and hence it predetermines the number of different behaviors that the algorithm can exhibit. As a natural consequence of our definitions, all the objective functions are in the same equivalence class when the algorithm does not impose restrictions on the probabilistic model. The taxonomy of problems, which is also valid for finite populations, is studied in depth for a simple EDA that assumes independence among the variables of the problem. We provide a necessary and sufficient condition for deciding the equivalence between functions, and we then develop the operators to describe and count the members of a class. In addition, we show the intrinsic relation between univariate EDAs and the neighborhood system induced by the Hamming distance by proving that all the functions in the same class have the same number of local optima and that these optima occupy the same ranking positions. Finally, we carry out numerical simulations in order to analyze the different behaviors that the algorithm can exhibit for the functions defined over the search space [Formula: see text].
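As a concrete reference point, the sketch below implements the kind of simple univariate EDA for which the taxonomy is studied in depth: each variable is modeled by an independent Bernoulli probability, re-estimated each generation from the top-ranked solutions, so selection depends only on the rank of the solutions. The onemax objective and all parameter values are illustrative assumptions.

```python
import random

def umda(f, n_bits=20, pop=100, keep=50, iters=50, seed=0):
    """Minimal univariate EDA (UMDA-style): each bit is modeled by an
    independent Bernoulli probability, re-estimated from the selected
    (top-ranked) solutions each generation."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # independent bit probabilities
    best = None
    for _ in range(iters):
        popu = [[1 if rng.random() < p[j] else 0 for j in range(n_bits)]
                for _ in range(pop)]
        popu.sort(key=f, reverse=True)      # rank-based (truncation) selection
        sel = popu[:keep]
        p = [sum(x[j] for x in sel) / keep for j in range(n_bits)]
        if best is None or f(popu[0]) > f(best):
            best = popu[0]
    return best

onemax = sum   # illustrative objective: count of ones
print(onemax(umda(onemax)))
```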


2013 ◽  
Vol 21 (4) ◽  
pp. 625-658 ◽  
Author(s):  
Leticia Hernando ◽  
Alexander Mendiburu ◽  
Jose A. Lozano

The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space, and their performance strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, estimating the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods for estimating the number of local optima in combinatorial optimization problems. The methods reviewed come not only from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation on synthetic as well as real problems is given. We conclude by providing recommendations on which methods to use in several scenarios.
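One family of such estimators treats local optima like species in a capture-recapture study: run random-restart hill climbing, record how often each optimum is "captured", and extrapolate. The sketch below pairs a best-improvement climber with a Chao1-style richness estimator; the random lookup-table objective and all parameter values are illustrative assumptions, not the paper's benchmarks.

```python
import random
from collections import Counter

def hill_climb(f, x, neighbors):
    """Best-improvement local search to a local optimum."""
    while True:
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x
        x = best

def estimate_local_optima(f, n_bits, restarts=200, seed=0):
    """Sample local optima by random-restart hill climbing, then apply a
    Chao1-style species-richness estimator to the capture frequencies
    (one of the statistical-literature estimators the paper discusses)."""
    rng = random.Random(seed)
    def neighbors(x):
        return [x ^ (1 << b) for b in range(n_bits)]   # Hamming-1 moves
    found = Counter()
    for _ in range(restarts):
        x0 = rng.randrange(1 << n_bits)
        found[hill_climb(f, x0, neighbors)] += 1
    f1 = sum(1 for c in found.values() if c == 1)      # seen exactly once
    f2 = sum(1 for c in found.values() if c == 2)      # seen exactly twice
    chao1 = len(found) + (f1 * f1) / (2 * f2) if f2 else len(found) + f1 * (f1 - 1) / 2
    return len(found), chao1

# Example: a rugged function over 10-bit strings (illustrative only).
rnd = random.Random(42)
table = [rnd.random() for _ in range(1 << 10)]
print(estimate_local_optima(lambda x: table[x], 10))
```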


Author(s):  
Sajad Ahmad Rather ◽  
P. Shanthi Bala

In recent years, various heuristic algorithms based on natural phenomena and swarm behaviors have been introduced to solve innumerable optimization problems, and they often show better performance than conventional algorithms. The gravitational search algorithm (GSA), proposed relatively recently, is an optimization method based on Newton's law of universal gravitation and the laws of motion. Within a few years, GSA became popular in the research community and has been applied to various fields such as electrical science, power systems, computer science, and civil and mechanical engineering. This chapter shows the importance of GSA, its hybridizations, and its applications in solving clustering and classification problems. In clustering, GSA is hybridized with other optimization algorithms to overcome drawbacks of conventional data clustering algorithms such as the curse of dimensionality, trapping in local optima, and a limited search space. GSA is also applied to classification problems for pattern recognition, feature extraction, and increasing classification accuracy.
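For readers new to GSA, the sketch below shows the core update loop in simplified form (in the spirit of the original equations, not an exact reproduction): fitness determines each agent's mass, agents attract one another with a Newton-like force, and positions evolve under the resulting acceleration while the gravitational constant decays over time. The bounds, constants, and sphere objective are illustrative assumptions.

```python
import math, random

def gsa(f, dim=5, agents=20, iters=100, g0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm sketch (minimization):
    masses are derived from fitness, agents attract each other with a
    Newton-inspired force, and they move under the resulting acceleration."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    V = [[0.0] * dim for _ in range(agents)]
    for t in range(iters):
        fit = [f(x) for x in X]
        best, worst = min(fit), max(fit)
        m = [(worst - fi) / (worst - best + 1e-12) for fi in fit]
        M = [mi / (sum(m) + 1e-12) for mi in m]        # normalized masses
        G = g0 * math.exp(-alpha * t / iters)          # decaying gravitational constant
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                R = math.dist(X[i], X[j])
                for d in range(dim):
                    # F_ij / M_i -> acceleration contribution on agent i
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / (R + 1e-12)
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] += V[i][d]
    return min(X, key=f)

print(gsa(lambda x: sum(v * v for v in x)))   # sphere function
```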


2014 ◽  
Vol 2014 ◽  
pp. 1-20 ◽  
Author(s):  
Erik Cuevas ◽  
Adolfo Reyna-Orta

Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm that cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local optima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms on the majority of test problems, while avoiding any serious computational deterioration.
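The memory mechanism and the depuration step can be illustrated with a small sketch: a candidate is registered as a potential optimum only if it lies far enough from every stored element, and otherwise only the fitter of the nearby pair is kept. The distance threshold, fitness function, and toy data are illustrative assumptions, not the MCS paper's exact rules.

```python
import math

def update_memory(memory, candidate, fitness, min_dist):
    """Sketch of a multimodal memory mechanism in the spirit of MCS:
    register a candidate as a potential optimum only if it is far enough
    from every stored element; otherwise keep the fitter of the pair
    (a simple form of the depuration of duplicated elements)."""
    for k, stored in enumerate(memory):
        if math.dist(stored, candidate) < min_dist:
            if fitness(candidate) > fitness(stored):
                memory[k] = candidate       # replace a nearby, worse element
            return memory
    memory.append(candidate)                # distinct basin: register it
    return memory

# Example: a bimodal 1-D function with peaks near x=1 and x=3 (illustrative).
f = lambda p: -(p[0] - 1) ** 2 if p[0] < 2 else -(p[0] - 3) ** 2 + 0.5
mem = []
for x in [(0.9,), (1.1,), (3.0,), (2.9,)]:
    mem = update_memory(mem, x, f, min_dist=0.5)
print(mem)   # one representative per peak: [(0.9,), (3.0,)]
```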


Author(s):  
Chang-Wook Han ◽  
Hajime Nobuhara

Genetic algorithms (GAs) are well-known and very popular stochastic optimization algorithms. Although the GA is a powerful method for finding the global optimum, it has some drawbacks, such as premature convergence to local optima and slow convergence toward the global optimum. To enhance the performance of the GA, this paper proposes an adaptive genetic algorithm based on a partitioning method. The partitioning method, which enables a genetic algorithm to find a solution very effectively, adaptively divides the search space into promising sub-spaces to reduce the complexity of optimization, and it becomes more effective as the complexity of the search space increases. The validity of the proposed method is confirmed by applying it to several benchmark test functions and a traveling salesman problem.
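The abstract does not spell out the partitioning rules, so the sketch below illustrates the general idea only: repeatedly split the current region into sub-spaces, score each by a few samples, and continue searching the most promising one. In the paper this guides a GA; here a plain sampling loop stands in for it, and all parameters are assumptions.

```python
import math, random

def partitioned_search(f, lo, hi, depth=12, samples=8, seed=0):
    """Illustrative sketch of the partitioning idea (not the paper's exact
    method): repeatedly split the current interval, score each half by a
    few random samples, and keep only the most promising sub-space."""
    rng = random.Random(seed)
    for _ in range(depth):
        mid = (lo + hi) / 2
        # Score each half by the best of a few random samples (maximization).
        left = max(f(rng.uniform(lo, mid)) for _ in range(samples))
        right = max(f(rng.uniform(mid, hi)) for _ in range(samples))
        lo, hi = (lo, mid) if left >= right else (mid, hi)
    return (lo + hi) / 2

# Maximize a 1-D multimodal function on [0, 10].
print(partitioned_search(lambda x: math.sin(x) - 0.1 * (x - 6) ** 2, 0, 10))
```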


Author(s):  
Umit Can ◽  
Bilal Alatas

Classical optimization algorithms are not efficient in solving complex search and optimization problems, so heuristic optimization algorithms have been proposed. In this paper, the exploration of association rules within numerical databases with the Gravitational Search Algorithm (GSA) is performed for the first time. GSA is designed as a search method for quantitative association rules in databases, which can be regarded as the search space. Furthermore, GSA eliminates the hard task of determining the minimum values of confidence and support for every database. Apart from this, the fitness function used for GSA is very flexible: depending on the problem of interest, parameters can be removed from or added to the fitness function. The range values of the attributes are automatically adjusted while the rules are being mined, so no pre-processing of the data is required. The attribute-interaction problem is also eliminated with the designed GSA. GSA has been tested on four real databases, with promising results. GSA appears to be an effective search method for the complex data mining tasks of numerical sequential pattern mining, numerical classification rule mining, and clustering rule mining.
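A hedged sketch of the kind of flexible fitness function described above: support and confidence are rewarded and overly wide attribute intervals penalized, with weighted terms that can be added or removed per problem. The specific terms, weights, and toy database are illustrative assumptions, not the paper's exact formulation.

```python
def rule_fitness(support, confidence, amplitude, w_s=0.5, w_c=0.4, w_a=0.1):
    """Illustrative weighted fitness for a candidate quantitative rule:
    reward support and confidence, penalize wide attribute intervals
    (the exact terms and weights in the paper may differ)."""
    return w_s * support + w_c * confidence - w_a * amplitude

def rule_stats(db, antecedent, consequent):
    """Support/confidence of "antecedent -> consequent", where each part is
    a dict: attribute -> (low, high) interval over numeric records."""
    covers = lambda rec, part: all(lo <= rec[a] <= hi for a, (lo, hi) in part.items())
    n_ant = sum(1 for r in db if covers(r, antecedent))
    n_both = sum(1 for r in db if covers(r, antecedent) and covers(r, consequent))
    support = n_both / len(db)
    confidence = n_both / n_ant if n_ant else 0.0
    return support, confidence

# Example on a toy numeric database.
db = [{"age": 25, "income": 30}, {"age": 42, "income": 55}, {"age": 45, "income": 60}]
s, c = rule_stats(db, {"age": (40, 50)}, {"income": (50, 70)})
print(rule_fitness(s, c, amplitude=0.2), s, c)
```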


Author(s):  
Prachi Agrawal ◽  
Talari Ganesh ◽  
Ali Wagdy Mohamed

This article proposes a novel binary version of the recently developed Gaining Sharing knowledge-based optimization algorithm (GSK) to solve binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The binary version, the novel binary Gaining Sharing knowledge-based optimization algorithm (NBGSK), depends mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively when solving problems in binary space. Moreover, to enhance the performance of NBGSK and prevent its solutions from becoming trapped in local optima, NBGSK with population size reduction (PR-NBGSK) is introduced; it decreases the population size gradually with a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances with small and large dimensions, which shows that NBGSK and PR-NBGSK are more efficient and effective in terms of convergence, robustness, and accuracy.
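The linear population-size reduction in PR-NBGSK can be written down directly; the straight-line schedule is the mechanism described, while the endpoint values below are illustrative assumptions rather than the paper's settings.

```python
def population_size(t, max_iters, n_init=100, n_min=12):
    """Linear population-size reduction: the population shrinks from
    n_init to n_min along a straight line over the run (the endpoint
    values here are assumptions, not the paper's settings)."""
    return round(n_init - (n_init - n_min) * t / max_iters)

# The worst-ranked individuals would be dropped whenever the size shrinks.
print([population_size(t, 1000) for t in (0, 250, 500, 1000)])   # [100, 78, 56, 12]
```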


2021 ◽  
Vol 11 (3) ◽  
pp. 1286 ◽  
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ali Dehghani ◽  
Om P. Malik ◽  
Ruben Morales-Menendez ◽  
...  

Population-based optimization algorithms inspired by nature are among the most powerful tools for solving optimization problems. These algorithms provide a solution to a problem by stochastically searching the search space, with central design ideas derived from various natural phenomena, the behavior and living conditions of organisms, laws of physics, and so on. A new population-based optimization algorithm called the Binary Spring Search Algorithm (BSSA) is introduced to solve optimization problems. BSSA is based on a simulation of Hooke's law for a traditional system of weights and springs: the population comprises weights that are connected by unique springs. The mathematical model of the proposed algorithm is presented and used to obtain solutions to optimization problems. The results were thoroughly validated on different unimodal and multimodal functions; additionally, BSSA was compared with high-performance algorithms: the binary grasshopper optimization algorithm, binary dragonfly algorithm, binary bat algorithm, binary gravitational search algorithm, binary particle swarm optimization, and binary genetic algorithm. The results show the superiority of BSSA, and the results of the Friedman test corroborate that BSSA is more competitive.
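Since the abstract gives the physical metaphor rather than the equations, the following is a speculative sketch of one binary spring-search step: agents are pulled toward fitter agents by Hooke-like forces proportional to displacement, and velocities are mapped to bit probabilities through an S-shaped transfer function, as is common in binary physics-based algorithms. All constants, the elitism rule, and the onemax example are assumptions, not BSSA's exact update rules.

```python
import math, random

def binary_spring_step(X, V, fitness, k=0.1, rng=random):
    """One illustrative binary spring-search step (a sketch, not the exact
    BSSA equations): Hooke-like forces pull each agent toward fitter
    agents, and a sigmoid transfer function turns velocity into the
    probability of setting each bit to 1."""
    fit = [fitness(x) for x in X]
    n, dim = len(X), len(X[0])
    best_i = max(range(n), key=lambda a: fit[a])
    newX = [row[:] for row in X]
    for i in range(n):
        if i == best_i:
            continue                     # simple elitism: keep the best agent intact
        for d in range(dim):
            # Hooke's law: F = k * displacement, summed over fitter agents.
            force = sum(k * (X[j][d] - X[i][d]) for j in range(n) if fit[j] > fit[i])
            V[i][d] = 0.9 * V[i][d] + force
            p = 1.0 / (1.0 + math.exp(-V[i][d]))   # S-shaped transfer function
            newX[i][d] = 1 if rng.random() < p else 0
    return newX, V

# Example: a short run on onemax with 6 agents of 8 bits.
rng = random.Random(1)
X = [[rng.randint(0, 1) for _ in range(8)] for _ in range(6)]
V = [[0.0] * 8 for _ in range(6)]
for _ in range(30):
    X, V = binary_spring_step(X, V, sum, rng=rng)
print(max(X, key=sum))
```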


2016 ◽  
Vol 25 (06) ◽  
pp. 1650033 ◽  
Author(s):  
Hossam Faris ◽  
Ibrahim Aljarah ◽  
Nailah Al-Madi ◽  
Seyedali Mirjalili

Evolutionary neural networks have proven beneficial on challenging datasets, mainly due to their high local-optima avoidance. Stochastic operators in such techniques reduce the probability of stagnation in local solutions and help them supersede conventional training algorithms such as Back Propagation (BP) and Levenberg-Marquardt (LM). According to the No-Free-Lunch (NFL) theorem, however, there is no single optimization technique for solving all optimization problems. This means that a neural network trained by a new algorithm has the potential to solve a new set of problems or outperform current techniques on existing problems. This motivates our attempt to investigate the efficiency of the recently proposed evolutionary algorithm called the Lightning Search Algorithm (LSA) in training neural networks, for the first time in the literature. The LSA-based trainer is benchmarked on 16 popular medical diagnosis problems and compared to BP, LM, and six other evolutionary trainers. The quantitative and qualitative results show that the LSA algorithm offers not only better avoidance of local solutions but also faster convergence compared to the other algorithms employed. In addition, the statistical tests conducted show that the LSA-based trainer is significantly superior to the current algorithms on the majority of datasets.
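The general recipe of an evolutionary neural-network trainer is easy to show even though LSA's own update rules are not reproduced here: the network's weights are flattened into one vector and a population-based optimizer minimizes the training error instead of backpropagation. The sketch below uses a simple (1+λ) evolution strategy purely as a stand-in for LSA; the tiny MLP, XOR data, and all parameters are illustrative assumptions.

```python
import math, random

def mlp_forward(w, x, n_in=2, n_hid=3):
    """Tiny one-hidden-layer MLP; the weights live in a flat vector,
    which is the encoding an evolutionary trainer optimizes."""
    idx, h = 0, []
    for _ in range(n_hid):
        s = w[idx + n_in]                       # hidden bias
        for i in range(n_in):
            s += w[idx + i] * x[i]
        idx += n_in + 1
        h.append(math.tanh(s))
    out = w[idx + n_hid]                        # output bias
    for j in range(n_hid):
        out += w[idx + j] * h[j]
    return out

def mse(w, data):
    return sum((mlp_forward(w, x) - y) ** 2 for x, y in data) / len(data)

def evolve_weights(data, dim=13, lam=20, iters=300, sigma=0.3, seed=0):
    """Stand-in evolutionary trainer: a (1+lambda) ES, not LSA itself."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 0.5) for _ in range(dim)]
    for _ in range(iters):
        kids = [[wi + rng.gauss(0, sigma) for wi in best] for _ in range(lam)]
        cand = min(kids, key=lambda w: mse(w, data))
        if mse(cand, data) < mse(best, data):
            best = cand
    return best

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = evolve_weights(xor)
print(round(mse(w, xor), 4))
```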


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of strong stochastic methods for global optimization and have proven more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the convergence of the algorithm to the global optimum remains slow. In order to accelerate convergence, a hybrid approach is proposed that combines the nonlinear simplex method (Nelder-Mead) with an adaptive scheme to control when the local search is applied, and the authors demonstrate that such a combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that this hybridization improves performance in terms of solution quality and convergence robustness.
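A minimal sketch of the hybrid idea, assuming a simple (1+λ) evolution strategy and a stagnation-triggered adaptive scheme (the paper's exact control rule may differ): when the ES stops improving for several generations, a Nelder-Mead simplex search refines the incumbent via SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_es(f, dim=5, lam=10, iters=100, seed=0):
    """Hybrid sketch: (1+lambda) ES with a Nelder-Mead local search applied
    adaptively -- here, only when the ES stagnates for several generations."""
    rng = np.random.default_rng(seed)
    best = rng.uniform(-5, 5, dim)
    best_f, stall = f(best), 0
    for _ in range(iters):
        kids = best + rng.normal(0.0, 0.5, (lam, dim))   # Gaussian mutation
        vals = [f(k) for k in kids]
        i = int(np.argmin(vals))
        if vals[i] < best_f:
            best, best_f, stall = kids[i], vals[i], 0
        else:
            stall += 1
        if stall >= 5:   # adaptive trigger: refine with the simplex method
            res = minimize(f, best, method="Nelder-Mead")
            if res.fun < best_f:
                best, best_f = res.x, res.fun
            stall = 0
    return best, best_f

print(hybrid_es(lambda x: float(np.sum(x ** 2)))[1])   # sphere function
```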

