The Limits of Estimation of Distribution Algorithms

2014 ◽  
Vol 926-930 ◽  
pp. 3294-3297
Author(s):  
Cai Chang Ding ◽  
Wen Xiu Peng ◽  
Wei Ming Wang

In this paper, we study the limits of EDAs' ability to solve problems effectively as the number of interactions among the variables grows. In particular, we numerically analyze the learning limits that different EDA implementations encounter on a sequence of additively decomposable functions (ADFs) to which new sub-functions are progressively added. The study is carried out in a worst-case scenario where the sub-functions are defined as deceptive functions. We argue that the limits of this type of algorithm are mainly imposed by the probabilistic model they rely on. Beyond the limitations of the approximate learning methods, the results suggest that, in general, the use of Bayesian networks can entail strong computational restrictions when trying to overcome these limits of applicability.
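The worst-case construction described above can be sketched with a standard k-bit deceptive trap sub-function summed over non-overlapping blocks. This is an illustrative sketch, not the authors' code; the block size k = 4 and the block boundaries are assumptions.

```python
def trap(block):
    """k-bit deceptive trap: global optimum at all ones,
    but the unitation gradient misleads search toward all zeros."""
    k = len(block)
    u = sum(block)
    return k if u == k else k - 1 - u

def adf(x, k=4):
    """Additively decomposable function: sum of trap sub-functions
    over consecutive, non-overlapping k-bit blocks."""
    return sum(trap(x[i:i + k]) for i in range(0, len(x), k))

print(adf([1] * 8))  # 8: the global optimum for n=8, k=4
print(adf([0] * 8))  # 6: the deceptive attractor
```

Within each block, flipping a one to a zero increases fitness everywhere except at the optimum itself, which is what makes the function deceptive for models that miss the block-level interactions.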

2014 ◽  
Vol 926-930 ◽  
pp. 3594-3597
Author(s):  
Cai Chang Ding ◽  
Wen Xiu Peng ◽  
Wei Ming Wang

Estimation of Distribution Algorithms (EDAs) are a family of algorithms in the field of Evolutionary Computation. EDAs use neither crossover nor mutation operators. Instead, the new population of individuals is sampled from a probability distribution estimated from a database containing the individuals selected in the previous generation. Thus, the interrelations between the variables that represent the individuals can be explicitly expressed through the joint probability distribution associated with the individuals selected at each generation.
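The estimate-then-sample loop described above can be sketched with the simplest EDA, a univariate (UMDA-style) model. The OneMax fitness function and all parameter values here are illustrative assumptions.

```python
import random

def umda_step(population, fitness, n_selected, rng=random):
    # Truncation selection: keep the best individuals.
    selected = sorted(population, key=fitness, reverse=True)[:n_selected]
    # Estimate one marginal probability per variable from the selected set.
    n = len(selected[0])
    probs = [sum(ind[i] for ind in selected) / n_selected for i in range(n)]
    # Sample the new population from the estimated distribution;
    # no crossover or mutation is applied.
    return [[1 if rng.random() < p else 0 for p in probs]
            for _ in range(len(population))]

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(50)]
for _ in range(30):
    pop = umda_step(pop, sum, 25, rng)
print(max(sum(ind) for ind in pop))  # best OneMax fitness after 30 generations
```

Because the model here factorizes over single variables, it captures no interrelations at all; richer EDAs replace the marginal product with a joint model such as a Bayesian network.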


2013 ◽  
Vol 21 (3) ◽  
pp. 471-495 ◽  
Author(s):  
Carlos Echegoyen ◽  
Alexander Mendiburu ◽  
Roberto Santana ◽  
Jose A. Lozano

Understanding the relationship between a search algorithm and the space of problems is a fundamental issue in the optimization field. In this paper, we lay the foundations to elaborate taxonomies of problems under estimation of distribution algorithms (EDAs). By using an infinite population model and assuming that the selection operator is based on the rank of the solutions, we group optimization problems according to the behavior of the EDA. Through the definition of an equivalence relation between functions, it is possible to partition the space of problems into equivalence classes in which the algorithm has the same behavior. We show that only the probabilistic model is able to generate different partitions of the set of possible problems and hence it predetermines the number of different behaviors that the algorithm can exhibit. As a natural consequence of our definitions, all the objective functions are in the same equivalence class when the algorithm does not impose restrictions on the probabilistic model. The taxonomy of problems, which is also valid for finite populations, is studied in depth for a simple EDA that assumes independence among the variables of the problem. We provide a necessary and sufficient condition to decide the equivalence between functions, and we then develop the operators to describe and count the members of a class. In addition, we show the intrinsic relation between univariate EDAs and the neighborhood system induced by the Hamming distance by proving that all the functions in the same class have the same number of local optima and that these occupy the same ranking positions. Finally, we carry out numerical simulations in order to analyze the different behaviors that the algorithm can exhibit for the functions defined over the search space [Formula: see text].
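The class invariant stated above, that all functions in the same equivalence class share their number of Hamming local optima, can be checked by brute force on small instances. This helper is an illustrative sketch, not the paper's operators; the two example functions (OneMax and a symmetric twin-peaks function) are assumptions.

```python
from itertools import product

def hamming_neighbors(x):
    """All binary tuples at Hamming distance 1 from x."""
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

def count_local_optima(f, n):
    """Count points of {0,1}^n that are maxima with respect to
    the neighborhood induced by Hamming distance 1."""
    return sum(
        1
        for x in product((0, 1), repeat=n)
        if all(f(x) >= f(y) for y in hamming_neighbors(x))
    )

print(count_local_optima(sum, 4))                                # 1
print(count_local_optima(lambda x: max(sum(x), 4 - sum(x)), 4))  # 2
```

OneMax has a single Hamming local optimum (the all-ones string), while the twin-peaks function has two, so the two functions necessarily fall in different equivalence classes for a univariate EDA.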


2005 ◽  
Vol 13 (1) ◽  
pp. 43-66 ◽  
Author(s):  
J. M. Peña ◽  
J. A. Lozano ◽  
P. Larrañaga

Many optimization problems are what can be called globally multimodal, i.e., they present several global optima. Unfortunately, this is a major source of difficulty for most estimation of distribution algorithms, degrading their effectiveness and efficiency due to genetic drift. With the aim of overcoming these drawbacks for the optimization of discrete globally multimodal problems, this paper introduces and evaluates a new estimation of distribution algorithm based on unsupervised learning of Bayesian networks. We report the satisfactory results of our experiments with symmetrical binary optimization problems.
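The difficulty described above can be made concrete with a small calculation. This is an illustrative sketch, not the paper's benchmark: fitting a univariate model to the two global optima of the symmetric function f(x) = max(u, n - u) (with u the number of ones) yields marginals of 0.5 everywhere, so resampling either optimum becomes exponentially unlikely, and genetic drift then commits the model to one peak at random.

```python
n = 20
# The two global optima of the symmetric function max(u, n - u).
optima = [(1,) * n, (0,) * n]
# Univariate marginals estimated from the two optima: 0.5 per bit.
probs = [sum(x[i] for x in optima) / len(optima) for i in range(n)]
# Probability that one sample from this model hits either optimum.
p_any_optimum = 2 * (0.5 ** n)
print(probs[0], p_any_optimum)  # 0.5 1.9073486328125e-06
```

A model able to represent the dependence between variables (e.g., a Bayesian network with latent cluster structure) can instead keep the two optima as separate modes.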


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2137
Author(s):  
Margarita Antoniou ◽  
Gregor Papa

Worst-case scenario optimization deals with the minimization of the maximum output over all scenarios of a problem, and it is usually formulated as a min-max problem. Employing nested evolutionary algorithms to solve the problem requires numerous function evaluations. This work proposes a differential evolution combined with an estimation of distribution algorithm. The algorithm has a nested form, where a differential evolution is applied to both the design-space and the scenario-space optimization. To reduce the computational cost, we estimate the distribution of the best worst-case solutions found so far. The probabilistic model is used to sample part of the initial population of the scenario-space differential evolution, exploiting a priori knowledge from the previous generations. The method is compared with a state-of-the-art algorithm on both benchmark problems and an engineering application, and the related results are reported.
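The min-max formulation addressed above can be sketched by exhaustive search on a toy problem; the objective f(x, s) = (x - s)^2 and the discretized grids are illustrative assumptions, not the paper's benchmarks.

```python
def f(x, s):
    # x: design variable, s: scenario variable.
    return (x - s) ** 2

designs = [i / 10 for i in range(11)]    # x in [0, 1]
scenarios = [i / 10 for i in range(11)]  # s in [0, 1]

# Worst-case cost of each design: the inner max over scenarios.
worst = {x: max(f(x, s) for s in scenarios) for x in designs}
# Outer minimization over designs: min_x max_s f(x, s).
best_x = min(worst, key=worst.get)
print(best_x, worst[best_x])  # 0.5 0.25
```

The nested algorithm in the paper replaces both exhaustive loops with differential evolution runs, which is exactly why the inner loop's evaluation budget dominates the cost and why seeding it from an estimated distribution of past worst-case scenarios helps.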

