A Survey on Recent Progress in the Theory of Evolutionary Algorithms for Discrete Optimization

2021 ◽  
Vol 1 (4) ◽  
pp. 1-43
Author(s):  
Benjamin Doerr ◽  
Frank Neumann

The theory of evolutionary computation for discrete search spaces has made significant progress since the early 2010s. This survey summarizes some of the most important recent results in this research area. It discusses fine-grained models of runtime analysis of evolutionary algorithms, highlights recent theoretical insights on parameter tuning and parameter control, and summarizes the latest advances for stochastic and dynamic problems. We discuss how evolutionary algorithms optimize submodular functions, and we give an overview of the large body of recent results on estimation of distribution algorithms. Finally, we present the state of the art of drift analysis, one of the most powerful analysis techniques developed in this field.
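As orientation for readers unfamiliar with drift analysis, the classical additive drift theorem can be stated as follows; this is a standard textbook formulation given only as a pointer, not the survey's own (more general) presentation:

```latex
% Additive drift theorem (classical formulation, for orientation only).
% (X_t)_{t \ge 0}: a non-negative stochastic process measuring the distance
% of the current search point from the optimum; T: the first time X_t = 0.
\[
  \text{If}\quad \mathbb{E}\!\left[X_t - X_{t+1} \mid X_t = s\right] \;\ge\; \delta
  \quad\text{for all } s > 0 \text{ and some fixed } \delta > 0,
\]
\[
  \text{then}\quad \mathbb{E}\!\left[T \mid X_0\right] \;\le\; \frac{X_0}{\delta}.
\]
```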

2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Peng-Yeng Yin ◽  
Hsi-Li Wu

The estimation of distribution algorithm (EDA) aims to explicitly model the probability distribution of quality solutions to the underlying problem. By iteratively filtering quality solutions from competing ones, the probability model eventually approximates the distribution of globally optimal solutions. In contrast to classic evolutionary algorithms (EAs), the EDA framework is flexible and able to handle inter-variable dependence, which usually imposes difficulties on classic EAs. The success of EDA relies on building the probability model effectively and efficiently. This paper enhances EDA with strategies from the adaptive memory programming (AMP) domain, which has produced several improved forms of EAs under the Cyber-EA framework. The experimental results on benchmark TSP instances support our anticipation that the AMP strategies can enhance the performance of classic EDA by deriving a better approximation of the true distribution of the target solutions.
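To make the EDA loop described above concrete (sample from an explicit probability model, select quality solutions, re-estimate the model), here is a minimal univariate sketch in the style of UMDA on bit strings. It is not the AMP/Cyber-EA-enhanced method of this paper; the fitness function onemax and all parameter values are illustrative assumptions only.

```python
import random

def onemax(x):
    """Illustrative fitness function: number of ones in the bit string."""
    return sum(x)

def umda(f, n, pop_size=100, parents=50, generations=200):
    """Minimal univariate EDA (UMDA-style) sketch.

    Maintains an explicit probability vector p, where p[i] is the
    estimated probability that bit i equals 1 in quality solutions.
    """
    p = [0.5] * n  # start from the uniform distribution over {0,1}^n
    best = None
    for _ in range(generations):
        # Sample a population from the current probability model.
        population = [[1 if random.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop_size)]
        # Filter: keep the best individuals ("quality solutions").
        population.sort(key=f, reverse=True)
        selected = population[:parents]
        if best is None or f(selected[0]) > f(best):
            best = selected[0]
        # Re-estimate the model from the selected individuals, clamping
        # frequencies away from 0 and 1 to preserve sampling diversity.
        for i in range(n):
            freq = sum(x[i] for x in selected) / parents
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
    return best

if __name__ == "__main__":
    solution = umda(onemax, n=50)
    print(onemax(solution))
```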


2018 ◽  
Vol 27 (4) ◽  
pp. 643-666 ◽  
Author(s):  
J. Lengler ◽  
A. Steger

One of the simplest randomized greedy optimization algorithms is the following evolutionary algorithm, which aims at maximizing a function f: {0,1}^n → ℝ. The algorithm starts with a random search point ξ ∈ {0,1}^n, and in each round it flips each bit of ξ independently at random with probability c/n, where c > 0 is a fixed constant. The offspring ξ' created in this way replaces ξ if and only if f(ξ') ≥ f(ξ). The analysis of the runtime of this simple algorithm for monotone and for linear functions has turned out to be highly non-trivial. In this paper we review known results and provide new and self-contained proofs of partly stronger results.
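Since the abstract specifies the algorithm completely, it can be written down directly; the sketch below follows the description verbatim (random initial point, independent bit flips with probability c/n, acceptance iff f(ξ') ≥ f(ξ)). The round budget and the example fitness function are illustrative assumptions, as the analysed algorithm runs until an optimum is reached.

```python
import random

def one_plus_one_ea(f, n, c=1.0, max_rounds=100_000):
    """(1+1) evolutionary algorithm as described in the abstract:
    flip each bit independently with probability c/n and accept the
    offspring iff its fitness is at least as good as the parent's.
    (A round budget is added here only to keep the sketch terminating.)"""
    xi = [random.randint(0, 1) for _ in range(n)]  # random point in {0,1}^n
    for _ in range(max_rounds):
        offspring = [b ^ 1 if random.random() < c / n else b for b in xi]
        if f(offspring) >= f(xi):  # accept on ties as well
            xi = offspring
    return xi

if __name__ == "__main__":
    # Illustrative linear function f(x) = sum_i w_i * x_i with weights w_i = i + 1.
    n = 40
    weights = [i + 1 for i in range(n)]
    linear = lambda x: sum(w * b for w, b in zip(weights, x))
    best = one_plus_one_ea(linear, n)
    print(linear(best), "out of", sum(weights))
```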

