Spatially Structured Evolutionary Algorithms: Graph Degree, Population Size and Convergence Speed

2011 ◽  
Vol 403-408 ◽  
pp. 1834-1838
Author(s):  
Carlos M. Fernandes ◽  
Juan L. J. Laredo ◽  
Agostinho C. Rosa
Author(s):  
Jing Zhao ◽  
Chong Zhao Han ◽  
Bin Wei ◽  
De Qiang Han

Discretization of continuous attributes plays an important role in machine learning and data mining. It can not only improve the performance of the classifier but also reduce storage space. The Univariate Marginal Distribution Algorithm (UMDA) is a modified evolutionary algorithm with some advantages over classical evolutionary algorithms, such as fast convergence speed and few parameters to tune. In this paper, we propose a bottom-up, global, dynamic, and supervised discretization method based on the Univariate Marginal Distribution Algorithm. The experimental results show that the proposed method can effectively improve the accuracy of the classifier.
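The abstract does not include code; as an illustrative sketch of the underlying optimizer only (not the authors' discretization method), a minimal UMDA on the standard OneMax benchmark shows the algorithm's structure: sample a population from per-bit marginal probabilities, select the best individuals, and re-estimate the marginals. All names and parameter values here are assumptions.

```python
import random

def umda_onemax(n_bits=20, pop_size=50, n_select=25, generations=60, seed=0):
    """Minimal UMDA sketch: each generation, sample from per-bit
    marginals, truncate-select, and re-estimate the marginals."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # marginal probability that each bit is 1
    best = 0
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)  # OneMax fitness = number of ones
        best = max(best, sum(pop[0]))
        selected = pop[:n_select]
        for i in range(n_bits):
            freq = sum(ind[i] for ind in selected) / n_select
            # Clamp marginals away from 0 and 1 to preserve diversity
            p[i] = min(max(freq, 1.0 / n_bits), 1.0 - 1.0 / n_bits)
    return best
```

With these settings the sampled marginals converge quickly toward the all-ones optimum, reflecting the fast convergence and small parameter set the abstract attributes to UMDA.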


2009 ◽  
Vol 01 (02) ◽  
pp. 108-119
Author(s):  
Oscar MONTIEL ◽  
Oscar CASTILLO ◽  
Patricia MELIN ◽  
Roberto SEPULVEDA

2020 ◽  
Vol 28 (1) ◽  
pp. 55-85
Author(s):  
Bo Song ◽  
Victor O.K. Li

Infinite population models are important tools for studying population dynamics of evolutionary algorithms. They describe how the distributions of populations change between consecutive generations. In general, infinite population models are derived from Markov chains by exploiting symmetries between individuals in the population and analyzing the limit as the population size goes to infinity. In this article, we study the theoretical foundations of infinite population models of evolutionary algorithms on continuous optimization problems. First, we show that the convergence proofs in a widely cited study were in fact problematic and incomplete. We further show that the modeling assumption of exchangeability of individuals cannot yield the transition equation. Then, in order to analyze infinite population models, we build an analytical framework based on convergence in distribution of random elements which take values in the metric space of infinite sequences. The framework is concise and mathematically rigorous. It also provides an infrastructure for studying the convergence of the stacking of operators and of iterating the algorithm, which previous studies failed to address. Finally, we use the framework to prove the convergence of infinite population models for the mutation operator and the [Formula: see text]-ary recombination operator. We show that these operators can provide accurate predictions for real population dynamics as the population size goes to infinity, provided that the initial population is independently and identically distributed.
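A small numerical illustration of the limit this abstract describes (my construction, not taken from the article): for Gaussian mutation applied to an i.i.d. N(0, 1) population, the infinite-population model predicts the next generation's distribution is the convolution N(0, 1 + σ²), and the empirical variance of a finite mutated population approaches that prediction as the population size grows.

```python
import random
import statistics

def mutation_model_error(pop_size, sigma=0.5, seed=1):
    """Gaussian mutation on an i.i.d. N(0, 1) population.
    The infinite-population model predicts the mutated generation
    is N(0, 1 + sigma^2); return the gap between the empirical
    variance and that prediction."""
    rng = random.Random(seed)
    pop = [rng.gauss(0, 1) for _ in range(pop_size)]
    mutated = [x + rng.gauss(0, sigma) for x in pop]
    predicted_var = 1 + sigma ** 2
    return abs(statistics.pvariance(mutated) - predicted_var)
```

Comparing `mutation_model_error(100)` against `mutation_model_error(100_000)` shows the prediction error shrinking as the population approaches the infinite-population limit, consistent with the convergence-in-distribution result for the mutation operator.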


2017 ◽  
Vol 186 ◽  
pp. 341-348 ◽  
Author(s):  
Daniel Mora-Melià ◽  
F. Javier Martínez-Solano ◽  
Pedro L. Iglesias-Rey ◽  
Jimmy H. Gutiérrez-Bahamondes

2005 ◽  
Vol 13 (4) ◽  
pp. 413-440 ◽  
Author(s):  
Thomas Jansen ◽  
Kenneth A. De Jong ◽  
Ingo Wegener

Evolutionary algorithms (EAs) generally come with a large number of parameters that have to be set before the algorithm can be used. Finding appropriate settings is a difficult task. The influence of these parameters on the efficiency of the search performed by an evolutionary algorithm can be very high, but there is still a lack of theoretically justified guidelines to help the practitioner find good values for these parameters. One such parameter is the offspring population size. Using a simplified but still realistic evolutionary algorithm, a thorough analysis of the effects of the offspring population size is presented. The result is a much better understanding of the role of the offspring population size in an EA, and it suggests a simple way to dynamically adapt this parameter when necessary.
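The abstract does not spell out the adaptation rule; one simple scheme in the spirit it describes (a hypothetical sketch, not necessarily the authors' exact mechanism) is a (1+λ) EA on OneMax that doubles the offspring population size λ after an unsuccessful generation and halves it after an improvement. Function names and parameter values below are assumptions.

```python
import random

def one_plus_lambda_onemax(n_bits=30, max_evals=20_000, seed=0):
    """(1+lambda) EA on OneMax with adaptive offspring population
    size: double lambda after a generation without improvement,
    halve it after a success. Returns (best fitness, evaluations)."""
    rng = random.Random(seed)
    parent = [rng.randrange(2) for _ in range(n_bits)]
    lam, evals = 1, 0
    while sum(parent) < n_bits and evals < max_evals:
        offspring = []
        for _ in range(lam):
            # Standard bit mutation with rate 1/n
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]
            offspring.append(child)
            evals += 1
        best = max(offspring, key=sum)
        if sum(best) > sum(parent):
            lam = max(1, lam // 2)       # success: shrink lambda
        else:
            lam = min(2 * lam, n_bits)   # no gain: grow lambda (capped)
        if sum(best) >= sum(parent):     # (1+lambda) elitist selection
            parent = best
    return sum(parent), evals
```

The intuition matches the analysis sketched in the abstract: small λ wastes few evaluations when progress is easy, while large λ raises the chance of an improving offspring when progress stalls.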

