Knowledge-based self-adaptation in evolutionary programming using cultural algorithms

Author(s):  
Robert G. Reynolds ◽  
Chan-Jin Chung

Self-adaptation has been frequently employed in evolutionary computation. Angeline [1] defined three distinct levels of adaptation: the population, individual, and component levels. Cultural Algorithms have been shown to provide a framework in which to model self-adaptation at each of these levels. Here, we examine the role that different forms of knowledge can play in the self-adaptation process at the population level for evolution-based function optimizers. In particular, we compare the relative performance of normative and situational knowledge in guiding the search process. An acceptance function using a fuzzy inference engine is employed to select the individuals whose experience forms the generalized knowledge in the belief space. Evolutionary Programming is used to implement the population space. The results suggest that the use of a cultural framework can produce substantial improvements in execution time and accuracy over population-only evolutionary systems for a given set of function minimization problems.
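One way to picture the fuzzy acceptance function mentioned in the abstract: score each individual's membership in a "good" fitness set and admit those above a threshold that tightens as evolution proceeds. The linear membership function and the 0.5 starting threshold below are illustrative assumptions, not the paper's actual inference rules; a minimal Python sketch:

```python
def fuzzy_acceptance(population, fitnesses, generation, max_gen):
    """Hedged sketch of a fuzzy acceptance function (minimization):
    accept individuals whose membership in the 'good' set exceeds a
    threshold that grows stricter over the run. Membership shape and
    threshold schedule are assumptions, not the paper's rules."""
    best, worst = min(fitnesses), max(fitnesses)
    span = (worst - best) or 1.0  # avoid division by zero on a flat population

    def goodness(f):  # 1.0 at the best fitness, 0.0 at the worst
        return 1.0 - (f - best) / span

    threshold = 0.5 + 0.4 * generation / max_gen  # stricter as generations pass
    return [ind for ind, f in zip(population, fitnesses) if goodness(f) >= threshold]
```

The accepted subset is what would then be generalized into the belief space.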


1998 ◽  
Vol 07 (03) ◽  
pp. 239-291 ◽  
Author(s):  
Chan-Jin Chung ◽  
Robert G. Reynolds

Cultural Algorithms are computational self-adaptive models that consist of a population space and a belief space. The problem-solving experience of individuals selected from the population space by the acceptance function is generalized and stored in the belief space. This knowledge can then control the evolution of the population component by means of the influence function. Here, we examine the role that different forms of knowledge can play in the self-adaptation process within cultural systems. In particular, we compare various approaches that use normative and situational knowledge in different ways to guide the function optimization process. The results of this study demonstrate that Cultural Algorithms are a natural framework for self-adaptation, and that using a cultural framework to support self-adaptation in Evolutionary Programming can produce substantial performance improvements over population-only systems, as expressed in terms of (1) success ratio, (2) execution CPU time, and (3) convergence (mean best solution) for a given set of 34 function minimization problems. The nature of these improvements, and the type of knowledge most effective in producing them, depend on the problem's functional landscape. The same held true for the population-only self-adaptive EP systems: each level of self-adaptation (component, individual, and population) outperformed the others on problems with particular landscape features.
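The acceptance → generalization → influence cycle described in the abstract can be sketched as a loop over a population space and a belief space. The top-fraction acceptance rule, the per-dimension normative intervals, and the plus-selection used here are simplifying assumptions for illustration, not the authors' exact operators:

```python
import random

def cultural_algorithm(fitness, dim, bounds, generations=100, pop_size=20, accept_frac=0.2):
    """Minimal cultural-algorithm loop: an EP-style population space plus a
    belief space holding normative knowledge (one interval per dimension,
    generalized from the accepted individuals each generation)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    norms = [[lo, hi] for _ in range(dim)]  # normative knowledge: per-dimension ranges
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        # Acceptance function: the best fraction of the population updates beliefs.
        accepted = scored[: max(1, int(accept_frac * pop_size))]
        for d in range(dim):
            vals = [ind[d] for ind in accepted]
            norms[d] = [min(vals), max(vals)]
        # Influence function: mutation step sizes follow the normative intervals.
        offspring = []
        for ind in scored:
            child = [min(max(x + random.gauss(0.0, (norms[d][1] - norms[d][0]) or 1e-9), lo), hi)
                     for d, x in enumerate(ind)]
            offspring.append(child)
        pop = sorted(scored + offspring, key=fitness)[:pop_size]  # plus-selection
    return min(pop, key=fitness)
```

Because the mutation step size in each dimension tracks the spread of the accepted individuals, the search self-adapts at the population level: the belief space contracts as good solutions cluster, narrowing the search without any per-individual strategy parameters.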


Author(s):  
Thomas Bäck

Given the discussions about Evolutionary Algorithms from the previous chapters, we shall now apply them to the artificial topologies just presented. This is done by simply running the algorithms in their standard forms (as defined in sections 2.1.6, 2.2.6, and 2.3.6) for a reasonable number of function evaluations on these problems. The experiment compares an algorithm that self-adapts n standard deviations and uses recombination (the Evolution Strategy), an algorithm that self-adapts n standard deviations and renounces recombination (meta-Evolutionary Programming), and an algorithm that renounces self-adaptation but stresses the role of recombination (the Genetic Algorithm). Furthermore, all algorithms rely on different selection mechanisms. With respect to the level of self-adaptation, the choice of the Evolution Strategy and Evolutionary Programming variants is fair, while the Genetic Algorithm leaves us no choice (i.e., no self-adaptation mechanism is used within the standard Genetic Algorithm). Concerning the population size, the number of offspring individuals (λ) is adjusted to a common value of λ = 100 in order to achieve comparability of population sizes while at the same time limiting the computational requirements to a justifiable amount. This results in the following three algorithmic instances, compared here using the standard notation introduced in chapter 2:

• ES(n,0,rdI, s(15,100)): An Evolution Strategy that self-adapts n standard deviations but does not use correlated mutations. Recombination is discrete on object variables and global intermediate on standard deviations, and the algorithm uses a (15,100)-selection mechanism.
• mEP(6,10,100): A meta-Evolutionary Programming algorithm that, by default, self-adapts n variances and controls the mutation of variances by a meta-parameter ζ = 6. The tournament size for selection and the population size amount to q = 10 and μ = 100, respectively.
• GA(30, 0.001, r{0.6, 2}, 5, 100): A Genetic Algorithm that evolves a population of μ = 100 bitstrings of length l = 30 · n each. The scaling window size for linear dynamic scaling is set to ω = 5. Proportional selection, a two-point crossover operator with application rate 0.6, and a mutation operator with bit-reversal probability 1.0 · 10⁻³ complete the algorithm.
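The "self-adapts n standard deviations" mechanism shared by the ES and meta-EP instances is commonly realized with a log-normal update of one step size per object variable. The sketch below uses the familiar 1/√(2√n) and 1/√(2n) learning-rate heuristics as assumptions; Bäck's exact parameterizations (e.g. meta-EP's ζ-controlled variance mutation) differ:

```python
import math
import random

def self_adaptive_mutate(x, sigmas, tau=None, tau_prime=None):
    """Log-normal self-adaptation of n standard deviations: each sigma is
    multiplied by exp(tau' * N(0,1) + tau * N_i(0,1)) before the object
    variables are perturbed. The default learning rates follow the common
    1/sqrt(2*sqrt(n)) and 1/sqrt(2n) heuristics (an assumption here, not
    the book's exact settings)."""
    n = len(x)
    tau = tau if tau is not None else 1.0 / math.sqrt(2.0 * math.sqrt(n))
    tau_prime = tau_prime if tau_prime is not None else 1.0 / math.sqrt(2.0 * n)
    g = random.gauss(0.0, 1.0)  # one global draw shared by all step sizes
    new_sigmas = [s * math.exp(tau_prime * g + tau * random.gauss(0.0, 1.0)) for s in sigmas]
    new_x = [xi + s * random.gauss(0.0, 1.0) for xi, s in zip(x, new_sigmas)]
    return new_x, new_sigmas
```

Because the step sizes are mutated multiplicatively and then selected along with the individuals that carry them, well-scaled sigmas hitchhike on good solutions, which is precisely the self-adaptation that the GA instance in the comparison lacks.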

