A Revisit of Infinite Population Models for Evolutionary Algorithms on Continuous Optimization Problems

2020, Vol 28 (1), pp. 55-85
Author(s): Bo Song, Victor O.K. Li

Infinite population models are important tools for studying the population dynamics of evolutionary algorithms. They describe how the distributions of populations change between consecutive generations. In general, infinite population models are derived from Markov chains by exploiting symmetries between individuals in the population and analyzing the limit as the population size goes to infinity. In this article, we study the theoretical foundations of infinite population models of evolutionary algorithms on continuous optimization problems. First, we show that the convergence proofs in a widely cited study were in fact problematic and incomplete. We further show that the modeling assumption of exchangeability of individuals cannot yield the transition equation. Then, in order to analyze infinite population models, we build an analytical framework based on convergence in distribution of random elements that take values in the metric space of infinite sequences. The framework is concise and mathematically rigorous. It also provides an infrastructure for studying the convergence of the stacking of operators and of iterating the algorithm, which previous studies failed to address. Finally, we use the framework to prove the convergence of infinite population models for the mutation operator and the [Formula: see text]-ary recombination operator. We show that these models can accurately predict real population dynamics as the population size goes to infinity, provided that the initial population is independent and identically distributed.
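To make the limiting behaviour concrete, here is a minimal Python sketch (not the authors' analytical framework): for an i.i.d. initial population and a Gaussian mutation operator, the infinite population model predicts that the next generation follows the convolution of the initial density with the mutation kernel, i.e. N(0, 1 + sigma^2) for N(0, 1) individuals and N(0, sigma^2) mutation noise. The population sizes and mutation strength below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                 # mutation strength (illustrative assumption)
limit_var = 1.0 + sigma**2  # variance predicted by the infinite population model

for n in (10**2, 10**4, 10**6):  # growing population sizes
    parents = rng.normal(0.0, 1.0, size=n)                # i.i.d. N(0, 1) initial population
    offspring = parents + rng.normal(0.0, sigma, size=n)  # Gaussian mutation operator
    # The empirical variance should approach the infinite population
    # model's prediction as the population size grows.
    print(f"n={n:>8}  empirical var={offspring.var():.4f}  model var={limit_var:.4f}")
```

As n increases, the empirical variance settles near 1.25, matching the infinite population prediction.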

2016, Vol 369, pp. 419-440
Author(s): Harold Dias de Mello Junior, Luis Martí, André V. Abs da Cruz, Marley M.B. Rebuzzi Vellasco

2015, Vol 137 (7)
Author(s): Jong-Chen Chen

Continuous optimization plays an increasingly significant role in everyday decision-making. Our group previously developed a multilevel system, the artificial neuromolecular (ANM) system, whose structural richness allows variation and/or selection operators to act on it to generate a broad range of dynamic behaviors. In this paper, we used the ANM system to control the motions of a wooden walking robot named Miky. The robot was used to investigate the ANM system's capability to deal with continuous optimization problems through self-organized learning. An evolutionary learning algorithm was used to train the system and generate appropriate control. The experimental results showed that Miky was capable of learning continually in a physical environment. A further experiment was conducted in which Miky's physical structure was altered in order to observe the system's capability to deal with the change. Detailed analysis of the results showed that Miky responded by appropriately adjusting its leg movements in space and time, demonstrating that the ANM system possessed the continuous optimization capability needed to cope with the change. These empirical findings may offer another perspective on how to design intelligent systems that are friendlier than traditional systems in assisting humans to walk.
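The variation-and-selection training loop described above can be sketched generically; the Python toy below is not the ANM system: the controller is reduced to a hypothetical vector of per-leg phase offsets, and walking_distance stands in for the fitness measured on the physical robot.

```python
import numpy as np

rng = np.random.default_rng(1)

def walking_distance(phases: np.ndarray) -> float:
    """Placeholder fitness: rewards evenly spaced leg phases (an assumption)."""
    target = np.linspace(0.0, 2 * np.pi, len(phases), endpoint=False)
    return -float(np.sum((np.sort(phases % (2 * np.pi)) - target) ** 2))

controller = rng.uniform(0.0, 2 * np.pi, size=4)  # one phase offset per leg
best_fitness = walking_distance(controller)

for generation in range(200):
    # Variation: perturb the current controller with Gaussian noise.
    candidate = controller + rng.normal(0.0, 0.1, size=controller.shape)
    fitness = walking_distance(candidate)
    # Selection: keep the candidate only if it "walks" farther.
    if fitness > best_fitness:
        controller, best_fitness = candidate, fitness

print("evolved phase offsets:", np.round(controller, 3))
```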


2020, Vol 34 (05), pp. 7111-7118
Author(s): Moumita Choudhury, Saaduddin Mahmud, Md. Mosaddek Khan

Distributed Constraint Optimization Problems (DCOPs) are a widely studied constraint-handling framework. The objective of a DCOP algorithm is to optimize a global objective function that can be described as the aggregation of several distributed constraint cost functions. In a DCOP, each of these functions is defined over a set of discrete variables. However, in many applications, such as target tracking or sleep scheduling in sensor networks, continuous-valued variables are better suited than discrete ones. Considering this, Functional DCOPs (F-DCOPs) have been proposed, which can explicitly model problems containing continuous variables. Nevertheless, state-of-the-art F-DCOP approaches incur onerous memory or computation overhead. To address this issue, we propose a new F-DCOP algorithm, namely Particle Swarm based F-DCOP (PFD), inspired by the meta-heuristic Particle Swarm Optimization (PSO). Although PSO has been successfully applied to many continuous optimization problems, its potential has not been utilized in F-DCOPs. To be exact, PFD devises a distributed method of solution construction while significantly reducing the computation and memory requirements. Moreover, we theoretically prove that PFD is an anytime algorithm. Finally, our empirical results indicate that PFD outperforms the state-of-the-art approaches in terms of solution quality and computation overhead.
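Since PFD builds on PSO, a minimal generic PSO loop on a continuous test function is sketched below in Python; it illustrates only the underlying meta-heuristic, not the distributed PFD algorithm, and the swarm size, coefficients, and sphere objective are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(x: np.ndarray) -> float:
    return float(np.sum(x**2))  # sphere function (assumed test problem)

dim, swarm = 5, 30
w, c1, c2 = 0.7, 1.5, 1.5       # inertia and acceleration coefficients (assumed)

pos = rng.uniform(-5.0, 5.0, size=(swarm, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
    # Velocity update pulls each particle toward its personal best and the global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best value found:", round(objective(gbest), 6))
```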

