Cooperative Coevolution
Recently Published Documents

TOTAL DOCUMENTS: 219 (past five years: 52)
H-INDEX: 22 (past five years: 5)

2021
Author(s): Carlton Downey

Linear Genetic Programming (LGP) is a powerful problem-solving technique, but one with several significant weaknesses. LGP programs consist of a linear sequence of instructions, where each instruction may reuse previously computed results. This structure makes LGP programs compact and powerful; however, it also introduces the problem of instruction dependencies: certain instructions rely on the results of other instructions. These dependencies are often disrupted during crossover or mutation, when one or more instructions undergo modification. This disruption can cause disproportionately large changes in program output, resulting in non-viable offspring and poor algorithm performance. Motivated by biology and the issue of code disruption, we develop a new form of LGP called Parallel LGP (PLGP). PLGP programs consist of n lists of instructions. These lists are executed in parallel, and the resulting vectors are summed to produce the overall program output. PLGP limits the disruptive effects of crossover and mutation, which allows PLGP to significantly outperform regular LGP. We examine the PLGP architecture and determine that large PLGP programs can be slow to converge. To improve the convergence time of large PLGP programs we develop a new form of PLGP called Cooperative Coevolution PLGP (CC PLGP). CC PLGP adapts the concept of cooperative coevolution to the PLGP architecture. CC PLGP optimizes all program components in parallel, allowing CC PLGP to converge significantly faster than conventional PLGP. We examine the CC PLGP architecture and determine that performance…
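As a minimal sketch of the execution model described above (not the authors' implementation; the instruction encoding and register conventions are assumptions), a PLGP program can be evaluated by running each instruction list on its own registers and summing the resulting vectors:

```python
import operator
import numpy as np

def execute_list(instructions, inputs, n_registers):
    """Execute one linear instruction list on its own register set.
    Each instruction is (dest, op, src1, src2); later instructions may
    read registers written by earlier ones, which is exactly the source
    of instruction dependencies in ordinary LGP."""
    regs = np.zeros(n_registers)
    regs[:len(inputs)] = inputs
    for dest, op, a, b in instructions:
        regs[dest] = op(regs[a], regs[b])
    return regs

def evaluate_plgp(program, inputs, n_registers):
    """PLGP: execute the instruction lists independently and sum the
    resulting register vectors to form the overall program output."""
    return np.sum([execute_list(lst, inputs, n_registers) for lst in program], axis=0)

# A two-list program over 3 registers; mutating one list cannot break
# dependencies inside the other, since the lists share no registers.
program = [
    [(2, operator.add, 0, 1)],   # list 1: r2 = r0 + r1
    [(2, operator.mul, 0, 1)],   # list 2: r2 = r0 * r1
]
print(evaluate_plgp(program, inputs=[2.0, 3.0], n_registers=3))  # [ 4.  6. 11.]
```

Because each list writes only to its own registers, a mutation can disrupt dependencies within at most one list, which is why the disruption is bounded.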




2021
Author(s): Rohitash Chandra

One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution, a method that decomposes the network's learnable parameters into subsets called subcomponents. Cooperative coevolution gains an advantage over other methods by evolving particular subcomponents independently from the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis suggests new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems. Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and analyse the nature of the neural network optimization problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that generalizes to neural networks with more than a single hidden layer. We then incorporate local search into cooperative neuro-evolution, presenting a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimization process changes during evolution in terms of diversity and interacting variables; to address this, we examine adapting the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimization time, scalability and robustness. As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods to training recurrent neural networks on chaotic time series problems, where the proposed methods show better accuracy and robustness.
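A hedged sketch of the training loop this abstract describes (a generic round-robin cooperative coevolution with Gaussian mutation; the thesis's specific decomposition and evolutionary operators differ):

```python
import numpy as np

def cc_neuroevolution(loss, subcomponents, pop_size=20, cycles=50, sigma=0.1):
    """Cooperative coevolution over a decomposed weight vector.

    `subcomponents` is a list of index arrays that partition the
    network's weight vector -- the problem decomposition.  Each
    subcomponent gets its own sub-population; an individual is
    evaluated cooperatively, by splicing it into the best full
    solution found so far."""
    dim = sum(len(s) for s in subcomponents)
    best = np.random.randn(dim)
    pops = [np.random.randn(pop_size, len(s)) for s in subcomponents]
    for _ in range(cycles):
        for idx, pop in zip(subcomponents, pops):       # round-robin over subcomponents
            for i in range(pop_size):
                child = pop[i] + sigma * np.random.randn(len(idx))  # Gaussian mutation
                trial = best.copy()
                trial[idx] = child                      # splice into the full network
                if loss(trial) < loss(best):
                    pop[i], best = child, trial
    return best
```

A neuron-level decomposition, for instance, would make each index set in `subcomponents` hold the weights connected to one hidden neuron, so that strongly interacting variables evolve together.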




2021, Vol 2021, pp. 1-16
Author(s): H. D. Yue, Y. Sun

Cooperative coevolution (CC) is an effective framework for solving large-scale global optimization (LSGO) problems. However, CC with a static decomposition method is ineffective on fully nonseparable problems, while CC with a dynamic decomposition method is computationally costly. Therefore, a two-stage decomposition (TSD) method is proposed in this paper to decompose LSGO problems using as few computational resources as possible. In the first stage, to decompose problems using few computational resources, a hybrid-pool differential grouping (HPDG) method is proposed, which contains a hybrid-pool-based detection structure (HPDS) and a unit-vector-based perturbation (UVP) strategy. In the second stage, to decompose the fully nonseparable problems, a known-information-based dynamic decomposition (KIDD) method is proposed. Analysis demonstrates that HPDG has lower decomposition complexity than state-of-the-art static decomposition methods. Experiments show that CC with TSD is a competitive algorithm for solving LSGO problems.
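The HPDG details are specific to this paper, but the differential-grouping idea that such methods refine can be sketched. Here is the standard pairwise interaction test plus a naive O(n²) grouping; the function names, perturbation size, and threshold are illustrative assumptions:

```python
import numpy as np

def interact(f, x, i, j, delta=1.0, eps=1e-6):
    """Differential-grouping style check: variables i and j interact
    if the effect of perturbing x_i changes when x_j moves."""
    e_i = np.zeros_like(x)
    e_i[i] = delta
    d1 = f(x + e_i) - f(x)        # effect of perturbing x_i at the base point
    y = x.copy()
    y[j] += delta                 # move x_j, then repeat the perturbation
    d2 = f(y + e_i) - f(y)
    return abs(d1 - d2) > eps     # nonseparable if the two effects differ

def greedy_grouping(f, dim, **kw):
    """Group variables so interacting ones share a subcomponent.
    (A naive O(n^2) grouping; HPDG's hybrid pool aims to cut this cost.)"""
    x = np.zeros(dim)
    groups = []
    for i in range(dim):
        for g in groups:
            if any(interact(f, x, i, j, **kw) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

The number of `f` evaluations consumed by such detection is exactly the decomposition cost that the two-stage design tries to minimize.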


Author(s): Jiaru Yang, Yu Zhang, Ziqian Wang, Yuki Todo, Bo Lu, ...

The wingsuit flying search (WFS) algorithm mimics the procedure of landing a flight vehicle. Its outstanding features are that it is parameterless and converges rapidly. However, WFS also has shortcomings: owing to its relatively weak exploration ability, it can become trapped in local optima and yield inferior solutions. Spherical evolution (SE) adopts a novel spherical search pattern aimed at strong search ability. Cooperative coevolution is a useful parallel structure for combining the strengths of different algorithms. Considering the complementary strengths of the two algorithms, we herein propose a new hybrid algorithm comprising SE and WFS under cooperative coevolution. During the search for optimal solutions in WFS, we replace the original search matrix with the spherical search mechanism of SE, and use coevolution in parallel to enhance the competitiveness of the population. The two distinct search dynamics are combined in a parallel, coevolutionary way, yielding good search performance. The resultant hybrid algorithm, CCWFSSE, was tested on the CEC2017 benchmark set and 22 CEC 2011 real-world problems. The experimental data verify that CCWFSSE outperforms other algorithms in terms of effectiveness and robustness.
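The exact WFS and SE update rules are defined in the paper; what can be sketched safely is the cooperative-coevolution shell that lets two search dynamics evolve separate populations while sharing the best solution. The two `update` operators below are illustrative stand-ins, not the published rules:

```python
import numpy as np

def cc_hybrid(f, dim, searchers, pop_size=20, iters=100):
    """Cooperative-coevolution shell for hybridising two search
    dynamics: each searcher evolves its own population, and the
    populations cooperate by sharing the best solution found so far."""
    pops = [np.random.uniform(-5, 5, (pop_size, dim)) for _ in searchers]
    best = min((row for pop in pops for row in pop), key=f)
    for _ in range(iters):
        for k, update in enumerate(searchers):
            pops[k] = update(pops[k], best)
            cand = min(pops[k], key=f)
            if f(cand) < f(best):
                best = cand
    return best

# Illustrative stand-in operators (assumptions, not the WFS/SE rules):
def toward_best(pop, best):
    # drift toward the shared best with Gaussian noise
    return pop + 0.5 * (best - pop) + 0.1 * np.random.randn(*pop.shape)

def spherical_jitter(pop, best):
    # resample each individual on a sphere around the shared best,
    # with radius taken from its current distance to the best
    d = np.random.randn(*pop.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    r = np.linalg.norm(pop - best, axis=1, keepdims=True)
    return best + r * d * np.random.uniform(0.5, 1.0, size=(len(pop), 1))

best = cc_hybrid(lambda x: float(np.sum(x ** 2)), dim=10,
                 searchers=[toward_best, spherical_jitter])
```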


2021, Vol 1 (3), pp. 1-26
Author(s): Peilan Xu, Wenjian Luo, Xin Lin, Jiajia Zhang, Yingying Qiao, ...

Large-scale optimization problems and constrained optimization problems have attracted considerable attention in the swarm and evolutionary intelligence communities, and they exemplify two common features of real problems: large scale and constraint limitations. However, little work exists on solving large-scale continuous constrained optimization problems. Moreover, the benchmarks proposed so far for large-scale continuous constrained optimization algorithms are not comprehensive. In this article, first, a constraint-objective cooperative coevolution (COCC) framework is proposed for large-scale continuous constrained optimization problems, based on the dual nature of the objective and constraint functions: modular and imbalanced components. The COCC framework allocates computing resources to different components according to the impact of objective values and constraint violations. Second, a benchmark for large-scale continuous constrained optimization is presented, which takes into account the modular nature of components as well as their imbalanced and overlapping characteristics. Finally, three different evolutionary algorithms are embedded into the COCC framework for experiments, and the experimental results show that COCC performs competitively.
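The allocation rule itself is the article's contribution; a hedged sketch of the general pattern (penalised fitness plus contribution-proportional budgets, with all names and constants assumed) might look like:

```python
import numpy as np

def penalised(fx, violations, lam=1e3):
    """Penalised objective: objective value plus weighted total
    constraint violation (the weighting scheme is an assumption)."""
    return fx + lam * sum(max(0.0, v) for v in violations)

def allocate_budget(improvements, budget):
    """Contribution-based resource allocation in the spirit of COCC:
    components whose last optimisation cycle reduced the penalised
    objective the most receive proportionally more evaluations in
    the next cycle."""
    gains = np.maximum(np.asarray(improvements, dtype=float), 1e-12)
    shares = gains / gains.sum()                     # proportional split
    return np.maximum(1, np.round(shares * budget).astype(int))

# Three components whose last cycles improved the penalised objective
# by 0.5, 0.1 and 0.0: most of the 1000 evaluations go to the first
# component, but no component is starved completely.
print(allocate_budget([0.5, 0.1, 0.0], budget=1000))  # [833 167   1]
```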

