ACM Transactions on Evolutionary Learning and Optimization
Latest Publications


TOTAL DOCUMENTS: 18 (five years: 18)

H-INDEX: 0 (five years: 0)

Published by the Association for Computing Machinery (ACM)

ISSN 2688-299X, 2688-3007

2022 ◽  
Vol 2 (1) ◽  
pp. 1-29
Author(s):  
Sukrit Mittal ◽  
Dhish Kumar Saxena ◽  
Kalyanmoy Deb ◽  
Erik D. Goodman

Learning effective problem information from the already explored search space in an optimization run, and utilizing it to improve the convergence of subsequent solutions, have represented important directions in Evolutionary Multi-objective Optimization (EMO) research. In this article, a machine learning (ML)-assisted approach is proposed that: (a) maps the solutions from earlier generations of an EMO run to the current non-dominated solutions in the decision space; (b) learns the salient patterns in this mapping using an ML method, here an artificial neural network (ANN); and (c) uses the learned ML model to advance some of the subsequent offspring solutions in an adaptive manner. Such a multi-pronged approach, quite different from popular surrogate-modeling methods, leads to what is here referred to as the Innovized Progress (IP) operator. On several test and engineering problems involving two and three objectives, with and without constraints, it is shown that an EMO algorithm assisted by the IP operator offers faster convergence than its base version without the IP operator. The results are encouraging, pave a new path for the performance improvement of EMO algorithms, and motivate further exploration on more challenging problems.
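A minimal sketch of the map-then-advance idea, under stated assumptions: the training data below are synthetic, and a linear least-squares model stands in for the ANN the article actually trains; the names `X`, `Y`, and `advance` are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: decision vectors from an earlier generation
# (rows of X), each paired with the current non-dominated solution it maps
# to in decision space (rows of Y). Here Y is a synthetic shift of X.
X = rng.random((20, 3))
true_shift = np.array([0.1, -0.2, 0.05])
Y = X + true_shift

# Learn the mapping. The article trains an artificial neural network; a
# linear least-squares model stands in for it to keep this sketch short.
A, *_ = np.linalg.lstsq(np.hstack([X, np.ones((len(X), 1))]), Y, rcond=None)

def advance(offspring):
    """The IP-style step: progress an offspring through the learned map."""
    return np.hstack([offspring, 1.0]) @ A

# An offspring solution is moved by (approximately) the learned shift.
child = rng.random(3)
moved = advance(child)
```

In the article the advanced offspring re-enter the evolving population adaptively; here the sketch stops at the mapping itself.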


2021 ◽  
Vol 1 (4) ◽  
pp. 1-26
Author(s):  
Faramarz Khosravi ◽  
Alexander Rass ◽  
Jürgen Teich

Real-world problems typically require the simultaneous optimization of multiple, often conflicting objectives. Many of these multi-objective optimization problems are characterized by wide ranges of uncertainties in their decision variables or objective functions. To cope with such uncertainties, stochastic and robust optimization techniques are widely studied, aiming to distinguish candidate solutions with uncertain objectives specified by confidence intervals, probability distributions, sampled data, or uncertainty sets. In this scope, this article first introduces a novel empirical approach for the comparison of candidate solutions with uncertain objectives that can follow arbitrary distributions. The comparison is performed through accurate and efficient calculation of the probability that one solution dominates the other in terms of each uncertain objective. Second, such an operator can be flexibly used and combined with many existing multi-objective optimization frameworks and techniques by simply substituting their standard comparison operator, thus easily enabling Pareto front optimization of problems with multiple uncertain objectives. Third, a new benchmark for evaluating uncertainty-aware optimization techniques is introduced by incorporating different types of uncertainties into a well-known benchmark for multi-objective optimization problems. Fourth, the new comparison operator and benchmark suite are integrated into an existing multi-objective optimization framework that features a selection of multi-objective optimization problems and algorithms. Fifth, the efficiency, in terms of performance and execution time, of the proposed comparison operator is evaluated on the introduced uncertainty benchmark. Finally, statistical tests are applied, giving evidence of the superiority of the new comparison operator in terms of ε-dominance and attainment surfaces in comparison to previously proposed approaches.
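The comparison idea can be sketched for the sampled-data case. This is a simplified stand-in, not the article's operator: it estimates per-objective win probabilities over all sample pairs, and the independence assumption used to combine objectives is this sketch's own.

```python
import numpy as np

def p_better(samples_a, samples_b):
    """Empirical probability that solution a beats solution b on one
    (minimized) uncertain objective, over all pairs of samples."""
    a = np.asarray(samples_a)[:, None]
    b = np.asarray(samples_b)[None, :]
    return float(np.mean(a < b))

def p_dominates(objectives_a, objectives_b):
    """Combine the per-objective win probabilities into a dominance
    probability; independence across objectives is an assumption made
    for this sketch, not taken from the article."""
    return float(np.prod([p_better(a, b)
                          for a, b in zip(objectives_a, objectives_b)]))

# Two candidates whose two uncertain objectives are given by sampled data.
a = [np.array([1.0, 1.2, 0.9]), np.array([2.0, 2.1, 1.9])]
b = [np.array([1.5, 1.6, 1.4]), np.array([2.5, 2.6, 2.4])]
# Every sample of a beats every sample of b on both objectives here,
# so the estimated dominance probability is 1.
```

Substituting such a probabilistic comparison for the crisp dominance check is what lets an existing EMO algorithm rank solutions with uncertain objectives.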


2021 ◽  
Vol 1 (4) ◽  
pp. 1-28
Author(s):  
Denis Antipov ◽  
Benjamin Doerr

To gain a better theoretical understanding of how evolutionary algorithms (EAs) cope with plateaus of constant fitness, we propose the n-dimensional Plateau_k function as a natural benchmark and analyze how different variants of the (1+1) EA optimize it. The Plateau_k function has a plateau of second-best fitness in a ball of radius k around the optimum. As the evolutionary algorithm, we consider the (1+1) EA using an arbitrary unbiased mutation operator. Denoting by α the random number of bits flipped in an application of this operator and assuming that Pr[α = 1] has at least some small sub-constant value, we show the surprising result that for all constant k ≥ 2, the runtime T follows a distribution close to the geometric one with success probability equal to the probability to flip between 1 and k bits divided by the size of the plateau. Consequently, the expected runtime is the inverse of this number and thus depends only on the probability to flip between 1 and k bits, not on other characteristics of the mutation operator. Our result also implies that the optimal mutation rate for standard bit mutation here is approximately k/(en). Our main analysis tool is a combined analysis of the Markov chains on the search point space and on the Hamming level space, an approach that promises to be useful for other plateau problems as well.
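The geometric law above gives a closed-form expected runtime. A small sketch under two assumptions: mutation is standard bit mutation with rate p, and the plateau size is counted as the number of points at Hamming distance 1..k from the optimum.

```python
from math import comb

def pr_flip_1_to_k(n, k, p):
    """Probability that standard bit mutation with rate p flips between
    1 and k of the n bits."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(1, k + 1))

def expected_runtime(n, k, p):
    """E[T] per the stated geometric law: plateau size divided by the
    probability to flip between 1 and k bits."""
    plateau = sum(comb(n, i) for i in range(1, k + 1))
    return plateau / pr_flip_1_to_k(n, k, p)

# For n = 100 and k = 2, only Pr[1 <= alpha <= 2] enters the formula;
# any unbiased mutation operator with the same value of that probability
# yields the same expectation.
runtime = expected_runtime(100, 2, 0.01)
```

Note how the dependence on the mutation operator collapses into the single quantity `pr_flip_1_to_k`, exactly as the abstract states.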


2021 ◽  
Vol 1 (4) ◽  
pp. 1-21
Author(s):  
Manuel López-Ibáñez ◽  
Juergen Branke ◽  
Luís Paquete

Experimental studies are prevalent in Evolutionary Computation (EC), and concerns about the reproducibility and replicability of such studies have increased in recent times, reflecting similar concerns in other scientific fields. In this article, we discuss, within the context of EC, the different types of reproducibility and suggest a classification that refines the badge system of the Association for Computing Machinery (ACM) adopted by ACM Transactions on Evolutionary Learning and Optimization (TELO). We identify cultural and technical obstacles to reproducibility in the EC field. Finally, we provide guidelines and suggest tools that may help to overcome some of these obstacles.


2021 ◽  
Vol 1 (4) ◽  
pp. 1-43
Author(s):  
Benjamin Doerr ◽  
Frank Neumann

The theory of evolutionary computation for discrete search spaces has made significant progress since the early 2010s. This survey summarizes some of the most important recent results in this research area. It discusses fine-grained models of runtime analysis of evolutionary algorithms, highlights recent theoretical insights on parameter tuning and parameter control, and summarizes the latest advances for stochastic and dynamic problems. We discuss how evolutionary algorithms optimize submodular functions, and we give an overview of the large body of recent results on estimation-of-distribution algorithms. Finally, we present the state of the art of drift analysis, one of the most powerful analysis techniques developed in this field.


2021 ◽  
Vol 1 (3) ◽  
pp. 1-41
Author(s):  
Stephen Kelly ◽  
Robert J. Smith ◽  
Malcolm I. Heywood ◽  
Wolfgang Banzhaf

Modularity represents a recurring theme in the attempt to scale evolution to the design of complex systems. However, modularity rarely forms the central theme of an artificial approach to evolution. In this work, we report on progress with the recently proposed Tangled Program Graph (TPG) framework in which programs are modules. The combination of the TPG representation and its variation operators enable both teams of programs and graphs of teams of programs to appear in an emergent process. The original development of TPG was limited to tasks with, for the most part, complete information. This work details two recent approaches for scaling TPG to tasks that are dominated by partially observable sources of information using different formulations of indexed memory. One formulation emphasizes the incremental construction of memory, again as an emergent process, resulting in a distributed view of state. The second formulation assumes a single global instance of memory and develops it as a communication medium, thus a single global view of state. The resulting empirical evaluation demonstrates that TPG equipped with memory is able to solve multi-task recursive time-series forecasting problems and visual navigation tasks expressed in two levels of a commercial first-person shooter environment.


2021 ◽  
Vol 1 (3) ◽  
pp. 1-26
Author(s):  
Peilan Xu ◽  
Wenjian Luo ◽  
Xin Lin ◽  
Jiajia Zhang ◽  
Yingying Qiao ◽  
...  

Large-scale optimization problems and constrained optimization problems have attracted considerable attention in the swarm and evolutionary intelligence communities and exemplify two common features of real-world problems: large scale and the presence of constraints. However, little work exists on solving large-scale continuous constrained optimization problems. Moreover, the benchmarks proposed to date for large-scale continuous constrained optimization algorithms are not comprehensive. In this article, first, a constraint-objective cooperative coevolution (COCC) framework is proposed for large-scale continuous constrained optimization problems, which is based on the dual nature of the objective and constraint functions: modular and imbalanced components. The COCC framework allocates computing resources to different components according to the impact of objective values and constraint violations. Second, a benchmark for large-scale continuous constrained optimization is presented, which takes into account the modular nature, as well as both imbalanced and overlapping characteristics, of components. Finally, three different evolutionary algorithms are embedded into the COCC framework for experiments, and the experimental results show that COCC performs competitively.
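The allocation step can be sketched, with a loud caveat: the abstract does not spell out COCC's allocation formula, so the proportional rule and the `floor` safeguard below are guesses made purely for illustration.

```python
def allocate_budget(total_evals, obj_impact, cv_impact, floor=0.05):
    """Split an evaluation budget across components in proportion to their
    measured impact on objective value and constraint violation. The
    proportional rule and the 'floor' safeguard are this sketch's own
    assumptions, not COCC's published formula."""
    scores = [o + c for o, c in zip(obj_impact, cv_impact)]
    total = sum(scores) or 1.0
    shares = [max(s / total, floor) for s in scores]   # no component starves
    norm = sum(shares)
    return [round(total_evals * s / norm) for s in shares]

# Three components: the first drives most objective change, the last none.
budget = allocate_budget(1000, obj_impact=[3, 1, 0], cv_impact=[1, 1, 0])
```

The point of any such rule is the one the abstract makes: components that currently move the objective or reduce constraint violations receive more of the computing resources.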


2021 ◽  
Vol 1 (3) ◽  
pp. 1-19
Author(s):  
Miqing Li

In evolutionary multiobjective optimisation (EMO), archiving is a common component that maintains an (external or internal) set during the search process, typically of fixed size, in order to provide a good representation of the high-quality solutions produced. Such an archive set can be used solely to store the final results shown to the decision maker, but in many cases it may participate in the process of producing solutions (e.g., as a solution pool from which parental solutions are selected). Over the last three decades, archiving has stood as an important issue in EMO, leading to the emergence of various methods such as those based on Pareto, indicator, or decomposition criteria. Such methods have demonstrated their effectiveness in the literature and have been believed to be good options for many problems, particularly those having a regular Pareto front shape, e.g., a simplex shape. In this article, we challenge this belief. We do this by artificially constructing several sequences with extremely simple shapes, i.e., 1D/2D simplex Pareto fronts. We show that predominantly used archiving methods, which have been deemed to handle such shapes well, struggle on these sequences. This reveals that the order in which solutions enter the archive matters, and that current EMO algorithms may not be fully capable of maintaining a representative population on problems with linear Pareto fronts, even when all of their optimal solutions can be found.
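The order effect is easy to reproduce with a toy archiver. The class below is written only to illustrate the point; it is not any specific published archiving method, and the points and pruning rule are this sketch's own.

```python
import math

def dominates(p, q):
    """True if p Pareto-dominates q (minimization)."""
    return all(x <= y for x, y in zip(p, q)) and p != q

class BoundedArchive:
    """A toy fixed-size Pareto archive with nearest-neighbour pruning,
    used only to illustrate order dependence."""
    def __init__(self, capacity):
        self.capacity, self.members = capacity, []

    def insert(self, p):
        if any(dominates(m, p) for m in self.members):
            return
        self.members = [m for m in self.members if not dominates(p, m)]
        self.members.append(p)
        if len(self.members) > self.capacity:
            # Drop the most crowded member (smallest nearest-neighbour distance).
            def crowding(m):
                return min(math.dist(m, o) for o in self.members if o != m)
            self.members.remove(min(self.members, key=crowding))

points = [(0, 4), (1, 3), (2, 2), (4, 0)]   # one linear (simplex-shaped) front

arch1, arch2 = BoundedArchive(3), BoundedArchive(3)
for p in points:
    arch1.insert(p)                # one insertion order...
for p in reversed(points):
    arch2.insert(p)                # ...and its reverse
# Identical optimal solutions, different arrival order, different archive.
```

The same four mutually non-dominated points, fed in two different orders, leave two different final archives, which is precisely the sensitivity the article studies.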


2021 ◽  
Vol 1 (3) ◽  
pp. 1-38
Author(s):  
Yi Liu ◽  
Will N. Browne ◽  
Bing Xue

Learning Classifier Systems (LCSs) are a paradigm of rule-based evolutionary computation (EC). LCSs excel in data-mining tasks by helping humans understand the explored problem, often through visualizing the discovered patterns linking features to classes. Due to the stochastic nature of EC, LCSs unavoidably produce and keep redundant rules, which obscure the patterns. Thus, rule compaction methods are invoked to produce a better population by removing problematic rules. Previously, compaction methods have neither been tested on large-scale problems nor assessed on their ability to capture patterns. We review and test the most popular compaction algorithms, finding that, across multiple LCSs' populations for the same task, although the redundant rules can differ, the accurate rules are common. Furthermore, the patterns contained consistently reflect the nature of the explored domain, e.g., the data distribution or the importance of features for determining actions. This extends the [O] set hypothesis proposed by Butz et al. [1], in which an LCS is expected to evolve a minimal number of non-overlapping rules to represent an addressed domain. Two new compaction algorithms are introduced that search at the rule level and at the population level by compacting multiple LCSs' populations. Two visualization methods are employed for verifying the interpretability of these populations. Successful compaction is demonstrated on complex and real problems with clean datasets, e.g., the 11-bit Majority-On problem, which requires 924 different interacting rules in the optimal solution to be uniquely identified to enable correct visualization. For the first time, the patterns contained in learned models for the large-scale 70-bit Multiplexer problem are visualized successfully.


2021 ◽  
Vol 1 (2) ◽  
pp. 1-23
Author(s):  
Arkadiy Dushatskiy ◽  
Tanja Alderliesten ◽  
Peter A. N. Bosman

Surrogate-assisted evolutionary algorithms have the potential to be of high value for real-world optimization problems when fitness evaluations are expensive, limiting the number of evaluations that can be performed. In this article, we consider the domain of pseudo-Boolean functions in a black-box setting. Moreover, instead of using a surrogate model as an approximation of a fitness function, we propose to precisely learn the coefficients of the Walsh decomposition of a fitness function and use the Walsh decomposition as a surrogate. If the coefficients are learned correctly, then the Walsh decomposition values perfectly match the fitness function, and, thus, the optimal solution to the problem can be found by optimizing the surrogate without any additional evaluations of the original fitness function. It is known that the Walsh coefficients can be efficiently learned for pseudo-Boolean functions with k-bounded epistasis and known problem structure. We propose to learn the dependencies between variables first, thereby substantially reducing the number of Walsh coefficients to be calculated. After the accurate Walsh decomposition is obtained, the surrogate model is optimized using GOMEA, considered a state-of-the-art binary optimization algorithm. We compare the proposed approach with standard GOMEA and two other Walsh decomposition-based algorithms. The benchmark functions in the experiments are well-known trap functions, NK-landscapes, MaxCut, and MAX3SAT problems. The experimental results demonstrate that the proposed approach scales at the hypothesized complexity of O(ℓ log ℓ) function evaluations when the number of subfunctions is O(ℓ) and all subfunctions are k-bounded, outperforming all considered algorithms.
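The exact-surrogate property rests on the Walsh decomposition itself, which can be shown on a tiny function. The article learns the coefficients from a limited number of evaluations; this sketch instead enumerates all 2^n points of a 3-bit function purely to exhibit the decomposition and its exact reconstruction.

```python
from itertools import product

def walsh_coefficients(f, n):
    """Exhaustive Walsh transform of a pseudo-Boolean f on {0,1}^n:
    w_S = 2^-n * sum_x f(x) * (-1)^(S . x), with subsets S as 0/1 masks."""
    return {S: sum(f(x) * (-1) ** sum(s * xi for s, xi in zip(S, x))
                   for x in product((0, 1), repeat=n)) / 2 ** n
            for S in product((0, 1), repeat=n)}

def walsh_eval(coeffs, x):
    """Evaluate the surrogate, i.e., the Walsh series, at point x."""
    return sum(w * (-1) ** sum(s * xi for s, xi in zip(S, x))
               for S, w in coeffs.items())

# A 3-bit example with 2-bounded epistasis: OneMax plus one pairwise term.
f = lambda x: sum(x) + 2 * x[0] * x[1]
coeffs = walsh_coefficients(f, 3)
# With every coefficient learned, the surrogate reproduces f exactly, so
# optimizing the surrogate finds the optimum of f without new evaluations.
```

Because the example function is 2-bounded, every coefficient indexed by a subset of size three vanishes, which is the structural fact that makes learning only the relevant coefficients tractable.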

