Multi-Objective Evolutionary Algorithms

Author(s):  
Sanjoy Das ◽  
Bijaya K. Panigrahi

Real-world optimization problems are often too complex to be solved through analytical means. Evolutionary algorithms, a class of algorithms that borrow paradigms from nature, are particularly well suited to such problems. These algorithms are stochastic optimization methods that have become immensely popular in recent years because they are derivative-free, are less prone to getting trapped in local minima (being population based), and have been shown to work well on many complex optimization problems. Although evolutionary algorithms have conventionally focused on optimizing single objective functions, most practical problems in engineering are inherently multi-objective in nature. Multi-objective evolutionary optimization is a relatively new and rapidly expanding area of research in evolutionary computation that looks at ways to address such problems. In this chapter, we provide an overview of some of the most significant issues in multi-objective optimization (Deb, 2001).
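
The central concept behind the chapter is Pareto dominance: one solution dominates another if it is no worse in every objective and strictly better in at least one. A minimal Python sketch of this idea, assuming minimization and using illustrative names that are not taken from the chapter:

```python
# Minimal sketch of Pareto dominance for minimization problems.
# Names (dominates, pareto_front) are illustrative, not from the chapter.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: three candidate solutions evaluated on two objectives.
candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]
print(pareto_front(candidates))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```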

Author(s):  
Sanjoy Das

Real-world optimization problems are often too complex to be solved through analytic means. Evolutionary algorithms are a class of algorithms that borrow paradigms from nature to address them. These are stochastic optimization methods that maintain a population of individual solutions, each corresponding to a point in the search space of the problem. These algorithms have been immensely popular as they are derivative-free techniques, are not as prone to getting trapped in local minima, and can be tailored specifically to suit any given problem. The performance of evolutionary algorithms can be improved further by adding a local search component. The Nelder-Mead simplex algorithm (Nelder & Mead, 1965) is a simple local search algorithm that has been routinely applied to improve the search process in evolutionary algorithms, and such a strategy has met with great success. In this article, we provide an overview of the various strategies that have been adopted to hybridize two well-known evolutionary algorithms: genetic algorithms (GA) and particle swarm optimization (PSO).
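
As a rough illustration of the kind of hybridization discussed above, the sketch below polishes the best individual of a simple real-coded GA with scipy's Nelder-Mead routine each generation. It shows only one possible pattern, on a toy objective, and does not reproduce any specific scheme surveyed in the article:

```python
# Minimal sketch of a GA hybridized with Nelder-Mead local search.
# Illustrative only; the objective and GA operators are toy choices.
import numpy as np
from scipy.optimize import minimize

def sphere(x):                      # toy objective to minimize
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, pop_size = 5, 20
pop = rng.uniform(-5, 5, size=(pop_size, dim))

for generation in range(50):
    fitness = np.array([sphere(ind) for ind in pop])
    # Truncation selection, arithmetic crossover, Gaussian mutation.
    parents = pop[np.argsort(fitness)[:pop_size // 2]]
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        children.append(0.5 * (a + b) + rng.normal(0, 0.1, dim))
    pop = np.vstack([parents, children])
    # Local search: polish the current best with Nelder-Mead.
    best = pop[np.argmin([sphere(ind) for ind in pop])]
    result = minimize(sphere, best, method="Nelder-Mead",
                      options={"maxiter": 50})
    pop[np.argmax([sphere(ind) for ind in pop])] = result.x  # replace the worst

print("best fitness:", min(sphere(ind) for ind in pop))
```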


2021 ◽  
pp. 1-24
Author(s):  
S. C. Maree ◽  
T. Alderliesten ◽  
P. A. N. Bosman

Abstract: Domination-based multi-objective (MO) evolutionary algorithms (EAs) are today arguably the most frequently used type of MOEA. These methods, however, stagnate when the majority of the population becomes non-dominated, preventing further convergence to the Pareto set. Hypervolume-based MO optimization has shown promising results in overcoming this. Direct use of the hypervolume, however, results in no selection pressure for dominated solutions. The recently introduced Sofomore framework overcomes this by solving multiple interleaved single-objective dynamic problems that iteratively improve a single approximation set, based on the uncrowded hypervolume improvement (UHVI). It thereby loses, however, many advantages of population-based MO optimization, such as handling multimodality. Here, we reformulate the UHVI as a quality measure for approximation sets, called the uncrowded hypervolume (UHV), which can be used to solve MO optimization problems directly with a single-objective optimizer. We use the state-of-the-art gene-pool optimal mixing evolutionary algorithm (GOMEA), which is capable of efficiently exploiting the intrinsically available grey-box properties of this problem. The resulting algorithm, UHV-GOMEA, is compared to Sofomore equipped with GOMEA and to the domination-based MO-GOMEA. In doing so, we investigate in which scenarios either domination-based or hypervolume-based methods are preferred. Finally, we construct a simple hybrid approach that combines MO-GOMEA with UHV-GOMEA and outperforms both.
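
For reference, the hypervolume indicator underlying the UHV measures the objective-space volume dominated by an approximation set with respect to a reference point. A minimal two-dimensional sketch (for minimization; not the UHV or GOMEA implementation) is given below:

```python
# Minimal sketch of the 2-D hypervolume indicator (minimization),
# i.e. the area dominated by a non-dominated set relative to a
# reference point. Not the UHV or GOMEA implementation.

def hypervolume_2d(front, ref):
    """front: list of non-dominated (f1, f2) points; ref: reference point."""
    pts = sorted(front)                  # ascending in f1, hence descending in f2
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 11.0
```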


Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 2018
Author(s):  
Mohammed Mahrach ◽  
Gara Miranda ◽  
Coromoto León ◽  
Eduardo Segredo

One of the main components of most modern Multi-Objective Evolutionary Algorithms (MOEAs) is to maintain a proper diversity within a population in order to avoid the premature convergence problem. Due to this implicit feature that most MOEAs share, their application for Single-Objective Optimization (SO) might be helpful, and provides a promising field of research. Some common approaches to this topic are based on adding extra—and generally artificial—objectives to the problem formulation. However, when applying MOEAs to implicit Multi-Objective Optimization Problems (MOPs), it is not common to analyze how effective said approaches are in relation to optimizing each objective separately. In this paper, we present a comparative study between MOEAs and Single-Objective Evolutionary Algorithms (SOEAs) when optimizing every objective in a MOP, considering here the bi-objective case. For the study, we focus on two well-known and widely studied optimization problems: the Knapsack Problem (KNP) and the Travelling Salesman Problem (TSP). The experimental study considers three MOEAs and two SOEAs. Each SOEA is applied independently for each optimization objective, such that the optimized values obtained for each objective can be compared to the multi-objective solutions achieved by the MOEAs. MOEAs, however, allow optimizing two objectives at once, since the resulting Pareto fronts can be used to analyze the endpoints, i.e., the point optimizing objective 1 and the point optimizing objective 2. The experimental results show that, although MOEAs have to deal with several objectives simultaneously, they can compete with SOEAs, especially when dealing with strongly correlated or large instances.
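
The endpoint comparison described above amounts to taking, from the MOEA's Pareto-front approximation, the point that is best in each objective and comparing it with the value the corresponding SOEA achieved. A small sketch with hypothetical numbers (maximization, as in the Knapsack Problem):

```python
# Minimal sketch of the endpoint comparison: from a MOEA's Pareto-front
# approximation, take the best point per objective and compare it with
# the value a single-objective EA obtained. All numbers are hypothetical.

pareto_front = [(105, 340), (120, 310), (150, 295)]   # (obj1, obj2), maximization
soea_best = {"obj1": 152, "obj2": 338}                # per-objective SOEA results

endpoint_obj1 = max(pareto_front, key=lambda p: p[0])  # best point for objective 1
endpoint_obj2 = max(pareto_front, key=lambda p: p[1])  # best point for objective 2

print("objective 1:", endpoint_obj1[0], "(MOEA) vs", soea_best["obj1"], "(SOEA)")
print("objective 2:", endpoint_obj2[1], "(MOEA) vs", soea_best["obj2"], "(SOEA)")
```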


2013 ◽  
Vol 4 (3) ◽  
pp. 1-21 ◽  
Author(s):  
Yuhui Shi ◽  
Jingqian Xue ◽  
Yali Wu

In recent years, many evolutionary and population-based algorithms have been developed for solving multi-objective optimization problems. In this paper, the authors propose a new multi-objective brain storm optimization algorithm in which the clustering strategy is applied in the objective space, instead of in the solution space as in the original brain storm optimization algorithm for single-objective optimization problems. Two versions of the multi-objective brain storm optimization algorithm, with different characteristics of the diverging operation, were tested to validate the usefulness and effectiveness of the proposed algorithm. Experimental results show that the proposed multi-objective brain storm optimization algorithm is very promising, at least for the multi-objective optimization problems tested.
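
The distinguishing idea, clustering the population by objective values rather than by decision variables, can be illustrated as follows; this is only a conceptual sketch, not the authors' implementation:

```python
# Minimal sketch of clustering in objective space rather than solution
# space: individuals are grouped by their objective vectors. Conceptual
# illustration only; not the multi-objective brain storm optimization code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
population = rng.uniform(-1, 1, size=(30, 10))   # 30 solutions, 10 decision variables

def evaluate(x):                                  # two toy objectives
    return np.array([np.sum(x ** 2), np.sum((x - 1) ** 2)])

objective_vectors = np.array([evaluate(ind) for ind in population])

# Cluster in the 2-D objective space, not the 10-D solution space.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(objective_vectors)
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} individuals")
```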


Author(s):  
Zhenkun Wang ◽  
Qingyan Li ◽  
Qite Yang ◽  
Hisao Ishibuchi

Abstract: It has been acknowledged that dominance-resistant solutions (DRSs) extensively exist in the feasible region of multi-objective optimization problems. Recent studies show that DRSs can cause serious performance degradation of many multi-objective evolutionary algorithms (MOEAs). Various strategies to eliminate DRSs (e.g., ε-dominance and modified objective calculation) have therefore been proposed. However, these strategies may in turn cause algorithm inefficiency in other respects. We argue that these coping strategies prevent the algorithm from obtaining some boundary solutions of an extremely convex Pareto front (ECPF). That is, there is a dilemma between eliminating DRSs and preserving boundary solutions of the ECPF. To illustrate this dilemma, we propose a new multi-objective optimization test problem with an ECPF as well as DRSs. Using this test problem, we investigate the performance of six representative MOEAs in terms of boundary solution preservation and DRS elimination. The results reveal that it is quite challenging to distinguish between DRSs and boundary solutions of the ECPF.
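
One of the coping strategies mentioned above, ε-dominance, relaxes the dominance relation so that nearby good solutions can discard a dominance-resistant solution. A minimal sketch of one common additive formulation (for minimization) follows; formulations vary across the literature:

```python
# Minimal sketch of additive epsilon-dominance for minimization, one of
# the DRS-coping strategies mentioned above. Several variants exist in
# the literature; this is just one common formulation.

def eps_dominates(a, b, eps=0.1):
    """True if objective vector a epsilon-dominates b."""
    return (all(x - eps <= y for x, y in zip(a, b)) and
            any(x - eps < y for x, y in zip(a, b)))

# A dominance-resistant solution is excellent in one objective but very
# poor in another; epsilon-dominance lets nearby solutions discard it.
drs = (0.0, 100.0)        # extreme in objective 1, terrible in objective 2
good = (0.05, 1.0)
print(eps_dominates(good, drs))   # True: the DRS is epsilon-dominated
print(eps_dominates(drs, good))   # False
```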


A test blueprint (or test template), also known as a table of specifications, represents the structure of a test. Assessment textbooks strongly recommend preparing a test with a test blueprint. This chapter focuses on modeling a dynamic test paper template using a multi-objective optimization algorithm and makes use of the template in the dynamic generation of examination test papers. Multi-objective optimization-based models are realistic models for many complex optimization problems. Modeling a dynamic test paper template, like many real-life problems, involves solving multiple conflicting objectives while satisfying the template specifications.


2012 ◽  
Vol 433-440 ◽  
pp. 2808-2816
Author(s):  
Jian Jin Zheng ◽  
You Shen Xia

This paper presents a new interactive neural network for solving constrained multi-objective optimization problems. The constrained multi-objective optimization problem is reformulated into two constrained single-objective optimization problems, and two neural networks are designed to obtain the optimal weight and the optimal solution of the two problems, respectively. The proposed algorithm has low computational complexity and is easy to implement. Moreover, it is applied to the design of digital filters. Computed results illustrate the good performance of the proposed algorithm.
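
The paper's reformulation relies on neural networks to find the weight and the corresponding solution; as a rough stand-in for that idea, the sketch below solves a constrained bi-objective toy problem through a hand-weighted scalarization with scipy's SLSQP solver. The weight, objectives, and constraint are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of solving a constrained bi-objective problem through a
# weighted-sum scalarization, using scipy rather than the paper's neural
# networks. The weight w is fixed by hand here; the paper computes an
# optimal weight as part of its formulation.
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return float(np.sum(x ** 2))

def f2(x):
    return float(np.sum((x - 2.0) ** 2))

w = 0.5                                            # hand-picked trade-off weight
scalarized = lambda x: w * f1(x) + (1 - w) * f2(x)

# One linear constraint: x_0 + x_1 >= 1, written as g(x) >= 0 for SLSQP.
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

result = minimize(scalarized, x0=np.zeros(2), method="SLSQP",
                  constraints=constraints)
print("solution:", result.x, "f1:", f1(result.x), "f2:", f2(result.x))
```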

