Evolutionary Large-Scale Multi-Objective Optimization: A Survey

2022, Vol 54 (8), pp. 1-34
Author(s): Ye Tian, Langchun Si, Xingyi Zhang, Ran Cheng, Cheng He, ...

Multi-objective evolutionary algorithms (MOEAs) have shown promising performance in solving various optimization problems, but their performance may deteriorate drastically when tackling problems containing a large number of decision variables. In recent years, much effort has been devoted to addressing the challenges brought by large-scale multi-objective optimization problems. This article presents a comprehensive survey of state-of-the-art MOEAs for solving large-scale multi-objective optimization problems. We start with a categorization of these MOEAs into decision variable grouping based, decision space reduction based, and novel search strategy based MOEAs, discussing their strengths and weaknesses. Then, we review the benchmark problems for performance assessment and a few important and emerging applications of MOEAs for large-scale multi-objective optimization. Last, we discuss some remaining challenges and future research directions of evolutionary large-scale multi-objective optimization.
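
As a rough illustration of the first category above (decision variable grouping), the following Python sketch shows a generic random-grouping step in the spirit of cooperative coevolution; the group size and function names are assumptions made for illustration only, not the procedure of any particular algorithm covered by the survey.

```python
import random

def random_grouping(n_vars, group_size):
    """Randomly partition decision-variable indices into groups of (roughly) equal size.

    A generic illustration of decision variable grouping; actual large-scale MOEAs
    may instead group variables by interaction or contribution analysis.
    """
    indices = list(range(n_vars))
    random.shuffle(indices)
    return [indices[i:i + group_size] for i in range(0, n_vars, group_size)]

# Example: 1000 decision variables split into groups of 100; each group would
# then be optimized in turn while the remaining variables are kept fixed.
groups = random_grouping(1000, 100)
print(len(groups), [len(g) for g in groups[:3]])
```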

Author(s): Yajie Zhang, Ye Tian, Xingyi Zhang

Sparse large-scale multi-objective optimization problems (LSMOPs) widely exist in real-world applications; they involve a large number of decision variables and have sparse Pareto optimal solutions, i.e., most decision variables of these solutions are zero. In recent years, sparse LSMOPs have attracted increasing attention in the evolutionary computation community. However, the algorithms recently tailored for sparse LSMOPs put sparsity detection and maintenance first, so the nonzero variables can hardly be optimized sufficiently within a limited budget of function evaluations. To address this issue, this paper proposes to enhance the connection between real variables and binary variables within the two-layer encoding scheme with the assistance of variable grouping techniques. In this way, more effort can be devoted to the real part of the nonzero variables, achieving a balance between sparsity maintenance and variable optimization. According to the experimental results on eight benchmark problems and three real-world applications, the proposed algorithm is superior to existing state-of-the-art evolutionary algorithms for sparse LSMOPs.
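
For readers unfamiliar with the two-layer encoding mentioned above, the following sketch shows one plausible decoding step: a binary mask layer marks which variables are nonzero, and a real-valued layer supplies their values. The function and variable names are illustrative assumptions; the grouping-based connection between the two layers proposed in the paper is not reproduced here.

```python
import numpy as np

def decode(real_layer, binary_mask):
    """Two-layer decoding: variables whose mask bit is 0 are forced to zero,
    which yields the sparse solutions typical of sparse LSMOPs."""
    return real_layer * binary_mask

# Illustrative individual with 10 decision variables, only 3 of them nonzero.
rng = np.random.default_rng(0)
real_layer = rng.uniform(-1.0, 1.0, size=10)
binary_mask = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])
print(decode(real_layer, binary_mask))
```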


Author(s): Amarjeet Prajapati

Over the past two decades, several multi-objective optimizers (MOOs) have been proposed to address different aspects of multi-objective optimization problems (MOPs). Unfortunately, many MOOs experience performance degradation when applied to MOPs with a large number of decision variables and objective functions. Specifically, the performance of MOOs decreases rapidly when the number of decision variables and the number of objective functions exceed one hundred and three, respectively. To address the challenges posed by such MOPs, some large-scale multi-objective optimizers (L-MuOOs) and large-scale many-objective optimizers (L-MaOOs) have been developed in the literature. Even after extensive development of L-MuOOs and L-MaOOs, the superiority of these optimizers has not been tested on real-world optimization problems containing a large number of decision variables and objectives, such as large-scale many-objective software clustering problems (L-MaSCPs). In this study, the performance of nine L-MuOOs and L-MaOOs (i.e., S3-CMA-ES, LMOSCO, LSMOF, LMEA, IDMOPSO, ADC-MaOO, NSGA-III, H-RVEA, and DREA) is evaluated and compared on five L-MaSCPs in terms of the IGD, Hypervolume, and MQ metrics. The experimental results show that S3-CMA-ES and LMOSCO perform better than LSMOF, LMEA, IDMOPSO, ADC-MaOO, NSGA-III, H-RVEA, and DREA in most cases. LSMOF, LMEA, IDMOPSO, ADC-MaOO, NSGA-III, and DREA are average performers, and H-RVEA is the worst performer.
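
As background for the comparison above, the inverted generational distance (IGD) used as one of the quality indicators can be computed roughly as sketched below: the mean distance from each point of a reference front to its nearest obtained solution (lower is better). The reference and obtained fronts shown are toy placeholders, not data from the study.

```python
import numpy as np

def igd(reference_front, obtained_front):
    """Inverted generational distance: average distance from each reference
    point to the closest solution of the obtained front (lower is better)."""
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_front, dtype=float)
    dists = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return dists.min(axis=1).mean()

# Toy bi-objective example.
reference = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
obtained = [[0.1, 0.9], [0.6, 0.6], [0.9, 0.2]]
print(igd(reference, obtained))
```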


Author(s): Wen-Jing Hong, Peng Yang, Ke Tang

Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) suffer severe deterioration as the number of decision variables increases. As a result, and motivated by the emergence of real-world large-scale MOPs, the investigation of MOEAs in this respect has attracted much more attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. From the perspective of the key difficulties of large-scale MOPs, scalability is analyzed by focusing on the performance of existing MOEAs and the challenges induced by the increase in the number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced respectively: divide-and-conquer-based, dimensionality-reduction-based, and enhanced-search-based approaches. Several future research directions are also discussed.


2021, pp. 1-21
Author(s): Xin Li, Xiaoli Li, Kang Wang

The key characteristic of multi-objective evolutionary algorithms is that they can find a good approximation of the multi-objective optimal solution set when solving multi-objective optimization problems (MOPs). However, most multi-objective evolutionary algorithms perform well on regular multi-objective optimization problems, whereas their performance deteriorates on irregular fronts. To remedy this issue, this paper studies the existing algorithms and proposes a multi-objective evolutionary algorithm based on niche selection to deal with irregular Pareto fronts. In this paper, the crowding degree is calculated by the niche method in the process of selecting parents once the non-dominated solutions converge to the first front, which improves the quality of offspring solutions and benefits local search. In addition, niche selection is incorporated into environmental selection by considering the number and locations of the individuals within the niche radius, which improves the diversity of the population. Finally, experimental results on 23 benchmark problems, including MaF and IMOP, show that the proposed algorithm exhibits better performance than the compared MOEAs.
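
As an illustration of the niche idea described above, the sketch below counts, for each solution, how many other solutions fall within a fixed niche radius in objective space; less crowded solutions would then be favoured to improve diversity. The radius value and the exact way these counts enter parent and environmental selection are assumptions, not the paper's precise formulation.

```python
import numpy as np

def niche_counts(objs, radius):
    """For each solution, count how many other solutions lie within `radius`
    of it in objective space (its niche); smaller counts mean less crowding."""
    objs = np.asarray(objs, dtype=float)
    dists = np.linalg.norm(objs[:, None, :] - objs[None, :, :], axis=2)
    return (dists < radius).sum(axis=1) - 1  # exclude the solution itself

objs = [[0.10, 0.90], [0.12, 0.88], [0.50, 0.50], [0.90, 0.10]]
print(niche_counts(objs, radius=0.1))  # crowded solutions receive larger counts
```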


Author(s): Xiaohui Yuan, Zhihuan Chen, Yanbin Yuan, Yuehua Huang, Xiaopan Zhang

A novel strength Pareto gravitational search algorithm (SPGSA) is proposed to solve multi-objective optimization problems. The SPGSA utilizes the strength Pareto concept to assign fitness values to agents and uses a fine-grained elitism selection mechanism to maintain population diversity. Furthermore, recombination operators are incorporated into the approach to reduce the possibility of becoming trapped in local optima. Experiments are conducted on a series of benchmark problems characterized by difficulties related to local optimality, nonuniformity, and nonconvexity. The results show that the proposed SPGSA performs better than related approaches. In addition, the effectiveness of the two refinements added to the GSA is verified, i.e., the fine-grained elitism selection and the use of the SBX and PMO operators. Simulation results show that these measures not only improve the convergence ability of the original GSA but also adequately preserve population diversity, enabling the SPGSA to maintain a desirable balance between exploitation and exploration and thereby accelerate convergence toward the true Pareto-optimal front.
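
The strength Pareto fitness assignment mentioned above follows the SPEA2 idea; a compact, generic reconstruction is sketched below, where each agent's strength is the number of agents it dominates and its raw fitness is the sum of the strengths of its dominators (0 for non-dominated agents, lower is better). This is only the underlying concept, not the exact SPGSA formulation.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def strength_raw_fitness(objs):
    """SPEA2-style raw fitness: sum of the strengths of all dominators."""
    objs = np.asarray(objs, dtype=float)
    n = len(objs)
    strength = [sum(dominates(objs[i], objs[j]) for j in range(n)) for i in range(n)]
    return [sum(strength[j] for j in range(n) if dominates(objs[j], objs[i]))
            for i in range(n)]

objs = [[0.2, 0.8], [0.5, 0.5], [0.6, 0.6], [0.9, 0.3]]
print(strength_raw_fitness(objs))  # non-dominated agents score 0
```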


Symmetry, 2020, Vol 12 (3), pp. 465
Author(s): Peng Ni, Jiale Gao, Yafei Song, Wen Quan, Qinghua Xing

In the real world, multi-objective optimization problems in most projects change over time. Once the environment changes, the distribution of the optimal solutions in decision space also changes. Sometimes such a change obeys the law of symmetry, i.e., the minimum of the objective function in one environment is its maximum in another; in such cases, the optimal solutions remain unchanged or oscillate within a small range. In most cases, however, the change does not obey the law of symmetry. In order to continue the search while retaining previous search advantages in the changed environment, a prediction strategy can be used to predict the new position of the Pareto set. To this end, a segment-based and multi-directional prediction is proposed in this paper, which consists of three mechanisms. First, by segmenting the optimal solution set, the prediction of changes in the distribution of the Pareto front can be ensured. Second, by introducing cloud theory, the distance error of the direction prediction can be offset effectively. Third, by using an extra angle search, the angle error of the prediction caused by nonlinear variation of the Pareto set can also be offset effectively. Finally, eight benchmark problems were used to verify the performance of the proposed algorithm and the compared algorithms. The results indicate that the proposed algorithm has good convergence and distribution, as well as a quick response to the changed environment.
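
To give a flavour of the prediction step discussed above, the sketch below relocates a population after an environmental change by translating it along the movement of the Pareto-set centroid between the two previous environments. It is only a baseline single-direction prediction; the segmentation, cloud-theory correction, and extra angle search of the proposed strategy are not reproduced here.

```python
import numpy as np

def predict_population(prev_ps, curr_ps):
    """Translate the current Pareto-set approximation along the centroid
    movement observed between two consecutive environments, giving initial
    guesses for the next environment."""
    prev_ps = np.asarray(prev_ps, dtype=float)
    curr_ps = np.asarray(curr_ps, dtype=float)
    direction = curr_ps.mean(axis=0) - prev_ps.mean(axis=0)
    return curr_ps + direction

prev_ps = [[0.1, 0.2], [0.3, 0.4]]
curr_ps = [[0.2, 0.3], [0.4, 0.5]]
print(predict_population(prev_ps, curr_ps))
```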


2015, Vol 23 (1), pp. 69-100
Author(s): Handing Wang, Licheng Jiao, Ronghua Shang, Shan He, Fang Liu

There can be a complicated mapping relation between decision variables and objective functions in multi-objective optimization problems (MOPs). It is uncommon for decision variables to influence all objective functions equally; decision variables act differently in different objective functions. Hence, the mapping relation is often unbalanced, which causes some redundancy during the search in decision space. In response to this scenario, we propose a novel memetic (multi-objective) optimization strategy based on dimension reduction in decision space (DRMOS). DRMOS first analyzes the mapping relation between decision variables and objective functions. Then, it reduces the dimension of the search space by dividing the decision space into several subspaces according to the obtained relation. Finally, it improves the population by applying memetic local search strategies in these decision subspaces separately. Furthermore, DRMOS has good portability to other multi-objective evolutionary algorithms (MOEAs); that is, it is easily compatible with existing MOEAs. In order to evaluate its performance, we embed DRMOS in several state-of-the-art MOEAs in our experiments. The results show that DRMOS has advantages in terms of convergence speed, diversity maintenance, and portability when solving MOPs with an unbalanced mapping relation between decision variables and objective functions.
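
The mapping analysis at the heart of DRMOS can be pictured with a simple perturbation test, sketched below: each decision variable is perturbed in turn and assigned a signature recording which objectives respond, and variables sharing a signature could then form one decision subspace. The perturbation size, the toy problem, and the grouping-by-signature rule are assumptions for illustration, not the analysis actually used by DRMOS.

```python
import numpy as np

def objective_signatures(f, x, eps=1e-4):
    """Perturb each decision variable and record which objectives change.

    f : callable mapping a decision vector to a list of objective values
    x : a reference decision vector
    Variables with identical signatures could be grouped into one subspace.
    """
    base = np.asarray(f(x), dtype=float)
    signatures = []
    for i in range(len(x)):
        xp = np.array(x, dtype=float)
        xp[i] += eps
        signatures.append(tuple(~np.isclose(f(xp), base)))
    return signatures

# Toy problem: f1 depends on x0 and x1 only, f2 depends on x2 only.
f = lambda x: [x[0] ** 2 + x[1] ** 2, (x[2] - 1.0) ** 2]
print(objective_signatures(f, [0.5, 0.5, 0.5]))
```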


Author(s): Saad M. Alzahrani, Naruemon Wattanapongsakorn

Nowadays, most real-world optimization problems consist of many, often conflicting, objectives to be optimized simultaneously. Although many current multi-objective optimization algorithms can efficiently solve problems with three or fewer objectives, their performance deteriorates as the number of objectives increases. Furthermore, in many situations the decision maker (DM) is not interested in all of the obtained trade-off solutions but rather in a single optimal solution or a small set of those trade-offs. Therefore, determining such a solution or a small set of trade-off solutions is a difficult task. An interesting approach is to identify solutions in the knee region, which can be considered the best of the obtained trade-off set, especially when there is no preference among objectives or all objectives are equally important. In this paper, a pruning strategy was used to find knee-region solutions in the Pareto-optimal fronts of several benchmark problems obtained by NSGA-II, MOEA/D-DE, and a promising new multi-objective optimization algorithm, NSGA-III. Lastly, the knee solutions found were compared and evaluated using the generational distance performance metric, computation time, and a one-way ANOVA statistical test.
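
One common way to locate knee solutions, loosely in line with the pruning idea above, is to measure how far each non-dominated solution bulges out from the line joining the extreme points of the front; the sketch below does this for a bi-objective front. It is a generic knee criterion under these assumptions, not necessarily the exact pruning strategy used in the paper.

```python
import numpy as np

def knee_index(front):
    """Index of the bi-objective solution farthest from the line through the
    two extreme points of the front (a simple knee criterion)."""
    pts = np.asarray(front, dtype=float)
    a = pts[np.argmin(pts[:, 0])]  # extreme point of objective 1
    b = pts[np.argmin(pts[:, 1])]  # extreme point of objective 2
    line = b - a
    diff = pts - a
    # Perpendicular distance of every point from the line through a and b.
    dists = np.abs(line[0] * diff[:, 1] - line[1] * diff[:, 0]) / np.linalg.norm(line)
    return int(np.argmax(dists))

front = [[0.0, 1.0], [0.15, 0.4], [0.5, 0.2], [1.0, 0.0]]
print(knee_index(front))  # -> 1, the most "knee-like" solution
```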


2020, Vol 25 (4), pp. 80
Author(s): Fernanda Beltrán, Oliver Cuate, Oliver Schütze

Problems where several incommensurable objectives have to be optimized concurrently arise in many engineering and financial applications. Continuation methods for the treatment of such multi-objective optimization problems (MOPs) are very efficient if all objectives are continuous, since in that case the solution set can be expected to form, at least locally, a manifold. Recently, the Pareto Tracer (PT) has been proposed as such a multi-objective continuation method. While the method works reliably for MOPs with box and equality constraints, no strategy has yet been proposed to adequately treat general inequalities, which we address in this work. We formulate the extension of the PT and present numerical results on selected benchmark problems. The results indicate that the new method can indeed handle general MOPs, which greatly enhances its applicability.
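
A key ingredient in extending a continuation method to general inequalities is deciding which constraints are active (nearly tight) at the current point, since only those behave locally like equality constraints. The tiny sketch below shows such an activity check under an assumed tolerance; the actual PT extension involves much more (tangent and corrector computations), so this is only meant to convey the idea.

```python
import numpy as np

def active_constraints(g_values, tol=1e-6):
    """Indices of inequality constraints g_i(x) <= 0 that are (nearly) tight,
    i.e., |g_i(x)| <= tol; these can be treated locally like equalities."""
    g = np.asarray(g_values, dtype=float)
    return np.flatnonzero(np.abs(g) <= tol)

# Example: three inequality constraint values at the current iterate.
print(active_constraints([-0.3, 1e-9, -2.0]))  # -> [1]
```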

