A Meta-Objective Approach for Many-Objective Evolutionary Optimization

2020 ◽  
Vol 28 (1) ◽  
pp. 1-25 ◽  
Author(s):  
Dunwei Gong ◽  
Yiping Liu ◽  
Gary G. Yen

Pareto-based multi-objective evolutionary algorithms experience grand challenges in solving many-objective optimization problems due to their inability to maintain both convergence and diversity in a high-dimensional objective space. Existing approaches usually modify the selection criteria to overcome this issue. In contrast, we propose a novel meta-objective (MeO) approach that transforms a many-objective optimization problem into a new one that is easier for Pareto-based algorithms to solve. The new problem has the same Pareto optimal solutions and the same number of objectives as the original one. Each meta-objective in the new problem consists of two components, which measure the convergence and diversity performances of a solution, respectively. Since MeO only converts the problem formulation, it can be readily incorporated into any multi-objective evolutionary algorithm, including non-Pareto-based ones. In particular, it can boost the ability of Pareto-based algorithms to solve many-objective optimization problems. Because convergence and diversity are evaluated separately, traditional density-based selection criteria, such as crowding distance, no longer mistake a solution with poor convergence for a solution in a sparsely populated region. By penalizing a solution in terms of its convergence performance in the meta-objective space, Pareto dominance becomes much more effective for many-objective optimization problems. A comparative study validates the competitive performance of the proposed meta-objective approach in solving many-objective optimization problems.
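A minimal sketch in Python of the kind of transformation MeO describes, under stated assumptions: the convergence component is taken as the sum of normalized objectives and the diversity component as the distance to the nearest neighbour in normalized objective space, which are illustrative choices rather than the components defined in the paper.

    import numpy as np

    def meta_objectives(F):
        # F: (n, m) array of objective values for a population (minimization).
        # Returns an (n, m) array of meta-objectives: same number of objectives,
        # each built from a convergence component and a diversity component.
        # The concrete components used here are assumptions for illustration.
        F = np.asarray(F, dtype=float)
        Fn = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)   # normalize to [0, 1]
        convergence = Fn.sum(axis=1)                             # smaller = better converged
        dist = np.linalg.norm(Fn[:, None, :] - Fn[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)
        diversity = dist.min(axis=1)                             # larger = less crowded
        # i-th meta-objective: position along objective i, penalized by poor
        # convergence and rewarded for isolation in objective space.
        return Fn + convergence[:, None] - diversity[:, None]

Because the convergence penalty enters every meta-objective, a poorly converged but isolated solution no longer looks attractive to a density-based criterion applied in the transformed space.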

Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 2018
Author(s):  
Mohammed Mahrach ◽  
Gara Miranda ◽  
Coromoto León ◽  
Eduardo Segredo

One of the main components of most modern Multi-Objective Evolutionary Algorithms (MOEAs) is maintaining proper diversity within a population in order to avoid premature convergence. Because most MOEAs share this implicit feature, applying them to Single-Objective Optimization (SO) might be helpful and provides a promising field of research. Some common approaches to this topic are based on adding extra, and generally artificial, objectives to the problem formulation. However, when applying MOEAs to implicit Multi-Objective Optimization Problems (MOPs), it is not common to analyze how effective said approaches are at optimizing each objective separately. In this paper, we present a comparative study between MOEAs and Single-Objective Evolutionary Algorithms (SOEAs) when optimizing every objective in a MOP, considering here the bi-objective case. For the study, we focus on two well-known and widely studied optimization problems: the Knapsack Problem (KNP) and the Travelling Salesman Problem (TSP). The experimental study considers three MOEAs and two SOEAs. Each SOEA is applied independently to each optimization objective, so that the optimized values obtained for each objective can be compared to the multi-objective solutions achieved by the MOEAs. MOEAs, in contrast, optimize both objectives at once, and the endpoints of the resulting Pareto fronts, i.e., the point that is best on objective 1 and the point that is best on objective 2, can be used for the comparison. The experimental results show that, although MOEAs have to deal with several objectives simultaneously, they can compete with SOEAs, especially when dealing with strongly correlated or large instances.
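A short sketch of the comparison mechanism described above: from the Pareto front returned by a MOEA on a bi-objective instance, take the two endpoints (the best value found for each objective) and compare them with the best values found by the per-objective SOEA runs. Minimization is assumed, and the commented-out calls are hypothetical placeholders, not functions from the paper.

    def front_endpoints(front):
        # front: list of (f1, f2) objective vectors from a bi-objective run.
        # Returns the two endpoints: the point best on objective 1 and the
        # point best on objective 2 (minimization assumed).
        best_f1 = min(front, key=lambda p: p[0])
        best_f2 = min(front, key=lambda p: p[1])
        return best_f1, best_f2

    # Hypothetical usage against single-objective results:
    # moea_front = run_moea(instance)             # placeholder
    # soea_best_f1 = run_soea(instance, obj=0)    # placeholder
    # e1, e2 = front_endpoints(moea_front)
    # gap_on_f1 = e1[0] - soea_best_f1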


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 991
Author(s):  
Xavier Blasco ◽  
Gilberto Reynoso-Meza ◽  
Enrique A. Sánchez-Pérez ◽  
Juan Vicente Sánchez-Pérez ◽  
Natalia Jonard-Pérez

Including designer preferences in every phase of solving a multi-objective optimization problem is fundamental to achieving good quality in the final solution. To account for preferences, the proposal of this paper is based on defining what we call a preference basis, which captures the preferred optimization directions in the objective space. Associated with this preference basis, a new basis in the objective space, the dominance basis, is computed. With this new basis, the meaning of dominance is reinterpreted to include the designer's preferences. In this paper, we show the effect of changing the geometric properties of the underlying structure of the Euclidean objective space by including preferences. This way of incorporating preferences is very simple and can be used in two ways: by redefining the optimization problem and/or in the decision-making phase. The approach can be used with any multi-objective optimization algorithm. An advantage of including preferences in the optimization process is that the solutions obtained are focused on the region of interest to the designer and the number of solutions is reduced, which facilitates the interpretation and analysis of the results. The article shows an example of the use of the preference basis and its associated dominance basis in the reformulation of the optimization problem, as well as in the decision-making phase.
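One way to picture a dominance check reinterpreted through a change of basis is the cone-dominance sketch below: the objective-space difference between two solutions is expressed in the coordinates of a dominance basis D (columns are basis vectors) and ordinary componentwise comparison is applied there. This is an illustration of the general idea under that assumption, not the exact construction of the paper.

    import numpy as np

    def dominates_in_basis(fa, fb, D):
        # fa dominates fb (minimization) if, in the coordinates of the
        # dominance basis D, every component of (fb - fa) is non-negative
        # and at least one is strictly positive.
        coords = np.linalg.solve(D, np.asarray(fb, float) - np.asarray(fa, float))
        return bool(np.all(coords >= -1e-12) and np.any(coords > 1e-12))

With D equal to the identity matrix this reduces to standard Pareto dominance; a basis tilted towards the designer's preferred directions enlarges or shrinks the dominated region accordingly.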


2018 ◽  
Vol 8 (9) ◽  
pp. 1673 ◽  
Author(s):  
Xinxin Xu ◽  
Yanyan Tan ◽  
Wei Zheng ◽  
Shengtao Li

Decomposition-based multi-objective evolutionary algorithms provide a good framework for static multi-objective optimization. Nevertheless, there are few studies on their use in dynamic optimization. To solve dynamic multi-objective optimization problems, this paper integrates the framework into dynamic multi-objective optimization and proposes a memory-enhanced dynamic multi-objective evolutionary algorithm based on L_p decomposition (denoted by dMOEA/D-L_p). Specifically, dMOEA/D-L_p decomposes a dynamic multi-objective optimization problem into a number of dynamic scalar optimization subproblems and coevolves them simultaneously, where the L_p decomposition method is adopted for decomposition. Meanwhile, a subproblem-based bunchy memory scheme that stores good solutions from old environments and reuses them as necessary is designed to respond to environmental change. Experimental results verify the effectiveness of the L_p decomposition method in dynamic multi-objective optimization. Moreover, the proposed dMOEA/D-L_p achieves better performance than other popular memory-enhanced dynamic multi-objective optimization algorithms.
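For reference, a generic weighted L_p scalarization of the kind such a decomposition relies on: each subproblem minimizes the weighted L_p distance of the objective vector to a reference (ideal) point. The exact variant used by dMOEA/D-L_p may differ in its details.

    import numpy as np

    def lp_scalarize(f, weights, z_ref, p=2.0):
        # g(x | w, z*) = ( sum_i w_i * |f_i - z*_i|**p )**(1/p)
        # f: objective vector, weights: subproblem weight vector,
        # z_ref: reference (ideal) point, p: the L_p exponent.
        f, w, z = (np.asarray(a, dtype=float) for a in (f, weights, z_ref))
        return float(np.sum(w * np.abs(f - z) ** p) ** (1.0 / p))

With p = 1 this reduces to a weighted sum, while larger values of p put more emphasis on the objective that deviates most from the reference point.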


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 999
Author(s):  
Alberto Pajares ◽  
Xavier Blasco ◽  
Juan Manuel Herrero ◽  
Miguel A. Martínez

In a multi-objective optimization problem, in addition to optimal solutions, multimodal and/or nearly optimal alternatives can also provide additional useful information for the decision maker. However, obtaining all nearly optimal solutions entails an excessive number of alternatives. Therefore, to consider the nearly optimal solutions, it is convenient to obtain a reduced set that puts the focus on the potentially useful alternatives. These solutions are the alternatives that are close to the optimal solutions in the objective space but differ significantly in the decision space. To characterize this set, it is essential to analyze the decision and objective spaces simultaneously. One of the crucial points in an evolutionary multi-objective optimization algorithm is the archiving strategy, which is in charge of keeping the solution set, called the archive, updated during the optimization process. The motivation of this work is to analyze three existing archiving strategies proposed in the literature (ArchiveUpdatePQ,ϵDxy; Archive_nevMOGA; and targetSelect) that aim to characterize the potentially useful solutions. The archivers are evaluated on two benchmarks and a real engineering example. The contribution clearly shows the main differences between the three archivers. This analysis is useful for the design of evolutionary algorithms that consider nearly optimal solutions.
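A toy acceptance rule in the spirit of such archivers, not a reimplementation of any of the three named strategies: a candidate is rejected only if some archived solution is both clearly better in objective space (by an ϵ margin) and close to it in decision space, so nearly optimal alternatives that differ significantly in the decision space are retained. The thresholds eps and delta are illustrative parameters.

    import numpy as np

    def eps_better(fa, fb, eps):
        # Is fa better than fb by at least eps in every objective? (minimization)
        fa, fb = np.asarray(fa, float), np.asarray(fb, float)
        return bool(np.all(fa + eps <= fb))

    def try_insert(archive, x_new, f_new, eps=0.05, delta=0.2):
        # archive: list of (decision_vector, objective_vector) pairs.
        for x, f in archive:
            close_in_x = np.linalg.norm(np.asarray(x, float) - np.asarray(x_new, float)) < delta
            if eps_better(f, f_new, eps) and close_in_x:
                return False            # dominated by a nearby archived solution: discard
        archive.append((x_new, f_new))  # keep optimal, or nearly optimal but distinct
        return True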


Author(s):  
Zhenkun Wang ◽  
Qingyan Li ◽  
Qite Yang ◽  
Hisao Ishibuchi

It has been acknowledged that dominance-resistant solutions (DRSs) extensively exist in the feasible region of multi-objective optimization problems. Recent studies show that DRSs can cause serious performance degradation of many multi-objective evolutionary algorithms (MOEAs). Thereafter, various strategies (e.g., ϵ-dominance and the modified objective calculation) to eliminate DRSs have been proposed. However, these strategies may in turn cause algorithm inefficiency in other aspects. We argue that these coping strategies prevent the algorithm from obtaining some boundary solutions of an extremely convex Pareto front (ECPF). That is, there is a dilemma between eliminating DRSs and preserving boundary solutions of the ECPF. To illustrate such a dilemma, we propose a new multi-objective optimization test problem with the ECPF as well as DRSs. Using this test problem, we investigate the performance of six representative MOEAs in terms of boundary solutions preservation and DRS elimination. The results reveal that it is quite challenging to distinguish between DRSs and boundary solutions of the ECPF.
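A tiny numerical illustration of the dilemma, using one common definition of additive ϵ-dominance for minimization (a point ϵ-dominates another if it is no worse than that point plus ϵ in every objective); the three example vectors are hypothetical and not taken from the proposed test problem.

    import numpy as np

    def eps_dominates(fa, fb, eps):
        # Additive epsilon-dominance (minimization), one common definition:
        # fa eps-dominates fb if fa_i <= fb_i + eps for every objective i.
        return bool(np.all(np.asarray(fa, float) <= np.asarray(fb, float) + eps))

    knee     = np.array([0.001, 1.0])    # well-converged interior solution
    drs      = np.array([0.0, 1000.0])   # dominance-resistant: near-optimal in f1, very poor in f2
    boundary = np.array([0.0, 10.0])     # boundary-like point of an extremely convex front

    print(eps_dominates(knee, drs, eps=0.01))       # True: the DRS is filtered out
    print(eps_dominates(knee, boundary, eps=0.01))  # True: the boundary point is filtered out too

Because a boundary solution of an ECPF also pairs a near-optimal value in one objective with much worse values in the others, the same filter that removes DRSs tends to remove it as well.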


Author(s):  
Weijun Wang ◽  
Stéphane Caro ◽  
Fouad Bennis ◽  
Oscar Brito Augusto

For a Multi-Objective Robust Optimization Problem (MOROP), it is important to obtain design solutions that are both optimal and robust. To find these solutions, the designer usually needs to set a threshold on the variation of the Performance Functions (PFs) before optimization, or to add the effects of uncertainties to the original PFs to generate a new robust Pareto front. In this paper, we divide a MOROP into two Multi-Objective Optimization Problems (MOOPs). One is the original MOOP; the other takes the Robustness Functions (RFs), the robust counterparts of the original PFs, as optimization objectives. Solving these two MOOPs separately yields two sets of solutions, namely the Pareto Performance Solutions (PP) and the Pareto Robustness Solutions (PR). Processing these two sets further, we obtain two types of solutions, namely the Pareto Robustness Solutions among the Pareto Performance Solutions (PR(PP)) and the Pareto Performance Solutions among the Pareto Robustness Solutions (PP(PR)). Furthermore, the intersection of PR(PP) and PP(PR) represents the intersection of PR and PP well. The designer can then choose good solutions by comparing the results of PR(PP) and PP(PR). Thanks to this method, we can find optimal and robust solutions without setting a threshold on the variation of the PFs or losing the initial Pareto front. Finally, an illustrative example highlights the contributions of the paper.
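A sketch of the nested construction in Python: a plain non-dominated filter applied twice, once with the Performance Functions and once with the Robustness Functions. PF and RF are hypothetical callables returning objective vectors for a design; minimization is assumed.

    import numpy as np

    def pareto_filter(candidates, values):
        # Keep the non-dominated candidates (minimization); `values` is a list
        # of objective vectors aligned with `candidates`.
        V = np.asarray(values, dtype=float)
        keep = []
        for i in range(len(candidates)):
            dominated = any(
                np.all(V[j] <= V[i]) and np.any(V[j] < V[i])
                for j in range(len(candidates)) if j != i
            )
            if not dominated:
                keep.append(candidates[i])
        return keep

    # Sketch of the construction described above (PF, RF hypothetical):
    # PP       = pareto_filter(designs, [PF(x) for x in designs])
    # PR       = pareto_filter(designs, [RF(x) for x in designs])
    # PR_of_PP = pareto_filter(PP, [RF(x) for x in PP])   # PR(PP)
    # PP_of_PR = pareto_filter(PR, [PF(x) for x in PR])   # PP(PR)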


2011 ◽  
Vol 133 (6) ◽  
Author(s):  
W. Hu ◽  
M. Li ◽  
S. Azarm ◽  
A. Almansoori

Many engineering optimization problems are multi-objective, constrained and have uncertainty in their inputs. For such problems it is desirable to obtain solutions that are multi-objectively optimum and robust. A robust solution is one that as a result of input uncertainty has variations in its objective and constraint functions which are within an acceptable range. This paper presents a new approximation-assisted MORO (AA-MORO) technique with interval uncertainty. The technique is a significant improvement, in terms of computational effort, over previously reported MORO techniques. AA-MORO includes an upper-level problem that solves a multi-objective optimization problem whose feasible domain is iteratively restricted by constraint cuts determined by a lower-level optimization problem. AA-MORO also includes an online approximation wherein optimal solutions from the upper- and lower-level optimization problems are used to iteratively improve an approximation to the objective and constraint functions. Several examples are used to test the proposed technique. The test results show that the proposed AA-MORO reasonably approximates solutions obtained from previous MORO approaches while its computational effort, in terms of the number of function calls, is significantly reduced compared to the previous approaches.
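The notion of robustness used above (objective variations within an acceptable range under interval input uncertainty) can be sketched with a simple corner-enumeration check; AA-MORO itself determines the worst case through a lower-level optimization rather than enumeration, so this is only an illustration.

    import itertools
    import numpy as np

    def is_robust(f, x, half_widths, tol):
        # f: callable returning the objective vector at a design point.
        # half_widths: interval half-widths of the uncertain inputs around x.
        # Accept x if, over the corners of the uncertainty box, the variation
        # of every objective stays within tol (an acceptable-range check).
        x = np.asarray(x, dtype=float)
        corners = itertools.product(*[(-h, h) for h in half_widths])
        F = np.array([f(x + np.array(c)) for c in corners])
        variation = F.max(axis=0) - F.min(axis=0)
        return bool(np.all(variation <= tol))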


2012 ◽  
Vol 433-440 ◽  
pp. 2808-2816
Author(s):  
Jian Jin Zheng ◽  
You Shen Xia

This paper presents a new interactive neural network for solving constrained multi-objective optimization problems. The constrained multi-objective optimization problem is reformulated into two constrained single-objective optimization problems, and two neural networks are designed to obtain the optimal weight and the optimal solution of the two optimization problems, respectively. The proposed algorithm has a low computational complexity and is easy to implement. Moreover, the proposed algorithm is successfully applied to the design of digital filters. Computational results illustrate the good performance of the proposed algorithm.
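The abstract does not spell out the reformulation; purely as a generic illustration of collapsing a constrained multi-objective problem into a constrained single-objective one with a weight vector (not the authors' neural-network method), a weighted-sum scalarization solved by a standard NLP routine might look as follows.

    import numpy as np
    from scipy.optimize import minimize

    def weighted_single_objective(objectives, weights, constraints, x0):
        # objectives: list of callables f_i(x); weights: fixed weight vector.
        # Minimizes sum_i w_i * f_i(x) subject to the given constraints.
        def scalar(x):
            return sum(w * f(x) for w, f in zip(weights, objectives))
        return minimize(scalar, x0, constraints=constraints)

    # Hypothetical usage with two objectives f1, f2 and a constraint g(x) >= 0:
    # res = weighted_single_objective([f1, f2], [0.5, 0.5],
    #                                 [{"type": "ineq", "fun": g}], x0=np.zeros(3))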

