Handling Constrained Multi-Objective Optimization with Objective Space Mapping to Decision Space Based on Extreme Learning Machine

Author(s):  
Hao Zhang ◽  
Ku Tao ◽  
Lianbo Ma ◽  
Yibo Yong


Author(s):  
Bin Zhang ◽  
Kamran Shafi ◽  
Hussein Abbass

A number of benchmark problems exist for evaluating multi-objective evolutionary algorithms (MOEAs) in the objective space. However, decision space performance analysis is a recent and relatively less explored topic in evolutionary multi-objective optimization research. Among other implications, such analysis can lead to the design of more realistic test problems, a better understanding of optimal and robust design areas, and the design and evaluation of knowledge-based optimization algorithms. This paper complements the existing research in this area and proposes a new method to generate multi-objective optimization test problems with clustered Pareto sets located in hyper-rectangular areas of the decision space. The test problem is parameterized to control the number of decision variables, the number and position of optimal areas in the decision space, and the modality of the fitness landscape. Three leading MOEAs (NSGA-II, NSGA-III, and MOEA/D) are evaluated on a number of problem instances with varying characteristics. A new metric is proposed that measures the performance of algorithms in terms of their coverage of the optimal areas in the decision space. The empirical analysis presented in this research shows that decision space performance may not necessarily be reflective of objective space performance and that all algorithms are sensitive to the population size parameter on the new test problems.
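The coverage idea can be illustrated with a minimal sketch (Python; the function name, the box representation, and the example data are illustrative assumptions and do not reproduce the paper's exact metric): given the hyper-rectangular optimal areas used to build the test problem, count the fraction of areas that contain at least one solution of the final population.

```python
import numpy as np

def decision_space_coverage(population, optimal_boxes):
    """Fraction of optimal hyper-rectangles covered by the population.

    population   : (n_solutions, n_vars) array of decision vectors.
    optimal_boxes: list of (lower, upper) pairs, one per optimal area,
                   each an (n_vars,) array bounding a hyper-rectangle.
    """
    covered = 0
    for lower, upper in optimal_boxes:
        inside = np.all((population >= lower) & (population <= upper), axis=1)
        if inside.any():
            covered += 1
    return covered / len(optimal_boxes)

# Toy example: two optimal boxes in a 2-D decision space
boxes = [(np.array([0.0, 0.0]), np.array([0.2, 0.2])),
         (np.array([0.8, 0.8]), np.array([1.0, 1.0]))]
pop = np.array([[0.1, 0.15], [0.5, 0.5], [0.9, 0.95]])
print(decision_space_coverage(pop, boxes))  # 1.0: both optimal areas reached
```

A coverage of 1.0 means every optimal area in the decision space was reached by at least one solution, irrespective of how well the same population approximates the Pareto front in the objective space, which is why the two performance views can diverge.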


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 999
Author(s):  
Alberto Pajares ◽  
Xavier Blasco ◽  
Juan Manuel Herrero ◽  
Miguel A. Martínez

In a multi-objective optimization problem, in addition to the optimal solutions, multimodal and/or nearly optimal alternatives can also provide useful additional information for the decision maker. However, obtaining all nearly optimal solutions entails an excessive number of alternatives. Therefore, to consider the nearly optimal solutions, it is convenient to obtain a reduced set that focuses on the potentially useful alternatives. These solutions are the alternatives that are close to the optimal solutions in the objective space but differ significantly in the decision space. To characterize this set, it is essential to analyze the decision and objective spaces simultaneously. One of the crucial points in an evolutionary multi-objective optimization algorithm is the archiving strategy, which is in charge of keeping the solution set, called the archive, updated during the optimization process. The motivation of this work is to analyze the three existing archiving strategies proposed in the literature (ArchiveUpdatePQ,ϵDxy; Archive_nevMOGA; and targetSelect) that aim to characterize the potentially useful solutions. The archivers are evaluated on two benchmarks and on a real engineering example. The contribution clearly shows the main differences among the three archivers. This analysis is useful for the design of evolutionary algorithms that consider nearly optimal solutions.
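As a rough sketch of the decision such an archiver must make (this is not a reproduction of ArchiveUpdatePQ,ϵDxy, Archive_nevMOGA, or targetSelect; the thresholds eps and min_dx and the update rule are illustrative assumptions), a candidate is kept only if it is nearly optimal in the objective space and is not dominated by an archived solution in its decision-space neighbourhood:

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization)."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def update_archive(archive, x_new, f_new, eps=0.05, min_dx=0.1):
    """Illustrative archiver for nearly optimal, decision-space-diverse solutions.

    archive : list of (x, f) tuples (decision vector, objective vector).
    eps     : objective-space tolerance defining "nearly optimal".
    min_dx  : decision-space radius within which solutions count as neighbours.
    """
    # Reject candidates that are at least eps worse than some archived
    # solution in every objective: they are not even nearly optimal.
    if any(dominates(f + eps, f_new) for _, f in archive):
        return archive
    # Closeness to optimality is judged against decision-space neighbours only,
    # so distant alternatives with similar objective values are both kept.
    neighbours = [f for x, f in archive if np.linalg.norm(x - x_new) < min_dx]
    if any(dominates(f, f_new) for f in neighbours):
        return archive                     # a nearby archived solution is better
    # Keep everything except nearby solutions the candidate dominates.
    kept = [(x, f) for x, f in archive
            if np.linalg.norm(x - x_new) >= min_dx or not dominates(f_new, f)]
    kept.append((x_new, f_new))
    return kept
```

The point the sketch tries to capture is that acceptance depends on both spaces at once: closeness to optimality is judged in the objective space, while redundancy is judged in the decision space.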


Mathematics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 129 ◽  
Author(s):  
Yan Pei ◽  
Jun Yu ◽  
Hideyuki Takagi

We propose a method to accelerate evolutionary multi-objective optimization (EMO) search using an estimated convergence point. The Pareto improvement from the previous generation to the current generation provides information about promising Pareto solution areas in both the objective space and the parameter space. We use this information to construct a set of moving vectors and estimate a non-dominated Pareto point from these moving vectors. In this work, we investigate different methods for constructing the moving vectors and use the convergence point estimated from them to accelerate the EMO search. From our evaluation results, we found that the landscape of Pareto improvement has a uni-modal distribution characteristic in the objective space and a multi-modal distribution characteristic in the parameter space. Our proposed method can enhance the EMO search when the landscape of Pareto improvement has a uni-modal distribution characteristic in the parameter space and, by chance, can also do so when it has a multi-modal distribution characteristic in the parameter space. The proposed methods not only obtain more Pareto solutions than the conventional non-dominated sorting genetic algorithm II (NSGA-II), but also increase the diversity of the Pareto solutions. This indicates that our proposed method can enhance the search capability of EMO in terms of both Pareto dominance and solution diversity. We also found that the method of constructing the moving vectors is a primary factor in the success of our proposed method. We analyze and discuss this method with several evaluation metrics and statistical tests. The proposed method has the potential to enhance EMO by embedding deterministic learning methods into stochastic optimization algorithms.
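A minimal sketch of the convergence-point estimate (assuming the moving vectors are formed from matched parent/offspring pairs; the paper's specific vector-construction variants are not reproduced here): each moving vector defines a line in the parameter space, and the estimated convergence point is the least-squares point closest to all of those lines.

```python
import numpy as np

def estimate_convergence_point(parents, offspring):
    """Least-squares point closest to the lines defined by moving vectors.

    parents, offspring : (n_pairs, n_vars) arrays; each row pair defines a
    moving vector from a parent towards the offspring that improved on it.
    """
    n_vars = parents.shape[1]
    A = np.zeros((n_vars, n_vars))
    b = np.zeros(n_vars)
    for p, q in zip(parents, offspring):
        d = q - p
        d = d / np.linalg.norm(d)               # unit direction of the moving vector
        proj = np.eye(n_vars) - np.outer(d, d)  # projector orthogonal to the line
        A += proj
        b += proj @ p
    # lstsq handles the (near-)singular case of almost parallel moving vectors
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Toy example: two moving vectors that both point towards (0.5, 0.5)
parents = np.array([[0.0, 0.0], [1.0, 0.0]])
offspring = np.array([[0.25, 0.25], [0.75, 0.25]])
print(estimate_convergence_point(parents, offspring))  # approx. [0.5, 0.5]
```

When the Pareto improvement landscape is multi-modal in the parameter space, the moving vectors point towards different basins and a single least-squares point may fall between them, which is consistent with the mixed results reported for that case.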

