Evolutionary many-objective algorithm based on fractional dominance relation and improved objective space decomposition strategy

2021 ◽  
Vol 60 ◽  
pp. 100776
Author(s):  
Wenbo Qiu ◽  
Jianghan Zhu ◽  
Guohua Wu ◽  
Mingfeng Fan ◽  
Ponnuthurai Nagaratnam Suganthan
2021 ◽  
pp. 1-26
Author(s):  
Ruochen Liu ◽  
Jianxia Li ◽  
Yaochu Jin ◽  
Licheng Jiao

Dynamic multi-objective optimization deals with the simultaneous optimization of multiple conflicting objectives that change over time. Several response strategies for dynamic optimization have been proposed, but none of them works well for all types of environmental changes. In this paper, we propose a new dynamic multi-objective evolutionary algorithm based on objective space decomposition, in which the maxi-min fitness function is adopted for selection and a self-adaptive response strategy that integrates a number of different response strategies is designed to handle unknown environmental changes. The self-adaptive response strategy adaptively selects one of the strategies according to its contribution to tracking performance in previous environments. Experimental results indicate that the proposed algorithm is competitive and promising for solving various dynamic multi-objective optimization problems (DMOPs) in the presence of unknown environmental changes. The proposed algorithm is also applied to the parameter tuning of a proportional-integral-derivative (PID) controller for a dynamic system, where it obtains better control performance.
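
The two ingredients named in the abstract, the maxi-min fitness used for selection and a contribution-weighted choice among response strategies, can be written compactly. The sketch below (plain NumPy, minimization assumed) is a generic illustration of both ideas; the function names and the contribution bookkeeping are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def maximin_fitness(objs):
    """Maxi-min fitness for a population of objective vectors (minimization).

    objs: (N, M) array, row i holding the M objective values of individual i.
    fitness[i] = max_{j != i} min_k (objs[i, k] - objs[j, k]);
    a negative value means individual i is non-dominated, and smaller is better.
    """
    n = objs.shape[0]
    fitness = np.empty(n)
    for i in range(n):
        diff = objs[i] - np.delete(objs, i, axis=0)   # (N-1, M) pairwise differences
        fitness[i] = np.max(np.min(diff, axis=1))     # worst rival of the best per-objective gap
    return fitness

def pick_response_strategy(contributions, rng=np.random.default_rng()):
    """Pick one response strategy with probability proportional to its accumulated
    contribution to tracking performance in past environments (hypothetical
    bookkeeping; the small epsilon keeps rarely used strategies selectable)."""
    weights = np.asarray(contributions, dtype=float) + 1e-6
    return rng.choice(len(weights), p=weights / weights.sum())
```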


Mathematics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 129 ◽  
Author(s):  
Yan Pei ◽  
Jun Yu ◽  
Hideyuki Takagi

We propose a method to accelerate evolutionary multi-objective optimization (EMO) search using an estimated convergence point. The Pareto improvement from the last generation to the current generation provides information about promising Pareto solution areas in both the objective space and the parameter space. We use this information to construct a set of moving vectors and estimate a non-dominated Pareto point from these moving vectors. In this work, we examine different methods for constructing the moving vectors and use the convergence point estimated from them to accelerate EMO search. From our evaluation results, we found that the landscape of Pareto improvement has a uni-modal distribution characteristic in the objective space and a multi-modal distribution characteristic in the parameter space. Our proposed method enhances EMO search when the landscape of Pareto improvement is uni-modal in the parameter space, and occasionally also does so when it is multi-modal in the parameter space. The proposed methods not only obtain more Pareto solutions than the conventional non-dominated sorting genetic algorithm II (NSGA-II), but also increase the diversity of the Pareto solutions. This indicates that our proposed method enhances the search capability of EMO in terms of both Pareto dominance and solution diversity. We also found that the method of constructing the moving vectors is a primary factor in the success of our proposed method. We analyze and discuss the method with several evaluation metrics and statistical tests. The proposed method has the potential to enhance EMO by embedding deterministic learning methods in stochastic optimization algorithms.
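
The central numerical step, estimating a single convergence point from the moving vectors between matched parent and offspring individuals, can be posed as a least-squares intersection of lines. The sketch below is a generic version of that idea, assuming the parent–offspring pairs are already matched; it is not the authors' exact estimator.

```python
import numpy as np

def estimate_convergence_point(parents, offspring):
    """Least-squares point closest to all lines defined by the moving vectors.

    parents, offspring: (N, D) arrays of matched individuals from the previous
    and current generation (parameter or objective space). Each pair defines a
    line through parents[i] along offspring[i] - parents[i]; the returned point
    minimizes the summed squared distance to these lines.
    """
    d = offspring - parents
    d = d / np.linalg.norm(d, axis=1, keepdims=True)   # unit direction of each moving vector
    dim = parents.shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for p, u in zip(parents, d):
        proj = np.eye(dim) - np.outer(u, u)            # projector orthogonal to the line
        A += proj
        b += proj @ p
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

The estimated point can then be injected into the population, or used to bias sampling toward the promising region, which is the acceleration mechanism the abstract describes.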


Author(s):  
Lu Chen ◽  
Handing Wang ◽  
Wenping Ma

Real-world optimization applications in complex systems always contain multiple factors to be optimized, which can be formulated as multi-objective optimization problems. These problems have been solved by many evolutionary algorithms such as MOEA/D, NSGA-III, and KnEA. However, when the numbers of decision variables and objectives increase, the computation costs of these algorithms become unaffordable. To reduce the high computation cost on large-scale many-objective optimization problems, we propose a two-stage framework. The first stage combines a multi-tasking optimization strategy with a bi-directional search strategy, reformulating the original problem as a multi-tasking optimization problem in the decision space to enhance convergence. To improve diversity, the second stage applies multi-tasking optimization to a number of sub-problems based on reference points in the objective space. To show the effectiveness of the proposed algorithm, we test it on the DTLZ and LSMOP problems and compare it with existing algorithms; it outperforms the compared algorithms in most cases and shows advantages in both convergence and diversity.
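
The second stage's decomposition into sub-problems based on reference points in the objective space is commonly realized with a Das–Dennis reference set and a nearest-direction association. The sketch below illustrates that generic mechanism; the function names and the cosine-based association rule are assumptions for illustration, not the paper's code.

```python
import numpy as np
from itertools import combinations

def das_dennis(m, p):
    """Das–Dennis simplex-lattice reference points: m objectives, p divisions."""
    pts = []
    for c in combinations(range(p + m - 1), m - 1):
        gaps = np.diff(np.array((-1,) + c + (p + m - 1,))) - 1   # composition of p into m parts
        pts.append(gaps / p)
    return np.array(pts)

def associate(objs, refs):
    """Assign each (normalized, strictly positive) objective vector to the reference
    direction with the largest cosine similarity, defining one sub-problem per
    reference point."""
    o = objs / np.linalg.norm(objs, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    return np.argmax(o @ r.T, axis=1)
```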

