Large-Scale Global Optimization Using a Binary Genetic Algorithm with EDA-Based Decomposition

Author(s):
Evgenii Sopov
2021

Author(s):
Xin-long Luo,
Hang Xiao

Abstract The global minimum point of an optimization problem is of interest in engineering fields, and it is difficult to find, especially for a nonconvex large-scale optimization problem. In this article, we consider the continuation Newton method with the deflation technique and quasi-genetic evolution for this problem. Firstly, we use the continuation Newton method with the deflation technique to find as many stationary points as possible from several deterministic initial points. Then, we use those stationary points as the initial evolutionary seeds of the quasi-genetic algorithm. After it evolves for several generations, we obtain a suboptimal point of the optimization problem. Finally, we use the continuation Newton method with this suboptimal point as the initial point to obtain a stationary point, and we output the better of this final stationary point and the suboptimal point found by the quasi-genetic algorithm. We compare the proposed method with the multi-start method (the built-in subroutine GlobalSearch.m of the MATLAB R2020a environment) and the differential evolution algorithm (the DE method, the subroutine de.m of the MATLAB Central File Exchange 2021), respectively. Numerical results show that the proposed method performs well on large-scale global optimization problems, especially those that are difficult to solve with known global optimization methods.
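To make the three-stage workflow of this abstract concrete, the following is a minimal Python sketch: local solves from several starting points, a quasi-genetic refinement seeded with the found stationary points, and a final local polish. It is an illustrative assumption, not the authors' implementation: scipy.optimize.minimize with BFGS stands in for the continuation Newton method with deflation, and the names (rastrigin, local_stationary_points, quasi_genetic, hybrid_global_search) and all parameter values are hypothetical.

```python
# Minimal sketch of the hybrid "local solver + quasi-genetic + local polish" idea.
# BFGS is a stand-in for the paper's continuation Newton method with deflation.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Hypothetical nonconvex test objective; any smooth multimodal function works.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def local_stationary_points(f, starts):
    """Stage 1: run a local Newton-type solver from several initial points."""
    return [minimize(f, x0, method="BFGS").x for x0 in starts]

def quasi_genetic(f, seeds, generations=50, rng=np.random.default_rng(0)):
    """Stage 2: evolve the found stationary points with a simple GA-like loop."""
    pop = np.array(seeds)
    for _ in range(generations):
        # Crossover: blend randomly chosen pairs of parents.
        parents = pop[rng.integers(len(pop), size=(len(pop), 2))]
        alpha = rng.random((len(pop), 1))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
        # Mutation: small Gaussian perturbation.
        children += 0.1 * rng.standard_normal(children.shape)
        # Elitist selection over the union of parents and children.
        pool = np.vstack([pop, children])
        pop = pool[np.argsort([f(x) for x in pool])[: len(pop)]]
    return pop[0]  # suboptimal point found by the quasi-genetic stage

def hybrid_global_search(f, dim, n_starts=20, rng=np.random.default_rng(1)):
    starts = rng.uniform(-5, 5, size=(n_starts, dim))
    seeds = local_stationary_points(f, starts)
    x_ga = quasi_genetic(f, seeds)
    # Stage 3: polish the suboptimal point with the local solver again,
    # then output the better of the two candidates.
    x_final = minimize(f, x_ga, method="BFGS").x
    return min([x_ga, x_final], key=f)

x_best = hybrid_global_search(rastrigin, dim=10)
print(x_best, rastrigin(x_best))
```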


2021
Vol 1821 (1)
pp. 012055
Author(s):
M L Shahab,
F Azizi,
B A Sanjoyo,
M I Irawan,
N Hidayat,
...

Author(s):
Kyle Robert Harrison,
Azam Asilian Bidgoli,
Shahryar Rahnamayan,
Kalyanmoy Deb

Algorithms
2021
Vol 14 (5)
pp. 146
Author(s):
Aleksei Vakhnin,
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional, and they are known as large-scale global optimization (LSGO) problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller and easier subproblems by grouping the objective variables. The efficiency of CC strongly depends on the size of the groups and on the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in the subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the CEC'13 LSGO benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics. The experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
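As a rough illustration of the cooperative-coevolution decomposition described above, the sketch below optimizes each group of variables in turn while holding the rest of a shared context vector fixed, and regroups the variables with a changing group size as the run proceeds. It is a hedged simplification under stated assumptions: the subcomponent optimizer is a plain random-perturbation hill climber rather than SHADE, the regrouping rule simply cycles through a few sizes rather than adapting them as iCC does, and the names (sphere, optimize_group, cc_optimize) and all parameters are hypothetical.

```python
# Simplified cooperative coevolution with a group size that changes during the run.
# The subcomponent optimizer is a trivial hill climber standing in for SHADE.
import numpy as np

def sphere(x):
    # Separable toy objective; the LSGO CEC'13 benchmarks are far harder.
    return float(np.sum(x**2))

def optimize_group(f, context, idx, iters, rng):
    """Improve only the variables in `idx`, keeping the rest of `context` fixed."""
    best = context.copy()
    for _ in range(iters):
        trial = best.copy()
        trial[idx] += 0.1 * rng.standard_normal(len(idx))
        if f(trial) < f(best):
            best = trial
    return best

def cc_optimize(f, dim=100, cycles=20, group_sizes=(5, 10, 25), rng=None):
    rng = rng or np.random.default_rng(0)
    context = rng.uniform(-5, 5, dim)                 # shared context vector
    for cycle in range(cycles):
        size = group_sizes[cycle % len(group_sizes)]  # changing group size
        perm = rng.permutation(dim)                   # random regrouping
        groups = [perm[i:i + size] for i in range(0, dim, size)]
        for idx in groups:                            # round-robin over subcomponents
            context = optimize_group(f, context, idx, iters=50, rng=rng)
    return context

x = cc_optimize(sphere)
print(sphere(x))
```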

