A genetic algorithm for solving large scale global optimization problems

2021 ◽  
Vol 1821 (1) ◽  
pp. 012055
Author(s):  
M L Shahab ◽  
F Azizi ◽  
B A Sanjoyo ◽  
M I Irawan ◽  
N Hidayat ◽  
...  
2021 ◽  
Author(s):  
Xin-long Luo ◽  
Hang Xiao

Abstract The global minimum point of an optimization problem is of interest in engineering fields, but it is difficult to find, especially for a nonconvex large-scale optimization problem. In this article, we consider the continuation Newton method with the deflation technique and quasi-genetic evolution for this problem. Firstly, we use the continuation Newton method with the deflation technique to find as many stationary points as possible from several determined initial points. Then, we use those stationary points as the initial evolutionary seeds of the quasi-genetic algorithm. After it evolves for several generations, we obtain a suboptimal point of the optimization problem. Next, we use the continuation Newton method with this suboptimal point as the initial point to obtain a stationary point, and output the minimizer between this final stationary point and the suboptimal point found by the quasi-genetic algorithm. Finally, we compare the proposed method with the multi-start method (the built-in subroutine GlobalSearch.m of the MATLAB R2020a environment) and the differential evolution algorithm (the DE method, the subroutine de.m of the MATLAB Central File Exchange 2021). Numerical results show that the proposed method performs well on large-scale global optimization problems, especially those that are difficult to solve with the known global optimization methods.
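
The pipeline described above can be illustrated with a minimal sketch: deflated local searches collect several stationary points, which then seed a small genetic-style refinement, followed by one final local search from the suboptimal point. This is not the authors' continuation Newton implementation; the deflation factor, the simple gradient-descent local search, the test function, and all parameters below are illustrative assumptions.

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference gradient (assumed; the paper works with exact derivatives)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def deflated_descent(f, x0, found, iters=200, lr=1e-2, radius=1e-3):
    """Gradient-based local search whose step is inflated near already-found
    stationary points (a crude stand-in for the deflation technique)."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(f, x)
        for xf in found:
            d = np.linalg.norm(x - xf)
            g = g * (1.0 + radius / max(d, 1e-12))  # deflation-style amplification
        x = x - lr * g
    return x

def quasi_genetic_refine(f, seeds, gens=50, sigma=0.1, rng=np.random.default_rng(0)):
    """Very small GA-style refinement: recombine pairs of seeds and keep the best."""
    pop = list(seeds)
    for _ in range(gens):
        i, j = rng.integers(0, len(pop), 2)
        alpha = rng.random()
        child = alpha * pop[i] + (1 - alpha) * pop[j] + sigma * rng.standard_normal(pop[0].size)
        worst = max(range(len(pop)), key=lambda k: f(pop[k]))
        if f(child) < f(pop[worst]):
            pop[worst] = child
    return min(pop, key=f)

# Example on a simple multimodal test function (Styblinski-Tang; assumed for illustration).
f = lambda x: float(np.sum(x**4 - 16 * x**2 + 5 * x) / 2.0)
found = []
for x0 in [np.array([3.0, 3.0]), np.array([-3.0, 3.0]), np.array([3.0, -3.0])]:
    found.append(deflated_descent(f, x0, found))     # step 1: collect stationary points
x_sub = quasi_genetic_refine(f, found)               # step 2: quasi-genetic evolution
x_fin = deflated_descent(f, x_sub, found)            # step 3: final local search
best = min([x_sub, x_fin], key=f)                    # output the better of the two
```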


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional; they are known as “large-scale global optimization (LSGO)” problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller and easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the group size and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC’13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics. The experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
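
A minimal sketch of the cooperative-coevolution loop with a dynamically changing number of subcomponents, in the spirit of iCC, is shown below. The subcomponent optimizer here is plain DE/rand/1/bin rather than SHADE, and the group-resizing schedule is a simple placeholder, not the paper's adaptation rule; bounds and parameters are assumed.

```python
import numpy as np

def de_step(pop, fit, f_group, F=0.5, CR=0.9, rng=np.random.default_rng(1)):
    """One generation of DE/rand/1/bin on a sub-population (stand-in for SHADE)."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = rng.choice(n, 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True
        trial = np.where(cross, mutant, pop[i])
        ft = f_group(trial)
        if ft <= fit[i]:
            pop[i], fit[i] = trial, ft
    return pop, fit

def icc_optimize(f, dim=100, groups_schedule=(2, 4, 10), cycles=3, pop_size=20,
                 rng=np.random.default_rng(1)):
    """CC loop: optimize one variable group at a time inside a context vector,
    changing the number of groups as the run proceeds (resizing rule assumed)."""
    context = rng.uniform(-5, 5, dim)                  # best-so-far full solution
    for n_groups in groups_schedule:
        idx_groups = np.array_split(rng.permutation(dim), n_groups)
        for _ in range(cycles):
            for idx in idx_groups:
                def f_group(sub, idx=idx):
                    x = context.copy(); x[idx] = sub   # evaluate subcomponent in context
                    return f(x)
                pop = rng.uniform(-5, 5, (pop_size, idx.size))
                fit = np.array([f_group(p) for p in pop])
                for _ in range(20):
                    pop, fit = de_step(pop, fit, f_group, rng=rng)
                context[idx] = pop[np.argmin(fit)]     # write best subcomponent back
    return context, f(context)

# Usage on a separable test function (illustrative only).
sphere = lambda x: float(np.sum(x**2))
x_best, f_best = icc_optimize(sphere, dim=100)
```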


Author(s):  
Bernard K.S. Cheung

Genetic algorithms have been applied to solve various types of large-scale, NP-hard optimization problems. Many researchers have investigated their global convergence properties using Schema Theory, Markov chains, etc. A more realistic approach, however, is to estimate the probability of success in finding the global optimal solution within a prescribed number of generations under certain function landscapes. Further investigation reveals that the inherent weaknesses that affect the algorithm's performance can be remedied, while its efficiency can be significantly enhanced through the design of an adaptive scheme that integrates the crossover, mutation, and selection operations. The advance of information technology and extensive corporate globalization create great challenges for the solution of modern supply chain models, which have become increasingly complex and formidable in size. Meta-heuristic methods have to be employed to obtain near-optimal solutions. Recently, a genetic algorithm has been reported to solve these problems satisfactorily, and there are reasons for this.
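
The abstract does not specify the adaptive scheme, so the sketch below is only a generic illustration of a genetic algorithm whose mutation rate is adjusted from recent operator success alongside selection and crossover. The encoding, the "1/5-success"-style adaptation rule, and the toy fitness function are all assumptions, not the author's method.

```python
import numpy as np

def adaptive_ga(fitness, dim=50, pop_size=40, gens=200, rng=np.random.default_rng(2)):
    pop = rng.integers(0, 2, (pop_size, dim))
    fit = np.array([fitness(p) for p in pop])
    p_cx, p_mut = 0.8, 1.0 / dim                      # initial operator rates (assumed)
    for _ in range(gens):
        # Tournament selection of two parents.
        cand = rng.integers(0, pop_size, 4)
        p1 = pop[cand[0]] if fit[cand[0]] >= fit[cand[1]] else pop[cand[1]]
        p2 = pop[cand[2]] if fit[cand[2]] >= fit[cand[3]] else pop[cand[3]]
        # Uniform crossover and bit-flip mutation with the current adaptive rates.
        mask = rng.random(dim) < 0.5
        child = np.where(mask, p1, p2) if rng.random() < p_cx else p1.copy()
        flip = rng.random(dim) < p_mut
        child = np.where(flip, 1 - child, child)
        fc = fitness(child)
        # Replace the worst individual; lower the mutation rate on success,
        # raise it on failure (a simple placeholder adaptation rule).
        worst = np.argmin(fit)
        if fc > fit[worst]:
            pop[worst], fit[worst] = child, fc
            p_mut = max(p_mut * 0.95, 1.0 / (2 * dim))
        else:
            p_mut = min(p_mut * 1.05, 0.1)
    return pop[np.argmax(fit)], fit.max()

onemax = lambda bits: int(bits.sum())                 # toy maximization problem
best, best_fit = adaptive_ga(onemax)
```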


Mathematics ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 758
Author(s):  
Andrea Ferigo ◽  
Giovanni Iacca

The ever-increasing complexity of industrial and engineering problems nowadays poses a number of optimization problems characterized by thousands, if not millions, of variables. For instance, very large-scale problems can be found in chemical and material engineering, networked systems, logistics, and scheduling. Recently, Deb and Myburgh proposed an evolutionary algorithm capable of handling a scheduling optimization problem with a staggering number of variables: one billion. However, one important limitation of this algorithm is its memory consumption, which is on the order of 120 GB. Here, we follow up on this research by applying to the same problem a GPU-enabled “compact” Genetic Algorithm, i.e., an Estimation of Distribution Algorithm that, instead of using an actual population of candidate solutions, only requires and adapts a probabilistic model of their distribution in the search space. We also introduce a smart initialization technique and custom operators to guide the search towards feasible solutions. Leveraging the compact optimization concept, we show how such an algorithm can efficiently optimize very large-scale problems with millions of variables using limited memory and processing power. To complete our analysis, we report the results of the algorithm on very large-scale instances of the OneMax problem.
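
To illustrate the probability-vector idea the paper builds on, here is a minimal CPU sketch of the standard compact Genetic Algorithm (cGA) on OneMax: the only state kept in memory is one Bernoulli probability per bit, which is nudged toward the winner of each pairwise comparison. The authors' GPU-enabled variant, smart initialization, and feasibility-preserving operators are not reproduced; the virtual population size and budget below are assumptions.

```python
import numpy as np

def cga_onemax(n_bits=1000, virtual_pop=200, max_evals=200_000,
               rng=np.random.default_rng(3)):
    p = np.full(n_bits, 0.5)                     # probabilistic model: one Bernoulli per bit
    best, best_fit, evals = None, -1, 0
    while evals < max_evals and not np.all((p < 1e-3) | (p > 1 - 1e-3)):
        # Sample two candidate solutions from the model (no explicit population).
        a = (rng.random(n_bits) < p).astype(np.int8)
        b = (rng.random(n_bits) < p).astype(np.int8)
        fa, fb = a.sum(), b.sum()                # OneMax fitness = number of ones
        evals += 2
        winner, loser = (a, b) if fa >= fb else (b, a)
        # Shift the model toward the winner wherever the two candidates disagree.
        p = np.clip(p + (winner - loser) / virtual_pop, 0.0, 1.0)
        if max(fa, fb) > best_fit:
            best_fit = int(max(fa, fb))
            best = winner.copy()
    return best, best_fit

best, best_fit = cga_onemax(n_bits=1000)
```

The memory footprint is a single float per variable, which is what makes the compact approach attractive for the billion-variable regime discussed in the abstract.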


2017 ◽  
Vol 2017 ◽  
pp. 1-18 ◽  
Author(s):  
Ali Wagdy Mohamed ◽  
Abdulaziz S. Almazyad

This paper presents a Differential Evolution algorithm for solving high-dimensional optimization problems over continuous spaces. The proposed algorithm, namely ANDE, introduces a new triangular mutation rule based on the convex combination vector of the triplet defined by three randomly chosen vectors and the difference vectors between the best, better, and worst individuals among the three. The mutation rule is combined with the basic mutation strategy DE/rand/1/bin, and the new triangular mutation rule is applied with a probability of 2/3, since it has both exploration ability and exploitation tendency. Furthermore, we propose a novel self-adaptive scheme for gradually changing the values of the crossover rate that can benefit from the past experience of the individuals in the search space during the evolution process, which in turn can considerably balance the common trade-off between population diversity and convergence speed. The proposed algorithm has been evaluated on the 20 standard high-dimensional benchmark numerical optimization problems of the IEEE CEC-2010 Special Session and Competition on Large Scale Global Optimization. Comparison results between ANDE and its versions and the other seven state-of-the-art evolutionary algorithms tested on this suite indicate that the proposed algorithm and its two versions are highly competitive for solving large-scale global optimization problems.
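
A simplified sketch of one generation of this scheme follows: the triangular mutation is applied with probability 2/3 and DE/rand/1 otherwise, followed by binomial crossover and greedy selection, as the abstract states. The convex-combination weights, the single scale factor, and the fixed crossover rate are simplifications; ANDE's self-adaptive crossover scheme is not reproduced here.

```python
import numpy as np

def triangular_de_generation(pop, fit, f, F=0.7, CR=0.9, rng=np.random.default_rng(4)):
    n, d = pop.shape
    for i in range(n):
        r = rng.choice(n, 3, replace=False)
        order = r[np.argsort(fit[r])]              # best, better, worst of the triplet
        xb, xm, xw = pop[order[0]], pop[order[1]], pop[order[2]]
        if rng.random() < 2.0 / 3.0:
            # Triangular mutation: convex combination of the triplet plus the
            # difference vectors between best/better/worst (weights assumed).
            xc = 0.5 * xb + 0.3 * xm + 0.2 * xw
            v = xc + F * (xb - xm) + F * (xb - xw) + F * (xm - xw)
        else:
            # Basic DE/rand/1 mutation.
            v = pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
        # Binomial crossover.
        mask = rng.random(d) < CR
        mask[rng.integers(d)] = True
        trial = np.where(mask, v, pop[i])
        ft = f(trial)
        if ft <= fit[i]:                           # greedy selection
            pop[i], fit[i] = trial, ft
    return pop, fit

# Usage on a shifted sphere function (illustrative only).
rng = np.random.default_rng(4)
f = lambda x: float(np.sum((x - 1.0) ** 2))
pop = rng.uniform(-5, 5, (30, 50))
fit = np.array([f(p) for p in pop])
for _ in range(300):
    pop, fit = triangular_de_generation(pop, fit, f, rng=rng)
```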

