Application domain study of evolutionary algorithms in optimization problems

Author(s):  
P. Caamano ◽  
F. Bellas ◽  
J. A. Becerra ◽  
R. J. Duro

Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional, and they are known as "large-scale global optimization (LSGO)" problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework that decomposes a large-scale problem into smaller, easier subproblems by grouping the objective variables. The efficiency of CC strongly depends on the group size and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in the subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC'13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics. The experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
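As a rough illustration of the cooperative coevolution loop described above, the following Python sketch decomposes a large-scale problem into variable groups and optimizes each subcomponent inside a shared context vector. It is not the authors' iCC-SHADE: SHADE's parameter adaptation and archive are replaced by a plain DE/rand/1 step, the group size stays fixed (iCC would resize it during the run), and the bounds, population size and toy objective are assumptions.

```python
# Minimal sketch of a Cooperative Coevolution (CC) decomposition loop.
# NOT the authors' iCC-SHADE: a simplified DE/rand/1 step with greedy
# acceptance against the best-known full solution stands in for SHADE,
# and the group size is kept fixed for brevity.
import numpy as np

def sphere(x):
    """Toy objective; the CEC'13 LSGO benchmarks would be used instead."""
    return float(np.sum(x ** 2))

def cc_optimize(f, dim=100, group_size=10, pop=30, cycles=20, seed=0):
    rng = np.random.default_rng(seed)
    context = rng.uniform(-5, 5, dim)            # best-known full solution
    best_f = f(context)

    for _ in range(cycles):
        # Random (re)grouping of variables into subcomponents.
        idx = rng.permutation(dim)
        groups = [idx[i:i + group_size] for i in range(0, dim, group_size)]

        for g in groups:
            # Subpopulation over this group's variables only.
            sub = rng.uniform(-5, 5, (pop, len(g)))
            sub[0] = context[g]                  # inject the context values
            for _ in range(10):                  # a few DE-style generations
                for i in range(pop):
                    a, b, c = sub[rng.choice(pop, 3, replace=False)]
                    trial_part = np.clip(a + 0.5 * (b - c), -5, 5)
                    trial = context.copy()
                    trial[g] = trial_part        # evaluate in full context
                    if f(trial) < best_f:
                        best_f = f(trial)
                        context = trial
                        sub[i] = trial_part
    return context, best_f

if __name__ == "__main__":
    _, value = cc_optimize(sphere)
    print("best objective:", value)
```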


1996 ◽  
Vol 4 (1) ◽  
pp. 1-32 ◽  
Author(s):  
Zbigniew Michalewicz ◽  
Marc Schoenauer

Evolutionary computation techniques have received a great deal of attention regarding their potential as optimization techniques for complex numerical functions. However, they have not produced a significant breakthrough in the area of nonlinear programming due to the fact that they have not addressed the issue of constraints in a systematic way. Only recently have several methods been proposed for handling nonlinear constraints by evolutionary algorithms for numerical optimization problems; however, these methods have several drawbacks, and the experimental results on many test cases have been disappointing. In this paper we (1) discuss difficulties connected with solving the general nonlinear programming problem; (2) survey several approaches that have emerged in the evolutionary computation community; and (3) provide a set of 11 interesting test cases that may serve as a handy reference for future methods.
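For orientation, the sketch below illustrates one of the constraint-handling families such surveys cover: a static penalty added to the objective inside a simple (mu+lambda) evolution strategy. The test function, penalty weight and strategy parameters are illustrative assumptions and are not taken from the paper's 11 test cases.

```python
# Sketch of static-penalty constraint handling in a (mu+lambda) evolution
# strategy. Problem, penalty weight and ES settings are illustrative only.
import numpy as np

def objective(x):
    return float(np.sum((x - 1.0) ** 2))

def constraints(x):
    # Constraints in g(x) <= 0 form; here a single nonlinear constraint:
    # x0^2 + x1^2 <= 1.
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])

def penalized(x, weight=1e3):
    violation = np.maximum(constraints(x), 0.0)
    return objective(x) + weight * float(np.sum(violation ** 2))

def es_penalty(dim=2, mu=10, lam=40, gens=200, sigma=0.3, seed=1):
    rng = np.random.default_rng(seed)
    parents = rng.uniform(-2, 2, (mu, dim))
    for _ in range(gens):
        # lambda offspring by Gaussian mutation of randomly chosen parents
        offspring = parents[rng.integers(0, mu, lam)] + rng.normal(0, sigma, (lam, dim))
        pool = np.vstack([parents, offspring])
        scores = np.array([penalized(x) for x in pool])
        parents = pool[np.argsort(scores)[:mu]]   # (mu+lambda) selection
    best = parents[0]
    return best, objective(best), constraints(best)

if __name__ == "__main__":
    x, f, g = es_penalty()
    print("x:", x, "f:", f, "max violation:", float(np.max(np.maximum(g, 0.0))))
```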


Author(s):  
Zhenkun Wang ◽  
Qingyan Li ◽  
Qite Yang ◽  
Hisao Ishibuchi

It has been acknowledged that dominance-resistant solutions (DRSs) extensively exist in the feasible region of multi-objective optimization problems. Recent studies show that DRSs can cause serious performance degradation of many multi-objective evolutionary algorithms (MOEAs). Consequently, various strategies (e.g., ε-dominance and the modified objective calculation) to eliminate DRSs have been proposed. However, these strategies may in turn cause algorithm inefficiency in other respects. We argue that these coping strategies prevent the algorithm from obtaining some boundary solutions of an extremely convex Pareto front (ECPF). That is, there is a dilemma between eliminating DRSs and preserving boundary solutions of the ECPF. To illustrate this dilemma, we propose a new multi-objective optimization test problem with the ECPF as well as DRSs. Using this test problem, we investigate the performance of six representative MOEAs in terms of boundary solution preservation and DRS elimination. The results reveal that it is quite challenging to distinguish between DRSs and boundary solutions of the ECPF.
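The additive ε-dominance relation mentioned above can be sketched as follows; the ε value and the toy objective vectors are assumptions, and the snippet is not the paper's test problem or any particular MOEA. It shows how ε-dominance filtering removes a dominance-resistant solution, and the same mechanism is what risks discarding boundary solutions of an extremely convex Pareto front.

```python
# Sketch of additive epsilon-dominance filtering (minimization).
# Epsilon and the toy vectors are illustrative assumptions.
import numpy as np

def eps_dominates(a, b, eps=0.05):
    """True if a epsilon-dominates b: a - eps is no worse than b in every
    objective and strictly better in at least one."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a - eps <= b) and np.any(a - eps < b))

def eps_nondominated(points, eps=0.05):
    """Keep the points not epsilon-dominated by any other point."""
    keep = []
    for i, p in enumerate(points):
        if not any(eps_dominates(q, p, eps) for j, q in enumerate(points) if j != i):
            keep.append(p)
    return keep

if __name__ == "__main__":
    # (0.0, 50.0) and (50.0, 0.0) behave like dominance-resistant solutions:
    # marginally better in one objective, extremely poor in the other.
    pts = [(0.0, 50.0), (0.03, 0.9), (0.5, 0.5), (0.9, 0.03), (50.0, 0.0)]
    # The filter drops both extreme points; an extremely convex front's
    # legitimate boundary solutions could be dropped in the same way.
    print(eps_nondominated(pts, eps=0.05))
```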


Author(s):  
Shufen Qin ◽  
Chan Li ◽  
Chaoli Sun ◽  
Guochen Zhang ◽  
Xiaobo Li

Surrogate-assisted evolutionary algorithms have received more and more attention for solving computationally expensive problems. However, model management still plays a significant role in the search for the optimal solution. In this paper, a new method is proposed to measure the approximation uncertainty, in which both the differences between a solution and its neighboring samples in the decision space and the ruggedness of the objective space in its neighborhood are considered. The proposed approximation uncertainty is utilized in the surrogate-assisted global search to select a solution for exact objective evaluation, improving the exploration capability of the global search. The approximated fitness value, in turn, is adopted as the infill criterion for the surrogate-assisted local search, which improves the exploitation capability by finding a solution as close as possible to the real optimum. The surrogate-assisted global and local searches are conducted in sequence at each generation to balance the exploration and exploitation capabilities of the method. The performance of the proposed method is evaluated on seven benchmark problems with 10, 20, 30 and 50 dimensions, and on one real-world application with 30 and 50 dimensions. The experimental results show that the proposed method is efficient for solving low- and medium-dimensional expensive optimization problems compared with six state-of-the-art surrogate-assisted evolutionary algorithms.
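The kind of approximation-uncertainty measure the abstract describes, combining decision-space distances to neighboring samples with the ruggedness of their objective values, might be sketched as below. The exact formula, weights and neighborhood size are assumptions rather than the paper's definition.

```python
# Loose sketch of a neighborhood-based approximation-uncertainty measure:
# (i) how far a candidate is from its nearest evaluated samples in the
# decision space and (ii) how rugged their objective values are.
# Formula, weights and k are assumptions, not the paper's definition.
import numpy as np

def approximation_uncertainty(candidate, X, y, k=5, w_dist=1.0, w_rough=1.0):
    """X: (n, d) evaluated samples, y: (n,) exact objective values."""
    d = np.linalg.norm(X - candidate, axis=1)
    nn = np.argsort(d)[:k]                  # k nearest evaluated samples
    dist_term = float(np.mean(d[nn]))       # sparsity around the candidate
    rough_term = float(np.std(y[nn]))       # ruggedness of the neighborhood
    return w_dist * dist_term + w_rough * rough_term

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, (40, 10))        # archive of exactly evaluated points
    y = np.sum(X ** 2, axis=1)
    # A global search would pick, among its candidates, the most uncertain one
    # for exact evaluation; a local search would instead pick the candidate
    # with the best surrogate-predicted fitness.
    cands = rng.uniform(-5, 5, (20, 10))
    u = [approximation_uncertainty(c, X, y) for c in cands]
    print("most uncertain candidate index:", int(np.argmax(u)))
```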


2021 ◽  
Vol 6 (4 (114)) ◽  
pp. 6-14
Author(s):  
Maan Afathi

The main purpose of using a hybrid evolutionary algorithm is to reach optimal values and achieve goals that traditional methods cannot. Because different evolutionary computation techniques have different advantages and capabilities, researchers integrate more than one algorithm into a hybrid form to increase their ability beyond what each achieves when working alone. In this paper, we propose a new algorithm that hybridizes the genetic algorithm (GA) and particle swarm optimization (PSO) with a fuzzy logic control (FLC) approach for function optimization. Fuzzy logic is applied to switch dynamically between the evolutionary algorithms in an attempt to improve performance. The HEF hybrid evolutionary algorithm is compared to GA, PSO, GAPSO, and PSOGA. The comparison uses a variety of test functions; in addition to strongly convex functions, these functions may or may not be uniformly distributed, and they are valuable for evaluating our approach. Runs of 500, 1000, and 1500 iterations were used for each function. The HEF algorithm's efficiency was tested on four functions. The new algorithm often gives the best solution; HEF was the best performer in 75 % of all the tests. This method is superior to conventional methods in terms of efficiency.
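A toy sketch of fuzzy-controlled switching between GA and PSO steps, in the spirit of the HEF idea, is given below. The single "improvement rate" input, the triangular memberships and both operators are simplified assumptions; the paper's actual FLC rules and operator settings are not reproduced.

```python
# Toy sketch of fuzzy-controlled switching between GA and PSO steps.
# Memberships, the improvement-rate input and both operators are simplified
# assumptions, not the HEF paper's FLC rules or settings.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def fuzzy_choose_pso(improvement):
    """Two-set fuzzy rule: LOW recent improvement favours switching to PSO,
    HIGH improvement favours keeping the GA step (triangular memberships,
    defuzzified here to a binary switch)."""
    low = max(0.0, 1.0 - improvement / 0.05)
    high = 1.0 - low
    return low > high

def hybrid(f=sphere, dim=10, pop=30, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    V = np.zeros((pop, dim))
    fit = np.array([f(x) for x in X])
    gbest = X[np.argmin(fit)].copy()
    prev_best = float(np.min(fit))

    for _ in range(gens):
        improvement = max(0.0, prev_best - float(np.min(fit))) / (abs(prev_best) + 1e-12)
        prev_best = float(np.min(fit))
        if fuzzy_choose_pso(improvement):
            # PSO-style update toward the global best (no personal bests here)
            V = 0.7 * V + 1.5 * rng.random((pop, dim)) * (gbest - X)
            X = np.clip(X + V, -5, 5)
        else:
            # GA-style step: tournament selection, arithmetic crossover, mutation
            i, j = rng.integers(0, pop, (2, pop))
            winners = np.where((fit[i] < fit[j])[:, None], X[i], X[j])
            alpha = rng.random((pop, 1))
            X = alpha * winners + (1 - alpha) * winners[rng.permutation(pop)]
            X = np.clip(X + rng.normal(0, 0.1, (pop, dim)), -5, 5)
        fit = np.array([f(x) for x in X])
        if float(np.min(fit)) < f(gbest):
            gbest = X[np.argmin(fit)].copy()
    return gbest, f(gbest)

if __name__ == "__main__":
    _, best = hybrid()
    print("best objective:", best)
```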


2018 ◽  
Vol 72 ◽  
pp. 14-29 ◽  
Author(s):  
Max de Castro Rodrigues ◽  
Solange Guimarães ◽  
Beatriz Souza Leite Pires de Lima
