An efficient and robust grey wolf optimizer algorithm for large-scale numerical optimization

2019 ◽ Vol 24 (2) ◽ pp. 997-1026 ◽ Author(s): Wen Long, Shaohong Cai, Jianjun Jiao, Mingzhu Tang

2018 ◽ Vol 38 ◽ pp. 251-266 ◽ Author(s): Lokesh Kumar Panwar, Srikanth Reddy K, Ashu Verma, B.K. Panigrahi, Rajesh Kumar

2018 ◽ Vol 2018 ◽ pp. 1-17 ◽ Author(s): Jun Wang, Pengcheng Luo, Xinwu Hu, Xiaonan Zhang

We propose a hybrid discrete grey wolf optimizer (HDGWO) in this paper to solve the weapon target assignment (WTA) problem, a kind of nonlinear integer programming problem. To make the original grey wolf optimizer (GWO), which was developed only for problems with a continuous solution space, applicable in this context, we first modify it by adopting a decimal integer encoding to represent solutions (wolves) and a modular position update method to update solutions within the discrete solution space. This yields a discrete grey wolf optimizer (DGWO), which we then combine with a local search algorithm (LSA) to obtain the HDGWO. We also introduce domain-specific knowledge into both the encoding method and the local search algorithm to compress the feasible solution space. Finally, we examine the feasibility and the scalability of the HDGWO by applying it to a benchmark case and to ten large-scale WTA problems, respectively. All results are compared with those of a discrete particle swarm optimization (DPSO) algorithm, a genetic algorithm with greedy eugenics (GAWGE), and an adaptive immune genetic algorithm (AIGA). The detailed analysis confirms the feasibility of the HDGWO on the benchmark case and demonstrates its scalability on large-scale WTA problems.
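As a rough illustration of the modular position update described above, the Python sketch below shows how a decimal-integer-encoded wolf can be moved toward the three leader wolves and then wrapped back into the discrete target range with a modulo operation. This is only a minimal reconstruction under our own assumptions (the function name, the averaging of the three leader-guided candidates, and the round-then-modulo step are illustrative choices), not the authors' HDGWO implementation.

import numpy as np

# Assumed encoding: a wolf is a vector of length n_weapons whose entries are
# target indices in {0, ..., n_targets - 1} (decimal integer encoding).
def modular_position_update(wolf, alpha, beta, delta, a, n_targets, rng):
    """Move one wolf toward the alpha, beta, and delta leaders, then wrap the
    result into the feasible target-index range with a modulo operation."""
    new_pos = np.zeros(wolf.shape, dtype=float)
    for leader in (alpha, beta, delta):
        r1 = rng.random(wolf.shape)
        r2 = rng.random(wolf.shape)
        A = 2.0 * a * r1 - a            # standard GWO coefficient vector
        C = 2.0 * r2
        D = np.abs(C * leader - wolf)   # distance to this leader
        new_pos += leader - A * D       # candidate guided by this leader
    new_pos /= 3.0                      # average of the three candidates
    # Round to integers and wrap into {0, ..., n_targets - 1}.
    return np.mod(np.rint(new_pos), n_targets).astype(int)

# Example usage: 5 weapons, 3 targets (purely illustrative data).
rng = np.random.default_rng(0)
wolf, alpha, beta, delta = (rng.integers(0, 3, size=5) for _ in range(4))
print(modular_position_update(wolf, alpha, beta, delta, a=1.5, n_targets=3, rng=rng))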


Complexity ◽ 2019 ◽ Vol 2019 ◽ pp. 1-18 ◽ Author(s): Qinghua Gu, Xuexian Li, Song Jiang

Most real-world optimization problems involve a large number of decision variables and are known as Large-Scale Global Optimization (LSGO) problems. Metaheuristic algorithms for solving such problems often suffer from the "curse of dimensionality." To overcome the weaknesses of the Grey Wolf Optimizer (GWO) on LSGO problems, three genetic operators are embedded into the standard GWO, yielding a Hybrid Genetic Grey Wolf Algorithm (HGGWA). First, the whole population is initialized with an Opposition-Based Learning strategy. Second, selection is performed in combination with an elite reservation (elitism) strategy. Then, the whole population is divided into several subpopulations for the crossover operation, based on dimensionality reduction and population partitioning, in order to increase population diversity. Finally, the elite individuals in the population are mutated to prevent the algorithm from falling into a local optimum. The performance of HGGWA is verified on ten benchmark functions, and the optimization results are compared with those of WOA, SSA, and ALO. On the CEC'2008 LSGO problems, HGGWA is compared against several state-of-the-art algorithms: CCPSO2, DEwSAcc, MLCC, and EPUS-PSO. Simulation results show that HGGWA achieves greatly improved convergence accuracy, demonstrating its effectiveness in solving LSGO problems.
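The Opposition-Based Learning (OBL) initialization mentioned above can be sketched in a few lines. The snippet below is a minimal, hedged illustration assuming the common OBL formulation (the opposite of a point x in [lb, ub] is lb + ub - x, and the fitter half of the union is kept); the function name, the uniform bounds, and the minimization convention are our assumptions rather than details taken from the HGGWA paper.

import numpy as np

def obl_initialize(pop_size, dim, lb, ub, fitness, rng=None):
    """Opposition-Based Learning initialization: sample a random population,
    build its opposite population lb + ub - X, and keep the best pop_size
    individuals from the union (minimization assumed)."""
    if rng is None:
        rng = np.random.default_rng()
    X = lb + (ub - lb) * rng.random((pop_size, dim))    # random population
    X_opp = lb + ub - X                                 # opposite population
    union = np.vstack([X, X_opp])
    scores = np.apply_along_axis(fitness, 1, union)     # evaluate all 2N points
    best = np.argsort(scores)[:pop_size]                # indices of the fittest
    return union[best]

# Example usage on the sphere function (minimization).
sphere = lambda x: float(np.sum(x ** 2))
population = obl_initialize(pop_size=20, dim=10, lb=-100.0, ub=100.0, fitness=sphere)
print(population.shape)  # (20, 10)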

