Hybrid Grey Wolf Optimizer Using Elite Opposition-Based Learning Strategy and Simplex Method

Sen Zhang ◽  
Qifang Luo ◽  
Yongquan Zhou

To overcome the poor population diversity and slow convergence rate of the grey wolf optimizer (GWO), this paper introduces an elite opposition-based learning strategy and the simplex method into GWO, and proposes a hybrid grey wolf optimizer using elite opposition-based learning (EOGWO). The diversity of the grey wolf population is increased and its exploration ability improved. Experimental results on 13 standard benchmark functions indicate that the proposed algorithm has strong global and local search ability, a quick convergence rate, and high accuracy. EOGWO is also effective and feasible in both low-dimensional and high-dimensional cases. Compared to particle swarm optimization with chaotic search (CLSPSO), the gravitational search algorithm (GSA), the flower pollination algorithm (FPA), cuckoo search (CS), and the bat algorithm (BA), the proposed algorithm shows better optimization performance and robustness.
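The elite opposition step described above can be sketched as follows. The dynamic-bound formulation (opposite point `k*(da + db) - x` over the interval spanned by the elite wolves) is a common form in the elite opposition-based learning literature; the parameter choices here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def elite_opposition(population, fitness, n_elite=3):
    """Generate elite opposition-based candidates.

    For each dimension, the opposite of x is k * (da + db) - x, where
    [da, db] is the dynamic interval spanned by the elite wolves and
    k is a random coefficient in (0, 1).
    """
    elite_idx = np.argsort(fitness)[:n_elite]         # best (lowest) fitness
    elites = population[elite_idx]
    da, db = elites.min(axis=0), elites.max(axis=0)   # dynamic bounds per dim
    k = rng.random(population.shape)
    opposites = k * (da + db) - population
    return np.clip(opposites, da, db)                 # keep inside elite bounds

# toy usage: a 5-wolf, 2-D population on the sphere function
pop = rng.uniform(-5, 5, size=(5, 2))
fit = (pop ** 2).sum(axis=1)
opp = elite_opposition(pop, fit)
print(opp.shape)  # (5, 2)
```

The opposite candidates would then compete with the current wolves, keeping whichever scores better, which is how opposition-based schemes inject diversity without extra objective evaluations elsewhere.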

2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092529
Jianzhong Huang ◽  
Yuwan Cen ◽  
Nenggang Xie ◽  
Xiaohua Ye

For the inverse calculation of a laser-guided demolition robot, a global nonlinear mapping model from laser measuring point to joint cylinder stroke has been set up with an artificial neural network. To resolve the contradiction between population diversity and convergence rate when optimizing complex neural networks with differential evolution, a hybrid gravitational search algorithm and differential evolution is proposed, in which gravity accelerates the convergence of the differential evolution population. This hybrid is applied to optimize the inverse-calculation neural network mapping model of the demolition robot, and simulations show that gravity can effectively regulate the convergence process of the differential evolution population. Compared with standard differential evolution, the convergence speed and accuracy of the hybrid are significantly improved, and it has better optimization stability. The calculation results show that the output accuracy of this gravitational search algorithm and differential evolution neural network meets the requirements for positioning control of the demolition robot's manipulator. In this article, the hybrid optimizes the connection weights of a neural network, and similar techniques can be applied to other hyperparameter optimization problems. Moreover, such an inverse calculation method can provide a reference for the autonomous positioning of large hydraulic series manipulators, improving the robotization level of construction machinery.

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Yongli Liu ◽  
Zhonghui Wang ◽  
Hao Chao

Traditional fuzzy clustering is sensitive to initialization and ignores the differences in importance between features, so its performance is often unsatisfactory. To improve clustering robustness and accuracy, this paper proposes a feature-weighted fuzzy clustering algorithm based on multistrategy grey wolf optimization. This algorithm can not only improve clustering accuracy, by considering the different importance of features and assigning each feature a different weight, but can also more easily reach the global optimal solution and avoid the impact of the initialization process by employing multistrategy grey wolf optimization. This multistrategy optimization includes three components: a population diversity initialization strategy, a nonlinear adjustment strategy for the convergence factor, and a generalized opposition-based learning strategy. These enhance population diversity, better balance exploration and exploitation, and further strengthen the global search capability, respectively. To evaluate clustering performance, UCI datasets are selected for experiments. Experimental results show that this algorithm achieves higher accuracy and stronger robustness.
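To make the nonlinear adjustment of the convergence factor concrete: standard GWO decreases the factor a linearly from 2 to 0 over the iterations, and multistrategy variants replace this with a nonlinear curve that holds exploration longer before switching to exploitation. The cosine schedule below is one illustrative choice, not the paper's exact formula:

```python
import numpy as np

def convergence_factor(t, t_max, a0=2.0):
    """Linear GWO schedule for a, versus one possible nonlinear
    alternative (a cosine decay, used purely as an illustration)."""
    linear = a0 * (1 - t / t_max)
    nonlinear = a0 * 0.5 * (1 + np.cos(np.pi * t / t_max))
    return linear, nonlinear

for t in (0, 50, 100):
    lin, non = convergence_factor(t, 100)
    print(f"t={t:3d}  linear={lin:.3f}  nonlinear={non:.3f}")
```

Both schedules start at 2 and end at 0, but the cosine curve stays high early (favoring exploration) and drops faster late (favoring exploitation), which is the balance the abstract describes.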

Upma Jain ◽  
Ritu Tiwari ◽  
W. Wilfred Godfrey

This chapter concerns the problem of odor source localization by a team of mobile robots. A brief overview of odor source localization is given, followed by related work. Three methods are proposed for odor source localization, largely inspired by the gravitational search algorithm, the grey wolf optimizer, and particle swarm optimization. The objective of the proposed approaches is to reduce the time required for a team of mobile robots to localize the odor source. The intensity of odor across the plume area is assumed to follow a Gaussian distribution. The robots start their search from the corner of the workspace. As the robots enter the vicinity of the plume area, they form groups using the K-nearest neighbor algorithm. To avoid stagnation of the robots at local optima, a search counter concept is used. The proposed approaches are tested and validated through simulation.
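The Gaussian intensity assumption can be written as a minimal plume model that a simulated robot would sample at its current position. The function name, spread, and peak values below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def odor_intensity(pos, source, spread=5.0, peak=100.0):
    """Gaussian odor-intensity field: maximal at the source and
    decaying with squared distance (parameters are assumptions)."""
    d2 = np.sum((np.asarray(pos) - np.asarray(source)) ** 2)
    return peak * np.exp(-d2 / (2 * spread ** 2))

source = (20.0, 30.0)
print(round(odor_intensity((20.0, 30.0), source), 1))  # 100.0 at the source
# intensity falls off monotonically with distance from the source
print(odor_intensity((40.0, 30.0), source) < odor_intensity((25.0, 30.0), source))  # True
```

Under this model, the sampled intensity plays the role of the fitness value that the GSA-, GWO-, and PSO-inspired search strategies maximize.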

Randa Jalaa Yahya ◽  
Nizar Hadi Abbas

A new hybrid nature-inspired algorithm called HSSGWOA is presented, combining the salp swarm algorithm (SSA) and the grey wolf optimizer (GWO). The main idea is to combine the salp swarm algorithm's exploitation ability with the grey wolf optimizer's exploration ability, drawing on the strengths of both variants. The proposed algorithm is used to tune the parameters of an integral sliding mode controller (ISMC) designed to improve the dynamic performance of a two-link flexible-joint manipulator. The efficiency and capability of the proposed hybrid algorithm are evaluated on selected test functions, where it compares favorably with other algorithms such as SSA, GWO, differential evolution (DE), the gravitational search algorithm (GSA), particle swarm optimization (PSO), and the whale optimization algorithm (WOA). The ISMC parameters were also tuned using SSA alone and compared against the HSSGWOA results. The simulation results show the capabilities of the proposed algorithm, which gives an enhancement of 57.46% over the standard algorithm for one of the links and 55.86% for the other.

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 68764-68785 ◽  
Alaa A. Alomoush ◽  
Abdulrahman A. Alsewari ◽  
Hammoudeh S. Alamri ◽  
Khalid Aloufi ◽  
Kamal Z. Zamli

2020 ◽  
Vol 10 (21) ◽  
pp. 7683 ◽  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ali Dehghani ◽  
Haidar Samet ◽  
Carlos Sotelo ◽  

In recent decades, researchers have proposed many optimization algorithms to solve optimization problems in various branches of science. Optimization algorithms are designed based on various phenomena in nature, the laws of physics, the rules of individual and group games, and the behaviors of animals, plants, and other living things. Implementations of optimization algorithms succeed on some objective functions and fail on others. Improving the optimization process and adding modification phases to optimization algorithms can lead to more acceptable and appropriate solutions. In this paper, a new method called the Dehghani method (DM) is introduced to improve optimization algorithms. DM adjusts the location of the best member of the population using information about the locations of all population members. In fact, DM shows that all members of a population, even the worst one, can contribute to the development of the population. DM has been mathematically modeled, and its effect has been investigated on several optimization algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), and the grey wolf optimizer (GWO). To evaluate the ability of the proposed method to improve the performance of optimization algorithms, these algorithms have been implemented in both their original versions and versions improved by DM on a set of twenty-three standard objective functions. The simulation results show that the optimization algorithms modified with DM provide more acceptable and competitive performance than the original versions in solving optimization problems.

Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-18 ◽  
Qinghua Gu ◽  
Xuexian Li ◽  
Song Jiang

Most real-world optimization problems involve a large number of decision variables and are known as Large-Scale Global Optimization (LSGO) problems. In general, metaheuristic algorithms for solving such problems suffer from the "curse of dimensionality." To remedy the shortcomings of the Grey Wolf Optimizer on LSGO problems, three genetic operators are embedded into the standard GWO and a Hybrid Genetic Grey Wolf Algorithm (HGGWA) is proposed. Firstly, the whole population is initialized using an Opposition-Based Learning strategy. Secondly, the selection operation is performed in combination with an elite reservation strategy. Then, the whole population is divided into several subpopulations for crossover operations, based on dimensionality reduction and population partition, in order to increase population diversity. Finally, the elite individuals in the population are mutated to prevent the algorithm from falling into local optima. The performance of HGGWA is verified on ten benchmark functions, and the optimization results are compared with WOA, SSA, and ALO. On the CEC'2008 LSGO problems, the performance of HGGWA is compared against several state-of-the-art algorithms: CCPSO2, DEwSAcc, MLCC, and EPUS-PSO. Simulation results show that HGGWA greatly improves convergence accuracy, which demonstrates its effectiveness in solving LSGO problems.
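The first step above, opposition-based initialization, can be sketched as follows: generate a random population together with its opposite points, then retain the fittest half of the combined set. The objective function and problem sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def obl_initialize(n, dim, lb, ub, objective):
    """Opposition-Based Learning initialization: sample n random points,
    form their opposites lb + ub - x, and keep the n fittest of the
    combined 2n candidates (minimization assumed)."""
    pop = rng.uniform(lb, ub, size=(n, dim))
    opposite = lb + ub - pop                       # classic opposite point
    combined = np.vstack([pop, opposite])
    fitness = np.apply_along_axis(objective, 1, combined)
    return combined[np.argsort(fitness)[:n]]

sphere = lambda x: float(np.sum(x ** 2))           # toy objective
init = obl_initialize(10, 3, -10.0, 10.0, sphere)
print(init.shape)  # (10, 3)
```

Because every random point competes with its mirror image, the starting population is at least as good as a plain uniform draw at the cost of n extra objective evaluations.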

2021 ◽  
pp. 1-13
Nuzhat Fatema ◽  
Saeid Gholami Farkoush ◽  
Mashhood Hasan ◽  
H Malik

In this paper, a novel hybrid approach for deterministic and probabilistic occupancy detection is proposed, based on a heuristic optimization algorithm combined with Back-Propagation (BP). Generally, a BP-based neural network (BPNN) suffers from suboptimal weight and bias values, trapping in local minima, and a sluggish convergence rate. In this paper, the Gravitational Search Algorithm (GSA) is implemented as a new training technique for BPNN in order to enhance its performance: reducing the problem of trapping in local minima, improving the convergence rate, and optimizing the weight and bias values to reduce the overall error. The experimental results of BPNN with and without GSA are demonstrated and presented for fair comparison. The results show that BPNNGSA outperforms standard BPNN in both the training and testing phases, in terms of processing speed, convergence rate, and avoidance of the trapping problem. The whole study is analyzed and demonstrated using the open-source R language platform. The proposed approach is validated with different numbers of hidden-layer neurons for both experimental studies, based on BPNN and BPNNGSA.
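Training a network with a population-based optimizer such as GSA means treating the flattened weight vector as the search variable and the network error as the fitness, instead of following the gradient. The sketch below shows that idea with a tiny 2-2-1 network and a crude random-search loop standing in for the GSA outer loop; the architecture, data, and all parameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def mse(weights, X, y):
    """Loss of a tiny 2-2-1 network whose weights are flattened into one
    vector -- the quantity a GSA-style trainer minimizes instead of
    running gradient descent (layer sizes here are assumptions)."""
    W1 = weights[:4].reshape(2, 2)
    b1 = weights[4:6]
    W2 = weights[6:8].reshape(2, 1)
    b2 = weights[8]
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2).ravel() + b2
    return float(np.mean((pred - y) ** 2))

# toy data: learn y = x1 * x2
X = rng.uniform(-1, 1, size=(20, 2))
y = X[:, 0] * X[:, 1]

# crude random-search loop standing in for the GSA population update
init_w = rng.normal(size=9)
init_loss = mse(init_w, X, y)
best_w, best_loss = init_w, init_loss
for _ in range(200):
    cand = best_w + 0.1 * rng.normal(size=9)
    loss = mse(cand, X, y)
    if loss < best_loss:
        best_w, best_loss = cand, loss
print(best_loss <= init_loss)  # True
```

A real BPNNGSA would replace the perturbation loop with GSA's mass-and-attraction dynamics over a population of weight vectors, but the interface (weight vector in, error out) is the same.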

Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1190
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Štěpán Hubálovský

There are many optimization problems in the different disciplines of science that must be solved using an appropriate method. Population-based optimization algorithms are among the most efficient ways to solve such problems: they can provide appropriate solutions based on a random search of the problem-solving space, without the need for gradient or derivative information. In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented; it can be applied to solve optimization problems in various fields of science. The main idea in designing the GMBO is to make more effective use of the information held by different members of the population, based on two selected groups, termed the good group and the bad group. Two new composite members are obtained by averaging each of these groups, and these are used to update the population members. The various stages of the GMBO are described and mathematically modeled with the aim of being used to solve optimization problems. The performance of the GMBO in providing a suitable quasi-optimal solution is evaluated on a set of 23 standard objective functions of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. In addition, the optimization results obtained from the proposed GMBO were compared with those of eight other widely used optimization algorithms: the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching–Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA).
The optimization results indicated the acceptable performance of the proposed GMBO and, based on the analysis and comparison of the results, showed it to be superior to and more competitive than the other eight algorithms.
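The good-group/bad-group averaging idea can be sketched as a single update step: average the best members into a "good mean" and the worst into a "bad mean," then draw each member toward the former and away from the latter. The exact update rule below is a simplifying assumption, not the paper's mathematical model:

```python
import numpy as np

rng = np.random.default_rng(2)

def gmbo_step(pop, fitness, group_size=3):
    """One illustrative group-mean update: build composite members from
    the good (best) and bad (worst) groups, then move each member toward
    the good mean and away from the bad mean (minimization assumed)."""
    order = np.argsort(fitness)
    good_mean = pop[order[:group_size]].mean(axis=0)   # composite of the good group
    bad_mean = pop[order[-group_size:]].mean(axis=0)   # composite of the bad group
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    return pop + r1 * (good_mean - pop) - r2 * (bad_mean - pop)

sphere = lambda x: np.sum(x ** 2, axis=1)              # toy objective
pop = rng.uniform(-10, 10, size=(20, 5))
new_pop = gmbo_step(pop, sphere(pop))
print(new_pop.shape)  # (20, 5)
```

Using group means rather than a single best member dampens the influence of any one lucky individual, which is the diversity argument the abstract makes for the good-group/bad-group design.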
