Hybrid Gradient Descent Grey Wolf Optimizer for Optimal Feature Selection

2021 ◽  
Vol 2021 ◽  
pp. 1-33
Author(s):  
Peter Mule Kitonyi ◽  
Davies Rene Segera

Feature selection is the process of decreasing the number of features in a dataset by removing redundant, irrelevant, and randomly class-correlated features. By applying feature selection to large, high-dimensional datasets, redundant features are removed, reducing the complexity of the data and the training time. The objective of this paper was to design an optimizer that combines the well-known population-based metaheuristic, the grey wolf algorithm, with the gradient descent algorithm and to test it on feature selection problems. The proposed algorithm was first compared against the original grey wolf algorithm on 23 continuous test functions. The proposed optimizer was then adapted for feature selection, and three binary implementations were developed, with the final implementation compared against two implementations of the binary grey wolf optimizer and the binary grey wolf particle swarm optimizer on 6 medical datasets from the UCI machine learning repository, using metrics such as accuracy, feature-subset size, F-measure, precision, and sensitivity. The proposed optimizer outperformed the three other optimizers on 3 of the 6 datasets in terms of average metrics. The proposed optimizer showed promise in its capability to balance the two objectives in feature selection and could be further enhanced.
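A minimal sketch in Python of how a gradient-descent step might be blended into the standard GWO position update, with the result binarized for feature selection. The finite-difference gradient, learning rate, and sigmoid transfer used here are assumptions for illustration, not the authors' exact formulation.

import numpy as np

def gwo_gd_step(positions, alpha, beta, delta, fitness_fn, a, lr=0.01, eps=1e-3):
    """One hybrid update: the standard GWO move plus an assumed finite-difference gradient step."""
    n_wolves, dim = positions.shape
    new_positions = np.empty_like(positions)
    for i in range(n_wolves):
        x = positions[i]
        # Standard GWO attraction toward the three leader wolves.
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            moves.append(leader - A * np.abs(C * leader - x))
        x_gwo = np.mean(moves, axis=0)
        # Hypothetical gradient-descent refinement via central finite differences.
        grad = np.array([
            (fitness_fn(x_gwo + eps * e) - fitness_fn(x_gwo - eps * e)) / (2 * eps)
            for e in np.eye(dim)
        ])
        new_positions[i] = x_gwo - lr * grad
    return new_positions

def binarize(positions):
    """Sigmoid transfer: continuous wolf positions -> binary feature masks."""
    return (1.0 / (1.0 + np.exp(-positions)) > np.random.rand(*positions.shape)).astype(int)

For the feature-selection variant, the fitness passed to such a step would typically combine classification error with the relative size of the selected subset, for example w * error + (1 - w) * |subset| / dim, though the exact weighting used in the paper is not reproduced here.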

2020 ◽  
Vol 139 ◽  
pp. 112824 ◽  
Author(s):  
Mohamed Abdel-Basset ◽  
Doaa El-Shahat ◽  
Ibrahim El-henawy ◽  
Victor Hugo C. de Albuquerque ◽  
Seyedali Mirjalili

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 107635-107649 ◽  
Author(s):  
El-Sayed M. El-Kenawy ◽  
Marwa Metwally Eid ◽  
Mohamed Saber ◽  
Abdelhameed Ibrahim

2020 ◽  
Vol 10 (18) ◽  
pp. 6343
Author(s):  
Yuanyuan Liu ◽  
Jiahui Sun ◽  
Haiye Yu ◽  
Yueyong Wang ◽  
Xiaokang Zhou

To address the poor stability of the grey wolf optimizer (GWO) and its tendency to fall into local optima, an improved GWO algorithm based on the differential evolution (DE) algorithm and the OTSU algorithm (DE-OTSU-GWO) is proposed. The multithreshold OTSU method, Tsallis entropy, and the DE algorithm are combined with GWO. The multithreshold OTSU algorithm computes the fitness of the initial population, keeping the initial stage essentially stable, while Tsallis entropy allows the fitness to be computed quickly. The population is then updated using the GWO and DE algorithms, with Tsallis entropy used in the crossover steps, and the DE component helps GWO escape local optima. The performance of the DE-OTSU-GWO algorithm was tested on the CEC2005 benchmark suite (23 test functions). Compared with existing particle swarm optimizer (PSO) and GWO algorithms, the experimental results showed that DE-OTSU-GWO solves these functions more stably and accurately. In addition, a convergence-behavior analysis confirmed the high solution quality of the DE-OTSU-GWO algorithm relative to the other algorithms. On a classical agricultural image-recognition problem, DE-OTSU-GWO achieved higher accuracy in straw image recognition than GWO, PSO, DE-GWO, and 2D-OTSU-FA and is applicable to practical problems. The OTSU component improves the accuracy of the overall algorithm while increasing the running time; adding the DE algorithm increases the time complexity but shortens the time needed to reach a solution. Compared with GWO, DE-GWO, PSO, and 2D-OTSU-FA, the DE-OTSU-GWO algorithm gives better segmentation-assessment results.
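As a hedged sketch (the paper's exact formulation, including how Tsallis entropy enters, is not reproduced), the Python code below shows a multithreshold Otsu between-class-variance fitness over a 256-bin grey-level histogram and a DE/rand/1/bin mutation-crossover step that could be applied to a GWO population of threshold vectors; the parameter values F and CR are illustrative.

import numpy as np

def otsu_fitness(thresholds, hist):
    """Between-class variance for sorted integer thresholds over a 256-bin histogram (maximize)."""
    p = hist / hist.sum()
    levels = np.arange(256)
    bounds = [0] + sorted(int(t) for t in thresholds) + [256]
    mu_total = (p * levels).sum()
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            variance += w * (mu - mu_total) ** 2
    return variance

def de_crossover(pop, F=0.5, CR=0.9):
    """DE/rand/1/bin step on a population of candidate threshold vectors."""
    n, d = pop.shape
    trial = pop.copy()
    for i in range(n):
        a, b, c = pop[np.random.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        mask = np.random.rand(d) < CR
        mask[np.random.randint(d)] = True  # ensure at least one dimension is crossed over
        trial[i, mask] = mutant[mask]
    return np.clip(trial, 1, 254)

In a DE-OTSU-GWO style loop, each wolf's threshold vector could be scored with otsu_fitness, moved by the usual GWO update, and then recombined with de_crossover before greedy selection.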


2018 ◽  
Vol 40 (9) ◽  
pp. 3344-3367 ◽  
Author(s):  
Fuding Xie ◽  
Cunkuan Lei ◽  
Fangfei Li ◽  
Dan Huang ◽  
Jun Yang

2018 ◽  
Vol 12 (7) ◽  
pp. 73 ◽  
Author(s):  
Esra F. Alzaghoul ◽  
Sandi N. Fakhouri

The Grey Wolf Optimizer (GWO) is one of the well-known meta-heuristic algorithms for finding the minimum of an objective function. In this paper, we propose a novel optimization algorithm, the collaborative strategy for grey wolf optimizer (CSGWO). This algorithm enhances the search behaviour of GWO so that more points in the search space are explored, with multiple groups searching for the global minimum. The algorithm was tested on 23 well-known benchmark functions, and the results were verified by comparison with state-of-the-art algorithms: the polar particle swarm optimizer, the sine cosine algorithm (SCA), the multi-verse optimizer (MVO), the supernova optimizer, and the particle swarm optimizer (PSO). The results show that the proposed algorithm improves GWO's ability to reach the best solution and delivers competitive results, outperforming the compared meta-heuristics on the tested benchmark functions.
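The grouping and sharing rules below are assumptions made for illustration, not the paper's exact collaborative strategy: several wolf groups run standard GWO updates independently and occasionally adopt the best solution found so far, so more regions of the search space are explored in parallel.

import numpy as np

def run_grouped_gwo(obj, dim, n_groups=4, wolves_per_group=10, iters=200, lb=-100, ub=100):
    groups = [np.random.uniform(lb, ub, (wolves_per_group, dim)) for _ in range(n_groups)]
    global_best, global_best_fit = None, np.inf
    for t in range(iters):
        a = 2 - 2 * t / iters  # exploration parameter decreases linearly, as in standard GWO
        for g in groups:
            fits = np.apply_along_axis(obj, 1, g)
            order = np.argsort(fits)
            alpha, beta, delta = g[order[0]], g[order[1]], g[order[2]]
            if fits[order[0]] < global_best_fit:
                global_best, global_best_fit = alpha.copy(), fits[order[0]]
            for i in range(len(g)):
                moves = []
                for leader in (alpha, beta, delta):
                    r1, r2 = np.random.rand(dim), np.random.rand(dim)
                    A, C = 2 * a * r1 - a, 2 * r2
                    moves.append(leader - A * np.abs(C * leader - g[i]))
                g[i] = np.clip(np.mean(moves, axis=0), lb, ub)
        if t % 20 == 0:  # assumed collaboration rule: the worst wolf in each group adopts the global best
            for g in groups:
                g[np.argmax(np.apply_along_axis(obj, 1, g))] = global_best
    return global_best, global_best_fit

For example, run_grouped_gwo(lambda x: float(np.sum(x**2)), dim=30) minimizes the sphere function, one of the standard 23 benchmark problems.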


2021 ◽  
Vol 20 ◽  
pp. 66-75
Author(s):  
Kennedy Ronoh ◽  
George Kamucha

TV white spaces (TVWS) can be utilized by secondary users (SUs) equipped with cognitive radio functionality on the condition that they do not cause harmful interference to primary users (PUs). When the density of secondary users in a network is high, power allocation must be optimized to reduce interference among SUs and to protect PUs against harmful interference. The Grey Wolf Optimizer (GWO) is a relatively recent population-based metaheuristic algorithm that has shown superior performance compared with other population-based metaheuristics. A recent trend has been to hybridize population-based metaheuristic algorithms to avoid getting trapped in local optima. This paper presents the design and performance analysis of a hybrid of the grey wolf optimizer and the Firefly Algorithm (FA) with Particle Swarm Optimization operators for power allocation in a TVWS network, treated as a continuous optimization problem. MATLAB was used for simulation. The hybrid of GWO, FA, and PSO (HFAGWOPSO) reduces sum power by 81.42% and improves sum throughput by 16.41% compared with GWO. Simulation results also show that the algorithm has a better convergence rate.
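The objective sketched below is an assumption about how such a power-allocation problem is commonly posed (minimize total SU transmit power subject to interference caps at PUs and minimum-rate requirements), not the paper's exact system model; all channel gains, thresholds, and penalty weights are placeholders that a hybrid GWO/FA/PSO search could evaluate.

import numpy as np

def power_allocation_fitness(p, gain_su_pu, interference_cap, min_rate, gain_su,
                             noise=1e-9, penalty=1e3):
    """p: candidate SU transmit powers; returns sum power plus penalties for constraint violations."""
    interference = gain_su_pu @ p                    # aggregate interference at each PU
    rates = np.log2(1 + gain_su * p / noise)         # simple per-SU Shannon rate (SU-SU interference ignored)
    violation = (np.maximum(0, interference - interference_cap).sum()
                 + np.maximum(0, min_rate - rates).sum())
    return p.sum() + penalty * violation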

