On the performance improvement of Butterfly Optimization approaches for global optimization and Feature Selection

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0242612
Author(s):  
Adel Saad Assiri

The Butterfly Optimization Algorithm (BOA) is a recent metaheuristic algorithm that mimics the mating and foraging behavior of butterflies. In this paper, three improved versions of BOA have been developed to prevent the original algorithm from getting trapped in local optima and to achieve a good balance between exploration and exploitation. In the first version, an Opposition-Based Strategy has been embedded in BOA, while in the second a Chaotic Local Search has been embedded. In the third version, both strategies, Opposition-Based Learning and Chaotic Local Search, have been integrated to obtain the most optimal or near-optimal results. The proposed versions are compared against the original Butterfly Optimization Algorithm (BOA), Grey Wolf Optimizer (GWO), Moth-Flame Optimization (MFO), Particle Swarm Optimization (PSO), Sine Cosine Algorithm (SCA), and Whale Optimization Algorithm (WOA) using the CEC 2014 benchmark functions and four real-world engineering problems, namely the welded beam design, tension/compression spring, pressure vessel design, and speed reducer design problems. Furthermore, the proposed approaches have been applied to the feature selection problem using 5 UCI datasets. The results show the superiority of the third version (CLSOBBOA) in achieving the best results in terms of speed and accuracy.
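A minimal Python sketch of the two strategies described above, opposition-based learning and chaotic local search, as they might be embedded in a BOA-style loop; the function names, the logistic map, and the step counts are illustrative assumptions, not the authors' code.

```python
import numpy as np

def opposite(x, lb, ub):
    """Opposition-based candidate: reflect x across the centre of [lb, ub]."""
    return lb + ub - x

def chaotic_local_search(best, lb, ub, fitness, steps=20, c=0.7):
    """Perturb the current best with a logistic chaotic map; keep improvements."""
    z = np.random.rand(best.size)          # chaotic variable in (0, 1)
    x, fx = best.copy(), fitness(best)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)            # logistic map in its chaotic regime
        trial = np.clip(x + c * (lb + z * (ub - lb) - x), lb, ub)
        ft = fitness(trial)
        if ft < fx:                        # minimisation assumed
            x, fx = trial, ft
    return x, fx
```

In a BOA-style loop, one would evaluate both each candidate and its opposite, keep the better of the two, and periodically refine the global best with chaotic_local_search.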

Author(s):  
Hekmat Mohmmadzadeh

Feature selection is one of the most challenging and important tasks in data mining and pattern recognition. The feature selection problem is to find the most important subset of the original features in a specific domain, with the main purpose of removing redundant or irrelevant features and ultimately improving the accuracy of classification algorithms. Feature selection can therefore be treated as an optimization problem and solved with metaheuristic algorithms. In this paper, a new hybrid model combining the Whale Optimization Algorithm (WOA) and the Flower Pollination Algorithm (FPA), based on the concept of opposition-based learning, is presented to address the feature selection problem. The proposed method uses the natural processes of WOA and FPA to solve the feature selection optimization problem, while opposition-based learning is used to ensure the convergence speed and accuracy of the proposed algorithm. Specifically, WOA creates and improves solutions in its search space through prey encircling, the bubble-net attack, and random search for prey, while FPA, with its global and local pollination processes, improves the solutions opposite to those produced by WOA. In effect, both the search-space solutions and their opposites are used to cover the possible solutions to the feature selection problem. To evaluate the performance of the proposed algorithm, experiments were performed in two stages. In the first stage, experiments were run on 10 feature selection datasets from the UCI data repository. In the second stage, the performance of the proposed algorithm was tested on spam email detection. The results of the first stage show that, on the 10 UCI datasets, the proposed algorithm outperforms other basic metaheuristic algorithms in terms of average selection size and classification accuracy. The results of the second stage show that the proposed algorithm detects spam emails more accurately than other similar algorithms.
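As a rough illustration of how opposition-based learning can be combined with a binary feature-selection encoding, the hedged sketch below builds a population together with its opposite subsets and keeps the better half; pop_size, n_features, and fitness are assumed names, and the WOA/FPA operators themselves are omitted.

```python
import numpy as np

def opposition_init(pop_size, n_features, fitness, rng=None):
    """Build a population, its binary opposite, and keep the best half overall."""
    rng = rng or np.random.default_rng()
    pop = rng.integers(0, 2, size=(pop_size, n_features))   # 1 = feature kept
    opp = 1 - pop                                            # opposite subsets
    both = np.vstack([pop, opp])
    scores = np.array([fitness(ind) for ind in both])        # lower is better
    best_idx = np.argsort(scores)[:pop_size]
    return both[best_idx], scores[best_idx]
```

In a hybrid of the kind described above, the WOA operators (encircling, bubble-net, random search) and the FPA global/local pollination steps would then update this population, re-applying the opposition step to sustain diversity.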


Computation ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 68
Author(s):  
Zenab Mohamed Elgamal ◽  
Norizan Mohd Yasin ◽  
Aznul Qalid Md Sabri ◽  
Rami Sihwail ◽  
Mohammad Tubishat ◽  
...  

The rapid growth in biomedical datasets has generated high-dimensional features that negatively impact machine learning classifiers. In machine learning, feature selection (FS) is an essential process for selecting the most significant features and reducing redundant and irrelevant ones. In this study, an equilibrium optimization algorithm (EOA) is used to minimize the number of features selected from high-dimensional medical datasets. EOA is a novel physics-based metaheuristic algorithm, recently proposed to deal with unimodal, multimodal, and engineering problems, and is considered one of the most powerful, fast, and best-performing population-based optimization algorithms. However, EOA suffers from entrapment in local optima and loss of population diversity when dealing with high-dimensional features, such as those in biomedical datasets. To overcome these limitations and adapt EOA to feature selection problems, a novel metaheuristic optimizer, the improved equilibrium optimization algorithm (IEOA), is proposed. Two main improvements are included in the IEOA: the first is applying elite opposite-based learning (EOBL) to improve population diversity; the second is integrating three novel local search strategies to prevent the algorithm from becoming stuck in local optima. The local search strategies rely on three approaches: mutation search, mutation-neighborhood search, and a backup strategy. The IEOA enhances population diversity and classification accuracy, reduces the number of selected features, and increases the convergence rate. To evaluate the performance of IEOA, we conducted experiments on 21 biomedical benchmark datasets gathered from the UCI repository. Four standard metrics were used to test and evaluate IEOA's performance: the number of selected features, classification accuracy, fitness value, and the p-value of a statistical test. Moreover, the proposed IEOA was compared with the original EOA and other well-known optimization algorithms. Based on the experimental results, IEOA outperformed the original EOA and the other optimization algorithms on the majority of the datasets used.
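A hedged sketch of an elite opposite-based learning (EOBL) step of the kind used here to improve population diversity, modelled on common EOBL formulations (reflection across the bounding box of an elite set); the elite size and the random coefficient k are assumptions rather than the paper's exact scheme.

```python
import numpy as np

def elite_obl(population, fitness_values, n_elite=5, rng=None):
    """Reflect each individual across the bounding box of the elite solutions."""
    rng = rng or np.random.default_rng()
    elite = population[np.argsort(fitness_values)[:n_elite]]   # best n_elite members
    lo, hi = elite.min(axis=0), elite.max(axis=0)               # dynamic, elite-defined bounds
    k = rng.random((population.shape[0], 1))                    # random scaling per individual
    opposite = k * (lo + hi) - population
    return np.clip(opposite, lo, hi)                            # keep opposites inside the elite box
```

The opposite individuals would then be evaluated alongside the originals, with the better candidates retained, before the mutation, mutation-neighborhood, and backup local searches are applied.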


2021 ◽  
pp. 107754632110034
Author(s):  
Ololade O Obadina ◽  
Mohamed A Thaha ◽  
Kaspar Althoefer ◽  
Mohammad H Shaheed

This article presents a novel hybrid algorithm based on the grey-wolf optimizer and the whale optimization algorithm, referred to here as the grey-wolf optimizer–whale optimization algorithm, for the dynamic parametric modelling of a four-degree-of-freedom master–slave robot manipulator system. The first part of this work tests the feasibility of the grey-wolf optimizer–whale optimization algorithm by comparing its performance with the grey-wolf optimizer, the whale optimization algorithm, and particle swarm optimization on 10 benchmark functions. The grey-wolf optimizer–whale optimization algorithm is then used for the model identification of an experimental master–slave robot manipulator system using the autoregressive moving average with exogenous inputs (ARMAX) model structure. The obtained results demonstrate that the hybrid algorithm is effective and can be a suitable alternative for solving the parameter identification problem of robot models.
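To make the identification setting concrete, the sketch below shows the kind of objective such a hybrid optimiser would minimise: the mean-squared one-step-ahead prediction error of an ARMAX(na, nb, nc) model. The orders, variable names, and data arrays are assumptions, not the authors' experimental setup.

```python
import numpy as np

def armax_mse(theta, u, y, na=2, nb=2, nc=2):
    """theta packs the a, b, c polynomial coefficients; u, y are the I/O records."""
    a = theta[:na]
    b = theta[na:na + nb]
    c = theta[na + nb:na + nb + nc]
    e = np.zeros(len(y))       # residual (noise) estimates
    y_hat = np.zeros(len(y))   # one-step-ahead predictions
    start = max(na, nb, nc)
    for t in range(start, len(y)):
        y_hat[t] = (-a @ y[t - na:t][::-1]      # autoregressive part
                    + b @ u[t - nb:t][::-1]     # exogenous-input part
                    + c @ e[t - nc:t][::-1])    # moving-average (noise) part
        e[t] = y[t] - y_hat[t]
    return np.mean(e[start:] ** 2)
```

A GWO-WOA style optimiser would then search over theta (dimension na + nb + nc) to minimise this error for each joint of the master–slave manipulator.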


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1190
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Štěpán Hubálovský

There are many optimization problems in the different disciplines of science that must be solved using an appropriate method. Population-based optimization algorithms are one of the most efficient ways to solve various optimization problems; they can provide suitable solutions based on a random search of the problem space without needing gradient or derivative information. In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented, which can be applied to optimization problems in various fields of science. The main idea in designing the GMBO is to make more effective use of the information carried by different members of the population through two selected groups, termed the good group and the bad group. Two new composite members are obtained by averaging each of these groups and are used to update the population members. The various stages of the GMBO are described and mathematically modeled for use in solving optimization problems. The performance of the GMBO in providing a suitable quasi-optimal solution is evaluated on a set of 23 standard objective functions of three types: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal. In addition, the optimization results obtained from the proposed GMBO were compared with eight widely used optimization algorithms: the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching–Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). The optimization results indicate the acceptable performance of the proposed GMBO, and the analysis and comparison of the results show that the GMBO is superior to and much more competitive than the other eight algorithms.
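A minimal sketch of the group-mean idea described above: average a good group and a bad group, then move each member toward the good mean and away from the bad mean, keeping improvements greedily. The group size and the exact update coefficients are assumptions, not the paper's formal model.

```python
import numpy as np

def gmbo_step(population, fitness_values, fitness, lb, ub, group_size=5, rng=None):
    """One illustrative update of a group-mean-based optimiser (minimisation)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness_values)
    good_mean = population[order[:group_size]].mean(axis=0)    # composite "good" member
    bad_mean = population[order[-group_size:]].mean(axis=0)    # composite "bad" member
    r1 = rng.random(population.shape)
    r2 = rng.random(population.shape)
    candidate = population + r1 * (good_mean - population) - r2 * (bad_mean - population)
    candidate = np.clip(candidate, lb, ub)
    for i, ind in enumerate(candidate):
        f_new = fitness(ind)
        if f_new < fitness_values[i]:          # greedy selection: keep only improvements
            population[i], fitness_values[i] = ind, f_new
    return population, fitness_values
```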


2021 ◽  
pp. 1-17
Author(s):  
Maodong Li ◽  
Guanghui Xu ◽  
Yuanwang Fu ◽  
Tingwei Zhang ◽  
Li Du

In this paper, a whale optimization algorithm based on an adaptive inertia weight and a variable spiral position-updating strategy is proposed. The improved algorithm addresses the original whale optimization algorithm's strong dependence on random parameters, which leads to insufficient convergence accuracy and convergence speed. The adaptive inertia weight, which varies with the fitness of individual whales, is used to balance the algorithm's global search ability and local exploitation ability. The variable spiral position-update strategy, based on a collaborative convergence mechanism, is used to dynamically adjust the search range and search accuracy of the algorithm. The effective combination of the two allows the improved whale optimization algorithm to converge to the optimal solution faster. Eighteen international standard test functions, including unimodal, multimodal, and fixed-dimensional functions, were used to test the improved whale optimization algorithm. The test results show that the improved algorithm has a faster convergence speed and higher accuracy than the original algorithm and several classic algorithms. The algorithm quickly converges to near the optimal value in the early stage, then effectively escapes local optima through adaptive adjustment, and has a certain ability to solve large-scale optimization problems.
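The hedged sketch below illustrates the two modifications described above: a fitness-adaptive inertia weight and a spiral shape parameter that changes over the iterations. The specific weighting and spiral formulas are assumptions modelled on the description, not the authors' equations.

```python
import numpy as np

def adaptive_weight(f_i, f_best, f_worst, w_min=0.4, w_max=0.9):
    """Better (lower-fitness) individuals get a smaller weight for finer exploitation."""
    if f_worst == f_best:
        return w_min
    return w_min + (w_max - w_min) * (f_i - f_best) / (f_worst - f_best)

def variable_spiral_update(x, best, t, t_max, f_i, f_best, f_worst, rng=None):
    """Spiral (bubble-net) move with a spiral constant b that tightens over iterations."""
    rng = rng or np.random.default_rng()
    w = adaptive_weight(f_i, f_best, f_worst)
    b = 1.0 + (t_max - t) / t_max            # wider spiral early, tighter spiral late
    l = rng.uniform(-1.0, 1.0)
    dist = np.abs(best - x)
    return w * best + dist * np.exp(b * l) * np.cos(2.0 * np.pi * l)
```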

