Feature Selection and Knapsack Problem Resolution Based on a Discrete Backtracking Optimization Algorithm

2021 ◽  
Vol 12 (2) ◽  
pp. 1-15
Author(s):  
Khadoudja Ghanem ◽  
Abdesslem Layeb

The backtracking search optimization algorithm is a recent stochastic global search algorithm for solving real-valued numerical optimization problems. In this paper, a binary version of the backtracking algorithm is proposed to deal with 0-1 optimization problems such as feature selection and knapsack problems. Feature selection is the process of selecting a subset of relevant features for use in model construction; irrelevant features can negatively impact model performance. The knapsack problem, in turn, is a well-known optimization problem used to assess discrete algorithms. The objective of this research is to evaluate the discrete version of the backtracking algorithm on these two problems and to compare the obtained results with those of other binary optimization algorithms, using four common classifiers: logistic regression, decision tree, random forest, and support vector machine. An empirical study on biological microarray data and experiments on 0-1 knapsack problems show the effectiveness of the binary algorithm and its ability to achieve good-quality solutions for both problems.
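The abstract does not reproduce the binarization rule itself. A common way to adapt a real-valued search algorithm such as backtracking search to 0-1 problems is a sigmoid transfer function over the continuous position, paired with a penalized knapsack fitness; the sketch below illustrates that general idea under those assumptions, with all names (`sigmoid_binarize`, `knapsack_fitness`) being illustrative rather than the authors' code.

```python
# A minimal sketch, assuming a sigmoid transfer scheme (not taken from the paper).
import numpy as np

rng = np.random.default_rng(42)

def sigmoid_binarize(x):
    """Map a real-valued position vector to a 0/1 vector."""
    prob = 1.0 / (1.0 + np.exp(-x))           # S-shaped transfer function
    return (rng.random(x.shape) < prob).astype(int)

def knapsack_fitness(bits, values, weights, capacity):
    """Total value of selected items; infeasible solutions score zero."""
    if np.dot(bits, weights) > capacity:
        return 0.0
    return float(np.dot(bits, values))

values = np.array([60, 100, 120])
weights = np.array([10, 20, 30])
position = rng.normal(size=3)                 # a real-valued search position
bits = sigmoid_binarize(position)
print(bits, knapsack_fitness(bits, values, weights, capacity=50))
```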

2019 ◽  
Vol 10 (2) ◽  
pp. 1-20 ◽  
Author(s):  
Sujata Dash ◽  
Ruppa Thulasiram ◽  
Parimala Thulasiraman

Conventional algorithms such as gradient-based optimization methods usually struggle with high-dimensional non-linear problems and often get trapped in local minima. Recently developed nature-inspired optimization algorithms are promising approaches for finding global solutions to combinatorial optimization problems such as feature selection in microarray datasets. In this article, a novel hybrid swarm-intelligence-based meta-search algorithm is proposed by combining a heuristic method called conditional mutual information maximization with a chaos-based firefly algorithm. The combined algorithm runs iteratively to boost the sharing of information between fireflies, enhancing the search efficiency of the chaos-based firefly algorithm and reducing the computational complexity of feature selection. The meta-search model is implemented in a wrapper approach, with a well-established classifier, the support vector machine, as the modeler. The chaos-based firefly algorithm increases the global search mobility of the fireflies. The efficiency of the model is studied on high-dimensional disease datasets and compared with the standard firefly algorithm, particle swarm optimization, and a genetic algorithm in the same experimental environment to establish its superiority in feature selection over the selected counterparts.
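The abstract does not specify which chaotic map drives the firefly variant. Chaos-based firefly algorithms typically replace a fixed randomization parameter with a chaotic sequence, most often the logistic map; the sketch below shows that generic mechanism, with the map choice and the 0.5 scaling being assumptions, not the paper's values.

```python
# Illustrative sketch of a chaotic step-size schedule (assumed logistic map).
def logistic_map(c, r=4.0):
    """One step of the logistic map; chaotic for r = 4 on (0, 1)."""
    return r * c * (1.0 - c)

c = 0.7                      # chaotic state; any seed in (0, 1) except fixed points
for t in range(5):
    c = logistic_map(c)
    alpha = 0.5 * c          # chaotic randomization parameter for the firefly move
    print(f"iteration {t}: alpha = {alpha:.4f}")
```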


2021 ◽  
Vol 40 (1) ◽  
pp. 535-550
Author(s):  
Ashis Kumar Mandal ◽  
Rikta Sen ◽  
Basabi Chakraborty

The fundamental aim of feature selection is to reduce the dimensionality of data by removing irrelevant and redundant features. As finding the best subset of features among all possible subsets is computationally expensive, especially for high-dimensional data sets, meta-heuristic algorithms are often used as a promising approach to the task. In this paper, a variant of the recent meta-heuristic Owl Search Optimization algorithm (OSA) is proposed for solving the feature selection problem within a wrapper-based framework. Several strategies are incorporated to strengthen BOSA (the binary version of OSA) in searching for the global best solution. The meta-parameter of BOSA is initialized dynamically and then adjusted through a self-adaptive mechanism during the search process. In addition, elitism and mutation operations are combined with BOSA to better balance exploitation and exploration. This improved BOSA is named here the Modified Binary Owl Search Algorithm (MBOSA). A Decision Tree (DT) classifier provides the wrapper-based fitness function, and the final classification performance of the selected feature subset is evaluated with a Support Vector Machine (SVM) classifier. Simulation experiments are conducted on twenty well-known benchmark datasets from UCI, and the results are reported in terms of classification accuracy, the number of selected features, and execution time. In addition, BOSA and three common meta-heuristic algorithms, the Binary Bat Algorithm (BBA), Binary Particle Swarm Optimization (BPSO), and the Binary Genetic Algorithm (BGA), are used for comparison. Simulation results show that the proposed approach outperforms similar methods, reducing the number of features significantly while maintaining a comparable level of classification accuracy.
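The elitism and mutation operators the abstract mentions are standard building blocks; how MBOSA parameterizes them is not given here. The following sketch shows plausible generic versions under that caveat: `apply_elitism`, `bit_flip_mutation`, the elite count, and the mutation rate are all assumptions.

```python
# Generic elitism and bit-flip mutation for a binary metaheuristic (assumed forms).
import numpy as np

rng = np.random.default_rng(0)

def apply_elitism(population, fitness, elite_count=2):
    """Carry the best individuals unchanged into the next generation."""
    order = np.argsort(fitness)[::-1]          # higher fitness is better
    return population[order[:elite_count]].copy()

def bit_flip_mutation(individual, p_mut=0.05):
    """Flip each bit with probability p_mut to keep exploring."""
    mask = rng.random(individual.shape) < p_mut
    return np.where(mask, 1 - individual, individual)

pop = rng.integers(0, 2, size=(6, 8))          # 6 candidate feature masks
fit = pop.sum(axis=1).astype(float)            # toy fitness for the demo only
elites = apply_elitism(pop, fit)
child = bit_flip_mutation(pop[0])
print(elites.shape, child)
```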


Author(s):  
Noria Bidi ◽  
Zakaria Elberrichi

Feature selection is essential for improving classification effectiveness. This paper presents a new adaptive algorithm called FS-PeSOA (feature selection penguins search optimization algorithm), a meta-heuristic feature selection method based on the Penguins Search Optimization Algorithm (PeSOA). It is combined with different classifiers to find the feature subset that achieves the highest classification accuracy. To explore the feature subset candidates, the bio-inspired PeSOA generates a trial feature subset during the process and estimates its fitness value using three classifiers in each case: Naive Bayes (NB), k-Nearest Neighbors (KNN), and Support Vector Machines (SVMs). The proposed approach has been evaluated on six well-known benchmark datasets (Wisconsin Breast Cancer, Pima Diabetes, Mammographic Mass, Dermatology, Colon Tumor, and Prostate Cancer). Experimental results show that FS-PeSOA achieves the highest classification accuracy across the different datasets.
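The wrapper evaluation step described here, scoring a candidate 0/1 feature mask with each classifier, can be sketched with scikit-learn. The dataset, cross-validation setup, and `wrapper_fitness` name below are placeholders for illustration, not the paper's experimental protocol.

```python
# Hedged sketch of a wrapper fitness evaluation with the three named classifiers.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def wrapper_fitness(mask, clf):
    """Mean CV accuracy of clf on the features selected by the 0/1 mask."""
    if mask.sum() == 0:
        return 0.0                              # empty subsets are worthless
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

mask = np.random.default_rng(1).integers(0, 2, X.shape[1])
for clf in (GaussianNB(), KNeighborsClassifier(), SVC()):
    print(type(clf).__name__, round(wrapper_fitness(mask, clf), 4))
```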


Author(s):  
Prachi Agrawal ◽  
Talari Ganesh ◽  
Ali Wagdy Mohamed

This article proposes a novel binary version of the recently developed Gaining Sharing Knowledge-based optimization algorithm (GSK) to solve binary optimization problems. GSK is based on the concept of how humans acquire and share knowledge over their life span. The binary version of GSK, the novel binary Gaining Sharing Knowledge-based optimization algorithm (NBGSK), relies mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively when solving problems in binary space. Moreover, to enhance the performance of NBGSK and prevent the solutions from being trapped in local optima, NBGSK with population size reduction (PR-NBGSK) is introduced; it decreases the population size gradually according to a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances with small and large dimensions, and the results show that NBGSK and PR-NBGSK are more efficient and effective in terms of convergence, robustness, and accuracy.
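The linear population reduction in PR-NBGSK can be written as a simple schedule; the usual formulation interpolates from an initial to a minimum size over the run. The concrete sizes below (100 down to 12) and the rounding are assumptions for illustration, not the paper's settings.

```python
# Sketch of a linear population-size reduction schedule (assumed constants).
def population_size(t, max_iter, n_init=100, n_min=12):
    """Population size at iteration t, shrinking linearly to n_min."""
    return round(n_init + (n_min - n_init) * t / max_iter)

for t in (0, 250, 500, 750, 1000):
    print(t, population_size(t, max_iter=1000))
```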


2021 ◽  
Vol 11 (3) ◽  
pp. 1286 ◽  
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ali Dehghani ◽  
Om P. Malik ◽  
Ruben Morales-Menendez ◽  
...  

Population-based optimization algorithms inspired by nature are among the most powerful tools for solving optimization problems. These algorithms find a solution to a problem by searching the search space randomly, with their central design ideas derived from various natural phenomena, the behavior and living conditions of living organisms, the laws of physics, etc. A new population-based optimization algorithm called the Binary Spring Search Algorithm (BSSA) is introduced to solve optimization problems. BSSA is based on a simulation of Hooke's law for the classical system of weights and springs. In this proposal, the population comprises weights that are connected by unique springs. The mathematical modeling of the proposed algorithm is presented for use in solving optimization problems. The results were thoroughly validated on different unimodal and multimodal functions; additionally, BSSA was compared with high-performance algorithms: the binary grasshopper optimization algorithm, binary dragonfly algorithm, binary bat algorithm, binary gravitational search algorithm, binary particle swarm optimization, and binary genetic algorithm. The results show the superiority of BSSA, and the results of the Friedman test corroborate that BSSA is the more competitive algorithm.
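The physical law being simulated is Hooke's law, F = -k(x_i - x_j) for a spring connecting weights i and j. How BSSA maps this force into binary position updates is not reproduced in the abstract; the sketch below only shows the underlying force computation, with the stiffness constant and positions being arbitrary demo values.

```python
# Illustrative Hooke's law force between two connected weights (demo values).
import numpy as np

def spring_force(k, x_i, x_j):
    """Force on weight i from the spring connecting it to weight j."""
    return -k * (x_i - x_j)                # pulls i toward j

positions = np.array([0.0, 2.0, 5.0])      # three connected weights
k = 0.8
f_01 = spring_force(k, positions[0], positions[1])
print(f"force on weight 0 from weight 1: {f_01}")   # positive: pulled right
```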


2021 ◽  
Vol 11 (10) ◽  
pp. 4382
Author(s):  
Ali Sadeghi ◽  
Sajjad Amiri Doumari ◽  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Pavel Trojovský ◽  
...  

Optimization is the science of choosing the best among the available solutions while respecting an optimization problem's constraints. Optimization algorithms have been introduced as efficient tools for solving such problems. These algorithms are designed based on various natural phenomena, the behavior and lifestyle of living beings, physical laws, rules of games, etc. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) is introduced to solve various optimization problems. In GBGBO, population members are updated under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness values than the other members, and the bad group consists of a number of population members with worse fitness values than the other members of the population. GBGBO is mathematically modeled, and its performance in solving optimization problems was tested on a set of twenty-three different objective functions. For further analysis, the results obtained with the proposed algorithm were compared with those of eight optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), the gray wolf optimizer (GWO), the whale optimization algorithm (WOA), the tunicate swarm algorithm (TSA), and the marine predators algorithm (MPA). The results show that the proposed GBGBO algorithm has a good ability to solve various optimization problems and is more competitive than similar algorithms.
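One plausible reading of the good-group/bad-group mechanism is an update that attracts members toward the good group and repels them from the bad group. The sketch below implements that reading; the group size, step size, and exact update rule are assumptions, not the authors' equations.

```python
# A minimal sketch of a good/bad-group update (assumed rule, minimization).
import numpy as np

rng = np.random.default_rng(3)

def gbgbo_step(pop, fitness, n_group=3, step=0.5):
    """One update: attract to the good group's mean, repel from the bad group's."""
    order = np.argsort(fitness)                        # best first
    good, bad = pop[order[:n_group]], pop[order[-n_group:]]
    pull = good.mean(axis=0) - pop                     # toward good members
    push = pop - bad.mean(axis=0)                      # away from bad members
    return pop + step * rng.random(pop.shape) * (pull + push)

pop = rng.uniform(-5, 5, size=(10, 2))
fit = (pop ** 2).sum(axis=1)                           # sphere test function
pop = gbgbo_step(pop, fit)
print(pop.round(3))
```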


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1190
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Štěpán Hubálovský

Many optimization problems across the different disciplines of science must be solved with an appropriate method. Population-based optimization algorithms are among the most efficient options, as they can provide suitable solutions based on a random search of the problem space without needing gradient or derivative information. In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented; it can be applied to solve optimization problems in various fields of science. The main idea in designing GMBO is to use the information of the different members of the population more effectively, based on two selected groups titled the good group and the bad group. Two new composite members are obtained by averaging each of these groups and are used to update the population members. The various stages of GMBO are described and mathematically modeled for use in solving optimization problems. The performance of GMBO in providing suitable quasi-optimal solutions is evaluated on a set of 23 standard objective functions of different types: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal. In addition, the optimization results obtained with the proposed GMBO were compared with those of eight widely used optimization algorithms: the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching-Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). The results indicate the acceptable performance of the proposed GMBO and, based on the analysis and comparison, show that GMBO is superior to and considerably more competitive than the other eight algorithms.
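The composite members described here, the mean of the good group and the mean of the bad group, are straightforward to compute; the update step that uses them is only sketched from the abstract's wording and is therefore an assumption.

```python
# Sketch of GMBO's composite members plus a plausible (assumed) update rule.
import numpy as np

rng = np.random.default_rng(7)

def group_means(pop, fitness, n_group=3):
    """Composite members: the mean of the best and of the worst n_group."""
    order = np.argsort(fitness)                    # minimization
    return pop[order[:n_group]].mean(axis=0), pop[order[-n_group:]].mean(axis=0)

pop = rng.uniform(-10, 10, size=(20, 5))
fit = (pop ** 2).sum(axis=1)
good_mean, bad_mean = group_means(pop, fit)
# plausible update: step toward the good mean and away from the bad mean
pop = pop + rng.random(pop.shape) * (good_mean - pop) \
          + rng.random(pop.shape) * (pop - bad_mean)
print(pop.shape)
```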


2021 ◽  
Author(s):  
Rekha G ◽  
Krishna Reddy V ◽  
Chandrashekar Jatoth ◽  
Ugo Fiore

Class imbalance problems have attracted the research community's attention, but few works have focused on feature selection with imbalanced datasets. To handle class imbalance, we developed a novel fitness function for feature selection using the chaotic salp swarm optimization algorithm, an efficient meta-heuristic that has been used successfully in a wide range of optimization problems. This paper proposes an AdaBoost algorithm with chaotic salp swarm optimization: the most discriminating features are selected using salp swarm optimization, and AdaBoost classifiers are then trained on the selected features. Experiments show the ability of the proposed technique to find the optimal features while maximizing AdaBoost performance.
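The abstract does not define the novel fitness function, so the sketch below is only one imbalance-aware possibility: a balanced score (G-mean is an assumption) on an AdaBoost model, minus a small penalty on the subset size. All constants and the `fitness` name are illustrative.

```python
# Hedged sketch of an imbalance-aware wrapper fitness (assumed form).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

def fitness(mask, alpha=0.01):
    """G-mean of AdaBoost on the selected features, minus a size penalty."""
    if mask.sum() == 0:
        return 0.0
    cols = mask.astype(bool)
    model = AdaBoostClassifier(random_state=0).fit(Xtr[:, cols], ytr)
    pred = model.predict(Xte[:, cols])
    sens = recall_score(yte, pred)                     # minority-class recall
    spec = recall_score(yte, pred, pos_label=0)        # majority-class recall
    return np.sqrt(sens * spec) - alpha * mask.sum() / mask.size

print(round(fitness(np.ones(20, dtype=int)), 4))
```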


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255307
Author(s):  
Fujun Wang ◽  
Xing Wang

Feature selection is an important task in big data analysis and information retrieval. It reduces the number of features by removing noisy and extraneous data. In this paper, a feature subset selection algorithm based on damping-oscillation theory and a support vector machine classifier is proposed, called the Maximum Kendall coefficient Maximum Euclidean Distance Improved Grey Wolf Optimization algorithm (MKMDIGWO). In MKMDIGWO, first, a filter model based on the Kendall coefficient and Euclidean distance is proposed to measure the correlation and redundancy of a candidate feature subset. Second, the wrapper model is an improved grey wolf optimization algorithm whose position-update formula has been improved to achieve better results. Third, the filter and wrapper models are dynamically balanced according to damping-oscillation theory to find an optimal feature subset. MKMDIGWO thus achieves both the efficiency of the filter model and the high precision of the wrapper model. Experimental results on five UCI public data sets and two microarray data sets demonstrate that the classification accuracy of MKMDIGWO is higher than that of four other state-of-the-art algorithms; its maximum accuracy is at least 0.5% higher than the other algorithms' on 10 data sets.
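A filter score combining a Kendall coefficient (relevance to the label) with Euclidean distances between features (redundancy) can be sketched as below. The weighting of the two terms is not given in the abstract, so this combination and the `filter_score` name are assumptions for illustration only.

```python
# Illustrative Kendall-plus-Euclidean filter score (assumed weighting).
import numpy as np
from itertools import combinations
from scipy.stats import kendalltau

def filter_score(X_sub, y):
    """Relevance (mean |Kendall tau| with y) minus redundancy
    (inverse of the mean pairwise Euclidean distance between features)."""
    relevance = np.mean([abs(kendalltau(X_sub[:, j], y)[0])
                         for j in range(X_sub.shape[1])])
    dists = [np.linalg.norm(X_sub[:, a] - X_sub[:, b])
             for a, b in combinations(range(X_sub.shape[1]), 2)]
    redundancy = 1.0 / (np.mean(dists) + 1e-9) if dists else 0.0
    return relevance - redundancy

rng = np.random.default_rng(5)
X_demo = rng.normal(size=(50, 4))
y_demo = (X_demo[:, 0] > 0).astype(int)
print(round(filter_score(X_demo, y_demo), 4))
```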

