Feature Selection Ensemble

10.29007/rlxq ◽  
2018 ◽  
Author(s):  
Qiang Shen ◽  
Ren Diao ◽  
Pan Su

Many strategies have been exploited for the task of feature selection, in an effort to identify more compact and better quality feature subsets. Such techniques typically involve the use of an individual feature significance evaluation, or a measurement of feature subset consistency, that works together with a search algorithm to determine a quality subset. Feature selection ensemble aims to combine the outputs of multiple feature selectors, thereby producing a more robust result for the subsequent classifier learning tasks. In this paper, three novel implementations of the feature selection ensemble concept are introduced, generalising the ensemble approach so that it can be used in conjunction with many subset evaluation techniques and search algorithms. A recently developed heuristic algorithm, harmony search, is employed to demonstrate the approaches. Results of experimental comparative studies are reported in order to highlight the benefits of the present work. The paper ends with a proposal to extend the application of feature selection ensemble to aiding the development of biped robots (inspired by the authors’ involvement in the joint celebration of the Olympics and the centenary of the birth of Alan Turing).
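The core ensemble idea above, combining the subsets returned by several independent feature selectors into a single more robust subset, can be sketched as a majority vote over feature indices. This is a minimal illustration only; the function name `ensemble_select` and the voting threshold are assumptions, not the paper's exact aggregation scheme:

```python
from collections import Counter

def ensemble_select(subsets, threshold=0.5):
    """Combine feature subsets chosen by multiple selectors by majority vote.

    subsets:   list of iterables of feature indices, one per selector.
    threshold: fraction of selectors that must agree for a feature to survive.
    """
    votes = Counter(f for s in subsets for f in set(s))
    n = len(subsets)
    return sorted(f for f, c in votes.items() if c / n >= threshold)

# Three hypothetical selectors mostly agree on features 1 and 2:
print(ensemble_select([[0, 1, 2], [1, 2, 3], [1, 4]]))  # [1, 2]
```

Any base selector (filter, wrapper, or a harmony-search run with a different random seed) can feed this aggregation, which is what makes the ensemble framing selector-agnostic.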

Author(s):  
Laith Mohammad Abualigah ◽  
Mofleh Al‐diabat ◽  
Mohammad Al Shinwan ◽  
Khaldoon Dhou ◽  
Bisan Alsalibi ◽  
...  

2020 ◽  
Vol 10 (8) ◽  
pp. 2816 ◽  
Author(s):  
Soumyajit Saha ◽  
Manosij Ghosh ◽  
Soulib Ghosh ◽  
Shibaprasad Sen ◽  
Pawan Kumar Singh ◽  
...  

Nowadays, researchers aim to enhance man-to-machine interactions by making advancements in several domains. Facial emotion recognition (FER) is one such domain in which researchers have made significant progress. Features for FER can be extracted using several popular methods. However, there may be some redundant/irrelevant features in feature sets. In order to remove those redundant/irrelevant features that do not have any significant impact on the classification process, we propose a feature selection (FS) technique called the supervised filter harmony search algorithm (SFHSA) based on cosine similarity and minimal-redundancy maximal-relevance (mRMR). Cosine similarity aims to remove similar features from feature vectors, whereas mRMR determines the feasibility of the optimal feature subsets using Pearson’s correlation coefficient (PCC), which favors features that have lower correlation values with other features, as well as higher correlation values with the facial expression classes. The algorithm was evaluated on two benchmark FER datasets, namely the Radboud faces database (RaFD) and the Japanese female facial expression (JAFFE) database. Five different state-of-the-art feature descriptors, including uniform local binary pattern (uLBP), horizontal–vertical neighborhood local binary pattern (hvnLBP), Gabor filters, histogram of oriented gradients (HOG) and pyramidal HOG (PHOG), were considered for FS. The obtained results signify that our technique effectively optimized the feature vectors and made notable improvements in overall classification accuracy.
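The two filter criteria described above, cosine-similarity pruning of near-duplicate features and a PCC-based mRMR score, can be sketched with NumPy. This is a rough illustration under assumed conventions (the 0.95 similarity threshold and the function names are placeholders, not the SFHSA specification):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def drop_similar_features(X, threshold=0.95):
    """Greedily keep a feature column only if its cosine similarity to
    every already-kept column stays below `threshold`."""
    kept = []
    for j in range(X.shape[1]):
        if all(cosine_similarity(X[:, j], X[:, k]) < threshold for k in kept):
            kept.append(j)
    return kept

def mrmr_score(X, y, subset):
    """PCC-based mRMR: mean |corr(feature, class)| minus mean pairwise
    |corr| among the chosen features (higher is better)."""
    relevance = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if len(subset) < 2:
        return float(relevance)
    redundancy = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                          for i, a in enumerate(subset) for b in subset[i + 1:]])
    return float(relevance - redundancy)

# Column 1 is an exact multiple of column 0, so it is pruned:
X = np.array([[1., 2., 0.], [2., 4., 1.], [3., 6., 0.], [4., 8., 1.]])
print(drop_similar_features(X))  # [0, 2]
```

In the paper's pipeline a harmony search would then explore subsets of the surviving columns, scoring each candidate with a criterion like `mrmr_score`.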


2018 ◽  
Vol 27 (3) ◽  
pp. 465-488 ◽  
Author(s):  
Pawan Kumar Singh ◽  
Supratim Das ◽  
Ram Sarkar ◽  
Mita Nasipuri

The feature selection process can be considered a problem of global combinatorial optimization in machine learning, which reduces the irrelevant, noisy, and non-contributing features while retaining acceptable classification accuracy. The harmony search algorithm (HSA) is an evolutionary algorithm that has been applied to various optimization problems such as scheduling, text summarization, water distribution networks, and vehicle routing. This paper presents a hybrid approach based on a support vector machine and HSA for wrapper feature subset selection. This approach is used to select an optimized set of features from an initial set obtained by applying modified log-Gabor filters on pre-partitioned rectangular blocks of handwritten document images written in any of 12 official Indic scripts. The assessment justifies the need for feature selection in handwritten script identification, where local and global features are computed without knowing the exact importance of each feature. The proposed approach is also compared with four well-known evolutionary algorithms, namely genetic algorithm, particle swarm optimization, tabu search, and ant colony optimization, and with two statistical feature dimensionality reduction techniques, namely greedy attribute search and principal component analysis. The acquired results show that the optimal set of features selected using HSA gives better accuracy in handwritten script recognition.
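The wrapper loop described above can be sketched as a binary harmony search over feature masks. This is an illustrative skeleton under stated assumptions: the paper scores candidates with an SVM classifier, whereas here `fitness` is any callable supplied by the caller, and the parameter names `hmcr` (harmony memory considering rate) and `par` (pitch adjusting rate) follow standard harmony search terminology:

```python
import random

def harmony_search_fs(n_features, fitness, memory_size=10, iters=200,
                      hmcr=0.9, par=0.3, seed=0):
    """Binary harmony search for wrapper feature selection (a sketch).

    fitness: callable scoring a boolean feature mask; in a full wrapper
    this would be, e.g., cross-validated SVM accuracy on the masked data.
    """
    rng = random.Random(seed)
    memory = [[rng.random() < 0.5 for _ in range(n_features)]
              for _ in range(memory_size)]
    scores = [fitness(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_features):
            if rng.random() < hmcr:                 # memory consideration
                bit = rng.choice(memory)[j]
                if rng.random() < par:              # pitch adjustment: flip bit
                    bit = not bit
            else:                                   # random consideration
                bit = rng.random() < 0.5
            new.append(bit)
        s = fitness(new)
        worst = min(range(memory_size), key=scores.__getitem__)
        if s > scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = max(range(memory_size), key=scores.__getitem__)
    return memory[best], scores[best]
```

With a toy fitness that rewards matching a known target mask, the loop converges toward that mask; swapping in classifier accuracy recovers the wrapper scheme the paper describes.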


Author(s):  
Ani Dijah Rahajoe ◽  
Rifki Fahrial Zainal ◽  
Budi Mukhamad Mulyo ◽  
Boonyang Plangkang ◽  
Rahmawati Febrifyaning Tias

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 102629-102645 ◽  
Author(s):  
Shameem Ahmed ◽  
Kushal Kanti Ghosh ◽  
Pawan Kumar Singh ◽  
Zong Woo Geem ◽  
Ram Sarkar

Data Mining ◽  
2011 ◽  
pp. 80-105 ◽  
Author(s):  
Yong Seong Kim ◽  
W. Nick Street ◽  
Filippo Menczer

Feature subset selection is an important problem in knowledge discovery, not only for the insight gained from determining relevant modeling variables, but also for the improved understandability, scalability, and, possibly, accuracy of the resulting models. The purpose of this chapter is to provide a comprehensive analysis of feature selection via evolutionary search in supervised and unsupervised learning. To achieve this purpose, we first discuss a general framework for feature selection based on a new search algorithm, the Evolutionary Local Selection Algorithm (ELSA). The search is formulated as a multi-objective optimization problem in order to examine the trade-off between the complexity of the generated solutions and their quality. ELSA considers multiple objectives efficiently while avoiding computationally expensive global comparisons. We combine ELSA with artificial neural networks (ANNs) and the expectation-maximization (EM) algorithm for feature selection in supervised and unsupervised learning, respectively. Further, we provide a new two-level evolutionary algorithm, Meta-Evolutionary Ensembles (MEE), in which feature selection is used to promote diversity among the classifiers in the same ensemble.
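The complexity-versus-quality trade-off that ELSA explores can be illustrated with a plain Pareto-dominance check over candidate subsets. This is a toy illustration, not the ELSA algorithm itself; encoding each candidate as a (subset size, quality score) pair is an assumption made for the sketch:

```python
def pareto_front(candidates):
    """Return the non-dominated (subset_size, quality) pairs, where
    smaller subsets and higher quality are both preferred."""
    def dominates(a, b):
        # a dominates b: at least as good on both objectives, not identical
        return a != b and a[0] <= b[0] and a[1] >= b[1]
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

# (5, 0.85) is dominated by (3, 0.9); (2, 0.7) by (2, 0.8):
print(pareto_front([(2, 0.8), (3, 0.9), (5, 0.85), (2, 0.7)]))
# [(2, 0.8), (3, 0.9)]
```

A multi-objective search such as ELSA maintains a population spread along this front instead of collapsing to a single best-scoring subset, which is what lets it avoid expensive global comparisons.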

