A Multilevel Optimal Feature Selection and Ensemble Learning for a Specific CAD System-Pulmonary Nodule Detection

2013 ◽  
Vol 380-384 ◽  
pp. 1593-1599
Author(s):  
Hao Yan Guo ◽  
Da Zheng Wang

The traditional motivation behind feature selection algorithms is to find the best subset of features for a task using one particular learning algorithm. However, it has often been found that no single classifier is entirely satisfactory for a particular task. Therefore, how to further improve the performance of these single systems on the basis of a previously selected optimal feature subset is an important issue. We investigate the notion of optimal feature selection and present a practical feature selection approach based on the optimal feature subset of a single CAD system, referred to here as the multilevel optimal feature selection method (MOFS). Through MOFS, we select different optimal feature subsets in order to eliminate redundant or irrelevant features and obtain the optimal features.

Optimal feature subset selection over very high-dimensional data is a vital issue. Even when the optimal features are selected, classifying on those selected features remains a complicated task. To handle these problems, a novel Accelerated Simulated Annealing and Mutation Operator (ASAMO) feature selection algorithm is proposed in this work. For solving the classification problem, the Fuzzy Minimal Consistent Class Subset Coverage (FMCCSC) problem is introduced. In FMCCSC, a consistent subset is combined with the K-Nearest Neighbour (KNN) classifier, known as the FMCCSC-KNN classifier. Two datasets, Dorothea and Madelon from the UCI machine learning repository, are used to evaluate optimal feature selection and classification. The experimental results substantiate the efficiency of the proposed ASAMO with the FMCCSC-KNN classifier compared to Particle Swarm Optimization (PSO) and Accelerated PSO feature selection algorithms.
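The abstract does not spell out ASAMO's internals, so the following is only a minimal plain-Python sketch of the general idea: simulated annealing over binary feature masks with a bit-flip "mutation operator" and a caller-supplied subset evaluator. `toy_score` is a made-up stand-in for a real evaluator such as cross-validated accuracy.

```python
import math
import random

def anneal_features(n_features, score, iters=200, t0=1.0, cooling=0.95, seed=0):
    """Simulated-annealing search over binary feature masks.

    `score(mask)` returns a quality to maximize (hypothetical evaluator,
    e.g. cross-validated accuracy of a classifier on the masked features).
    A single bit-flip acts as the mutation operator proposing neighbor subsets.
    """
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best, best_score = mask[:], score(mask)
    cur, cur_score, t = mask[:], best_score, t0
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(n_features)] ^= True  # mutate one bit
        s = score(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if s >= cur_score or rng.random() < math.exp((s - cur_score) / max(t, 1e-9)):
            cur, cur_score = cand, s
        if cur_score > best_score:
            best, best_score = cur[:], cur_score
        t *= cooling  # geometric cooling schedule
    return best, best_score

# toy target: features 0 and 2 are "relevant", extra features are penalized
target = {0, 2}
def toy_score(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & target) - 0.1 * len(chosen - target)

best_mask, best = anneal_features(6, toy_score)
```

The "accelerated" aspect of ASAMO presumably tunes the cooling schedule or proposal step; that detail is not recoverable from the abstract.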


Author(s):  
Suja K ◽  
Rengarajan A

Irrelevant and redundant features severely affect the accuracy of learning machines. Feature subset selection is the process of identifying and removing irrelevant and redundant features. The overall optimal feature selection method is divided into two main steps: (i) preprocessing and (ii) optimal feature selection using clustering and tree generation. First, preprocessing is performed on the input microarray dataset. Then, a possibilistic fuzzy c-means clustering algorithm with an optimal minimum spanning tree algorithm is applied to the high-dimensional microarray dataset to select the important features. The proposed method optimally selects the features with the help of an adaptive artificial bee colony algorithm.
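The minimum-spanning-tree step can be illustrated in isolation. Below is a minimal Prim's algorithm sketch, assuming features are graph nodes and edge weights encode pairwise redundancy (e.g. 1 − |correlation|); the clustering and bee-colony stages of the paper's pipeline are omitted.

```python
def prims_mst(weights):
    """Prim's algorithm on a dense symmetric weight matrix; returns MST edges.

    In MST-based feature selection, nodes are features and weights encode
    redundancy; removing the heaviest MST edges then partitions the features
    into groups from which representatives are chosen.
    """
    n = len(weights)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge crossing from the tree to a node not yet in it
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: weights[e[0]][e[1]])
        edges.append((u, v, weights[u][v]))
        in_tree.add(v)
    return edges

edges = prims_mst([[0, 1, 4], [1, 0, 2], [4, 2, 0]])
```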


2021 ◽  
Vol 12 ◽  
Author(s):  
Dongxu Zhao ◽  
Zhixia Teng ◽  
Yanjuan Li ◽  
Dong Chen

Recently, several anti-inflammatory peptides (AIPs) have been found in the inflammatory response, and these peptides have been used to treat some inflammatory and autoimmune diseases. Therefore, accurately identifying AIPs from given amino acid sequences is critical for the discovery of novel and efficient anti-inflammatory peptide-based therapeutics and the acceleration of their application in therapy. In this paper, a random forest-based model called iAIPs for identifying AIPs is proposed. First, the original samples are encoded with three feature extraction methods: g-gap dipeptide composition (GDC), dipeptide deviation from the expected mean (DDE), and amino acid composition (AAC). Second, the optimal feature subset is generated by a two-step feature selection method, in which the features are ranked by analysis of variance (ANOVA) and the optimal subset is generated by an incremental feature selection strategy. Finally, the optimal feature subset is fed into a random forest classifier to construct the identification model. Experimental results showed that iAIPs achieved an AUC value of 0.822 on an independent test dataset, indicating that the proposed model outperforms existing methods. Furthermore, the extracted peptide-sequence features provide a basis for evolutionary analysis; the study of peptide identification helps in understanding the diversity of species and analyzing their evolutionary history.
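The AAC encoding and ANOVA ranking steps are standard enough to sketch. Assuming AAC means the 20 residue frequencies and ANOVA means the one-way F statistic computed per feature across classes (function names below are illustrative, not from the iAIPs code):

```python
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def aac(seq):
    """Amino acid composition: the 20 residue frequencies of a peptide."""
    n = max(len(seq), 1)
    return [seq.count(a) / n for a in ALPHABET]

def anova_f(groups):
    """One-way ANOVA F statistic for one feature, given class-wise value lists."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    ms_within = ss_within / (n - k)
    return (ss_between / (k - 1)) / (ms_within or 1e-12)
```

Incremental feature selection would then sort features by descending F value and grow the subset one feature at a time, keeping the size that maximizes cross-validated performance.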


2014 ◽  
Vol 507 ◽  
pp. 806-809
Author(s):  
Shu Fang Li ◽  
Qin Jia ◽  
Hong Liang

In order to achieve a real-time automatic classification method for red tide algae with a high accuracy rate, this paper proposes using ReliefF-SBS for feature selection. Feature analysis is performed on the original red tide algae image data set, and on this basis feature selection removes irrelevant and redundant features from the original feature set to obtain the optimal feature subset and reduce their impact on classification accuracy. The classification results of two classifiers, SVM and KNN, are compared before and after feature selection.
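The weighting half of ReliefF-SBS can be sketched independently of the backward-selection half. Below is a simplified Relief-style weighting (binary classes, one nearest hit and miss per sample, Manhattan distance), not the full multi-class ReliefF used in the paper.

```python
def relief_weights(X, y):
    """Simplified Relief feature weights for a binary-labeled dataset.

    For each sample, find its nearest same-class neighbor (hit) and nearest
    other-class neighbor (miss); features that separate classes get rewarded,
    features that separate same-class samples get penalized.
    Assumes every class has at least two samples.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return sum(abs(u - v) for u, v in zip(a, b))

    for i in range(n):
        hits = [X[j] for j in range(n) if j != i and y[j] == y[i]]
        misses = [X[j] for j in range(n) if y[j] != y[i]]
        near_hit = min(hits, key=lambda s: dist(s, X[i]))
        near_miss = min(misses, key=lambda s: dist(s, X[i]))
        for f in range(d):
            w[f] += abs(X[i][f] - near_miss[f]) - abs(X[i][f] - near_hit[f])
    return w

# feature 0 separates the classes, feature 1 is constant
w = relief_weights([[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [0.9, 5.0]], [0, 0, 1, 1])
```

Sequential backward selection (the SBS half) would then repeatedly drop the lowest-weighted feature while classification accuracy does not degrade.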


Author(s):  
Ilangovan Sangaiya ◽  
A. Vincent Antony Kumar

In data mining, feature selection is required to select relevant features and remove unimportant, irrelevant features from an original data set based on some evaluation criterion. Filter and wrapper are the two common methods, and here the authors propose a hybrid feature selection method that takes advantage of both. The proposed method uses symmetrical uncertainty and a genetic algorithm to select the optimal feature subset, so as to improve processing time by reducing the dimension of the data set without compromising classification accuracy. The proposed hybrid algorithm is much faster, and scales better to large data sets in terms of selected features, classification accuracy and running time, than most existing algorithms.
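Symmetrical uncertainty has a standard definition, so the filter half can be sketched directly for discrete features; the genetic-algorithm search that consumes these scores is omitted.

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy of a discrete sample, in bits."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1] for discrete features.

    1.0 means the feature determines the label exactly; 0.0 means independence.
    """
    hx, hy = entropy(x), entropy(y)
    mi = hx + hy - entropy(list(zip(x, y)))  # mutual information I(X; Y)
    denom = hx + hy
    return 2 * mi / denom if denom else 0.0
```

In a hybrid scheme of this kind, SU typically pre-filters or ranks features, and the GA then searches subsets of the survivors with a classifier-based fitness function.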


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Deepti Sisodia ◽  
Dilip Singh Sisodia

Purpose
The problem of choosing the most useful features from hundreds of features of time-series user click data arises in classifying fraudulent publishers in online advertising. Selecting feature subsets is a key issue in such classification tasks. In practice, filter approaches are common, but they neglect correlations among features; conversely, wrapper approaches are often inapplicable due to their complexity. Moreover, existing feature selection methods cannot handle such data, which is one of the major causes of instability in feature selection.

Design/methodology/approach
To overcome these issues, a majority voting-based hybrid feature selection method, namely feature distillation and accumulated selection (FDAS), is proposed to investigate the optimal subset of relevant features for analyzing publishers' fraudulent conduct. FDAS works in two phases: (1) feature distillation, where significant features from standard filter and wrapper feature selection methods are obtained using majority voting; (2) accumulated selection, where an accumulated evaluation of relevant feature subsets is enumerated to search for an optimal feature subset using effective machine learning (ML) models.

Findings
Empirical results show enhanced classification performance with the proposed features in average precision, recall, F1-score and AUC for publisher identification and classification.

Originality/value
FDAS is evaluated on the FDMA2012 user-click data and nine other benchmark datasets to gauge its generalizing characteristics: first with the original features, second with relevant feature subsets selected by feature selection (FS) methods, and third with the optimal feature subset obtained by the proposed approach. An ANOVA significance test is conducted to demonstrate significant differences between independent features.
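The feature-distillation phase is essentially a majority vote across the subsets chosen by several FS methods. A sketch under that reading, with illustrative names rather than the paper's API:

```python
def distill_features(selections, threshold=None):
    """Majority vote over feature subsets chosen by several FS methods.

    `selections` is a list of feature-index sets, one per filter/wrapper
    method; a feature is kept ("distilled") if at least `threshold` methods
    selected it (default: a strict majority of the methods).
    """
    t = threshold if threshold is not None else len(selections) // 2 + 1
    votes = {}
    for sel in selections:
        for f in sel:
            votes[f] = votes.get(f, 0) + 1
    return sorted(f for f, v in votes.items() if v >= t)

# three hypothetical FS methods voted for these feature indices
kept = distill_features([{0, 1, 2}, {1, 2, 3}, {2, 4}])
```

The accumulated-selection phase would then grow subsets from these distilled features, scoring each accumulation with ML models.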


Author(s):  
Ravi Kumar Saidala

Email has become one of the most popular and flexible web- or mobile-based applications for communication. For decades, the most severe problem in email applications has been unwanted email: electronic spam, also referred to as spam email, in which unsolicited and unwanted mails are received. Keeping a mailbox clean by detecting and eliminating all spam mails is a challenging task. Classification-based email filtering is one of the best approaches used by many researchers to deal with the spam filtering problem. In this work, the NOA optimization algorithm and the SVM classifier are used to obtain an optimal feature subset of the Enron-Spam dataset and to classify it. NOA is a recently developed metaheuristic algorithm driven by mimicking the energy-saving flying pattern of the Northern Bald Ibis (Threskiornithidae). Performance comparisons have been made with other existing methods, and the superiority of the proposed feature selection approach is evident in the analysis and comparison of the classification results.


Author(s):  
Hui Wang ◽  
Li Li Guo ◽  
Yun Lin

Automatic modulation recognition is very important for receiver design in broadband multimedia communication systems, and reasonable signal feature extraction and selection algorithms are the key technology of digital multimedia signal recognition. In this paper, information entropy is used to extract single features: power spectrum entropy, wavelet energy spectrum entropy, singular spectrum entropy and Renyi entropy. Then, feature selection algorithms based on distance measurement and Sequential Feature Selection (SFS) are presented to select the optimal feature subset. Finally, a BP neural network is used to classify the signal modulation. The simulation results show that the four different information entropies can be used to classify different signal modulations, and that the feature selection algorithm successfully chooses the optimal feature subset and achieves the best performance.
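The power spectrum entropy feature can be sketched directly: compute a power spectrum, normalize it into a distribution, and take its Shannon entropy. The wavelet, singular-spectrum and Renyi variants follow the same pattern over different transforms. A naive DFT is used here purely for illustration.

```python
from math import cos, sin, pi, log2

def power_spectrum(x):
    """Naive O(n^2) DFT power spectrum |X[k]|^2 (fine for short toy signals)."""
    n = len(x)
    ps = []
    for k in range(n):
        re = sum(x[t] * cos(2 * pi * k * t / n) for t in range(n))
        im = -sum(x[t] * sin(2 * pi * k * t / n) for t in range(n))
        ps.append(re * re + im * im)
    return ps

def spectrum_entropy(ps):
    """Shannon entropy of the normalized power spectrum, in bits.

    A single tone concentrates power in few bins (low entropy); noise-like
    signals spread power across bins (high entropy) - this contrast is what
    makes the entropy usable as a modulation feature.
    """
    total = sum(ps)
    probs = [p / total for p in ps if p / total > 1e-12]
    return -sum(p * log2(p) for p in probs)
```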


Feature selection in multispectral high-dimensional information is a hard machine learning problem because of the imbalanced classes present in the data. Most existing feature selection schemes in the literature ignore the problem of class imbalance, choosing features from the classes with more instances and overlooking significant features of the classes with fewer instances. In this paper, the SMOTE concept is exploited to produce the required samples from minority classes. A feature selection model is formulated with the objective of reducing the number of features while improving classification performance. The model performs dimensionality reduction by opting for a subset of relevant spectral, textural and spatial features while eliminating redundant features. A binary ALO is employed to solve the feature selection model for optimal selection of features, and the proposed ALO-SVM wrapper is applied to each potential solution obtained during the optimization step. The methodology is tested on a LANDSAT multispectral image.
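SMOTE's core step is simple to sketch: each synthetic minority sample is interpolated between a real minority sample and one of its k nearest minority neighbors. This is a plain-Python illustration of the idea, not the reference implementation.

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """SMOTE-style oversampling of a minority class.

    For each synthetic point: pick a random minority sample, pick one of its
    k nearest minority neighbors (squared Euclidean distance), and return a
    random convex combination of the two.
    """
    rng = random.Random(seed)

    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbors = sorted((s for s in minority if s is not x),
                           key=lambda s: d2(s, x))[:k]
        nb = rng.choice(neighbors)
        lam = rng.random()  # interpolation factor in [0, 1)
        out.append([xi + lam * (ni - xi) for xi, ni in zip(x, nb)])
    return out

pts = smote_like([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], 5)
```

Balancing the classes this way before wrapper-based selection keeps the subset evaluator from rewarding features that only describe the majority class.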


Big data mining plays a critical role in real-world environments due to the presence of large volumes of data of different varieties and types. Handling these data and predicting information is a difficult task that must be addressed to obtain useful knowledge. This was achieved in our previous research work by introducing Enhanced Particle Swarm Optimization with Genetic Algorithm – Modified Artificial Neural Network (EPSOGA-MANN), which can select the optimal features from a large volume of data. However, that method's performance may degrade in the presence of missing values in the dataset, and it is complex to run due to the increased computational overhead of the ANN algorithm. This is resolved in the proposed research by introducing the Missing Value concerned Optimal Feature Selection Method (MV-OFSM). In this method, an improved KNN imputation algorithm is introduced to handle missing values, and a dynamic clustering method is introduced to cluster the dataset based on a closeness measure. An Anarchic Society Optimization (ASO) based approach is then applied to perform feature selection on the given dataset. Finally, a hybrid ANN-GA classification technique is applied for classification. The overall performance evaluation is carried out in the MATLAB simulation environment, which shows that the proposed method provides better performance than the existing research technique.
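The "improved" KNN imputation is not specified in the abstract, so the following is only a baseline KNN-imputation sketch, assuming `None` marks a missing value and that row distances are computed over jointly observed columns only.

```python
def knn_impute(rows, k=2):
    """Fill missing values (None) from the k nearest complete rows.

    Distance between two rows is the mean squared difference over the columns
    both rows have observed; each missing cell is replaced by the mean of
    that column across the k nearest complete rows.
    Assumes at least k fully observed rows exist.
    """
    def dist(a, b):
        shared = [(u, v) for u, v in zip(a, b) if u is not None and v is not None]
        if not shared:
            return float("inf")
        return sum((u - v) ** 2 for u, v in shared) / len(shared)

    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(r[:])
            continue
        nearest = sorted(complete, key=lambda c: dist(r, c))[:k]
        filled.append([
            v if v is not None else sum(c[j] for c in nearest) / len(nearest)
            for j, v in enumerate(r)
        ])
    return filled

rows = [[1.0, 2.0], [1.2, 2.2], [5.0, 6.0], [1.1, None]]
out = knn_impute(rows)
```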

