A New Feature Selection Techniques Using Genetics Search and Random Search Approaches For Breast Cancer

2017 ◽ Vol 14 (1) ◽ pp. 409-414
Author(s): Tamilvanan Tamilvanan ◽ V. Murali Bhaskaran
2021 ◽ Vol 25 (1) ◽ pp. 21-34
Author(s): Rafael B. Pereira ◽ Alexandre Plastino ◽ Bianca Zadrozny ◽ Luiz H.C. Merschmann

In many important application domains, such as text categorization, biomolecular analysis, scene or video classification, and medical diagnosis, instances are naturally associated with more than one class label, giving rise to multi-label classification problems. This has led, in recent years, to a substantial amount of research in multi-label classification. More specifically, feature selection methods have been developed to identify relevant and informative features for multi-label classification. This work presents a new feature selection method based on the lazy feature selection paradigm and specific to the multi-label context. Experimental results show that the proposed technique is competitive with the multi-label feature selection techniques currently used in the literature, and is clearly more scalable as the amount of data grows.
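To make the lazy paradigm concrete, here is a minimal sketch (not the authors' algorithm): for each query instance, feature relevance is scored at prediction time from the training instances that share the query's value on each categorical feature, and only the top-ranked features are used in a multi-label k-nearest-neighbor vote. The entropy-based score, the toy data, and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

def label_entropy(Y):
    # mean binary entropy across labels; lower means a purer label distribution
    p = np.clip(Y.mean(axis=0), 1e-9, 1 - 1e-9)
    return float(np.mean(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def lazy_select_and_predict(X_train, Y_train, x_query, k_features=2, k_neighbors=3):
    # score each (categorical) feature for THIS query only
    scores = np.full(X_train.shape[1], -np.inf)
    for j in range(X_train.shape[1]):
        mask = X_train[:, j] == x_query[j]          # training rows matching the query value
        if mask.any():
            scores[j] = -label_entropy(Y_train[mask])
    selected = np.argsort(scores)[-k_features:]     # per-instance top-k features
    # Hamming-distance kNN restricted to the selected features
    dists = (X_train[:, selected] != x_query[selected]).sum(axis=1)
    neighbors = np.argsort(dists)[:k_neighbors]
    # each label is predicted by majority vote among the neighbors
    return (Y_train[neighbors].mean(axis=0) >= 0.5).astype(int), selected

# toy categorical data: 6 instances, 3 features, 2 labels
X = np.array([[0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [0, 0, 0], [1, 1, 1]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [1, 1], [0, 0]])
pred, feats = lazy_select_and_predict(X, Y, np.array([0, 1, 0]))
print("features used:", feats, "predicted labels:", pred)
```

The defining trait of lazy feature selection is visible in the sketch: the selected subset can differ from one query instance to the next, and no global feature ranking is ever computed.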


IEEE Access ◽ 2021 ◽ Vol 9 ◽ pp. 22090-22105
Author(s): Amin Ul Haq ◽ Jian Ping Li ◽ Abdus Saboor ◽ Jalaluddin Khan ◽ Samad Wali ◽ ...

2019 ◽ Vol 8 (2) ◽ pp. 6396-6399

Breast cancer examination and prediction pose great challenges to researchers in medical applications. Breast cancer examination distinguishes benign from malignant breast lumps, while breast cancer prediction aims to foretell when breast cancer is expected to recur in patients who have had their cancers excised. Feature selection is the preliminary step used to find the best subsets of attributes. In this paper the authors examine the performance of five classifiers, Sequential Minimal Optimization (SMO), Multilayer Perceptron, KStar, Decision Table, and Random Forest, with and without feature selection. The results show that after applying two feature selection techniques, correlation-based and information-based with the Ranker algorithm, the accuracy rate of the classifiers increases. It was observed that, after applying the feature selection techniques, the accuracy of the SMO, Multilayer Perceptron, KStar, Decision Table, and Random Forest classifiers is enhanced.
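A hedged sketch of this kind of comparison, using scikit-learn stand-ins for the WEKA components the abstract names: SelectKBest with ANOVA F and mutual information in place of the correlation- and information-based evaluators with Ranker, SVC for SMO, KNeighborsClassifier loosely for KStar, DecisionTreeClassifier loosely for Decision Table, and the Wisconsin breast cancer data as a convenient stand-in dataset; k=10 is an arbitrary choice for the demonstration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "SVM (SMO analogue)": SVC(),
    "Multilayer perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "kNN (KStar analogue)": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
}
selectors = {
    "no selection": None,
    "correlation-style filter (ANOVA F)": SelectKBest(f_classif, k=10),
    "information-based filter (mutual info)": SelectKBest(mutual_info_classif, k=10),
}

# fit every (selector, classifier) combination and report held-out accuracy
for sel_name, sel in selectors.items():
    for clf_name, clf in classifiers.items():
        steps = [StandardScaler()] + ([sel] if sel is not None else []) + [clf]
        acc = make_pipeline(*steps).fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{sel_name:40s} {clf_name:22s} accuracy = {acc:.3f}")
```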


Author(s): Leena Nesamani S. ◽ S. Nirmala Sigirtha Rajini

Predictive modeling, or predictive analysis, is the process of predicting outcomes from data using machine learning models. The quality of the output depends predominantly on the quality of the data provided to the model. The process of selecting the best inputs to a machine learning model depends on a variety of criteria and is referred to as feature engineering. This work classifies breast cancer patients into either the recurrence or the non-recurrence category. A categorical breast cancer dataset is used, from which the best set of features is selected to make accurate predictions. Two feature selection techniques, namely the chi-squared technique and the mutual information technique, are used. The selected features are then fed to a logistic regression model to make the final prediction. The mutual information technique proved to be more efficient and produced higher prediction accuracy.
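A minimal sketch of the described pipeline, assuming scikit-learn; the toy categorical recurrence data, the ordinal encoding, and k=2 are illustrative choices, not the authors' setup.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.default_rng(0)
# toy categorical features standing in for attributes such as age band,
# tumour size band, and node involvement (purely illustrative)
X_raw = rng.choice(["low", "mid", "high"], size=(200, 3))
y = (X_raw[:, 1] == "high").astype(int) ^ (rng.random(200) < 0.1)  # noisy target

X = OrdinalEncoder().fit_transform(X_raw)   # chi2 requires non-negative inputs

# compare the two filters, each followed by logistic regression
for name, score_fn in [("chi-squared", chi2), ("mutual information", mutual_info_classif)]:
    pipe = make_pipeline(SelectKBest(score_fn, k=2), LogisticRegression(max_iter=500))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:18s} -> mean CV accuracy = {acc:.3f}")
```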


2021 ◽ Vol 11 (15) ◽ pp. 6769
Author(s): Souad Larabi-Marie-Sainte

The curse-of-dimensionality problem occurs when the data are high-dimensional. It affects the learning process and reduces accuracy. Feature selection is one of the dimensionality reduction approaches that mainly contribute to solving the curse-of-dimensionality problem by selecting the relevant features. Irrelevant features are the dependent and redundant features that introduce noise into the data and thus reduce its quality. The main well-known feature selection methods are wrapper and filter techniques. However, wrapper feature selection techniques are computationally expensive, whereas filter feature selection methods suffer from multicollinearity. In this research study, four new feature selection methods based on outlier detection using the Projection Pursuit method are proposed. Outlier detection involves identifying abnormal data (irrelevant features of the transpose matrix obtained from the original dataset matrix). The concept of outlier detection using projection pursuit has proved its efficiency in many applications but has not yet been used as a feature selection approach; to the author's knowledge, this study is the first of its kind. Experimental results on nineteen real datasets using three classifiers (k-NN, SVM, and Random Forest) indicated that the suggested methods enhanced the classification accuracy rate by an average of 6.64% compared to classification without feature selection. They also outperformed the state-of-the-art methods on most of the datasets, with improvement rates ranging between 0.76% and 30.64%. Statistical analysis showed that the results of the proposed methods are statistically significant.
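As a rough illustration only (not the paper's four methods), the sketch below treats each feature as a point by transposing the data matrix, searches random projection directions for high kurtosis as a simple projection pursuit index, and flags features that look like outliers along the best direction. The kurtosis index, the random search, and the 2.5 robust-score cutoff are all assumptions made for this sketch.

```python
import numpy as np

def pp_outlier_features(X, n_directions=500, cutoff=2.5, seed=0):
    rng = np.random.default_rng(seed)
    F = X.T                                             # rows are now features
    F = (F - F.mean(axis=1, keepdims=True)) / (F.std(axis=1, keepdims=True) + 1e-9)
    best_proj, best_index = None, -np.inf
    for _ in range(n_directions):
        d = rng.normal(size=F.shape[1])
        d /= np.linalg.norm(d)
        proj = F @ d                                    # project all features onto d
        z = (proj - proj.mean()) / (proj.std() + 1e-9)
        kurt = np.mean(z ** 4) - 3.0                    # excess kurtosis as the PP index
        if kurt > best_index:
            best_index, best_proj = kurt, proj
    med = np.median(best_proj)
    mad = np.median(np.abs(best_proj - med)) + 1e-9
    scores = 0.6745 * np.abs(best_proj - med) / mad     # robust z-scores
    return np.where(scores > cutoff)[0]                 # indices of "outlier" features

# toy usage on a random matrix; which indices come back depends on the random
# projections and says nothing about any real dataset
X = np.random.default_rng(1).normal(size=(100, 10))
print("candidate irrelevant features:", pp_outlier_features(X))
```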

