Feature Selection Based on Machine Learning in MRIs for Hippocampal Segmentation

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Sabina Tangaro ◽  
Nicola Amoroso ◽  
Massimo Brescia ◽  
Stefano Cavuoti ◽  
Andrea Chincarini ◽  
...  

Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic resonance imaging (MRI) scans can show these variations and can therefore be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus is a known biomarker for Alzheimer's disease and other neurological and psychiatric diseases; this, however, requires accurate, robust, and reproducible delineation of hippocampal structures. Fully automatic methods usually follow a voxel-based approach, in which a number of local features are calculated for each voxel. In this paper, we compared four different techniques for feature selection from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) sequential forward selection and (iii) sequential backward elimination; and (iv) an embedded method based on the random forest classifier. The methods were trained on a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects. The resulting segmentations were compared with manual reference labelling. Using only 23 features per voxel (sequential backward elimination), we obtained performance comparable to the state of the art as represented by the standard tool FreeSurfer.
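As a rough illustration of the filter step described above, the sketch below ranks per-voxel features with the two-sample Kolmogorov-Smirnov test; the data, sizes, and the cut-off of 23 features are placeholders, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n_voxels, n_features = 5000, 315                 # hypothetical sizes
X = rng.normal(size=(n_voxels, n_features))      # local features per voxel
y = rng.integers(0, 2, size=n_voxels)            # 1 = hippocampus voxel, 0 = background

# Kolmogorov-Smirnov statistic: how differently is each feature distributed
# inside versus outside the hippocampus?
ks_stats = np.array([ks_2samp(X[y == 1, j], X[y == 0, j]).statistic
                     for j in range(n_features)])

top_k = 23                                       # illustrative cut-off only
selected = np.argsort(ks_stats)[::-1][:top_k]
print("indices of the top-ranked features:", selected)
```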

2020 ◽  
Vol 8 (5) ◽  
pp. 1591-1596

Among the districts/cities of Central Java province from 2015 to 2017, only one district/city had no nutritional problem (the "good" category), in 2015; the rest had acute, chronic, or acute-chronic nutrition problems. Searching for the attributes that most influence toddler nutrition problems using data mining is expected to help health workers focus on solving problems based on the classification of each area, so that improvement of the community's nutritional status can be accelerated. The best parameters for the feature selection and data mining algorithms were searched with the Optimize Parameters (Grid) operator in RapidMiner. The feature selection models used were Backward Elimination, Forward Selection, and Optimize Selection, and the data mining algorithms used were Naive Bayes, Decision Tree, k-NN, and Neural Network. Combining the feature selection models with the data mining algorithms resulted in the 12 algorithm models used in this study. The best model on the test data, with the highest accuracy of 74.19%, was obtained from Backward Elimination combined with the Neural Network. Based on the model obtained, an attribute with little influence is the condition of the mother having died.
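The study's grid search was done with RapidMiner's Optimize Parameters (Grid) operator; the sketch below is a loose scikit-learn analogue that pairs a backward-elimination step with a neural network classifier and searches a small grid. The dataset and attribute names are placeholders, not the toddler-nutrition data.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))            # hypothetical district-level attributes
y = rng.integers(0, 2, size=200)          # 1 = nutrition problem, 0 = none

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300, random_state=0)
pipe = Pipeline([
    ("select", SequentialFeatureSelector(mlp, direction="backward",
                                         n_features_to_select=5, cv=2)),
    ("clf", mlp),
])
param_grid = {"select__n_features_to_select": [3, 5, 7]}   # grid to optimise
search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(X, y)
print("best accuracy: %.3f with %s" % (search.best_score_, search.best_params_))
```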


2007 ◽  
Vol 19 (7) ◽  
pp. 1939-1961 ◽  
Author(s):  
Shay Cohen ◽  
Gideon Dror ◽  
Eytan Ruppin

We present and study the contribution-selection algorithm (CSA), a novel algorithm for feature selection. The algorithm is based on the multi-perturbation Shapley analysis (MSA), a framework that relies on game theory to estimate the usefulness of features. The algorithm iteratively estimates the usefulness of features and selects them accordingly, using either forward selection or backward elimination. It can optimize various performance measures over unseen data, such as accuracy, balanced error rate, and area under the receiver operating characteristic curve. Empirical comparison with several other existing feature selection methods shows that the backward elimination variant of CSA leads to the most accurate classification results on an array of data sets.
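A toy Monte-Carlo approximation of the underlying idea is sketched below: each feature's contribution is estimated by averaging its marginal gain in held-out accuracy over random orderings of the features. This is only an illustration in the spirit of MSA, not the published CSA implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def subset_score(X_tr, X_te, y_tr, y_te, cols):
    """Held-out accuracy of a classifier trained on a subset of feature columns."""
    if not cols:
        return np.bincount(y_te).max() / len(y_te)        # majority-class baseline
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    return clf.score(X_te[:, cols], y_te)

X, y = make_classification(n_samples=300, n_features=15, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_feat, n_perturbations = X.shape[1], 20
contrib = np.zeros(n_feat)
for _ in range(n_perturbations):                          # random feature orderings
    cols, prev = [], subset_score(X_tr, X_te, y_tr, y_te, [])
    for f in rng.permutation(n_feat):
        cur = subset_score(X_tr, X_te, y_tr, y_te, cols + [f])
        contrib[f] += cur - prev                          # marginal gain of feature f
        cols.append(f)
        prev = cur
contrib /= n_perturbations
print("features ranked by estimated contribution:", np.argsort(contrib)[::-1])
```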


2017 ◽  
Vol 9 (2) ◽  
pp. 116-123 ◽  
Author(s):  
Ivo Colanus Rally Drajana

Many researchers have been motivated to improve prediction performance. The Support Vector Machine (SVM) is a method grounded in statistical learning theory that promises better results than other methods, and it also works well on high-dimensional data through kernel techniques. Determining the relevant variables is essential for a model to perform even more effectively. This study aims to develop a prediction model that combines the Support Vector Machine algorithm with feature selection, in particular forward selection, to predict payments for purchases of copra raw material. The proposed model is evaluated on time-series data of copra raw material purchase payments. The experimental results show that the combination of SVM and Forward Selection gives the best performance compared with SVM alone, SVM with Backward Elimination, and BPNN with Feature Selection.
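A minimal sketch of the winning combination, forward selection wrapped around an SVM, is shown below; the lagged series is synthetic and stands in for the copra purchase-payment data, and the kernel and lag choices are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
series = rng.normal(size=300).cumsum()        # stand-in for the payment series

# Build lagged features: predict the next value from the previous `lags` values.
lags = 6
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SequentialFeatureSelector(SVR(kernel="rbf"), direction="forward",
                                         n_features_to_select=3, cv=3)),
    ("svr", SVR(kernel="rbf")),
])
model.fit(X, y)
print("selected lags:", model.named_steps["select"].get_support(indices=True))
```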


Author(s):  
Arvind Kumar Tiwari

Feature selection is an important topic in data mining, especially for high-dimensional datasets. It is a process commonly used in machine learning, wherein a subset of the features available in the data is selected for application of a learning algorithm; the best subset contains the smallest number of dimensions that contribute most to accuracy. Feature selection methods can be decomposed into three main classes: filter methods, wrapper methods, and embedded methods. This chapter presents an empirical comparison of feature selection methods and their algorithms. In view of the substantial number of existing feature selection algorithms, criteria are needed to adequately decide which algorithm to use in a given situation. This chapter reviews several fundamental algorithms found in the literature and assesses their performance in a controlled scenario.
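The sketch below gives a small, self-contained example of the three classes the chapter compares, using one representative of each: a filter (ANOVA F-test), a wrapper (recursive feature elimination), and an embedded method (L1-penalised logistic regression). The dataset and the choice of representatives are illustrative, not the chapter's experimental setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif, RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
final_clf = LogisticRegression(max_iter=5000)

selectors = {
    "filter   (ANOVA F-test)": SelectKBest(f_classif, k=10),
    "wrapper  (RFE)": RFE(LogisticRegression(max_iter=5000), n_features_to_select=10),
    "embedded (L1 logistic)": SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
}
for name, selector in selectors.items():
    pipe = make_pipeline(StandardScaler(), selector, final_clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```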


Author(s):  
Shashwati Mishra ◽  
Mrutyunjaya Panda

Features play a very important role in the analysis and prediction of data, as they carry the most valuable information about the data, which may be in a structured or an unstructured format. The feature engineering process is used to extract features from these data, and the selection of features is one of its crucial steps. The feature selection process can adopt four different approaches, and on that basis it can be classified into four basic categories, namely the filter method, the wrapper method, the embedded method, and the hybrid method. This chapter discusses the different techniques coming under these four categories, along with the research work on feature selection.
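A minimal sketch of the hybrid approach mentioned above is shown below: a cheap filter first shrinks the candidate set, then a wrapper refines it with a model in the loop. The dataset, sizes, and the specific filter/wrapper pair are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import (SelectKBest, mutual_info_classif,
                                        SequentialFeatureSelector)
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)

hybrid = Pipeline([
    # cheap statistical pre-filter: keep the 20 highest mutual-information features
    ("filter", SelectKBest(mutual_info_classif, k=20)),
    # slower model-in-the-loop refinement on the surviving candidates
    ("wrapper", SequentialFeatureSelector(
        RandomForestClassifier(n_estimators=25, random_state=0),
        direction="forward", n_features_to_select=8, cv=3)),
])
hybrid.fit(X, y)
print("kept after filter: ", hybrid.named_steps["filter"].get_support().sum())
print("kept after wrapper:", hybrid.named_steps["wrapper"].get_support().sum())
```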



2020 ◽  
Vol 5 (2) ◽  
pp. 153
Author(s):  
Rizki Tri Prasetio

Computer-assisted medical diagnosis is a major machine learning problem being actively researched. General classifiers learn from the data itself through a training process, since an expert may lack the experience to determine the parameters. This research proposes a methodology based on the machine learning paradigm that integrates a search heuristic inspired by natural evolution, the genetic algorithm, with the simplest and most widely used learning algorithm, k-nearest neighbor. The genetic algorithm is used for feature selection and parameter optimization, while k-nearest neighbor is used as the classifier. The proposed method is evaluated on five benchmark medical datasets from the University of California Irvine Machine Learning Repository and compared with the original k-NN and other feature selection algorithms, i.e., forward selection, backward elimination, and greedy feature selection. Experimental results show that the proposed method achieves good performance with significant improvement (t-test p-value of 0.0011).
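The sketch below is a compact, from-scratch illustration of genetic-algorithm feature selection with a k-NN fitness function (binary chromosomes as feature masks, one-point crossover, bit-flip mutation); it covers only the feature-selection part, not the paper's parameter optimization, and the dataset and GA settings are assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_feat, pop_size, n_gen = X.shape[1], 20, 15

def fitness(mask):
    """Cross-validated k-NN accuracy on the features switched on in the mask."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat))            # random initial masks
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]  # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.05                     # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
print("cross-validated accuracy:", round(fitness(best), 3))
```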


2021 ◽  
Author(s):  
Jianfeng Wu ◽  
Qunxi Dong ◽  
Jie Zhang ◽  
Yi Su ◽  
Teresa Wu ◽  
...  

Amyloid-β (Aβ) plaques and tau protein tangles in the brain are now widely recognized as the defining hallmarks of Alzheimer's disease (AD), followed by structural atrophy detectable on brain magnetic resonance imaging (MRI) scans. The hippocampus is one of the regions particularly affected by neurodegeneration, and the influence of Aβ/tau on it has been a focus of research on AD pathophysiological progression. This work proposes a novel framework, the Federated Morphometry Feature Selection (FMFS) model, to examine subtle aspects of hippocampal morphometry that are associated with Aβ/tau burden in the brain, measured using positron emission tomography (PET). FMFS comprises hippocampal surface-based feature calculation, patch-based feature selection, federated group LASSO regression, federated screening-rule-based stability selection, and region-of-interest (ROI) identification. FMFS was tested on two ADNI cohorts to understand hippocampal alterations that relate to Aβ/tau depositions. Each cohort included paired MRI and PET scans for AD, mild cognitive impairment (MCI), and cognitively unimpaired (CU) subjects. Experimental results demonstrated that FMFS achieves an 89x speedup compared to other published state-of-the-art methods under five independent hypothetical institutions. In addition, the subiculum and cornu ammonis 1 (CA1 subfield) were identified as hippocampal subregions where atrophy is strongly associated with abnormal Aβ/tau. As potential biomarkers for Aβ/tau pathology, the features from the identified ROIs had greater power for predicting cognitive assessment and for survival analysis than five other imaging biomarkers. All the results indicate that FMFS is an efficient and effective tool for revealing associations between Aβ/tau burden and hippocampal morphometry.
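As a very rough illustration of the patch-level selection idea, the sketch below runs a plain (non-federated) group LASSO by proximal gradient descent, where features are grouped into hypothetical surface patches and whole patches are zeroed out together. The data, group sizes, and regularisation strength are synthetic assumptions, not the FMFS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, group_size = 200, 60, 10                      # 6 hypothetical surface patches
groups = [np.arange(g, g + group_size) for g in range(0, p, group_size)]

X = rng.normal(size=(n, p))                         # per-patch morphometry features
w_true = np.zeros(p)
w_true[groups[1]] = 1.0                             # only patch 1 drives the outcome
y = X @ w_true + 0.1 * rng.normal(size=n)           # stand-in for Aβ/tau burden

lam = 0.1                                           # group-sparsity strength
step = n / np.linalg.norm(X, 2) ** 2                # 1 / Lipschitz constant of the gradient
w = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n                    # gradient of the least-squares term
    z = w - step * grad
    for g in groups:                                # block soft-thresholding per patch
        norm = np.linalg.norm(z[g])
        shrink = max(0.0, 1 - step * lam * np.sqrt(len(g)) / norm) if norm > 0 else 0.0
        w[g] = shrink * z[g]

kept = [i for i, g in enumerate(groups) if np.linalg.norm(w[g]) > 1e-8]
print("patches selected as candidate ROIs:", kept)
```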


Author(s):  
Fatemeh Alighardashi ◽  
Mohammad Ali Zare Chahooki

Improving software product quality through periodic testing before release is one of the most expensive activities in software projects. Because resources for testing modules are limited, it is important to identify fault-prone modules and direct testing resources toward fault prediction in those modules. Software fault predictors based on machine learning algorithms are effective tools for identifying fault-prone modules, and extensive studies are being done in this field to find the connection between the features of software modules and their fault-proneness. Some features are ineffective in predictive algorithms and reduce the accuracy of the prediction process, so feature selection methods are widely used to increase the performance of prediction models for fault-prone modules. In this study, we propose a feature selection method for the effective selection of features by combining several filter feature selection methods; the combination is presented as a fused weighted filter method. The proposed method improves both the convergence rate of feature selection and the prediction accuracy. The results obtained on ten datasets from the NASA and PROMISE repositories indicate the effectiveness of the proposed method in improving the accuracy and convergence of software fault prediction.
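A minimal sketch of a fused weighted filter is shown below: several filter scores are computed, min-max normalised, and combined with weights before ranking. The score functions, weights, and dataset are assumptions, not the paper's exact fusion rule.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif, chi2
from sklearn.preprocessing import MinMaxScaler

# stand-in for software-module metrics with a fault/no-fault label
X, y = make_classification(n_samples=500, n_features=20, n_informative=6, random_state=0)
X_pos = MinMaxScaler().fit_transform(X)              # chi2 needs non-negative inputs

scores = np.vstack([
    f_classif(X, y)[0],                              # ANOVA F-score
    mutual_info_classif(X, y, random_state=0),       # mutual information
    chi2(X_pos, y)[0],                               # chi-squared statistic
])
# min-max normalise each filter's scores so they are comparable, then fuse
scores = (scores - scores.min(axis=1, keepdims=True)) / np.ptp(scores, axis=1, keepdims=True)
weights = np.array([0.4, 0.4, 0.2])                  # hypothetical fusion weights
fused = weights @ scores

top = np.argsort(fused)[::-1][:8]
print("top-ranked features after fusion:", top)
```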

