feature stability
Recently Published Documents

TOTAL DOCUMENTS: 35 (five years: 13)
H-INDEX: 8 (five years: 2)

Author(s):  
Salvatore Gitto ◽  
Renato Cuocolo ◽  
Ilaria Emili ◽  
Laura Tofanelli ◽  
Vito Chianca ◽  
...  

Abstract
This study aims to investigate the influence of interobserver manual segmentation variability on the reproducibility of 2D and 3D unenhanced computed tomography (CT)- and magnetic resonance imaging (MRI)-based texture analysis. Thirty patients with cartilaginous bone tumors (10 enchondromas, 10 atypical cartilaginous tumors, 10 chondrosarcomas) were retrospectively included. Three radiologists independently performed manual contour-focused segmentation on unenhanced CT and T1-weighted and T2-weighted MRI by drawing both a 2D region of interest (ROI) on the slice showing the largest tumor area and a 3D ROI including the whole tumor volume. Additionally, a marginal erosion was applied to both 2D and 3D segmentations to evaluate the influence of segmentation margins. A total of 783 and 1132 features were extracted from original and filtered 2D and 3D images, respectively. Intraclass correlation coefficient ≥ 0.75 defined feature stability. In 2D vs. 3D contour-focused segmentation, the rates of stable features were 74.71% vs. 86.57% (p < 0.001), 77.14% vs. 80.04% (p = 0.142), and 95.66% vs. 94.97% (p = 0.554) for CT and T1-weighted and T2-weighted images, respectively. Margin shrinkage did not improve 2D (p = 0.343) and performed worse than 3D (p < 0.001) contour-focused segmentation in terms of feature stability. In 2D vs. 3D contour-focused segmentation, matching stable features derived from CT and MRI were 65.8% vs. 68.7% (p = 0.191), and those derived from T1-weighted and T2-weighted images were 76.0% vs. 78.2% (p = 0.285). 2D and 3D radiomic features of cartilaginous bone tumors extracted from unenhanced CT and MRI are reproducible, although some degree of interobserver segmentation variability highlights the need for reliability analysis in future studies.
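As a concrete illustration of the stability criterion above, the minimal Python sketch below computes a per-feature intraclass correlation coefficient across the three readers and reports the fraction of features meeting the ICC ≥ 0.75 threshold. This is not the study's code: the abstract does not state which ICC model was used, so the ICC(2,1) variant (two-way random effects, absolute agreement, single rater) and the random placeholder array are assumptions for illustration.

import numpy as np

def icc2_1(x):
    # ICC(2,1): two-way random effects, absolute agreement, single rater.
    # x: (n_subjects, n_raters) matrix of one feature's values.
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Placeholder data standing in for the study's feature tables:
# 30 tumors x 3 radiologists x 783 features (the 2D case).
rng = np.random.default_rng(0)
features = rng.normal(size=(30, 3, 783))

icc = np.array([icc2_1(features[:, :, j]) for j in range(features.shape[2])])
stable = icc >= 0.75  # the stability criterion used in the study
print(f"stable features (ICC >= 0.75): {stable.mean():.2%}")

Applying the same helper separately to the 2D and 3D feature tables for each modality would yield per-modality stability rates of the kind reported above.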


2021 ◽  
Vol 88 ◽  
pp. 272-277
Author(s):  
Avishek Chatterjee ◽  
Martin Vallières ◽  
Reza Forghani ◽  
Jan Seuntjens

Author(s):  
Christoph Haarburger ◽  
Justus Schock ◽  
Daniel Truhn ◽  
Philippe Weitz ◽  
Gustav Mueller-Franzes ◽  
...  

2020 ◽  
Vol 36 (10) ◽  
pp. 3093-3098 ◽  
Author(s):  
Saeid Parvandeh ◽  
Hung-Wen Yeh ◽  
Martin P Paulus ◽  
Brett A McKinney

Abstract
Summary: Feature selection can improve the accuracy of machine-learning models, but appropriate steps must be taken to avoid overfitting. Nested cross-validation (nCV) is a common approach that chooses the classification model and features to represent a given outer fold based on features that give the maximum inner-fold accuracy. Differential privacy is a related technique to avoid overfitting that uses a privacy-preserving noise mechanism to identify features that are stable between training and holdout sets. We develop consensus nested cross-validation (cnCV) that combines the idea of feature stability from differential privacy with nCV. Feature selection is applied in each inner fold and the consensus of top features across folds is used as a measure of feature stability or reliability instead of classification accuracy, which is used in standard nCV. We use simulated data with main effects, correlation and interactions to compare the classification accuracy and feature selection performance of the new cnCV with standard nCV, Elastic Net optimized by cross-validation, differential privacy and private evaporative cooling (pEC). We also compare these methods using real RNA-seq data from a study of major depressive disorder. The cnCV method has similar training and validation accuracy to nCV, but cnCV has much shorter run times because it does not construct classifiers in the inner folds. The cnCV method chooses a more parsimonious set of features with fewer false positives than nCV. The cnCV method has similar accuracy to pEC and cnCV selects stable features between folds without the need to specify a privacy threshold. We show that cnCV is an effective and efficient approach for combining feature selection with classification.
Availability and implementation: Code available at https://github.com/insilico/cncv.
Supplementary information: Supplementary data are available at Bioinformatics online.
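To make the cnCV procedure concrete, here is a minimal Python sketch of the loop the abstract describes: features are ranked in each inner fold, the consensus of the per-fold top lists replaces inner-fold classification accuracy, and a classifier is fit only once per outer fold. The univariate F-score ranker, fold counts, top-list size, random-forest model, and the majority-vote fallback are illustrative assumptions, not the authors' implementation (which is available at https://github.com/insilico/cncv).

import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif
from sklearn.model_selection import StratifiedKFold

def consensus_features(X, y, n_inner=5, top_m=50):
    # Rank features in each inner fold; keep those appearing in every
    # fold's top-m list. No classifiers are trained in the inner folds.
    inner = StratifiedKFold(n_splits=n_inner, shuffle=True, random_state=0)
    counts = Counter()
    for tr, _ in inner.split(X, y):
        scores, _ = f_classif(X[tr], y[tr])  # illustrative stand-in ranker
        counts.update(np.argsort(scores)[::-1][:top_m].tolist())
    feats = [f for f, c in counts.items() if c == n_inner]  # strict consensus
    if not feats:  # fallback to a majority of folds (an assumption, not from the paper)
        feats = [f for f, c in counts.items() if c > n_inner // 2]
    return sorted(feats)

# Synthetic stand-in for the paper's simulated data.
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=10, random_state=0)

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
accs = []
for tr, te in outer.split(X, y):
    feats = consensus_features(X[tr], y[tr])
    clf = RandomForestClassifier(random_state=0).fit(X[tr][:, feats], y[tr])
    accs.append(clf.score(X[te][:, feats], y[te]))
print(f"mean outer-fold accuracy: {np.mean(accs):.3f}")

Because ranking features is cheap compared with fitting classifiers, skipping inner-fold model training is what gives cnCV its shorter run times relative to standard nCV.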



