Knockoff boosted tree for model-free variable selection

Author(s):  
Tao Jiang ◽  
Yuanyuan Li ◽  
Alison A. Motsinger-Reif

Abstract

Motivation: The recently proposed knockoff filter is a general framework for controlling the false discovery rate (FDR) when performing variable selection. This powerful approach generates a 'knockoff' of each variable tested: an imitation variable that mimics the correlation structure of the original variables and serves as a negative control for statistical inference, enabling exact FDR control. Current applications of knockoff methods use linear regression models and conduct variable selection only for variables that appear explicitly in the model function. Here, we extend the use of knockoffs to machine learning with boosted trees, which are successful and widely used in problems where no prior knowledge of the model function is available. However, the importance scores currently available for tree models are insufficient for variable selection with FDR control.

Results: We propose a novel strategy for conducting variable selection without prior knowledge of the model topology, using the knockoff method with boosted tree models; this extends the current knockoff method to model-free variable selection through tree-based models. Additionally, we propose and evaluate two new sampling methods for generating knockoffs, namely the sparse covariance and principal component knockoff methods, and compare them with the original knockoff method in terms of type I error control and power. In simulation tests, we compare the properties and performance of the importance test statistics of tree models across different combinations of knockoffs and importance test statistics, considering main-effect, interaction, exponential and second-order scenarios while assuming the true model structure is unknown. We apply our algorithm to tumor purity estimation and tumor classification using The Cancer Genome Atlas (TCGA) gene expression data; the results show improved discrimination between difficult-to-discriminate cancer types.

Availability and implementation: The proposed algorithm is included in the KOBT package, which is available at https://cran.r-project.org/web/packages/KOBT/index.html.

Supplementary information: Supplementary data are available at Bioinformatics online.
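To make the workflow concrete, here is a minimal Python sketch of the knockoff-filter idea with boosted trees: equicorrelated Gaussian knockoffs are sampled, a gradient-boosted model is fit on the augmented design, and the gap between original and knockoff importances is thresholded for FDR control. The scikit-learn estimator, the toy nonlinear response and all tuning values are illustrative assumptions; this is not the KOBT implementation or its sparse covariance/principal component samplers.

```python
# Illustrative sketch: Gaussian knockoffs + boosted-tree importances + knockoff+ filter.
# Assumed toy setup with scikit-learn; not the KOBT package.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n, p, q = 500, 20, 0.2                      # samples, variables, target FDR

# Toy data: 5 active variables in a nonlinear (interaction + exponential) model.
X = rng.standard_normal((n, p))
y = (X[:, 0] * X[:, 1] + np.exp(0.5 * X[:, 2]) + X[:, 3] - X[:, 4]
     + rng.standard_normal(n))

# Equicorrelated Gaussian knockoffs:
# X_ko | X ~ N(X - (X - mu) Sigma^{-1} S, 2S - S Sigma^{-1} S).
mu, Sigma = X.mean(0), np.cov(X, rowvar=False)
lam_min = np.linalg.eigvalsh(Sigma).min()
S = np.diag(np.full(p, min(1.0, 2.0 * lam_min) * 0.95))  # shrink for stability
Sinv = np.linalg.inv(Sigma)
cond_mean = X - (X - mu) @ Sinv @ S
V = 2.0 * S - S @ Sinv @ S + 1e-8 * np.eye(p)            # conditional covariance
X_ko = cond_mean + rng.standard_normal((n, p)) @ np.linalg.cholesky(V).T

# Boosted trees on [X, X_ko]; knockoff importances act as negative controls.
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
model.fit(np.hstack([X, X_ko]), y)
Z = model.feature_importances_
W = Z[:p] - Z[p:]                                        # antisymmetric statistic

# Knockoff+ threshold: smallest t with estimated FDP <= q.
ts = np.sort(np.abs(W[W != 0]))
T = next((t for t in ts
          if (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1) <= q), np.inf)
print("selected variables:", np.where(W >= T)[0])
```

Because each knockoff is exchangeable with its original under the null, a large positive W_j is evidence that variable j carries real signal, while the negative W_j calibrate the estimated false discovery proportion.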

Statistics ◽  
2018 ◽  
Vol 52 (6) ◽  
pp. 1212-1248
Author(s):  
Anchao Song ◽  
Tiefeng Ma ◽  
Shaogao Lv ◽  
Changsheng Lin

2018 ◽  
Vol 167 ◽  
pp. 366-377
Author(s):  
Ahmad Alothman ◽  
Yuexiao Dong ◽  
Andreas Artemiou

Author(s):  
Lexin Li ◽  
R. Dennis Cook ◽  
Christopher J. Nachtsheim

Bioinformatics ◽  
2020 ◽  
Vol 36 (10) ◽  
pp. 3099-3106
Author(s):  
Burim Ramosaj ◽  
Lubna Amro ◽  
Markus Pauly

Abstract

Motivation: Imputation procedures have become standard statistical practice in biomedical fields, since downstream analyses can then be conducted as if no values had ever been missing. In particular, non-parametric imputation schemes such as the random forest have shown favorable imputation performance compared with the more traditionally used MICE procedure. However, their effect on the validity of statistical inference has not been analyzed so far. This article closes this gap by investigating their validity for inferring mean differences in incompletely observed pairs, contrasting them with a recent approach that works only with the observations actually at hand.

Results: Our findings indicate that machine-learning schemes for (multiply) imputing missing values may inflate the type I error or yield comparably low power in small-to-moderate samples of matched pairs, even after modifying the test statistics using Rubin's multiple imputation rule. In addition to an extensive simulation study, an illustrative data example from a breast cancer gene study is considered.

Availability and implementation: The corresponding R code can be obtained from the authors, and the gene expression data can be downloaded at www.gdac.broadinstitute.org.

Supplementary information: Supplementary data are available at Bioinformatics online.
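As a concrete illustration of the pipeline being stress-tested, here is a small Python sketch, on assumed toy data, of missForest-style multiple imputation for matched pairs followed by Rubin's pooling rule. scikit-learn's IterativeImputer with random forests stands in for the imputation scheme; this mirrors the kind of procedure examined in the article but is not the authors' code or simulation design.

```python
# Hedged sketch: random-forest multiple imputation of matched pairs, pooled
# with Rubin's rules. Toy data; not the authors' simulation design.
import numpy as np
from scipy import stats
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n, M = 40, 10                                    # matched pairs, imputations

# Matched pairs (e.g., pre/post expression) with ~20% values missing at random.
pairs = rng.multivariate_normal([0.0, 0.3], [[1, 0.6], [0.6, 1]], size=n)
pairs[rng.random(pairs.shape) < 0.2] = np.nan

est, var = [], []
for m in range(M):
    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=100, random_state=m),
        max_iter=5, random_state=m)
    completed = imputer.fit_transform(pairs)
    d = completed[:, 1] - completed[:, 0]        # within-pair differences
    est.append(d.mean())
    var.append(d.var(ddof=1) / n)                # within-imputation variance

# Rubin's rules: pooled estimate, total variance, reference t distribution.
Qbar, Ubar = np.mean(est), np.mean(var)
B = np.var(est, ddof=1)                          # between-imputation variance
T = Ubar + (1 + 1 / M) * B
df = (M - 1) * (1 + Ubar / ((1 + 1 / M) * B)) ** 2
t_stat = Qbar / np.sqrt(T)
p_val = 2 * stats.t.sf(abs(t_stat), df)
print(f"pooled diff = {Qbar:.3f}, t = {t_stat:.2f}, p = {p_val:.3f}")
```

The article's point is precisely that the type I error of such pooled tests can deviate from the nominal level in small matched-pairs samples, even though the pooling step above follows Rubin's rule.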


Biometrics ◽  
2011 ◽  
Vol 68 (1) ◽  
pp. 12-22
Author(s):  
Wei Sun ◽  
Lexin Li

Entropy ◽  
2019 ◽  
Vol 21 (4) ◽  
pp. 403
Author(s):  
Changying Guo ◽  
Biqin Song ◽  
Yingjie Wang ◽  
Hong Chen ◽  
Huijuan Xiong

Model-free variable selection has attracted increasing interest recently due to its flexibility in algorithmic design and its outstanding performance in real-world applications. However, most existing statistical methods are formulated under the mean square error (MSE) criterion and are therefore susceptible to non-Gaussian noise and outliers. Because the MSE criterion requires the data to satisfy a Gaussian noise condition, it can hamper the effectiveness of model-free methods in complex settings. To circumvent this issue, we present a new model-free variable selection algorithm that integrates kernel modal regression with gradient-based variable identification. The derived modal regression estimator is closely related to information-theoretic learning under the maximum correntropy criterion, and it ensures algorithmic robustness to complex noise by learning the conditional mode instead of the conditional mean. The gradient information of the estimator offers a model-free metric for screening the key variables. In theory, we establish the foundations of the new model in terms of its generalization bound and variable selection consistency. In applications, the effectiveness of the proposed method is verified through data experiments.
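A minimal Python sketch of the two ingredients named above, under assumed choices (a Gaussian kernel, half-quadratic optimization, hand-picked bandwidths): a correntropy-weighted kernel regression estimator is fit, and variables are then screened by the average squared partial derivatives of the fitted function. It is illustrative only, not the authors' estimator or tuning.

```python
# Sketch: correntropy-based kernel regression + gradient-based variable screening.
# Gaussian kernel and half-quadratic iterations are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 10
X = rng.standard_normal((n, p))
# Only variables 0 and 1 matter; heavy-tailed (Student-t) noise, not Gaussian.
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.3 * rng.standard_t(df=2, size=n)

h, sigma, lam = 1.0, 1.0, 1e-2               # kernel width, correntropy scale, ridge

# Gaussian kernel matrix K[i, k] = exp(-||x_i - x_k||^2 / (2 h^2)).
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * h ** 2))

# Half-quadratic iterations: maximizing the correntropy of residuals amounts to
# iteratively reweighting by exp(-r^2 / (2 sigma^2)) and solving a weighted ridge.
alpha = np.zeros(n)
for _ in range(20):
    r = y - K @ alpha
    w = np.exp(-r ** 2 / (2 * sigma ** 2))   # correntropy-induced weights
    alpha = np.linalg.solve(w[:, None] * K + lam * np.eye(n), w * y)

# Gradient-based screening:
# df/dx_j (x_i) = sum_k alpha_k K[i, k] (X[k, j] - X[i, j]) / h^2.
grads = np.einsum('k,ik,ikj->ij', alpha, K,
                  X[None, :, :] - X[:, None, :]) / h ** 2
score = (grads ** 2).mean(axis=0)            # average squared partial derivative
print("variable importance ranking:", np.argsort(score)[::-1])
```

Downweighting large residuals is what makes the mode-based fit robust: outliers receive weights near zero instead of dominating a squared-error objective, and noise variables should end up with near-zero gradient scores.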

