SelectBoost: a general algorithm to enhance the performance of variable selection methods

Author(s):  
Frédéric Bertrand ◽  
Ismaïl Aouadi ◽  
Nicolas Jung ◽  
Raphael Carapito ◽  
Laurent Vallat ◽  
...  

Abstract
Motivation: With the growth of big data, variable selection has become one of the critical challenges in statistics. Although many methods have been proposed in the literature, their performance in terms of recall (sensitivity) and precision (positive predictive value) is limited in settings where the number of variables by far exceeds the number of observations, or where predictors are highly correlated.
Results: In this article, we propose a general algorithm that improves the precision of any existing variable selection method. The algorithm is based on highly intensive simulations and takes into account the correlation structure of the data. It can either produce a confidence index for variable selection or be used from an experimental design planning perspective. We demonstrate the performance of our algorithm on both simulated and real data, and then apply it in two different ways to improve biological network reverse-engineering.
Availability and implementation: Code is available as the SelectBoost package on CRAN, https://cran.r-project.org/package=SelectBoost. Some network reverse-engineering functionalities are available in the Patterns CRAN package, https://cran.r-project.org/package=Patterns.
Supplementary information: Supplementary data are available at Bioinformatics online.
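The abstract describes a resampling scheme that attaches a confidence index to each selected variable. The SelectBoost package itself is written in R; as a language-neutral illustration only, the Python sketch below bootstraps a simple correlation-screening selector and reports per-variable selection frequencies as a crude confidence index. The selector, data, and thresholds are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def select_corr(X, y, k=2):
    """Base selector: keep the k predictors most correlated with y."""
    scores = np.abs(np.corrcoef(X.T, y)[:-1, -1])
    return set(np.argsort(scores)[-k:])

def selection_confidence(X, y, selector, n_boot=200, seed=0):
    """Resampling confidence index: the fraction of bootstrap resamples
    in which each variable is selected by the base selector."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # bootstrap the rows
        for j in selector(X[idx], y[idx]):
            counts[j] += 1
    return counts / n_boot

# Toy data: y depends on columns 0 and 1 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = 2 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=100)
conf = selection_confidence(X, y, select_corr)
```

Variables that are selected in nearly every resample earn a confidence index near 1; variables picked up only occasionally are flagged as unreliable.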

Author(s):  
Dhamodharavadhani S. ◽  
Rathipriya R.

A regression model (RM) is an important tool for modeling and analyzing data. It is one of the popular predictive modeling techniques, exploring the relationship between a dependent (target) variable and independent (predictor) variables. Variable selection is used to form a good and effective regression model. Many variable selection methods exist for regression models, such as filter methods, wrapper methods, embedded methods, forward selection, backward elimination, and stepwise methods. In this chapter, a computational intelligence-based variable selection method is discussed with respect to regression models in cybersecurity. Since these regression models depend on the set of predictor variables, variable selection methods are used to select the best subset of predictors from the entire set of variables. A genetic algorithm-based quick-reduct method is proposed to extract an optimal predictor subset from the given data to form an optimal regression model.
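Quick-reduct itself comes from rough set theory; the details are not given in this abstract. As a hedged illustration of the genetic-algorithm half of the idea, the Python sketch below evolves 0/1 inclusion masks over predictors against a plain AIC-like least-squares fitness (an assumption standing in for the chapter's rough-set criterion):

```python
import numpy as np

def fitness(X, y, mask):
    """AIC-like score for the predictor subset given by a boolean mask
    (lower is better); empty subsets are disallowed."""
    if not mask.any():
        return np.inf
    beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    resid = y - X[:, mask] @ beta
    n = len(y)
    return n * np.log(resid @ resid / n) + 2 * mask.sum()

def ga_select(X, y, pop_size=30, n_gen=40, seed=0):
    """Toy genetic algorithm over inclusion masks: keep the better half,
    recombine it by one-point crossover, and mutate."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    pop = rng.integers(0, 2, (pop_size, p)).astype(bool)
    for _ in range(n_gen):
        scores = np.array([fitness(X, y, m) for m in pop])
        elite = pop[np.argsort(scores)][: pop_size // 2]
        kids = elite.copy()
        for i in range(0, len(kids) - 1, 2):        # one-point crossover
            c = rng.integers(1, p)
            kids[i, c:], kids[i + 1, c:] = elite[i + 1, c:], elite[i, c:]
        kids ^= rng.random(kids.shape) < 0.05       # bit-flip mutation
        pop = np.vstack([elite, kids])
    scores = np.array([fitness(X, y, m) for m in pop])
    return pop[np.argmin(scores)]

# Toy data: only columns 0 and 2 carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=80)
best = ga_select(X, y)
```

The size penalty in the fitness is what steers the search toward a small optimal subset rather than the full predictor set.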


2013 ◽  
Vol 444-445 ◽  
pp. 604-609
Author(s):  
Guang Hui Fu ◽  
Pan Wang

LASSO is a very useful variable selection method for high-dimensional data, but it possesses neither the oracle property [Fan and Li, 2001] nor the group effect [Zou and Hastie, 2005]. In this paper, we first review four improved LASSO-type methods that satisfy the oracle property and/or the group effect, and then propose two new ones, called WFEN and WFAEN. Their performance on both simulated and real data sets shows that WFEN and WFAEN are competitive with other LASSO-type methods.
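The group effect of Zou and Hastie's elastic net, which penalties of this family build on, shows up even in a minimal coordinate-descent sketch (illustrative Python, not the paper's implementation): with two identical predictors, the LASSO puts all the weight on one copy, while the elastic net splits it evenly across the group.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def enet_cd(X, y, lam, alpha, n_iter=500):
    """Coordinate descent for the naive elastic net objective
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||^2);
    alpha = 1 recovers the LASSO."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]             # partial residual
            rho = X[:, j] @ r / n
            denom = X[:, j] @ X[:, j] / n + lam * (1 - alpha)
            b[j] = soft(rho, lam * alpha) / denom
    return b

# Two identical predictors plus one pure-noise column.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([x, x, rng.normal(size=200)])
y = 3 * x + 0.1 * rng.normal(size=200)
b_lasso = enet_cd(X, y, lam=0.1, alpha=1.0)   # all weight on one copy
b_enet = enet_cd(X, y, lam=0.1, alpha=0.5)    # weight shared equally
```

The ridge component of the penalty is what forces the two duplicated coefficients toward each other; with `alpha=1` that pull vanishes and one copy is dropped arbitrarily.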





2021 ◽  
Vol 12 ◽  
Author(s):  
Xi Lu ◽  
Kun Fan ◽  
Jie Ren ◽  
Cen Wu

In high-throughput genetics studies, an important aim is to identify gene–environment (G×E) interactions associated with clinical outcomes. Recently, multiple marginal penalization methods have been developed and shown to be effective in G×E studies. Within the Bayesian framework, however, marginal variable selection has not received much attention. In this study, we propose a novel marginal Bayesian variable selection method for G×E studies. In particular, our marginal Bayesian method is robust to data contamination and outliers in the outcome variables. With the incorporation of spike-and-slab priors, we implement a Gibbs sampler based on Markov chain Monte Carlo (MCMC). The proposed method outperforms a number of alternatives in extensive simulation studies. The utility of the marginal robust Bayesian variable selection method is further demonstrated in case studies using data from the Nurses' Health Study (NHS). Some of the main and interaction effects identified in the real-data analysis have important biological implications.
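The mechanics of spike-and-slab variable selection via Gibbs sampling can be sketched in a few lines. The Python below is a standard, non-robust sampler for plain linear regression with the noise variance fixed at 1 (a simplifying assumption); it is not the authors' robust marginal method, but it shows how posterior inclusion probabilities arise from the spike-and-slab prior:

```python
import numpy as np

def spike_slab_gibbs(X, y, tau2=10.0, q=0.2, n_iter=1200, burn=200, seed=0):
    """Minimal spike-and-slab Gibbs sampler for linear regression with
    known noise variance sigma^2 = 1. Prior: beta_j = 0 with prob 1-q,
    else beta_j ~ N(0, tau2). Returns posterior inclusion probabilities."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    incl = np.zeros(p)
    for it in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]    # partial residual
            v = 1.0 / (X[:, j] @ X[:, j] + 1.0 / tau2)
            mu = v * (X[:, j] @ r)
            # log posterior odds of slab vs point-mass spike
            log_odds = np.log(q / (1 - q)) + 0.5 * np.log(v / tau2) \
                       + 0.5 * mu * mu / v
            if rng.random() < 1.0 / (1.0 + np.exp(-log_odds)):
                beta[j] = rng.normal(mu, np.sqrt(v))  # draw from slab
            else:
                beta[j] = 0.0                         # spike at zero
        if it >= burn:
            incl += beta != 0
    return incl / (n_iter - burn)

# Toy data: only columns 0 and 2 carry signal.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = 2 * X[:, 0] - 2 * X[:, 2] + rng.normal(size=100)
pip = spike_slab_gibbs(X, y)
```

Variables with posterior inclusion probability near 1 are declared selected; a robust variant would replace the Gaussian likelihood with a heavy-tailed one.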


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, eight variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models.
Materials & Methods: Three data sets, comprising 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. For each data set, a CoMFA model was first built with all CoMFA descriptors; a new CoMFA model was then developed with each variable selection method, giving nine CoMFA models per data set. The results show that noisy and uninformative variables degrade CoMFA results. Based on the created models, applying five of the variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, significantly increases the predictive power and stability of CoMFA models.
Results & Conclusion: Among them, SPA-jackknife removes the most variables while FFD retains the most. FFD and IVE-PLS are time-consuming, whereas SRD-FFD and SRD-UVE-PLS run in a few seconds. In addition, applying FFD, SRD-FFD, IVE-PLS or SRD-UVE-PLS preserves CoMFA contour map information for both fields.
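The SPA-jackknife procedure itself is QSAR-specific and not detailed in this abstract. Purely as a generic illustration of jackknife-based stability assessment, the Python sketch below refits an ordinary least-squares model with each observation left out in turn and flags a variable as stable when its coefficient keeps the same sign in every replicate (the stability criterion is an assumption, not the paper's):

```python
import numpy as np

def jackknife_sign_stability(X, y):
    """Leave-one-out least-squares refits: a variable is flagged stable
    when its coefficient keeps the same sign in every replicate."""
    n, p = X.shape
    signs = np.empty((n, p))
    for i in range(n):
        keep = np.arange(n) != i            # drop observation i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        signs[i] = np.sign(beta)
    return np.abs(signs.sum(axis=0)) == n   # same sign in all n fits

# Toy data: only column 0 carries signal.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
y = 3 * X[:, 0] + rng.normal(size=60)
stable = jackknife_sign_stability(X, y)
```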


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Zhengguo Gu ◽  
Niek C. de Schipper ◽  
Katrijn Van Deun

Abstract
Interdisciplinary research often involves analyzing data obtained from different data sources with respect to the same subjects, objects, or experimental units. For example, global positioning system (GPS) data have been coupled with travel diary data, resulting in a better understanding of traveling behavior. The GPS data and the travel diary data are very different in nature, and, to analyze the two types of data jointly, one often uses data integration techniques, such as the regularized simultaneous component analysis (regularized SCA) method. Regularized SCA is an extension of the (sparse) principal component analysis model to cases where at least two data blocks are jointly analyzed; in order to reveal the joint and unique sources of variation, it relies heavily on proper selection of the set of variables (i.e., component loadings) in the components. Regularized SCA therefore requires a proper variable selection method, either to identify the optimal values of the tuning parameters or to stably select variables. By means of two simulation studies with various noise and sparseness levels in the simulated data, we compare six variable selection methods: cross-validation (CV) with the "one-standard-error" rule, repeated double CV (rdCV), BIC, Bolasso with CV, stability selection, and the index of sparseness (IS), a lesser known but computationally efficient method. Results show that IS is the best-performing variable selection method.
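The "one-standard-error" rule compared above has a simple mechanical form: among all tuning-parameter values whose mean CV error lies within one standard error of the minimum, pick the one giving the sparsest model. A small Python sketch with illustrative fold errors (not the paper's data):

```python
import numpy as np

def one_se_rule(cv_errors):
    """cv_errors: (n_models, n_folds) array of fold-wise CV errors, rows
    ordered from least to most heavily penalized (sparsest model last).
    Returns the row index of the sparsest model whose mean error is
    within one standard error of the minimum."""
    mean = cv_errors.mean(axis=1)
    se = cv_errors.std(axis=1, ddof=1) / np.sqrt(cv_errors.shape[1])
    best = mean.argmin()
    admissible = np.where(mean <= mean[best] + se[best])[0]
    return admissible.max()

# Illustrative CV errors: 5 candidate models x 4 folds.
cv = np.array([
    [1.00, 1.10, 0.90, 1.00],
    [0.80, 0.90, 0.70, 0.80],
    [0.70, 0.80, 0.60, 0.70],   # minimum mean error (0.70)
    [0.72, 0.75, 0.70, 0.71],   # within one SE of the minimum
    [0.90, 1.00, 0.80, 0.90],
])
chosen = one_se_rule(cv)        # picks index 3, not the raw minimum 2
```

The rule trades a statistically negligible increase in CV error for a sparser, more stable model, which is exactly the trade-off the simulation studies above evaluate.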

