shrinkage priors
Recently Published Documents


TOTAL DOCUMENTS

68
(FIVE YEARS 33)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Kazuhiro Yamaguchi ◽  
Jihong Zhang

This study proposed efficient Gibbs sampling algorithms for variable selection in a latent regression model under a unidimensional two-parameter logistic item response theory model. Three types of shrinkage priors were employed to obtain shrinkage estimates: double-exponential (i.e., Laplace), horseshoe, and horseshoe+ priors. These shrinkage priors were compared to a uniform prior in both simulation and real data analysis. The simulation study revealed that the two horseshoe-type priors had smaller root mean square errors and shorter 95% credible interval lengths than the double-exponential or uniform priors. In addition, the horseshoe+ prior was slightly more stable than the horseshoe prior. The real data example demonstrated the utility of the horseshoe and horseshoe+ priors in selecting effective predictive covariates for math achievement. In the final section, we discuss the benefits and limitations of the three Bayesian variable selection methods.
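The horseshoe prior discussed above owes its behavior to the shrinkage weight it induces on each coefficient. A minimal sketch (not the authors' Gibbs sampler): with a half-Cauchy local scale lambda and the global scale fixed at 1 for illustration, the weight kappa = 1/(1 + lambda^2) follows a Beta(1/2, 1/2) distribution, piling mass near 0 (signals left unshrunk) and near 1 (noise shrunk to zero), which gives the prior its "horseshoe" name.

```python
import math
import random

def sample_horseshoe_kappa(n, seed=0):
    """Sample shrinkage weights kappa = 1 / (1 + lambda^2), where
    lambda ~ half-Cauchy(0, 1) as in the horseshoe prior (global
    scale fixed at 1 for illustration)."""
    rng = random.Random(seed)
    kappas = []
    for _ in range(n):
        # Half-Cauchy draw via inverse CDF: lambda = tan(pi * u / 2)
        lam = math.tan(math.pi * rng.random() / 2)
        kappas.append(1.0 / (1.0 + lam * lam))
    return kappas

kappas = sample_horseshoe_kappa(200_000)
mean_kappa = sum(kappas) / len(kappas)
# kappa is Beta(1/2, 1/2): mean 0.5, U-shaped with mass near 0 and 1
extreme_frac = sum(1 for k in kappas if k < 0.1 or k > 0.9) / len(kappas)
print(round(mean_kappa, 3), round(extreme_frac, 3))
```

Under a normal-means model the posterior mean of a coefficient with estimate y is roughly (1 - kappa) * y, so the U-shape translates into leaving large effects nearly untouched while shrinking small ones aggressively.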


2021 ◽  
pp. 161-178
Author(s):  
Anirban Bhattacharya ◽  
James Johndrow

2021 ◽  
pp. 179-198
Author(s):  
Yan Dora Zhang ◽  
Weichang Yu ◽  
Howard D. Bondell

Author(s):  
Kaito Shimamura ◽  
Shuichi Kawano

Abstract: Sparse convex clustering groups observations and performs variable selection simultaneously within the convex clustering framework. Although a weighted L1 norm is usually employed as the regularization term in sparse convex clustering, its use increases dependence on the data and reduces estimation accuracy when the sample size is insufficient. To address these problems, this paper proposes a Bayesian sparse convex clustering method based on the ideas of the Bayesian lasso and global-local shrinkage priors. We introduce Gibbs sampling algorithms for our method using scale mixtures of normal distributions. The effectiveness of the proposed methods is shown in simulation studies and a real data analysis.
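The scale-mixture-of-normals representation mentioned above is the standard Bayesian-lasso device (Park and Casella): a normal with an Exponential(lambda^2 / 2) mixing distribution on its variance marginalizes to a Laplace density with rate lambda. A hedged numerical check of that identity, using only the standard library:

```python
import math

def laplace_via_scale_mixture(beta, lam, t_max=60.0, steps=200_000):
    """Marginal density of beta under beta | t ~ N(0, t),
    t ~ Exponential(rate = lam^2 / 2), by Riemann-sum integration.
    Should match the Laplace density (lam / 2) * exp(-lam * |beta|)."""
    dt = t_max / steps
    total = 0.0
    for i in range(1, steps + 1):  # skip t = 0 (integrand -> 0 for beta != 0)
        t = i * dt
        normal = math.exp(-beta * beta / (2 * t)) / math.sqrt(2 * math.pi * t)
        exp_mix = (lam * lam / 2) * math.exp(-(lam * lam / 2) * t)
        total += normal * exp_mix * dt
    return total

lam, beta = 1.0, 1.0
mixture = laplace_via_scale_mixture(beta, lam)
laplace = (lam / 2) * math.exp(-lam * abs(beta))
print(round(mixture, 4), round(laplace, 4))  # both ≈ 0.1839
```

Conditioning on the latent variance t makes the coefficient conditionally Gaussian, which is what yields closed-form conjugate updates inside the Gibbs sampler.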


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Zihang Lu ◽  
Wendy Lou

Abstract: In many clinical studies, researchers seek parsimonious models that simultaneously achieve consistent variable selection and optimal prediction. The resulting parsimonious models facilitate meaningful biological interpretation and scientific findings. Variable selection via Bayesian inference has advanced significantly in recent years. Despite its increasing popularity, there is limited practical guidance for implementing these Bayesian approaches and evaluating their comparative performance on clinical datasets. In this paper, we review several commonly used Bayesian approaches to variable selection, with emphasis on application and implementation through the R software. These approaches can be roughly grouped into four classes: Bayesian model selection, spike-and-slab priors, shrinkage priors, and hybrids of these. To evaluate their variable selection performance under various scenarios, we compare the four classes of approaches using real and simulated datasets. The results provide practical guidance to researchers interested in applying Bayesian approaches for variable selection.
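Of the four classes the review names, the spike-and-slab prior is the most direct to illustrate. A minimal sketch for a single coefficient (a textbook simplification, not the review's implementation): with a point-mass spike at zero and a normal slab, the posterior inclusion probability follows from Bayes' rule applied to the two marginal likelihoods of the estimate.

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def inclusion_probability(beta_hat, se, slab_var=1.0, prior_incl=0.5):
    """Posterior P(gamma = 1 | beta_hat) for one coefficient under
    spike-and-slab: spike = point mass at 0, slab = N(0, slab_var).
    Since beta_hat | beta ~ N(beta, se^2), the marginal is
    N(0, se^2) under the spike and N(0, se^2 + slab_var) under the slab."""
    m_spike = normal_pdf(beta_hat, se * se)
    m_slab = normal_pdf(beta_hat, se * se + slab_var)
    num = prior_incl * m_slab
    return num / (num + (1 - prior_incl) * m_spike)

# An estimate 4 standard errors from zero is almost surely "in";
# an estimate at zero leans toward exclusion.
print(round(inclusion_probability(4.0, 1.0), 3))  # -> 0.975
print(round(inclusion_probability(0.0, 1.0), 3))  # -> 0.414
```

In a full multivariate model the same calculation appears inside a Gibbs sweep, updating one inclusion indicator at a time given the others.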


2021 ◽  
Author(s):  
Arinjita Bhattacharyya ◽  
Subhadip Pal ◽  
Riten Mitra ◽  
Shesh Rai

Abstract: Background: Prediction and classification algorithms are commonly used in clinical research to identify patients susceptible to clinical conditions such as diabetes, colon cancer, and Alzheimer's disease. Developing accurate prediction and classification methods has implications for personalized medicine. Building an excellent predictive model involves selecting the features most significantly associated with the response at hand. These features can include biological and demographic characteristics, such as genomic biomarkers and health history. Such variable selection becomes challenging when the number of potential predictors is large. Bayesian shrinkage models have emerged as popular and flexible methods of variable selection in regression settings. This article discusses variable selection with three shrinkage priors and illustrates its application to clinical data sets, namely the Pima Indians Diabetes, colon cancer, ADNI, and OASIS Alzheimer's data sets. Methods: We present a unified Bayesian hierarchical framework that implements and compares shrinkage priors in binary and multinomial logistic regression models. The key feature is the representation of the likelihood by a Polya-Gamma data augmentation, which admits a natural integration with a family of shrinkage priors. We specifically focus on the Horseshoe, Dirichlet-Laplace, and Double Pareto priors. Extensive simulation studies are conducted to assess performance under different data dimensions and parameter settings. Accuracy, AUC, Brier score, L1 error, cross-entropy, and ROC surface plots are used as evaluation criteria in comparing the priors to frequentist methods such as Lasso, Elastic-Net, and Ridge regression. Results: All three priors can be used for robust prediction, with strong performance metrics irrespective of the categorical response model chosen. The simulation study achieved a mean prediction accuracy of 91% (95% CI: 90.7, 91.2) for logistic regression and 74% (95% CI: 73.8, 74.1) for multinomial logistic models. The models can identify significant variables for disease risk prediction and are computationally efficient. Conclusions: The models are robust enough to conduct both variable selection and future prediction because of their high shrinkage property and applicability to a broad range of classification problems.
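The Polya-Gamma augmentation referenced above makes the logistic likelihood conditionally Gaussian, and its conditional updates use the mean of a PG(1, psi) variable, E[omega] = tanh(psi / 2) / (2 * psi). A hedged numerical check, assuming the standard infinite-sum construction of Polson, Scott, and Windle (2013), omega = (1 / (2 * pi^2)) * sum_k g_k / ((k - 1/2)^2 + psi^2 / (4 * pi^2)) with g_k ~ Gamma(1, 1):

```python
import math

def pg_mean_series(psi, terms=200_000):
    """Mean of omega ~ PG(1, psi) via a truncation of its infinite-sum
    representation; the Gamma(1, 1) weights g_k each have mean 1."""
    shift = (psi / (2 * math.pi)) ** 2
    s = sum(1.0 / ((k - 0.5) ** 2 + shift) for k in range(1, terms + 1))
    return s / (2 * math.pi ** 2)

def pg_mean_closed_form(psi):
    """Closed form used inside Polya-Gamma Gibbs and EM updates."""
    return math.tanh(psi / 2) / (2 * psi)

psi = 1.5
print(round(pg_mean_series(psi), 4), round(pg_mean_closed_form(psi), 4))
```

In the Gibbs sampler this quantity plays the role of an observation-specific precision, so each sweep reduces the logistic regression to a weighted Gaussian regression on which conjugate shrinkage-prior updates apply directly.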

