A stepwise regression method and consistent model selection for high-dimensional sparse linear models

2011 ◽ Vol 21 (4) ◽ Author(s): Ching-Kang Ing ◽ Tze Leung Lai

2021 ◽ pp. 1471082X2110347 ◽ Author(s): Panagiota Tsamtsakiri ◽ Dimitris Karlis

There is increasing interest in models for discrete-valued time series. Among them, the integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) model has found several applications. In the present article, we study the problem of model selection for this family of models. Namely, we consider that an observation, conditional on the past, follows a Poisson distribution whose mean depends on past values of the mean and on past observations. We consider both linear and log-linear models. Our purpose is to select the most appropriate order of such models, using a trans-dimensional Bayesian approach that allows jumps between competing models. A small simulation experiment supports the use of the method. We apply the methodology to real datasets to illustrate the potential of the approach.
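For intuition, the following is a minimal sketch of the data-generating process the abstract describes, assuming the linear Poisson INGARCH(1,1) specification lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1}; the parameter values and function name are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_ingarch(n, omega=1.0, alpha=0.3, beta=0.5, seed=0):
    """Simulate a linear Poisson INGARCH(1,1) series:
    y_t | past ~ Poisson(lam_t), with lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1}."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n, dtype=int)
    lam = np.zeros(n)
    lam[0] = omega / (1.0 - alpha - beta)  # start at the stationary mean (requires alpha + beta < 1)
    y[0] = rng.poisson(lam[0])
    for t in range(1, n):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
        y[t] = rng.poisson(lam[t])
    return y, lam

y, lam = simulate_ingarch(500)
print(y[:10], lam.mean())
```

Order selection in the paper's setting would then amount to choosing how many lagged observations and lagged means enter lam_t, with the trans-dimensional sampler proposing jumps between these competing orders.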


Biometrika ◽ 2021 ◽ Author(s): Emre Demirkaya ◽ Yang Feng ◽ Pallavi Basu ◽ Jinchi Lv

Summary: Model selection is crucial both to high-dimensional learning and to inference for contemporary big data applications in pinpointing the best set of covariates among a sequence of candidate interpretable models. Most existing work assumes implicitly that the models are correctly specified or have fixed dimensionality, yet both model misspecification and high dimensionality are prevalent in practice. In this paper, we exploit the framework of model selection principles under misspecified generalized linear models presented in Lv and Liu (2014) and investigate the asymptotic expansion of the posterior model probability in the setting of high-dimensional misspecified models. With a natural choice of prior probabilities that encourages interpretability and incorporates the Kullback–Leibler divergence, we suggest the high-dimensional generalized Bayesian information criterion with prior probability for large-scale model selection with misspecification. Our new information criterion characterizes the impacts of both model misspecification and high dimensionality on model selection. We further establish the consistency of covariance contrast matrix estimation and the model selection consistency of the new information criterion in ultra-high dimensions under some mild regularity conditions. Numerical studies demonstrate that our new method enjoys improved model selection consistency compared to its main competitors.
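For illustration, the sketch below ranks small candidate covariate sets in a Gaussian linear model with an extended-BIC-style criterion. The penalty n*log(RSS/n) + d*log(n) + 2*gamma*d*log(p) is a simplified stand-in for the kind of dimension-aware criterion discussed above; it is not the authors' HGBIC_p, which additionally corrects for misspecification through a covariance contrast term. All names and parameter values are hypothetical.

```python
import numpy as np
from itertools import combinations

def ebic(y, X, support, gamma=1.0):
    """Extended-BIC-style score for a Gaussian linear model restricted to `support`:
    n*log(RSS/n) + d*log(n) + 2*gamma*d*log(p).
    Stand-in penalty for illustration, not the HGBIC_p formula from the paper."""
    n, p = X.shape
    d = len(support)
    Xs = X[:, support] if d > 0 else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + d * np.log(n) + 2.0 * gamma * d * np.log(p)

# Toy data: n=200 observations, p=10 candidate covariates, 2 truly active.
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.standard_normal(n)

# Exhaustive search over all models of size 1 to 3 (feasible only for tiny p).
candidates = [list(s) for k in range(1, 4) for s in combinations(range(p), k)]
best = min(candidates, key=lambda s: ebic(y, X, s))
print("selected covariates:", best)
```

In a genuinely high-dimensional setting one would not enumerate subsets but score models along a pre-screened candidate path (e.g., from a regularization or stepwise procedure), which is where a criterion of this type is typically applied.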

