shrinkage prior
Recently Published Documents


TOTAL DOCUMENTS: 24 (FIVE YEARS: 15)

H-INDEX: 4 (FIVE YEARS: 1)

2022 ◽  
Author(s):  
Fangzheng Xie ◽  
Joshua Cape ◽  
Carey E. Priebe ◽  
Yanxun Xu

Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2865
Author(s):  
Jiayi Luo ◽  
Cindy Long Yu

Real-time nowcasting is the process of assessing current-quarter GDP from timely released economic and financial series, before the official figure is disseminated, in order to capture overall macroeconomic conditions in real time. In economic data nowcasting, dynamic factor models (DFMs) are widely used due to their ability to bridge information observed at different frequencies and to achieve dimension reduction. However, most research using DFMs assumes a fixed, known number of factors contributing to GDP nowcasting. In this paper, we propose a Bayesian approach with the horseshoe shrinkage prior to determine the number of factors that have nowcasting power for GDP and to accurately estimate model parameters and latent factors simultaneously. The horseshoe prior is a powerful shrinkage prior in that it can shrink unimportant signals to 0 while keeping important ones large and practically unshrunk. The validity of the method is demonstrated through simulation studies and an empirical study of nowcasting U.S. quarterly GDP growth rates using monthly data series from the U.S. market.
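The shrinkage behavior described in this abstract can be illustrated with a minimal simulation: draws from the horseshoe prior place far more mass near zero than a Gaussian of the same global scale, yet have much heavier tails, which is what lets large signals escape shrinkage. This sketch is a generic illustration of the horseshoe prior itself, not of the authors' DFM nowcasting model; the sample size and scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
tau = 1.0  # global scale (assumed fixed here for illustration)

# Horseshoe prior: beta_j | lambda_j ~ N(0, tau^2 * lambda_j^2),
# with local scales lambda_j ~ Half-Cauchy(0, 1).
lam = np.abs(rng.standard_cauchy(n))
beta = rng.normal(0.0, tau * lam)

# A Gaussian prior with the same global scale, for contrast.
gauss = rng.normal(0.0, tau, n)

# The horseshoe puts more mass near zero (strong shrinkage of noise)
# AND has far heavier tails (large signals stay large).
print("mass in |x| < 0.1:", np.mean(np.abs(beta) < 0.1), "vs",
      np.mean(np.abs(gauss) < 0.1))
print("99% quantile of |x|:", np.quantile(np.abs(beta), 0.99), "vs",
      np.quantile(np.abs(gauss), 0.99))
```

Both comparisons come out in the horseshoe's favor: it simultaneously concentrates near zero and spreads into the tails, which a single Gaussian cannot do.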


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Hana Šinkovec ◽  
Georg Heinze ◽  
Rok Blagus ◽  
Angelika Geroldinger

Abstract
Background: For finite samples with binary outcomes, penalized logistic regression such as ridge logistic regression has the potential to achieve smaller mean squared errors (MSE) of coefficients and predictions than maximum likelihood estimation. There is evidence, however, that ridge logistic regression can result in highly variable calibration slopes in small or sparse data situations.
Methods: In this paper, we elaborate on this issue by performing a comprehensive simulation study, investigating the performance of ridge logistic regression in terms of coefficients and predictions and comparing it to Firth’s correction, which has been shown to perform well in low-dimensional settings. In addition to tuned ridge regression, where the penalty strength is estimated from the data by minimizing some measure of the out-of-sample prediction error or an information criterion, we also considered ridge regression with a pre-specified degree of shrinkage. We included ‘oracle’ models in the simulation study, in which the complexity parameter was chosen based on the true event probabilities (prediction oracle) or regression coefficients (explanation oracle), to demonstrate the capability of ridge regression if the truth were known.
Results: The performance of ridge regression strongly depends on the choice of complexity parameter. As shown in our simulation and illustrated by a data example, values optimized in small or sparse datasets are negatively correlated with the optimal values and suffer from substantial variability, which translates into large MSE of coefficients and large variability of calibration slopes. In contrast, in our simulations, pre-specifying the degree of shrinkage prior to fitting led to accurate coefficients and predictions even in non-ideal settings such as those encountered with rare outcomes or sparse predictors.
Conclusions: Applying tuned ridge regression in small or sparse datasets is problematic, as it results in unstable coefficients and predictions. In contrast, determining the degree of shrinkage according to meaningful prior assumptions about the true effects has the potential to reduce bias and stabilize the estimates.
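The tuned-versus-pre-specified distinction in this abstract can be sketched with scikit-learn, where the ridge penalty enters logistic regression through C = 1/lambda: cross-validation chooses C from the data, while a pre-specified degree of shrinkage fixes it a priori. The dataset below is a small illustrative stand-in, not the paper's simulation design, and the specific values of n, p, C, and the true coefficients are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

rng = np.random.default_rng(1)

# Small binary dataset: two real effects, eight noise predictors.
n, p = 50, 10
X = rng.normal(size=(n, p))
true_beta = np.r_[1.5, -1.0, np.zeros(p - 2)]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

# Tuned ridge: penalty strength chosen by cross-validation on the data.
tuned = LogisticRegressionCV(Cs=20, cv=5, penalty="l2", max_iter=2000).fit(X, y)

# Pre-specified ridge: a fixed degree of shrinkage chosen before fitting.
fixed = LogisticRegression(C=0.5, penalty="l2", max_iter=2000).fit(X, y)

# Near-unpenalized fit as a maximum-likelihood reference point.
mle = LogisticRegression(C=1e6, penalty="l2", max_iter=2000).fit(X, y)

print("tuned    |beta|:", np.linalg.norm(tuned.coef_))
print("fixed    |beta|:", np.linalg.norm(fixed.coef_))
print("near-MLE |beta|:", np.linalg.norm(mle.coef_))
```

Rerunning with fresh draws of the same small dataset shows the instability the paper describes: the CV-chosen penalty, and hence the tuned coefficient norm, varies considerably across replications, while the pre-specified fit is stable by construction.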


2021 ◽  
Vol 72 (3) ◽  
pp. 170-176
Author(s):  
Olaf Zagólski ◽  
Paweł Stręk ◽  
Małgorzata Lisiecka ◽  
Przemyslaw Gorzedowski

2021 ◽  
pp. 83-102
Author(s):  
Krisada Lekdee ◽  
Chao Yang ◽  
Lily Ingsrisawang ◽  
Yisheng Li

Author(s):  
Yan Dora Zhang ◽  
Brian P. Naughton ◽  
Howard D. Bondell ◽  
Brian J. Reich

2020 ◽  
Vol 114 (3) ◽  
pp. e119
Author(s):  
Kana Murakami ◽  
Hiroya Kitasaka ◽  
Tomokuni Yoshimura ◽  
Noritaka Fukunaga ◽  
Yoshimasa Asada

2020 ◽  
Vol 12 (2) ◽  
Author(s):  
Sebastian Ankargren ◽  
Måns Unosson ◽  
Yukai Yang

Abstract We propose a Bayesian vector autoregressive (VAR) model for mixed-frequency data. Our model is based on the mean-adjusted parametrization of the VAR and allows for an explicit prior on the “steady states” (unconditional means) of the included variables. Based on recent developments in the literature, we discuss extensions of the model that improve the flexibility of the modeling approach. These extensions include a hierarchical shrinkage prior for the steady-state parameters, and the use of stochastic volatility to model heteroskedasticity. We put the proposed model to use in a forecast evaluation using US data consisting of ten monthly and three quarterly variables. The results show that the predictive ability typically benefits from using mixed-frequency data, and that improvement can be obtained for both monthly and quarterly variables. We also find that the steady-state prior generally enhances the accuracy of the forecasts, and that accounting for heteroskedasticity by means of stochastic volatility usually provides additional improvements, although not for all variables.
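The mean-adjusted parametrization at the heart of this abstract writes a VAR(1) as y_t - mu = Pi (y_{t-1} - mu) + eps_t, so that the steady state mu appears as an explicit parameter one can place a prior on. The sketch below shows the key conditional step: with Pi and Sigma treated as known (a simplifying assumption for illustration; the paper estimates everything jointly, with a hierarchical rather than fixed prior), the relation y_t - Pi y_{t-1} = (I - Pi) mu + eps_t is linear in mu, giving a conjugate Gaussian posterior. All numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate mean-adjusted VAR(1): y_t - mu = Pi (y_{t-1} - mu) + eps_t.
k, T = 2, 300
Pi = np.array([[0.5, 0.1],
               [0.0, 0.4]])
Sigma = 0.1 * np.eye(k)
mu_true = np.array([2.0, 1.0])

y = np.zeros((T, k))
y[0] = mu_true
for t in range(1, T):
    y[t] = mu_true + Pi @ (y[t - 1] - mu_true) \
           + rng.multivariate_normal(np.zeros(k), Sigma)

# Informative steady-state prior: mu ~ N(m0, V0).
m0 = np.array([1.8, 1.2])
V0 = 0.25 * np.eye(k)

# Rewrite the model as z_t = A mu + eps_t with A = I - Pi, z_t = y_t - Pi y_{t-1}.
A = np.eye(k) - Pi
z = y[1:] - y[:-1] @ Pi.T
Sinv = np.linalg.inv(Sigma)

# Standard Gaussian posterior for mu: combine prior and likelihood precisions.
post_prec = np.linalg.inv(V0) + (T - 1) * A.T @ Sinv @ A
post_mean = np.linalg.solve(post_prec,
                            np.linalg.inv(V0) @ m0 + A.T @ Sinv @ z.sum(axis=0))
print("posterior mean of steady state:", post_mean)
```

With 300 observations the likelihood dominates the prior, so the posterior mean lands close to the true steady state; with short samples the prior on mu pulls the estimate toward m0, which is the stabilizing effect the steady-state prior is designed to provide.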

