frequentist model averaging
Recently Published Documents


TOTAL DOCUMENTS: 26 (five years: 10)
H-INDEX: 5 (five years: 1)

2023
Author(s): Jun Liao, Alan Wan, Shuyuan He, Guohua Zou

2021
Author(s): Katerina Kroupova, Tomas Havranek, Zuzana Irsova

Educational outcomes have many determinants, but one that most young people can readily control is choosing whether to work while in school. Sixty-nine studies have estimated the effect, but results vary from large negative to positive estimates. We show that the results are systematically driven by context, publication bias, and treatment of endogeneity. Studies ignoring endogeneity suffer from an upward bias, which is almost fully compensated by publication selection in favor of negative estimates. Net of the biases, the literature suggests a negative but economically inconsequential mean effect. The effect is more negative for high-intensity employment and educational outcomes measured as decisions to drop out, but it is positive in Germany. To derive these results, we collect 861 previously reported estimates together with 32 variables reflecting estimation context, use recently developed nonlinear techniques to correct for publication bias, and employ Bayesian and frequentist model averaging to assign a pattern to the heterogeneity in the literature.
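
As a rough illustration of the frequentist half of that model-averaging step, the sketch below combines two candidate meta-regression specifications with smoothed-AIC weights, one common frequentist weighting scheme. The data, the single "high-intensity" moderator, and both specifications are invented for the example; the study itself uses 32 moderator variables and its own estimators.

```python
# Minimal sketch of frequentist model averaging over meta-regression
# specifications using smoothed-AIC weights. All data below are simulated
# for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
se = rng.uniform(0.05, 0.30, n)                  # hypothetical reported standard errors
high_intensity = rng.integers(0, 2, n)           # hypothetical study-level moderator
effect = -0.05 - 0.10 * high_intensity + rng.normal(0.0, se)

# Two candidate specifications: moderator only, and moderator plus standard error.
X_small = sm.add_constant(high_intensity.astype(float))
X_full = sm.add_constant(np.column_stack([high_intensity, se]).astype(float))
candidates = [sm.OLS(effect, X_small).fit(), sm.OLS(effect, X_full).fit()]

aic = np.array([m.aic for m in candidates])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()                                     # smoothed-AIC model weights

coef = np.array([m.params[1] for m in candidates])  # high-intensity slope in each model
print("weights:", np.round(w, 3))
print("model-averaged high-intensity coefficient:", np.dot(w, coef))
```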


2021
Author(s): Tomas Havranek, Roman Horvath, Ali Elminejad

The intertemporal substitution (Frisch) elasticity of labor supply governs the predictions of real business cycle models and models of taxation. We show that, for the extensive margin elasticity, two biases conspire to systematically produce large positive estimates when the elasticity is in fact zero. Among 723 estimates in 36 studies, the mean reported elasticity is 0.5. One half of that number is due to publication bias: larger estimates are reported preferentially. The other half is due to identification bias: studies with less exogenous time variation in wages report larger elasticities. Net of the biases, the literature implies a zero mean elasticity and, with 95% confidence, is inconsistent with calibrations above 0.25. To derive these results we collect 23 variables that reflect the context in which the elasticity was obtained, use nonlinear techniques to correct for publication bias, and employ Bayesian and frequentist model averaging to address model uncertainty.


Sankhya A, 2021
Author(s): Kira Alhorn, Holger Dette, Kirsten Schorning

Abstract: In this paper we construct optimal designs for frequentist model averaging estimation. We derive the asymptotic distribution of the model averaging estimate with fixed weights in the case where the competing models are non-nested. A Bayesian optimal design minimizes an expectation of the asymptotic mean squared error of the model averaging estimate, calculated with respect to a suitable prior distribution. We derive a necessary condition for the optimality of a given design with respect to this new criterion. We demonstrate that Bayesian optimal designs can improve the accuracy of model averaging substantially. Moreover, the derived designs also improve the accuracy of estimation in a model chosen by model selection and of model averaging estimates with random weights.
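
In compact form (the notation here is ours, not necessarily the paper's), the two objects involved are the fixed-weight model averaging estimate and a design criterion that integrates its asymptotic mean squared error over a prior:

```latex
% Model averaging estimate under design \xi with fixed weights over M candidate models:
\hat{\mu}_{\mathrm{MA}}(\xi) = \sum_{m=1}^{M} w_m \, \hat{\mu}_m(\xi),
\qquad \sum_{m=1}^{M} w_m = 1, \; w_m \ge 0 .

% A Bayesian optimal design minimizes the asymptotic mean squared error
% averaged over a prior \pi on the unknown model/parameter configuration \theta:
\xi^{*} \in \arg\min_{\xi} \int \mathrm{AMSE}\bigl(\hat{\mu}_{\mathrm{MA}}(\xi), \theta\bigr)\, \pi(\mathrm{d}\theta).
```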


Author(s): Barry L. Nelson, Alan T. K. Wan, Guohua Zou, Xinyu Zhang, Xi Jiang

Input uncertainty is an aspect of simulation model risk that arises when the driving input distributions are derived or “fit” to real-world, historical data. Although there has been significant progress on quantifying and hedging against input uncertainty, there has been no direct attempt to reduce it via better input modeling. The meaning of “better” depends on the context and the objective: Our context is when (a) there are one or more families of parametric distributions that are plausible choices; (b) the real-world historical data are not expected to perfectly conform to any of them; and (c) our primary goal is to obtain higher-fidelity simulation output rather than to discover the “true” distribution. In this paper, we show that frequentist model averaging can be an effective way to create input models that better represent the true, unknown input distribution, thereby reducing model risk. Input model averaging builds from standard input modeling practice, is not computationally burdensome, requires no change in how the simulation is executed and no follow-up experiments, and is available on the Comprehensive R Archive Network (CRAN). We provide theoretical and empirical support for our approach.
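
The sketch below illustrates the general idea of input model averaging under simple assumptions of ours: several plausible parametric families are fit to the same historical data, weighted (here by smoothed AIC, one convenient frequentist choice), and simulation inputs are then drawn from the resulting mixture. It is a schematic only, not the estimator developed in the paper or its CRAN implementation.

```python
# Schematic of input model averaging: fit candidate parametric families to
# historical input data, weight them, and sample simulation inputs from the
# weighted mixture. Data and family choices are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=3.0, size=500)   # stand-in "historical" data

families = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}

fits, aics = {}, {}
for name, dist in families.items():
    params = dist.fit(data, floc=0)                # MLE fit, location fixed at 0
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                            # free parameters (loc is fixed)
    fits[name] = params
    aics[name] = 2 * k - 2 * loglik

a = np.array(list(aics.values()))
w = np.exp(-0.5 * (a - a.min()))
w /= w.sum()                                       # model weights

def sample_input(n):
    """Draw n inputs from the model-averaged (mixture) input distribution."""
    names = list(families)
    choice = rng.choice(len(names), size=n, p=w)
    out = np.empty(n)
    for i, name in enumerate(names):
        idx = choice == i
        out[idx] = families[name].rvs(*fits[name], size=idx.sum(), random_state=rng)
    return out

print(dict(zip(families, np.round(w, 3))))
print(sample_input(5))
```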


2020
Author(s): Tomas Havranek, Zuzana Irsova, Lubica Laslopova, Olesia Zeynalova

A key parameter in the analysis of wage inequality is the elasticity of substitution between skilled and unskilled labor. We question the common view that the elasticity exceeds 1. Two biases, publication and attenuation, conspire to pull the mean elasticity reported in the literature to 1.9. After correcting for the biases, the literature is consistent with an elasticity in the US of 0.6 to 0.9. Our analysis relies on 729 estimates of the elasticity collected from 76 studies, as well as 37 controls that reflect the context in which the estimates were obtained. We use recently developed nonlinear techniques to correct for publication bias and employ Bayesian and frequentist model averaging to address model uncertainty. Our results suggest that, first, insignificant estimates of the elasticity are underreported. Second, because researchers typically estimate the elasticity's inverse, measurement error exaggerates the elasticity, and we show the exaggeration is substantial. Third, elasticities are systematically larger for developed countries, translog estimation, and methods that ignore endogeneity.


2020, Vol. 58 (3), pp. 644-719
Author(s): Mark F. J. Steel

The method of model averaging has become an important tool to deal with model uncertainty, for example in situations where a large number of different theories exist, as is common in economics. Model averaging is a natural and formal response to model uncertainty in a Bayesian framework, and most of the paper deals with Bayesian model averaging. The important role of the prior assumptions in these Bayesian procedures is highlighted. In addition, frequentist model averaging methods are also discussed. Numerical techniques to implement these methods are explained, and I point the reader to some freely available computational resources. The main focus is on uncertainty regarding the choice of covariates in normal linear regression models, but the paper also covers other, more challenging, settings, with particular emphasis on sampling models commonly used in economics. Applications of model averaging in economics are reviewed and discussed in a wide range of areas including growth economics, production modeling, finance, and the forecasting of macroeconomic quantities. (JEL C11, C15, C20, C52, O47).
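
For reference, the basic Bayesian model averaging identities that such surveys build on, written in standard notation: posterior model probabilities weight the marginal likelihoods, and inference on a quantity of interest averages over the candidate models.

```latex
% Posterior probability of model M_k among candidates M_1,...,M_K:
p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)},
\qquad
% Model-averaged posterior for a quantity of interest \Delta:
p(\Delta \mid y) = \sum_{k=1}^{K} p(\Delta \mid y, M_k)\, p(M_k \mid y).
```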


Author(s): Zhenyu A. Liao, Charupriya Sharma, James Cussens, Peter Van Beek

A Bayesian network is a widely used probabilistic graphical model with applications in knowledge discovery and prediction. Learning a Bayesian network (BN) from data can be cast as an optimization problem using the well-known score-and-search approach. However, selecting a single model (i.e., the best-scoring BN) can be misleading or may not achieve the best possible accuracy. An alternative to committing to a single model is to perform some form of Bayesian or frequentist model averaging, where the space of possible BNs is sampled or enumerated in some fashion. Unfortunately, existing approaches for model averaging either severely restrict the structure of the Bayesian network or have only been shown to scale to networks with fewer than 30 random variables. In this paper, we propose a novel approach to model averaging inspired by performance guarantees in approximation algorithms. Our approach has two primary advantages. First, our approach considers only credible models, in the sense that they are optimal or near-optimal in score. Second, our approach is more efficient and scales to significantly larger Bayesian networks than existing approaches.
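
A toy sketch of the underlying idea, under assumptions of ours: given candidate structures and their log scores, keep only those within a gap epsilon of the best score, weight them by exponentiated score, and average a structural feature such as the presence of an edge. The candidate networks and scores below are hypothetical, and the paper's contribution is an exact search that enumerates such near-optimal networks at scale rather than this simple post hoc averaging.

```python
# Toy averaging over near-optimal Bayesian network structures.
# Structures are represented as sets of directed edges; scores are hypothetical.
import math

candidates = [
    (frozenset({("A", "B"), ("B", "C")}), -1000.2),
    (frozenset({("A", "B"), ("A", "C")}), -1000.9),
    (frozenset({("B", "A"), ("B", "C")}), -1012.4),   # far from optimal, will be dropped
]
epsilon = 5.0                                         # score gap defining "credible" models

best = max(score for _, score in candidates)
credible = [(g, s) for g, s in candidates if best - s <= epsilon]

# Exponentiated-score weights, computed stably relative to the best score.
weights = [math.exp(s - best) for _, s in credible]
total = sum(weights)
weights = [w / total for w in weights]

def edge_probability(edge):
    """Model-averaged probability that a directed edge is present."""
    return sum(w for (g, _), w in zip(credible, weights) if edge in g)

print("P(A -> B) =", round(edge_probability(("A", "B")), 3))
print("P(A -> C) =", round(edge_probability(("A", "C")), 3))
```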


Biometrika, 2019, Vol. 106 (3), pp. 665-682
Author(s): K. Alhorn, K. Schorning, H. Dette

Summary: We consider the problem of designing experiments for estimating a target parameter in regression analysis when there is uncertainty about the parametric form of the regression function. A new optimality criterion is proposed that chooses the experimental design to minimize the asymptotic mean squared error of the frequentist model averaging estimate. Necessary conditions for the optimal solution of a locally and Bayesian optimal design problem are established. The results are illustrated in several examples, and it is demonstrated that Bayesian optimal designs can yield a reduction of the mean squared error of the model averaging estimator by up to 45%.


2019, Vol. 62 (2), pp. 205-226
Author(s): Priyam Mitra, Heng Lian, Ritwik Mitra, Hua Liang, Min-ge Xie
