Learning with Heterogeneous Misspecified Models: Characterization and Robustness

2021 ◽  
Author(s):  
Daniel Hauser ◽  
J. Aislinn Bohren
1996 ◽  
Vol 12 (4) ◽  
pp. 597-619 ◽  
Author(s):  
Alain Monfort

This paper is the text of the 1994 Tjalling Koopmans Lecture of the Cowles Foundation. The aim of this lecture was to survey the roles of misspecified models in econometrics. Through 10 stories we show how misspecification problems can be dealt with and how misspecified models can play a positive role in inference.


Econometrica ◽  
2021 ◽  
Vol 89 (6) ◽  
pp. 3025-3077 ◽  
Author(s):  
J. Aislinn Bohren ◽  
Daniel N. Hauser

This paper develops a general framework to study how misinterpreting information impacts learning. Our main result is a simple criterion to characterize long‐run beliefs based on the underlying form of misspecification. We present this characterization in the context of social learning, then highlight how it applies to other learning environments, including individual learning. A key contribution is that our characterization applies to settings with model heterogeneity and provides conditions for entrenched disagreement. Our characterization can be used to determine whether a representative agent approach is valid in the face of heterogeneity, study how differing levels of bias or unawareness of others' biases impact learning, and explore whether the impact of a bias is sensitive to parametric specification or the source of information. This unified framework synthesizes insights gleaned from previously studied forms of misspecification and provides novel insights in specific applications, as we demonstrate in settings with partisan bias, overreaction, naive learning, and level‐k reasoning.


2018 ◽  
Vol 115 (8) ◽  
pp. 1854-1859 ◽  
Author(s):  
Ziheng Yang ◽  
Tianqi Zhu

The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor behind the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results for the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
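The polarization described above is easy to reproduce in a toy setting. The sketch below is our own illustrative stand-in (a Gaussian location problem, not the paper's phylogenetic setting): the data truly come from N(0,1), while the two candidate models, N(-1,1) and N(+1,1), are equally wrong by symmetry. As the sample size grows, the posterior probability of one model nonetheless rushes to 0 or 1 in almost every replication.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_prob_model1(x):
    """Posterior probability of N(-1,1) over N(+1,1) under equal priors."""
    # Log Bayes factor of model 1 vs model 2; clipped to avoid overflow in exp.
    log_bf = np.clip(np.sum(-0.5 * (x + 1) ** 2 + 0.5 * (x - 1) ** 2), -700, 700)
    return 1.0 / (1.0 + np.exp(-log_bf))

# Fraction of replications in which one model receives near-total support.
frac_polarized = {}
for n in (10, 1000, 100000):
    probs = [posterior_prob_model1(rng.standard_normal(n)) for _ in range(200)]
    frac_polarized[n] = float(np.mean([p < 0.01 or p > 0.99 for p in probs]))
    print(n, frac_polarized[n])
```

Because the log Bayes factor here is a mean-zero random walk in the data, its magnitude grows like the square root of the sample size, so the posterior is driven to an extreme even though neither model is better.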


2019 ◽  
Vol 33 (6) ◽  
pp. 2796-2842 ◽  
Author(s):  
Valentina Raponi ◽  
Cesare Robotti ◽  
Paolo Zaffaroni

We propose a methodology for estimating and testing beta-pricing models when a large number of assets is available for investment but the number of time-series observations is fixed. We first consider the case of correctly specified models with constant risk premia, and then extend our framework to deal with time-varying risk premia, potentially misspecified models, firm characteristics, and unbalanced panels. We show that our large cross-sectional framework poses a serious challenge to common empirical findings regarding the validity of beta-pricing models. In the context of pricing models with Fama-French factors, firm characteristics are found to explain a much larger proportion of variation in estimated expected returns than betas. The authors have furnished an Internet Appendix, available on the Oxford University Press website alongside the final published paper.
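The large-N, fixed-T setting matters because the textbook two-pass estimator is biased when betas are estimated from a short time series. The sketch below is the generic two-pass procedure with illustrative numbers of our choosing, not the paper's bias-corrected estimator: estimation error in the first-pass betas acts as errors-in-variables in the second pass and attenuates the estimated risk premium.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 60, 2000              # short time series, large cross-section
lam = 0.5                    # true risk premium
f = rng.standard_normal(T)
f -= f.mean()                # demeaned factor realizations
beta = 0.5 + rng.random(N)   # asset betas
eps = 2.0 * rng.standard_normal((T, N))
R = beta * lam + np.outer(f, beta) + eps   # excess returns, E[R_i] = beta_i * lam

# Pass 1: time-series regression of each asset's returns on the factor.
beta_hat = f @ R / (f @ f)

# Pass 2: cross-sectional regression of average returns on estimated betas.
X = np.column_stack([np.ones(N), beta_hat])
intercept, lam_hat = np.linalg.lstsq(X, R.mean(axis=0), rcond=None)[0]
print(lam_hat)  # attenuated below the true premium of 0.5
```

With T fixed, increasing N alone does not remove this attenuation, which is the motivation for bias-corrected large cross-sectional methods such as the one the abstract describes.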


Biometrika ◽  
2018 ◽  
Vol 106 (2) ◽  
pp. 479-486 ◽  
Author(s):  
Nicholas Syring ◽  
Ryan Martin

Calibration of credible regions derived from under- or misspecified models is an important and challenging problem. In this paper, we introduce a scalar tuning parameter that controls the posterior distribution spread, and develop a Monte Carlo algorithm that sets this parameter so that the corresponding credible region achieves the nominal frequentist coverage probability.
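A toy Gaussian version conveys the idea; this is our own minimal sketch under stated assumptions, not the paper's general algorithm. The working model N(mu, 1) is misspecified (the true noise standard deviation is 2), so the untempered 95% credible interval undercovers; a scalar tempering parameter omega is tuned by bisection on Monte Carlo coverage until the interval hits the nominal level.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_SD, N, NOMINAL, REPS = 2.0, 50, 0.95, 4000

def mc_coverage(omega):
    """Monte Carlo coverage of the tempered 95% credible interval."""
    # Working model N(mu, 1) with a flat prior; tempering the likelihood by
    # omega gives a posterior N(xbar, 1/(N*omega)) for the mean.
    xbar = rng.normal(0.0, TRUE_SD / np.sqrt(N), size=REPS)  # sampling dist of xbar
    half_width = 1.96 / np.sqrt(N * omega)
    return np.mean(np.abs(xbar) <= half_width)

# Bisection: a larger omega tightens the interval and lowers coverage.
lo, hi = 0.01, 1.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if mc_coverage(mid) > NOMINAL:
        lo = mid          # over-covering: the interval can be tightened
    else:
        hi = mid
omega_hat = 0.5 * (lo + hi)
print(omega_hat)  # in this toy, theory gives omega = 1 / TRUE_SD**2 = 0.25
```

In this Gaussian toy the calibrated value can be computed in closed form, which makes it a convenient check; in realistic misspecified models the coverage map is only available by simulation, which is why a Monte Carlo tuning algorithm is needed.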

