Uncertainty of prior and posterior model probability: Implications for interpreting Bayes factors

2021
Author(s):  
John K. Kruschke

In most applications of Bayesian model comparison or Bayesian hypothesis testing, the results are reported in terms of the Bayes factor only, not in terms of the posterior probabilities of the models. Posterior model probabilities are not reported because researchers are reluctant to declare prior model probabilities, a reluctance that in turn stems from uncertainty about the prior. Fortunately, Bayesian formalisms are designed to embrace prior uncertainty, not to ignore it. This article provides a novel derivation of the posterior distribution of model probability and illustrates it with many examples. The posterior distribution is useful for making decisions that take into account the uncertainty of the posterior model probability. Benchmark Bayes factors are provided for a spectrum of priors on model probability. R code is posted at https://osf.io/36527/. This framework and these tools should improve the interpretation and usefulness of Bayes factors in all their applications.
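To make the quantities concrete: the posterior model probability follows from the Bayes factor and the prior model probability through the standard odds identity, and uncertainty about the prior probability propagates directly to the posterior probability. The R sketch below is illustrative only (it is not the code posted at the OSF link), and the Beta(2, 2) prior on the model probability at the end is an assumption chosen for the example.

# Posterior probability of model 1 from a Bayes factor and a prior model
# probability, via: posterior odds = Bayes factor x prior odds.
posterior_model_prob <- function(bf10, prior_m1 = 0.5) {
  prior_odds <- prior_m1 / (1 - prior_m1)
  post_odds  <- bf10 * prior_odds
  post_odds / (1 + post_odds)   # convert odds back to a probability
}

posterior_model_prob(10, prior_m1 = 0.5)   # BF = 10, 50/50 prior: ~0.91
posterior_model_prob(10, prior_m1 = 0.1)   # same BF, skeptical prior: ~0.53

# Uncertainty about the prior model probability (here an assumed
# Beta(2, 2) distribution) propagates to a distribution over the
# posterior model probability, which can then inform decisions.
pi_draws   <- rbeta(1e5, 2, 2)
post_draws <- posterior_model_prob(10, prior_m1 = pi_draws)
quantile(post_draws, c(0.025, 0.5, 0.975))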

2017
Author(s):
Jeffrey Rouder
Julia M. Haaf
Frederik Aust

A key goal in research is to use data to assess competing hypotheses or theories. An alternative to conventional significance testing is Bayesian model comparison. The main idea is that competing theories are represented by statistical models. In the Bayesian framework, these models then yield predictions about data even before the data are seen. How well the data match the predictions under competing models may be calculated, and the ratio of these matches, the Bayes factor, is used to assess the evidence for one model compared to another. We illustrate the process of going from theories to models and to predictions in the context of two hypothetical examples about how exposure to media affects attitudes toward refugees.
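To make "how well the data match the predictions" concrete, here is a toy binomial example in R (hypothetical, not taken from the paper): suppose 7 of 10 participants shift attitude after media exposure, and we compare a chance model (p = .5) against a model that lets p vary uniformly over [0, 1].

k <- 7; n <- 10

# Predictive probability of the observed data under each model.
pred_m0 <- dbinom(k, n, prob = 0.5)                            # point null, p = .5
pred_m1 <- integrate(function(p) dbinom(k, n, p), 0, 1)$value  # uniform prior; equals 1/(n + 1)

bf10 <- pred_m1 / pred_m0   # Bayes factor for Model 1 over Model 0
bf10                        # ~0.78: these data mildly favor the chance model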


2014
pp. 101-117
Author(s):  
Michael D. Lee
Eric-Jan Wagenmakers

2018
Vol 265
pp. 271-278
Author(s):  
Tyler B. Grove
Beier Yao
Savanna A. Mueller
Merranda McLaughlin
Vicki L. Ellingrod
...  

2017
Vol 70
pp. 84-93
Author(s):  
R. Wesley Henderson
Paul M. Goggans
Lei Cao

2018
Author(s):  
Julia M. Haaf
Fayette Klaassen
Jeffrey Rouder

Most theories in the social sciences are verbal and provide ordinal-level predictions for data. For example, a theory might predict that performance is better in one condition than another, but not by how much. One way of gaining additional specificity is to posit many ordinal constraints that hold simultaneously. For example, a theory might predict an effect in one condition, a larger effect in another, and none in a third. We show how common theoretical positions naturally lead to multiple ordinal constraints. To assess whether multiple ordinal constraints hold in data, we adopt a Bayesian model comparison approach. The result is an inferential system that is custom-tuned for the way social scientists conceptualize theory, and that is more intuitive and informative than current linear-model approaches.
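One common way to assess such constraints by Bayesian model comparison is the encompassing-prior approach, in which the Bayes factor for a constrained model against an unconstrained one is the posterior proportion of draws satisfying the constraint divided by the prior proportion. The R sketch below uses that approach; the paper's own machinery may differ, and every number here is fabricated for illustration.

set.seed(1)
n_draws <- 1e5

# Hypothetical posterior draws for two condition effects, as if taken
# from a fitted Bayesian model (fabricated for illustration).
theta1_post <- rnorm(n_draws, mean = 0.10, sd = 0.06)
theta2_post <- rnorm(n_draws, mean = 0.25, sd = 0.06)

# Matching draws from an assumed encompassing prior, Normal(0, 0.5).
theta1_prior <- rnorm(n_draws, mean = 0, sd = 0.5)
theta2_prior <- rnorm(n_draws, mean = 0, sd = 0.5)

# Ordinal constraint: an effect in condition 1, and a larger one in
# condition 2 (0 < theta1 < theta2).
in_post  <- mean(theta1_post  > 0 & theta2_post  > theta1_post)
in_prior <- mean(theta1_prior > 0 & theta2_prior > theta1_prior)

bf_constrained <- in_post / in_prior   # > 1 favors the constrained model
bf_constrained                         # roughly 7 with these fabricated draws

Equality constraints such as "no effect in a third condition" need additional machinery (for example, a point mass or a narrow interval around zero) and are omitted from this sketch.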

