Computing Bayes factors to measure evidence from experiments: An extension of the BIC approximation

2018 ◽  
Vol 55 (1) ◽  
pp. 31-43 ◽  
Author(s):  
Thomas J. Faulkenberry

Summary: Bayesian inference affords scientists powerful tools for testing hypotheses. One of these tools is the Bayes factor, which indexes the extent to which support for one hypothesis over another is updated after seeing the data. Part of the hesitance to adopt this approach may stem from unfamiliarity with the computational tools necessary for computing Bayes factors. Previous work has shown that closed-form approximations of Bayes factors are relatively easy to obtain for between-groups methods, such as an analysis of variance or t-test. In this paper, I extend this approximation to develop a formula for the Bayes factor that directly uses information typically reported for ANOVAs (e.g., the F ratio and degrees of freedom). After giving two examples of its use, I report the results of simulations showing that, even with minimal input, this approximate Bayes factor produces results similar to those of existing software solutions.
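
The BIC approximation described here can be sketched in a few lines of Python. The formula below, BF01 = sqrt(n^df1 · (1 + F·df1/df2)^(−n)), follows the general BIC-based approach the abstract describes; the function and variable names are my own, and the computation is done in log space for numerical stability.

```python
import math

def bic_bayes_factor(f_ratio, df1, df2, n):
    """Approximate BF01 (evidence for the null over the alternative)
    from an ANOVA F ratio, its degrees of freedom, and sample size n,
    via the BIC approximation BF01 = sqrt(n^df1 * (1 + F*df1/df2)^-n),
    evaluated in log space to avoid overflow for large n."""
    log_bf01 = 0.5 * (df1 * math.log(n)
                      - n * math.log(1.0 + f_ratio * df1 / df2))
    return math.exp(log_bf01)
```

As a usage example, F(1, 38) = 4.0 with n = 40 gives BF01 ≈ 0.85 (weak evidence either way), while F(1, 38) = 0.5 gives BF01 ≈ 4.9 in favour of the null.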

Author(s):  
Fco. Javier Girón ◽  
Carmen del Castillo

Abstract: A simple solution to the Behrens–Fisher problem based on Bayes factors is presented, and its relation to the Behrens–Fisher distribution is explored. The construction of the Bayes factor is based on a simple hierarchical model and has a closed form based on the densities of general Behrens–Fisher distributions. Simple asymptotic approximations of the Bayes factor, which are functions of the Kullback–Leibler divergence between normal distributions, are given, and the Bayes factor is also proved to be consistent. Some examples and comparisons are also presented.
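
The asymptotic approximations mentioned are functions of the Kullback–Leibler divergence between normal distributions, which has a well-known closed form. A minimal sketch (the function name is mine):

```python
import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    """KL divergence KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) ) in nats:
    log(sigma2/sigma1) + (sigma1^2 + (mu1 - mu2)^2) / (2 sigma2^2) - 1/2."""
    return (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2.0 * sigma2**2)
            - 0.5)
```

Identical distributions give a divergence of 0, and shifting the mean of a unit-variance normal by one standard deviation gives exactly 0.5 nats.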


2020 ◽  
Vol 17 (1) ◽  
Author(s):  
Thomas Faulkenberry

In this paper, I develop a formula for estimating Bayes factors directly from minimal summary statistics produced in repeated measures analysis of variance designs. The formula, which requires knowing only the F-statistic, the number of subjects, and the number of repeated measurements per subject, is based on the BIC approximation of the Bayes factor, a common default method for Bayesian computation with linear models. In addition to providing computational examples, I report a simulation study in which I demonstrate that the formula compares favorably to a recently developed, more complex method that accounts for correlation between repeated measurements. The minimal BIC method provides a simple way for researchers to estimate Bayes factors from a minimal set of summary statistics, giving users a powerful index for estimating the evidential value of not only their own data, but also the data reported in published studies.
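
A repeated-measures version of the BIC approximation can be sketched as follows. I assume the same BIC mapping as in the between-subjects case, with N = n·k total observations, df1 = k − 1 and df2 = (n − 1)(k − 1); the paper's exact minimal BIC formula may differ, so treat this as an illustration only.

```python
import math

def rm_bic_bayes_factor(f_ratio, n_subjects, k_measurements):
    """Sketch of a BIC-approximate BF01 for a one-way repeated-measures
    ANOVA from minimal summaries: the F statistic, the number of subjects,
    and the number of repeated measurements per subject. Assumes
    N = n*k observations, df1 = k - 1, df2 = (n - 1)(k - 1); this is an
    illustrative mapping, not necessarily the paper's exact formula."""
    n_total = n_subjects * k_measurements
    df1 = k_measurements - 1
    df2 = (n_subjects - 1) * (k_measurements - 1)
    log_bf01 = 0.5 * (df1 * math.log(n_total)
                      - n_total * math.log(1.0 + f_ratio * df1 / df2))
    return math.exp(log_bf01)
```

Under these assumptions, a small F (e.g., F = 1 with 20 subjects and 3 measurements) yields BF01 > 1, favouring the null, while a large F (e.g., F = 10) yields BF01 well below 1.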


2021 ◽  
pp. 1471082X2098131
Author(s):  
Alan Agresti ◽  
Francesco Bartolucci ◽  
Antonietta Mira

We describe two interesting and innovative strands of Murray Aitkin's research publications, dealing with mixture models and with Bayesian inference. Of his considerable publications on mixture models, we focus on a nonparametric random effects approach in generalized linear mixed modelling, which has proven useful in a wide variety of applications. As an early proponent of ways of implementing the Bayesian paradigm, Aitkin proposed an alternative Bayes factor based on a posterior mean likelihood. We discuss these innovative approaches and some research lines motivated by them and also suggest future related methodological implementations.
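
Aitkin's posterior Bayes factor replaces the usual marginal (prior-averaged) likelihood with the posterior mean of the likelihood. A toy beta-binomial sketch of the idea, with the function name, example data, and Monte Carlo setup all my own:

```python
import math
import random

def posterior_bayes_factor(successes, n, theta0, draws=20000, seed=0):
    """Toy sketch of Aitkin's posterior Bayes factor: a binomial model
    with a uniform Beta(1, 1) prior versus a point null theta0. The
    alternative's likelihood is averaged over POSTERIOR draws
    (Beta(successes + 1, n - successes + 1)), not over the prior."""
    rng = random.Random(seed)
    binom = math.comb(n, successes)
    # Likelihood under the point null (no free parameter to average over).
    lik_null = binom * theta0**successes * (1 - theta0)**(n - successes)
    # Posterior mean of the likelihood under the alternative.
    post = [rng.betavariate(successes + 1, n - successes + 1)
            for _ in range(draws)]
    lik_alt = sum(binom * t**successes * (1 - t)**(n - successes)
                  for t in post) / draws
    return lik_alt / lik_null
```

With 15 successes in 20 trials against theta0 = 0.5, the posterior-averaged likelihood comfortably exceeds the null likelihood, so the ratio is well above 1.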


2017 ◽  
Author(s):  
Jeffrey Rouder

This document archives two blog posts that may be used in association with "Bayesian Inference for Psychology. Part II: Example Applications with JASP" (Wagenmakers et al.), currently forthcoming in Psychonomic Bulletin & Review (2017). The blog posts document how to compute Bayes factors for any two models in the common one-sample (paired t-test) setup. They are posted as part of Jeff Rouder's Invariances blog, and may be found at http://jeffrouder.blogspot.com/2016/01/what-priors-should-i-use-part-i.html and http://jeffrouder.blogspot.com/2016/03/roll-your-own-ii-bayes-factors-with.html.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Thomas J. Faulkenberry

Summary: In Bayesian hypothesis testing, evidence for a statistical model is quantified by the Bayes factor, which represents the relative likelihood of observed data under that model compared to another competing model. In general, computing Bayes factors is difficult, as computing the marginal likelihood of data under a given model requires integrating over a prior distribution of model parameters. In this paper, I capitalize on a particular choice of prior distribution that allows the Bayes factor to be expressed without integral representation, and I develop a simple formula – the Pearson Bayes factor – that requires only minimal summary statistics as commonly reported in scientific papers, such as the t or F score and the degrees of freedom. In addition to presenting this new result, I provide several examples of its use and report a simulation study validating its performance. Importantly, the Pearson Bayes factor gives applied researchers the ability to compute exact Bayes factors from minimal summary data, and thus easily assess the evidential value of any data for which these summary statistics are provided, even when the original data are not available.
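
The core difficulty the paper addresses — integrating the likelihood over a prior to obtain a marginal likelihood — is easiest to see in a conjugate toy case, where the integral has a closed form. The beta-binomial example below is my own illustration of that general point, not the Pearson Bayes factor itself:

```python
import math

def log_beta(a, b):
    """Log of the Beta function, computed via log-gammas."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def binomial_bf10(successes, n, theta0, a=1.0, b=1.0):
    """Exact BF10 for a binomial rate. Under H1 the likelihood is
    integrated over a Beta(a, b) prior, which yields a closed form in
    Beta functions; H0 fixes the rate at theta0. The binomial
    coefficient cancels in the ratio."""
    log_m1 = log_beta(successes + a, n - successes + b) - log_beta(a, b)
    log_m0 = (successes * math.log(theta0)
              + (n - successes) * math.log(1 - theta0))
    return math.exp(log_m1 - log_m0)
```

For 15 successes in 20 trials against theta0 = 0.5 with a uniform prior, this gives BF10 ≈ 3.22: modest evidence that the rate differs from one half.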


2017 ◽  
Author(s):  
Matt Williams ◽  
Rasmus A. Bååth ◽  
Michael Carl Philipp

This paper discusses the concept of Bayes factors as inferential tools that can directly replace NHST in the day-to-day work of developmental researchers. A Bayes factor indicates the degree to which observed data should increase (or decrease) our support for one hypothesis in comparison to another. This framework allows researchers not only to reject null hypotheses but also to produce evidence in their favor. Bayes factor alternatives to common tests used by developmental psychologists are available in easy-to-use software. However, we note that Bayesian estimation (rather than Bayes factors) may be a more appealing and general framework when a point null hypothesis is a priori implausible.


2018 ◽  
Vol 2018 ◽  
pp. 1-8 ◽  
Author(s):  
Lei Sun ◽  
Minglei Yang ◽  
Baixiao Chen

Sparse planar arrays, such as the billboard array, the open box array, and the two-dimensional nested array, have drawn considerable interest owing to their capability for two-dimensional angle estimation. Unfortunately, these arrays often suffer from mutual-coupling problems due to the large number of sensor pairs with small spacing d (usually equal to a half wavelength), which degrades the performance of direction of arrival (DOA) estimation. Recently, the two-dimensional half-open box array and the hourglass array were proposed to reduce mutual coupling. However, both still contain many sensor pairs with small spacing d, so the reduction of mutual coupling remains limited. In this paper, we propose a new sparse planar array with fewer sensor pairs at small spacing d. It is named the thermos array because its shape resembles a thermos. Although the resulting difference coarray (DCA) of the thermos array is not hole-free, a large filled rectangular part of the DCA can be exploited for spatial-smoothing-based DOA estimation. Moreover, the array enjoys closed-form expressions for the sensor locations and the number of available degrees of freedom. Simulations show that the thermos array can achieve better DOA estimation performance than the hourglass array in the presence of mutual coupling, which indicates that the thermos array is more robust to mutual coupling.
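
The difference coarray construction generalizes a standard one-dimensional idea: the set of all pairwise sensor-position differences. A minimal 1-D sketch (the two-level nested-array example and the function names are mine; the arrays in the paper are planar):

```python
def difference_coarray(positions):
    """Set of all pairwise position differences, in units of the base
    spacing d."""
    return {a - b for a in positions for b in positions}

def count_unit_spacing_pairs(positions):
    """Number of sensor pairs separated by exactly one base spacing d,
    a rough proxy for mutual-coupling severity."""
    pos = sorted(positions)
    return sum(1 for i in range(len(pos)) for j in range(i + 1, len(pos))
               if pos[j] - pos[i] == 1)

# Two-level nested array with N1 = N2 = 3 (positions in units of d):
# a dense level at 1, 2, 3 and a sparse level at 4, 8, 12.
nested = [1, 2, 3, 4, 8, 12]
dca = difference_coarray(nested)
```

For this nested array the coarray is hole-free from −11 to 11 with only three unit-spacing sensor pairs; sparse-geometry designs like the thermos array aim to shrink that pair count further.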


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592097262
Author(s):  
Don van Ravenzwaaij ◽  
Alexander Etz

When social scientists wish to learn about an empirical phenomenon, they perform an experiment. When they wish to learn about a complex numerical phenomenon, they can perform a simulation study. The goal of this Tutorial is twofold. First, it introduces how to set up a simulation study using the relatively simple example of simulating from the prior. Second, it demonstrates how simulation can be used to learn about the Jeffreys-Zellner-Siow (JZS) Bayes factor, a currently popular implementation of the Bayes factor employed in the BayesFactor R package and freeware program JASP. Many technical expositions on Bayes factors exist, but these may be somewhat inaccessible to researchers who are not specialized in statistics. In a step-by-step approach, this Tutorial shows how a simple simulation script can be used to approximate the calculation of the Bayes factor. We explain how a researcher can write such a sampler to approximate Bayes factors in a few lines of code, what the logic is behind the Savage-Dickey method used to visualize Bayes factors, and what the practical differences are for different choices of the prior distribution used to calculate Bayes factors.
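
The "few lines of code" approach the Tutorial describes can be illustrated by averaging the likelihood over draws from the prior. The sketch below uses a standard-normal prior on the mean with known unit variance, rather than the JZS Cauchy prior, purely to keep it self-contained; the names and example data are mine.

```python
import math
import random

def simulate_bf10(data, draws=100_000, seed=1):
    """Monte Carlo BF10 for H0: mu = 0 versus H1: mu ~ N(0, 1), with
    data assumed N(mu, 1). The marginal likelihood under H1 is
    approximated by averaging the likelihood over prior draws; the null
    likelihood is a single evaluation at mu = 0. Likelihood *ratios*
    are averaged to keep the arithmetic numerically stable."""
    rng = random.Random(seed)

    def log_lik(mu):
        return -0.5 * sum((y - mu)**2 for y in data)  # up to a constant

    log_l0 = log_lik(0.0)
    ratios = (math.exp(log_lik(rng.gauss(0.0, 1.0)) - log_l0)
              for _ in range(draws))
    return sum(ratios) / draws

sample = [1.5] * 10  # toy data whose mean is clearly away from zero
```

For this sample the exact answer under these assumptions is about 8.3 × 10³, and the simulation lands in that vicinity; shrinking the prior scale or moving the data toward zero pulls the estimate down accordingly.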


2021 ◽  
Author(s):  
Neil McLatchie ◽  
Manuela Thomae

Thomae and Viki (2013) reported that increased exposure to sexist humour can increase rape proclivity among males, specifically those who score high on measures of Hostile Sexism. Here we report two pre-registered direct replications (N = 530) of Study 2 from Thomae and Viki (2013) and assess replicability via (i) statistical significance, (ii) Bayes factors, (iii) the small-telescope approach, and (iv) an internal meta-analysis across the original and replication studies. The original results were not supported by any of the approaches. Combining the original study and the replications yielded moderate evidence in support of the null over the alternative hypothesis with a Bayes factor of B = 0.13. In light of the combined evidence, we encourage researchers to exercise caution before claiming that brief exposure to sexist humour increases males' proclivity towards rape, until further pre-registered and open research demonstrates the effect is reliably reproducible.


2021 ◽  
Author(s):  
John K. Kruschke

In most applications of Bayesian model comparison or Bayesian hypothesis testing, the results are reported in terms of the Bayes factor only, not in terms of the posterior probabilities of the models. Posterior model probabilities are not reported because researchers are reluctant to declare prior model probabilities, which in turn stems from uncertainty in the prior. Fortunately, Bayesian formalisms are designed to embrace prior uncertainty, not ignore it. This article provides a novel derivation of the posterior distribution of model probability and presents many examples. The posterior distribution is useful for making decisions that take into account the uncertainty of the posterior model probability. Benchmark Bayes factors are provided for a spectrum of priors on model probability. R code is posted at https://osf.io/36527/. This framework and tools will improve interpretation and usefulness of Bayes factors in all their applications.
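
The mapping from a Bayes factor and a declared prior model probability to a posterior model probability is one line of algebra — posterior odds = Bayes factor × prior odds; the article's contribution is wrapping a distribution around the prior input. A minimal sketch of the point-prior case (the function name is mine):

```python
def posterior_model_prob(bf10, prior_prob_m1=0.5):
    """Posterior probability of model M1 given BF10 (evidence for M1
    over M0) and a declared prior probability of M1."""
    prior_odds = prior_prob_m1 / (1.0 - prior_prob_m1)
    post_odds = bf10 * prior_odds
    return post_odds / (1.0 + post_odds)
```

For example, BF10 = 3 with even prior odds yields a posterior probability of 0.75 for M1, while the same Bayes factor against a sceptical prior of 0.1 yields only 0.25 — exactly the sensitivity to the prior that motivates reporting a distribution over model probability.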

