Testing theories with Bayes factors

2021 ◽  
Author(s):  
Zoltan Dienes

Bayes factors are a useful tool for researchers in the behavioural and social sciences, partly because they can provide evidence for no effect relative to the sort of effect expected. By contrast, a non-significant result does not provide evidence for the null hypothesis (H0) being tested. So, if non-significance does not in itself count against a theory predicting an effect, how could a theory fail a test? Bayes factors provide a measure of evidence from first principles. A severe test is one that is likely to obtain evidence against a theory if that theory were false; that is, to obtain an extreme Bayes factor against the theory. Bayes factors show why hacking and cherry-picking degrade evidence, how to deal with multiple-testing situations, and how optional stopping is consistent with severe testing. Further, informed Bayes factors can be used to link a theory tightly to how that theory is tested, so that the measured evidence genuinely bears on the theory.
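The idea of quantifying evidence for no effect "relative to the sort of effect expected" can be sketched numerically. The toy calculation below compares a point null against an H1 whose predictions are modelled as a half-normal distribution of effect sizes; the half-normal scale, the known-standard-error normal likelihood, and the grid integration are all illustrative assumptions, not Dienes' actual calculator.

```python
import numpy as np

def bf_halfnormal(mean, se, sd_h1, n_grid=100_000):
    """Bayes factor B (H1 vs H0) for an observed effect estimate `mean`
    with standard error `se`. H1's predictions are modelled as a
    half-normal with scale `sd_h1`, encoding the sort of effect
    expected; H0 is a point null at zero."""
    d = np.linspace(0.0, 10.0 * sd_h1, n_grid)   # candidate true effects
    lik = np.exp(-0.5 * ((mean - d) / se) ** 2) / (se * np.sqrt(2 * np.pi))
    prior = 2 * np.exp(-0.5 * (d / sd_h1) ** 2) / (sd_h1 * np.sqrt(2 * np.pi))
    m1 = np.sum(lik * prior) * (d[1] - d[0])     # marginal likelihood under H1
    m0 = np.exp(-0.5 * (mean / se) ** 2) / (se * np.sqrt(2 * np.pi))
    return m1 / m0

# A precisely estimated effect of zero yields B well below 1:
# evidence for H0 relative to the effect H1 expected.
b = bf_halfnormal(mean=0.0, se=0.1, sd_h1=0.5)
```

A non-significant result with a large standard error, by contrast, yields B near 1: no evidence either way, which is exactly the distinction the abstract draws.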

2016 ◽  
Vol 27 (2) ◽  
pp. 364-383 ◽  
Author(s):  
Stefano Cabras

The problem of multiple hypothesis testing can be represented as a Markov process in which a new alternative hypothesis is accepted according to its evidence relative to the currently accepted one. This virtual, not formally observed process provides the most probable set of non-null hypotheses given the data; it plays the same role as Markov chain Monte Carlo in approximating a posterior distribution. To apply this representation and obtain the posterior probabilities over all alternative hypotheses, it is enough to have, for each test, Bayes factors that are only partially defined, e.g. Bayes factors known up to an unknown constant. Such Bayes factors may arise either from using default, improper priors or from calibrating p-values with respect to their corresponding Bayes factor lower bound. Both sources of evidence are used to form a Markov transition kernel on the space of hypotheses. The approach leads to easily interpretable results and involves very simple formulas, making it suitable for analyzing large datasets such as those arising from gene expression data (microarray or RNA-seq experiments).
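The acceptance mechanism can be sketched with a toy Metropolis-type chain over hypotheses. Because moves are accepted according to relative evidence, an unknown multiplicative constant in the Bayes factors cancels; the evidence values and the uniform proposal below are illustrative assumptions, not the paper's actual kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Evidence for five alternative hypotheses, known only up to a
# common constant c: only ratios enter the acceptance step.
c = 7.3  # unknown in practice; it cancels in the ratio below
evidence = c * np.array([0.2, 1.5, 0.4, 3.0, 0.9])

state, counts = 0, np.zeros(evidence.size)
for _ in range(100_000):
    proposal = rng.integers(evidence.size)       # uniform proposal
    # Accept the proposed hypothesis with probability
    # min(1, relative evidence to the current one)
    if rng.random() < evidence[proposal] / evidence[state]:
        state = proposal
    counts[state] += 1

# Visit frequencies approximate the posterior probabilities of the
# hypotheses, proportional to their (unnormalized) evidence.
posterior = counts / counts.sum()
```

The stationary distribution of this chain is proportional to the evidence vector, so the visit frequencies recover posterior probabilities without ever knowing c.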


2019 ◽  
Author(s):  
Herbert Hoijtink ◽  
Joris Mulder ◽  
Caspar J. Van Lissa ◽  
Xin Gu

Learning about hypothesis evaluation using the Bayes factor could enhance psychological research. In contrast to null-hypothesis significance testing: it renders the evidence in favor of each of the hypotheses under consideration (and can be used to quantify support for the null hypothesis) instead of a dichotomous reject/do-not-reject decision; it can straightforwardly be used for the evaluation of multiple hypotheses without having to worry about the proper manner of accounting for multiple testing; and it allows continuous re-evaluation of hypotheses after additional data have been collected (Bayesian updating). This tutorial addresses researchers considering evaluating their hypotheses by means of the Bayes factor. The focus is completely applied, and each topic discussed is illustrated using Bayes factors for the evaluation of hypotheses in the context of an ANOVA model, obtained using the R package bain. Readers can execute all the analyses presented while reading this tutorial if they download bain and the R code used. It elaborates, in a completely non-technical manner, what the Bayes factor is, how it can be obtained, how Bayes factors should be interpreted, and what can be done with them. After reading this tutorial and executing the associated code, researchers will be able to use their own data for the evaluation of hypotheses by means of the Bayes factor, not only in the context of ANOVA models but also in the context of other statistical models.
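The flavour of such hypothesis evaluation can be sketched outside R as well. The toy calculation below scores an order-constrained hypothesis about three group means by the fit/complexity ratio of the encompassing-prior approach; it uses a known-variance normal approximation on simulated data and is a rough illustration of the idea, not bain's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy three-group ANOVA data with descending true means
groups = [rng.normal(m, 1.0, 100) for m in (0.8, 0.4, 0.0)]

# Approximate posterior draws for each group mean (normal
# approximation with the variance treated as known -- a
# deliberate simplification)
post = [rng.normal(np.mean(g), 1.0 / np.sqrt(len(g)), 50_000)
        for g in groups]

# Hypothesis H: mu1 > mu2 > mu3.
# fit = posterior probability that the constraint holds;
# complexity = its prior probability (1 of 3! = 6 orderings
# under an exchangeable prior on the means).
fit = np.mean((post[0] > post[1]) & (post[1] > post[2]))
complexity = 1 / 6
bf_u = fit / complexity  # Bayes factor of H against the unconstrained model
```

A bf_u well above 1 indicates that the data concentrate posterior mass on the predicted ordering far beyond what the ordering's a priori share would give it.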


2019 ◽  
Author(s):  
Jeffrey Rouder

In their paper 'Why optional stopping is a problem for Bayesians', de Heide and Grünwald (2020) critique the claim that those using Bayes factors can sample until there is sufficient evidence to support one or another model. De Heide and Grünwald's main message is that unless you believe your priors govern the data-generating mechanism, optional stopping may distort inference. We show here that the distortions are not in inference but in interpretation. Their claim is about what happens when the analyst's models do not match the data-generating models, that is, about the behaviour of Bayes factors for models the analyst did not specify. Given that we never know the underlying data-generating mechanism, we ask researchers to interpret Bayes factors for what they are: the relative strength of evidence from data for two hypotheticals. We discuss how researchers should assess the robustness of their results by considering reasonable variation in model specification conditional on observed data rather than on hypothetical truths.


Author(s):  
Fco. Javier Girón ◽  
Carmen del Castillo

A simple solution to the Behrens–Fisher problem based on Bayes factors is presented, and its relation with the Behrens–Fisher distribution is explored. The construction of the Bayes factor is based on a simple hierarchical model, and has a closed form based on the densities of general Behrens–Fisher distributions. Simple asymptotic approximations of the Bayes factor, which are functions of the Kullback–Leibler divergence between normal distributions, are given, and it is also proved to be consistent. Some examples and comparisons are also presented.


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592097262
Author(s):  
Don van Ravenzwaaij ◽  
Alexander Etz

When social scientists wish to learn about an empirical phenomenon, they perform an experiment. When they wish to learn about a complex numerical phenomenon, they can perform a simulation study. The goal of this Tutorial is twofold. First, it introduces how to set up a simulation study using the relatively simple example of simulating from the prior. Second, it demonstrates how simulation can be used to learn about the Jeffreys-Zellner-Siow (JZS) Bayes factor, a currently popular implementation of the Bayes factor employed in the BayesFactor R package and freeware program JASP. Many technical expositions on Bayes factors exist, but these may be somewhat inaccessible to researchers who are not specialized in statistics. In a step-by-step approach, this Tutorial shows how a simple simulation script can be used to approximate the calculation of the Bayes factor. We explain how a researcher can write such a sampler to approximate Bayes factors in a few lines of code, what the logic is behind the Savage-Dickey method used to visualize Bayes factors, and what the practical differences are for different choices of the prior distribution used to calculate Bayes factors.
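To give the flavour of such a sampler, the short script below approximates a Bayes factor for a one-sample test by brute-force Monte Carlo: draw effect sizes from the prior and average the likelihood over the draws. It is a bare-bones illustration under stated assumptions (variance treated as known, a Cauchy(0, 0.707) prior on the effect size), not the full JZS Bayes factor implemented by BayesFactor or JASP.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy one-sample data with a true standardized effect of 0.5
n = 100
data = rng.normal(0.5, 1.0, n)
sx, sxx = data.sum(), (data ** 2).sum()  # sufficient statistics

def loglik(delta):
    """Normal log-likelihood of the data at effect size(s) delta
    (sigma fixed at 1 -- a simplification; the full JZS model
    integrates sigma out)."""
    delta = np.atleast_1d(delta)
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * (sxx - 2 * delta * sx + n * delta ** 2))

# Marginal likelihood under H1: average the likelihood over draws
# from a Cauchy(0, 0.707) prior on delta (an assumed prior here)
draws = rng.standard_cauchy(100_000) * 0.707
ll = loglik(draws)
log_m1 = np.logaddexp.reduce(ll) - np.log(draws.size)

log_m0 = loglik(0.0)[0]          # H0 fixes delta at 0
bf10 = np.exp(log_m1 - log_m0)   # evidence for H1 over H0
```

Averaging in log space with `logaddexp` avoids underflow, since individual likelihood values are astronomically small even for modest n.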


2021 ◽  
Author(s):  
Neil McLatchie ◽  
Manuela Thomae

Thomae and Viki (2013) reported that increased exposure to sexist humour can increase rape proclivity among males, specifically those who score high on measures of Hostile Sexism. Here we report two pre-registered direct replications (N = 530) of Study 2 from Thomae and Viki (2013) and assess replicability via (i) statistical significance, (ii) Bayes factors, (iii) the small-telescope approach, and (iv) an internal meta-analysis across the original and replication studies. The original results were not supported by any of the approaches. Combining the original study and the replications yielded moderate evidence in support of the null over the alternative hypothesis, with a Bayes factor of B = 0.13. In light of the combined evidence, we encourage researchers to exercise caution before claiming that brief exposure to sexist humour increases males' proclivity towards rape, until further pre-registered and open research demonstrates that the effect is reliably reproducible.


2021 ◽  
Author(s):  
John K. Kruschke

In most applications of Bayesian model comparison or Bayesian hypothesis testing, the results are reported in terms of the Bayes factor only, not in terms of the posterior probabilities of the models. Posterior model probabilities are not reported because researchers are reluctant to declare prior model probabilities, a reluctance which in turn stems from uncertainty in the prior. Fortunately, Bayesian formalisms are designed to embrace prior uncertainty, not ignore it. This article provides a novel derivation of the posterior distribution of model probability, and shows many examples. The posterior distribution is useful for making decisions that take into account the uncertainty of the posterior model probability. Benchmark Bayes factors are provided for a spectrum of priors on model probability. R code is posted at https://osf.io/36527/. This framework and these tools will improve the interpretation and usefulness of Bayes factors in all their applications.
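Converting a Bayes factor into a posterior model probability requires only Bayes' rule on the model space, with the prior model probability as an explicit input. The article's uncertainty analysis goes well beyond this, but the basic identity can be sketched as a minimal helper:

```python
def posterior_model_prob(bf10, prior_m1=0.5):
    """Posterior probability of model M1, given the Bayes factor
    BF10 for M1 over M0 and the prior probability of M1.
    Posterior odds = BF10 * prior odds; probability = odds/(1+odds)."""
    prior_odds = prior_m1 / (1.0 - prior_m1)
    posterior_odds = bf10 * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# With equal prior probabilities, BF10 = 3 gives P(M1 | data) = 0.75;
# a sceptical prior of 0.1 pulls the same evidence down to 0.25.
p_equal = posterior_model_prob(3.0)
p_sceptic = posterior_model_prob(3.0, prior_m1=0.1)
```

The example makes the article's point concrete: the same Bayes factor yields very different posterior model probabilities depending on the prior model probability, which is exactly the quantity researchers are reluctant to declare.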


2020 ◽  
Vol 17 (1) ◽  
Author(s):  
Thomas Faulkenberry

In this paper, I develop a formula for estimating Bayes factors directly from minimal summary statistics produced in repeated measures analysis of variance designs. The formula, which requires knowing only the F-statistic, the number of subjects, and the number of repeated measurements per subject, is based on the BIC approximation of the Bayes factor, a common default method for Bayesian computation with linear models. In addition to providing computational examples, I report a simulation study in which I demonstrate that the formula compares favorably to a recently developed, more complex method that accounts for correlation between repeated measurements. The minimal BIC method provides a simple way for researchers to estimate Bayes factors from a minimal set of summary statistics, giving users a powerful index for estimating the evidential value of not only their own data, but also the data reported in published studies.
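The underlying BIC approximation can be sketched in a few lines. The function below implements the generic BIC Bayes factor recovered from an F-statistic (in the spirit of Wagenmakers' 2007 approximation); the paper's minimal formula for repeated measures additionally pins down how df1, df2, and the effective sample size follow from the number of subjects and repeated measurements, so treat the inputs here as an assumption-laden illustration rather than the paper's exact formula.

```python
import math

def bic_bf01(F, df1, df2, n):
    """BIC approximation to the Bayes factor BF01 (null over
    alternative) for an F-test with df1 and df2 degrees of freedom
    and effective sample size n."""
    # BIC difference between the alternative (df1 extra parameters)
    # and the null, with the residual-sum-of-squares ratio
    # recovered from the F-statistic
    delta_bic = df1 * math.log(n) - n * math.log(1 + F * df1 / df2)
    return math.exp(delta_bic / 2)

# Example: F(2, 28) = 4.5 with n = 45 gives BF01 < 1,
# i.e. the evidence favours the alternative.
bf01 = bic_bf01(F=4.5, df1=2, df2=28, n=45)
bf10 = 1 / bf01
```

Only the F-statistic, the degrees of freedom, and the sample size enter the formula, which is what makes the method usable on the summary statistics reported in published studies.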


2021 ◽  
Author(s):  
Herbert Hoijtink ◽  
Xin Gu ◽  
Joris Mulder ◽  
Yves Rosseel

The Bayes factor is increasingly used for the evaluation of hypotheses. These may be traditional hypotheses specified using equality constraints among the parameters of the statistical model of interest, or informative hypotheses specified using equality and inequality constraints. So far, no attention has been given to the computation of Bayes factors from data with missing values. A key property of such a Bayes factor should be that it is based only on the information in the observed values. This paper shows that such a Bayes factor can be obtained using multiple imputations of the missing values.


Author(s):  
Xuan Cao ◽  
Lili Ding ◽  
Tesfaye B. Mersha

In this study, we conduct a comparison of three recent statistical methods for joint variable selection and covariance estimation, with application to detecting expression quantitative trait loci (eQTL) and gene network estimation, and introduce a new hierarchical Bayesian method to be included in the comparison. Unlike the traditional univariate regression approach in eQTL, all four methods correlate phenotypes and genotypes through multivariate regression models that incorporate the dependence information among phenotypes, and use Bayesian multiplicity adjustment to avoid the multiple testing burden raised by traditional multiple testing correction methods. We present the performance of three methods (MSSL – Multivariate Spike and Slab Lasso, SSUR – Sparse Seemingly Unrelated Bayesian Regression, and OBFBF – Objective Bayes Fractional Bayes Factor), along with the proposed JDAG (Joint estimation via a Gaussian Directed Acyclic Graph model) method, through simulation experiments and publicly available HapMap real data, taking asthma as an example. Compared with existing methods, JDAG identified networks with higher sensitivity and specificity under row-wise sparse settings. JDAG requires less execution time in small-to-moderate dimensions, but is not currently applicable to high-dimensional data. The eQTL analysis of the asthma data identified a number of known gene regulations, such as STARD3, IKZF3 and PGAP3, all reported in asthma studies. The code of the proposed method is freely available at GitHub (https://github.com/xuan-cao/Joint-estimation-for-eQTL).

