Bayesian Inference and Testing Any Hypothesis You Can Specify

2018 ◽  
Vol 1 (2) ◽  
pp. 281-295 ◽  
Author(s):  
Alexander Etz ◽  
Julia M. Haaf ◽  
Jeffrey N. Rouder ◽  
Joachim Vandekerckhove

Hypothesis testing is a special form of model selection. Once a pair of competing models is fully defined, their definition immediately leads to a measure of how strongly each model supports the data. The ratio of their support is often called the likelihood ratio or the Bayes factor. Critical in the model-selection endeavor is the specification of the models. In the case of hypothesis testing, it is of the greatest importance that the researcher specify exactly what is meant by a “null” hypothesis as well as the alternative to which it is contrasted, and that these are suitable instantiations of theoretical positions. Here, we provide an overview of different instantiations of null and alternative hypotheses that can be useful in practice, but in all cases the inferential procedure is based on the same underlying method of likelihood comparison. An associated app can be found at https://osf.io/mvp53/. This article is the work of the authors and is reformatted from the original, which was published under a CC-By Attribution 4.0 International license and is available at https://psyarxiv.com/wmf3r/.
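As a minimal illustration of the likelihood-comparison idea (unrelated to the authors' app; the binomial setup, the Beta prior, and the example counts are assumptions chosen for illustration), the following Python sketch computes the Bayes factor for a point null theta = 0.5 against an alternative that places a Beta prior on theta:

```python
# Minimal sketch: Bayes factor for k successes in n Bernoulli trials.
# H0: theta = theta0 (point null) vs. H1: theta ~ Beta(a, b).
# The marginal likelihood under H1 is the beta-binomial probability of the data.
import numpy as np
from scipy.special import betaln, gammaln
from scipy.stats import binom

def bayes_factor_01(k, n, a=1.0, b=1.0, theta0=0.5):
    """BF01: evidence for the point null relative to the Beta(a, b) alternative."""
    log_m0 = binom.logpmf(k, n, theta0)                              # evidence under H0
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    log_m1 = log_choose + betaln(k + a, n - k + b) - betaln(a, b)    # evidence under H1
    return np.exp(log_m0 - log_m1)

if __name__ == "__main__":
    # e.g. 62 successes out of 100 trials (made-up data)
    print(bayes_factor_01(62, 100))   # BF01 < 1 means the data favour the alternative
```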


1994 ◽  
Vol 10 (3-4) ◽  
pp. 596-608 ◽  
Author(s):  
Robert E. McCulloch ◽  
Ruey S. Tsay

This paper proposes a general Bayesian framework for distinguishing between trend- and difference-stationarity. Usually, in model selection, we assume that all of the data were generated by one of the models under consideration. In studying time series, however, we may be concerned that the process is changing over time, so that the preferred model changes over time as well. To handle this possibility, we compute the posterior probabilities of the competing models for each observation. In this way we can see whether different segments of the series behave differently with respect to the competing models. The proposed method is a generalization of the usual odds ratio for model discrimination in Bayesian inference. In application, we employ the Gibbs sampler to overcome the computational difficulties. The procedure is illustrated with a real example.
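The per-observation comparison can be sketched generically as follows; this is not the authors' Gibbs-sampler implementation, and the log-likelihoods, equal prior odds, and function name are invented for illustration:

```python
# Generic sketch: posterior probability of each of two candidate models,
# evaluated observation by observation from per-observation log-likelihoods
# and the prior odds.
import numpy as np

def per_observation_model_probs(loglik_m1, loglik_m2, prior_m1=0.5):
    """Return P(M1 | y_t) for each observation t."""
    loglik_m1 = np.asarray(loglik_m1)
    loglik_m2 = np.asarray(loglik_m2)
    log_odds = np.log(prior_m1 / (1 - prior_m1)) + (loglik_m1 - loglik_m2)
    return 1.0 / (1.0 + np.exp(-log_odds))   # logistic transform of the posterior log odds

# Example with made-up log-likelihoods for 5 observations:
p1 = per_observation_model_probs([-1.2, -0.8, -2.5, -0.9, -3.0],
                                 [-1.0, -1.1, -1.0, -1.2, -0.7])
print(np.round(p1, 3))   # probability favouring model 1 at each time point
```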


2017 ◽  
Author(s):  
Guillermo Campitelli

This tutorial on Bayesian inference targets psychological researchers who are trained in the null hypothesis testing approach and the use of SPSS software. There are a number of excellent tutorials on Bayesian inference, but they assume mathematical knowledge that most psychological researchers do not possess. This tutorial starts from the idea that Bayesian inference is no more difficult than the traditional approach, but that, before being introduced to probability theory notation, the newcomer needs to understand simple probability principles, which can be explained without mathematical formulas or probability notation. For this purpose I use a simple tool, the parameter-data table, to explain how probability theory can easily be used to make inferences in research. I then compare the Bayesian and the null hypothesis testing approaches using the same tool. Only after introducing these principles do I show the formulas and notation and explain how they relate to the parameter-data table. This tutorial is expected to increase the use of Bayesian inference by psychological researchers. Moreover, Bayesian researchers may use it to teach Bayesian inference to undergraduate or postgraduate students.
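A parameter-data table of the kind described above can be written down directly in code; in the following sketch the candidate parameter values, the uniform prior, and the observed count are invented for illustration:

```python
# Sketch of a parameter-data table: rows are candidate parameter values,
# columns are possible data outcomes; cells hold prior(theta) * P(data | theta).
# Conditioning on the observed column and renormalising gives the posterior.
import numpy as np
from scipy.stats import binom

thetas = np.array([0.25, 0.50, 0.75])       # candidate parameter values (illustrative)
prior  = np.array([1/3, 1/3, 1/3])          # uniform prior over the rows
n = 10                                      # number of trials
data_values = np.arange(n + 1)              # possible numbers of successes (columns)

# joint table: P(theta) * P(k | theta) for every row/column combination
table = prior[:, None] * binom.pmf(data_values[None, :], n, thetas[:, None])

observed_k = 7                              # the data actually observed
posterior = table[:, observed_k] / table[:, observed_k].sum()
for theta, p in zip(thetas, posterior):
    print(f"P(theta = {theta:.2f} | k = {observed_k}) = {p:.3f}")
```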


1986 ◽  
Vol 23 (A) ◽  
pp. 187-200
Author(s):  
Yuzo Hosoya

This paper considers the generalized likelihood ratio (GLR) test, or a modification of it, for dealing with nested models. An algorithm for evaluating the critical values and error rates of the canonical tests is provided, and a table of critical values for a class of GLR tests is given. The test proposed in the paper has applications in time-series model selection.
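For orientation, the basic GLR recipe for nested models that the paper builds on can be sketched as follows (this is not Hosoya's canonical-test algorithm; the log-likelihood values and significance level are invented for illustration):

```python
# Basic GLR test for nested models: twice the log-likelihood ratio is compared
# with a chi-square critical value whose degrees of freedom equal the number of
# restricted parameters.
from scipy.stats import chi2

def glr_test(loglik_full, loglik_restricted, df, alpha=0.05):
    """Return the GLR statistic, the chi-square critical value, and the decision."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    crit = chi2.ppf(1 - alpha, df)
    return stat, crit, stat > crit

# Example: fitted log-likelihoods from an AR(2) model and a nested AR(1) model
stat, crit, reject = glr_test(loglik_full=-210.3, loglik_restricted=-214.9, df=1)
print(f"GLR = {stat:.2f}, critical value = {crit:.2f}, reject H0: {reject}")
```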


2015 ◽  
Vol 4 (1) ◽  
Author(s):  
João M. C. Santos Silva ◽  
Silvana Tenreyro ◽  
Frank Windmeijer

In economic applications it is often the case that the variate of interest is non-negative and its distribution has a mass point at zero. Many regression strategies have been proposed to deal with data of this type but, although there has been a long debate in the literature on the appropriateness of different models, formal statistical tests to choose between the competing specifications are not often used in practice. We use the non-nested hypothesis testing framework of Davidson and MacKinnon (Davidson and MacKinnon 1981. “Several Tests for Model Specification in the Presence of Alternative Hypotheses.”
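The Davidson-MacKinnon idea can be sketched generically as follows (this is not the specific test developed in the paper; the linear models, variable names, and simulated data are invented for illustration): augment one model with the fitted values of its rival and test whether the added coefficient is zero.

```python
# Generic sketch of a Davidson-MacKinnon style test for non-nested models:
# regress y on model 1's regressors plus the fitted values from model 2 and
# test the coefficient on those fitted values. A significant coefficient is
# evidence against model 1's specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)         # data generated from "model 1"

X1 = sm.add_constant(x1)                        # model 1: y on x1
X2 = sm.add_constant(x2)                        # model 2: y on x2 (the rival)
fitted2 = sm.OLS(y, X2).fit().fittedvalues

X_aug = np.column_stack([X1, fitted2])          # model 1 augmented with rival's fit
res = sm.OLS(y, X_aug).fit()
print("t-statistic on rival fitted values:", round(res.tvalues[-1], 2))
```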


2014 ◽  
Vol 8 (4) ◽  
Author(s):  
Yin Zhang ◽  
Ingo Neumann

Deformation monitoring usually focuses on detecting whether the monitored objects satisfy given properties (e.g. being stable or not) and on making further decisions to minimise the risks, for example the consequences and costs in case of collapse of artificial objects and/or natural hazards. With this intention, a methodology relying on hypothesis testing and utility theory is reviewed in this paper. The main idea of utility theory is to judge each possible outcome with a utility value. The presented methodology makes it possible to minimise the risk of an individual monitoring project by considering the costs and consequences of all possible situations within the decision process. It is not the danger itself (that the monitored object may collapse) that can be reduced; rather, the risk (the utility values multiplied by the danger) can be described more appropriately, and therefore better decisions can be made. In particular, the opportunity to minimise the risk through the measurement process is a key issue. In this paper, the application of the methodology to two classical cases in hypothesis testing is discussed in detail: 1) both probability density functions (pdfs) of the tested objects under the null and alternative hypotheses are known; 2) only the pdf under the null hypothesis is known and the alternative hypothesis is treated as the pure negation of the null hypothesis. Afterwards, a practical example in deformation monitoring is introduced and analysed. Additionally, the way in which the magnitudes of the utility values (the consequences of a decision) influence the decision is considered and discussed at the end.
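The decision rule can be sketched in a few lines; the hypotheses, posterior probabilities, and utility values below are invented for illustration and are not taken from the paper:

```python
# Toy sketch of an expected-utility decision: for each decision, weight the
# consequences by the posterior probabilities of the hypotheses and pick the
# decision with the largest expected utility (all numbers invented).
import numpy as np

hypotheses = ["H0: object stable", "H1: object deforming"]
posterior = np.array([0.90, 0.10])               # P(hypothesis | monitoring data)

# utilities[d, h]: consequence of decision d if hypothesis h is true
utilities = np.array([
    [   0.0, -1000.0],   # "take no action": catastrophic if H1 is true
    [ -20.0,   -50.0],   # "reinforce / keep monitoring": moderate cost either way
])
decisions = ["take no action", "reinforce / keep monitoring"]

expected = utilities @ posterior                 # expected utility of each decision
best = int(np.argmax(expected))
print(dict(zip(decisions, np.round(expected, 1))))
print("chosen decision:", decisions[best])
```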


2021 ◽  
Vol 8 ◽  
Author(s):  
Vincent A. Voelz ◽  
Yunhui Ge ◽  
Robert M. Raddi

Bayesian Inference of Conformational Populations (BICePs) is an algorithm developed to reconcile simulated ensembles with sparse experimental measurements. The Bayesian framework of BICePs enables population reweighting as a post-simulation processing step, with several advantages over existing methods, including the proper use of reference potentials and the estimation of a Bayes factor-like quantity, the BICePs score, for model selection. Here, we summarize the theory underlying this method in the context of related algorithms, review the history of BICePs applications to date, and discuss current shortcomings along with future plans for improvement.
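As a generic illustration of post-simulation reweighting (this is not the BICePs software or its API; the populations, observables, and Gaussian error model are invented), candidate population weights can be scored by how well the reweighted ensemble average matches a sparse measurement, while a prior keeps them close to the simulated populations:

```python
# Generic reweighting sketch (NOT BICePs): sample candidate population weights
# from a Dirichlet prior centred on the simulated populations, score them by a
# Gaussian likelihood on one sparse observable, and average with those scores.
import numpy as np

rng = np.random.default_rng(1)
sim_populations = np.array([0.6, 0.3, 0.1])     # populations from simulation
obs_per_state   = np.array([2.0, 5.0, 9.0])     # predicted observable per state
obs_experiment, sigma = 4.0, 0.5                # sparse experimental measurement

candidates = rng.dirichlet(50 * sim_populations, size=20000)  # prior draws
pred = candidates @ obs_per_state                # reweighted ensemble averages
log_like = -0.5 * ((pred - obs_experiment) / sigma) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()

reweighted = post @ candidates                   # posterior-mean populations
print("simulated populations: ", sim_populations)
print("reweighted populations:", np.round(reweighted, 3))
```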


2019 ◽  
Vol 10 (2) ◽  
pp. 691-707
Author(s):  
Jason C. Doll ◽  
Stephen J. Jacquemin

Researchers often test ecological hypotheses relating to a myriad of questions about assemblage structure, population dynamics, demography, abundance, growth rate, and more, using mathematical models that explain trends in data. To aid the evaluation process when faced with competing hypotheses, we employ statistical methods to evaluate the validity of these multiple hypotheses with the goal of deriving the most robust conclusions possible. In fisheries management and ecology, frequentist methodologies have largely dominated this approach. However, in recent years, researchers have increasingly used Bayesian inference methods to estimate model parameters. Our aim with this perspective is to provide the practicing fisheries ecologist with an accessible introduction to Bayesian model selection. Here we discuss Bayesian inference methods for model selection in the context of fisheries management and ecology, with empirical examples to guide researchers in the use of these methods. We discuss three methods for selecting among competing models: the Bayes factor for comparing two models, and, for more complex models, the Watanabe-Akaike information criterion (WAIC) and leave-one-out cross-validation (LOO-CV). We also describe what kinds of information to report when conducting Bayesian inference. We conclude with final thoughts about these model selection techniques.
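As a compact illustration, WAIC can be computed from a matrix of pointwise log-likelihoods (posterior draws by observations); the toy matrix below is simulated, and in practice a package such as ArviZ would normally compute WAIC and LOO-CV from a fitted model:

```python
# Sketch: WAIC from a matrix of pointwise log-likelihoods with shape
# (posterior draws, observations), as produced by a fitted Bayesian model.
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    """WAIC on the deviance scale: -2 * (lppd - p_waic)."""
    loglik = np.asarray(loglik)                              # shape (S draws, N obs)
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))     # log pointwise predictive density
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))          # effective number of parameters
    return -2.0 * (lppd - p_waic)

# Toy example: fake log-likelihood draws for 4 observations from 1000 posterior draws
fake = np.random.default_rng(0).normal(loc=-1.0, scale=0.1, size=(1000, 4))
print(round(waic(fake), 2))
```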

