point null hypothesis
Recently Published Documents


TOTAL DOCUMENTS: 33 (Five years: 8)

H-INDEX: 9 (Five years: 1)

2020 ◽  
Vol 18 (1) ◽  
pp. 2-27
Author(s):  
Miodrag M. Lovric

In frequentist statistics, significance tests and confidence intervals for a point null hypothesis are harmonious procedures that lead to the same conclusion. This is not the case in the Bayesian framework: an inference about a point null hypothesis based on the Bayes factor may contradict one based on the Bayesian credible interval. Bayesian suggestions to test point nulls using credible intervals are misleading and should be dismissed. A null hypothesized value may lie outside a credible interval yet be supported by the Bayes factor (a Type I conflict), or, conversely, the null value may lie inside a credible interval yet not be supported by the Bayes factor (a Type II conflict). Two computer programs in R have been developed that confirm the existence of a countably infinite number of cases for which Bayesian credible intervals are not compatible with Bayesian hypothesis testing.
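
The paper's two R programs are not reproduced here, but the Type I conflict it describes (a Jeffreys-Lindley-type situation where the null value falls outside the 95% credible interval yet the Bayes factor supports the null) can be sketched in a few lines of Python. All numbers below are illustrative assumptions: a normal mean with known sigma, H0: mu = 0, and a N(0, tau^2) prior under H1.

```python
from math import sqrt, exp, pi

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# Illustrative setting: normal data, known sigma, testing H0: mu = 0
# against H1: mu ~ N(0, tau^2).
n, sigma, tau = 10_000, 1.0, 1.0
xbar = 0.0205                  # sample mean; z = 2.05, just "significant"
se2 = sigma ** 2 / n           # variance of the sample mean

# Bayes factor BF01 = m(xbar | H0) / m(xbar | H1); both marginals are normal.
bf01 = normal_pdf(xbar, 0.0, se2) / normal_pdf(xbar, 0.0, tau ** 2 + se2)

# 95% credible interval for mu under H1 (conjugate normal posterior).
post_var = 1.0 / (1.0 / tau ** 2 + n / sigma ** 2)
post_mean = xbar * (tau ** 2 / (tau ** 2 + se2))
lo = post_mean - 1.96 * sqrt(post_var)
hi = post_mean + 1.96 * sqrt(post_var)

print(f"BF01 = {bf01:.1f}")               # > 1: the data favour H0
print(f"95% CrI = [{lo:.4f}, {hi:.4f}]")  # excludes 0: interval "rejects" H0
```

With these numbers the Bayes factor favours the null by more than an order of magnitude while zero lies outside the credible interval, which is exactly the incompatibility the paper's programs enumerate.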


2019 ◽  
Author(s):  
Bence Palfi ◽  
Zoltan Dienes

Existing calculators of the Bayes factor typically represent the prediction of the null hypothesis with a point null model (e.g., Dienes & Mclatchie, 2018; Morey & Rouder, 2015; Wagenmakers et al., 2018). The point null model remains a good approximation of the null hypothesis as long as the standard error of the estimate of interest is large relative to the range of theoretically negligible effect sizes. This assumption may be violated, however, when the data set is large and the standard error becomes comparable to, or smaller than, the range of effect sizes that the theory deems negligible. In this case, a conclusion based on the point null model may not be scientifically valid. Here, we introduce a case study to demonstrate why it is critical to calculate the Bayes factor with an interval null hypothesis rather than a point null hypothesis when the smallest meaningful effect size is not approximately zero relative to the standard error.
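
This is not the authors' calculator, but the breakdown of the point null approximation can be sketched by comparing the marginal likelihood of the data under a point null with that under an interval null, mu ~ Uniform[-delta, delta]. The half-width delta, the means, and the standard errors below are all assumed for illustration.

```python
from math import sqrt, exp, pi, erf

def npdf(x, var):
    """Density of N(0, var) at x."""
    return exp(-x * x / (2 * var)) / sqrt(2 * pi * var)

def ncdf(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def marginals(xbar, se, delta):
    """Marginal likelihoods of the sample mean under a point null (mu = 0)
    and an interval null (mu ~ Uniform[-delta, delta])."""
    point = npdf(xbar, se ** 2)
    interval = (ncdf((xbar + delta) / se) - ncdf((xbar - delta) / se)) / (2 * delta)
    return point, interval

delta = 0.05  # assumed: effects in [-0.05, 0.05] are theoretically negligible

# Small sample: se is large relative to delta -> the two nulls agree.
p_small, i_small = marginals(xbar=0.30, se=0.50, delta=delta)

# Big data: se = 0.01 << delta, and xbar = 0.04 lies inside the negligible band.
# The point null is 4 standard errors away and looks refuted; the interval
# null fits the data well.
p_big, i_big = marginals(xbar=0.04, se=0.01, delta=delta)

print(p_small / i_small)  # ~ 1: point null is a fine approximation
print(i_big / p_big)      # >> 1: the approximation breaks down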


2019 ◽  
Author(s):  
Eric-Jan Wagenmakers ◽  
Michael David Lee ◽  
Jeffrey N. Rouder ◽  
Richard Donald Morey

The principle of predictive irrelevance states that when two competing models predict a data set equally well, that data set cannot be used to discriminate the models and --for that specific purpose-- the data set is evidentially irrelevant. To highlight the ramifications of the principle, we first show how a single binomial observation can be irrelevant in the sense that it carries no evidential value for discriminating the null hypothesis $\theta = 1/2$ from a broad class of alternative hypotheses that allow $\theta$ to be between 0 and 1. In contrast, the Bayesian credible interval suggests that a single binomial observation does provide some evidence against the null hypothesis. We then generalize this paradoxical result to infinitely long data sequences that are predictively irrelevant throughout. Examples feature a test of a binomial rate and a test of a normal mean. These maximally uninformative data (MUD) sequences yield credible intervals and confidence intervals that are certain to exclude the point under test as the sequence lengthens. The resolution of this paradox requires the insight that interval estimation methods --and, consequently, p values-- may not be used for model comparison involving a point null hypothesis.
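
The single-observation case can be worked out exactly. Under H0 the success probability is 1/2; under H1 with a uniform prior, the marginal probability of a success is the integral of theta over [0, 1], which is also 1/2, so the Bayes factor is exactly 1. Yet the posterior under H1 is Beta(2, 1) and shifts mass toward large theta. A minimal sketch (not code from the paper) using exact rational arithmetic:

```python
from fractions import Fraction

# One Bernoulli observation, x = 1. H0: theta = 1/2; H1: theta ~ Uniform(0, 1).
p_x_given_h0 = Fraction(1, 2)

# Marginal under H1: integral of theta over [0, 1] = 1/2, exactly.
p_x_given_h1 = Fraction(1, 2)

bf01 = p_x_given_h0 / p_x_given_h1
print(bf01)  # 1: the observation is predictively irrelevant

# Yet the posterior under H1 is Beta(2, 1), whose CDF is F(t) = t^2, so the
# posterior mass above 1/2 rises from the prior's 1/2 to 3/4: the estimation
# side "sees" evidence against theta = 1/2 that the Bayes factor does not.
post_mass_above_half = 1 - Fraction(1, 2) ** 2
print(post_mass_above_half)  # 3/4
```

This is the seed of the paradox: the evidential comparison is exactly flat while the posterior (and hence any credible interval built from it) moves away from the null value.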


Econometrics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 21
Author(s):  
Jae H. Kim ◽  
Andrew P. Robinson

This paper presents a brief review of interval-based hypothesis testing, widely used in biostatistics, medical science, and psychology, namely tests for minimum effect, equivalence, and non-inferiority. We present the methods in the contexts of a one-sample t-test and a test for linear restrictions in a regression. We present applications in testing for market efficiency, validity of asset-pricing models, and persistence of economic time series. We argue that, from the point of view of economics and finance, interval-based hypothesis testing provides more sensible inferential outcomes than those based on a point null hypothesis. We propose that interval-based tests be routinely employed in empirical research in business, as an alternative to point null hypothesis testing, especially in the new era of big data.
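
As a concrete instance of interval-based testing, an equivalence test for a one-sample mean can be run as two one-sided t-tests (TOST) against the endpoints of an equivalence region [-delta, delta]. The sketch below is not taken from the paper; the equivalence margin and the (deterministic, mean-zero) sample are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Equivalence (TOST) for a one-sample mean:
# H0: |mu| >= delta  vs  H1: |mu| < delta.
delta = 0.10                      # assumed equivalence margin
x = np.linspace(-0.3, 0.3, 101)   # deterministic sample with mean exactly 0

n = x.size
xbar = x.mean()
se = x.std(ddof=1) / np.sqrt(n)

# Two one-sided t tests against the interval endpoints.
t_lower = (xbar + delta) / se     # tests H0a: mu <= -delta
t_upper = (xbar - delta) / se     # tests H0b: mu >= +delta
p_lower = stats.t.sf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)
p_tost = max(p_lower, p_upper)    # reject |mu| >= delta if p_tost < alpha

print(f"TOST p = {p_tost:.2e}")   # small: the mean lies within (-delta, delta)
```

Unlike a point null test, which can only fail to reject, this procedure can positively establish that an effect is negligibly small, which is the kind of inferential outcome the paper advocates for business and finance applications.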


2019 ◽  
Author(s):  
Henk Kiers ◽  
Jorge Tendeiro

Null Hypothesis Bayesian Testing (NHBT) has been proposed as an alternative to Null Hypothesis Significance Testing (NHST). Whereas NHST has a close link to parameter estimation via confidence intervals, the link of NHBT with Bayesian estimation via a posterior distribution is less straightforward, but it does exist and has recently been reiterated by Rouder, Haaf, and Vandekerckhove (2018). It hinges on a prior that combines a point mass probability with a probability density function (denoted the spike-and-slab prior). In the present paper it is first carefully explained how the spike-and-slab prior is defined, and how results can be derived for which proofs were not given in Rouder et al. (2018). Next, it is shown that this spike-and-slab prior can be approximated by a pure probability density function with a rectangular peak around the center that towers high above the remainder of the density function. Finally, we indicate how this ‘hill-and-chimney’ prior may in turn be approximated by fully continuous priors. In this way it is shown that NHBT results can be approximated well by results from estimation using a strongly peaked prior, and it is noted that the estimation itself offers more than merely the posterior odds ratio on which NHBT is based. Thus, the estimation approach complies with the strong APA requirement of not just reporting testing results but also offering effect size information. It also offers a transparent perspective on the NHBT approach as one employing a prior with a strong peak around the chosen point null hypothesis value.
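
The spike-to-chimney approximation can be checked numerically. Under a spike-and-slab prior with equal prior odds, the posterior odds for H0 equal the Bayes factor; replacing the spike by a narrow rectangle of half-width eps and computing the posterior odds of the chimney region should give nearly the same number. The likelihood, slab, and all constants below are illustrative assumptions, not the paper's examples.

```python
from math import sqrt, exp, pi
from scipy.integrate import quad

def lik(theta, xbar=1.0, se=0.5):
    """Normal likelihood of an observed mean xbar at parameter theta (assumed data)."""
    return exp(-(xbar - theta) ** 2 / (2 * se ** 2)) / sqrt(2 * pi * se ** 2)

def slab(theta):
    """N(0, 1) slab density."""
    return exp(-theta ** 2 / 2) / sqrt(2 * pi)

# Spike-and-slab: prior = 0.5 * (point mass at 0) + 0.5 * N(0, 1).
# With prior odds 1, the posterior odds for H0 equal BF01.
m1, _ = quad(lambda t: lik(t) * slab(t), -10, 10)
spike_odds = lik(0.0) / m1

# Hill-and-chimney: replace the spike by a rectangle of half-width eps.
# Posterior odds of landing in the chimney, neglecting O(eps) cross terms
# (the slab's own mass inside the chimney, and the chimney's exclusion
# from the slab integral).
eps = 1e-3
chimney, _ = quad(lambda t: lik(t) / (2 * eps), -eps, eps)
chimney_odds = chimney / m1

print(spike_odds, chimney_odds)  # nearly identical for small eps
```

As eps shrinks, the chimney odds converge to the spike-and-slab posterior odds, which is the sense in which NHBT results are recovered from estimation under a strongly peaked continuous prior.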


2017 ◽  
Author(s):  
Matt Williams ◽  
Rasmus A. Bååth ◽  
Michael Carl Philipp

This paper discusses Bayes factors as inferential tools that can directly replace NHST in the day-to-day work of developmental researchers. A Bayes factor indicates the degree to which the observed data should increase (or decrease) our support for one hypothesis in comparison to another. This framework allows researchers not only to reject null hypotheses but also to produce evidence in their favor. Bayes factor alternatives to common tests used by developmental psychologists are available in easy-to-use software. However, we note that Bayesian estimation (rather than Bayes factors) may be a more appealing and general framework when a point null hypothesis is a priori implausible.
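
The software the abstract refers to is not reproduced here, but the core quantity is easy to compute for a simple case. For a point null nested inside the alternative, the Savage-Dickey density ratio gives BF01 as the posterior density at the null value divided by the prior density at that value. The counts below are made up for illustration.

```python
from scipy.stats import beta

# Savage-Dickey density ratio for a binomial rate.
# Illustrative data: 7 successes in 10 trials; H0: theta = 0.5;
# H1: theta ~ Beta(1, 1), i.e. a uniform prior.
k, n, theta0 = 7, 10, 0.5

prior_at_null = beta.pdf(theta0, 1, 1)             # uniform prior: density 1.0
post_at_null = beta.pdf(theta0, 1 + k, 1 + n - k)  # posterior is Beta(8, 4)
bf01 = post_at_null / prior_at_null

print(f"BF01 = {bf01:.3f}")  # ~1.29: these data barely move support either way
```

A BF01 near 1 illustrates the abstract's point: unlike a p value, the Bayes factor can express that the data are roughly uninformative, rather than forcing a reject/fail-to-reject dichotomy.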

