Statistical Inference: Significance Testing

2021 · pp. 161-210
Author(s): Alan Agresti, Maria Kateri


2019
Author(s): Jan Sprenger

The replication crisis poses an enormous challenge to the epistemic authority of science and to the logic of statistical inference in particular. Two prominent features of Null Hypothesis Significance Testing (NHST) arguably contribute to the crisis: the lack of guidance for interpreting non-significant results and the impossibility of quantifying support for the null hypothesis. In this paper, I argue that popular alternatives to NHST, such as confidence intervals and Bayesian inference, also fail to yield a satisfactory logic for evaluating hypothesis tests. As an alternative, I motivate and explicate the concept of corroboration of the null hypothesis. Finally, I show how degrees of corroboration provide an interpretation of non-significant results, combat publication bias, and mitigate the replication crisis.
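To make the contrast concrete (an illustration, not from the paper): a non-significant p-value leaves the status of the null open, whereas a Bayes factor can quantify support for it. A minimal sketch for binomial data, assuming hypothetical counts and a Uniform(0, 1) prior on the success probability under H1:

```python
# Illustrative sketch (not from the paper): quantifying support for H0
# with a Bayes factor, versus a non-significant p-value under NHST.
# Model: k successes in n trials; H0: theta = 0.5 vs H1: theta ~ Uniform(0, 1).
from math import comb
from scipy.stats import binomtest

k, n = 52, 100  # hypothetical data, close to the null value

# NHST: a non-significant p-value does NOT quantify support for H0.
p_value = binomtest(k, n, p=0.5).pvalue

# Bayes factor BF01 = P(data | H0) / P(data | H1).
# With a Uniform(0, 1) prior, the marginal likelihood under H1 is 1 / (n + 1),
# so BF01 has the closed form (n + 1) * C(n, k) * 0.5^n.
bf01 = (n + 1) * comb(n, k) * 0.5 ** n

print(f"p-value: {p_value:.3f} (non-significant, but silent about H0)")
print(f"BF01:    {bf01:.2f} (data are {bf01:.1f}x more likely under H0 than H1)")
```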


2021 · Vol. 7(1)
Author(s): Matthias Haucke, Jonas Miosga, Rink Hoekstra, Don van Ravenzwaaij

A majority of statistically educated scientists draw incorrect conclusions from the most commonly used statistical technique: null hypothesis significance testing (NHST). Frequentist results are often claimed to be misinterpreted as Bayesian outcomes, which suggests that a Bayesian framework may better fit the inferences researchers frequently want to make (Briggs, 2012). The current study set out to test this proposition. First, we investigated whether there is a discrepancy between what researchers think they can conclude and what they want to be able to conclude from NHST. Second, we investigated to what extent researchers want to incorporate prior study results and their personal beliefs into their statistical inference. Results show the expected discrepancy between what researchers think they can conclude from NHST and what they want to be able to conclude. Furthermore, researchers were interested in incorporating prior study results, but not their personal beliefs, into their statistical inference.
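As an illustration of what incorporating prior study results can look like (a sketch of one standard approach, not part of the study itself): conjugate Beta-Binomial updating, where the counts of an earlier study set the prior for the current analysis. All counts are hypothetical.

```python
# Illustrative sketch: folding a prior study's results into the current
# inference via conjugate Beta-Binomial updating. All counts are hypothetical.
from scipy.stats import beta

# Prior study: 30 successes in 80 trials. Starting from a flat Beta(1, 1),
# this yields a Beta(31, 51) prior for the current analysis.
a_prior, b_prior = 1 + 30, 1 + 50

# Current study: 45 successes in 100 trials.
k, n = 45, 100
a_post, b_post = a_prior + k, b_prior + (n - k)

posterior = beta(a_post, b_post)
lo, hi = posterior.interval(0.95)
print(f"Posterior mean: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```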


2021 · Vol. 70(2) · pp. 123-133
Author(s): Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, Claudia Becker

It has often been noted that the “null-hypothesis-significance-testing” (NHST) framework is an inconsistent hybrid of Neyman-Pearson’s “hypothesis testing” and Fisher’s “significance testing” that almost inevitably causes misinterpretations. To facilitate a realistic assessment of the potential and the limits of statistical inference, we briefly recall widespread inferential errors and outline the two original approaches of these famous statisticians. Based on an understanding of their irreconcilable perspectives, we propose “going back to the roots” and using the initial evidence in the data, in terms of the size and the uncertainty of the estimate, for the purpose of statistical inference. Finally, we make six propositions that we hope will contribute to improving the quality of inferences in future research.
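A minimal sketch of what reporting “the size and the uncertainty of the estimate” can look like in practice (an illustration of the general idea, not code from the paper; all data are simulated and hypothetical):

```python
# Illustrative sketch: report the estimated effect size together with its
# uncertainty (a 95% confidence interval) instead of a bare p-value verdict.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
treatment = rng.normal(5.3, 2.0, size=40)  # hypothetical outcomes
control = rng.normal(4.6, 2.0, size=40)

diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
df = len(treatment) + len(control) - 2  # simple pooled approximation
t_crit = stats.t.ppf(0.975, df)

print(f"Estimated difference: {diff:.2f}")
print(f"95% CI: ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")
```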


1970 · Vol. 15(6) · pp. 402, 404-405
Author(s): Robert E. Dear
