Confidence Intervals and Hypothesis Testing

Author(s):  
Pierre Lafaye de Micheaux ◽  
Rémy Drouilhet ◽  
Benoit Liquet

2007 ◽  
Vol 22 (3) ◽  
pp. 637-650 ◽  
Author(s):  
Ian T. Jolliffe

Abstract When a forecast is assessed, a single value for a verification measure is often quoted. This is of limited use, as it needs to be complemented by some idea of the uncertainty associated with the value. If this uncertainty can be quantified, it is then possible to make statistical inferences based on the value observed. There are two main types of inference: confidence intervals can be constructed for an underlying “population” value of the measure, or hypotheses can be tested regarding the underlying value. This paper will review the main ideas of confidence intervals and hypothesis tests, together with the less well known “prediction intervals,” concentrating on aspects that are often poorly understood. Comparisons will be made between different methods of constructing confidence intervals—exact, asymptotic, bootstrap, and Bayesian—and the difference between prediction intervals and confidence intervals will be explained. For hypothesis testing, multiple testing will be briefly discussed, together with connections between hypothesis testing, prediction intervals, and confidence intervals.
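One of the interval-construction methods the abstract compares, the bootstrap, can be sketched in a few lines: resample the data with replacement, recompute the statistic, and take percentiles of the resampled values. The data and function names below are invented for illustration and are not taken from the paper.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap interval: resample with replacement, recompute
    the statistic, and take the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical forecast-error sample (illustrative only):
errors = [0.2, -0.1, 0.4, 0.0, 0.3, -0.2, 0.1, 0.5, 0.2, -0.3]
lo, hi = bootstrap_ci(errors)
```

Unlike the exact and asymptotic intervals discussed in the paper, this percentile interval makes no distributional assumption about the verification measure, at the cost of Monte Carlo variability.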


1992 ◽  
Vol 13 (9) ◽  
pp. 553-555 ◽  
Author(s):  
Leon F. Burmeister ◽  
David Birnbaum ◽  
Samuel B. Sheps

A variety of statistical tests of a null hypothesis commonly are used in biomedical studies. While these tests are the mainstay for justifying inferences drawn from data, they have important limitations. This report discusses the relative merits of two different approaches to data analysis and display, and recommends the use of confidence intervals rather than classic hypothesis testing. Formulae for a confidence interval surrounding the point estimate of an average value take the form: d = ±zσ/√n, where “d” represents the average difference between central and extreme values, “z” is derived from the density function of a known distribution, and “σ/√n” represents the magnitude of sampling variability. Transposition of terms yields the familiar formula for hypothesis testing of normally distributed data (without applying the finite population correction factor): z = d/(σ/√n).
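The two formulae in the abstract can be checked numerically. The sample values below are invented for illustration; only the formulae d = ±zσ/√n and z = d/(σ/√n) come from the text.

```python
import math

def z_statistic(d, sigma, n):
    """z = d / (sigma / sqrt(n)): the hypothesis-testing form of the formula."""
    return d / (sigma / math.sqrt(n))

def confidence_interval(xbar, sigma, n, z=1.96):
    """Two-sided interval xbar +/- z * sigma / sqrt(n) (95% when z = 1.96)."""
    half_width = z * sigma / math.sqrt(n)
    return (xbar - half_width, xbar + half_width)

# Illustrative numbers: sample mean 5.0, sigma 2.0, n = 100.
ci_lo, ci_hi = confidence_interval(5.0, 2.0, 100)
z = z_statistic(5.0 - 4.5, 2.0, 100)  # testing H0: mu = 4.5
```

Transposing between the two forms is exactly the point the abstract makes: the same quantities yield either an interval around the estimate or a test statistic against a hypothesized value.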


Ecology ◽  
2004 ◽  
Vol 85 (10) ◽  
pp. 2895-2900 ◽  
Author(s):  
Moshe Kiflawi ◽  
Matthew Spencer

1989 ◽  
Vol 33 (3) ◽  
pp. 220-241 ◽  
Author(s):  
Patricia L. Busk ◽  
Leonard A. Marascuilo

In recent years, the loglinear model has been proposed and used for analysing frequency data in multidimensional contingency tables. The primary focus of the literature has been on model building and only secondarily on hypothesis testing and estimation. This paper extends Kennedy's (1988) description by presenting post hoc procedures for statistically evaluating treatment effects, contrasts, and confidence intervals. It illustrates methods for main effect and interaction contrasts and pays special attention to odds ratios and their interval estimates. Procedures are described for treating the variables as interdependent and for the case where there are independent and dependent variables—both ordered and unordered.
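An odds-ratio interval estimate of the kind the abstract emphasizes can be sketched with the standard Wald interval on the log scale. The 2×2 counts below are hypothetical, not from the paper.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI:
    exp(ln(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lo, hi

# Hypothetical cell counts (illustrative only):
or_hat, or_lo, or_hi = odds_ratio_ci(20, 10, 10, 20)
```

Working on the log scale is what links this interval to the loglinear model: the log odds ratio is a linear contrast of the fitted log expected frequencies, so its standard error follows directly from the model.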

