Significance tests in clinical research—Challenges and pitfalls
Abstract

Background
Statistical analyses are used to help understand the practical significance of the findings in a clinical study. Many clinical researchers appear to have limited knowledge of how to perform appropriate statistical analyses and of what the results actually mean.

Methods
This focused review is based on long experience in supervising clinicians on statistical analysis and in advising editors of scientific journals on the quality of the statistical analyses applied in scientific reports evaluated for publication.

Results
Basic facts on elementary statistical analyses are presented, and common misunderstandings are elucidated. Efficacy estimates, the effect of sample size, and confidence intervals for effect estimates are reviewed, and the difference between statistical significance and clinical relevance is highlighted. The weaknesses of p-values, and misunderstandings in how to interpret them, are illustrated with practical examples.

Conclusions and recommendations
Some very important questions need to be answered before initiating a clinical trial. What is the research question? To which patients should the result be generalised? Is the number of patients sufficient to draw a valid conclusion? When data are analysed, the number of (preplanned) significance tests should be kept small and post hoc analyses should be avoided. It should also be remembered that the clinical relevance of a finding cannot be assessed by the p-value. Thus, effect estimates and corresponding 95% confidence intervals should always be reported.
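To make the final recommendation concrete, the sketch below computes an effect estimate (difference in mean response between two groups) together with its 95% confidence interval. The data are entirely hypothetical illustrative values, and the interval uses a simple normal approximation (z = 1.96) rather than the t-quantile that a real analysis with samples this small would call for.

```python
import math
import statistics

# Hypothetical data: reduction in systolic blood pressure (mmHg)
# in a treatment group vs. a control group (illustrative only).
treatment = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 8.9, 12.7, 11.8, 10.1]
control = [7.4, 6.9, 8.8, 5.2, 7.1, 6.0, 8.3, 7.7, 6.5, 7.9]

# Effect estimate: difference in mean reduction between the groups.
effect = statistics.mean(treatment) - statistics.mean(control)

# Standard error of the difference (allowing unequal variances).
se = math.sqrt(statistics.variance(treatment) / len(treatment)
               + statistics.variance(control) / len(control))

# 95% confidence interval via the normal approximation (z = 1.96);
# a t-quantile would be more appropriate for samples this small.
lower, upper = effect - 1.96 * se, effect + 1.96 * se

print(f"Effect estimate: {effect:.2f} mmHg (95% CI {lower:.2f} to {upper:.2f})")
```

Reporting the interval (here, roughly 3 to 5.5 mmHg) conveys both the size of the effect and its precision, which a p-value alone cannot: a reader can judge whether even the lower bound would be clinically relevant.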