Statistical Principles for Clinical Trials

1998 ◽  
Vol 26 (2) ◽  
pp. 57-65 ◽  
Author(s):  
R Kay

If a trial is to be well designed, and the conclusions drawn from it valid, a thorough understanding of the benefits and pitfalls of basic statistical principles is required. When setting up a trial, an appropriate sample-size calculation is vital; if the initial calculation is inaccurate, the trial results will be unreliable. The principle of intent-to-treat in comparative trials is examined. Randomization as the method of allocating patients to treatment is essential: it protects against biased allocation and keeps the mix of patients comparable across treatment groups. Once trial results are available, the correct calculation and interpretation of the P-value is important. Its limitations are discussed, and the use of the confidence interval to help draw valid conclusions about the clinical value of treatments is explored.
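As a hedged illustration of the last point (a minimal sketch, not part of the original article): two trials can yield the same P-value yet, once their confidence intervals are inspected, support very different conclusions about the clinical value of a treatment.

```python
# A minimal illustration (not from the article) of the P-value's limitation
# noted above: two trials with the same P-value can carry very different
# messages about the clinical value of a treatment once the confidence
# interval is examined.  Illustrative numbers only.
from scipy.stats import norm

def two_arm_summary(diff, se):
    """Two-sided P-value and 95% confidence interval for an estimated
    treatment difference `diff` with standard error `se` (normal theory)."""
    z = diff / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    return round(p_value, 3), tuple(round(x, 2) for x in ci)

# Both give P ~ 0.046, but the first interval rules out large effects while
# the second is compatible with anything from a negligible to a large benefit.
print(two_arm_summary(diff=0.2, se=0.1))   # (0.046, (0.0, 0.4))
print(two_arm_summary(diff=2.0, se=1.0))   # (0.046, (0.04, 3.96))
```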

1994 ◽  
Vol 13 (8) ◽  
pp. 859-870 ◽  
Author(s):  
Robert P. McMahon ◽  
Michael Proschan ◽  
Nancy L. Geller ◽  
Peter H. Stone ◽  
George Sopko

2018 ◽  
Vol 17 (3) ◽  
pp. 214-230 ◽  
Author(s):  
Frank Miller ◽  
Sarah Zohar ◽  
Nigel Stallard ◽  
Jason Madan ◽  
Martin Posch ◽  
...  

2019 ◽  
Vol 16 (5) ◽  
pp. 531-538 ◽  
Author(s):  
David Alan Schoenfeld ◽  
Dianne M Finkelstein ◽  
Eric Macklin ◽  
Neta Zach ◽  
David L Ennist ◽  
...  

Background/Aims: For single-arm trials, a treatment is evaluated by comparing an outcome estimate to historically reported outcome estimates. Such a historically controlled trial is often analyzed as if the estimates from previous trials were known without variation and as if there were no trial-to-trial variation in their estimands. We develop a test of treatment efficacy and a sample size calculation for historically controlled trials that account for these sources of variation.
Methods: We fit a Bayesian hierarchical model, providing a sample from the posterior predictive distribution of the outcome estimand of a new trial, which, along with the standard error of the estimate, can be used to calculate the probability that the estimate exceeds a threshold. We then calculate criteria for statistical significance as a function of the standard error of the new trial, and calculate sample size as a function of the difference to be detected. We apply these methods to clinical trials for amyotrophic lateral sclerosis using data from the placebo groups of 16 trials.
Results: We find that, when attempting to detect the small to moderate effect sizes usually assumed in amyotrophic lateral sclerosis clinical trials, historically controlled trials would require a greater total number of patients than concurrently controlled trials; only when an effect size is extraordinarily large is a historically controlled trial a reasonable alternative. We also show that utilizing patient-level data for prognostic covariates can reduce the sample size required for a historically controlled trial.
Conclusion: This article quantifies when historically controlled trials would not provide any sample size advantage, despite dispensing with a control group.
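The test described in Methods can be illustrated with a simple normal-normal hierarchical model. The code below is a minimal sketch under assumed placeholder data (it is not the authors' implementation and does not use the 16 ALS placebo groups): a Gibbs sampler yields draws from the posterior predictive distribution of a new trial's estimand, from which a significance threshold and the power at a given standard error can be computed.

```python
# Minimal sketch: Bayesian hierarchical model for a historically controlled
# test.  All inputs (historical estimates y, their standard errors s, the
# per-patient SD sigma, the effect delta) are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(-1.0, 0.15, size=16)    # placeholder historical estimates
s = rng.uniform(0.05, 0.12, size=16)   # their standard errors

def posterior_predictive(y, s, n_iter=20000, burn=2000):
    """Gibbs sampler for y_i ~ N(theta_i, s_i^2), theta_i ~ N(mu, tau^2),
    with flat priors on mu and on tau.  Returns draws of the estimand
    theta_new of a hypothetical new, untreated trial."""
    k = len(y)
    mu, tau2 = y.mean(), y.var()
    draws = []
    for it in range(n_iter):
        prec = 1.0 / s**2 + 1.0 / tau2                    # theta_i | mu, tau2
        theta = rng.normal((y / s**2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))  # mu | theta, tau2
        tau2 = 0.5 * np.sum((theta - mu) ** 2) / rng.gamma((k - 1) / 2.0)  # tau2 | theta, mu
        if it >= burn:
            draws.append(rng.normal(mu, np.sqrt(tau2)))   # posterior predictive draw
    return np.array(draws)

theta_new = posterior_predictive(y, s)

def critical_value(theta_new, se_new, alpha=0.05):
    """The new trial's estimate must exceed this value to be declared
    better than historical controls at one-sided level alpha."""
    null_est = theta_new + rng.normal(0.0, se_new, size=theta_new.size)
    return np.quantile(null_est, 1.0 - alpha)

def power(theta_new, se_new, delta, alpha=0.05):
    """Probability of exceeding the critical value when the treatment
    improves the outcome by delta (larger = better here)."""
    c = critical_value(theta_new, se_new, alpha)
    alt_est = theta_new + delta + rng.normal(0.0, se_new, size=theta_new.size)
    return np.mean(alt_est > c)

sigma, delta = 0.9, 0.4                # illustrative per-patient SD and effect
for n in (50, 100, 200, 400):
    se = sigma / np.sqrt(n)
    print(f"n={n:4d}  critical value={critical_value(theta_new, se):.3f}  "
          f"power at delta={delta}: {power(theta_new, se, delta):.2f}")
```

Because the predictive variance always contains the between-trial component tau^2, enlarging the new trial shrinks only the standard-error term, so power plateaus below 100%; this is the mechanism behind the finding that historical controls are only a reasonable alternative for very large effects.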


2003 ◽  
Vol 8 (2) ◽  
pp. 87-92 ◽  
Author(s):  
C. Gerlinger ◽  
J. Endrikat ◽  
E. A. van der Meulen ◽  
T. O. M. Dieben ◽  
B. Düsterberg

2020 ◽  
Vol 99 (13) ◽  
pp. 1453-1460 ◽  
Author(s):  
D. Qin ◽  
F. Hua ◽  
H. He ◽  
S. Liang ◽  
H. Worthington ◽  
...  

The objectives of this study were to assess the reporting quality and methodological quality of split-mouth trials (SMTs) published during the past 2 decades and to determine whether their quality has improved over time. We searched the MEDLINE database via PubMed to identify SMTs published in 1998, 2008, and 2018. For each included SMT, we used the CONsolidated Standards Of Reporting Trials (CONSORT) 2010 guideline, the CONSORT extension for within-person trials (WPTs), and a new 3-item checklist to assess its trial reporting quality (TRQ), WPT-specific reporting quality (WRQ), and SMT-specific methodological quality (SMQ), respectively. Multivariable generalized linear models were fitted to analyze the quality of SMTs over time, adjusting for potential confounding factors. A total of 119 SMTs were included. The mean overall scores for the TRQ (score range, 0 to 32), WRQ (0 to 15), and SMQ (0 to 3) were 15.77 (SD 4.51), 6.06 (2.06), and 1.12 (0.70), respectively. The primary outcome was clearly defined in only 28 SMTs (23.5%), and only 27 (22.7%) presented a replicable sample size calculation. Only 45 SMTs (37.8%) provided a rationale for using a split-mouth design. The correlation between body sites was reported in only 5 studies (4.2%) for the sample size calculation and in 4 studies (3.4%) for the statistical results. Only 2 studies (1.7%) performed an appropriate sample size calculation, and 46 (38.7%) chose appropriate statistical methods, in both cases accounting for the correlation among treatment groups and the clustering/multiplicity of measurements within an individual. Regression analyses suggested that the TRQ of SMTs improved significantly over time (P < 0.001), whereas there was no evidence of improvement in WRQ or SMQ. Both the reporting quality and the methodological quality of SMTs still have much room for improvement. Concerted efforts are needed to improve the execution and reporting of SMTs.
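As a hedged illustration of why the between-site correlation matters (a sketch based on the standard paired-difference sample size formula, not code or data from the study): in a split-mouth design the relevant variance is that of the within-mouth difference, 2*sigma^2*(1 - rho), so a calculation that ignores rho overstates the number of patients needed.

```python
# A sketch of a paired sample-size calculation for a split-mouth
# (within-person) trial with a continuous outcome.  Values are illustrative.
import math
from scipy.stats import norm

def split_mouth_n(delta, sigma, rho, alpha=0.05, power=0.80):
    """Patients (mouths) needed to detect a mean difference `delta` between
    the two sides, given per-site SD `sigma` and between-site correlation
    `rho`, using a two-sided paired analysis."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_diff = 2 * sigma**2 * (1 - rho)   # variance of the within-mouth difference
    return math.ceil(z**2 * var_diff / delta**2)

# Accounting for a between-site correlation of 0.5 roughly halves the
# required number of patients compared with assuming independence.
print(split_mouth_n(delta=0.5, sigma=1.0, rho=0.0))   # 63
print(split_mouth_n(delta=0.5, sigma=1.0, rho=0.5))   # 32
```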


1997 ◽  
Vol 2 (2) ◽  
pp. 81-85 ◽  
Author(s):  
David Torgerson ◽  
Marion Campbell

Objectives: In the majority of clinical trials, patients are randomised equally between treatment groups. This approach maximises statistical power for a given total sample size. The objectives of this paper were to determine whether, when research costs differ between treatments, it is more economically efficient to randomise additional patients to the cheaper treatment, and how the optimum randomisation ratio can be estimated. Methods: Estimation of the most economically efficient randomisation ratio for four hypothetical clinical trials, using cost-effectiveness analysis. Results: When research costs differ between treatments and there is no constraint on total sample size, it is always more cost-effective to randomise more patients to the cheaper treatment. For example, a cost ratio of ten between the more expensive and the cheaper treatment gives an optimal randomisation ratio of 3.2:1 in favour of the cheaper treatment. Conclusions: Unequal randomisation ratios should be more widely used, as this achieves optimum statistical power for the lowest expenditure of research resources.
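The 3.2:1 figure is what the standard square-root allocation rule gives (a minimal sketch under that assumption, not the authors' code): for a fixed level of statistical power, total research cost is minimised when the allocation ratio equals the square root of the per-patient cost ratio.

```python
# A sketch of the square-root allocation rule assumed above (not the
# authors' code): allocate patients in inverse proportion to the square
# root of per-patient research cost.
import math

def optimal_allocation_ratio(cost_expensive, cost_cheap):
    """Patients randomised to the cheaper arm per patient randomised to the
    more expensive arm, minimising research cost at fixed statistical power."""
    return math.sqrt(cost_expensive / cost_cheap)

# A ten-fold cost difference gives sqrt(10) ~ 3.2, i.e. randomise 3.2:1
# in favour of the cheaper treatment, as in the example above.
print(round(optimal_allocation_ratio(10, 1), 1))   # 3.2
```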

