Current practice in methodology and reporting of the sample size calculation in randomised trials of hip and knee osteoarthritis: a protocol for a systematic review

Trials ◽  
2017 ◽  
Vol 18 (1) ◽  
Author(s):  
Bethan Copsey ◽  
Susan Dutton ◽  
Ray Fitzpatrick ◽  
Sarah E. Lamb ◽  
Jonathan A. Cook

2020 ◽  
Vol 158 (3) ◽  
pp. S14-S15
Author(s):  
Svetlana Lakunina ◽  
Zipporah Iheozor-Ejiofor ◽  
Morris Gordon ◽  
Daniel Akintelure ◽  
Vassiliki Sinopoulou

2017 ◽  
Author(s):  
Clarissa F. D. Carneiro ◽  
Thiago C. Moulin ◽  
Malcolm R. Macleod ◽  
Olavo B. Amaral

Abstract
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
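The kind of sample size estimate the abstract refers to (a per-group n targeted at 80% power) can be sketched with a standard two-sample power calculation. The code below uses the normal-approximation formula for comparing two group means; the standardized effect size `d` passed in is a hypothetical illustration, not a value reported in the review, and an exact t-based calculation would add one or two animals per group.

```python
from math import ceil, sqrt
from statistics import NormalDist

Z = NormalDist()

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison of means
    (normal approximation, two-sided test at significance level alpha)."""
    z_a = Z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_b = Z.inv_cdf(power)           # quantile corresponding to target power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

def achieved_power(d, n, alpha=0.05):
    """Approximate power of a two-sample test with n subjects per group."""
    z_a = Z.inv_cdf(1 - alpha / 2)
    return Z.cdf(d * sqrt(n / 2) - z_a)

# Hypothetical standardized effect size (Cohen's d), chosen for illustration:
print(n_per_group(1.0))                      # per-group n at 80% power
print(round(achieved_power(1.0, 16), 3))     # power actually achieved at that n
```

Note how sensitive the result is to the assumed effect size: halving `d` roughly quadruples the required group size, which is why underpowered designs are so common when effects are modest.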


2017 ◽  
Vol 33 (1) ◽  
pp. 103-110 ◽  
Author(s):  
Britta Olberg ◽  
Matthias Perleth ◽  
Katja Felgentraeger ◽  
Sandra Schulz ◽  
Reinhard Busse

Background: The aim of this study was to assess the quality of reporting of sample size calculations and underlying design assumptions in pivotal trials of high-risk medical devices (MDs) for neurological conditions.
Methods: Systematic review of research protocols for publicly registered randomized controlled trials (RCTs). Where no protocol had been published, principal investigators were contacted for additional data. To be included, trials had to investigate a high-risk MD, be registered between 2005 and 2015, and address stroke, headache disorders, or epilepsy as case samples within central nervous system diseases. Extraction of key methodological parameters for the sample size calculation was performed independently and peer-reviewed.
Results: Of a final sample of seventy-one eligible trials, we collected data from thirty-one. Eighteen protocols were obtained from the public domain or from principal investigators. Data availability decreased during the extraction process, with almost all data available for stroke-related trials. Of the thirty-one trials with sample size information available, twenty-six reported a predefined calculation and its underlying assumptions. A justification was given in twenty trials and evidence for parameter estimation in sixteen. Estimates were most often based on previous research, including RCTs and observational data; the observational data were predominantly from retrospective designs. Other references for parameter estimation indicated a lower level of evidence.
Conclusions: Our systematic review of trials on high-risk MDs confirms previous research documenting deficiencies in data availability and a lack of reporting on sample size calculations. More effort is needed both to make relevant sources, that is, original research protocols, publicly available and to standardize reporting requirements.
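Many pivotal device trials of the kind reviewed here compare event rates between a device arm and a control arm, so the predefined calculation the authors looked for is often a two-proportion sample size under stated assumptions. As a purely hypothetical illustration (the event rates, alpha, and power below are invented for the example, not taken from any trial in the review):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions
    (normal approximation, two-sided test)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)           # critical value for the two-sided test
    z_b = z(power)                   # quantile corresponding to target power
    p_bar = (p1 + p2) / 2            # pooled proportion under the null
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical assumptions: 20% event rate in control, 35% with the device.
print(n_per_arm(0.20, 0.35))
```

A fully reported calculation of this kind would state each input (expected rates, alpha, power, sidedness, any dropout inflation) together with the evidence behind each estimate, which is exactly the information the review found was often missing.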

