Covariate Adjustment in Randomized Controlled Trials

Author(s):  
Hyolim Lee ◽  
Kevin Thorpe

Introduction & Objective: Options for covariate adjustment in randomized controlled trials include unadjusted analyses, fully adjusted analyses, and adjusted analyses triggered by significance tests of covariate imbalance. It has been argued that significance testing of baseline comparability is inappropriate and that what matters instead is the strength of a covariate's relationship with the outcome. Our goal was to understand when adjustment should be used in randomized controlled trials. Methods: Unadjusted analysis, fully adjusted analysis, and adjusted analysis conditional on baseline comparability were examined under the null and alternative hypotheses in simulation studies. Each data set was simulated 3000 times across 9 scenarios: sample sizes of 20, 40, and 100, each crossed with baseline-imbalance thresholds of 0.05, 0.1, and 0.2. Each scenario was examined as the covariate-outcome correlation varied from 0.1 to 0.9. Results: Under the alternative hypothesis, the power of the fully adjusted analysis increased with the correlation, whereas the analysis adjusted on the basis of covariate imbalance did not compare favorably with the unadjusted analysis. Under the null hypothesis, the type I error of the imbalance-based adjusted analysis was deflated, and its p-values did not follow a uniform distribution. Conclusion: The unadjusted and fully adjusted analyses were valid. Full adjustment can increase power when the prognostic covariate is specified in advance. However, an analysis adjusted on the basis of a significance test of covariate imbalance may not be valid, and significance tests should not be used to assess baseline comparability.
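The simulation design can be summarized in a short sketch. The data-generating model below is an assumption made for illustration (a single normal baseline covariate correlated with a continuous outcome, 1:1 randomization, and OLS for all three analyses); the function name one_trial and the parameter baseline_alpha are hypothetical and are not taken from the study's own code.

```python
# A minimal sketch of one simulation scenario (not the authors' exact code).
# Assumptions: a single normal baseline covariate x correlated with the
# outcome y at roughly level rho, a true treatment effect delta, and OLS
# models for the unadjusted, fully adjusted, and imbalance-test-based analyses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def one_trial(n=40, rho=0.5, delta=0.0, baseline_alpha=0.10):
    x = rng.standard_normal(n)                        # baseline covariate
    noise = rng.standard_normal(n)
    trt = rng.permutation(np.repeat([0, 1], n // 2))  # 1:1 randomization
    # outcome correlated with x, plus any true treatment effect
    y = delta * trt + rho * x + np.sqrt(1 - rho**2) * noise

    unadj = sm.OLS(y, sm.add_constant(trt)).fit()
    adj = sm.OLS(y, sm.add_constant(np.column_stack([trt, x]))).fit()

    # "conditional" strategy: adjust only if the groups differ on x at baseline
    x_balance = sm.OLS(x, sm.add_constant(trt)).fit()
    cond = adj if x_balance.pvalues[1] < baseline_alpha else unadj

    return unadj.pvalues[1], adj.pvalues[1], cond.pvalues[1]

reps = 3000
p = np.array([one_trial(delta=0.0) for _ in range(reps)])   # null case
print("type I error (unadj, adj, conditional):", (p < 0.05).mean(axis=0))
```

Setting delta to a nonzero value in the same loop gives the corresponding power estimates, and sweeping rho from 0.1 to 0.9 reproduces the kind of comparison described in the abstract.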

10.2196/22422 ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. e22422
Author(s):  
Tomohide Yamada ◽  
Daisuke Yoneoka ◽  
Yuta Hiraike ◽  
Kimihiro Hino ◽  
Hiroyoshi Toyoshiba ◽  
...  

Background Performing systematic reviews is a time-consuming and resource-intensive process. Objective We investigated whether a machine learning system could perform systematic reviews more efficiently. Methods All systematic reviews and meta-analyses of interventional randomized controlled trials cited in recent clinical guidelines from the American Diabetes Association, American College of Cardiology, American Heart Association (2 guidelines), and American Stroke Association were assessed. After reproducing the primary screening data set according to the published search strategy of each, we extracted correct articles (those actually reviewed) and incorrect articles (those not reviewed) from the data set. These 2 sets of articles were used to train a neural network–based artificial intelligence engine (Concept Encoder, Fronteo Inc). The primary endpoint was work saved over sampling at 95% recall (WSS@95%). Results Among 145 candidate reviews of randomized controlled trials, 8 reviews fulfilled the inclusion criteria. For these 8 reviews, the machine learning system significantly reduced the literature screening workload by at least 6-fold versus that of manual screening based on WSS@95%. When machine learning was initiated using 2 correct articles that were randomly selected by a researcher, a 10-fold reduction in workload was achieved versus that of manual screening based on the WSS@95% value, with high sensitivity for eligible studies. The area under the receiver operating characteristic curve increased dramatically every time the algorithm learned a correct article. Conclusions Concept Encoder achieved a 10-fold reduction of the screening workload for systematic review after learning from 2 randomly selected studies on the target topic. However, few meta-analyses of randomized controlled trials were included. Concept Encoder could facilitate the acquisition of evidence for clinical guidelines.
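The primary endpoint, work saved over sampling at 95% recall, is commonly computed as WSS@R = (TN + FN)/N − (1 − R). The sketch below shows that computation; the helper wss_at_recall and the toy data are illustrative assumptions, not the study's implementation.

```python
# A minimal sketch of the WSS@95% metric (work saved over sampling at 95%
# recall), assuming model scores where higher means "more likely relevant".
# Uses the common definition WSS@R = (TN + FN)/N - (1 - R); this is not
# taken from the paper's own code.
import numpy as np

def wss_at_recall(scores, labels, recall=0.95):
    """Fraction of screening work saved versus random sampling, at the
    ranking cutoff that first reaches the target recall."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)            # screen highest-scoring first
    hits = np.cumsum(labels[order])        # relevant articles found so far
    n_pos = labels.sum()
    # smallest number of articles screened that achieves the target recall
    k = np.searchsorted(hits, np.ceil(recall * n_pos)) + 1
    unscreened = len(labels) - k           # articles left unscreened (TN + FN)
    return unscreened / len(labels) - (1 - recall)

# toy example: 1000 candidates, 20 truly relevant, a helpful but noisy model
rng = np.random.default_rng(0)
labels = np.zeros(1000, dtype=int); labels[:20] = 1
scores = labels * 2.0 + rng.standard_normal(1000)
print(f"WSS@95% = {wss_at_recall(scores, labels):.3f}")
```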


2020 ◽  
Author(s):  
Stefan L.K. Gruijters

Checks on baseline differences in randomized controlled trials (RCTs) are often done using null-hypothesis significance tests (NHSTs). However, using NHSTs to establish the degree of baseline similarity is inappropriate, potentially misleading, and logically incoherent.
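A brief illustration of the argument, under the assumption of a single continuous covariate and simple 1:1 randomization: when allocation is truly random, baseline p-values are uniformly distributed by construction, so roughly 5% of trials will show a "significant" imbalance no matter how well the randomization worked. The code below is a hypothetical demonstration, not part of the cited work.

```python
# Demonstration (illustrative only): baseline NHSTs in a properly randomized
# trial reject at the nominal rate purely by chance, so a "significant"
# baseline difference says nothing about whether randomization failed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pvals = []
for _ in range(5000):
    x = rng.standard_normal(100)                      # any baseline covariate
    trt = rng.permutation(np.repeat([0, 1], 50))      # random 1:1 allocation
    pvals.append(stats.ttest_ind(x[trt == 1], x[trt == 0]).pvalue)

pvals = np.array(pvals)
print("share of 'significant' baseline imbalances:", (pvals < 0.05).mean())
```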


2015 ◽  
Vol 68 (9) ◽  
pp. 1068-1075 ◽  
Author(s):  
Douglas D. Thompson ◽  
Hester F. Lingsma ◽  
William N. Whiteley ◽  
Gordon D. Murray ◽  
Ewout W. Steyerberg
