Calibrating validation samples when accounting for measurement error in intervention studies

2021 ◽  
pp. 096228022098857
Author(s):  
Benjamin Ackerman ◽  
Juned Siddique ◽  
Elizabeth A Stuart

Many lifestyle intervention trials depend on collecting self-reported outcomes, such as dietary intake, to assess the intervention’s effectiveness. Self-reported outcomes are subject to measurement error, which affects treatment effect estimation. External validation studies measure both self-reported outcomes and accompanying biomarkers, and can be used to account for measurement error. However, to account for measurement error using an external validation sample, one must assume that inferences are transportable from the validation sample to the intervention trial of interest, and this assumption does not always hold. In this paper, we propose an approach that adjusts the validation sample to better resemble the trial sample, and we formally investigate when bias due to poor transportability may arise. Lastly, we examine the performance of the methods using simulation, and illustrate them using PREMIER, a lifestyle intervention trial measuring self-reported sodium intake as an outcome, and OPEN, a validation study measuring both self-reported diet and urinary biomarkers.
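The abstract describes adjusting the external validation sample (OPEN) so that it better resembles the trial sample (PREMIER) before correcting for measurement error. The sketch below illustrates one common way such an adjustment can be made, by weighting validation participants by their estimated odds of trial membership and then fitting a weighted calibration model. The weighting strategy, column names (e.g., urinary_sodium, self_report_sodium), and covariates are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch: reweight an external validation sample (e.g., OPEN)
# so its covariate distribution resembles the trial sample (e.g., PREMIER),
# then fit a measurement-error (calibration) model using those weights.
# Variable names and the weighting approach are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm


def inverse_odds_weights(trial_X, validation_X):
    """Weights that make the validation sample resemble the trial sample.

    Fits a logistic regression of sample membership (1 = trial, 0 = validation)
    on shared covariates and returns each validation participant's estimated
    odds of trial membership.
    """
    X = np.vstack([trial_X, validation_X])
    member = np.concatenate([np.ones(len(trial_X)), np.zeros(len(validation_X))])
    model = sm.Logit(member, sm.add_constant(X)).fit(disp=0)
    p = model.predict(sm.add_constant(np.asarray(validation_X)))
    return p / (1.0 - p)  # inverse-odds weights for validation rows


def weighted_calibration_model(validation, weights):
    """Weighted regression of the biomarker on self-report plus covariates."""
    y = validation["urinary_sodium"]  # hypothetical biomarker column
    X = sm.add_constant(validation[["self_report_sodium", "age", "bmi"]])
    return sm.WLS(y, X, weights=weights).fit()
```

In practice, the resulting weights would feed into whatever measurement-error correction the trial analysis uses, so that the correction reflects the trial population rather than the validation population.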

Biometrics ◽  
2019 ◽  
Vol 75 (3) ◽  
pp. 927-937 ◽  
Author(s):  
Juned Siddique ◽  
Michael J. Daniels ◽  
Raymond J. Carroll ◽  
Trivellore E. Raghunathan ◽  
Elizabeth A. Stuart ◽  
...  

Author(s):  
David Aaby ◽  
Juned Siddique

Abstract

Background: Lifestyle intervention studies often use self-reported measures of diet as an outcome variable to measure changes in dietary intake. The presence of measurement error in self-reported diet, due to participants' failure to accurately report their diet, is well known. Less familiar to researchers is differential measurement error, where the nature of the measurement error differs by treatment group and/or time. Differential measurement error is often present in intervention studies and can result in biased estimates of the treatment effect and reduced power to detect treatment effects. Investigators need to be aware of its impact when designing intervention studies that use self-reported measures.

Methods: We use simulation to assess the consequences of differential measurement error on the ability to estimate treatment effects in a two-arm randomized trial with two time points. We simulate data under a variety of scenarios, focusing on how different factors affect power to detect a treatment effect, bias of the treatment effect, and coverage of its 95% confidence interval. Simulations use realistic scenarios based on data from the Trials of Hypertension Prevention Study, with sample sizes ranging from 110 to 380 per group.

Results: Realistic differential measurement error of the kind seen in lifestyle intervention studies can require an increased sample size to achieve 80% power to detect a treatment effect and may result in a biased estimate of the treatment effect.

Conclusions: Investigators designing intervention studies that use self-reported measures should take differential measurement error into account by increasing their sample size, incorporating an internal validation study, and/or identifying statistical methods to correct for differential measurement error.
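To make the notion of differential measurement error concrete, the sketch below simulates a two-arm trial with two time points in which self-report carries extra under-reporting in the intervention arm at follow-up, and compares the treatment effect estimated from self-report with the one estimated from the true outcomes. The error structure and all numeric values are illustrative assumptions, not the simulation settings or Trials of Hypertension Prevention parameters used in the paper.

```python
# Minimal simulation sketch of differential measurement error in a two-arm,
# two-time-point trial. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2024)


def simulate_trial(n_per_arm=200, true_effect=-0.5,
                   extra_bias_treated_followup=-0.3, n_sims=1000):
    """Mean estimated effect with differential error vs. with true outcomes."""
    naive_estimates, error_free_estimates = [], []
    for _ in range(n_sims):
        arm = np.repeat([0, 1], n_per_arm)          # 0 = control, 1 = intervention
        baseline = rng.normal(0, 1, 2 * n_per_arm)  # true outcome at baseline
        followup = baseline + true_effect * arm + rng.normal(0, 0.5, 2 * n_per_arm)

        # Self-report: random error at both times, plus extra under-reporting
        # in the intervention arm at follow-up (the differential component).
        sr_base = baseline + rng.normal(0, 0.7, 2 * n_per_arm)
        sr_follow = (followup + rng.normal(0, 0.7, 2 * n_per_arm)
                     + extra_bias_treated_followup * arm)

        # Treatment effect estimated as the difference in mean change between arms.
        change_sr = sr_follow - sr_base
        change_true = followup - baseline
        naive_estimates.append(change_sr[arm == 1].mean() - change_sr[arm == 0].mean())
        error_free_estimates.append(change_true[arm == 1].mean() - change_true[arm == 0].mean())
    return np.mean(naive_estimates), np.mean(error_free_estimates)


if __name__ == "__main__":
    biased, unbiased = simulate_trial()
    print(f"mean estimate with differential error: {biased:.2f}")
    print(f"mean estimate with true outcomes:      {unbiased:.2f}  (truth = -0.50)")
```

In this toy setup the differential component adds directly to the estimated treatment effect, so the self-report analysis is systematically biased even though randomization is intact; depending on its direction, such error can also inflate variability and reduce power, which is the behavior the simulation study above investigates.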


2021 ◽  
pp. 103940
Author(s):  
Jiebin Chu ◽  
Zhoujian Sun ◽  
Wei Dong ◽  
Jinlong Shi ◽  
Zhengxing Huang
