Effects of differential measurement error in self-reported diet in longitudinal lifestyle intervention studies

Author(s):  
David Aaby ◽  
Juned Siddique

Abstract
Background: Lifestyle intervention studies often use self-reported measures of diet as an outcome variable to measure changes in dietary intake. The presence of measurement error in self-reported diet, due to participants failing to report their diet accurately, is well known. Less familiar to researchers is differential measurement error, where the nature of the measurement error differs by treatment group and/or time. Differential measurement error is often present in intervention studies and can result in biased estimates of the treatment effect and reduced power to detect treatment effects. Investigators need to be aware of the impact of differential measurement error when designing intervention studies that use self-reported measures.
Methods: We use simulation to assess the consequences of differential measurement error for the ability to estimate treatment effects in a two-arm randomized trial with two time points. We simulate data under a variety of scenarios, focusing on how different factors affect power to detect a treatment effect, bias of the treatment effect estimate, and coverage of its 95% confidence interval. Simulations use realistic scenarios based on data from the Trials of Hypertension Prevention Study, with sample sizes ranging from 110 to 380 per group.
Results: Realistic differential measurement error of the kind seen in lifestyle intervention studies can require an increased sample size to achieve 80% power to detect a treatment effect and may result in a biased estimate of the treatment effect.
Conclusions: Investigators designing intervention studies that use self-reported measures should take differential measurement error into account by increasing their sample size, incorporating an internal validation study, and/or identifying statistical methods to correct for differential measurement error.
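The mechanism the abstract describes can be sketched with a small simulation: when the systematic component of self-report error differs by arm, the differential bias is absorbed directly into the estimated treatment effect. The bias values and effect sizes below are hypothetical illustrations, not numbers from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n_per_group=200, true_effect=-0.5,
                   bias_control=0.0, bias_treat=-0.3, error_sd=1.0):
    """One two-arm trial where self-report error differs by arm.

    bias_control / bias_treat are hypothetical systematic reporting
    biases (intervention participants under-report more), not values
    from the paper.
    """
    true_control = rng.normal(0.0, 1.0, n_per_group)
    true_treat = rng.normal(true_effect, 1.0, n_per_group)
    # Self-report = truth + arm-specific systematic bias + random error
    rep_control = true_control + bias_control + rng.normal(0, error_sd, n_per_group)
    rep_treat = true_treat + bias_treat + rng.normal(0, error_sd, n_per_group)
    return rep_treat.mean() - rep_control.mean()

# Average the estimated effect over many simulated trials: the estimate
# converges to true_effect + (bias_treat - bias_control), not true_effect.
est = np.mean([simulate_trial() for _ in range(2000)])
print(f"true effect: -0.50, mean estimate: {est:.2f}")  # ≈ -0.80, not -0.50
```

Purely random (non-differential) error would leave the estimate unbiased but widen its variance; the differential component shifts it.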

Author(s):  
Alice R. Carter ◽  
Eleanor Sanderson ◽  
Gemma Hammerton ◽  
Rebecca C. Richmond ◽  
George Davey Smith ◽  
...  

Abstract
Mediation analysis seeks to explain the pathway(s) through which an exposure affects an outcome. Traditional, non-instrumental-variable methods for mediation analysis suffer from a number of methodological difficulties, including bias due to confounding between the exposure, mediator and outcome, and measurement error. Mendelian randomisation (MR) can be used to improve causal inference for mediation analysis. We describe two approaches for mediation analysis with MR: multivariable MR (MVMR) and two-step MR. We outline the approaches and provide code to demonstrate how they can be used in mediation analysis. We review issues that can affect analyses, including confounding, measurement error, weak instrument bias, interactions between exposures and mediators, and analysis of multiple mediators. Description of the methods is supplemented by simulated and real data examples. Although MR relies on large sample sizes and strong assumptions, such as having strong instruments and no horizontally pleiotropic pathways, our simulations demonstrate that these methods are unaffected by confounders of the exposure or mediator and the outcome, and by non-differential measurement error of the exposure or mediator. Both MVMR and two-step MR can be implemented with individual-level and summary-data MR. MR mediation methods require different assumptions from non-instrumental-variable mediation methods; where these assumptions are more plausible, MR can be used to improve causal inference in mediation analysis.
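The two-step MR approach estimates the exposure-to-mediator and mediator-to-outcome effects with separate genetic instruments and multiplies them to obtain the indirect effect. A minimal individual-level sketch with simulated data and Wald ratio estimators (all effect sizes, instruments, and the single-variant setup are hypothetical simplifications, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Simulated individual-level data with hypothetical effect sizes.
g_x = rng.binomial(2, 0.3, n)        # genetic instrument for the exposure
g_m = rng.binomial(2, 0.3, n)        # genetic instrument for the mediator
u = rng.normal(0, 1, n)              # unmeasured confounder
x = 0.5 * g_x + u + rng.normal(0, 1, n)
m = 0.4 * x + 0.5 * g_m + u + rng.normal(0, 1, n)
y = 0.3 * m + 0.2 * x + u + rng.normal(0, 1, n)

def iv_ratio(instrument, exposure, outcome):
    """Wald ratio estimator: cov(g, outcome) / cov(g, exposure)."""
    return np.cov(instrument, outcome)[0, 1] / np.cov(instrument, exposure)[0, 1]

# Step 1: exposure -> mediator effect, using the exposure's instrument.
beta_xm = iv_ratio(g_x, x, m)        # truth: 0.4
# Step 2: mediator -> outcome effect, using the mediator's instrument.
beta_my = iv_ratio(g_m, m, y)        # truth: 0.3
# Indirect (mediated) effect is the product of the two steps.
indirect = beta_xm * beta_my         # truth: 0.12
print(f"indirect effect estimate: {indirect:.2f}")
```

Note that both estimates ignore the confounder `u` entirely, which is the point: the instruments are independent of `u`, so the ratios recover the causal coefficients despite exposure-mediator-outcome confounding.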


2021 ◽  
pp. 096228022098857
Author(s):  
Benjamin Ackerman ◽  
Juned Siddique ◽  
Elizabeth A Stuart

Many lifestyle intervention trials depend on collecting self-reported outcomes, such as dietary intake, to assess the intervention’s effectiveness. Self-reported outcomes are subject to measurement error, which impacts treatment effect estimation. External validation studies measure both self-reported outcomes and accompanying biomarkers, and can be used to account for measurement error. However, in order to account for measurement error using an external validation sample, an assumption must be made that the inferences are transportable from the validation sample to the intervention trial of interest. This assumption does not always hold. In this paper, we propose an approach that adjusts the validation sample to better resemble the trial sample, and we also formally investigate when bias due to poor transportability may arise. Lastly, we examine the performance of the methods using simulation, and illustrate them using PREMIER, a lifestyle intervention trial measuring self-reported sodium intake as an outcome, and OPEN, a validation study measuring both self-reported diet and urinary biomarkers.
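The adjustment idea can be sketched by modeling sample membership (trial vs. validation) and weighting validation participants by their odds of trial membership, so the weighted validation sample matches the trial's covariate distribution. A minimal sketch with one hypothetical covariate; this illustrates transportability-style weighting in general, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical covariate (e.g. age) distributed differently in the
# trial sample and in the external validation sample.
trial_age = rng.normal(50, 8, 1000)
valid_age = rng.normal(60, 8, 800)

# Fit a logistic model for membership (trial = 1 vs validation = 0)
# via Newton's method (intercept + one covariate).
ages = np.concatenate([trial_age, valid_age])
in_trial = np.concatenate([np.ones(1000), np.zeros(800)])
X = np.column_stack([np.ones_like(ages), ages])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (in_trial - p))

# Weight each validation participant by the odds of trial membership.
p_valid = 1 / (1 + np.exp(-np.column_stack([np.ones_like(valid_age), valid_age]) @ beta))
weights = p_valid / (1 - p_valid)

weighted_mean = np.average(valid_age, weights=weights)
print(f"validation mean age: {valid_age.mean():.1f}, weighted: {weighted_mean:.1f}")
```

After weighting, the validation covariate distribution is pulled toward the trial's, which is the precondition for transporting the measurement error model.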


Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary
Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as often, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.
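The paper's contribution is uniformly valid inference with high-dimensional, regularized nuisance models; the underlying idea of combining an outcome model with a treatment-selection model can be illustrated in a low-dimensional sketch with the classical augmented inverse-probability-weighted (AIPW) estimator. Everything below (the data-generating model, plugging in the true propensity score) is a simplification for illustration, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 2))
e = 1 / (1 + np.exp(-(X[:, 0] - X[:, 1])))                 # true propensity score
A = rng.binomial(1, e)                                      # confounded treatment
Y = 1.0 * A + 2 * X[:, 0] + X[:, 1] + rng.normal(size=n)    # true effect = 1.0

naive = Y[A == 1].mean() - Y[A == 0].mean()                 # biased by confounding

def ols_predict(X_fit, y_fit, X_new):
    """Fit OLS with intercept and predict at new points."""
    Z = np.column_stack([np.ones(len(X_fit)), X_fit])
    b = np.linalg.lstsq(Z, y_fit, rcond=None)[0]
    return np.column_stack([np.ones(len(X_new)), X_new]) @ b

m1 = ols_predict(X[A == 1], Y[A == 1], X)                   # outcome model, treated
m0 = ols_predict(X[A == 0], Y[A == 0], X)                   # outcome model, control

# AIPW augments the outcome-model contrast with propensity-weighted
# residuals; here we plug in the true e(X) for brevity, where in
# practice it would be estimated.
aipw = np.mean(m1 - m0 + A * (Y - m1) / e - (1 - A) * (Y - m0) / (1 - e))
print(f"naive: {naive:.2f}, AIPW: {aipw:.2f}")
```

The estimator is consistent if either the outcome model or the propensity model is correct; the paper's debiased machinery builds on this structure to get confidence intervals that remain valid with lasso-type nuisance fits.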


2005 ◽  
Vol 5 (1) ◽  
Author(s):  
Charles H Mullin

Abstract
Empirical researchers commonly invoke instrumental variable (IV) assumptions to identify treatment effects. This paper considers what can be learned under two specific violations of those assumptions: contaminated and corrupted data. Either violation prevents point identification, but sharp bounds on the treatment effect remain feasible. In an applied example, random miscarriages serve as an IV for women’s age at first birth. However, the inability to separate random miscarriages from behaviorally induced miscarriages (those caused by smoking and drinking) results in a contaminated sample. Furthermore, censored child outcomes produce a corrupted sample. Despite these limitations, the bounds demonstrate that delaying the age at first birth for the current population of non-black teenage mothers reduces their first-born child’s well-being.
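The contaminated-sampling setting admits simple worst-case trimming bounds in the style of Horowitz and Manski: if at most a fraction λ of observations may come from the contaminating distribution, the mean of the uncontaminated distribution is bracketed by dropping the largest or smallest λ fraction of the sample. A generic sketch of that idea (illustrative, not the paper's application-specific bounds):

```python
import numpy as np

def contamination_bounds(y, lam):
    """Worst-case trimming bounds on the mean of the uncontaminated
    distribution when at most a fraction `lam` of the sample may be
    contaminated (Horowitz-Manski style)."""
    y = np.sort(np.asarray(y, dtype=float))
    k = int(np.floor(lam * len(y)))
    if k == 0:
        m = y.mean()
        return m, m
    lower = y[:-k].mean()   # worst case: the k largest values were contamination
    upper = y[k:].mean()    # worst case: the k smallest values were contamination
    return lower, upper

rng = np.random.default_rng(4)
y = rng.normal(0, 1, 10_000)   # hypothetical outcome data, true mean 0
lo, hi = contamination_bounds(y, 0.10)
print(f"bounds on the mean with 10% contamination: [{lo:.2f}, {hi:.2f}]")
```

The bounds widen as λ grows, which mirrors the paper's finding that identification degrades, but does not vanish, as the share of behaviorally induced miscarriages that cannot be separated out increases.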


2019 ◽  
Author(s):  
Alice R Carter ◽  
Eleanor Sanderson ◽  
Gemma Hammerton ◽  
Rebecca C Richmond ◽  
George Davey Smith ◽  
...  

Abstract
Mediation analysis seeks to explain the pathway(s) through which an exposure affects an outcome. Mediation analysis suffers from a number of methodological difficulties, including bias due to confounding and measurement error. Mendelian randomisation (MR) can be used to improve causal inference for mediation analysis. We describe two approaches for mediation analysis with MR: multivariable Mendelian randomisation (MVMR) and two-step Mendelian randomisation. We outline the approaches and provide code to demonstrate how they can be used in mediation analysis. We review issues that can affect analyses, including confounding, measurement error, weak instrument bias, and analysis of multiple mediators. Description of the methods is supplemented by simulated and real data examples. Although Mendelian randomisation relies on large sample sizes and strong assumptions, such as having strong instruments and no horizontally pleiotropic pathways, our examples demonstrate that it is unlikely to be affected by confounders of the exposure or mediator and the outcome, by reverse causality, or by non-differential measurement error of the exposure or mediator. Both MVMR and two-step MR can be implemented with individual-level and summary-data MR, and can improve causal inference in mediation analysis.


2007 ◽  
Vol 25 (18_suppl) ◽  
pp. 6513-6513
Author(s):  
R. A. Wilcox ◽  
G. H. Guyatt ◽  
V. M. Montori

6513 Background: Investigators finding a large treatment effect in an interim analysis may terminate a randomized controlled trial (RCT) earlier than planned. A systematic review (Montori et al., JAMA 2005; 294: 2203–2209) found that RCTs stopped early for benefit are poorly reported and may overestimate the true treatment effect. The extent to which RCTs in oncology stopped early for benefit share these concerns remains unclear. Methods: We selected the oncology RCTs reported in the original systematic review and reviewed their study characteristics; features related to the decision to monitor and stop the study early (sample size, interim analyses, monitoring and stopping rules); and the number of events and the estimated treatment effects. Results: We found 29 RCTs in malignant hematology (n=6) and oncology (n=23), 52% published in 2000–2004 and 41% in 3 high-impact medical journals (New England Journal of Medicine, Lancet, JAMA). The majority (79%) of trials reported a planned sample size and, on average, recruited 67% of the planned sample size (SD 31%). RCTs reported (1) the planned sample size (n=20), (2) the interim analysis at which the study was terminated (n=16), and (3) whether the decision to stop the study prematurely was informed by a stopping rule (n=16); only 13 reported all three. There was a highly significant correlation between the number of events and the treatment effect (r=0.68, p=0.0007). The odds of finding a large treatment effect (relative risk below the median of 0.54, IQR 0.3–0.7) when studies stopped after few events (number of events below the median of 54, IQR 22–125) were 6.2 times greater than when studies stopped later. Conclusions: RCTs in oncology stopped early for benefit tend to report large treatment effects that may overestimate the true treatment effect, particularly when the number of events driving study termination is small. In addition, information pertinent to the decision to stop early was inconsistently reported. Clinicians and policymakers should interpret such studies with caution, especially when information about the decision to stop early is not provided and few events occurred. No significant financial relationships to disclose.


Circulation ◽  
2008 ◽  
Vol 118 (suppl_18) ◽  
Author(s):  
Gregory W Evans ◽  
Mike K Palmer ◽  
Daniel H O’Leary ◽  
John R Crouse ◽  
Michiel L Bots ◽  
...  

Carotid artery intima-media thickness (CIMT) assessed by B-mode ultrasound is an accepted marker for subclinical atherosclerosis commonly used in clinical trials. Sample size and power calculations for such trials typically apply two-sample independent t-tests, with within-group variances in progression rates taken from the literature. However, this approach obscures the impact of differences in study design, including length of follow-up and the number of and interval between ultrasound scans. These effects can be assessed using common sample size formulas for longitudinal models, but this approach requires decomposing the total variance into between- and within-subject components, which have not generally been reported in the literature. Here, we derive these variance components for the Measuring Effects on intima-media Thickness: an Evaluation of Rosuvastatin (METEOR) study, a randomized, double-blind trial that demonstrated that treatment with 40 mg rosuvastatin significantly slowed CIMT progression in middle-aged patients with a low Framingham risk of coronary heart disease and subclinical atherosclerosis (baseline maximum CIMT ≥1.2 to <3.5 mm). We examined the impact of differing follow-up periods, use of intermediate scans, and use of duplicate scans, using both sample size calculations and actual analyses based on subsets of the METEOR data. Reductions in study length or number of scans result in increased variances and larger sample sizes to detect a given treatment effect. The table shows the impact of duplicate scans at baseline and at the end of the 2-year study, with and without intermediate scans performed every 6 months, on the sample size required to detect a treatment effect of 0.012 mm/year. These results underscore the importance of explicitly considering the number and spacing of ultrasound exams during study design, and suggest that reductions in scanning frequency may seriously erode study power and/or increase costs by requiring recruitment of additional subjects.
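The role of the variance components can be sketched with a standard sample size formula for comparing mean slopes in a longitudinal model: the variance of a subject's estimated slope is the between-subject slope variance plus the within-subject measurement variance divided by the spread of scan times. The variance components below are illustrative placeholders, not the METEOR estimates; only the 0.012 mm/year effect size comes from the abstract:

```python
import numpy as np

def n_per_group(times, sigma_b2, sigma_w2, delta,
                alpha_z=1.96, power_z=0.8416):
    """Sample size per group to detect a between-arm difference `delta`
    in mean progression rate (mm/year), with scans at `times` (years),
    for a two-sided 5% test at 80% power.

    sigma_b2: between-subject variance of true slopes (hypothetical)
    sigma_w2: within-subject measurement error variance (hypothetical)
    """
    t = np.asarray(times, dtype=float)
    # Variance of an individual's least-squares slope estimate
    var_slope = sigma_b2 + sigma_w2 / np.sum((t - t.mean()) ** 2)
    return int(np.ceil(2 * (alpha_z + power_z) ** 2 * var_slope / delta ** 2))

sigma_b2, sigma_w2 = 0.0004, 0.002   # illustrative values only

# Baseline and end-of-study scans only vs scans every 6 months:
print(n_per_group([0, 2], sigma_b2, sigma_w2, 0.012))               # → 153
print(n_per_group([0, 0.5, 1, 1.5, 2], sigma_b2, sigma_w2, 0.012))  # → 131
```

Adding intermediate scans increases the spread of scan times, shrinking the within-subject contribution to the slope variance and hence the required sample size, which is the qualitative pattern the abstract describes.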

