Bias for Treatment Effect by Measurement Error in Pretest in ANCOVA Analysis

2022 ◽  
pp. 001316442110688
Author(s):  
Yasuo Miyazaki ◽  
Akihito Kamata ◽  
Kazuaki Uekawa ◽  
Yizhi Sun

This paper investigated the consequences of measurement error in the pretest on the estimate of the treatment effect in a pretest–posttest design with the analysis of covariance (ANCOVA) model, focusing on both the direction and magnitude of the resulting bias. Some prior studies have examined the magnitude of the bias due to measurement error and suggested ways to correct it. However, none of them clarified how the direction of the bias is affected by measurement error. This study analytically derived a formula for the asymptotic bias of the treatment effect. The derived formula is a function of the reliability of the pretest, the standardized population group mean difference for the pretest, and the correlation between pretest and posttest true scores. It revealed a concerning consequence of ignoring measurement error in pretest scores: treatment effects may be overestimated or underestimated, and under certain conditions a positive treatment effect may even be estimated as negative. A simulation study was also conducted to verify the derived bias formula.
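The mechanism behind this bias can be seen in a small simulation, a sketch under assumed parameter values rather than the paper's own code: when the groups differ on the true pretest, an unreliable pretest leaves part of that group difference unadjusted, which shifts the estimated treatment effect.

```python
import numpy as np

# Illustrative sketch (not the paper's code): ANCOVA with an error-prone pretest.
# All parameter values (reliability, rho, delta, tau) are assumptions for this demo.
rng = np.random.default_rng(0)
n = 200_000                 # large n so bias, not sampling noise, dominates
reliability = 0.6           # var(true pretest) / var(observed pretest)
rho = 0.8                   # correlation of pretest and posttest true scores
delta = 0.5                 # standardized group mean difference on the pretest
tau = 0.3                   # true treatment effect

g = rng.integers(0, 2, n).astype(float)                   # treatment indicator
t = rng.normal(delta * g, 1.0)                            # true pretest score
x = t + rng.normal(0.0, np.sqrt(1 / reliability - 1), n)  # observed pretest
y = tau * g + rho * t + rng.normal(0.0, 1.0, n)           # posttest score

# ANCOVA: regress the posttest on treatment and the error-prone pretest
X = np.column_stack([np.ones(n), g, x])
tau_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]
print(tau_hat)   # noticeably above tau = 0.3 for these parameter values
```

For this particular data-generating process the asymptotic bias works out to rho × (1 − reliability) × delta ≈ 0.16, and reversing the sign of delta reverses the direction of the bias, which is how a positive effect can appear negative.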

2019 ◽  
Vol 43 (6) ◽  
pp. 335-369
Author(s):  
J. R. Lockwood ◽  
Daniel F. McCaffrey

Background: Analysis of covariance (ANCOVA) is commonly used to adjust for potential confounders in observational studies of intervention effects. Measurement error in the covariates used in ANCOVA models can lead to inconsistent estimators of intervention effects. While errors-in-variables (EIV) regression can restore consistency, it requires surrogacy assumptions for the error-prone covariates that may be violated in practical settings. Objectives: The objectives of this article are (1) to derive asymptotic results for ANCOVA using EIV regression when measurement errors may not satisfy the standard surrogacy assumptions and (2) to demonstrate how these results can be used to explore the potential bias from ANCOVA models that either ignore measurement error by using ordinary least squares (OLS) regression or use EIV regression when its required assumptions do not hold. Results: The article derives asymptotic results for ANCOVA with error-prone covariates that cover a variety of cases relevant to applications. It then uses the results in a case study of choosing among ANCOVA model specifications for estimating teacher effects using longitudinal data from a large urban school system. It finds evidence that estimates of teacher effects computed using EIV regression may have smaller bias than estimates computed using OLS regression when the data available for adjusting for students’ prior achievement are limited.
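A minimal method-of-moments sketch of the EIV idea, assuming the standard surrogacy assumptions hold and using illustrative parameter values: when the covariate's reliability is known, subtracting the implied measurement-error variance from the covariate's sum of squares restores the consistency that OLS lacks.

```python
import numpy as np

# Sketch of errors-in-variables (EIV) regression with known reliability.
# All parameter values here are illustrative assumptions, not estimates from data.
rng = np.random.default_rng(5)
n, beta, tau, rel = 100_000, 1.0, 0.4, 0.7
g = rng.integers(0, 2, n).astype(float)            # intervention indicator
t = rng.normal(0.5 * g, 1.0)                       # true prior achievement
sigma2_e = 1 / rel - 1                             # error variance implied by reliability
x = t + rng.normal(0.0, np.sqrt(sigma2_e), n)      # error-prone test score
y = tau * g + beta * t + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), g, x])
ols = np.linalg.lstsq(X, y, rcond=None)[0]         # ignores measurement error

M = X.T @ X
M[2, 2] -= n * sigma2_e                            # remove the error variance from x'x
eiv = np.linalg.solve(M, X.T @ y)
print(ols[1], eiv[1])   # OLS is biased away from tau; EIV lands close to tau
```

When surrogacy fails, for example when the error is correlated with the outcome, the moment correction above no longer targets the right quantity, which is the situation the article's asymptotic results address.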


Author(s):  
Edward F. Durner

Abstract This chapter focuses on the analysis of covariance. In the analysis of covariance, there is some measurable characteristic associated with the experimental units that appears to contribute significant variability to an experiment. This variability inflates the error term and makes it harder to reject the null hypothesis of no treatment effect. If this variability could be pulled out of the error term, a larger F-value would be obtained, making it possible to reject the null hypothesis; this is what is meant by increasing precision. Treatment effects on nematode populations in the soil and on yield are used as examples.
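The precision gain can be sketched with simulated data (illustrative numbers, not the chapter's example): adjusting for a covariate that drives much of the variability shrinks the error variance, which is exactly what inflates the F-value.

```python
import numpy as np

# Simulated illustration: covariate-driven variability inflates the error term.
rng = np.random.default_rng(1)
n = 60
g = np.repeat([0.0, 1.0], n // 2)                 # two treatments
z = rng.normal(0.0, 2.0, n)                       # covariate (e.g. initial infestation)
y = 1.0 * g + 1.5 * z + rng.normal(0.0, 1.0, n)   # response (e.g. yield)

def resid_var(X, y):
    """Residual mean square from an OLS fit."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b
    return r @ r / (len(y) - X.shape[1])

mse_anova = resid_var(np.column_stack([np.ones(n), g]), y)       # covariate ignored
mse_ancova = resid_var(np.column_stack([np.ones(n), g, z]), y)   # covariate adjusted
print(mse_anova, mse_ancova)   # ANCOVA error variance is far smaller
```

Since the F-statistic for the treatment divides by the error mean square, the smaller ANCOVA error term translates directly into greater power for the same data.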


Author(s):  
David Aaby ◽  
Juned Siddique

Abstract Background: Lifestyle intervention studies often use self-reported measures of diet as an outcome variable to measure changes in dietary intake. The presence of measurement error in self-reported diet due to participants' failure to accurately report their diet is well known. Less familiar to researchers is differential measurement error, where the nature of the measurement error differs by treatment group and/or time. Differential measurement error is often present in intervention studies and can result in biased estimates of the treatment effect and reduced power to detect treatment effects. Investigators need to be aware of the impact of differential measurement error when designing intervention studies that use self-reported measures. Methods: We use simulation to assess the consequences of differential measurement error on the ability to estimate treatment effects in a two-arm randomized trial with two time points. We simulate data under a variety of scenarios, focusing on how different factors affect power to detect a treatment effect, bias of the treatment effect, and coverage of the 95% confidence interval of the treatment effect. Simulations use realistic scenarios based on data from the Trials of Hypertension Prevention Study, with sample sizes ranging from 110 to 380 per group. Results: Realistic differential measurement error seen in lifestyle intervention studies can require an increased sample size to achieve 80% power to detect a treatment effect and may result in a biased estimate of the treatment effect. Conclusions: Investigators designing intervention studies that use self-reported measures should take differential measurement error into account by increasing their sample size, incorporating an internal validation study, and/or identifying statistical methods to correct for differential measurement error.
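A stripped-down sketch of the core problem, using illustrative numbers rather than the paper's simulation design: if the intervention group systematically under-reports intake at follow-up, the group-specific error shifts the estimated treatment effect.

```python
import numpy as np

# Illustrative only: differential (group-specific) reporting error at follow-up.
rng = np.random.default_rng(2)
n = 200
true_change_ctrl = np.zeros(n)                   # control: no true diet change
true_change_trt = np.full(n, -2.0)               # intervention: true change of -2 units
err_ctrl = rng.normal(0.0, 1.0, n)               # nondifferential reporting error
err_trt = rng.normal(-1.0, 1.0, n)               # extra under-reporting under treatment

obs_effect = (true_change_trt + err_trt).mean() - (true_change_ctrl + err_ctrl).mean()
print(obs_effect)   # near -3.0: the differential error exaggerates the true -2.0
```

In this direction the error inflates the apparent benefit; an error of the opposite sign would mask a real effect, which is why both bias and power are affected.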


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subject of study is consistent sums of measurement errors. The individual errors are assumed to follow the normal law, but with a restriction on the marginal error, Δpred = 2m. It is known that for each confidence interval there is some number of terms ni at which the value of the sum equals zero; the paradox is that the probability of such an event is zero, so the value of ni at which the sum becomes zero cannot be determined. The article instead proposes considering the event that a sum of errors remains within the ±2m limits at a confidence level of 0.954. Within this group, all the sums share a common limit error, and these tolerances are proposed for use as discrepancy limits in geodesy in place of 2m·√ni. The concept of “the law of the truncated normal distribution with Δpred = 2m” is suggested to be introduced.
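The 0.954 confidence level and the classical 2m·√ni tolerance can be checked numerically; this is a quick sketch, not code from the article.

```python
from math import erf, sqrt

# P(|X| <= 2m) for X ~ N(0, m^2): the confidence level quoted above
p = erf(2 / sqrt(2))
print(round(p, 3))          # 0.954

# Classical discrepancy tolerance for a sum of ni independent errors, in units of m
ni = 9
classical = 2 * sqrt(ni)    # 2m * sqrt(ni), i.e. 6m for ni = 9
```

The article's proposal keeps the per-sum tolerance at 2m rather than letting it grow with √ni, which is what motivates truncating the normal law at Δpred = 2m.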


Author(s):  
SCOTT CLIFFORD ◽  
GEOFFREY SHEAGLEY ◽  
SPENCER PISTON

The use of survey experiments has surged in political science. The most common design is the between-subjects design in which the outcome is only measured posttreatment. This design relies heavily on recruiting a large number of subjects to precisely estimate treatment effects. Alternative designs that involve repeated measurements of the dependent variable promise greater precision, but they are rarely used out of fears that these designs will yield different results than a standard design (e.g., due to consistency pressures). Across six studies, we assess this conventional wisdom by testing experimental designs against each other. Contrary to common fears, repeated measures designs tend to yield the same results as more common designs while substantially increasing precision. These designs also offer new insights into treatment effect size and heterogeneity. We conclude by encouraging researchers to adopt repeated measures designs and providing guidelines for when and how to use them.
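The precision gain the authors describe can be sketched in a small simulation with assumed parameter values (not the authors' data): adjusting the posttreatment outcome for a pretreatment measurement of the same variable shrinks the variance of the estimated treatment effect without changing its target.

```python
import numpy as np

# Illustrative simulation: post-only vs. repeated-measures (pretreatment-adjusted).
rng = np.random.default_rng(3)
n, reps, tau, rho = 100, 2000, 0.3, 0.8
post_only, prepost = [], []
for _ in range(reps):
    g = np.repeat([0.0, 1.0], n // 2)                          # random assignment
    pre = rng.normal(0.0, 1.0, n)                              # pretreatment measure
    post = tau * g + rho * pre + rng.normal(0.0, np.sqrt(1 - rho**2), n)
    post_only.append(post[g == 1].mean() - post[g == 0].mean())
    X = np.column_stack([np.ones(n), g, pre])                  # adjust for pre
    prepost.append(np.linalg.lstsq(X, post, rcond=None)[0][1])
print(np.var(post_only), np.var(prepost))   # same answer on average, smaller variance
```

Both estimators center on tau; the adjusted design's variance is roughly (1 − rho²) times the post-only variance, which is the "substantially increasing precision" at work.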


Author(s):  
Sean Wharton ◽  
Arne Astrup ◽  
Lars Endahl ◽  
Michael E. J. Lean ◽  
Altynai Satylganova ◽  
...  

Abstract In the approval process for new weight management therapies, regulators typically require estimates of effect size. Usually, as with other drug evaluations, the placebo-adjusted treatment effect (i.e., the difference between weight losses with pharmacotherapy and placebo, when given as an adjunct to lifestyle intervention) is provided from data in randomized clinical trials (RCTs). At first glance, this may seem appropriate and straightforward. However, weight loss is not a simple direct drug effect, but is also mediated by other factors such as changes in diet and physical activity. Interpreting observed differences between treatment arms in weight management RCTs can be challenging; intercurrent events that occur after treatment initiation may affect the interpretation of results at the end of treatment. Utilizing estimands helps to address these uncertainties and improve transparency in clinical trial reporting by better matching the treatment-effect estimates to the scientific and/or clinical questions of interest. Estimands aim to provide an indication of trial outcomes that might be expected in the same patients under different conditions. This article reviews how intercurrent events during weight management trials can influence placebo-adjusted treatment effects, depending on how they are accounted for and how missing data are handled. The most appropriate method for statistical analysis is also discussed, including assessment of the last observation carried forward approach, and more recent methods, such as multiple imputation and mixed models for repeated measures. The use of each of these approaches, and that of estimands, is discussed in the context of the SCALE phase 3a and 3b RCTs evaluating the effect of liraglutide 3.0 mg for the treatment of obesity.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Fei Wan

Abstract Background: Randomized pre-post designs, with outcomes measured at baseline and after treatment, have been commonly used to compare the clinical effectiveness of two competing treatments. There is a vast, but often conflicting, amount of information in the current literature about the best analytic methods for pre-post designs, and it is challenging for applied researchers to make an informed choice. Methods: We discuss six methods commonly used in the literature: one-way analysis of variance (“ANOVA”), analysis of covariance main effect and interaction models on the post-treatment score (“ANCOVAI” and “ANCOVAII”), ANOVA on the change score between the baseline and post-treatment scores (“ANOVA-Change”), and repeated measures (“RM”) and constrained repeated measures (“cRM”) models on the baseline and post-treatment scores as joint outcomes. We review a number of study endpoints in randomized pre-post designs and identify the mean difference in the post-treatment score as the common treatment effect that all six methods target. We delineate the underlying differences and connections between these competing methods in homogeneous and heterogeneous study populations. Results: ANCOVA and cRM outperform the alternative methods because their treatment effect estimators have the smallest variances. cRM has comparable performance to ANCOVAI in the homogeneous scenario and to ANCOVAII in the heterogeneous scenario. Even so, ANCOVA has several advantages over cRM: (i) the baseline score is adjusted for as a covariate, because it is not an outcome by definition; and (ii) it is convenient to incorporate other baseline variables and to handle complex heteroscedasticity patterns in a linear regression framework. Conclusions: ANCOVA is simple and the most efficient of the approaches considered for analyzing pre-post randomized designs.
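The variance ranking reported here can be illustrated with a quick simulation under assumed parameter values: ANOVA on the post score, ANOVA on the change score, and ANCOVA all target the same mean difference in the post-treatment score, but ANCOVA's estimator has the smallest variance.

```python
import numpy as np

# Illustrative comparison of three estimators of the same treatment effect.
rng = np.random.default_rng(4)
n, reps, tau, rho = 100, 2000, 0.5, 0.6
ests = {"ANOVA": [], "ANOVA-Change": [], "ANCOVA": []}
for _ in range(reps):
    g = np.repeat([0.0, 1.0], n // 2)
    base = rng.normal(0.0, 1.0, n)                               # baseline score
    post = tau * g + rho * base + rng.normal(0.0, np.sqrt(1 - rho**2), n)
    diff = lambda v: v[g == 1].mean() - v[g == 0].mean()
    ests["ANOVA"].append(diff(post))                # post score alone
    ests["ANOVA-Change"].append(diff(post - base))  # change score
    X = np.column_stack([np.ones(n), g, base])      # baseline as covariate
    ests["ANCOVA"].append(np.linalg.lstsq(X, post, rcond=None)[0][1])
print({k: round(float(np.var(v)), 4) for k, v in ests.items()})
```

With a baseline–post correlation of 0.6, the change-score estimator beats post-only ANOVA, and ANCOVA beats both, matching the efficiency ordering described in the abstract for the homogeneous case.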


2021 ◽  
pp. 1-22
Author(s):  
Daisuke Kurisu ◽  
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.


Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as often, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.


2000 ◽  
Vol 30 (2) ◽  
pp. 306-310 ◽  
Author(s):  
M S Williams ◽  
H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that only using tree diameter, rather than both diameter and height, is more reliable for predicting tree volumes. Based on data for different tree species of excurrent form, we conclude that measurement errors up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.

