Explaining and Controlling Regression to the Mean in Longitudinal Research Designs

2003 ◽  
Vol 46 (6) ◽  
pp. 1340-1351 ◽  
Author(s):  
Xuyang Zhang ◽  
J. Bruce Tomblin

This tutorial examines how regression to the mean influences research findings in longitudinal studies of clinical populations. In such studies, participants are often recruited because their performance deviates systematically from the population mean and are then studied with respect to change in the trait used for this selection. It is shown that in such research the estimates of change can be erroneous due to the effect of regression to the mean. The regression effect is shown to arise from measurement error and a sampling bias of this measurement error in the process of selecting on extreme scores. It is also shown that regression effects are greater with measures that are less reliable and with samples selected on more extreme scores. Furthermore, regression effects are particularly prominent when measures of change are based on changes in dichotomous states formed from quantitative, normally distributed traits. In addition to a formal analysis of regression to the mean, its features are demonstrated via a simulation.
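
The selection mechanism described above is easy to reproduce in a short simulation. The sketch below is only illustrative (the reliability of 0.7, the -1.5 SD selection cutoff, and the sample size are invented, not values from the tutorial): a stable latent trait is measured twice with error, a subsample is selected for extreme time-1 scores, and the group appears to improve at time 2 even though nothing has truly changed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
reliability = 0.7        # assumed share of observed-score variance due to the true trait
cutoff = -1.5            # assumed selection rule: time-1 score below -1.5 SD

# Latent trait is stable over time; only the measurement error differs by occasion.
true_score = rng.normal(0.0, np.sqrt(reliability), n)
error_sd = np.sqrt(1.0 - reliability)
obs_t1 = true_score + rng.normal(0.0, error_sd, n)
obs_t2 = true_score + rng.normal(0.0, error_sd, n)

# Select on extreme time-1 scores, as in the clinical designs discussed above.
selected = obs_t1 < cutoff
mean_t1 = obs_t1[selected].mean()
mean_t2 = obs_t2[selected].mean()

# With no true change, the selected group still drifts back toward the population
# mean: the expected rebound is roughly (1 - reliability) * distance from the mean.
print(f"time-1 mean of selected group: {mean_t1:.2f}")
print(f"time-2 mean of selected group: {mean_t2:.2f}")
print(f"apparent 'improvement':        {mean_t2 - mean_t1:.2f}")
```

Lowering `reliability` or moving `cutoff` further into the tail makes the apparent improvement larger, which is the dependence on reliability and selection extremity that the tutorial describes.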

2019 ◽  
Vol 6 (10) ◽  
pp. 190937 ◽  
Author(s):  
Melissa Bateson ◽  
Dan T. A. Eisenberg ◽  
Daniel Nettle

Longitudinal studies have sought to establish whether environmental exposures such as smoking accelerate the attrition of individuals' telomeres over time. These studies typically control for baseline telomere length (TL) by including it as a covariate in statistical models. However, baseline TL also differs between smokers and non-smokers, and telomere attrition is spuriously linked to baseline TL via measurement error and regression to the mean. Using simulated datasets, we show that controlling for baseline TL overestimates the true effect of smoking on telomere attrition. This bias increases with increasing telomere measurement error and increasing difference in baseline TL between smokers and non-smokers. Using a meta-analysis of longitudinal datasets, we show that, as predicted, the estimated difference in telomere attrition between smokers and non-smokers is greater when statistical models control for baseline TL than when they do not, and that the size of the discrepancy is positively correlated with measurement error. The bias we describe is not specific to smoking and also applies to other exposures. We conclude that, to avoid invalid inference, models of telomere attrition should not control for baseline TL by including it as a covariate. Many claims of accelerated telomere attrition in individuals exposed to adversity need to be re-assessed.
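
A minimal simulation in the spirit of the one described above makes the bias concrete. Everything in the sketch is assumed for illustration (the baseline lengths, measurement-error SD, and the zero true effect of smoking on attrition are not the authors' values); it contrasts a model of attrition without the baseline covariate against one that controls for baseline TL.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical scenario: smokers start with shorter telomeres, but the true
# attrition rate is identical in both groups.
smoker = rng.integers(0, 2, n).astype(float)
true_baseline = rng.normal(7.0 - 0.3 * smoker, 0.6, n)   # kb, illustrative scale
true_attrition = rng.normal(-0.05, 0.02, n)              # same distribution for everyone

measurement_error_sd = 0.3
obs_baseline = true_baseline + rng.normal(0, measurement_error_sd, n)
obs_followup = true_baseline + true_attrition + rng.normal(0, measurement_error_sd, n)
obs_change = obs_followup - obs_baseline

# Model 1: change ~ smoking (no baseline covariate). Smoking coefficient ~ 0 here.
print(sm.OLS(obs_change, sm.add_constant(smoker)).fit().params)

# Model 2: change ~ smoking + observed baseline TL. Regression to the mean via
# measurement error makes smoking look like it accelerates attrition.
X2 = sm.add_constant(np.column_stack([smoker, obs_baseline]))
print(sm.OLS(obs_change, X2).fit().params)
```

Increasing `measurement_error_sd` or the baseline gap between groups inflates the spurious smoking coefficient in the second model, mirroring the pattern the authors report.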


Author(s):  
Fatima Umber Ahmed ◽  
Erin Loraine Kinnally

In this chapter Ahmed and Kinnally provide longitudinal examples and illustrations of how G x E influences may be studied with regard to neurobehavioral (brain) development in human and non-human primates. The chapter provides keen insight into two significant conceptual and methodological issues in the study of G x E interactions. The first is the importance of considering findings from both human and non-human studies of genes and environment, suggesting a more integrative lens for thinking about, planning, and interpreting G x E research. The second is their proposal to use multiple methods to investigate G x E interactions, including both SNP-based and microarray-based approaches. With the massive increase in available large data sources (e.g., genomics, proteomics, metabolomics), future research will clearly benefit from incorporating different methods and sources of data to identify underlying biological mechanisms. Furthermore, the use of longitudinal research designs to study G x E interactions in time-ordered change phenomena such as neurobehavioral development offers a promising approach for identifying and translating basic research findings into practice.


2015 ◽  
Vol 21 (7) ◽  
pp. 506-518 ◽  
Author(s):  
Alden L. Gross ◽  
Andreana Benitez ◽  
Regina Shih ◽  
Katherine J. Bangen ◽  
M. Maria Glymour ◽  
...  

Better performance due to repeated testing can bias long-term trajectories of cognitive aging and correlates of change. We examined whether retest effects differ as a function of individual differences pertinent to cognitive aging: race/ethnicity, age, sex, language, years of education, literacy, and dementia risk factors including apolipoprotein E ε4 status, baseline cognitive performance, and cardiovascular risk. We used data from the Washington Heights-Inwood Columbia Aging Project, a community-based cohort of older adults (n=4073). We modeled cognitive change and retest effects in summary factors for general cognitive performance, memory, executive functioning, and language using multilevel models. Retest effects were parameterized in two ways: as improvement between the first and subsequent testings, and as the square root of the number of prior testings. We evaluated whether the retest effect differed by individual characteristics. The mean retest effect for general cognitive performance was 0.60 standard deviations (95% confidence interval [0.46, 0.74]), and was similar for memory, executive functioning, and language. Retest effects were greater for participants in the lowest quartile of cognitive performance (many of whom met criteria for dementia based on a study algorithm), consistent with regression to the mean. Retest effects did not differ by other characteristics. Retest effects are large in this community-based sample, but do not vary by demographic or dementia-related characteristics. Differential retest effects may therefore not limit the generalizability of inferences across different groups in longitudinal research. (JINS, 2015, 21, 506–518)
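
The two retest parameterizations mentioned in the abstract can be written directly into a mixed-model formula. The sketch below uses a simulated panel and statsmodels' MixedLM purely for illustration; it is not the Washington Heights-Inwood data or the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_people, n_waves = 500, 5

df = pd.DataFrame({
    "id": np.repeat(np.arange(n_people), n_waves),
    "wave": np.tile(np.arange(n_waves), n_people),
})
person_intercept = rng.normal(0.0, 0.5, n_people)[df["id"]]
df["years"] = df["wave"] * 1.5                 # assumed testing interval of 1.5 years
df["cognition"] = (person_intercept
                   - 0.05 * df["years"]        # true aging-related decline
                   + 0.60 * (df["wave"] > 0)   # practice gain after the first exposure
                   + rng.normal(0.0, 0.3, len(df)))

# Parameterization 1: retest as improvement between the first and subsequent testings.
df["retest_indicator"] = (df["wave"] > 0).astype(float)
m1 = smf.mixedlm("cognition ~ years + retest_indicator", df, groups=df["id"]).fit()

# Parameterization 2: retest as the square root of the number of prior testings.
df["sqrt_prior"] = np.sqrt(df["wave"])
m2 = smf.mixedlm("cognition ~ years + sqrt_prior", df, groups=df["id"]).fit()

print(m1.params["retest_indicator"], m2.params["sqrt_prior"])
```

Interactions between the retest term and covariates (e.g., education or APOE status) would then test whether retest effects differ across groups, which is the question the study addresses.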


2021 ◽  
Author(s):  
Jeff Goldsmith ◽  
Tomoko Kitago ◽  
Angel Garcia de la Garza ◽  
Robinson Kundert ◽  
Andreas Luft ◽  
...  

The proportional recovery rule (PRR) posits that most stroke survivors can expect to recover a fixed proportion of their initial motor impairment. As a statistical model, the PRR explicitly relates change scores to baseline values, an approach that has the potential to introduce artifacts and flawed conclusions. We describe approaches that can assess associations between baseline values and changes from baseline while avoiding artifacts due either to mathematical coupling or to regression to the mean arising from measurement error. We also describe methods for comparing different biological models of recovery. Across several real datasets, we find evidence for non-artifactual associations between baseline and change, and support for the PRR over alternative models. We conclude that the PRR remains a biologically relevant model of recovery, and we introduce a statistical perspective that can be used to assess future models.
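
Mathematical coupling, one of the two artifacts noted above, can be demonstrated with a toy null model. In the sketch below the follow-up score is generated independently of baseline (the 66-point ceiling is only meant to evoke a Fugl-Meyer-style scale, not the authors' data), yet the change score still correlates strongly with initial impairment.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
max_score = 66                               # illustrative motor-scale maximum

# Null model: follow-up is independent of baseline, so baseline carries no
# real information about recovery.
baseline = rng.uniform(0, max_score, n)
followup = rng.uniform(0, max_score, n)
change = followup - baseline
initial_impairment = max_score - baseline

# change = followup - baseline shares the "-baseline" term with initial
# impairment, so the two correlate by construction (about 0.71 here).
r_coupled = np.corrcoef(initial_impairment, change)[0, 1]

# Relating initial impairment to the follow-up score itself avoids the coupling.
r_outcome = np.corrcoef(initial_impairment, followup)[0, 1]

print(f"impairment vs change:    r = {r_coupled:.2f}")
print(f"impairment vs follow-up: r = {r_outcome:.2f}")
```

This is why the authors emphasize analyses that can separate genuine baseline-change associations from ones an uninformative null model would also produce.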



Author(s):  
Tom Burns ◽  
Mike Firn

This chapter covers the spectrum of routine monitoring, audit, service evaluation, and formal research. Routine monitoring is an essential task for all mental health professionals, and techniques to make it more palatable are explored, including using routine data for clinical supervision and monitoring team targets. Regular audit is described as an essential tool for logical service development and quality improvement. In the discussion of research, the importance of choosing the correct methodology and of paying attention to detail is stressed. In community psychiatry, sampling bias, regression to the mean, and the Hawthorne effect pose important risks. The hierarchy of research methods is outlined, with randomized controlled trials (RCTs), preferably single- or double-blinded, at the top. Careful statistics and systematic reviews support evidence-based practice. In addition to experimental quantitative trials, there is a place for cohort and case-control studies, as well as for qualitative studies to generate hypotheses.


1989 ◽  
Vol 23 (2) ◽  
pp. 181-186 ◽  
Author(s):  
Gavin Andrews

When deciding which treatments are of benefit, results from placebo-controlled trials are conventionally preferred above all others, and treatments not supported by such trials are viewed sceptically. In this paper it is argued that while randomised controlled trials are desirable, they are not always informative. Other, less robust, research designs can be acceptable when they provide independent evidence that their results are not invalidated by remission, regression to the mean, or placebo effects, particularly if they include post-treatment follow-up assessments. Even when there are difficulties with a research design, one can reasonably conclude that the treatment was responsible for the improvement provided that a standard treatment was delivered, patient compliance was good, and a dose-response relationship was identified.


Social Forces ◽  
1971 ◽  
Vol 50 (2) ◽  
pp. 206 ◽  
Author(s):  
Robert P. Althauser ◽  
Donald Rubin

Symmetry ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 9
Author(s):  
John H. Graham

Best practices in studies of developmental instability, as measured by fluctuating asymmetry, have developed over the past 60 years. Unfortunately, they are haphazardly applied in many of the papers submitted for review. Most often, research designs suffer from lack of randomization, inadequate replication, poor attention to size scaling, lack of attention to measurement error, and unrecognized mixtures of additive and multiplicative errors. Here, I summarize a set of best practices, especially in studies that examine the effects of environmental stress on fluctuating asymmetry.
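
Size scaling and the distinction between additive and multiplicative errors mentioned above are commonly handled through the choice of asymmetry index. The sketch below computes three common variants on invented paired measurements (in the spirit of the unscaled, size-scaled, and log-transformed indices used in the fluctuating-asymmetry literature); it is illustrative, not a prescription from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Hypothetical right/left measurements in which asymmetry scales with trait size,
# i.e., the multiplicative-error situation the abstract warns about.
size = rng.lognormal(mean=2.0, sigma=0.3, size=n)
right = size * np.exp(rng.normal(0.0, 0.02, n))
left = size * np.exp(rng.normal(0.0, 0.02, n))

# Unscaled index: mean |R - L|.
fa_unscaled = np.mean(np.abs(right - left))

# Size-scaled index: mean |R - L| / ((R + L) / 2).
fa_scaled = np.mean(np.abs(right - left) / ((right + left) / 2))

# Log-transformed index: mean |ln R - ln L|, suited to multiplicative errors.
fa_log = np.mean(np.abs(np.log(right) - np.log(left)))

print(f"unscaled:        {fa_unscaled:.4f}")
print(f"size-scaled:     {fa_scaled:.4f}")
print(f"log-transformed: {fa_log:.4f}")
```

Which index is appropriate depends on whether asymmetry grows with trait size, which is one reason the paper stresses attention to size scaling and error structure.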

