Accounting for Confounding by Time, Early Intervention Adoption, and Time-Varying Effect Modification in the Design and Analysis of Stepped-Wedge Designs: Application to a Proposed Study Design to Reduce Opioid-Related Mortality

Author(s):  
Lior Rennert ◽  
Moonseong Heo ◽  
Alain H Litwin ◽  
Victor De Gruttola

Abstract Background: Stepped-wedge designs (SWDs) are currently being used in the investigation of interventions to reduce opioid-related deaths in communities located in several states. However, these interventions are competing with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and COVID-19 social distancing mandates. Furthermore, control communities may prematurely adopt components of the intervention as they become available. The presence of time-varying external factors that impact study outcomes is a well-known limitation of SWDs; common approaches to adjusting for them make use of a mixed effects modeling framework. However, these models have several shortcomings when external factors differentially impact intervention and control clusters. Methods: We discuss limitations of commonly used mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce opioid-related mortality, and propose extensions of these models to address these limitations. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models in the presence of external factors. We consider confounding by time, premature adoption of components of the intervention, and time-varying effect modification, in which external factors differentially impact intervention and control clusters. Results: In the presence of confounding by time, commonly used mixed effects models yield unbiased intervention effect estimates, but can have inflated Type 1 error and result in undercoverage of confidence intervals. These models yield biased intervention effect estimates when premature intervention adoption or effect modification is present. 
In such scenarios, models incorporating fixed intervention-by-time interactions with an unstructured covariance for intervention-by-cluster-by-time random effects result in unbiased intervention effect estimates, reach nominal confidence interval coverage, and preserve Type 1 error. Conclusions: Mixed effects models can adjust for different combinations of external factors through correct specification of fixed and random time effects; misspecification can result in bias of the intervention effect estimate, undercoverage of confidence intervals, and Type 1 error inflation. Since model choice has considerable impact on validity of results and study power, careful consideration must be given to choosing appropriate models that account for potential external factors.
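As a rough illustration of the design this abstract analyzes, the sketch below simulates a small stepped-wedge trial in which a secular time trend confounds the intervention effect, then fits one commonly used mixed effects model (fixed period effects plus a random cluster intercept) with `statsmodels`. The layout, effect sizes, and model are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stepped-wedge layout: 6 clusters, 7 periods, one cluster
# crossing over to the intervention at each period after baseline.
rng = np.random.default_rng(0)
clusters, periods = 6, 7
rows = []
for c in range(clusters):
    crossover = c + 1                 # period at which cluster c adopts
    for t in range(periods):
        x = int(t >= crossover)       # intervention indicator
        # Outcome: secular time trend (confounding by time) + true
        # intervention effect of -2 + noise.
        y = 10 + 0.5 * t - 2.0 * x + rng.normal(0, 1)
        rows.append((c, t, x, y))
df = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])

# A commonly used SWD analysis model: fixed period effects and a
# random cluster intercept.
m = smf.mixedlm("y ~ treat + C(period)", df, groups=df["cluster"]).fit()
print(m.params["treat"])
```

The abstract's point is that this simple specification can break down when external factors hit intervention and control clusters differently; the proposed fix adds fixed intervention-by-time interactions and richer random effects.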

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Lior Rennert ◽  
Moonseong Heo ◽  
Alain H. Litwin ◽  
Victor De Gruttola

Abstract Background Beginning in 2019, stepped-wedge designs (SWDs) were being used in the investigation of interventions to reduce opioid-related deaths in communities across the United States. However, these interventions are competing with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and the COVID-19 pandemic. Furthermore, control communities may prematurely adopt components of the intervention as they become available. The presence of time-varying external factors that impact study outcomes is a well-known limitation of SWDs; common approaches to adjusting for them make use of a mixed effects modeling framework. However, these models have several shortcomings when external factors differentially impact intervention and control clusters. Methods We discuss limitations of commonly used mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce opioid-related mortality, and propose extensions of these models to address these limitations. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models in the presence of external factors. We consider confounding by time, premature adoption of intervention components, and time-varying effect modification, in which external factors differentially impact intervention and control clusters. Results In the presence of confounding by time, commonly used mixed effects models yield unbiased intervention effect estimates, but can have inflated Type 1 error and result in undercoverage of confidence intervals. These models yield biased intervention effect estimates when premature intervention adoption or effect modification is present. 
In such scenarios, models incorporating fixed intervention-by-time interactions with an unstructured covariance for intervention-by-cluster-by-time random effects result in unbiased intervention effect estimates, reach nominal confidence interval coverage, and preserve Type 1 error. Conclusions Mixed effects models can adjust for different combinations of external factors through correct specification of fixed and random time effects. Since model choice has considerable impact on validity of results and study power, careful consideration must be given to how these external factors impact study endpoints and what estimands are most appropriate in the presence of such factors.


2020 ◽  
Author(s):  
Lior Rennert ◽  
Moonseong Heo ◽  
Alain H Litwin ◽  
Victor De Gruttola

Background: Stepped-wedge designs (SWDs) are currently being used to investigate interventions to reduce opioid overdose deaths in communities located in several states. However, these interventions are competing with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and social distancing orders due to the COVID-19 pandemic. Furthermore, control communities may prematurely adopt components of the proposed intervention as they become widely available. These types of events induce confounding of the intervention effect by time. Such confounding is a well-known limitation of SWDs; a common approach to adjusting for it makes use of a mixed effects modeling framework that includes both fixed and random effects for time. However, these models have several shortcomings when multiple confounding factors are present. Methods: We discuss the limitations of existing methods based on mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce mortality associated with the opioid epidemic, and propose solutions to accommodate deviations from assumptions that underlie these models. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models under different sources of confounding. We specifically examine the impact of factors external to the study and premature adoption of intervention components. Results: When only external factors are present, our simulation studies show that commonly used mixed effects models can result in unbiased estimates of the intervention effect, but have inflated Type 1 error and result in undercoverage of confidence intervals. These models are severely biased when confounding factors differentially impact intervention and control clusters; premature adoption of intervention components is an example of this scenario. 
In these scenarios, models that incorporate fixed intervention-by-time interaction terms and an unstructured covariance for the intervention-by-cluster-by-time random effects result in unbiased estimates of the intervention effect, reach nominal confidence interval coverage, and preserve Type 1 error, but may reduce power. Conclusions: The incorporation of fixed and random time effects in mixed effects models requires certain assumptions about the impact of confounding by time in SWDs. Violations of these assumptions can result in severe bias of the intervention effect estimate, undercoverage of confidence intervals, and inflated Type 1 error. Since model choice has considerable impact on study power as well as validity of results, careful consideration needs to be given to choosing an appropriate model that takes into account potential confounding factors.


2020 ◽  
Vol 103 (6) ◽  
pp. 1667-1679
Author(s):  
Shizhen S Wang

Abstract Background There are several statistical methods for detecting a difference of detection rates between alternative and reference qualitative microbiological assays in a single laboratory validation study with a paired design. Objective We compared performance of eight methods including McNemar’s test, sign test, Wilcoxon signed-rank test, paired t-test, and the regression methods based on conditional logistic (CLOGIT), mixed effects complementary log-log (MCLOGLOG), mixed effects logistic (MLOGIT) models, and a linear mixed effects model (LMM). Methods We first compared the minimum detectable difference in the proportion of detections between the alternative and reference detection methods among these statistical methods for a varied number of test portions. We then compared power and type 1 error rates of these methods using simulated data. Results The MCLOGLOG and MLOGIT models had the lowest minimum detectable difference, followed by the LMM and paired t-test. The MCLOGLOG and MLOGIT models had the highest average power but were anticonservative when correlation between the pairs of outcome values of the alternative and reference methods was high. The LMM and paired t-test had mostly the highest average power when the correlation was low and the second highest average power when the correlation was high. Type 1 error rates of these last two methods approached the nominal value of significance level when the number of test portions was moderately large (n > 20). Highlights The LMM and paired t-test are better choices than other competing methods, and we provide an example using real data.
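Several of the methods this study compares are available in standard Python libraries. The sketch below, using made-up paired detection data (1 = detected) for a hypothetical alternative and reference assay, runs McNemar's exact test alongside a paired t-test; the data and the 25-portion sample size are purely illustrative.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired detection results on the same 25 test portions.
alt = np.array([1]*16 + [0]*5 + [1, 1, 0, 1])   # alternative assay
ref = np.array([1]*12 + [0]*9 + [1, 1, 0, 0])   # reference assay

# 2x2 table of paired outcomes: rows = alternative, columns = reference.
table = np.array([
    [np.sum((alt == 1) & (ref == 1)), np.sum((alt == 1) & (ref == 0))],
    [np.sum((alt == 0) & (ref == 1)), np.sum((alt == 0) & (ref == 0))],
])

# McNemar's exact test uses only the discordant pairs (off-diagonal cells).
res = mcnemar(table, exact=True)
# Paired t-test on the same 0/1 outcomes, as one of the compared methods.
t_res = ttest_rel(alt, ref)
print(res.pvalue, t_res.pvalue)
```

McNemar's test conditions on the discordant pairs, whereas the paired t-test and LMM use all pairs, which is one source of the power differences the abstract reports.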


2016 ◽  
Vol 27 (7) ◽  
pp. 2200-2215 ◽  
Author(s):  
Masahiko Gosho ◽  
Kazushi Maruo ◽  
Ryota Ishii ◽  
Akihiro Hirakawa

The total score, which is calculated as the sum of scores in multiple items or questions, is repeatedly measured in longitudinal clinical studies. A mixed effects model for repeated measures method is often used to analyze these data; however, if one or more individual items are not measured, the method cannot be directly applied to the total score. We develop two simple and interpretable procedures that infer fixed effects for a longitudinal continuous composite variable. These procedures consider that the items that compose the total score are multivariate longitudinal continuous data and, simultaneously, handle subject-level and item-level missing data. One procedure is based on a multivariate marginalized random effects model with a multiple of Kronecker product covariance matrices for serial time dependence and correlation among items. The other procedure is based on a multiple imputation approach with a multivariate normal model. In terms of the type-1 error rate and the bias of treatment effect in total score, the marginalized random effects model and multiple imputation procedures performed better than the standard mixed effects model for repeated measures analysis with listwise deletion and single imputations for handling item-level missing data. In particular, the mixed effects model for repeated measures with listwise deletion resulted in substantial inflation of the type-1 error rate. The marginalized random effects model and multiple imputation methods provide for a more efficient analysis by fully utilizing the partially available data, compared to the mixed effects model for repeated measures method with listwise deletion.
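A minimal illustration of why item-level handling matters: under listwise deletion, a single missing item discards the whole subject's total score, whereas item-level approaches retain the partial information. The mean imputation below is a deliberately crude stand-in for the marginalized random effects model and multiple imputation procedures the article develops; the data are invented.

```python
import numpy as np

# Hypothetical item-level data: 8 subjects x 4 items; total = row sum.
rng = np.random.default_rng(1)
items = rng.normal(5.0, 1.0, size=(8, 4))
items[2, 3] = np.nan    # one missing item for subject 2
items[5, 0] = np.nan    # one missing item for subject 5

# Listwise deletion: any missing item drops the entire subject.
complete = ~np.isnan(items).any(axis=1)
total_listwise = items[complete].sum(axis=1)   # only 6 subjects remain

# Item-level handling (crude single imputation by item mean, for
# illustration only): every subject keeps a total score.
col_means = np.nanmean(items, axis=0)
filled = np.where(np.isnan(items), col_means, items)
total_all = filled.sum(axis=1)                 # all 8 subjects retained

print(len(total_listwise), len(total_all))
```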


2021 ◽  
Author(s):  
Lesa Hoffman

In longitudinal models with time-varying predictors, the need to distinguish their within-person (WP) relations of time-specific residuals from their between-person (BP) relations of individual means is relatively well-known. In contrast, the need to further distinguish their BP relations of individual time slopes has received much less attention. This article addresses the deleterious impact that ignoring effects of individual time slopes in time-varying predictors can have on the recovery of BP intercept and WP residual relations in commonly used variants of longitudinal models. Using simulation methods and analyses of example data, this problem is demonstrated within univariate longitudinal models (i.e., multilevel or mixed-effects models using observed predictors), as well as in multivariate longitudinal models (i.e., structural equation models using latent predictors, including those for cross-lagged relations). Recommendations are provided for how to avoid conflating the BP and WP associations of longitudinal variables in practice.
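The WP/BP decomposition the article discusses is commonly operationalized by person-mean centering. The sketch below, on made-up long-format data, separates the two components and also extracts per-person time slopes of the predictor, the piece the article argues is often ignored; the variable names are illustrative.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format data: time-varying predictor x for 3 persons.
df = pd.DataFrame({
    "person": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "time":   [0, 1, 2] * 3,
    "x":      [2.0, 3.0, 4.0, 5.0, 5.0, 5.0, 1.0, 2.0, 6.0],
})

# Between-person component: each person's mean of x across time.
df["x_bp"] = df.groupby("person")["x"].transform("mean")
# Within-person component: time-specific deviation from the person mean.
df["x_wp"] = df["x"] - df["x_bp"]

# The further BP component of individual time slopes (per-person linear
# trend in x) that is often left unmodeled:
slopes = df.groupby("person").apply(
    lambda g: np.polyfit(g["time"], g["x"], 1)[0])
print(df["x_wp"].tolist(), slopes.tolist())
```

If `x` trends over time within persons, leaving these slopes out of the model can contaminate both the BP intercept relation and the WP residual relation, which is the conflation the article demonstrates.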


2017 ◽  
Author(s):  
Daniel Lakens

Pre-registration is a straightforward way to make science more transparent, and to control Type 1 error rates. Pre-registration is often presented as beneficial for science in general, but rarely as a practice that leads to immediate individual benefits for researchers. One benefit of pre-registered studies is that they allow for non-conventional research designs that are more efficient than conventional designs. For example, by performing one-tailed tests and sequential analyses researchers can perform well-powered studies much more efficiently. Here, I examine whether such non-conventional but more efficient designs are considered appropriate by editors under the pre-condition that the analysis plans are pre-registered, and if so, whether researchers are more willing to pre-register their analysis plan to take advantage of the efficiency benefits of non-conventional designs. Study 1 shows the large majority of editors judged one-tailed tests and sequential analyses to be appropriate in psychology, but only when such analyses are pre-registered. In Study 2 I asked experimental psychologists to indicate their attitude towards pre-registration. Half of these researchers first read about the acceptance of one-tailed tests and sequential analyses by editors, and the efficiency gains of using these procedures. However, learning about the efficiency benefits associated with one-tailed tests and sequential analyses did not substantially influence researchers' attitudes about benefits and costs of pre-registration, or their willingness to pre-register studies. The self-reported likelihood of pre-registering studies in the next two years, as well as the percentage of studies researchers planned to pre-register in the future, was surprisingly high. 47% of respondents already had experience pre-registering, and 94% of respondents indicated that they would consider pre-registering at least some of their research in the future. 
Given this already strong self-reported willingness to pre-register studies, pointing out immediate individual benefits seems unlikely to be a useful way to increase researchers' willingness to pre-register any further.
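The efficiency claim about one-tailed tests can be made concrete with a standard power calculation. The sketch below uses `statsmodels` to compare the per-group sample size required by a two-tailed versus a one-tailed independent-samples t-test; the effect size and power targets are illustrative assumptions, not figures from the article.

```python
from statsmodels.stats.power import TTestIndPower

# Required n per group to detect a medium effect (d = 0.5) with 80%
# power at alpha = .05, two-tailed versus one-tailed.
analysis = TTestIndPower()
n_two = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05,
                             alternative="two-sided")
n_one = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05,
                             alternative="larger")
print(n_two, n_one)   # the one-tailed test needs fewer participants
```

Sequential analyses yield a further, typically larger, efficiency gain by allowing early stopping, at the cost of requiring the stopping rule to be specified (and, per the article's argument, pre-registered) in advance.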

