Accounting for external factors and early intervention adoption in the design and analysis of stepped-wedge designs: Application to a proposed study design to reduce opioid-related mortality

Author(s):  
Lior Rennert ◽  
Moonseong Heo ◽  
Alain H Litwin ◽  
Victor De Gruttola

Background: Stepped-wedge designs (SWDs) are currently being used to investigate interventions to reduce opioid overdose deaths in communities located in several states. However, these interventions are competing with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and social distancing orders due to the COVID-19 pandemic. Furthermore, control communities may prematurely adopt components of the proposed intervention as they become widely available. These types of events induce confounding of the intervention effect by time. Such confounding is a well-known limitation of SWDs; a common approach to adjusting for it makes use of a mixed effects modeling framework that includes both fixed and random effects for time. However, these models have several shortcomings when multiple confounding factors are present. Methods: We discuss the limitations of existing methods based on mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce mortality associated with the opioid epidemic, and propose solutions to accommodate deviations from the assumptions that underlie these models. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models under different sources of confounding. We specifically examine the impact of factors external to the study and premature adoption of intervention components. Results: When only external factors are present, our simulation studies show that commonly used mixed effects models can yield unbiased estimates of the intervention effect, but have inflated Type 1 error and undercoverage of confidence intervals. These models are severely biased when confounding factors differentially impact intervention and control clusters; premature adoption of intervention components is an example of this scenario.
In these scenarios, models that incorporate fixed intervention-by-time interaction terms and an unstructured covariance for the intervention-by-cluster-by-time random effects yield unbiased estimates of the intervention effect, reach nominal confidence interval coverage, and preserve Type 1 error, but may reduce power. Conclusions: The incorporation of fixed and random time effects in mixed effects models requires certain assumptions about the impact of confounding by time in SWDs. Violations of these assumptions can result in severe bias of the intervention effect estimate, undercoverage of confidence intervals, and inflated Type 1 error. Since model choice has considerable impact on study power as well as the validity of results, careful consideration needs to be given to choosing an appropriate model that takes into account potential confounding factors.
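The confounding-by-time problem the abstract describes can be illustrated with a minimal, deterministic sketch. This is not the authors' model: it replaces the mixed effects framework with a crude period-stratified estimator, and all numbers (cluster count, trend, effect size) are illustrative assumptions. It shows why a pooled treated-versus-control comparison is biased when an external secular trend coincides with the staggered rollout, while adjusting for time recovers the true effect.

```python
import statistics

def simulate_swd(n_clusters=4, n_periods=5, baseline=10.0, trend=1.0, effect=2.0):
    """Deterministic stepped-wedge data: cluster k starts the intervention at
    period k+1; an external secular trend lowers the outcome over time."""
    rows = []
    for k in range(n_clusters):
        for t in range(n_periods):
            treated = int(t >= k + 1)
            y = baseline - trend * t - effect * treated
            rows.append({"cluster": k, "period": t, "treated": treated, "y": y})
    return rows

def naive_estimate(rows):
    """Pooled control-minus-treated difference; ignores time entirely."""
    treated = [r["y"] for r in rows if r["treated"]]
    control = [r["y"] for r in rows if not r["treated"]]
    return statistics.mean(control) - statistics.mean(treated)

def time_adjusted_estimate(rows):
    """Average the control-minus-treated difference within each period that
    contains both arms (a crude stand-in for fixed time effects)."""
    diffs = []
    for t in sorted({r["period"] for r in rows}):
        trt = [r["y"] for r in rows if r["period"] == t and r["treated"]]
        ctl = [r["y"] for r in rows if r["period"] == t and not r["treated"]]
        if trt and ctl:
            diffs.append(statistics.mean(ctl) - statistics.mean(trt))
    return statistics.mean(diffs)

rows = simulate_swd()
print(naive_estimate(rows))          # inflated by the secular trend
print(time_adjusted_estimate(rows))  # recovers the true effect
```

With these assumed parameters, treated cells occur in later (lower-outcome) periods, so the pooled estimate attributes part of the trend to the intervention; the within-period comparison does not.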

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Lior Rennert ◽  
Moonseong Heo ◽  
Alain H. Litwin ◽  
Victor De Gruttola

Abstract Background Beginning in 2019, stepped-wedge designs (SWDs) were being used in the investigation of interventions to reduce opioid-related deaths in communities across the United States. However, these interventions are competing with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and the COVID-19 pandemic. Furthermore, control communities may prematurely adopt components of the intervention as they become available. The presence of time-varying external factors that impact study outcomes is a well-known limitation of SWDs; common approaches to adjusting for them make use of a mixed effects modeling framework. However, these models have several shortcomings when external factors differentially impact intervention and control clusters. Methods We discuss limitations of commonly used mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce opioid-related mortality, and propose extensions of these models to address these limitations. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models in the presence of external factors. We consider confounding by time, premature adoption of intervention components, and time-varying effect modification, in which external factors differentially impact intervention and control clusters. Results In the presence of confounding by time, commonly used mixed effects models yield unbiased intervention effect estimates, but can have inflated Type 1 error and undercoverage of confidence intervals. These models yield biased intervention effect estimates when premature intervention adoption or effect modification are present.
In such scenarios, models incorporating fixed intervention-by-time interactions with an unstructured covariance for intervention-by-cluster-by-time random effects result in unbiased intervention effect estimates, reach nominal confidence interval coverage, and preserve Type 1 error. Conclusions Mixed effects models can adjust for different combinations of external factors through correct specification of fixed and random time effects. Since model choice has considerable impact on validity of results and study power, careful consideration must be given to how these external factors impact study endpoints and what estimands are most appropriate in the presence of such factors.


2020 ◽  
Author(s):  
Lior Rennert ◽  
Moonseong Heo ◽  
Alain H Litwin ◽  
Victor De Gruttola

Abstract Background: Stepped-wedge designs (SWDs) are currently being used in the investigation of interventions to reduce opioid-related deaths in communities located in several states. However, these interventions are competing with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and COVID-19 social distancing mandates. Furthermore, control communities may prematurely adopt components of the intervention as they become available. The presence of time-varying external factors that impact study outcomes is a well-known limitation of SWDs; common approaches to adjusting for them make use of a mixed effects modeling framework. However, these models have several shortcomings when external factors differentially impact intervention and control clusters. Methods: We discuss limitations of commonly used mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce opioid-related mortality, and propose extensions of these models to address these limitations. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models in the presence of external factors. We consider confounding by time, premature adoption of components of the intervention, and time-varying effect modification, in which external factors differentially impact intervention and control clusters. Results: In the presence of confounding by time, commonly used mixed effects models yield unbiased intervention effect estimates, but can have inflated Type 1 error and result in undercoverage of confidence intervals. These models yield biased intervention effect estimates when premature intervention adoption or effect modification are present.
In such scenarios, models incorporating fixed intervention-by-time interactions with an unstructured covariance for intervention-by-cluster-by-time random effects result in unbiased intervention effect estimates, reach nominal confidence interval coverage, and preserve Type 1 error. Conclusions: Mixed effects models can adjust for different combinations of external factors through correct specification of fixed and random time effects; misspecification can result in bias of the intervention effect estimate, undercoverage of confidence intervals, and Type 1 error inflation. Since model choice has considerable impact on validity of results and study power, careful consideration must be given to choosing appropriate models that account for potential external factors.


2021 ◽  
pp. 1-4
Author(s):  
Michaela Kranepuhl ◽  
Detlef May ◽  
Edna Hillmann ◽  
Lorenz Gygax

Abstract This research communication describes the relationship between the occurrence of lameness and body condition score (BCS) in a sample of 288 cows from a single farm that were repeatedly scored over the course of 9 months, while controlling for confounding variables. The relationship between BCS and lameness was evaluated using generalised linear mixed-effects models. The proportion of lame cows was higher at both low and high BCS, increased with lactation number, and decreased with time since the last claw trimming. This likely reflects the importance of sufficient body condition in the prevention of lameness, but also raises questions about the impact of overcondition on lameness and the influence of claw trimming events on the assessment of lameness. A stronger focus on BCS might allow improved management of lameness, which remains one of the major problems in housed cows.
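The U-shaped risk pattern reported above (more lameness at both low and high BCS) corresponds to a quadratic term on the logit scale. The sketch below is purely illustrative, not the fitted model: the intercept, curvature, and assumed optimum BCS of 3.0 are invented parameters chosen only to show the shape.

```python
import math

def lameness_prob(bcs, intercept=-2.0, curvature=1.5, optimum=3.0):
    """Hypothetical logistic curve: the log-odds of lameness rise as body
    condition score (BCS) moves away from an assumed optimum in either
    direction (a quadratic term on the logit scale). Parameters invented."""
    logit = intercept + curvature * (bcs - optimum) ** 2
    return 1.0 / (1.0 + math.exp(-logit))

# Risk is lowest near the assumed optimum and climbs toward both extremes.
for bcs in (2.0, 2.5, 3.0, 3.5, 4.0):
    print(f"BCS {bcs:.1f}: P(lame) = {lameness_prob(bcs):.2f}")
```

In a real analysis the quadratic coefficient would be estimated by the GLMM alongside random effects for cow, with lactation number and time since claw trimming as covariates.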


2020 ◽  
Vol 103 (6) ◽  
pp. 1667-1679
Author(s):  
Shizhen S Wang

Abstract Background There are several statistical methods for detecting a difference in detection rates between alternative and reference qualitative microbiological assays in a single-laboratory validation study with a paired design. Objective We compared the performance of eight methods, including McNemar’s test, the sign test, the Wilcoxon signed-rank test, the paired t-test, and regression methods based on conditional logistic (CLOGIT), mixed effects complementary log-log (MCLOGLOG), and mixed effects logistic (MLOGIT) models, and a linear mixed effects model (LMM). Methods We first compared the minimum detectable difference in the proportion of detections between the alternative and reference detection methods among these statistical methods for a varied number of test portions. We then compared the power and Type 1 error rates of these methods using simulated data. Results The MCLOGLOG and MLOGIT models had the lowest minimum detectable difference, followed by the LMM and paired t-test. The MCLOGLOG and MLOGIT models had the highest average power but were anticonservative when correlation between the pairs of outcome values of the alternative and reference methods was high. The LMM and paired t-test had mostly the highest average power when the correlation was low and the second highest average power when the correlation was high. Type 1 error rates of these last two methods approached the nominal value of significance level when the number of test portions was moderately large (n > 20). Highlights The LMM and paired t-test are better choices than other competing methods, and we provide an example using real data.
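McNemar's test, the first method the abstract compares, depends only on the discordant pairs of a paired 2x2 table. A minimal stdlib sketch (not the paper's code; the example counts are invented):

```python
import math

def mcnemar_statistic(b, c, continuity=True):
    """McNemar's chi-square statistic for a paired 2x2 table, using only the
    discordant counts: b = alternative+/reference-, c = alternative-/reference+."""
    if b + c == 0:
        return 0.0
    num = (abs(b - c) - 1) ** 2 if continuity else (b - c) ** 2
    return num / (b + c)

def chi2_sf_1df(x):
    """Upper-tail probability of a chi-square(1 df) variate, via the identity
    P(Z**2 > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2)) for standard normal Z."""
    return math.erfc(math.sqrt(x / 2.0))

# Hypothetical validation study: 10 portions detected only by the alternative
# assay, 2 only by the reference assay.
stat = mcnemar_statistic(10, 2)
p = chi2_sf_1df(stat)
print(stat, p)
```

The continuity-corrected form used here is conservative; for small discordant counts an exact binomial version of the test is usually preferred.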


2016 ◽  
Vol 27 (7) ◽  
pp. 2200-2215 ◽  
Author(s):  
Masahiko Gosho ◽  
Kazushi Maruo ◽  
Ryota Ishii ◽  
Akihiro Hirakawa

The total score, which is calculated as the sum of scores on multiple items or questions, is repeatedly measured in longitudinal clinical studies. A mixed effects model for repeated measures (MMRM) is often used to analyze these data; however, if one or more individual items are not measured, the method cannot be directly applied to the total score. We develop two simple and interpretable procedures that infer fixed effects for a longitudinal continuous composite variable. These procedures treat the items that compose the total score as multivariate longitudinal continuous data and simultaneously handle subject-level and item-level missing data. One procedure is based on a multivariate marginalized random effects model with a Kronecker product covariance structure for serial time dependence and correlation among items. The other is based on a multiple imputation approach with a multivariate normal model. In terms of the Type 1 error rate and the bias of the treatment effect on the total score, the marginalized random effects model and multiple imputation procedures performed better than the standard MMRM analysis with listwise deletion or single imputation for handling item-level missing data. In particular, the MMRM with listwise deletion resulted in substantial inflation of the Type 1 error rate. The marginalized random effects model and multiple imputation methods provide a more efficient analysis by fully utilizing the partially available data, compared to the MMRM with listwise deletion.
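The core data problem — a total score that is undefined whenever any component item is missing — can be shown with a toy sketch. This is far simpler than the paper's procedures: it contrasts listwise deletion with a single item-mean imputation (a crude stand-in for the multivariate-normal multiple imputation described above), on invented data.

```python
import statistics

def total_scores_listwise(records, items):
    """Total score per record, dropping any record with a missing item
    (the listwise deletion the abstract warns against)."""
    return [sum(r[i] for i in items) for r in records
            if all(r.get(i) is not None for i in items)]

def total_scores_imputed(records, items):
    """Total score with item-mean imputation: each missing item is filled
    with the observed mean of that item, so no record is discarded."""
    means = {i: statistics.mean(r[i] for r in records if r.get(i) is not None)
             for i in items}
    return [sum(r[i] if r.get(i) is not None else means[i] for i in items)
            for r in records]

# Three subjects, two items; subject 2 is missing item "b".
recs = [{"a": 1, "b": 2}, {"a": 2, "b": None}, {"a": 3, "b": 4}]
print(total_scores_listwise(recs, ["a", "b"]))  # subject 2 is lost entirely
print(total_scores_imputed(recs, ["a", "b"]))   # all three subjects retained
```

Deletion discards the subject's observed item "a" along with the missing "b", which is the efficiency loss (and, under informative missingness, the bias) that motivates the model-based alternatives.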


2020 ◽  
Author(s):  
Anne M. Scheel ◽  
Mitchell Schijen ◽  
Daniel Lakens

When studies with positive results that support the tested hypotheses have a higher probability of being published than studies with negative results, the literature will give a distorted view of the evidence for scientific claims. Psychological scientists have been concerned about the degree of distortion in their literature due to publication bias and inflated Type 1 error rates. Registered Reports were developed with the goal of minimising such biases: in this new publication format, peer review and the decision to publish take place before the study results are known. We compared the results in the full population of published Registered Reports in Psychology (N = 71 as of November 2018) with a random sample of hypothesis-testing studies from the standard literature (N = 152) by searching 633 journals for the phrase ‘test* the hypothes*’ (replicating a method by Fanelli, 2010). Analysing the first hypothesis reported in each paper, we found 96% positive results in standard reports, but only 44% positive results in Registered Reports. The difference remained nearly as large when direct replications were excluded from the analysis (96% vs 50% positive results). This large gap suggests that psychologists underreport negative results to an extent that threatens cumulative science. Although our study did not directly test the effectiveness of Registered Reports at reducing bias, these results show that the introduction of Registered Reports has led to a much larger proportion of negative results appearing in the published literature compared to standard reports.
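The size of the reported gap (96% vs 44% positive results) can be gauged with a standard two-proportion z statistic. This is not the authors' analysis; the counts below are reconstructed approximately from the reported percentages and sample sizes.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample z statistic for a difference in proportions, using the
    pooled estimate for the standard error under the null of equality."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Approximate counts: ~96% of 152 standard reports (146) vs ~44% of 71
# Registered Reports (31) reported a positive first result.
z = two_proportion_z(146, 152, 31, 71)
print(z)
```

Even with the approximation, the statistic is far beyond any conventional threshold, consistent with the paper's description of the gap as large.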


2020 ◽  
Vol 9 ◽  
pp. 25-41
Author(s):  
Dāniels Jukna

Borrowers’ solvency assessment models can not only increase a company’s profit, but also reduce the impact of the negative economic consequences of a crisis. However, there is no consensus on how such models should be built. Given these gaps in the scientific literature, the main aim of this article was to develop a borrowers’ solvency assessment model that can be applied in practice. Logistic regression was found to be the most appropriate method for developing such models, and the goal of this research was to identify the modelling approach that achieves the highest predictability of borrowers’ solvency. By implementing the best-performing model, a non-bank lending company could achieve a 42.5% lower total risk of borrower default than without such a model. Depending on the risk policy of the non-bank lending company, three methodologies were developed, based on different assumptions about the relative importance of Type 1 and Type 2 errors for the company, to determine the exact cut-off value.
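The cut-off selection step described above — trading off Type 1 error (rejecting a solvent borrower) against Type 2 error (accepting a borrower who defaults) — can be sketched as a simple cost-weighted scan over candidate thresholds. This is an illustrative stand-in, not the article's methodology; the scores and cost weights are invented.

```python
def best_cutoff(scores_good, scores_bad, cost_type1=1.0, cost_type2=5.0):
    """Scan candidate cut-offs on a predicted default score and pick the one
    minimizing a weighted cost: Type 1 = rejecting a solvent borrower
    (score >= cut-off), Type 2 = accepting one who defaults (score < cut-off).
    Cost weights are illustrative; a lender would set them from its risk policy."""
    candidates = sorted(set(scores_good) | set(scores_bad))
    best = None
    for c in candidates:
        type1 = sum(s >= c for s in scores_good)  # solvent but rejected
        type2 = sum(s < c for s in scores_bad)    # defaulter but accepted
        cost = cost_type1 * type1 + cost_type2 * type2
        if best is None or cost < best[0]:
            best = (cost, c)
    return best[1]

# Invented default scores for solvent (good) and defaulting (bad) borrowers.
print(best_cutoff([0.1, 0.2, 0.3, 0.6], [0.4, 0.7, 0.8]))
```

Changing the relative costs shifts the chosen threshold, which is exactly why the article derives separate methodologies for different risk policies.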

