Assumption Trade-Offs When Choosing Identification Strategies for Pre-Post Treatment Effect Estimation: An Illustration of a Community-Based Intervention in Madagascar

2015 ◽ Vol 3 (1) ◽ pp. 109-130
Author(s):  
Ann M. Weber ◽  
Mark J. van der Laan ◽  
Maya L. Petersen

Abstract: Failure (or success) in finding a statistically significant effect of a large-scale intervention may be due to choices made in the evaluation. To highlight the potential limitations and pitfalls of some common identification strategies used for estimating causal effects of community-level interventions, we apply a roadmap for causal inference to a pre-post evaluation of a national nutrition program in Madagascar. Selection into the program was non-random and strongly associated with the pre-treatment (lagged) outcome. Using structural causal models (SCM), directed acyclic graphs (DAGs) and simulated data, we illustrate that an estimand with the outcome defined as the post-treatment outcome controls for confounding by the lagged outcome but not by possible unmeasured confounders. Two separate differencing estimands (of the pre- and post-treatment outcomes) have the potential to adjust for a certain type of unmeasured confounding, but introduce bias if the additional identification assumptions they rely on are not met. To illustrate the practical impact of the choice among three common identification strategies and their corresponding estimands, we used observational data from the community nutrition program in Madagascar to estimate each of the three estimands. Specifically, we estimated the average treatment effect of the program on the community mean nutritional status of children aged 5 years and under, and found that the estimate based on the post-treatment estimand was about a quarter of the magnitude of either of the differencing estimands (a 0.066 SD vs. a 0.26–0.27 SD increase in mean weight-for-age z-score). The choice of estimand clearly has important implications for how the program's success in improving the nutritional status of young children is interpreted. A careful appraisal of the assumptions underlying the causal model is imperative before committing to a statistical model and progressing to estimation. Moreover, knowledge of the data-generating process must be sufficient to choose the identification strategy that comes closest to the truth.
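
To make the contrast concrete, here is a minimal simulation sketch (our own illustrative data-generating process, not the authors' data or code): an unmeasured, time-invariant confounder u drives both selection and outcomes, so the post-treatment estimand remains biased even after adjusting for the lagged outcome, while a simple differencing estimand cancels u, but only under the additive, time-invariant assumption the abstract warns about.

```python
# Hedged sketch: assumes an additive, time-invariant unmeasured confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                      # unmeasured, time-invariant confounder
y_pre = u + rng.normal(size=n)              # pre-treatment (lagged) outcome
a = rng.binomial(1, 1 / (1 + np.exp(u)))    # non-random selection driven by u
tau = 0.20                                  # true average treatment effect
y_post = tau * a + u + rng.normal(size=n)

# (A) Post-treatment estimand: regress the post score on treatment,
# adjusting for the lagged outcome; residual confounding by u remains.
X = np.column_stack([np.ones(n), a, y_pre])
print("post-treatment estimand:", np.linalg.lstsq(X, y_post, rcond=None)[0][1])

# (B) Differencing estimand: u cancels in the change score, so this
# recovers tau here, but would itself be biased if selection tracked
# transient shocks to y_pre (regression to the mean).
d = y_post - y_pre
print("differencing estimand:  ", d[a == 1].mean() - d[a == 0].mean())
```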

2020
Author(s):  
ZhiMin Xiao ◽  
Oliver P Hauser ◽  
Charlie Kirkwood ◽  
Daniel Z. Li ◽  
Benjamin Jones ◽  
...  

The use of large-scale Randomised Controlled Trials (RCTs) is fast becoming "the gold standard" for testing the causal effects of policy, social, and educational interventions. RCTs are typically evaluated, and ultimately judged, by the economic, educational, and statistical significance of the Average Treatment Effect (ATE) in the study sample. However, many interventions have heterogeneous treatment effects across different individuals that are not captured by the ATE. One way to identify heterogeneous treatment effects is to conduct subgroup analyses, such as focusing on low-income Free School Meal pupils, as required for projects funded by the Education Endowment Foundation (EEF) in England. As we demonstrate in 48 EEF-funded RCTs involving over 200,000 students, these subgroup analyses are usually not standardised across studies and leave researchers with considerable analytic degrees of freedom, potentially leading to mixed results. Here, we develop and deploy a machine-learning and regression-based framework for systematic estimation of Individualised Treatment Effects (ITE), which can show where a seemingly ineffective and uninformative intervention worked, for whom, and by how much. Our findings have implications for decision-makers in education, public health, and medical trials.
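
The abstract does not spell out the estimator, so the sketch below is a hedged illustration of one standard ITE approach (a "T-learner"): fit separate outcome models in each trial arm and take the difference of their predictions for every individual. The function name and simulated data are our own assumptions, not the authors' framework.

```python
# Hedged T-learner sketch for individualised treatment effects.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_ite(X, treated, y):
    """Return a per-individual treatment effect estimate for each row of X."""
    model_1 = GradientBoostingRegressor().fit(X[treated == 1], y[treated == 1])
    model_0 = GradientBoostingRegressor().fit(X[treated == 0], y[treated == 0])
    return model_1.predict(X) - model_0.predict(X)

# Toy usage: the true effect varies with the first covariate.
rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 5))
t = rng.binomial(1, 0.5, size=2_000)
y = X[:, 0] + t * (0.5 + X[:, 0]) + rng.normal(size=2_000)

ite = t_learner_ite(X, t, y)
print("ATE estimate:", ite.mean())   # the overall effect
print("ITE spread:  ", ite.std())    # heterogeneity the ATE hides
```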


2018 ◽ Vol 15 (3) ◽ pp. 247-256
Author(s):  
Sabine Landau ◽  
Richard Emsley ◽  
Graham Dunn

Background: Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured post-treatment as well as prior to randomisation (baseline), analyses based on (A) the post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) all provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into a component that is mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) their respective change scores or (C) conditioning on baseline measures of both the intermediate and clinical outcomes. Methods: Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data-generating model that included three possible confounding processes involving baseline variables: the first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables; the third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data-generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator. Results: Estimates of causal mediation effects derived by approach (A) will be biased if any of the three processes involving baseline measures of intermediate or clinical outcomes is operating. Necessary assumptions for the change score approach (B) to provide unbiased estimates under either process include the independence of baseline measures and change scores of the intermediate variable. Finally, estimates provided by the analysis of covariance approach (C) were found to be unbiased under all three processes considered here. When applied to the example, there was evidence of mediation under all methods, but the estimate of the indirect effect depended on the approach used, with the proportion mediated varying from 57% to 86%. Conclusion: Trialists planning mediation analyses should measure baseline values of putative mediators as well as of continuous clinical outcomes. An analysis of covariance approach is recommended to avoid potential biases due to confounding processes involving baseline measures of intermediate or clinical outcomes, and not simply for increased precision.
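
As a deliberately simplified illustration of one of these confounding processes, the Monte Carlo sketch below lets the baseline clinical outcome be a common cause of the post-treatment mediator and outcome, then compares the indirect effect estimated by approach (A), post-only, with approach (C), ANCOVA on the baselines. The path coefficients and variable names are our assumptions, not the paper's simulation code.

```python
# Hedged sketch: baseline outcome Y0 confounds the mediator-outcome path.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, a_path, b_path = 5_000, 0.5, 0.6          # true indirect effect = 0.30

y0 = rng.normal(size=n)                       # baseline clinical outcome
t = rng.binomial(1, 0.5, size=n)              # randomised treatment
m1 = a_path * t + 0.7 * y0 + rng.normal(size=n)   # post-treatment mediator
y1 = b_path * m1 + 0.7 * y0 + rng.normal(size=n)  # post-treatment outcome

def indirect(adjust_baseline):
    """Product-of-coefficients indirect effect, with or without Y0 adjusted."""
    Zm = np.column_stack([t, y0] if adjust_baseline else [t])
    Zy = np.column_stack([m1, t, y0] if adjust_baseline else [m1, t])
    a_hat = sm.OLS(m1, sm.add_constant(Zm)).fit().params[1]
    b_hat = sm.OLS(y1, sm.add_constant(Zy)).fit().params[1]
    return a_hat * b_hat

print("approach (A), post-only:", indirect(False))  # b-path inflated by Y0
print("approach (C), ANCOVA:   ", indirect(True))   # close to 0.30
```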


2019
Author(s):  
Kasper Van Mens ◽  
Joran Lokkerbol ◽  
Richard Janssen ◽  
Robert de Lange ◽  
Bea Tiemens

BACKGROUND It remains a challenge to predict which treatment will work for which patient in mental healthcare. OBJECTIVE In this study we compare machine learning algorithms that predict, during treatment, which patients will not benefit from brief mental health treatment, and we present trade-offs that must be considered before an algorithm can be used in clinical practice. METHODS Using an anonymized dataset containing routine outcome monitoring data from a mental healthcare organization in the Netherlands (n = 2,655), we applied three machine learning algorithms to predict treatment outcome. The algorithms were internally validated with cross-validation on a training sample (n = 1,860) and externally validated on an unseen test sample (n = 795). RESULTS The performance of the three algorithms did not significantly differ on the test set. With a default classification cut-off at 0.5 predicted probability, the extreme gradient boosting algorithm showed the highest positive predictive value (PPV) of 0.71 (0.61–0.77), with a sensitivity of 0.35 (0.29–0.41) and an area under the curve of 0.78. A trade-off can be made between PPV and sensitivity by choosing different cut-off probabilities. With a cut-off at 0.63, the PPV increased to 0.87 and the sensitivity dropped to 0.17. With a cut-off at 0.38, the PPV decreased to 0.61 and the sensitivity increased to 0.57. CONCLUSIONS Machine learning can be used to predict treatment outcomes based on routine monitoring data. This allows practitioners to choose their own trade-off between being selective and more certain versus inclusive and less certain.
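
The cut-off mechanics are easy to show in code. Below is a small, hedged sketch (toy data and illustrative function names, not the study's pipeline) that sweeps the predicted-probability threshold and reports PPV against sensitivity; the printed numbers will not match the paper's, since the data are simulated.

```python
# Hedged sketch of the PPV-versus-sensitivity cut-off trade-off.
import numpy as np

def ppv_sensitivity(y_true, p_pred, cutoff):
    """PPV (precision) and sensitivity (recall) at a probability cut-off."""
    pred = p_pred >= cutoff
    tp = np.sum(pred & (y_true == 1))
    ppv = tp / max(pred.sum(), 1)
    sens = tp / max((y_true == 1).sum(), 1)
    return ppv, sens

# Toy predictions whose scores loosely track the true label.
rng = np.random.default_rng(3)
y = rng.binomial(1, 0.3, size=1_000)
p = np.clip(0.3 + 0.3 * y + rng.normal(scale=0.2, size=1_000), 0, 1)

for cutoff in (0.38, 0.50, 0.63):   # the cut-offs discussed in the abstract
    ppv, sens = ppv_sensitivity(y, p, cutoff)
    print(f"cutoff={cutoff:.2f}  ppv={ppv:.2f}  sensitivity={sens:.2f}")
```

Raising the cut-off makes the model more selective (higher PPV, lower sensitivity); lowering it does the reverse, which is exactly the practitioner-facing trade-off the authors describe.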


2021
Author(s):  
Anik Dutta ◽  
Fanny E. Hartmann ◽  
Carolina Sardinha Francisco ◽  
Bruce A. McDonald ◽  
Daniel Croll

Abstract: The adaptive potential of pathogens in novel or heterogeneous environments underpins the risk of disease epidemics. Antagonistic pleiotropy or differential resource allocation among life-history traits can constrain pathogen adaptation. However, we lack an understanding of how the genetic architecture of individual traits can generate trade-offs. Here, we report a large-scale study based on 145 global strains of the fungal wheat pathogen Zymoseptoria tritici from four continents. We measured 50 life-history traits, including virulence and reproduction on 12 different wheat hosts and growth responses to several abiotic stressors. To elucidate the genetic basis of adaptation, we used genome-wide association mapping coupled with genetic correlation analyses. We show that most traits are governed by polygenic architectures and are highly heritable, suggesting that adaptation proceeds mainly through allele frequency shifts at many loci. We identified negative genetic correlations among traits related to host colonization and survival in stressful environments. Such genetic constraints indicate that pleiotropic effects could limit the pathogen's ability to cause host damage. In contrast, adaptation to abiotic stress factors was likely facilitated by synergistic pleiotropy. Our study illustrates how comprehensive mapping of life-history trait architectures across diverse environments allows us to predict the evolutionary trajectories of pathogens confronted with environmental perturbations.


Author(s):  
Chris Gaskell ◽  
Ryan Askey-Jones ◽  
Martin Groom ◽  
Jaime Delgadillo

Abstract Background: This was a multi-site evaluation of psycho-educational transdiagnostic seminars (TDS) as a pre-treatment intervention to enhance the effectiveness and utilisation of high-intensity cognitive behavioural therapy (CBT). Aims: To evaluate the effectiveness of TDS combined with high-intensity CBT (TDS+CBT) versus a matched sample receiving CBT only; to determine the consistency of results across the participating services that employed TDS+CBT; and to determine the acceptability of TDS across patients with different psychological disorders. Method: 106 patients across three services voluntarily attended TDS while on a waiting list for CBT (TDS+CBT). Individual and pooled service pre–post treatment effect sizes were calculated using measures of depression, anxiety and functional impairment. Effectiveness and completion rates for TDS+CBT were compared with a propensity score matched sample from an archival dataset of cases who received high-intensity CBT only. Results: Pre–post treatment effect sizes for TDS+CBT were comparable to those of the matched sample. Recovery rates were greater for the group receiving TDS; however, the difference was not statistically significant. Greater improvements were observed during the waiting-list period for patients who had received TDS, both for depression (d = 0.49 vs. d = 0.07) and anxiety (d = 0.36 vs. d = 0.04). Conclusions: Overall, this new evidence shows a trend for TDS improving symptoms while patients await CBT, across three separate IAPT services. The effectiveness of TDS now warrants further exploration through an appropriately sized randomised controlled trial.
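
The abstract does not state the exact effect-size formula, so the following is a hedged sketch of one conventional choice (Cohen's d with a pooled-SD denominator) for a pre-post comparison; the simulated scores are purely illustrative.

```python
# Hedged sketch: a standard pre-post Cohen's d (pooled-SD denominator).
import numpy as np

def pre_post_d(pre, post):
    """Cohen's d for pre/post symptom scores; positive means improvement."""
    pooled_sd = np.sqrt((np.var(pre, ddof=1) + np.var(post, ddof=1)) / 2)
    return (np.mean(pre) - np.mean(post)) / pooled_sd

# Toy usage: symptom scores falling from pre to post for 106 patients.
rng = np.random.default_rng(4)
pre = rng.normal(20, 5, size=106)
post = pre - rng.normal(2.5, 4, size=106)
print(f"d = {pre_post_d(pre, post):.2f}")
```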


2021 ◽ Vol 21 (1)
Author(s):  
Fei Wan

Abstract Background Randomized pre-post designs, with outcomes measured at baseline and after treatment, have commonly been used to compare the clinical effectiveness of two competing treatments. There is a vast, but often conflicting, amount of information in the current literature about the best analytic methods for pre-post designs, and it is challenging for applied researchers to make an informed choice. Methods We discuss six methods commonly used in the literature: one-way analysis of variance (“ANOVA”), analysis of covariance main effect and interaction models on the post-treatment score (“ANCOVAI” and “ANCOVAII”), ANOVA on the change score between the baseline and post-treatment scores (“ANOVA-Change”), and repeated measures (“RM”) and constrained repeated measures (“cRM”) models on the baseline and post-treatment scores as joint outcomes. We review a number of study endpoints in randomized pre-post designs and identify the mean difference in the post-treatment score as the common treatment effect that all six methods target. We delineate the underlying differences and connections between these competing methods in homogeneous and heterogeneous study populations. Results ANCOVA and cRM outperform the other alternatives because their treatment effect estimators have the smallest variances. cRM has comparable performance to ANCOVAI in the homogeneous scenario and to ANCOVAII in the heterogeneous scenario. Even so, ANCOVA has several advantages over cRM: i) the baseline score is adjusted for as a covariate, since it is not an outcome by definition; and ii) it is convenient to incorporate other baseline variables and to handle complex heteroscedasticity patterns in a linear regression framework. Conclusions ANCOVA is a simple approach and the most efficient one for analyzing randomized pre-post designs.
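
These variance claims are easy to check in a small simulation. Below is a hedged sketch (our parameter choices, not the paper's code) that generates correlated baseline and post-treatment scores under randomization and compares the sampling spread of the ANOVA, ANOVA-Change, and ANCOVA estimators of the same post-treatment mean difference.

```python
# Hedged sketch: three unbiased estimators, different sampling variances.
import numpy as np

rng = np.random.default_rng(5)
tau, rho, n, reps = 0.5, 0.6, 200, 2_000
est = {"ANOVA": [], "ANOVA-Change": [], "ANCOVA": []}

for _ in range(reps):
    t = rng.binomial(1, 0.5, size=n)                 # randomised treatment
    y0 = rng.normal(size=n)                          # baseline score
    y1 = tau * t + rho * y0 + rng.normal(scale=np.sqrt(1 - rho**2), size=n)

    est["ANOVA"].append(y1[t == 1].mean() - y1[t == 0].mean())
    d = y1 - y0
    est["ANOVA-Change"].append(d[t == 1].mean() - d[t == 0].mean())
    X = np.column_stack([np.ones(n), t, y0])         # ANCOVA via OLS
    est["ANCOVA"].append(np.linalg.lstsq(X, y1, rcond=None)[0][1])

for name, vals in est.items():   # all means near tau; ANCOVA smallest SD
    print(f"{name:13s} mean={np.mean(vals):.3f}  sd={np.std(vals):.3f}")
```

With a baseline-post correlation of 0.6, the change-score analysis beats plain ANOVA but both are dominated by ANCOVA, matching the abstract's conclusion.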

