Estimation of the Overall Treatment Effect in the Presence of Interference in Cluster-Randomized Trials of Infectious Disease Prevention

2016 ◽ Vol 5 (1) ◽ Author(s): Nicole Bohme Carnegie, Rui Wang, Victor De Gruttola

Abstract: An issue that remains challenging in the field of causal inference is how to relax the assumption of no interference between units. Interference occurs when the treatment of one unit can affect the outcome of another, a situation likely to arise with outcomes that may depend on social interactions, such as the occurrence of infectious disease. Existing methods to accommodate interference largely depend upon an assumption of “partial interference” – interference only within identifiable groups but not among them. There remains a considerable need for methods that allow further relaxation of the no-interference assumption. This paper focuses on an estimand defined as the difference between the outcome that would be observed if treatment were provided to all clusters and the outcome if treatment were provided to none – referred to as the overall treatment effect. In trials of infectious disease prevention, the randomized treatment effect estimate will be attenuated relative to this overall treatment effect if a fraction of the exposures in the treatment clusters come from individuals outside these clusters. This source of interference – contacts sufficient for transmission that are with treated clusters – is potentially measurable. In this manuscript, we leverage epidemic models to infer how a given level of interference affects the incidence of infection in clusters. This leads naturally to an estimator of the overall treatment effect that is easily implemented using existing software.
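As a rough illustration of this attenuation (not the authors' epidemic-model estimator), the sketch below uses a crude constant-hazard model in which a fraction f of a treated-cluster member's transmission-relevant contacts are with untreated individuals outside the cluster. The baseline hazard lam0, the within-cluster multiplicative effect e, and the contact fractions are all assumed quantities.

```python
import numpy as np

# Hypothetical constant-hazard sketch: lam0 is the baseline hazard in
# untreated clusters, e multiplies the hazard arising from exposures that
# originate inside treated clusters, and f is the fraction of
# transmission-relevant contacts made with untreated individuals outside
# the cluster. None of these numbers comes from the paper.

def cumulative_risk(hazard, followup=1.0):
    """Risk of infection over the follow-up period under a constant hazard."""
    return 1.0 - np.exp(-hazard * followup)

lam0 = 0.3   # assumed baseline hazard
e = 0.4      # assumed within-cluster multiplicative effect of treatment

for f in (0.0, 0.1, 0.3, 0.5):
    lam_treated = (1 - f) * e * lam0 + f * lam0       # mixed exposure sources
    rr_observed = cumulative_risk(lam_treated) / cumulative_risk(lam0)
    rr_overall = cumulative_risk(e * lam0) / cumulative_risk(lam0)
    print(f"f = {f:.1f}: observed RR = {rr_observed:.3f}, "
          f"overall RR = {rr_overall:.3f}")
```

With f = 0 the two contrasts coincide; as f grows, the observed relative risk drifts toward 1 while the overall effect is unchanged, which is the attenuation the abstract describes.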

2020 ◽ Vol 39 ◽ pp. 101865 ◽ Author(s): Katherine Riester, Ludwig Kappos, Krzysztof Selmaj, Stacy Lindborg, Ilya Lipkovich, ...

2019 ◽ pp. 004912411985237 ◽ Author(s): Roberto V. Penaloza, Mark Berends

To measure “treatment” effects, social science researchers typically rely on nonexperimental data. In education, school and teacher effects on students are often measured through value-added models (VAMs) that are not fully understood. We propose a framework that relates to the education production function in its most flexible form and connects with the basic VAMs without invoking untenable assumptions. We illustrate how, in the presence of measurement error (ME), cross-group imbalances created by nonrandom group assignment induce correlations that bias the models’ treatment-effect estimates. We derive formulas to calculate the bias, use them to rank the models, and show that no model is best in all situations. We verify and illustrate the workings of the framework and formulas via simulation. We also evaluate the performance of latent variable/errors-in-variables models that handle ME and study the role of extra covariates, including lags of the outcome.
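The simulation sketch below (an assumed data-generating process, not the paper's framework or formulas) illustrates the mechanism: when students sort into a "treatment" school partly on true prior achievement but the analyst adjusts only for a noisy pretest, the value-added estimate of the school effect is biased, and the bias grows as pretest reliability falls.

```python
import numpy as np

# Assumed data-generating process for illustration only: T is true prior
# achievement, selection into treatment depends partly on T, and the
# analyst observes only a noisy pretest X. Adjusting for X instead of T
# leaves residual confounding that biases the estimated school effect.

rng = np.random.default_rng(0)
n = 200_000
true_effect = 5.0

T = rng.normal(50, 10, n)                                  # true prior achievement
treat = (T + rng.normal(0, 10, n) > 55).astype(float)      # nonrandom sorting
Y = 0.8 * T + true_effect * treat + rng.normal(0, 5, n)    # outcome

for reliability in (1.0, 0.8, 0.5):
    noise_var = (1 - reliability) / reliability * T.var()
    X = T + rng.normal(0, np.sqrt(noise_var), n)           # observed pretest
    design = np.column_stack([np.ones(n), X, treat])
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    print(f"pretest reliability {reliability:.1f}: "
          f"estimated effect = {coef[2]:.2f} (true {true_effect})")
```

With a perfectly reliable pretest the adjusted estimate recovers the true effect; as reliability falls, under-adjustment inflates the estimate, in line with the ME-driven bias the abstract describes.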


2015 ◽ Vol 6 (1-2) ◽ Author(s): Joel A. Middleton, Peter M. Aronow

Abstract: Many estimators of the average treatment effect, including the difference-in-means, may be biased when clusters of units are allocated to treatment. This bias remains even when the number of units within each cluster grows asymptotically large. In this paper, we propose simple, unbiased, location-invariant, and covariate-adjusted estimators of the average treatment effect in experiments with random allocation of clusters, along with associated variance estimators. We then analyze a cluster-randomized field experiment on voter mobilization in the US, demonstrating that the proposed estimators have precision that is comparable, if not superior, to that of existing, biased estimators of the average treatment effect.
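A minimal sketch of the Horvitz-Thompson-style cluster-total estimator that this line of work builds on is given below; it is not the authors' covariate-adjusted, location-invariant estimator, and the example data are invented.

```python
import numpy as np

# Sketch of a Horvitz-Thompson-style cluster-total estimator of the average
# treatment effect under random allocation of clusters. Inverse-probability
# weighting of observed cluster totals gives unbiased estimates of the
# population total under each arm; dividing their difference by the number
# of units gives the ATE.

def ht_cluster_ate(cluster_totals, cluster_sizes, treated):
    """cluster_totals: observed outcome total in each cluster;
    cluster_sizes: number of units in each cluster;
    treated: boolean array marking clusters assigned to treatment."""
    M = len(cluster_totals)        # number of clusters
    N = cluster_sizes.sum()        # number of units across all clusters
    m_t = treated.sum()            # clusters assigned to treatment
    m_c = M - m_t
    est_total_treated = (M / m_t) * cluster_totals[treated].sum()
    est_total_control = (M / m_c) * cluster_totals[~treated].sum()
    return (est_total_treated - est_total_control) / N

totals = np.array([12.0, 30.0, 7.0, 22.0])      # invented cluster outcome totals
sizes = np.array([40, 60, 35, 55])              # invented cluster sizes
assignment = np.array([True, False, True, False])
print(ht_cluster_ate(totals, sizes, assignment))
```

Unlike the difference-in-means, this cluster-total estimator remains unbiased when cluster sizes are related to the outcomes, but it is not location invariant, which is one motivation for the adjusted estimators the abstract proposes.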


F1000Research ◽ 2018 ◽ Vol 7 ◽ pp. 610 ◽ Author(s): Theodoros Papakonstantinou, Adriani Nikolakopoulou, Gerta Rücker, Anna Chaimani, Guido Schwarzer, ...

In network meta-analysis, it is important to assess the influence of the limitations or other characteristics of individual studies on the estimates obtained from the network. The percentage contribution matrix, which shows how much each direct treatment effect contributes to each treatment effect estimate from network meta-analysis, is crucial in this context. We use ideas from graph theory to derive the percentage that is contributed by each direct treatment effect. We start with the ‘projection’ matrix in a two-step network meta-analysis model, called the H matrix, which is analogous to the hat matrix in a linear regression model. We develop a method to translate H entries to percentage contributions based on the observation that the rows of H can be interpreted as flow networks, where a stream is defined as the composition of a path and its associated flow. We present an algorithm that identifies the flow of evidence in each path and decomposes it into direct comparisons. To illustrate the methodology, we use two published networks of interventions. The first compares no treatment, quinolone antibiotics, non-quinolone antibiotics and antiseptics for underlying eardrum perforations and the second compares 14 antimanic drugs. We believe that this approach is a useful and novel addition to network meta-analysis methodology, which allows the consistent derivation of the percentage contributions of direct evidence from individual studies to network treatment effects.
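The sketch below gives a simplified, greedy version of the stream idea: one row of H is read as a flow network, flow is peeled off path by path, and each stream's flow is split equally over the direct comparisons on its path. The published algorithm specifies the order in which paths are taken and how ties are handled; the edge flows here are invented for a three-treatment example, not taken from either published network.

```python
from collections import defaultdict, deque

# Simplified greedy decomposition of one H-matrix row into streams of
# evidence. Edges are direct comparisons carrying positive flow from the
# source treatment to the sink treatment of the network estimate.

def percentage_contributions(flows, source, sink):
    """flows: dict {(a, b): flow} with positive flow directed a -> b."""
    flows = dict(flows)
    contrib = defaultdict(float)
    while True:
        # Breadth-first search for a path from source to sink using edges
        # that still carry positive flow (shortest path in number of hops).
        parents = {source: None}
        queue = deque([source])
        while queue and sink not in parents:
            node = queue.popleft()
            for (a, b), f in flows.items():
                if a == node and f > 1e-12 and b not in parents:
                    parents[b] = (a, b)
                    queue.append(b)
        if sink not in parents:
            break  # no residual flow connects source and sink
        # Recover the path and its stream (the minimum flow along it).
        path, node = [], sink
        while parents[node] is not None:
            edge = parents[node]
            path.append(edge)
            node = edge[0]
        stream = min(flows[e] for e in path)
        for e in path:
            flows[e] -= stream
            contrib[e] += stream / len(path)  # split the stream over its path
    total = sum(contrib.values())
    return {e: 100 * c / total for e, c in contrib.items()}

# Hypothetical three-treatment network: A vs C estimated via the direct
# comparison A-C and the indirect path A-B-C.
print(percentage_contributions({("A", "C"): 0.6, ("A", "B"): 0.4,
                                ("B", "C"): 0.4}, "A", "C"))
```

For this toy row the direct A-C comparison contributes 60% and each leg of the indirect path 20%, illustrating how flow through a path is translated into percentage contributions of direct evidence.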


2020 ◽ Vol 39 (28) ◽ pp. 4218-4237 ◽ Author(s): Siyun Yang, Fan Li, Monique A. Starks, Adrian F. Hernandez, Robert J. Mentz, ...

Author(s): Lee Kennedy-Shaffer, Marc Lipsitch

Abstract: Randomized controlled trials are crucial for the evaluation of interventions such as vaccinations, but the design and analysis of these studies during infectious disease outbreaks is complicated by statistical, ethical, and logistical factors. Attempts to resolve these complexities have led to the proposal of a variety of trial designs, including individual randomization and several types of cluster randomization designs: parallel-arm, ring vaccination, and stepped wedge designs. Because of the strong time trends present in infectious disease incidence, however, methods generally used to analyze stepped wedge trials may not perform well in these settings. Using simulated outbreaks, we evaluate various designs and analysis methods, including recently proposed methods for analyzing stepped wedge trials, to determine the statistical properties of these methods. While new methods for analyzing stepped wedge trials can provide some improvement over previous methods, we find that they still lag behind parallel-arm cluster-randomized trials and individually randomized trials in achieving adequate power to detect intervention effects. We also find that these methods are highly sensitive to the weighting of effect estimates across time periods. Despite the value of new methods, stepped wedge trials still have statistical disadvantages compared to other trial designs in epidemic settings.
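The weighting issue can be seen with a back-of-the-envelope calculation (invented numbers, not the paper's simulations): when incidence is concentrated in the early periods of an outbreak, equal weighting of period-specific contrasts and information-proportional weighting can summarize the same period effects quite differently.

```python
import numpy as np

# Illustrative sketch of period weighting in a stepped wedge analysis.
# Within each period, clusters that have crossed over are compared with
# those still in control; the period-specific risk differences are then
# pooled under two weighting schemes. All numbers are invented.

# Assumed attack risk per period in control clusters (epidemic waning):
control_risk = np.array([0.20, 0.12, 0.06, 0.03, 0.015, 0.008])
vaccine_efficacy = 0.6                                  # assumed effect
treated_risk = (1 - vaccine_efficacy) * control_risk

# Period-specific risk differences between treated and control conditions.
period_rd = treated_risk - control_risk

# (a) equal weight for every period
equal_weighted = period_rd.mean()
# (b) weights proportional to how many cases (information) each period carries
info_weights = control_risk / control_risk.sum()
info_weighted = (info_weights * period_rd).sum()

print(f"equal-weight summary RD:    {equal_weighted:.4f}")
print(f"information-weighted RD:    {info_weighted:.4f}")
```

The two summaries differ by roughly a factor of two here, even though every period-specific effect corresponds to the same relative risk reduction, which is the sensitivity to period weighting the abstract highlights.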

