Generalizability of Randomized Trial Results to Target Populations

2017 ◽  
Vol 28 (5) ◽  
pp. 532-537 ◽  
Author(s):  
Elizabeth A. Stuart ◽  
Benjamin Ackerman ◽  
Daniel Westreich

Randomized trials play an important role in estimating the effect of a policy or social work program in a given population. While most trial designs provide strong internal validity, they often lack external validity, or generalizability, to the target population of interest. In other words, a randomized trial yields an unbiased estimate of the study sample average treatment effect; however, this estimate may not equal the target population average treatment effect if the study sample is not fully representative of the target population. This article provides an overview of existing strategies to assess and improve the generalizability of randomized trials, through both statistical methods and study design, as well as recommendations on how to implement these ideas in social work research.
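The gap between the two estimands can be made concrete with a minimal numpy sketch (illustrative, not taken from the article): a trial under-samples the units with large effects, so the unadjusted sample ATE misses the population ATE, while inverse-probability-of-selection weighting, one of the statistical strategies the article surveys, recovers it. The selection probabilities are known by construction here; in applied work they would be estimated, for example with a logistic regression of trial membership on covariates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target population: a binary covariate x modifies the
# treatment effect (tau = 2 when x = 1, else 0), so the population
# average treatment effect (PATE) is 1.0.
N = 100_000
x_pop = rng.binomial(1, 0.5, N)
tau_pop = np.where(x_pop == 1, 2.0, 0.0)

# The trial under-samples x = 1 units, so the study sample is not
# representative of the target population.
p_select = np.where(x_pop == 1, 0.05, 0.20)
in_trial = rng.binomial(1, p_select).astype(bool)
tau = tau_pop[in_trial]

# Randomize treatment within the trial and observe outcomes.
n = in_trial.sum()
z = rng.binomial(1, 0.5, n)
y = tau * z + rng.normal(0.0, 1.0, n)

# Unadjusted difference in means: unbiased for the *sample* ATE
# (about 0.4 here), not for the population ATE.
sate_hat = y[z == 1].mean() - y[z == 0].mean()

# Weight each trial unit by the inverse of its selection probability
# so the reweighted trial mimics the covariate mix of the population.
w = 1.0 / p_select[in_trial]
pate_hat = (np.average(y[z == 1], weights=w[z == 1])
            - np.average(y[z == 0], weights=w[z == 0]))
```

With these assumed parameters the weighted estimate lands near the population ATE of 1.0 while the unweighted estimate stays near 0.4, illustrating why representativeness, not randomization, is what external validity turns on.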

2016 ◽  
Vol 41 (4) ◽  
pp. 357-388 ◽  
Author(s):  
Elizabeth A. Stuart ◽  
Anna Rhodes

Background: Given increasing concerns about the relevance of research to policy and practice, there is growing interest in assessing and enhancing the external validity of randomized trials: determining how useful a given randomized trial is for informing a policy question for a specific target population. Objectives: This article highlights recent advances in assessing and enhancing external validity, with a focus on the data needed to make ex post statistical adjustments to enhance the applicability of experimental findings to populations potentially different from their study sample. Research design: We use a case study to illustrate how to generalize treatment effect estimates from a randomized trial sample to a target population, in particular comparing the sample of children in a randomized trial of a supplemental program for Head Start centers (the Research-Based, Developmentally Informed study) to the national population of children eligible for Head Start, as represented in the Head Start Impact Study. Results: For this case study, common data elements between the trial sample and population were limited, making reliable generalization from the trial sample to the population challenging. Conclusions: To answer important questions about external validity, more publicly available data are needed. In addition, future studies should make an effort to collect measures similar to those in other data sets. Measure comparability between population data sets and randomized trials that use samples of convenience will greatly enhance the range of research and policy relevant questions that can be answered.
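A first step in the kind of trial-versus-population comparison described above is a balance diagnostic on the common data elements. The sketch below (illustrative data, not from the Research-Based, Developmentally Informed study or the Head Start Impact Study) computes a standardized mean difference between a trial sample and a population data set; differences above roughly 0.25 standard deviations are often taken as a warning that generalization will lean heavily on modeling.

```python
import numpy as np

def standardized_mean_diff(x_trial, x_pop):
    """Difference in means, expressed in pooled-standard-deviation units."""
    pooled_sd = np.sqrt((x_trial.var(ddof=1) + x_pop.var(ddof=1)) / 2)
    return (x_trial.mean() - x_pop.mean()) / pooled_sd

rng = np.random.default_rng(0)
trial_age = rng.normal(4.2, 0.6, 2_000)   # hypothetical trial covariate
pop_age = rng.normal(4.0, 0.6, 20_000)    # hypothetical population covariate

smd = standardized_mean_diff(trial_age, pop_age)  # roughly 0.33 here
```

Computing this for every shared covariate is only possible when the trial and the population data set measure the same constructs the same way, which is exactly the measure-comparability point the abstract raises.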


2018 ◽  
Vol 42 (4) ◽  
pp. 391-422 ◽  
Author(s):  
Donald P. Green ◽  
Winston Lin ◽  
Claudia Gerber

Background: Many place-based randomized trials and quasi-experiments use a pair of cross-section surveys, rather than panel surveys, to estimate the average treatment effect of an intervention. In these studies, a random sample of individuals in each geographic cluster is selected for a baseline (preintervention) survey, and an independent random sample is selected for an endline (postintervention) survey. Objective: This design raises a question: given a fixed budget, how should a researcher allocate resources between the baseline and endline surveys to maximize the precision of the estimated average treatment effect? Results: We formalize this allocation problem and show that although the optimal share of interviews allocated to the baseline survey is always less than one-half, it is an increasing function of the total number of interviews per cluster, the cluster-level correlation between the baseline measure and the endline outcome, and the intracluster correlation coefficient. An example using multicountry survey data from Africa illustrates how the optimal allocation formulas can be combined with data to inform decisions at the planning stage. Another example uses data from a digital political advertising experiment in Texas to explore how precision would have varied under alternative allocations.
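The comparative statics reported above can be reproduced under a simple components-of-variance working model (a sketch, not the paper's exact variance expressions): latent cluster means at baseline and endline share a cluster-level correlation rho, and each observed cluster mean carries within-cluster noise that shrinks as more interviews go to that wave. A grid search then finds the baseline share minimizing the variance of the baseline-adjusted endline mean.

```python
import numpy as np

def adjusted_variance(n_b, m, rho, icc, total_var=1.0):
    """Residual variance of the observed endline cluster mean after
    linearly adjusting for the observed baseline cluster mean,
    under a simple components-of-variance working model."""
    s2_c = icc * total_var        # between-cluster variance
    s2_w = (1 - icc) * total_var  # within-cluster variance
    var_b = s2_c + s2_w / n_b          # observed baseline cluster mean
    var_e = s2_c + s2_w / (m - n_b)    # observed endline cluster mean
    cov = rho * s2_c                   # covariance via the cluster level
    return var_e - cov**2 / var_b

def optimal_baseline_share(m, rho, icc):
    """Grid-search the baseline allocation that minimizes the variance."""
    n_bs = np.arange(1, m)
    variances = [adjusted_variance(n, m, rho, icc) for n in n_bs]
    return n_bs[int(np.argmin(variances))] / m
```

Under these assumed inputs the optimal share stays below one-half and rises with the interviews per cluster, the baseline-endline correlation, and the ICC, matching the qualitative findings in the abstract.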


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249642
Author(s):  
Byeong Yeob Choi

Instrumental variable (IV) analysis is used to address unmeasured confounding when comparing two nonrandomized treatment groups. The local average treatment effect (LATE) is a causal estimand that can be identified by an IV. The LATE approach is appealing because its identification relies on weaker assumptions than those of other IV approaches, which require a homogeneous treatment effect. If the instrument is confounded by some covariates, one can use a weighting estimator, in which the outcome and treatment are weighted by instrumental propensity scores. The weighting estimator for the LATE has a large variance when the IV is weak and the target population, i.e., the compliers, is relatively small. We propose a truncated LATE that can be estimated more reliably than the regular LATE in the presence of a weak IV. In our approach, subjects who contribute substantially to the weakness of the IV are identified by their probabilities of being compliers and removed according to a pre-specified threshold. We discuss the interpretation of the proposed estimand and the related inference method. Simulation and real data experiments demonstrate that the proposed truncated LATE can be estimated more precisely than the standard LATE.
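A minimal simulation of a propensity-weighted LATE estimator and the truncation idea (a sketch under assumed data-generating values, not the paper's code): the instrument propensity scores and complier probabilities are known by construction here, whereas in practice both would be modeled from covariates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# A covariate x confounds the instrument; the instrument propensity
# score e(x) = P(Z = 1 | x) is known by construction.
x = rng.binomial(1, 0.5, n)
e = np.where(x == 1, 0.7, 0.3)
z = rng.binomial(1, e)

# Compliance probability depends on x; the x = 0 stratum has few
# compliers and so makes the instrument weak there.
p_comp = np.where(x == 1, 0.6, 0.1)
complier = rng.binomial(1, p_comp)
d = z * complier                  # compliers take treatment iff z = 1
y = 1.0 + 2.0 * d + 0.5 * x + rng.normal(0.0, 1.0, n)   # LATE = 2

def weighted_late(y, d, z, e, keep):
    """Propensity-weighted LATE estimator applied to the kept subsample."""
    y, d, z, e = y[keep], d[keep], z[keep], e[keep]
    num = np.mean(z * y / e - (1 - z) * y / (1 - e))  # ITT effect on y
    den = np.mean(z * d / e - (1 - z) * d / (1 - e))  # complier share
    return num / den

late_all = weighted_late(y, d, z, e, np.ones(n, bool))

# Truncated LATE: drop units whose complier probability falls below a
# pre-specified threshold, then re-apply the estimator.
late_trunc = weighted_late(y, d, z, e, p_comp >= 0.3)
```

Because the unit-level effect is homogeneous in this toy setup, both estimates target 2; the payoff of truncation in the paper's setting is the reduced variance when the retained subsample has a much stronger instrument.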


2021 ◽  
pp. 174077452110568
Author(s):  
Fan Li ◽  
Zizhong Tian ◽  
Jennifer Bobb ◽  
Georgia Papadogeorgou ◽  
Fan Li

Background: In cluster randomized trials, patients are typically recruited after clusters are randomized, and the recruiters and patients may not be blinded to the assignment. This often leads to differential recruitment and consequently to systematic differences in the baseline characteristics of recruited patients between the intervention and control arms, inducing post-randomization selection bias. We aim to rigorously define causal estimands in the presence of selection bias. We elucidate the conditions under which standard covariate adjustment methods can validly estimate these estimands. We further discuss the additional data and assumptions necessary for estimating causal effects when such conditions are not met. Methods: Adopting the principal stratification framework in causal inference, we clarify that there are two average treatment effect (ATE) estimands in cluster randomized trials: one for the overall population and one for the recruited population. We derive analytical formulas for the two estimands in terms of principal-stratum-specific causal effects. Furthermore, using simulation studies, we assess the empirical performance of the multivariable regression adjustment method under different data generating processes leading to selection bias. Results: When treatment effects are heterogeneous across principal strata, the average treatment effect on the overall population generally differs from the average treatment effect on the recruited population. A naïve intention-to-treat analysis of the recruited sample leads to biased estimates of both average treatment effects. In the presence of post-randomization selection and without additional data on the non-recruited subjects, the average treatment effect on the recruited population is estimable only when the treatment effects are homogeneous between principal strata, and the average treatment effect on the overall population is generally not estimable. The extent to which covariate adjustment can remove selection bias depends on the degree of effect heterogeneity across principal strata. Conclusion: There is a need and an opportunity to improve the analysis of cluster randomized trials that are subject to post-randomization selection bias. For studies prone to selection bias, it is important to explicitly specify the target population for which the causal estimands are defined and to adopt design and estimation strategies accordingly. To draw valid inferences about treatment effects, investigators should (1) assess the possibility of heterogeneous treatment effects, and (2) consider collecting data on covariates that are predictive of the recruitment process, and on the non-recruited population from external sources such as electronic health records.
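The bias mechanism can be shown with two hypothetical principal strata and expectation arithmetic alone (illustrative numbers, not from the paper): when the intervention arm recruits a stratum that the control arm never sees, the naive recruited-sample contrast mixes a real treatment effect with a baseline imbalance between strata.

```python
# Two hypothetical principal strata of patients:
#   "always"     -- recruited under either arm
#   "if-treated" -- recruited only when the cluster gets the intervention
# Each stratum has a proportion p, a control-arm mean mu0, and effect tau.
strata = {
    "always":     dict(p=0.5, mu0=0.0, tau=1.0),
    "if-treated": dict(p=0.5, mu0=2.0, tau=3.0),
}

# Overall-population ATE averages the effects over both strata: 2.0.
ate_overall = sum(s["p"] * s["tau"] for s in strata.values())

# Naive recruited-sample comparison: the intervention arm recruits both
# strata, but the control arm recruits only the "always" stratum.
mean_treated = sum(s["p"] * (s["mu0"] + s["tau"]) for s in strata.values())
mean_control = strata["always"]["mu0"]
naive_estimate = mean_treated - mean_control   # 3.0, biased upward by 1.0
```

The extra 1.0 comes entirely from the "if-treated" stratum's higher baseline mean entering one arm and not the other; with homogeneous effects and equal baseline means across strata, the naive contrast would be unbiased, as the abstract's estimability conditions state.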


2019 ◽  
Vol 30 (3) ◽  
pp. 695-712
Author(s):  
Gabriel González ◽  
Luisa Díez-Echavarría ◽  
Elkin Zapa ◽  
Danilo Eusse

Higher education institutions must train their students according to the requirements of the context in which they operate, since it is on the basis of their graduates' performance that the effectiveness of socioeconomic development policies will be measured. Achieving this requires identifying the impact of that education on graduates and making the adjustments needed for continuous improvement. The objective of this article is to estimate the academic and social impact on graduates of the Instituto Tecnológico Metropolitano – Medellín, through a multivariate analysis and the estimation of an Average Treatment Effect (ATE) model. We found that the education offered to this population has generated one academic impact, associated with continuing education, and two social impacts, associated with graduates' employment status and income level. We recommend using this methodology in other institutions, since it tends to yield more informative and precise results than traditional characterization studies, and the effect of any strategy can be measured.

