Nonparametric bounds for causal effects in imperfect randomized experiments

Author(s): Erin E. Gabriel, Arvid Sjölander, Michael C. Sachs
2003, Vol 28 (4), pp. 353-368
Author(s): Junni L. Zhang, Donald B. Rubin

The topic of “truncation by death” in randomized experiments arises in many fields, such as medicine, economics, and education. Traditional approaches to this issue ignore the fact that the outcome after truncation is neither “censored” nor “missing,” but should be treated as being defined on an extended sample space. Using an educational example as illustration, we outline a formulation for tackling this issue, in which we call the outcome “truncated by death” because there is no hidden value of the outcome variable masked by the truncating event. We first formulate the principal stratification (Frangakis & Rubin, 2002) approach, and then derive large-sample bounds for causal effects within the principal strata, with or without various identification assumptions. Extensions are briefly discussed.
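To make the idea of large-sample bounds within principal strata concrete, here is a hedged sketch (not the authors' code; names are ours) of the familiar trimming construction under a monotonicity assumption: if treatment never causes death for a unit who would survive under control, the always-survivor share among treated survivors is identified, and trimming the treated survivors' outcome distribution from below or above yields the bounds on the survivor average causal effect.

```python
import numpy as np

def sace_trimming_bounds(y_treated_surv, y_control_surv,
                         p_surv_treated, p_surv_control):
    """Large-sample trimming bounds on the survivor average causal
    effect (SACE), assuming monotonicity: no unit survives under
    control but dies under treatment.  Under this assumption the
    always-survivor share among treated survivors is
    p_surv_control / p_surv_treated, and every control survivor is
    an always-survivor."""
    q = p_surv_control / p_surv_treated      # always-survivor fraction
    y1 = np.sort(np.asarray(y_treated_surv, dtype=float))
    k = int(np.ceil(q * len(y1)))            # outcomes to keep after trimming
    mean0 = float(np.mean(y_control_surv))   # control survivors' mean
    lower = y1[:k].mean() - mean0            # keep the k smallest
    upper = y1[-k:].mean() - mean0           # keep the k largest
    return lower, upper
```

For example, with survival rates of 1.0 under treatment and 0.5 under control, half of the treated survivors' outcomes are trimmed from each side in turn to obtain the two endpoints.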


2020, pp. 1-10
Author(s): Leandro De Magalhães

Regression discontinuity design could be a valuable tool for identifying the causal effects of a given party holding a legislative majority. However, the variable “number of seats” takes a finite number of values rather than a continuum and, hence, is not suited as a running variable. Recent econometric advances suggest the assumptions and empirical tests that allow us to interpret small intervals around the cut-off as local randomized experiments, permitting us to bypass the assumption that the running variable must be continuous. Herein, we implement these tests for US state legislatures and propose another: whether a slim majority of one seat had at least one state-level district result that was itself a close race won by the majority party.
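The local-randomization logic can be illustrated with a simple sketch (a hypothetical example, not the paper's implementation): if slim one-seat majorities are as-if coin flips, the number of such majorities won by a given party should look Binomial(n, 1/2), which an exact two-sided binomial test can check.

```python
from math import comb

def exact_binomial_pvalue(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k, under the
    null that each close majority is a fair coin flip."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # small tolerance so ties in probability are counted as "no more likely"
    return sum(pi for pi in pmf if pi <= pmf[k] * (1 + 1e-12))
```

If, say, 40 of 50 slim majorities went to the same party, the test would flag a departure from local randomness, while near-even splits would not.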


2018, Vol 43 (5), pp. 540-567
Author(s): Jiannan Lu, Peng Ding, Tirthankar Dasgupta

Assessing the causal effects of interventions on ordinal outcomes is an important objective of many educational and behavioral studies. Under the potential outcomes framework, we can define causal effects as comparisons between the potential outcomes under treatment and control. Unfortunately, the average causal effect, often the parameter of interest, is difficult to interpret for ordinal outcomes. To address this challenge, we propose to use two causal parameters, defined as the probabilities that the treatment is beneficial and strictly beneficial for the experimental units. Although well defined for any outcome and of particular interest for ordinal outcomes, these two parameters depend on the association between the potential outcomes and are therefore not identifiable from the observed data without additional assumptions. Echoing recent advances in the econometrics and biostatistics literature, we present sharp bounds on these causal parameters for ordinal outcomes, under fixed marginal distributions of the potential outcomes. Because the causal estimands and their corresponding sharp bounds are based on the potential outcomes themselves, the proposed framework can be flexibly incorporated into any chosen model of the potential outcomes and is directly applicable to randomized experiments, unconfounded observational studies, and randomized experiments with noncompliance. We illustrate our methodology via numerical examples and three real-life applications related to educational and behavioral research.
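To convey the flavor of such bounds (a sketch in our own notation, not the paper's code): with the marginal distributions of the potential outcomes fixed, Fréchet–Hoeffding-type arguments bound the probability of strict benefit, P(Y(1) > Y(0)), because the joint coupling of the two potential outcomes is unobserved. For any level j, the event {Y(1) > j, Y(0) ≤ j} implies strict benefit, which gives the lower bound; the event {Y(1) ≤ j, Y(0) ≥ j} implies no strict benefit, which gives the upper bound.

```python
import numpy as np

def strict_benefit_bounds(pmf_treat, pmf_control):
    """Sharp bounds on P(Y(1) > Y(0)) for an ordinal outcome with
    levels 0..J-1, given only the two marginal pmfs.
    Lower: P(Y(1) > j, Y(0) <= j) >= F0(j) - F1(j) for each j.
    Upper: P(Y(1) <= Y(0)) >= F1(j) - F0(j-1) for each j."""
    F1 = np.cumsum(pmf_treat)                      # P(Y(1) <= j)
    F0 = np.cumsum(pmf_control)                    # P(Y(0) <= j)
    lower = max(0.0, float(np.max(F0 - F1)))
    F0_shift = np.concatenate(([0.0], F0[:-1]))    # P(Y(0) <= j-1)
    upper = 1.0 - max(0.0, float(np.max(F1 - F0_shift)))
    return lower, upper
```

For identical uniform marginals on three levels the bounds are (0, 2/3): the observed margins alone cannot rule out that no unit strictly benefits, nor that two-thirds do.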


2018, Vol 48 (1), pp. 136-151
Author(s): Guillaume W. Basse, Edoardo M. Airoldi

Randomized experiments on a network often involve interference between connected units, namely, a situation in which one individual’s treatment can affect another individual’s response. Current approaches to dealing with interference, in theory and in practice, often make restrictive assumptions on its structure—for instance, assuming that interference is local—even when using otherwise nonparametric inference strategies. This reliance on explicit restrictions on the interference mechanism suggests a shared intuition that inference is impossible without any assumptions on the interference structure. In this paper, we begin by formalizing this intuition in the context of a classical nonparametric approach to inference, referred to as design-based inference of causal effects. Next, still in the context of design-based inference, we show that even parametric structural assumptions that allow the existence of unbiased estimators cannot guarantee a decreasing variance, even in the large-sample limit. This lack of concentration in large samples is often observed empirically in randomized experiments in which interference of some form is expected to be present. The result has direct consequences for the design and analysis of large experiments—for instance, on online social platforms—where the belief is that large sample sizes automatically guarantee small variance. More broadly, our results suggest that although strategies for causal inference in the presence of interference borrow their formalism and main concepts from the traditional causal inference literature, much of the intuition from the no-interference case does not easily transfer to the interference setting.
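The non-concentration phenomenon can be illustrated with a toy simulation (our own hypothetical construction, not the authors' example): on a star network where treatment is only effective when the hub is treated, the difference-in-means estimator equals the hub's treatment indicator, so its randomization variance stays near 1/4 no matter how large the experiment.

```python
import random
import statistics

def star_network_estimates(n, reps=2000, seed=0):
    """Completely randomized design on n units with global interference:
    unit i's outcome is z_i * z_0, i.e., treatment works only when the
    hub (unit 0) is treated.  Returns difference-in-means estimates
    across reps re-randomizations; their variance does not shrink
    as n grows."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        treated = set(rng.sample(range(n), n // 2))
        z0 = 1 if 0 in treated else 0
        y = [(1 if i in treated else 0) * z0 for i in range(n)]
        mean_t = statistics.mean(y[i] for i in treated)
        mean_c = statistics.mean(y[i] for i in range(n) if i not in treated)
        estimates.append(mean_t - mean_c)
    return estimates
```

Here every draw of the estimator is exactly 0 or 1, depending only on the hub, so its spread is the same for n = 20 and n = 2000: a larger sample buys nothing when interference is global.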


2007, Vol 36 (4), pp. 187-198
Author(s): Elizabeth A. Stuart

Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has heightened the demand for sound evaluations of these interventions, and at the same time, school-level data have become increasingly available. This article shows researchers how to bridge these two trends through careful use of school-level data to estimate the effectiveness of particular interventions. The author provides an overview of common methods for estimating causal effects with school-level data, including randomized experiments, regression analysis, pre–post studies, and nonexperimental comparison group designs. She stresses the importance of careful design of nonexperimental studies, particularly the need to compare units that were similar before treatment assignment. She gives examples of analyses that use school-level data and concludes with advice for researchers.
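The advice to compare units that were similar before treatment can be sketched as greedy one-to-one nearest-neighbor matching on a pre-treatment covariate such as a school's baseline mean score (a hypothetical illustration; the names are ours, and real applications would typically match on many covariates or a propensity score).

```python
def greedy_nn_match(treated, control):
    """Greedy 1:1 nearest-neighbor matching without replacement.
    treated, control: dicts mapping unit id -> pre-treatment covariate
    (e.g., a school's baseline mean test score).  Returns a list of
    (treated_id, control_id) pairs, matching each treated school to
    the closest still-unmatched control school."""
    available = dict(control)
    pairs = []
    for t_id, t_x in sorted(treated.items()):
        if not available:
            break  # more treated than control units
        c_id = min(available, key=lambda c: abs(available[c] - t_x))
        pairs.append((t_id, c_id))
        del available[c_id]  # match without replacement
    return pairs
```

After matching, a simple effect estimate is the mean outcome difference within the matched pairs; the quality of the match should be checked by comparing pre-treatment covariate balance before and after matching.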

