Robust Nonparametric Confidence Intervals for Regression-Discontinuity Designs

Econometrica ◽  
2014 ◽  
Vol 82 (6) ◽  
pp. 2295-2326 ◽  
Author(s):  
Sebastian Calonico ◽  
Matias D. Cattaneo ◽  
Rocio Titiunik

Inference in Regression Discontinuity Designs with a Discrete Running Variable

American Economic Review ◽  
2018 ◽  
Vol 108 (8) ◽  
pp. 2277-2304 ◽  
Author(s):  
Michal Kolesár ◽  
Christoph Rothe

We consider inference in regression discontinuity designs when the running variable only takes a moderate number of distinct values. In particular, we study the common practice of using confidence intervals (CIs) based on standard errors that are clustered by the running variable as a means to make inference robust to model misspecification (Lee and Card 2008). We derive theoretical results and present simulation and empirical evidence showing that these CIs do not guard against model misspecification, and that they have poor coverage properties. We therefore recommend against using these CIs in practice. We instead propose two alternative CIs with guaranteed coverage properties under easily interpretable restrictions on the conditional expectation function. (JEL C13, C51, J13, J31, J64, J65)
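
As a concrete illustration of the practice at issue, the following minimal Python sketch (not taken from the paper; the simulated data-generating process and all parameter values are invented for illustration) fits a local linear RD specification on a discrete running variable and compares heteroskedasticity-robust standard errors with standard errors clustered by the running variable, the approach the authors show can undercover:

    # Minimal sketch (assumed setup, not the authors' code): discrete running
    # variable, mild curvature in the conditional mean that the linear fit
    # ignores, i.e. a deliberately misspecified model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.integers(-10, 10, size=2000)           # discrete running variable
    d = (x >= 0).astype(float)                     # treatment indicator, cutoff at 0
    y = 0.5 * d + 0.1 * x + 0.02 * x**2 + rng.normal(size=x.size)

    # Linear specification with separate slopes on each side of the cutoff.
    X = sm.add_constant(np.column_stack([d, x, d * x]))
    robust = sm.OLS(y, X).fit(cov_type="HC1")
    clustered = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": x})

    print("robust CI for the jump:   ", robust.conf_int()[1])
    print("clustered CI for the jump:", clustered.conf_int()[1])

Comparing the two intervals across repeated simulated samples is what reveals the coverage failure the abstract describes; a single draw only shows that clustering by the running variable need not widen the interval.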



Wild Bootstrap for Fuzzy Regression Discontinuity Designs: Obtaining Robust Bias-Corrected Confidence Intervals

The Econometrics Journal ◽  
2020 ◽  
Vol 23 (2) ◽  
pp. 211-231
Author(s):  
Yang He ◽  
Otávio Bartalotti

Summary This paper develops a novel wild bootstrap procedure to construct valid, robust bias-corrected confidence intervals for fuzzy regression discontinuity designs, providing an intuitive complement to existing robust bias-corrected methods. The confidence intervals generated by this procedure are valid under conditions similar to those of the procedures proposed by Calonico et al. (2014) and the related literature. Simulations provide evidence that this new method is at least as accurate as the plug-in analytical corrections when applied to a variety of data-generating processes featuring endogeneity and clustering. Finally, we demonstrate its empirical relevance by revisiting Angrist and Lavy's (1999) analysis of the effect of class size on student outcomes.
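
For intuition, here is a generic Python sketch of a wild bootstrap percentile confidence interval for an RD jump. It shows only the sharp-design skeleton under invented data, an arbitrary bandwidth h, and Rademacher weights; the paper's actual procedure targets fuzzy designs and incorporates robust bias correction in the sense of Calonico et al. (2014):

    # Illustrative sketch only: wild bootstrap percentile CI for a sharp RD
    # jump estimated by local linear regression inside a fixed bandwidth.
    import numpy as np

    rng = np.random.default_rng(1)
    n, h = 1000, 0.5
    x = rng.uniform(-1, 1, n)
    d = (x >= 0).astype(float)
    y = 0.4 * d + x + rng.normal(scale=0.5, size=n)

    w = np.abs(x) <= h                              # observations inside the bandwidth
    X = np.column_stack([np.ones(n), d, x, d * x])[w]
    yw = y[w]

    beta, *_ = np.linalg.lstsq(X, yw, rcond=None)
    resid = yw - X @ beta

    boot = np.empty(999)
    for b in range(boot.size):
        e = resid * rng.choice([-1.0, 1.0], size=resid.size)   # Rademacher weights
        yb = X @ beta + e                                      # regenerate outcomes
        bb, *_ = np.linalg.lstsq(X, yb, rcond=None)
        boot[b] = bb[1]                                        # bootstrap jump estimate

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"jump {beta[1]:.3f}, 95% wild-bootstrap CI [{lo:.3f}, {hi:.3f}]")

A fuzzy-design version would bootstrap the ratio of the outcome jump to the treatment-probability jump; the resampling logic is the same.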



Equivalence Testing for Regression Discontinuity Designs

Political Analysis ◽  
2020 ◽  
pp. 1-17
Author(s):  
Erin Hartman

Abstract Regression discontinuity (RD) designs are increasingly common in political science. They have many advantages, including a known and observable treatment assignment mechanism. The literature has emphasized the need for “falsification tests” and ways to assess the validity of the design. When implementing RD designs, researchers typically rely on two falsification tests, based on empirically testable implications of the identifying assumptions, to argue that the design is credible: one for continuity in the regression function of a pretreatment covariate, and one for continuity in the density of the forcing variable. Both use a null of no difference in the parameter of interest at the discontinuity. Common practice can therefore incorrectly conflate a failure to reject evidence of a flawed design with affirmative evidence that the design is credible. The well-known equivalence testing approach addresses this problem, but how to implement equivalence tests in the RD framework is not straightforward. This paper develops two equivalence tests tailored to RD designs that allow researchers to provide statistical evidence that the design is credible. Simulation studies show the superior performance of equivalence-based tests over tests of difference, as used in current practice. The tests are applied to the close-elections RD data presented in Eggers et al. (2015b).
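
The basic logic can be sketched in Python with the generic two one-sided tests (TOST) construction applied to an estimated covariate discontinuity. This is not the RD-tailored test the paper develops, and the estimate, standard error, degrees of freedom, and equivalence margin eps below are all hypothetical values chosen for illustration:

    # Generic TOST sketch: conclude "equivalent" (no meaningful discontinuity
    # in the pretreatment covariate) only if the jump is significantly
    # greater than -eps AND significantly less than +eps.
    from scipy import stats

    def tost_equivalence(est, se, eps, df):
        """Return the TOST equivalence p-value for an estimated jump."""
        t_lower = (est + eps) / se           # H0: jump <= -eps
        t_upper = (est - eps) / se           # H0: jump >= +eps
        p_lower = 1 - stats.t.cdf(t_lower, df)
        p_upper = stats.t.cdf(t_upper, df)
        return max(p_lower, p_upper)         # reject only if both reject

    # e.g. estimated covariate discontinuity 0.02, SE 0.03, margin 0.10
    print(tost_equivalence(est=0.02, se=0.03, eps=0.10, df=500))

Note the reversal of the burden of proof relative to a test of difference: here the null is that the design is flawed, so a small p-value is affirmative evidence of credibility within the chosen margin.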



The Choice of Neighborhood in Regression Discontinuity Designs

Observational Studies ◽  
2017 ◽  
Vol 3 (2) ◽  
pp. 134-146
Author(s):  
Matias D. Cattaneo ◽  
Gonzalo Vazquez-Bare




Beyond Manipulation: Administrative Sorting in Regression Discontinuity Designs

Journal of Causal Inference ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 164-181
Author(s):  
Cristian Crespo

Abstract This paper elaborates on administrative sorting, a threat to internal validity that has been overlooked in the regression discontinuity (RD) literature. Variation in treatment assignment near the threshold may still not be as good as random even when individuals are unable to precisely manipulate the running variable. This can happen when administrative procedures, beyond individuals’ control and knowledge, affect their position near the threshold non-randomly. If administrative sorting is not recognized, it can be mistaken for manipulation, preventing researchers from fixing the running variable and leading them to discard viable RD research designs.
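
A stylized Python sketch of how such sorting can trip a density-style check: below, a hypothetical administrative rounding rule (invented here, along with the window width and all numbers) pushes some near-threshold cases just above the cutoff, and a crude binomial count comparison, standing in for McCrary-type manipulation tests, flags the imbalance even though no individual manipulated anything:

    # Stylized density check: compare counts in narrow windows on either
    # side of the cutoff. Imbalance here reflects an administrative rule,
    # not manipulation by individuals.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.normal(size=5000)
    mask = (x > -0.02) & (x < 0)                 # cases just below the cutoff
    bump = rng.random(mask.sum()) < 0.5          # an administrative rule moves
    x[mask] += 0.04 * bump                       # half of them just above it

    window = 0.1
    left = int(np.sum((x >= -window) & (x < 0)))
    right = int(np.sum((x >= 0) & (x <= window)))
    res = stats.binomtest(right, left + right, p=0.5)
    print(f"left={left}, right={right}, p-value={res.pvalue:.3f}")

The point of the abstract is that a rejection in such a test is ambiguous: the researcher must determine whether the bunching reflects individual manipulation or a benign, fixable administrative procedure before discarding the design.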


