Beyond Manipulation: Administrative Sorting in Regression Discontinuity Designs

2020, Vol 8 (1), pp. 164-181
Author(s): Cristian Crespo

Abstract: This paper elaborates on administrative sorting, a threat to internal validity that has been overlooked in the regression discontinuity (RD) literature. Variation in treatment assignment near the threshold may still not be as good as random even when individuals are unable to precisely manipulate the running variable. This can be the case when administrative procedures, beyond individuals’ control and knowledge, affect their position near the threshold non-randomly. If administrative sorting is not recognized, it can be mistaken for manipulation, preventing researchers from fixing the running variable and leading them to discard viable RD research designs.
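
The sorting mechanism described above can be illustrated with a short simulation. The sketch below is not from the paper; it assumes a hypothetical administrative rule that bumps “priority” applicants just below the cutoff up to it, and shows how a pretreatment covariate becomes imbalanced around the threshold even though no individual manipulates their own score.

```python
# Minimal sketch (not from the paper): administrative sorting near an RD cutoff.
# Hypothetical rule: an administrative office rounds the scores of "priority"
# applicants up to the cutoff, outside applicants' control and knowledge.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
priority = rng.binomial(1, 0.3, n)     # pretreatment covariate
score = rng.normal(50, 10, n)          # raw running variable
cutoff = 50.0

# Administrative procedure: priority cases within 1 point below the cutoff
# are bumped to the cutoff; no individual manipulation is involved.
admin_score = score.copy()
bump = (priority == 1) & (score >= cutoff - 1) & (score < cutoff)
admin_score[bump] = cutoff

# Covariate balance just below vs. just at/above the cutoff in recorded scores
below = (admin_score >= cutoff - 1) & (admin_score < cutoff)
above = (admin_score >= cutoff) & (admin_score < cutoff + 1)
print("share priority just below cutoff:", priority[below].mean().round(3))
print("share priority just above cutoff:", priority[above].mean().round(3))
```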

2021
Author(s): Youmi Suk, Peter Steiner, Jee-Seon Kim, Hyunseung Kang

Regression discontinuity designs are commonly used for program evaluation with continuous treatment assignment variables. In practice, however, treatment assignment is frequently based on discrete or ordinal variables. In this study, we propose a regression discontinuity design with an ordinal running variable to assess the effects of extended time accommodations (ETA) for English language learners (ELLs). ETA eligibility is determined by the ordinal ELL English-proficiency categories in National Assessment of Educational Progress data. We discuss the identification and estimation of the average treatment effect, the intent-to-treat effect, and the local average treatment effect at the cutoff. We also propose a series of sensitivity analyses to probe the robustness of the effect estimates to the choice of scaling functions and cutoff scores, and to unmeasured confounding.
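
As a rough illustration of the intent-to-treat contrast with an ordinal running variable, the sketch below assumes a hypothetical 1-5 proficiency scale with ETA eligibility below category 3 and compares the two categories adjacent to the cutoff. It is a simplified stand-in, not the authors’ NAEP-based estimator.

```python
# Minimal sketch (assumed 1-5 ordinal proficiency scale, eligibility below
# category 3; illustrative only, not the authors' NAEP-based estimator).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
proficiency = rng.integers(1, 6, n)           # ordinal running variable, 1..5
eligible = (proficiency < 3).astype(int)      # assumed ETA eligibility rule
outcome = 20 + 2 * proficiency + 3 * eligible + rng.normal(0, 4, n)

df = pd.DataFrame({"y": outcome, "z": eligible, "cat": proficiency})
# Local comparison of the two categories adjacent to the cutoff (2 vs. 3),
# the ordinal analogue of restricting to a narrow bandwidth.
local = df[df["cat"].isin([2, 3])]
itt = smf.ols("y ~ z", data=local).fit()
print("ITT estimate at the cutoff:", itt.params["z"].round(3),
      "SE:", itt.bse["z"].round(3))
```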


2020, pp. 1-17
Author(s): Erin Hartman

Abstract: Regression discontinuity (RD) designs are increasingly common in political science. They have many advantages, including a known and observable treatment assignment mechanism. The literature has emphasized the need for “falsification tests” and ways to assess the validity of the design. When implementing RD designs, researchers typically rely on two falsification tests, based on empirically testable implications of the identifying assumptions, to argue the design is credible. These tests, one for continuity in the regression function for a pretreatment covariate and one for continuity in the density of the forcing variable, use a null of no difference in the parameter of interest at the discontinuity. Common practice can incorrectly conflate a failure to reject evidence of a flawed design with evidence that the design is credible. The well-known equivalence-testing approach addresses this problem, but how to implement equivalence tests in the RD framework is not straightforward. This paper develops two equivalence tests tailored for RD designs that allow researchers to provide statistical evidence that the design is credible. Simulation studies show the superior performance of equivalence-based tests over the tests of difference used in current practice. The tests are applied to the close-elections RD data presented in Eggers et al. (2015b).
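
The equivalence-testing idea can be sketched with a two one-sided tests (TOST) check on a pretreatment covariate within a bandwidth of the cutoff. The difference-in-means construction and the equivalence margin below are illustrative assumptions; the paper’s tests are tailored to local polynomial RD estimators.

```python
# Minimal TOST sketch for covariate continuity at an RD cutoff (illustrative;
# the margin eps must be justified by the analyst, and the paper's tests are
# built around local polynomial estimators rather than raw means).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 4_000
x = rng.uniform(-1, 1, n)              # running variable centered at the cutoff
covariate = 10 + rng.normal(0, 1, n)   # pretreatment covariate (balanced by design)

h, eps = 0.2, 0.5                      # bandwidth and equivalence margin (assumed)
left = covariate[(x >= -h) & (x < 0)]
right = covariate[(x >= 0) & (x <= h)]
diff = right.mean() - left.mean()
se = np.sqrt(right.var(ddof=1) / len(right) + left.var(ddof=1) / len(left))

# TOST: reject both one-sided nulls |difference| >= eps to conclude equivalence.
p_lower = 1 - stats.norm.cdf((diff + eps) / se)   # H0: diff <= -eps
p_upper = stats.norm.cdf((diff - eps) / se)       # H0: diff >= +eps
print("TOST p-value:", max(p_lower, p_upper))
```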


2017, Vol 3 (2), pp. 134-146
Author(s): Matias D. Cattaneo, Gonzalo Vazquez-Bare

2008, Vol 142 (2), pp. 615-635
Author(s): Guido W. Imbens, Thomas Lemieux

2018, Vol 8 (1)
Author(s): Otávio Bartalotti

Abstract: In regression discontinuity designs (RD), for a given bandwidth, researchers can estimate standard errors based on different variance formulas obtained under different asymptotic frameworks. In the traditional approach the bandwidth shrinks to zero as the sample size increases; alternatively, the bandwidth could be treated as fixed. The main theoretical results for RD rely on the former, while most applications in the literature treat the estimates as parametric, implementing the usual heteroskedasticity-robust standard errors. This paper develops the “fixed-bandwidth” alternative asymptotic theory for RD designs, which sheds light on the connection between the two approaches. I provide alternative formulas (approximations) for the bias and variance of common RD estimators, and conditions under which both approximations are equivalent. Simulations document the improvements in test coverage that fixed-bandwidth approximations achieve relative to traditional approximations, especially when there is local heteroskedasticity. Feasible estimators of fixed-bandwidth standard errors are easy to implement and are akin to treating RD estimators as locally parametric, validating the common empirical practice of using heteroskedasticity-robust standard errors in RD settings. Bias mitigation approaches are discussed, and a novel bootstrap higher-order bias correction procedure based on the fixed-bandwidth asymptotics is suggested.
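
The “locally parametric” practice the abstract refers to can be sketched as a local linear fit within a fixed bandwidth with heteroskedasticity-robust standard errors. The bandwidth and the data-generating process below are assumptions for illustration only.

```python
# Minimal sketch: sharp RD estimated as a local linear regression within a fixed
# bandwidth, with HC1 (heteroskedasticity-robust) standard errors. Bandwidth h
# is assumed here, not chosen by any optimal-bandwidth procedure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 6_000
x = rng.uniform(-1, 1, n)                 # running variable, cutoff at 0
d = (x >= 0).astype(int)                  # sharp RD treatment indicator
y = 1 + 0.5 * x + 2.0 * d + rng.normal(0, 1 + np.abs(x), n)   # locally heteroskedastic noise

h = 0.25
df = pd.DataFrame({"y": y, "x": x, "d": d})
local = df[np.abs(df["x"]) <= h]
# Separate slopes on each side of the cutoff; the coefficient on d is the RD effect.
fit = smf.ols("y ~ d + x + d:x", data=local).fit(cov_type="HC1")
print("RD estimate:", fit.params["d"].round(3), "robust SE:", fit.bse["d"].round(3))
```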

