Testing identifying assumptions in fuzzy regression discontinuity designs

2021 ◽
pp. 016237372199157
Author(s):  
J. Jacob Kirksey ◽  
Michael A. Gottfried

With the rise in the availability of absenteeism data, it is clear that students are missing a staggering amount of school. Policymakers have focused efforts on identifying school programs that might reduce absenteeism. This study examined whether implementing the program “Breakfast After-the-Bell” (BAB) might reduce school absenteeism. Exploring longitudinal statewide datasets (Colorado and Nevada) containing school breakfast information linked to national data on chronic absenteeism rates, we used sharp and fuzzy regression discontinuity designs to examine the effects of BAB. Our findings suggest that schools serving BAB experienced declines in chronic absenteeism. The strongest effects were experienced by high schools, schools with higher rates of breakfast participation, schools serving universally free meals, and suburban schools. Implications are discussed.
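
As a rough illustration of the estimator family used in this study, the sketch below computes a fuzzy regression discontinuity estimate as the jump in the outcome at the cutoff divided by the jump in treatment take-up (a local Wald estimator). It is a minimal Python sketch under simplifying assumptions: the function names, the uniform kernel, and the fixed bandwidth are illustrative, not the authors' implementation.

```python
# Minimal fuzzy RD sketch: local linear fits on each side of the cutoff,
# with the effect given by the outcome jump scaled by the take-up jump.
import numpy as np

def local_linear_jump(running, y, cutoff, bandwidth):
    """Discontinuity in E[y | running] at the cutoff: fit a separate line on
    each side within the bandwidth and difference the fitted values at the cutoff."""
    intercepts = {}
    for side, mask in [
        ("left", (running >= cutoff - bandwidth) & (running < cutoff)),
        ("right", (running >= cutoff) & (running <= cutoff + bandwidth)),
    ]:
        X = np.column_stack([np.ones(mask.sum()), running[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        intercepts[side] = beta[0]  # intercept = fitted value at the cutoff
    return intercepts["right"] - intercepts["left"]

def fuzzy_rd_estimate(running, outcome, treatment, cutoff, bandwidth):
    """Fuzzy RD effect: outcome jump divided by the jump in treatment take-up."""
    outcome_jump = local_linear_jump(running, outcome, cutoff, bandwidth)
    takeup_jump = local_linear_jump(running, treatment.astype(float), cutoff, bandwidth)
    return outcome_jump / takeup_jump
```

In a sharp design the take-up jump equals one by construction, so the same calculation reduces to the outcome jump alone.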


2020 ◽  
Vol 23 (2) ◽  
pp. 211-231
Author(s):  
Yang He ◽  
Otávio Bartalotti

Summary This paper develops a novel wild bootstrap procedure to construct valid, robust bias-corrected confidence intervals for fuzzy regression discontinuity designs, providing an intuitive complement to existing robust bias-corrected methods. The confidence intervals generated by this procedure are valid under conditions similar to the procedures proposed by Calonico et al. (2014) and related literature. Simulations provide evidence that this new method is at least as accurate as the plug-in analytical corrections when applied to a variety of data-generating processes featuring endogeneity and clustering. Finally, we demonstrate its empirical relevance by revisiting Angrist and Lavy's (1999) analysis of the effect of class size on student outcomes.
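
The resampling idea can be illustrated with a simplified wild bootstrap for a sharp RD jump: residuals from a local linear fit are sign-flipped with Rademacher weights, the jump is re-estimated on each synthetic sample, and a percentile interval is taken. This Python sketch omits the robust bias correction and the fuzzy (instrumented) case that the paper's procedure handles; the function names and tuning choices are assumptions for illustration only.

```python
# Simplified wild-bootstrap percentile interval for a sharp RD jump.
import numpy as np

def rd_jump(X, y):
    """Pooled local linear fit with side-specific intercept and slope; the
    jump is the coefficient on the side indicator (second column of X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], X @ beta

def wild_bootstrap_ci(running, y, cutoff, bandwidth, n_boot=999, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    mask = np.abs(running - cutoff) <= bandwidth
    x = running[mask] - cutoff
    side = (x >= 0).astype(float)
    X = np.column_stack([np.ones_like(x), side, x, side * x])
    jump, fitted = rd_jump(X, y[mask])
    resid = y[mask] - fitted
    boot = np.empty(n_boot)
    for b in range(n_boot):
        weights = rng.choice([-1.0, 1.0], size=resid.size)  # Rademacher weights
        boot[b], _ = rd_jump(X, fitted + resid * weights)
    lower, upper = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return jump, (lower, upper)
```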


2020 ◽  
pp. 1-17
Author(s):  
Erin Hartman

Abstract Regression discontinuity (RD) designs are increasingly common in political science. They have many advantages, including a known and observable treatment assignment mechanism. The literature has emphasized the need for "falsification tests" and ways to assess the validity of the design. When implementing RD designs, researchers typically rely on two falsification tests, based on empirically testable implications of the identifying assumptions, to argue the design is credible: one for continuity in the regression function of a pretreatment covariate, and one for continuity in the density of the forcing variable. Both use a null of no difference in the parameter of interest at the discontinuity, so common practice can incorrectly conflate a failure to reject evidence of a flawed design with affirmative evidence that the design is credible. The well-known equivalence testing approach addresses this problem, but how to implement equivalence tests in the RD framework is not straightforward. This paper develops two equivalence tests tailored to RD designs that allow researchers to provide statistical evidence that the design is credible. Simulation studies show the superior performance of equivalence-based tests over the tests of difference used in current practice. The tests are applied to the close-elections RD data presented in Eggers et al. (2015b).
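
The logic of equivalence testing can be sketched with the standard two one-sided tests (TOST) procedure applied to an estimated covariate discontinuity: instead of testing a null of no difference, one tests whether the discontinuity lies outside a researcher-chosen equivalence margin, so rejection provides affirmative evidence of continuity. This is the generic TOST construction under a normal approximation, not the RD-specific statistics developed in the paper; the margin delta and the example numbers are assumptions.

```python
# Generic TOST equivalence test for an estimated discontinuity in a
# pretreatment covariate at the cutoff.
from scipy.stats import norm

def tost_equivalence(estimate, std_error, delta, alpha=0.05):
    """Reject 'the true discontinuity is at least delta in magnitude' when the
    larger of the two one-sided p-values falls below alpha."""
    p_lower = 1 - norm.cdf((estimate + delta) / std_error)  # H0: effect <= -delta
    p_upper = norm.cdf((estimate - delta) / std_error)      # H0: effect >= +delta
    p_tost = max(p_lower, p_upper)
    return p_tost, p_tost < alpha

# Hypothetical example: a covariate jump of 0.02 (SE 0.01) with margin 0.05
# yields p of roughly 0.001, i.e., evidence that any discontinuity is negligibly small.
p_value, within_margin = tost_equivalence(estimate=0.02, std_error=0.01, delta=0.05)
```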


2017 ◽  
Vol 3 (2) ◽  
pp. 134-146
Author(s):  
Matias D. Cattaneo ◽  
Gonzalo Vazquez-Bare
