Instrumental Variables Analysis of Randomized Experiments with One-Sided Noncompliance

2010 ◽ Vol 24 (2) ◽ pp. 31-46 ◽ Author(s): Edward E Leamer

My first reaction to “The Credibility Revolution in Empirical Economics,” authored by Joshua D. Angrist and Jörn-Steffen Pischke, was: Wow! This paper makes a stunningly good case for relying on purposefully randomized or accidentally randomized experiments to relieve the doubts that afflict inferences from nonexperimental data. On further reflection, I realized that I may have been overcome with irrational exuberance. Moreover, with this great honor bestowed on my “con” article, I couldn't easily throw this child of mine overboard. As Angrist and Pischke persuasively argue, either purposefully randomized experiments or accidentally randomized “natural” experiments can be extremely helpful, but Angrist and Pischke seem to me to overstate the potential benefits of the approach. I begin with some thoughts about the inevitable limits of randomization, and the need for sensitivity analysis in this area, as in all areas of applied empirical work. I argue that the recent financial catastrophe is a powerful illustration of the fact that extrapolating from natural experiments will inevitably be hazardous. I discuss how the difficulties of applied econometric work cannot be evaded with econometric innovations, offering as examples some under-recognized difficulties with instrumental variables and robust standard errors. I conclude with comments about the shortcomings of an experimentalist paradigm as applied to macroeconomics, and some warnings about the willingness of applied economists to apply push-button methodologies without sufficient hard thought regarding their applicability and shortcomings.


2010 ◽ Vol 48 (2) ◽ pp. 399-423 ◽ Author(s): Guido W Imbens

Two recent papers, Deaton (2009) and Heckman and Urzua (2009), argue against what they see as an excessive and inappropriate use of experimental and quasi-experimental methods in empirical work in economics in the last decade. They specifically question the increased use of instrumental variables and natural experiments in labor economics and of randomized experiments in development economics. In these comments, I will make the case that this move toward shoring up the internal validity of estimates, and toward clarifying the description of the population these estimates are relevant for, has been important and beneficial in increasing the credibility of empirical work in economics. I also address some other concerns raised by the Deaton and Heckman–Urzua papers. (JEL C21, C31)
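
The "population these estimates are relevant for" is, in the instrumental-variables setting named in this collection's title, the complier subpopulation. The following minimal simulation sketch (not drawn from any of the papers above; the 60% compliance rate and the effect size of 2.0 are invented for illustration) shows the Wald/IV estimator of the local average treatment effect in a randomized experiment with one-sided noncompliance, where only those assigned to treatment can take it up, so the IV estimand equals the effect on the treated.

```python
# Hypothetical simulation of a randomized experiment with one-sided noncompliance.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.integers(0, 2, n)                  # random assignment (the instrument)
complier = rng.random(n) < 0.6             # assumed 60% compliance rate
d = z * complier                           # one-sided: no take-up in the control arm
y = 2.0 * d + rng.normal(0, 1, n)          # assumed true effect of 2.0 on the treated

# Wald estimator: intention-to-treat effect divided by the first-stage take-up rate
itt = y[z == 1].mean() - y[z == 0].mean()
first_stage = d[z == 1].mean() - d[z == 0].mean()
late = itt / first_stage
print(f"ITT = {itt:.3f}, first stage = {first_stage:.3f}, LATE = {late:.3f}")
```

With one-sided noncompliance the compliers are exactly the units that receive treatment, so the estimate recovers the (assumed) effect of 2.0 on the treated rather than a population-wide average.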


2020 ◽ Vol 36 (2) ◽ pp. 410-420 ◽ Author(s): Anthony M. Gibson ◽ Nathan A. Bowling

Abstract. The current paper reports the results of two randomized experiments designed to test the effects of questionnaire length on careless responding (CR). Both experiments also examined whether the presence of a behavioral consequence (i.e., a reward or a punishment) designed to encourage careful responding buffers the effects of questionnaire length on CR. Collectively, our two studies found (a) some support for the main effect of questionnaire length, (b) consistent support for the main effect of the consequence manipulations, and (c) very limited support for the buffering effect of the consequence manipulations. Because the advancement of many subfields of psychology rests on the availability of high-quality self-report data, further research should examine the causes and prevention of CR.
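
A buffering hypothesis of this kind is typically tested as an interaction between the two manipulations. The sketch below is a made-up illustration (not the authors' data, design, or code): careless responding is regressed on questionnaire length, the consequence manipulation, and their product, and a negative interaction coefficient would indicate that the consequence buffers the length effect. All coefficients are invented.

```python
# Hypothetical 2 x 2 experiment: questionnaire length x behavioral consequence.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
length = rng.integers(0, 2, n)        # 0 = short questionnaire, 1 = long
consequence = rng.integers(0, 2, n)   # 0 = no consequence, 1 = reward/punishment
# Simulated careless-responding score: a length main effect, a consequence
# main effect, and only a weak buffering interaction (assumed values).
cr = (0.30 * length - 0.40 * consequence
      - 0.05 * length * consequence + rng.normal(0, 1, n))

# Ordinary least squares with an interaction term
X = np.column_stack([np.ones(n), length, consequence, length * consequence])
beta, *_ = np.linalg.lstsq(X, cr, rcond=None)
print(dict(zip(["intercept", "length", "consequence", "buffering"], beta.round(2))))
```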


2013 ◽ Vol 221 (3) ◽ pp. 145-159 ◽ Author(s): Gerard J. P. van Breukelen

This paper introduces optimal design of randomized experiments where individuals are nested within organizations, such as schools, health centers, or companies. The focus is on nested designs with two levels (organization, individual) and two treatment conditions (treated, control), with treatment assignment to organizations, or to individuals within organizations. For each type of assignment, a multilevel model is first presented for the analysis of a quantitative dependent variable or outcome. Simple equations are then given for the optimal sample size per level (number of organizations, number of individuals) as a function of the sampling cost and outcome variance at each level, with realistic examples. Next, it is explained how the equations can be applied if the dependent variable is dichotomous, or if there are covariates in the model, or if the effects of two treatment factors are studied in a factorial nested design, or if the dependent variable is repeatedly measured. Designs with three levels of nesting and the optimal number of repeated measures are briefly discussed, and the paper ends with a short discussion of robust design.
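
For the case of treatment assignment at the organization level, the classic two-level result is that the optimal number of individuals per organization depends only on the cost ratio and the variance ratio across levels, after which the number of organizations follows from the budget. The sketch below is a hedged illustration of that result, not the paper's own code; the function name and all costs, variances, and the budget are assumptions chosen for the example.

```python
# Optimal allocation for a two-level cluster-randomized design (illustrative sketch).
import math

def optimal_two_level_design(cost_org, cost_indiv, var_between, var_within, budget):
    """Return (individuals per organization, number of organizations)."""
    # Optimal cluster size: square root of (cost ratio x within/between variance ratio)
    n_per_org = math.sqrt((cost_org * var_within) / (cost_indiv * var_between))
    # Each organization costs (cost_org + n * cost_indiv); spend the budget on them
    n_orgs = budget / (cost_org + cost_indiv * n_per_org)
    return n_per_org, n_orgs

# Assumed numbers: recruiting an organization costs 40x an individual measurement,
# and 10% of the outcome variance sits at the organization level (ICC = 0.10).
n, k = optimal_two_level_design(cost_org=400, cost_indiv=10,
                                var_between=0.1, var_within=0.9, budget=20_000)
print(f"~{n:.0f} individuals in each of ~{k:.0f} organizations")
```

Under these assumed numbers the design spends the budget on roughly 34 organizations of about 19 individuals each; a higher intraclass correlation or cheaper organizations would shift the optimum toward more, smaller clusters.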

