Finite Sample Inference
Recently Published Documents

TOTAL DOCUMENTS: 23 (Five years: 7)
H-INDEX: 8 (Five years: 1)

2021
Author(s): James J. Heckman, Ganesh Karapakula
Abstract: This paper presents a simple decision-theoretic economic approach for analyzing social experiments with compromised random assignment protocols that are only partially documented. We model administratively constrained experimenters who satisfice in seeking covariate balance. We develop design-based small-sample hypothesis tests that use worst-case (least favorable) randomization null distributions. Our approach accommodates a variety of compromised experiments, including imperfectly documented re-randomization designs. To make our analysis concrete, we focus much of our discussion on the influential Perry Preschool Project. We reexamine previous estimates of program effectiveness using our methods. The choice of how to model reassignment vitally affects inference.
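The worst-case null idea lends itself to a compact illustration: when the documentation leaves the assignment mechanism ambiguous, compute the randomization p-value under each candidate mechanism and report the largest. Below is a minimal Python sketch under toy assumptions (a difference-in-means statistic and two hypothetical candidate designs); it illustrates the general idea only, not the authors' procedure for the Perry data.

```python
# Minimal sketch of a worst-case randomization p-value (illustration only,
# not the authors' procedure). Each candidate design is the support of a
# uniform randomization null consistent with the (partial) documentation.
import itertools
import numpy as np

def randomization_pvalue(y, d, assignments):
    """Exact p-value of the absolute difference in means when the null
    puts equal probability on each assignment vector in `assignments`."""
    stat = lambda a: abs(y[a == 1].mean() - y[a == 0].mean())
    observed = stat(d)
    return np.mean([stat(a) >= observed for a in assignments])

def worst_case_pvalue(y, d, candidate_designs):
    """Least favorable p-value across the candidate randomization nulls."""
    return max(randomization_pvalue(y, d, des) for des in candidate_designs)

# Hypothetical data: 8 units; the observed assignment is consistent with
# both candidate designs below.
rng = np.random.default_rng(0)
y = rng.normal(size=8)
d = np.array([1, 1, 0, 0, 1, 0, 1, 0])

# Candidate A: complete randomization of 4 treated among 8 units.
design_a = [np.isin(np.arange(8), c).astype(int)
            for c in itertools.combinations(range(8), 4)]
# Candidate B: blocked randomization, 2 treated within each block of 4.
design_b = [np.isin(np.arange(8), c1 + c2).astype(int)
            for c1 in itertools.combinations(range(4), 2)
            for c2 in itertools.combinations(range(4, 8), 2)]

print(worst_case_pvalue(y, d, [design_a, design_b]))
```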


2019, Vol. 71(2), pp. 63-82
Author(s): Martin D. Klein, John Zylstra, Bimal K. Sinha

In this article, we develop finite sample inference based on multiply imputed synthetic data generated under the multiple linear regression model. We consider two methods of generating the synthetic data, namely posterior predictive sampling and plug-in sampling. Simulation results are presented to confirm that the proposed methodology performs as the theory predicts and to numerically compare the proposed methodology with the current state-of-the-art procedures for analyzing multiply imputed partially synthetic data.

AMS 2000 subject classification: 62F10, 62F25, 62J05
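For readers who want the two generators side by side, here is a minimal Python sketch (not the authors' code) for the linear regression model y = Xβ + ε with ε ~ N(0, σ²I); the posterior predictive version assumes the standard noninformative prior.

```python
# Minimal sketch of the two synthetic-data generators for the linear
# regression model (illustration only; the posterior predictive draw
# assumes the standard noninformative prior).
import numpy as np

def fit_ols(X, y):
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    df = X.shape[0] - X.shape[1]
    return beta_hat, resid @ resid / df, df

def plug_in_sample(X, y, rng):
    """Plug-in sampling: fix (beta, sigma^2) at their point estimates and
    draw the synthetic responses from the estimated model."""
    beta_hat, sigma2_hat, _ = fit_ols(X, y)
    return X @ beta_hat + rng.normal(scale=np.sqrt(sigma2_hat), size=len(y))

def posterior_predictive_sample(X, y, rng):
    """Posterior predictive sampling: first draw (beta, sigma^2) from the
    posterior, then draw the synthetic responses given that draw."""
    beta_hat, sigma2_hat, df = fit_ols(X, y)
    sigma2 = df * sigma2_hat / rng.chisquare(df)       # scaled inverse-chi^2
    beta = rng.multivariate_normal(beta_hat, sigma2 * np.linalg.inv(X.T @ X))
    return X @ beta + rng.normal(scale=np.sqrt(sigma2), size=len(y))

# Hypothetical data: n = 50, intercept plus one covariate.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
y_plugin = plug_in_sample(X, y, rng)
y_pps = posterior_predictive_sample(X, y, rng)
```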


Econometrics, 2019, Vol. 7(1), p. 16
Author(s): Taehoon Kim, Jacob Schwartz, Kyungchul Song, Yoon-Jae Whang

This paper considers two-sided matching models with nontransferable utilities, in which one side has homogeneous preferences over the other. When only one or a few large matchings are observed, asymptotic inference is difficult despite the large number of agents involved: the observed matching depends on the preferences of all agents on both sides in a complex way, creating a complicated form of cross-sectional dependence across observed matches. When the observed matching is assumed to be the outcome of a stable matching mechanism with homogeneous preferences on one side, and the preferences are drawn from a parametric distribution conditional on observables, the large observed matching follows a parametric distribution. This paper shows how the method of Monte Carlo inference can be a viable option in such a situation. Because it is a finite sample inference method, it does not require the independence or local-dependence conditions that are often used to obtain asymptotic validity. Results from a Monte Carlo simulation study are presented and discussed.
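Because one side shares a ranking of the other in this setting, a stable matching can be computed by serial dictatorship, which makes the Monte Carlo test easy to sketch. The toy model below is a hypothetical one-to-one market with Gumbel taste shocks and a single preference parameter theta, not the paper's specification: it simulates matchings under the null and compares a matching statistic against its simulated distribution.

```python
# Toy Monte Carlo inference for a stable matching with homogeneous
# preferences on one side (illustration only, not the paper's model).
# Firms share a common ranking of workers, so the stable matching is a
# serial dictatorship: workers choose firms in order of the common rank.
import numpy as np

def simulate_matching(theta, firm_x, rng):
    """Worker i (in rank order) picks the best remaining firm; utility is
    theta * firm_x[j] plus an iid Gumbel taste shock."""
    available = list(range(len(firm_x)))
    match = np.empty(len(firm_x), dtype=int)
    for i in range(len(firm_x)):
        utils = theta * firm_x[available] + rng.gumbel(size=len(available))
        match[i] = available.pop(int(np.argmax(utils)))
    return match

def test_stat(match, firm_x):
    """Covariance between worker rank and the matched firm's characteristic."""
    return np.cov(np.arange(len(match)), firm_x[match])[0, 1]

def mc_pvalue(theta0, match_obs, firm_x, rng, R=999):
    """Finite-sample Monte Carlo p-value for H0: theta = theta0."""
    t_obs = test_stat(match_obs, firm_x)
    draws = [test_stat(simulate_matching(theta0, firm_x, rng), firm_x)
             for _ in range(R)]
    return (1 + sum(abs(t) >= abs(t_obs) for t in draws)) / (R + 1)

# Hypothetical market: 30 workers and 30 firms; the "observed" matching is
# simulated at theta = 1, so the test should tend to reject theta0 = 0.
rng = np.random.default_rng(2)
firm_x = rng.normal(size=30)
match_obs = simulate_matching(1.0, firm_x, rng)
print(mc_pvalue(0.0, match_obs, firm_x, rng))
```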


Author(s): Martin Klein, Bimal Sinha

In this paper we develop likelihood-based finite sample inference based on singly imputed partially synthetic data, when the original data follow either a multivariate normal or a multiple linear regression model. We assume that the synthetic data are generated by the plug-in sampling method, in which the unknown parameters of the data model are set equal to the observed values of their point estimators based on the original data, and the synthetic data are drawn from this estimated version of the model. Empirical studies are presented to show that the proposed methods do indeed perform as the theory predicts, and to compare the proposed methods for singly imputed synthetic data with the combining rules used to analyze multiply imputed partially synthetic data. Some theoretical comparisons between singly and multiply imputed partially synthetic data inference are also provided. A data analysis example and a disclosure risk evaluation of singly and multiply imputed partially synthetic data are presented based on public use data from the Current Population Survey. We discuss the specific conditions under which the proposed methodology yields valid inference, evaluate its performance when certain conditions do not hold, and outline some ways to extend the methodology to scenarios where the required conditions are not met.
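A quick illustration of the plug-in generator, and of why naive analysis of singly imputed synthetic data fails, for the multivariate normal case: the synthetic sample mean inherits sampling error from both the original estimates and the synthetic draw, so its variance is roughly 2Σ/n rather than Σ/n. The sketch below uses a simple variance-doubling heuristic for illustration only; the paper's exact pivotal quantities are the proper tool.

```python
# Minimal sketch: plug-in sampling of a singly imputed synthetic dataset
# under a multivariate normal model, plus a small coverage simulation.
# The variance-doubling adjustment is a heuristic for illustration; the
# paper derives exact pivotal quantities instead.
import numpy as np

def plug_in_synthetic(data, rng):
    """Replace the data with one synthetic copy drawn from N(mu_hat, Sigma_hat)."""
    mu_hat = data.mean(axis=0)
    sigma_hat = np.cov(data, rowvar=False)
    return rng.multivariate_normal(mu_hat, sigma_hat, size=len(data))

def ci_first_mean(sample, inflate=1.0):
    """Normal-approximation 95% CI for the first mean component; `inflate`
    scales the variance (2.0 = the heuristic synthetic-data adjustment)."""
    se = np.sqrt(inflate * sample[:, 0].var(ddof=1) / len(sample))
    m = sample[:, 0].mean()
    return m - 1.96 * se, m + 1.96 * se

rng = np.random.default_rng(3)
mu = np.zeros(2)
sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
naive = adjusted = 0
reps = 2000
for _ in range(reps):
    original = rng.multivariate_normal(mu, sigma, size=100)
    synthetic = plug_in_synthetic(original, rng)
    lo, hi = ci_first_mean(synthetic)               # treats synthetic as real
    naive += lo <= mu[0] <= hi
    lo, hi = ci_first_mean(synthetic, inflate=2.0)  # heuristic adjustment
    adjusted += lo <= mu[0] <= hi
# Naive coverage falls well below 95%; the adjusted interval is close to it.
print(naive / reps, adjusted / reps)
```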


2015, Vol. 3(1), pp. 1-24
Author(s): Matias D. Cattaneo, Brigham R. Frandsen, Rocío Titiunik

Abstract: In the Regression Discontinuity (RD) design, units are assigned a treatment based on whether their value of an observed covariate is above or below a fixed cutoff. Under the assumption that the distribution of potential confounders changes continuously around the cutoff, the discontinuous jump in the probability of treatment assignment can be used to identify the treatment effect. Although a recent strand of the RD literature advocates interpreting this design as a local randomized experiment, the standard approach to estimation and inference is based solely on continuity assumptions that do not justify this interpretation. In this article, we provide precise conditions in a randomization inference context under which this interpretation is directly justified and develop exact finite-sample inference procedures based on them. Our randomization inference framework is motivated by the observation that only a few observations might be available close enough to the threshold where local randomization is plausible, and hence standard large-sample procedures may be suspect. Our proposed methodology is intended as a complement and a robustness check to standard RD inference approaches. We illustrate our framework with a study of two measures of party-level advantage in U.S. Senate elections, where the number of close races is small and our framework is well suited for the empirical analysis.
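The core of the local randomization approach is an exact Fisher-style test within a small window around the cutoff. Here is a minimal Python sketch under that assumption (a generic permutation test with a difference-in-means statistic, not the authors' implementation): treatment labels are re-randomized among the few units inside the window.

```python
# Minimal sketch of exact randomization inference in a window around the
# RD cutoff (generic permutation test; not the authors' implementation).
# Assumes treatment is as-if randomly assigned within the window, which
# should contain few enough units to enumerate all re-randomizations.
import itertools
import numpy as np

def rd_randomization_test(score, y, cutoff, half_width):
    """Exact p-value of the sharp null of no effect, permuting treatment
    labels among units with |score - cutoff| <= half_width."""
    in_win = np.abs(score - cutoff) <= half_width
    y_w = y[in_win]
    d_w = (score[in_win] >= cutoff).astype(int)
    stat = lambda d: abs(y_w[d == 1].mean() - y_w[d == 0].mean())
    observed = stat(d_w)
    count = total = 0
    for c in itertools.combinations(range(len(y_w)), int(d_w.sum())):
        d = np.zeros(len(y_w), dtype=int)
        d[list(c)] = 1
        count += stat(d) >= observed
        total += 1
    return count / total

# Hypothetical data: running variable on [-1, 1], cutoff 0, constant effect.
rng = np.random.default_rng(4)
score = rng.uniform(-1, 1, size=200)
y = 0.5 * (score >= 0) + rng.normal(size=200)
print(rd_randomization_test(score, y, cutoff=0.0, half_width=0.08))
```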

