Adjusting Standard Errors and Confidence Intervals

2019 ◽ pp. 233-248
Author(s): Peter Cummings

Author(s): Vaida Paketurytė ◽ Vytautas Petrauskas ◽ Asta Zubrienė ◽ Olga Abian ◽ Margarida Bastos ◽ ...

2019 ◽ Vol 11 (16) ◽ pp. 1944
Author(s): Jessica Esteban ◽ Ronald McRoberts ◽ Alfredo Fernández-Landa ◽ José Tomé ◽ Erik Næsset

Despite the popularity of random forests (RF) as a prediction algorithm, methods for constructing confidence intervals for population means using this technique are still only sparsely reported. For two regional study areas (Spain and Norway), RF was used to predict forest volume or aboveground biomass using remotely sensed auxiliary data obtained from multiple sensors. Additionally, the changes per unit area of these forest attributes were estimated using indirect and direct methods. Multiple inferential frameworks have recently attracted increased attention for estimating the variances required for confidence intervals. For this study, three statistical frameworks, design-based expansion, model-assisted, and model-based estimators, were used to estimate population parameters and their variances. Pairs and wild bootstrapping approaches at different levels were compared for estimating the variances of the model-based estimates of the population means, as well as for mapping the uncertainty of the change predictions. The RF models accurately represented the relationship between the response and the remotely sensed predictor variables, resulting in greater precision for estimates of the population means relative to design-based expansion estimates. Standard errors based on pairs bootstrapping internal to RF were considerably larger than standard errors based on either pairs or wild bootstrapping external to the entire RF algorithm. Pairs and wild external bootstrapping produced similar standard errors, but wild bootstrapping better mimicked the original structure of the sample data and better preserved the ranges of the predictor variables.
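The external pairs bootstrap described above can be illustrated with a short Python sketch. Everything below is an illustrative assumption rather than the study's method: the simulated data, the RF settings, and the helper names (model_based_mean, pairs_bootstrap_se) stand in for the Spanish and Norwegian datasets and the paper's exact configuration.

```python
# Minimal sketch: pairs bootstrapping *external* to the entire RF algorithm
# for the standard error of a model-based estimate of a population mean.
# All data are synthetic; names and settings are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-ins: N population units with p auxiliary predictors,
# and a field sample of n units with an observed response y.
N, n, p = 10_000, 300, 5
X_pop = rng.normal(size=(N, p))
beta = rng.normal(size=p)
sample_idx = rng.choice(N, size=n, replace=False)
X = X_pop[sample_idx]
y = X @ beta + rng.normal(size=n)

def model_based_mean(X_s, y_s):
    """Fit RF to the sample, predict every population unit, return the mean."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0, n_jobs=-1)
    rf.fit(X_s, y_s)
    return rf.predict(X_pop).mean()

def pairs_bootstrap_se(X_s, y_s, B=100):
    """Resample (x_i, y_i) pairs with replacement and refit the whole RF each
    time; the SD of the B bootstrap means estimates the standard error."""
    boot_means = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, len(y_s), len(y_s))  # pairs resample
        boot_means[b] = model_based_mean(X_s[idx], y_s[idx])
    return boot_means.std(ddof=1)

mu_hat = model_based_mean(X, y)
se_hat = pairs_bootstrap_se(X, y)
print(f"estimate {mu_hat:.3f}, 95% CI ({mu_hat - 1.96 * se_hat:.3f}, {mu_hat + 1.96 * se_hat:.3f})")
```

Only the pairs variant is sketched here, and B would normally be larger in practice. The wild variant keeps each x_i fixed and instead perturbs residuals with random weights before refitting, which is why it preserves the ranges of the predictor variables.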


2018 ◽ Vol 108 (8) ◽ pp. 2277-2304
Author(s): Michal Kolesár ◽ Christoph Rothe

We consider inference in regression discontinuity designs when the running variable only takes a moderate number of distinct values. In particular, we study the common practice of using confidence intervals (CIs) based on standard errors that are clustered by the running variable as a means to make inference robust to model misspecification (Lee and Card 2008). We derive theoretical results and present simulation and empirical evidence showing that these CIs do not guard against model misspecification, and that they have poor coverage properties. We therefore recommend against using these CIs in practice. We instead propose two alternative CIs with guaranteed coverage properties under easily interpretable restrictions on the conditional expectation function. (JEL C13, C51, J13, J31, J64, J65)
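A minimal Python sketch of the practice the paper examines follows: standard errors clustered by a discrete running variable in a sharp RD regression, shown next to conventional heteroskedasticity-robust standard errors. The data, cutoff, and variable names are invented for illustration and are not taken from the paper.

```python
# Minimal sketch: sharp RD regression with SEs clustered by the discrete
# running variable (the practice examined in the paper) versus
# heteroskedasticity-robust SEs. Data and names are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000
running = rng.integers(-10, 11, size=n).astype(float)  # discrete running variable
treated = (running >= 0).astype(float)                 # sharp cutoff at zero
y = 1.0 * treated + 0.3 * running + rng.normal(size=n)

# Linear specification: treatment, running variable, and their interaction.
X = sm.add_constant(np.column_stack([treated, running, treated * running]))
ols = sm.OLS(y, X)

# CIs with SEs clustered by the running variable (the practice in question).
fit_cluster = ols.fit(cov_type="cluster", cov_kwds={"groups": running})
# Conventional heteroskedasticity-robust CIs for comparison.
fit_robust = ols.fit(cov_type="HC1")

print("clustered CI for treatment effect:", fit_cluster.conf_int()[1])
print("robust CI for treatment effect:   ", fit_robust.conf_int()[1])
```

The alternative CIs with guaranteed coverage proposed in the paper are not reproduced here; an implementation is available in the RDHonest R package.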


1996 ◽ Vol 12 (3) ◽ pp. 569-580
Author(s): Paul Rilstone ◽ Michael Veall

The usual standard errors for the regression coefficients in a seemingly unrelated regression model have a substantial downward bias, and bootstrapping the standard errors does not appear to improve inferences. This paper reports Monte Carlo evidence indicating that bootstrapping can yield substantially better inferences when applied to t-ratios rather than to standard errors.
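The distinction drawn above, bootstrapping t-ratios rather than standard errors, is the bootstrap-t (percentile-t) method. The sketch below illustrates the idea for a single coefficient, with plain OLS standing in for the SUR system and all data simulated; it is not the paper's Monte Carlo design.

```python
# Minimal sketch of the bootstrap-t (percentile-t) idea: in each pairs
# resample, recompute the studentized statistic (beta*_b - beta_hat) / se*_b
# and use its empirical quantiles to form the CI. Plain OLS stands in for
# the SUR system; data and names are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
X = sm.add_constant(x)

fit = sm.OLS(y, X).fit()
beta_hat, se_hat = fit.params[1], fit.bse[1]

B = 999
t_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)                 # pairs resample
    fit_b = sm.OLS(y[idx], X[idx]).fit()
    t_star[b] = (fit_b.params[1] - beta_hat) / fit_b.bse[1]

lo, hi = np.quantile(t_star, [0.025, 0.975])
ci = (beta_hat - hi * se_hat, beta_hat - lo * se_hat)  # note the flipped quantiles
print("bootstrap-t 95% CI:", ci)
```

Because the t-ratio is asymptotically pivotal, its bootstrap distribution tends to deliver more accurate coverage than plugging a bootstrapped standard error into a normal-theory interval, which is consistent with the Monte Carlo evidence summarized above.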

