Robust Standard Errors for Panel Regressions with Cross-Sectional Dependence

Author(s):  
Daniel Hoechle

I present a new Stata program, xtscc, that estimates pooled ordinary least-squares/weighted least-squares regression and fixed-effects (within) regression models with Driscoll and Kraay (Review of Economics and Statistics 80: 549–560) standard errors. By running Monte Carlo simulations, I compare the finite-sample properties of the cross-sectional dependence–consistent Driscoll–Kraay estimator with the properties of other, more commonly used covariance matrix estimators that do not account for cross-sectional dependence. The results indicate that Driscoll–Kraay standard errors are well calibrated when cross-sectional dependence is present. However, erroneously ignoring cross-sectional correlation in the estimation of panel models can lead to severely biased statistical results. I illustrate the xtscc program by considering an application from empirical finance. Thereby, I also propose a Hausman-type test for fixed effects that is robust to general forms of cross-sectional and temporal dependence.
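The abstract's xtscc program is Stata; the core of the Driscoll–Kraay estimator can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's implementation: sum the moment conditions cross-sectionally at each date, then apply a Newey–West (Bartlett-kernel) correction to that time series before forming the sandwich. The function name and the default lag length are assumptions for illustration.

```python
import numpy as np

def driscoll_kraay_se(X, y, time_ids, lags=2):
    """Pooled OLS with Driscoll-Kraay standard errors (illustrative sketch).

    X: (n, k) regressors including a constant column; y: (n,);
    time_ids: (n,) integer time index for each observation.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta

    # Cross-sectional sums of the moment conditions x_it * e_it at each date t
    periods = np.unique(time_ids)
    h = np.vstack([(X[time_ids == t] * resid[time_ids == t, None]).sum(axis=0)
                   for t in periods])                      # (T, k)

    # Newey-West (Bartlett kernel) HAC estimate built from the h_t series,
    # which is what makes the result robust to cross-sectional dependence
    S = h.T @ h
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1)
        G = h[j:].T @ h[:-j]
        S += w * (G + G.T)

    V = XtX_inv @ S @ XtX_inv                              # sandwich estimator
    return beta, np.sqrt(np.diag(V))
```

Because the meat of the sandwich is built from cross-sectional sums rather than individual observations, the standard errors remain consistent under arbitrary contemporaneous correlation across panel units.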

2014 ◽  
Vol 10 (4) ◽  
pp. 418-431 ◽  
Author(s):  
Imre Karafiath

Purpose – In the finance literature, fitting a cross-sectional regression with (estimated) abnormal returns as the dependent variable and firm-specific variables (e.g. financial ratios) as independent variables has become de rigueur for a publishable event study. In the absence of skewness and/or kurtosis in the explanatory variable, the regression design does not exhibit leverage – an issue that has been addressed in the econometrics literature on the finite sample properties of heteroskedastic-consistent (HC) standard errors, but not in the finance literature on event studies. The paper aims to discuss this issue. Design/methodology/approach – In this paper, simulations are designed to evaluate the potential bias in the standard error of the regression coefficient when the regression design includes “points of high leverage” (Chesher and Jewitt, 1987) and heteroskedasticity. The empirical distributions of test statistics are tabulated from ordinary least squares, weighted least squares, and HC standard errors. Findings – None of the test statistics examined in these simulations is uniformly robust with regard to conditional heteroskedasticity when the regression includes “points of high leverage.” In some cases the bias can be quite large: an empirical rejection rate as high as 25 percent for a 5 percent nominal significance level. Further, the bias in OLS HC standard errors may be attenuated but not fully corrected with a “wild bootstrap.” Research limitations/implications – If the researcher suspects an event-induced increase in return variances, tests for conditional heteroskedasticity should be conducted and the regressor matrix should be evaluated for observations that exhibit a high degree of leverage. Originality/value – This paper is a modest step toward filling a gap on the finite sample properties of HC standard errors in the event methodology literature.
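The leverage and HC machinery the abstract refers to is standard and easy to make concrete. A minimal numpy sketch (not the paper's code): compute the hat values h_ii to flag points of high leverage, then form HC0 (White) and HC3 standard errors, where HC3 inflates squared residuals by 1/(1−h_ii)² precisely to compensate for high-leverage observations.

```python
import numpy as np

def hc_ols(X, y):
    """OLS with HC0 and HC3 heteroskedasticity-consistent standard errors,
    plus hat values for leverage diagnostics (illustrative sketch)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # Diagonal of the hat matrix H = X (X'X)^{-1} X'; large h_i = high leverage
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)
    se = {}
    for name, u2 in (('HC0', resid**2), ('HC3', (resid / (1 - h))**2)):
        meat = (X * u2[:, None]).T @ X
        se[name] = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
    return beta, se, h
```

A common rule of thumb flags observations with h_i above 2k/n for inspection; the simulations summarized above suggest that no HC variant is uniformly reliable once such points interact with conditional heteroskedasticity.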


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The inherent motivation for the related weighted least squares analysis of the model is an essential and attractive selling point to engineers with an interest in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests that were designed under different contexts. Tolerance intervals within the context of the model are derived, thus generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application where hundreds of electronic components are continuously monitored by an automated system that flags components that are suspected of unusual degradation patterns.
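The weighted least squares step at the heart of such an analysis is compact. A generic sketch (the variance model and variable names below are hypothetical, not the paper's): when the error variance grows with time, weighting each observation by the reciprocal of its variance recovers the efficient estimator.

```python
import numpy as np

def wls(X, y, weights):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i'b)^2.
    With w_i = 1 / Var(e_i), this is the efficient (GLS) estimator for a
    heteroscedastic linear model (illustrative sketch)."""
    w = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return beta
```

For a degradation metric whose noise variance is proportional to elapsed time t, the natural weights are 1/t, so early, precise measurements count more than late, noisy ones.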


2021 ◽  
Author(s):  
Young Ri Lee ◽  
James E Pustejovsky

Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in education. However, when the focus of a study is on the regression coefficients at level one rather than on the random effects, ordinary least squares regression with cluster-robust variance estimators (OLS-CRVE) or fixed effects regression with CRVE (FE-CRVE) could be appropriate approaches. These alternative methods may be advantageous because they rely on weaker assumptions than what is required by CCREM. We conducted a Monte Carlo simulation study to compare the performance of CCREM, OLS-CRVE, and FE-CRVE in models with crossed random effects, including conditions where homoscedasticity and exogeneity assumptions held and conditions where they were violated. We found that CCREM performed the best when its assumptions were all met. However, when homoscedasticity assumptions were violated, OLS-CRVE and FE-CRVE provided similar or better performance than CCREM. FE-CRVE showed the best performance when the exogeneity assumption was violated. Thus, we recommend two-way FE-CRVE as a good alternative to CCREM, particularly if the homoscedasticity or exogeneity assumptions of the CCREM might be in doubt.
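The CRVE component of the OLS-CRVE and FE-CRVE approaches can be sketched directly. This is a generic Liang–Zeger (CR0) sandwich clustering on a single factor, for illustration only; the study itself works with crossed factors and may use small-sample corrections not shown here.

```python
import numpy as np

def ols_crve(X, y, clusters):
    """OLS with a cluster-robust (Liang-Zeger / CR0) sandwich variance,
    clustering on one factor (illustrative sketch)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    k = X.shape[1]
    meat = np.zeros((k, k))
    # Sum the outer products of per-cluster score vectors, which allows
    # arbitrary correlation of errors within each cluster
    for g in np.unique(clusters):
        sg = X[clusters == g].T @ resid[clusters == g]
        meat += np.outer(sg, sg)
    V = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))
```

Because only within-cluster independence is dropped, the estimator stays valid under heteroscedasticity of unknown form, which is exactly the weaker-assumption advantage the abstract highlights over CCREM.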


2014 ◽  
Vol 4 (2) ◽  
pp. 175-196 ◽  
Author(s):  
David Mutua Mathuva

Purpose – The purpose of this paper is to investigate whether non-financial firms listed on the Nairobi Securities Exchange (NSE) exhibit a target cash conversion cycle (CCC). The study also examines the speed of adjustment to the target CCC and the factors that influence corporate decisions on the optimum length of the CCC. Design/methodology/approach – Based on a sample of 33 publicly traded firms on the NSE for the period between 1993 and 2008, cross-sectional and time series analyses were carried out on the data comprising 468 firm-years. A target adjustment model was developed to examine the significant determinants of the CCC. Various regression approaches including ordinary least squares, fixed effects and two-stage least squares estimation models were used in data analysis. Findings – The results, which are robust to endogeneity, show that non-financial firms listed on the NSE maintain a target CCC. Further analysis reveals that these firms adjust to the target CCC at a slow rate. The results show that the determinants of the CCC include both firm-specific and economy-wide factors. Specifically, the study establishes that older firms and firms with more internal resources maintain a longer CCC. Moreover, higher return on assets, investment in capital expenditure and growth opportunities have a significant negative association with the CCC. The results also show a significant positive relation between inflation and the CCC. Practical implications – The study establishes that other than internal firm-specific factors, the CCC is also influenced by inflation, which is an external, economy-wide factor. Originality/value – To the best of the author's knowledge, this is the first study to examine whether listed non-financial firms in a frontier market maintain a target CCC.
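The target adjustment logic can be made concrete with a simulated partial-adjustment series. This is a hedged, self-contained sketch (the adjustment speed, target value, and variable names are all hypothetical, not estimates from the paper): under CCC_t = (1−λ)·CCC_{t−1} + λ·CCC*, the coefficient on the lagged CCC in a regression identifies 1−λ, so the speed of adjustment is one minus that coefficient.

```python
import numpy as np

# Partial-adjustment sketch: CCC_t = (1-lam)*CCC_{t-1} + lam*target + e.
# All numbers below are illustrative, not from the study.
rng = np.random.default_rng(2)
lam, target = 0.4, 60.0                     # assumed speed and target CCC (days)
ccc = np.empty(500)
ccc[0] = 90.0                               # start above the target
for t in range(1, 500):
    ccc[t] = (1 - lam) * ccc[t - 1] + lam * target + rng.normal(scale=2.0)

# Regress CCC_t on a constant and its lag; slope estimates (1 - lam)
X = np.column_stack([np.ones(499), ccc[:-1]])
b = np.linalg.lstsq(X, ccc[1:], rcond=None)[0]
speed = 1.0 - b[1]                          # estimated speed of adjustment
```

In the study's panel setting the same idea is applied firm by firm with additional determinants as regressors; a small estimated speed is what the abstract means by firms adjusting to the target CCC at a slow rate.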


2007 ◽  
Vol 42 (1) ◽  
pp. 229-256 ◽  
Author(s):  
Scott E. Harrington ◽  
David G. Shrider

AbstractWe demonstrate analytically that cross-sectional variation in the effects of events, i.e., in true abnormal returns, necessarily produces event-induced variance increases, biasing popular tests for mean abnormal returns in short-horizon event studies. We show that unexplained cross-sectional variation in true abnormal returns plausibly produces nonproportional heteroskedasticity in cross-sectional regressions, biasing coefficient standard errors for both ordinary and weighted least squares. Simulations highlight the resulting biases, the necessity of using tests robust to cross-sectional variation, and the power of robust tests, including regression-based tests for nonzero mean abnormal returns, which may increase power by conditioning on relevant explanatory variables.


1998 ◽  
Vol 84 (6) ◽  
pp. 2163-2170 ◽  
Author(s):  
Mitchell J. Rosen ◽  
John D. Sorkin ◽  
Andrew P. Goldberg ◽  
James M. Hagberg ◽  
Leslie I. Katzel

Studies assessing changes in maximal aerobic capacity (V̇O2max) associated with aging have traditionally employed the ratio of V̇O2max to body weight. Log-linear, ordinary least-squares, and weighted least-squares models may avoid some of the inherent weaknesses associated with the use of ratios. In this study we used four different methods to examine the age-associated decline in V̇O2max in a cross-sectional sample of 276 healthy men, aged 45–80 yr. Sixty-one of the men were aerobically trained athletes, and the remainder were sedentary. The model that accounted for the largest proportion of variance was a weighted least-squares model that included age, fat-free mass, and an indicator variable denoting exercise training status. The model accounted for 66% of the variance in V̇O2max and satisfied all the important general linear model assumptions. The other approaches failed to satisfy one or more of these assumptions. The results indicated that V̇O2max declines at the same rate in athletic and sedentary men (0.24 l/min or 9%/decade) and that 35% of this decline (0.08 l·min⁻¹·decade⁻¹) is due to the age-associated loss of fat-free mass.


Separations ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 49 ◽  
Author(s):  
Juan Sanchez

It is necessary to determine the limit of detection when validating any analytical method. For methods with a linear response, a simple and labor-saving procedure is to use the linear regression parameters obtained in the calibration to estimate the blank standard deviation from the residual standard deviation (sres), or the intercept standard deviation (sb0). In this study, multiple experimental calibrations are evaluated, applying both ordinary and weighted least squares. Moreover, the analyses of replicated blank matrices, spiked at 2–5 times the lowest calculated limit values with the two regression methods, are performed to obtain the standard deviation of the blank. The limits of detection obtained with ordinary least squares, weighted least squares, the signal-to-noise ratio, and replicate blank measurements are then compared. Ordinary least squares, which is the simplest and most commonly applied calibration regression methodology, always overestimates the values of the standard deviations at the lower levels of calibration ranges. As a result, the detection limits are up to one order of magnitude greater than those obtained with the other approaches studied, which all gave similar limits.
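The regression-based route to a detection limit is short enough to sketch. This is an illustrative sketch using the common convention LOD = k·s/slope with k = 3.3 and the residual standard deviation standing in for the blank standard deviation; the function name and the choice of k are assumptions, and (as the study above shows) the OLS version can overestimate the blank variability at low concentrations.

```python
import numpy as np

def lod_from_calibration(conc, signal, k=3.3):
    """Detection limit from an OLS calibration line (illustrative sketch):
    LOD = k * s_res / slope, with s_res the residual standard deviation
    used as a proxy for the blank standard deviation."""
    n = len(conc)
    X = np.column_stack([np.ones(n), conc])
    (b0, b1), rss, *_ = np.linalg.lstsq(X, signal, rcond=None)
    s_res = np.sqrt(rss[0] / (n - 2))    # residual standard deviation
    return k * s_res / b1
```

A weighted variant would replace the single s_res with the modeled standard deviation at zero concentration, which is what brings the regression-based limit in line with the replicate-blank and signal-to-noise estimates.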


Talanta ◽  
2010 ◽  
Vol 80 (3) ◽  
pp. 1102-1109 ◽  
Author(s):  
Rosilene S. Nascimento ◽  
Roberta E.S. Froes ◽  
Nilton O.C. e Silva ◽  
Rita L.P. Naveira ◽  
Denise B.C. Mendes ◽  
...  

2015 ◽  
Vol 76 (13) ◽  
Author(s):  
Khoo Li Peng ◽  
Robiah Adnan ◽  
Maizah Hura Ahmad

In this study, a Leverage Based Near Neighbour–Robust Weighted Least Squares (LBNN-RWLS) method is proposed in order to estimate the standard error accurately in the presence of heteroscedastic errors and outliers in multiple linear regression. The data sets used in this study are simulated through Monte Carlo simulation. The data sets contain heteroscedastic errors and different percentages of outliers with different sample sizes. The study discovered that LBNN-RWLS is able to produce smaller standard errors compared to Ordinary Least Squares (OLS), Least Trimmed Squares (LTS) and Weighted Least Squares (WLS). This shows that LBNN-RWLS can estimate the standard error accurately even when heteroscedastic errors and outliers are present in the data sets.
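LBNN-RWLS itself is the paper's contribution and is not reproduced here; as a point of reference, a generic robust weighted least squares scheme of the same family can be sketched with iteratively reweighted least squares and Huber weights, which downweight large residuals so that outliers cannot dominate the fit.

```python
import numpy as np

def irls_huber(X, y, c=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights -- a generic
    robust WLS scheme, not the paper's LBNN-RWLS (illustrative sketch)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        # Robust residual scale via the median absolute deviation
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = r / s
        # Huber weights: full weight inside the threshold, shrinking outside
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)   # weighted normal equations
    return beta
```

With gross outliers present, the reweighted fit stays close to the true coefficients where plain OLS is pulled away, which mirrors the qualitative comparison the study reports for LBNN-RWLS against OLS and WLS.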
