FINITE-SAMPLE MOMENTS OF THE COEFFICIENT OF VARIATION

2009 ◽  
Vol 25 (1) ◽  
pp. 291-297 ◽  
Author(s):  
Yong Bao

We study the finite-sample bias and mean squared error, when properly defined, of the sample coefficient of variation under a general distribution. We employ a Nagar-type expansion and use moments of quadratic forms to derive the results. We find that the approximate bias depends not only on the skewness but also on the kurtosis of the distribution, whereas the approximate mean squared error depends on cumulants up to order 6.
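The finite-sample bias in question is easy to see by simulation. The sketch below is illustrative only (it uses a normal population as a stand-in for the paper's general distribution, and the function names are invented for this example):

```python
import random
import statistics

def sample_cv(xs):
    """Sample coefficient of variation: s / x-bar."""
    return statistics.stdev(xs) / statistics.mean(xs)

def mc_cv_bias(mu, sigma, n, reps=2000, seed=0):
    """Monte Carlo estimate of E[sample CV] - population CV
    for i.i.d. N(mu, sigma^2) samples of size n."""
    rng = random.Random(seed)
    cvs = [sample_cv([rng.gauss(mu, sigma) for _ in range(n)])
           for _ in range(reps)]
    return sum(cvs) / reps - sigma / mu
```

The estimated bias shrinks as the sample size grows, consistent with a Nagar-type O(1/n) expansion.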

1996 ◽  
Vol 12 (3) ◽  
pp. 432-457 ◽  
Author(s):  
Eric Ghysels ◽  
Offer Lieberman

It is common for an applied researcher to use filtered data, such as seasonally adjusted series, to estimate the parameters of a dynamic regression model. In this paper, we study the effect of (linear) filters on the distribution of the parameters of a dynamic regression model with a lagged dependent variable and a set of exogenous regressors. So far, only asymptotic results are available. Our main interest is to investigate the effect of filtering on the small-sample bias and mean squared error. In general, these results entail numerical integration of derivatives of the joint moment generating function of two quadratic forms in normal variables. The computation of these integrals is quite involved. However, we take advantage of Laplace approximations to the bias and mean squared error, which substantially reduce the computational burden, as they yield relatively simple analytic expressions. We obtain analytic formulae for approximating the effect of filtering on the finite-sample bias and mean squared error. We evaluate the adequacy of the approximations by comparison with Monte Carlo simulations, using the Census X-11 filter as a specific example.
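The qualitative effect of filtering can be reproduced with a small simulation: passing an AR(1) series through a two-sided moving average (a crude stand-in for the Census X-11 filter; the setup is illustrative and not the paper's) inflates the estimated autoregressive coefficient:

```python
import random

def ar1_path(rho, n, rng):
    """Simulate an AR(1) series with N(0,1) shocks and a burn-in of 50."""
    y, prev = [], 0.0
    for _ in range(n + 50):
        prev = rho * prev + rng.gauss(0.0, 1.0)
        y.append(prev)
    return y[50:]

def ols_ar1(y):
    """OLS estimate of the AR(1) coefficient (no intercept)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def ma3(y):
    """Centered 3-term moving-average filter."""
    return [(y[t - 1] + y[t] + y[t + 1]) / 3 for t in range(1, len(y) - 1)]

def mean_est(rho, n, reps, filt, seed=0):
    """Average OLS estimate over Monte Carlo replications,
    optionally filtering each series first."""
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        y = ar1_path(rho, n, rng)
        ests.append(ols_ar1(filt(y) if filt else y))
    return sum(ests) / reps
```

Smoothing raises the lag-1 autocorrelation of the series, so the filtered estimate sits well above the true coefficient even when the unfiltered estimate is nearly unbiased.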


1987 ◽  
Vol 3 (3) ◽  
pp. 359-370 ◽  
Author(s):  
Koichi Maekawa

We compare the distributional properties of the four predictors commonly used in practice. They are based on the maximum likelihood, two types of least squares, and the Yule-Walker estimators. The asymptotic expansions of the distribution, bias, and mean squared error of the four predictors are derived up to O(T^{-1}), where T is the sample size. Examining the formulas of the asymptotic expansions, we find that, except for the Yule-Walker type predictor, the other three predictors share the same distributional properties up to O(T^{-1}).
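The distinctive behavior of the Yule-Walker estimator can be seen in a toy AR(1) comparison. This sketch is illustrative (the function names are invented; the paper's analysis concerns predictors, not just coefficient estimates):

```python
import random

def simulate_ar1(rho, n, seed=0):
    """AR(1) series with N(0,1) shocks and a burn-in of 50."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(n + 50):
        prev = rho * prev + rng.gauss(0.0, 1.0)
        y.append(prev)
    return y[50:]

def ls_ar1(y):
    """Least-squares AR(1) estimate on the demeaned series."""
    mean = sum(y) / len(y)
    z = [v - mean for v in y]
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    return num / den

def yw_ar1(y):
    """Yule-Walker AR(1) estimate: lag-1 autocovariance over the
    variance, both with divisor n."""
    n = len(y)
    mean = sum(y) / n
    c0 = sum((v - mean) ** 2 for v in y) / n
    c1 = sum((y[t] - mean) * (y[t - 1] - mean) for t in range(1, n)) / n
    return c1 / c0
```

The two estimates share the same numerator but differ in the denominator, so in small samples the Yule-Walker estimate is shrunk further toward zero than the least-squares estimate, and both lie below the true coefficient.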


2008 ◽  
Vol 24 (3) ◽  
pp. 631-650 ◽  
Author(s):  
Peter C.B. Phillips ◽  
Chirok Han

This paper introduces a simple first-difference-based approach to estimation and inference for the AR(1) model. The estimates have virtually no finite-sample bias and are not sensitive to initial conditions, and the approach has the unusual advantage that a Gaussian central limit theory applies and is continuous as the autoregressive coefficient passes through unity with a uniform $\sqrt{n}$ rate of convergence. En route, a useful central limit theorem (CLT) for sample covariances of linear processes is given, following Phillips and Solo (1992, Annals of Statistics, 20, 971–1001). The approach also has useful extensions to dynamic panels.
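A minimal sketch of the first-difference idea, assuming the moment condition E[Δy_{t-1}(2Δy_t + Δy_{t-1})] = ρ E[Δy_{t-1}²], which holds both for |ρ| < 1 and at ρ = 1 (the implementation details here are illustrative, not the paper's exact procedure):

```python
import random

def fd_ar1(y):
    """First-difference estimator of the AR(1) coefficient, from the
    moment condition E[dy_{t-1} (2 dy_t + dy_{t-1})] = rho E[dy_{t-1}^2].
    Differencing makes the estimate insensitive to initial conditions."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    num = sum(dy[t - 1] * (2 * dy[t] + dy[t - 1]) for t in range(1, len(dy)))
    den = sum(dy[t - 1] ** 2 for t in range(1, len(dy)))
    return num / den

def simulate_ar1(rho, n, seed=0):
    """AR(1) series with N(0,1) shocks, started at zero."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + rng.gauss(0.0, 1.0)
        y.append(prev)
    return y
```

The same formula is consistent for a stationary series and for a random walk, which is the continuity through unity that the abstract highlights.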


Econometrica ◽  
2019 ◽  
Vol 87 (4) ◽  
pp. 1307-1340 ◽  
Author(s):  
Matthew Gentzkow ◽  
Jesse M. Shapiro ◽  
Matt Taddy

We study the problem of measuring group differences in choices when the dimensionality of the choice set is large. We show that standard approaches suffer from a severe finite‐sample bias, and we propose an estimator that applies recent advances in machine learning to address this bias. We apply this method to measure trends in the partisanship of congressional speech from 1873 to 2016, defining partisanship to be the ease with which an observer could infer a congressperson's party from a single utterance. Our estimates imply that partisanship is far greater in recent years than in the past, and that it increased sharply in the early 1990s after remaining low and relatively constant over the preceding century.
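The finite-sample bias of naive plug-in measures can be illustrated with a toy example: even when the two parties use every phrase with identical frequencies, the empirical posterior of a speaker's party, averaged over observed speech, sits above the uninformative value of 1/2. This sketch uses equal priors and a uniform phrase distribution and is not the authors' estimator:

```python
import random

def plugin_partisanship(counts_r, counts_d):
    """Plug-in 'partisanship': the expected empirical posterior
    probability assigned to a speaker's own party after observing
    a single phrase, with equal priors over the two parties."""
    tr, td = sum(counts_r), sum(counts_d)
    pi = 0.0
    for cr, cd in zip(counts_r, counts_d):
        if cr + cd == 0:
            continue
        qr, qd = cr / tr, cd / td
        post_r = qr / (qr + qd)  # empirical P(party R | phrase)
        pi += 0.5 * qr * post_r + 0.5 * qd * (1.0 - post_r)
    return pi

def draw_counts(n_phrases, n_tokens, rng):
    """Multinomial phrase counts, uniform over phrases."""
    counts = [0] * n_phrases
    for _ in range(n_tokens):
        counts[rng.randrange(n_phrases)] += 1
    return counts
```

With few tokens per phrase the plug-in measure is far above 1/2 purely from sampling noise; increasing the token count shrinks it back toward the true value, which is the severe finite-sample bias the abstract describes.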


Author(s):  
Yulia Kotlyarova ◽  
Marcia M. A. Schafgans ◽  
Victoria Zinde-Walsh

In this paper, we summarize results on convergence rates of various kernel-based non- and semiparametric estimators, focusing on the impact of insufficient distributional smoothness, possibly unknown smoothness, and even non-existence of a density. Methods of safeguarding against possible lack of smoothness, and against uncertainty about the degree of smoothness, are surveyed, with emphasis on nonconvex model averaging. This approach can be implemented via a combined estimator that selects weights by minimizing the asymptotic mean squared error. We argue that, in order to evaluate the finite-sample performance of these and similar estimators, it is important to account for possible lack of smoothness.
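The weight-selection step can be sketched for the two-estimator case: given (estimates of) the second moments m_ij = E[e_i e_j] of the two estimators' errors, the MSE-minimizing weights summing to one have a closed form. The function below is illustrative; note that the optimal weight can fall outside [0, 1], which is the "nonconvex" aspect of the averaging:

```python
def combine_two(est1, est2, m11, m22, m12):
    """Combine two estimators of the same target with weights w and
    1 - w chosen to minimize the combined second moment
    w^2 m11 + 2 w (1 - w) m12 + (1 - w)^2 m22.
    First-order condition: w (m11 - 2 m12 + m22) = m22 - m12."""
    denom = m11 - 2.0 * m12 + m22
    w = (m22 - m12) / denom
    return w * est1 + (1.0 - w) * est2, w
```

For example, with error second moments m11 = 4, m22 = 1, and m12 = 0, the optimal weight on the first estimator is 0.2 and the combined second moment is 0.8, below that of either estimator alone.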

