Moment Approximation for Least-Squares Estimator in First-Order Regression Models with Unit Root and Nonnormal Errors

Author(s): Yong Bao, Aman Ullah, Ru Zhang
2010, Vol. 53 (2), pp. 371-386
Author(s): Xin Chen, Min Tsao, Julie Zhou
2009, Vol. 25 (6), pp. 1682-1715
Author(s): Peter C.B. Phillips, Tassos Magdalinos

It is well known that unit root limit distributions are sensitive to initial conditions in the distant past. If the distant past initialization is extended to the infinite past, the initial condition dominates the limit theory, producing a faster rate of convergence, a limiting Cauchy distribution for the least squares coefficient, and a limit normal distribution for the t-ratio. This amounts to the tail of the unit root process wagging the dog of the unit root limit theory. These simple results apply in the case of a univariate autoregression with no intercept. The limit theory for vector unit root regression and cointegrating regression is affected but is no longer dominated by infinite past initializations. The latter contribute to the limiting distribution of the least squares estimator and produce a singularity in the limit theory, but do not change the principal rate of convergence. Usual cointegrating regression theory and inference continue to hold in spite of the degeneracy in the limit theory and are therefore robust to initial conditions that extend to the infinite past.
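As an illustrative sketch of the baseline the abstract contrasts against, the simulation below (an assumption for exposition, not code from the paper) generates a driftless unit root process initialized at zero in the recent past, estimates the autoregressive coefficient by least squares with no intercept, and tabulates the normalized statistic n(ρ̂ − 1), which under this fixed initialization follows the familiar left-skewed Dickey-Fuller-type limit distribution rather than the Cauchy limit that arises under infinite-past initialization:

```python
import numpy as np

def ls_coefficient(y):
    """OLS slope in y_t = rho * y_{t-1} + error, with no intercept."""
    y_lag, y_cur = y[:-1], y[1:]
    return np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag)

rng = np.random.default_rng(1)
n = 400
stats = []
for _ in range(2000):
    u = rng.standard_normal(n)
    y = np.cumsum(u)  # random walk with y_0 = 0 (recent-past initialization)
    stats.append(n * (ls_coefficient(y) - 1.0))

# The sampling distribution of n*(rho_hat - 1) is skewed to the left,
# with mean well below zero -- the standard unit root limit theory that
# an infinite-past initialization would overturn.
```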


Author(s): M. P. Bazilevsky

When a regression model is estimated by the least squares method, one of the prerequisites is the absence of autocorrelation in the regression residuals. Autocorrelated residuals render the least-squares estimates inefficient and their standard errors unreliable. Quantitatively, autocorrelation in the residuals of a regression model has traditionally been measured by the Durbin-Watson statistic, the ratio of the sum of squared differences of consecutive residuals to the sum of squared residuals. Unfortunately, this analytical form does not allow the statistic to be incorporated, as a set of linear constraints, into the problem of selecting informative regressors, which is essentially a mathematical programming problem for the regression model. The task of selecting informative regressors is to choose, from the full set of candidate regressors, a given number of variables according to some quality criterion. The aim of the paper is to develop and study new criteria for detecting first-order autocorrelation in the residuals of regression models that can later be integrated, in the form of linear constraints, into the regressor-selection problem. To this end, the paper proposes modular (absolute-value) autocorrelation statistics; using the Gretl package, the ranges of their possible values and their limiting values were first determined experimentally as functions of the sample autoregression coefficient, and the results were then confirmed by model experiments using the Monte Carlo method. A drawback of the proposed modular statistics is that their dependence on the sample autoregression coefficient is not an even function. To address this, double modular autocorrelation criteria are proposed which, with the help of special techniques, can be used as linear constraints in mathematical programming problems for selecting informative regressors in regression models.
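The Durbin-Watson statistic described above can be sketched directly from its definition: the sum of squared differences of consecutive residuals divided by the sum of squared residuals. A minimal implementation (the function name and the illustrative AR(1) residual series are assumptions for exposition, not from the paper):

```python
import numpy as np

def durbin_watson(residuals):
    """Ratio of the sum of squared differences of consecutive
    residuals to the sum of squared residuals; values near 2
    indicate no first-order autocorrelation, values near 0
    indicate strong positive autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(0)
n = 500

# Independent (white-noise) residuals: statistic is close to 2.
white = rng.standard_normal(n)
dw_white = durbin_watson(white)

# AR(1) residuals with coefficient 0.9: strong positive first-order
# autocorrelation pushes the statistic well below 2, toward 2*(1 - 0.9).
rho = 0.9
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = rho * ar1[t - 1] + white[t]
dw_ar1 = durbin_watson(ar1)
```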

