A Zero-One Result for the Least Squares Estimator

Author(s): Donald W. K. Andrews
1985, Vol 1 (1), pp. 85-96

The least squares estimator for the linear regression model is shown to converge to the true parameter vector either with probability one or with probability zero. In the latter case, it either converges with probability one to a point not equal to the true parameter, or it diverges with probability one. These results hold under weak conditions on the dependent variable and the regressor variables; no additional conditions are placed on the errors. The dependent and regressor variables are assumed to be weakly dependent, in particular strong mixing. The regressors may be fixed or random and must exhibit a certain degree of independent variability; beyond this, no further assumptions on them are needed. The model considered allows the number of regressors to increase without bound as the sample size increases. The proof proceeds by extending Kolmogorov's 0-1 law for independent random variables to strong mixing random variables.
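For orientation, here is a minimal sketch of the result in standard notation, assuming the usual linear model y_t = x_t'β₀ + e_t; the symbols below (y_t, x_t, β₀, β̂_n, and the sigma-fields F) are notation chosen for this sketch, not taken from the paper.

% Minimal sketch, assuming the linear model y_t = x_t' \beta_0 + e_t
% and the least squares estimator
\[
  \hat{\beta}_n \;=\; \Bigl(\textstyle\sum_{t=1}^{n} x_t x_t'\Bigr)^{-1}
  \sum_{t=1}^{n} x_t y_t .
\]
% The zero-one trichotomy: under the paper's mixing and variability
% conditions, exactly one of the following holds:
\[
  \Pr\bigl(\hat{\beta}_n \to \beta_0\bigr) = 1, \qquad
  \Pr\bigl(\hat{\beta}_n \to \beta^{*}\bigr) = 1
  \ \text{for some fixed } \beta^{*} \neq \beta_0, \qquad
  \Pr\bigl(\hat{\beta}_n \text{ diverges}\bigr) = 1 .
\]
% Strong (alpha-) mixing means the dependence between past and future
% dies out: with \mathcal{F}_a^b the sigma-field generated by the
% observations from time a to b,
\[
  \alpha(m) \;=\; \sup_{t}\,
  \sup_{A \in \mathcal{F}_{-\infty}^{t},\; B \in \mathcal{F}_{t+m}^{\infty}}
  \bigl|\Pr(A \cap B) - \Pr(A)\Pr(B)\bigr|
  \;\longrightarrow\; 0 \quad \text{as } m \to \infty .
\]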

