Comparison of Two Estimators of the Regression Coefficient Vector Under Pitman’s Closeness Criterion

Author(s):  
Jibo Wu

Schaffrin and Toutenburg [1] proposed a weighted mixed estimator based on the sample information and stochastic prior information, and they showed that the weighted mixed estimator is superior to the ordinary least squares estimator under the mean squared error criterion. However, no paper has yet compared the two estimators under Pitman’s closeness criterion. This paper compares the weighted mixed estimator and the ordinary least squares estimator using Pitman’s closeness criterion. A simulation study illustrates the performance of the two estimators under this criterion.
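A comparison of this kind can be sketched by Monte Carlo. The following is a hedged illustration, not the paper's exact design: the model is y = Xb + e with stochastic prior information r = Rb + v in the Schaffrin–Toutenburg style, and the weight w (set here to the prior precision), the dimensions, and the noise levels are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's design): estimate the
# Pitman closeness probability of the weighted mixed estimator (WME) over OLS.
rng = np.random.default_rng(0)
n, p = 50, 3
tau = 0.1                      # assumed prior noise standard deviation
w = 1.0 / tau**2               # weight chosen as the prior precision (an assumption)
X = rng.standard_normal((n, p))
R = np.eye(p)
beta = np.array([1.0, 2.0, -1.0])

reps, wins = 2000, 0
for _ in range(reps):
    y = X @ beta + rng.standard_normal(n)
    r = R @ beta + tau * rng.standard_normal(p)      # stochastic prior information
    ols = np.linalg.solve(X.T @ X, X.T @ y)
    wme = np.linalg.solve(X.T @ X + w * R.T @ R,     # weighted mixed estimator
                          X.T @ y + w * R.T @ r)
    # Pitman closeness compares the two quadratic losses draw by draw
    wins += np.sum((wme - beta) ** 2) < np.sum((ols - beta) ** 2)

pc = wins / reps   # WME is Pitman-closer to beta than OLS when pc exceeds 0.5
```

With an informative prior, the estimated closeness probability comes out well above one half, which is the kind of dominance the Pitman criterion formalizes.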

2014, Vol. 2014, pp. 1-6
Author(s):  
Jibo Wu

Batah et al. (2009) combined the unbiased ridge estimator and the principal components regression estimator and introduced the modified r-k class estimator. They also showed that the modified r-k class estimator is superior to the ordinary least squares estimator and the principal components regression estimator under the mean squared error matrix criterion. In this paper, we first give a new method to obtain the modified r-k class estimator; second, we discuss its properties in some detail, comparing the modified r-k class estimator to the ordinary least squares estimator and the principal components regression estimator under the Pitman closeness criterion. A numerical example and a simulation study are given to illustrate our findings.
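The two building blocks named above can be written down concretely. This is a hedged sketch of the generic textbook constructions, not Batah et al.'s exact modified estimator: the principal components regression (PCR) estimator keeps the r leading eigenvectors of X'X, and an r-k class estimator applies ridge shrinkage by k inside that retained subspace; the data, r, and k below are illustrative assumptions.

```python
import numpy as np

# Assumed illustrative data: four nearly collinear regressors
rng = np.random.default_rng(1)
n, p, r, k = 40, 4, 2, 0.5
z = rng.standard_normal((n, 1))
X = np.hstack([z + 0.05 * rng.standard_normal((n, 1)) for _ in range(p)])
beta = np.ones(p)
y = X @ beta + rng.standard_normal(n)

vals, vecs = np.linalg.eigh(X.T @ X)
order = np.argsort(vals)[::-1]
T_r = vecs[:, order[:r]]        # r leading eigenvectors of X'X
L_r = vals[order[:r]]           # corresponding eigenvalues

# PCR: least squares in the reduced eigen-space, mapped back to beta-space
pcr = T_r @ np.linalg.solve(T_r.T @ X.T @ X @ T_r, T_r.T @ X.T @ y)
# r-k class: same subspace, each eigenvalue shrunk by the ridge constant k
rk = T_r @ ((T_r.T @ X.T @ y) / (L_r + k))
```

Setting k = 0 in the r-k class formula recovers the PCR estimate, which makes the ridge constant's role as a pure shrinkage knob explicit.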


2002, Vol. 18 (5), pp. 1121-1138
Author(s):  
DONG WAN SHIN ◽  
MAN SUK OH

For regression models with general unstable regressors having characteristic roots on the unit circle and general stationary errors independent of the regressors, sufficient conditions are investigated under which the ordinary least squares estimator (OLSE) is asymptotically efficient in the sense that it has the same limiting distribution as the generalized least squares estimator (GLSE) under the same normalization. A key condition for the asymptotic efficiency of the OLSE is that one characteristic root of the regressor process has multiplicity strictly greater than the multiplicities of the other roots. Under this condition, the covariance matrix Γ of the errors and the regressor matrix X are shown to satisfy the relationship ΓX = XC + V for some matrix C and some V asymptotically dominated by X, which is analogous to the condition ΓX = XC (for some matrix C) for numerical equivalence of the OLSE and the GLSE.


1993, Vol. 9 (1), pp. 62-80
Author(s):  
Jan F. Kiviet ◽  
Garry D.A. Phillips

The small-sample bias of the least squares coefficient estimator is examined in the dynamic multiple linear regression model with normally distributed white-noise disturbances and an arbitrary number of regressors, all exogenous except for the one-period lagged dependent variable. We employ large-sample (T → ∞) and small-disturbance (σ → 0) asymptotic theory and derive and compare expressions to O(T−1) and to O(σ2), respectively, for the bias in the least squares coefficient vector. In some simulations and for an empirical example, we examine the mean (squared) error of these expressions and of corrected estimation procedures that yield estimates unbiased to O(T−1) and to O(σ2), respectively. The large-sample approach proves to be superior, easily applicable, and capable of generating more efficient and less biased estimators.
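The downward small-sample bias that these O(T−1) expressions approximate is easy to reproduce by simulation. The sketch below is an illustrative assumption, not the paper's experiment: a dynamic model y_t = ρ·y_{t−1} + β·x_t + e_t with one exogenous regressor, where the OLS estimate of the lag coefficient ρ is biased downward for small T.

```python
import numpy as np

# Assumed illustrative design: T, rho, beta, and the replication count are
# arbitrary choices to make the small-sample bias visible.
rng = np.random.default_rng(3)
T, rho, beta, reps = 30, 0.8, 1.0, 3000
est = np.empty(reps)
for i in range(reps):
    x = rng.standard_normal(T)
    e = rng.standard_normal(T)
    y = np.empty(T)
    y[0] = e[0]
    for t in range(1, T):
        y[t] = rho * y[t - 1] + beta * x[t] + e[t]
    Z = np.column_stack([y[:-1], x[1:]])             # lagged y plus exogenous x
    coef = np.linalg.lstsq(Z, y[1:], rcond=None)[0]  # OLS on the dynamic model
    est[i] = coef[0]                                 # estimate of rho

bias = est.mean() - rho   # negative in small samples: rho is underestimated
```

Bias-corrected procedures of the kind the abstract describes subtract an estimate of exactly this quantity, leaving the estimator unbiased to the stated order.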

