Consistency of an adjusted least-squares estimator in a vector linear model with measurement errors

2013 ◽ Vol 64 (11) ◽ pp. 1739-1751
Author(s): I. O. Sen’ko

Author(s): Vladimir Ivanovich Denisov ◽ Anastasiia Yurievna Timofeeva

Functional errors-in-variables models do not fit the standard regression formulation because the input factors are unknown deterministic variables that, in practice, are observed with random errors. Such models are usually estimated with the help of additional information: the variance of the input factor error (the adjusted least squares estimator, developed specifically for estimating polynomial dependencies) or the ratio of the error variances (the total least squares estimator). These values are typically fixed by a priori assumptions. This paper attempts to weaken the model assumptions, namely to remove the need to specify the input factor error variance by estimating it from the same data used to recover the nonlinear model, i.e. without additional information. This is possible when the measurement errors are homogeneous: if the estimates of the unobservable input factor values are close to the true values, the errors should appear homoscedastic, whereas homoscedasticity breaks down as soon as the input factor of the nonlinear model contains errors. In this paper this is shown analytically for polynomial models. The proposed algorithm therefore selects the estimate of the input factor error variance that minimizes a test statistic for detecting heteroskedasticity. In computational experiments, the outputs of the algorithm were compared under different criteria for testing the hypothesis of homogeneous error variance. In addition, the approximation accuracy obtained with the resulting estimates was compared with that of the ordinary least squares estimator. The developed algorithm was found to provide a significant advantage in the residual sum of squares and can therefore be recommended for use in practice.
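As an illustration of the two building blocks named in this abstract, the following is a minimal Python sketch (not the authors' code): the classical adjusted least squares estimator for a quadratic errors-in-variables model at a given measurement-error variance, together with a Breusch-Pagan-type statistic of the kind the proposed algorithm minimizes. The quadratic specification, the simulated data, and all variable names are our own assumptions.

```python
import numpy as np

def als_quadratic(x, y, sigma2):
    """Adjusted least squares for y = b0 + b1*xi + b2*xi^2 when x = xi + delta,
    with Var(delta) = sigma2 treated as known.  The corrected polynomials t_k(x)
    are unbiased for xi^k under Gaussian measurement error."""
    t = [np.ones_like(x),
         x,
         x**2 - sigma2,
         x**3 - 3.0 * sigma2 * x,
         x**4 - 6.0 * sigma2 * x**2 + 3.0 * sigma2**2]
    # Corrected moment matrix and right-hand side of the normal equations
    Psi = np.array([[np.mean(t[j + k]) for k in range(3)] for j in range(3)])
    q = np.array([np.mean(t[j] * y) for j in range(3)])
    return np.linalg.solve(Psi, q)            # (b0, b1, b2)

def bp_statistic(z, resid):
    """Breusch-Pagan-type statistic n*R^2 from regressing squared residuals on z."""
    Z = np.column_stack([np.ones_like(z), z])
    u = resid**2
    coef, *_ = np.linalg.lstsq(Z, u, rcond=None)
    r2 = 1.0 - np.sum((u - Z @ coef)**2) / np.sum((u - u.mean())**2)
    return len(u) * r2

# Toy data: quadratic dependence, input factor observed with error variance 0.09
rng = np.random.default_rng(0)
xi = rng.uniform(-2.0, 2.0, 500)                       # true (unobserved) input
x = xi + rng.normal(0.0, 0.3, xi.size)                 # observed with error
y = 1.0 + 2.0 * xi - 1.5 * xi**2 + rng.normal(0.0, 0.2, xi.size)
print(als_quadratic(x, y, sigma2=0.09))                # close to (1.0, 2.0, -1.5)
```

The algorithm described in the abstract wraps an estimator of this kind in a one-dimensional search over candidate error variances, keeping the candidate whose residuals yield the smallest heteroskedasticity test statistic; the details of how the residuals are formed and which test is used are in the paper itself.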


2018 ◽ Vol 24 (5) ◽ pp. 1124-1150
Author(s): Chetan Dave ◽ James Feigenbaum

In a canonical monetary policy model in which the central bank learns about underlying fundamentals by estimating the parameters of a Phillips curve, we show that the bank’s loss function is asymmetric, so that parameter overestimates may be more or less costly than underestimates, creating a precautionary motive in estimation. This motive suggests the use of a more efficient, variance-adjusted least-squares estimator for learning about fundamentals. Informed by this “precautionary learning,” the central bank sets low inflation targets, and the economy can settle near a Ramsey equilibrium.
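For intuition only, the Python sketch below shows one textbook way an asymmetric loss over estimation errors produces a variance-adjusted least-squares estimate. The LINEX loss used here is our assumption, not necessarily the loss function derived in the paper: under LINEX loss exp(a·e) − a·e − 1 on the estimation error e and an approximately Gaussian sampling distribution for the OLS coefficients, the loss-minimizing point estimate is the OLS estimate shifted by a/2 times the coefficient's estimated variance. The toy Phillips-curve regression and its variable names are hypothetical.

```python
import numpy as np

def variance_adjusted_ls(X, y, a=1.0):
    """OLS coefficients shifted by a/2 times their estimated variances.
    With a > 0 the LINEX loss penalizes overestimates more heavily, so the
    adjusted estimate is pulled below the OLS point estimate."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y
    resid = y - X @ b_ols
    s2 = resid @ resid / (n - k)            # residual variance estimate
    V = s2 * XtX_inv                        # estimated covariance of b_ols
    return b_ols - 0.5 * a * np.diag(V)     # variance-adjusted estimate

# Hypothetical Phillips-curve-style regression: inflation on a constant and the output gap
rng = np.random.default_rng(1)
gap = rng.normal(size=200)
infl = 2.0 + 0.5 * gap + rng.normal(scale=0.8, size=200)
X = np.column_stack([np.ones_like(gap), gap])
print(variance_adjusted_ls(X, infl, a=1.0))
```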

