A minimum asymptotic mean squared error controller for an IMA(1,1) noise process with a starting offset, and its resetting design

2011 ◽  
Vol 40 (3) ◽  
pp. 313-323
Author(s):  
Changsoon Park


1983 ◽  
Vol 32 (1-2) ◽  
pp. 47-56 ◽  
Author(s):  
S. K. Srivastava ◽  
H. S. Jhajj

For estimating the mean of a finite population, Srivastava and Jhajj (1981) defined a broad class of estimators that use information on the sample mean as well as the sample variance of an auxiliary variable. In this paper we extend this class of estimators to the case where such information is available on p (> 1) auxiliary variables. The estimators of the class involve unknown constants whose optimum values depend on unknown population parameters. When these population parameters are replaced by their consistent estimates, the resulting estimators are shown to have the same asymptotic mean squared error. An expression is obtained for the amount by which the mean squared error of such estimators is smaller than that of estimators which use only the population means of the auxiliary variables.
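As a rough illustration of the estimator class described above, the following Python sketch implements one simple member for a single auxiliary variable, with the function g taken to be linear in the ratios of the sample to population mean and variance. The function name, the linear form of g, and the constants a1, a2 are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch (not the authors' exact estimator) of one member of a
# Srivastava-Jhajj-type class for a single auxiliary variable x: the estimate
# multiplies the sample mean of y by g(u, v), where u = x_bar / X_mean and
# v = s_x^2 / S2_x, using the simple linear form
# g(u, v) = 1 + a1*(u - 1) + a2*(v - 1) with illustrative constants a1, a2.
import numpy as np

def sj_class_estimator(y, x, X_mean, S2_x, a1=-0.5, a2=-0.1):
    """Estimate the finite-population mean of y from a simple random sample,
    using the known population mean X_mean and variance S2_x of x.
    a1, a2 stand in for the unknown constants whose optimum values depend on
    population parameters; in practice they would be replaced by consistent
    sample estimates."""
    y_bar = y.mean()
    u = x.mean() / X_mean            # ratio of sample to population mean of x
    v = x.var(ddof=1) / S2_x         # ratio of sample to population variance of x
    return y_bar * (1.0 + a1 * (u - 1.0) + a2 * (v - 1.0))

# Illustrative use on simulated data where x is correlated with y.
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=3.0, size=200)   # population mean 6, variance 18
y = 5.0 + 0.8 * x + rng.normal(scale=1.0, size=200)
print(sj_class_estimator(y, x, X_mean=6.0, S2_x=18.0))
```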


Author(s):  
Yulia Kotlyarova ◽  
Marcia M. A. Schafgans ◽  
Victoria Zinde-Walsh

In this paper we summarize results on convergence rates of various kernel-based nonparametric and semiparametric estimators, focusing on the impact of insufficient distributional smoothness, possibly unknown smoothness, and even non-existence of a density. Methods of safeguarding against a possible lack of smoothness and uncertainty about the degree of smoothness are surveyed, with emphasis on nonconvex model averaging. This approach can be implemented via a combined estimator that selects weights by minimizing the asymptotic mean squared error. We argue that, in order to evaluate the finite-sample performance of these and similar estimators, it is important to account for a possible lack of smoothness.
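The combined-estimator idea can be sketched as follows: given several candidate estimates (e.g. kernel estimators computed with different bandwidths) and an estimated mean-squared-error matrix for their errors, choose the weights that minimize the resulting quadratic form subject to the weights summing to one, without restricting them to be convex. The closed-form weights and the made-up MSE matrix below are illustrative assumptions, not the authors' exact procedure.

```python
# Combine candidate estimates with weights w minimizing w' M w subject to
# sum(w) = 1, where M is an estimated MSE matrix of the candidates' errors.
import numpy as np

def combining_weights(mse_matrix):
    """Weights minimizing w' M w with sum(w) = 1; weights may be negative,
    i.e. the combination is not restricted to a convex one."""
    M_inv_1 = np.linalg.solve(mse_matrix, np.ones(mse_matrix.shape[0]))
    return M_inv_1 / M_inv_1.sum()

def combined_estimate(candidates, mse_matrix):
    w = combining_weights(mse_matrix)
    return w @ np.asarray(candidates), w

# Illustrative use: three candidate estimates with a guessed error MSE matrix.
theta_hats = [0.52, 0.47, 0.55]
M = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.02, 0.01],
              [0.00, 0.01, 0.05]])
est, w = combined_estimate(theta_hats, M)
print(w, est)
```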


1983 ◽  
Vol 32 (3-4) ◽  
pp. 135-142 ◽  
Author(s):  
D. Ray

The first- and second-order stationarity conditions for an autoregressive model with random coefficients are obtained. In addition, for such a model, the asymptotic mean squared error of an h-step-ahead forecast is also considered.
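A minimal sketch of such a random coefficient AR(1) model and its h-step-ahead point forecast is given below; the parameter values, the Gaussian specification of the random coefficient, and the simulation set-up are illustrative assumptions rather than the paper's exact model.

```python
# Random-coefficient AR(1): y_t = (phi + b_t) * y_{t-1} + e_t, with
# b_t ~ N(0, sigma_b^2) and e_t ~ N(0, sigma_e^2). Second-order stationarity
# requires phi**2 + sigma_b**2 < 1; the h-step-ahead point forecast from y_T
# is phi**h * y_T.
import numpy as np

def simulate_rca1(n, phi, sigma_b, sigma_e, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = (phi + rng.normal(0, sigma_b)) * y[t - 1] + rng.normal(0, sigma_e)
    return y

def h_step_forecast(y_T, phi, h):
    """Conditional mean forecast E[y_{T+h} | y_T] = phi**h * y_T."""
    return phi ** h * y_T

phi, sigma_b, sigma_e = 0.6, 0.3, 1.0     # phi^2 + sigma_b^2 = 0.45 < 1
y = simulate_rca1(500, phi, sigma_b, sigma_e)
print(h_step_forecast(y[-1], phi, h=3))
```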


Biometrika ◽  
2019 ◽  
Vol 106 (3) ◽  
pp. 665-682
Author(s):  
K Alhorn ◽  
K Schorning ◽  
H Dette

We consider the problem of designing experiments for estimating a target parameter in regression analysis when there is uncertainty about the parametric form of the regression function. A new optimality criterion is proposed that chooses the experimental design to minimize the asymptotic mean squared error of the frequentist model averaging estimate. Necessary conditions for the optimal solution of locally optimal and Bayesian optimal design problems are established. The results are illustrated in several examples, and it is demonstrated that Bayesian optimal designs can reduce the mean squared error of the model averaging estimator by up to 45%.
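For concreteness, the sketch below computes a frequentist model averaging estimate of a target quantity (the predicted mean response at a reference dose) from two candidate regression models fitted on a candidate design, using smooth-AIC weights. The candidate models, the weighting scheme, and the target are illustrative assumptions; the paper's criterion concerns choosing the design points so as to minimize the asymptotic mean squared error of such an averaged estimate.

```python
# Frequentist model averaging of a target quantity over two candidate
# regression models (linear vs. quadratic in dose), with smooth-AIC weights.
import numpy as np

def fit_ols(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return beta, resid @ resid          # coefficients and residual sum of squares

def aic(rss, n, k):
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
dose = np.repeat([0.0, 0.5, 1.0, 1.5, 2.0], 10)      # a candidate design
y = 1.0 + 0.8 * dose - 0.2 * dose**2 + rng.normal(0, 0.3, dose.size)

X1 = np.column_stack([np.ones_like(dose), dose])             # model 1: linear
X2 = np.column_stack([np.ones_like(dose), dose, dose**2])    # model 2: quadratic
(b1, rss1), (b2, rss2) = fit_ols(X1, y), fit_ols(X2, y)

a = np.array([aic(rss1, dose.size, 2), aic(rss2, dose.size, 3)])
w = np.exp(-0.5 * (a - a.min()))
w /= w.sum()                                                  # smooth-AIC weights

x0 = 1.2                                   # target: mean response at dose 1.2
targets = np.array([b1 @ [1, x0], b2 @ [1, x0, x0**2]])
print(w, w @ targets)                      # weights and averaged target estimate
```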


2009 ◽  
Vol 25 (6) ◽  
pp. 1498-1514 ◽  
Author(s):  
Bruce E. Hansen

This paper investigates selection and averaging of linear regressions with a possible structural break. Our main contribution is the construction of a Mallows criterion for the structural break model. We show that the correct penalty term is nonstandard and depends on unknown parameters, but it can be approximated by an average of limiting cases to yield a feasible penalty with good performance. Following Hansen (2007, Econometrica 75, 1175–1189), we recommend averaging the structural break estimates with the no-break estimates, where the weight is selected to minimize the Mallows criterion. This estimator is simple to compute, as the weights are a simple function of the ratio of the penalty to the Andrews SupF test statistic. To assess performance we focus on asymptotic mean-squared error (AMSE) in a local asymptotic framework. We show that the AMSE of the estimators depends exclusively on the parameter variation function. Numerical comparisons show that the unrestricted least-squares and pretest estimators have very large AMSE for certain regions of the parameter space, whereas our averaging estimator has AMSE close to the infeasible optimum.
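The averaging step can be sketched as follows: combine the no-break and break-model coefficient estimates with a weight on the no-break estimate taken as a simple function of the ratio of the penalty to the SupF statistic. The specific weight formula w = min(1, penalty / supF) used below is an illustrative stand-in, not the paper's exact feasible penalty.

```python
# Shrink the break-model estimate toward the no-break estimate when the
# evidence for a break (supF) is weak relative to a Mallows-type penalty.
import numpy as np

def averaged_estimate(beta_nobreak, beta_break, supF, penalty):
    """Weighted average of the no-break and break-model coefficient estimates,
    with weight w = min(1, penalty / supF) on the no-break estimate."""
    w = min(1.0, penalty / supF) if supF > 0 else 1.0
    return w * np.asarray(beta_nobreak) + (1.0 - w) * np.asarray(beta_break)

# Illustrative use with made-up estimates and statistics.
print(averaged_estimate([1.0, 0.5], [1.3, 0.2], supF=12.4, penalty=8.0))
```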

