Estimators which are Uniformly Better than the James-Stein Estimator

1997 ◽  
Vol 47 (3-4) ◽  
pp. 167-180 ◽  
Author(s):  
Nabendu Pal ◽  
Jyh-Jiuan Lin

Assume i.i.d. observations are available from a p-dimensional multivariate normal distribution with an unknown mean vector μ and an unknown p.d. dispersion matrix Σ. Here we address the problem of mean estimation in a decision-theoretic setup. It is well known that the unbiased as well as the maximum likelihood estimator of μ is inadmissible when p ≥ 3 and is dominated by the famous James-Stein estimator (JSE). A few estimators better than the JSE have been reported in the literature, but in this paper we derive wide classes of estimators uniformly better than the JSE. We use some of these estimators for further risk study.
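As background to the dominance result this abstract builds on, a minimal simulation sketch (not from the paper) of the classic James-Stein estimator for a single observation X ~ N_p(μ, I_p); the dimension, seed, and mean vector are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's estimators): the classic
# James-Stein estimator for one observation X ~ N_p(mu, I_p),
#   JS(X) = (1 - (p - 2) / ||X||^2) * X,
# compared with the MLE (X itself) by simulated squared-error risk.
rng = np.random.default_rng(0)
p, trials = 10, 5000            # dominance requires p >= 3
mu = np.zeros(p)                # the risk gap is largest near the origin

X = rng.normal(mu, 1.0, size=(trials, p))
norm2 = np.sum(X**2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norm2) * X

risk_mle = np.mean(np.sum((X - mu) ** 2, axis=1))   # approximately p
risk_js = np.mean(np.sum((js - mu) ** 2, axis=1))
print(f"MLE risk ~ {risk_mle:.2f}, James-Stein risk ~ {risk_js:.2f}")
```

A well-known example of an estimator uniformly better than the JSE is its positive-part version, which truncates the shrinkage factor at zero; the paper derives wide classes of such improvements.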

Author(s):  
Hazim Mansour Gorgees ◽  
Bushra Abdualrasool Ali ◽  
Raghad Ibrahim Kathum

In this paper, the maximum likelihood estimator and the Bayes estimator of the reliability function for the negative exponential distribution have been derived; a Monte Carlo simulation technique was then employed to compare the performance of these estimators. The integral mean square error (IMSE) was used as the criterion for this comparison. The simulation results showed that the Bayes estimator performed better than the maximum likelihood estimator for different sample sizes.
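A sketch of this kind of comparison, under assumed details that may differ from the paper's: exponential data with rate λ, reliability R(t) = exp(-λt), the MLE R̂(t) = exp(-t/x̄), and a Bayes estimator under a conjugate Gamma(α, β) prior on λ with squared-error loss. The prior, parameter values, and t-grid are illustrative assumptions.

```python
import numpy as np

# Illustrative Monte Carlo sketch (assumed details, not the paper's
# exact setup): reliability R(t) = exp(-lam * t) for exponential data.
# MLE:   R_mle(t) = exp(-t / xbar), where xbar is the sample mean.
# Bayes: with a conjugate Gamma(alpha, beta) prior on the rate lam,
#        the posterior is Gamma(alpha + n, beta + S), S = sum(x), and
#        the posterior mean of exp(-lam * t) (squared-error loss) is
#        R_bayes(t) = ((beta + S) / (beta + S + t)) ** (alpha + n).
rng = np.random.default_rng(1)
lam_true, n, reps = 0.5, 10, 2000
alpha, beta = 2.0, 4.0            # prior mean alpha/beta matches lam_true
t_grid = np.linspace(0.1, 5.0, 50)
R_true = np.exp(-lam_true * t_grid)

se_mle = np.zeros_like(t_grid)
se_bayes = np.zeros_like(t_grid)
for _ in range(reps):
    x = rng.exponential(1.0 / lam_true, size=n)
    S = x.sum()
    se_mle += (np.exp(-t_grid * n / S) - R_true) ** 2
    se_bayes += (((beta + S) / (beta + S + t_grid)) ** (alpha + n)
                 - R_true) ** 2

imse_mle = np.mean(se_mle / reps)     # MSE averaged over the t grid
imse_bayes = np.mean(se_bayes / reps)
print(f"IMSE  MLE: {imse_mle:.4f}   Bayes: {imse_bayes:.4f}")
```

With a well-centered prior and a small sample, the Bayes estimator's IMSE comes out below the MLE's, matching the qualitative conclusion of the abstract.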


1970 ◽  
Vol 13 (3) ◽  
pp. 391-393 ◽  
Author(s):  
B. K. Kale

Lehmann [1] in his lecture notes on estimation shows that, for estimating the unknown mean of a normal distribution N(θ, 1), the usual estimator is neither minimax nor admissible if it is known that θ belongs to a finite closed interval [a, b] and the loss function is squared error. It is shown that the maximum likelihood estimator (MLE) of θ under this restriction has uniformly smaller mean squared error (MSE) than that of the usual estimator. It is natural to ask whether the MLE of θ in N(θ, 1) is admissible if it is known that θ ∊ [a, b]. The answer turns out to be negative, and the purpose of this note is to present this result in a slightly generalized form.
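In this setting the restricted MLE is simply the observation clipped to [a, b]; a small sketch (with illustrative values of a, b, and θ) showing why clipping never increases the squared error when θ ∈ [a, b], which gives the uniform MSE improvement over the usual estimator:

```python
import numpy as np

# Illustrative sketch: for X ~ N(theta, 1) with theta known to lie in
# [a, b], the restricted MLE is the projection clip(X, a, b).
# Projecting onto an interval that contains theta never increases the
# distance to theta, so the clipped estimator has pointwise (hence
# uniformly) smaller squared error than X itself.
rng = np.random.default_rng(2)
a, b, theta = -1.0, 1.0, 0.3      # illustrative choices, theta in [a, b]
x = rng.normal(theta, 1.0, size=10000)
x_clipped = np.clip(x, a, b)

se_raw = (x - theta) ** 2
se_clip = (x_clipped - theta) ** 2
print(f"MSE raw: {se_raw.mean():.4f}   clipped: {se_clip.mean():.4f}")
```

The point of the note is that even this clipped MLE, despite dominating the usual estimator, is itself inadmissible.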


2016 ◽  
Vol 30 (2) ◽  
pp. 141-152
Author(s):  
Xuan Leng ◽  
Jinsen Zhuang ◽  
Taizhong Hu

Let (X1, …, Xn) be a multivariate normal random vector with any mean vector, variances equal to 1 and covariances equal and positive. Turner and Whitehead [9] established that the largest order statistic max{X1, …, Xn} is less than the standard normal random variable in the dispersive order. In this paper, we give a new and straightforward proof for this result. Several possible extensions of this result are also discussed.
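The dispersive order compares quantile spreads: X is smaller than Y in the dispersive order when every interquantile distance of X is at most the matching distance of Y. A numerical illustration (not a proof) of the stated result, with n, the common correlation ρ, and the quantile levels chosen as illustrative assumptions:

```python
import numpy as np
from statistics import NormalDist

# Numerical illustration of the dispersive-order result: with
# X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i (equicorrelated standard
# normals, common positive correlation rho), every quantile spread of
# M = max(X_1, ..., X_n) is at most the matching spread of a standard
# normal random variable.
rng = np.random.default_rng(3)
n, rho, samples = 5, 0.5, 200_000
z0 = rng.normal(size=samples)
z = rng.normal(size=(samples, n))
M = (np.sqrt(rho) * z0[:, None] + np.sqrt(1 - rho) * z).max(axis=1)

nd = NormalDist()
for lo, hi in [(0.05, 0.95), (0.10, 0.90), (0.25, 0.75)]:
    spread_max = np.quantile(M, hi) - np.quantile(M, lo)
    spread_std = nd.inv_cdf(hi) - nd.inv_cdf(lo)
    print(f"({lo}, {hi}): max {spread_max:.3f} <= normal {spread_std:.3f}")
```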

