Inadmissibility of the Maximum Likelihood Estimator in the Presence of Prior Information

1970 ◽  
Vol 13 (3) ◽  
pp. 391-393 ◽  
Author(s):  
B. K. Kale

Lehmann [1], in his lecture notes on estimation, shows that for estimating the unknown mean of a normal distribution N(θ, 1), the usual estimator is neither minimax nor admissible if it is known that θ belongs to a finite closed interval [a, b] and the loss function is squared error. It is shown that the maximum likelihood estimator (MLE) of θ has uniformly smaller mean squared error (MSE) than the usual estimator. It is natural to ask whether the MLE of θ in N(θ, 1) is admissible if it is known that θ ∊ [a, b]. The answer turns out to be negative, and the purpose of this note is to present this result in a slightly generalized form.
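The restricted MLE here is just the unrestricted estimator projected onto [a, b]. A minimal simulation sketch (one N(θ, 1) observation per replication; the bounds, the true θ, and the replication count are illustrative choices, not from the note) makes the MSE comparison concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = -1.0, 1.0              # known interval containing theta
theta = 0.8                   # true mean, inside [a, b]
x = rng.normal(theta, 1.0, size=100_000)   # one N(theta, 1) draw per replication

mle = np.clip(x, a, b)        # restricted MLE: project each draw onto [a, b]
mse_unrestricted = np.mean((x - theta) ** 2)   # about 1, the variance
mse_mle = np.mean((mle - theta) ** 2)          # strictly smaller
```

Clipping can only move an estimate closer to a θ that lies in [a, b], which is why the restricted MLE dominates the unrestricted estimator; the note's point is that even this dominating estimator is itself inadmissible.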

2003 ◽  
Vol 54 (1-2) ◽  
pp. 17-30 ◽  
Author(s):  
Huizhen Guo ◽  
Nabendu Pal

This paper deals with estimation of θ when iid (independent and identically distributed) observations are available from a N(θ, cθ²) distribution where c > 0 is assumed to be known. Using the equivariance principle under the group of scale and direction transformations we first characterize the class of equivariant estimators of θ. We then investigate a few equivariant estimators, including the maximum likelihood estimator, in terms of standardized bias and standardized mean squared error.


1997 ◽  
Vol 47 (3-4) ◽  
pp. 167-180 ◽  
Author(s):  
Nabendu Pal ◽  
Jyh-Jiuan Lin

Assume i.i.d. observations are available from a p-dimensional multivariate normal distribution with an unknown mean vector μ and an unknown positive definite dispersion matrix Σ. Here we address the problem of mean estimation in a decision-theoretic setup. It is well known that the unbiased as well as the maximum likelihood estimator of μ is inadmissible when p ≥ 3 and is dominated by the famous James-Stein estimator (JSE). A few estimators better than the JSE have been reported in the literature, but in this paper we derive wide classes of estimators uniformly better than the JSE. We use some of these estimators for further risk study.
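As an illustrative sketch of the dominance claim (under the simplifying assumption of a known identity covariance and a single observation X ~ N_p(μ, I), rather than the unknown Σ treated in the paper), the James-Stein estimator and its risk advantage over the MLE can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 10                                     # dimension (p >= 3 required)
mu = np.full(p, 0.5)                       # illustrative true mean vector
reps = 20_000
x = rng.normal(mu, 1.0, size=(reps, p))    # X ~ N_p(mu, I), one row per replication

norm2 = np.sum(x ** 2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norm2) * x           # James-Stein shrinkage toward the origin

risk_mle = np.mean(np.sum((x - mu) ** 2, axis=1))   # about p
risk_js = np.mean(np.sum((js - mu) ** 2, axis=1))   # uniformly smaller for p >= 3
```

The shrinkage factor 1 − (p − 2)/‖X‖² pulls the observation toward the origin, trading a little bias for a large variance reduction.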


2014 ◽  
Vol 14 (07) ◽  
pp. 1450026 ◽  
Author(s):  
Mahdi Teimouri ◽  
Saralees Nadarajah

Teimouri and Nadarajah [Statist. Methodol. 13 (2013) 12–24] considered bias corrected maximum likelihood estimation of the Weibull distribution based on upper record values. Here, we propose an estimator for the Weibull shape parameter based on consecutive upper records. It is shown by simulations that the proposed estimator has less bias and less mean squared error than an estimator due to Soliman et al. [Comput. Statist. Data Anal. 51 (2006) 2065–2077] based on all upper records. Also, the proposed estimator can be considered a good competitor for the maximum likelihood estimator of the shape parameter based on complete data. This is demonstrated by simulations and a real dataset.


2013 ◽  
Vol 10 (2) ◽  
pp. 480-488 ◽  
Author(s):  
Baghdad Science Journal

In this paper, Bayes estimators of the parameter of the Maxwell distribution have been derived along with the maximum likelihood estimator. The non-informative priors, Jeffreys and the extension of Jeffreys prior, have been considered under two different loss functions, the squared error loss function and the modified squared error loss function, for comparison purposes. A simulation study has been carried out in order to gain insight into performance on small, moderate and large samples. The performance of these estimators has been explored numerically under different conditions. The efficiency of the estimators was compared according to the mean squared error (MSE). The results of the MSE comparison show that the efficiency of the Bayes estimators of the shape parameter of the Maxwell distribution decreases as the Jeffreys prior constant increases. The results also show that the values of the Bayes estimators are very close to the maximum likelihood estimator when the Jeffreys prior constants are small, and identical in certain cases. Comparison with respect to loss functions shows that Bayes estimators under the modified squared error loss function have greater MSE than under the squared error loss function, especially as r increases.


Author(s):  
Hazim Mansour Gorgees ◽  
Bushra Abdualrasool Ali ◽  
Raghad Ibrahim Kathum

In this paper, the maximum likelihood estimator and the Bayes estimator of the reliability function for the negative exponential distribution have been derived; a Monte Carlo simulation technique was then employed to compare the performance of these estimators. The integrated mean square error (IMSE) was used as the criterion for this comparison. The simulation results show that the Bayes estimator performed better than the maximum likelihood estimator for different sample sizes.
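The two estimators being compared can be sketched as follows; the Gamma(a, b) prior on the exponential rate and the sample values are illustrative assumptions (the paper's exact prior is not stated in the abstract):

```python
import math

def mle_reliability(data, t):
    """Plug-in MLE of R(t) = exp(-lambda * t): substitute lambda_hat = n / sum(x)."""
    lam_hat = len(data) / sum(data)
    return math.exp(-lam_hat * t)

def bayes_reliability(data, t, a=1.0, b=1.0):
    """Posterior mean of R(t) under a Gamma(a, b) prior on the rate:
    lambda | data ~ Gamma(a + n, b + S), so E[exp(-lambda*t)] = ((b+S)/(b+S+t))**(a+n)."""
    n, s = len(data), sum(data)
    return ((b + s) / (b + s + t)) ** (a + n)

data = [0.5, 1.2, 0.8, 2.1, 0.3, 1.7]   # illustrative exponential lifetimes
r_mle = mle_reliability(data, t=1.0)
r_bayes = bayes_reliability(data, t=1.0)
```

Integrating the squared deviation from the true R(t) over a grid of t values would give the IMSE criterion used in the paper.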


Author(s):  
Nadia Hashim Al-Noor ◽  
Shurooq A.K. Al-Sultany

In real situations, observations and measurements are often not exact numbers but more or less imprecise, also called fuzzy. So, in this paper, we use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and the Expectation-Maximization techniques. In addition, the obtained estimates of the parameters and reliability function are compared numerically through a Monte Carlo simulation study in terms of their mean squared error and integrated mean squared error values, respectively.
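For crisp (non-fuzzy) data, the Newton-Raphson step mentioned above can be sketched for the shape parameter of a unit-scale inverse Weibull, F(x) = exp(−x^(−β)); the unit scale, the starting value, and the simulated sample are simplifying assumptions, and the paper's fuzzy-data likelihood is not reproduced here:

```python
import numpy as np

def inv_weibull_sample(beta, n, rng):
    """Invert F(x) = exp(-x**(-beta)) at uniform draws to sample the distribution."""
    u = rng.uniform(size=n)
    return (-np.log(u)) ** (-1.0 / beta)

def mle_shape_newton(x, beta0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson for the shape of a unit-scale inverse Weibull.
    Log-likelihood: l(beta) = n*log(beta) - (beta+1)*sum(log x) - sum(x**-beta)."""
    n, logx = len(x), np.log(x)
    beta = beta0
    for _ in range(max_iter):
        xb = x ** (-beta)
        score = n / beta - logx.sum() + (xb * logx).sum()    # l'(beta)
        hess = -n / beta ** 2 - (xb * logx ** 2).sum()       # l''(beta), always < 0
        step = score / hess
        beta = max(beta - step, 1e-8)   # Newton update, kept positive
        if abs(step) < tol:
            break
    return beta

rng = np.random.default_rng(3)
x = inv_weibull_sample(2.0, 5000, rng)
beta_hat = mle_shape_newton(x)   # close to the true shape 2.0
```

Because the log-likelihood is concave in β here, the Newton iteration converges reliably from a modest starting value.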


2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center and a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
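For context, the classical (non-robust) estimator whose outlier sensitivity motivates the paper can be sketched as a grid-search maximizer of the usual normal profile log-likelihood; this is not the authors' robust proposal, and the grid and sample are illustrative:

```python
import numpy as np

def yeo_johnson(x, lam):
    """Yeo-Johnson transform of a 1-D array."""
    y = np.empty_like(x, dtype=float)
    pos, neg = x >= 0, x < 0
    if abs(lam) > 1e-12:
        y[pos] = ((x[pos] + 1.0) ** lam - 1.0) / lam
    else:
        y[pos] = np.log1p(x[pos])
    if abs(lam - 2.0) > 1e-12:
        y[neg] = -(((1.0 - x[neg]) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
    else:
        y[neg] = -np.log1p(-x[neg])
    return y

def fit_lambda(x, grid=np.linspace(-2, 4, 121)):
    """Classical ML fit of lambda: maximize the normal profile log-likelihood
    -n/2 * log(var(y)) + (lam - 1) * sum(sign(x) * log(1 + |x|)) over a grid."""
    n = len(x)
    log_jac = np.sum(np.sign(x) * np.log1p(np.abs(x)))
    llf = [-0.5 * n * np.log(np.var(yeo_johnson(x, lam))) + (lam - 1.0) * log_jac
           for lam in grid]
    return grid[int(np.argmax(llf))]

def skewness(v):
    z = (v - v.mean()) / v.std()
    return float(np.mean(z ** 3))

rng = np.random.default_rng(2)
x = rng.lognormal(size=500)                    # right-skewed data
lam_clean = fit_lambda(x)                      # transform makes x far less skewed
lam_outlier = fit_lambda(np.append(x, 1e6))    # refit with one extreme point appended
```

Refitting after appending a single extreme value illustrates the sensitivity the paper addresses: the classical MLE can let one outlier shift the transformation parameter chosen for the whole sample.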

