Minimaxity and Limits of Risks Ratios of Shrinkage Estimators of a Multivariate Normal Mean in the Bayesian Case

2020 ◽  
Vol 8 (2) ◽  
pp. 507-520
Author(s):  
Abdenour Hamdaoui ◽  
Abdelkader Benkhaled ◽  
Nadia Mezouar

In this article, we consider two forms of shrinkage estimators of a multivariate normal mean with unknown variance. We take the prior law to be a multivariate normal distribution and construct a Modified Bayes estimator and an Empirical Modified Bayes estimator. We are interested in studying the minimaxity and the behavior of the risk ratios of these estimators relative to the maximum likelihood estimator, when the dimension of the parameter space and the sample size tend to infinity.
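For orientation, here is the underlying conjugate calculation in its simplest known-variance special case (an illustration, not the Modified Bayes estimator constructed in the article): if X | θ ~ N_p(θ, σ² I_p) and the prior is θ ~ N_p(ν, τ² I_p), the posterior mean shrinks X toward the prior mean,

\[
\hat{\theta}_{\mathrm{Bayes}} = \mathbb{E}[\theta \mid X]
= X - \frac{\sigma^{2}}{\sigma^{2} + \tau^{2}}\,(X - \nu).
\]

When σ² is unknown, replacing it by a sample-based estimate gives modified and empirical versions of this rule, which is the spirit of the estimators studied here.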

Author(s):  
Abdenour Hamdaoui ◽  
Mekki Terbeche ◽  
Abdelkader Benkhaled

In this paper, we are interested in estimating a multivariate normal mean under the balanced loss function, using shrinkage estimators deduced from the maximum likelihood estimator (MLE). First, we consider a class of estimators containing the James-Stein estimator and show that any estimator in this class dominates the MLE and is consequently minimax. Second, we deal with shrinkage estimators that are not only minimax but also dominate the James-Stein estimator.
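For reference, the James-Stein estimator mentioned above has the familiar closed form, written here for a single observation X ~ N_p(θ, σ² I_p) with σ² known and ordinary quadratic loss (the paper itself works under the balanced loss function):

\[
\delta_{\mathrm{JS}}(X) = \left(1 - \frac{(p-2)\,\sigma^{2}}{\lVert X \rVert^{2}}\right) X, \qquad p \ge 3.
\]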


Author(s):  
Mekki Terbeche

In this paper we study the estimation of a multivariate normal mean under the balanced loss function. We present a class of shrinkage estimators which generalizes the James-Stein estimator, and we are interested in establishing the asymptotic behaviour of the risk ratios of these estimators to the maximum likelihood estimator (MLE). In the case where the dimension of the parameter space and the sample size are large, we determine sufficient conditions under which the estimators cited above are minimax.
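As a rough numerical companion to such risk-ratio statements, the following is a minimal Monte Carlo sketch under ordinary quadratic loss and assumed parameter values (not the balanced-loss asymptotics of the paper), comparing the empirical risk of the James-Stein estimator with that of the MLE as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def james_stein(x, sigma2=1.0):
    """Classical James-Stein shrinkage for one observation x ~ N_p(theta, sigma2 * I)."""
    p = x.size
    return (1.0 - (p - 2) * sigma2 / (x @ x)) * x

def risk_ratio(theta, n_rep=20000):
    """Monte Carlo estimate of R(JS, theta) / R(MLE, theta) under quadratic loss."""
    p = theta.size
    x = rng.normal(loc=theta, scale=1.0, size=(n_rep, p))
    js = np.apply_along_axis(james_stein, 1, x)
    risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
    risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))   # equals p in expectation
    return risk_js / risk_mle

for p in (5, 20, 100):
    theta = np.zeros(p)   # assumed true mean; shrinkage gains are largest at the origin
    print(f"p={p}: estimated risk ratio {risk_ratio(theta):.3f}")
```

At θ = 0 the ratio behaves like 2/p, illustrating how the gain over the MLE grows with the dimension.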


Author(s):  
Hazim Mansour Gorgees ◽  
Bushra Abdualrasool Ali ◽  
Raghad Ibrahim Kathum

In this paper, the maximum likelihood estimator and the Bayes estimator of the reliability function for the negative exponential distribution have been derived, and a Monte Carlo simulation technique was then employed to compare the performance of these estimators. The integral mean square error (IMSE) was used as the criterion for this comparison. The simulation results showed that the Bayes estimator performed better than the maximum likelihood estimator for different sample sizes.
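A minimal Monte Carlo sketch of this kind of comparison is given below. The gamma prior, its hyperparameters, the true rate, and the t-grid are illustrative assumptions (the abstract does not specify them); the Bayes rule used is the posterior mean of R(t) = exp(-λt) under a conjugate Gamma(a, b) prior and squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: true exponential rate, Gamma(a, b) prior (rate
# parameterization), squared-error loss, and a grid of t values for the IMSE.
lam_true, a, b = 2.0, 1.0, 1.0
t_grid = np.linspace(0.1, 3.0, 30)
R_true = np.exp(-lam_true * t_grid)        # true reliability R(t) = exp(-lambda * t)

def simulate_imse(n, n_rep=5000):
    """Approximate the integrated MSE of the MLE and Bayes estimators of R(t)."""
    se_mle = np.zeros_like(t_grid)
    se_bayes = np.zeros_like(t_grid)
    for _ in range(n_rep):
        x = rng.exponential(scale=1.0 / lam_true, size=n)
        S = x.sum()
        R_mle = np.exp(-(n / S) * t_grid)                  # plug-in MLE of R(t)
        R_bayes = ((b + S) / (b + S + t_grid)) ** (a + n)  # posterior mean of exp(-lambda*t)
        se_mle += (R_mle - R_true) ** 2
        se_bayes += (R_bayes - R_true) ** 2
    dt = t_grid[1] - t_grid[0]
    return se_mle.sum() * dt / n_rep, se_bayes.sum() * dt / n_rep

for n in (10, 30, 100):
    imse_mle, imse_bayes = simulate_imse(n)
    print(f"n={n}: IMSE(MLE)={imse_mle:.5f}  IMSE(Bayes)={imse_bayes:.5f}")
```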


2020 ◽  
Vol 28 (3) ◽  
pp. 183-196
Author(s):  
Kouacou Tanoh ◽  
Modeste N’zi ◽  
Armel Fabrice Yodé

We are interested in bounds on the large deviations probability and Berry–Esseen type inequalities for the maximum likelihood estimator and the Bayes estimator of the parameter appearing linearly in the drift of a nonhomogeneous stochastic differential equation driven by fractional Brownian motion.


1997 ◽  
Vol 47 (3-4) ◽  
pp. 167-180 ◽  
Author(s):  
Nabendu Pal ◽  
Jyh-Jiuan Lin

Assume i.i.d. observations are available from a p-dimensional multivariate normal distribution with an unknown mean vector μ and an unknown positive definite dispersion matrix Σ. Here we address the problem of mean estimation in a decision-theoretic setup. It is well known that the unbiased as well as the maximum likelihood estimator of μ is inadmissible when p ≥ 3 and is dominated by the famous James-Stein estimator (JSE). A few estimators better than the JSE have been reported in the literature, but in this paper we derive wide classes of estimators uniformly better than the JSE. We use some of these estimators for further risk study.
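One classical example of an estimator that is uniformly better than the JSE, though not one of the wide classes derived in this paper, is the positive-part James-Stein estimator, written here in the simplest known-dispersion case:

\[
\delta_{\mathrm{JS}}^{+}(X) = \max\!\left(0,\; 1 - \frac{(p-2)\,\sigma^{2}}{\lVert X \rVert^{2}}\right) X.
\]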


Fractals ◽  
2015 ◽  
Vol 23 (04) ◽  
pp. 1550045 ◽  
Author(s):  
YEN-CHING CHANG

The efficiency and accuracy of estimating the Hurst exponent are two unavoidable considerations. Recently, an efficient implementation of the maximum likelihood estimator (MLE) for the Hurst exponent (simply called the fast MLE) was proposed, based on a combination of the Levinson algorithm and Cholesky decomposition; the fast MLE also covers all four possible cases: known mean, unknown mean, known variance, and unknown variance. In this paper, the four cases of an approximate MLE (AMLE) are obtained based on two approximations, one of the logarithmic determinant and one of the inverse of the covariance matrix. The computational cost of the AMLE is much lower than that of the MLE, but a little higher than that of the fast MLE. To raise the computational efficiency of the proposed AMLE, the required power spectral density (PSD) is calculated indirectly by interpolating two suitable PSDs chosen from a set of established PSDs. Experimental results show that the AMLE through interpolation (simply called the interpolating AMLE) speeds up computation: it is on average over 24 times faster than the fast MLE while keeping the accuracy very close to that of the MLE or the fast MLE.
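For concreteness, below is a minimal, brute-force O(n³) reference implementation of the exact MLE for the Hurst exponent of fractional Gaussian noise (zero mean assumed, variance profiled out). It is not the Levinson/Cholesky-based fast MLE or the interpolating AMLE discussed above, and the function names are illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, toeplitz
from scipy.optimize import minimize_scalar

def fgn_acf(H, n):
    """Autocorrelation of fractional Gaussian noise at lags 0..n-1."""
    k = np.arange(n)
    return 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H) + np.abs(k - 1) ** (2 * H))

def neg_profile_loglik(H, x):
    """Exact Gaussian negative profile log-likelihood (zero mean, variance profiled out)."""
    n = x.size
    R = toeplitz(fgn_acf(H, n))            # correlation matrix of the sample
    L = cholesky(R, lower=True)            # O(n^3) step the fast MLE is designed to avoid
    z = solve_triangular(L, x, lower=True)
    sigma2_hat = (z @ z) / n               # profiled-out variance
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (n * np.log(sigma2_hat) + logdet + n)

def hurst_mle(x):
    """Maximize the profile likelihood over H in (0, 1)."""
    res = minimize_scalar(neg_profile_loglik, args=(x,), bounds=(0.01, 0.99), method="bounded")
    return res.x

# usage: estimate H from a synthetic fGn sample generated with the same covariance model
rng = np.random.default_rng(2)
n, H_true = 512, 0.7
L = cholesky(toeplitz(fgn_acf(H_true, n)), lower=True)
x = L @ rng.standard_normal(n)
print(hurst_mle(x))
```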


2009 ◽  
Vol 25 (3) ◽  
pp. 793-805 ◽  
Author(s):  
Laura Chioda ◽  
Michael Jansson

This paper studies the asymptotic behavior of a Gaussian linear instrumental variables model in which the number of instruments diverges with the sample size. Asymptotic efficiency bounds are obtained for rotation invariant inference procedures and are shown to be attainable by procedures based on the limited information maximum likelihood estimator. The bounds are obtained by characterizing the limiting experiment associated with the model induced by the rotation invariance restriction.
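As a concrete reference point, here is a minimal sketch of the limited information maximum likelihood (LIML) estimator in its textbook k-class form, for the simple model y = Yβ + u with instrument matrix Z and no included exogenous regressors. The variable names and the toy design are assumptions; the paper's rotation-invariant efficiency analysis is not reproduced here.

```python
import numpy as np

def liml(y, Y, Z):
    """k-class LIML estimator for y = Y @ beta + u with instrument matrix Z.

    Simplest case: no included exogenous regressors (add a constant column to Z
    yourself if needed). kappa is the smallest eigenvalue of (W'M_Z W)^{-1} (W'W).
    """
    n = y.shape[0]
    W = np.column_stack([y, Y])
    P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection onto the instrument space
    M_Z = np.eye(n) - P_Z
    A = W.T @ M_Z @ W
    B = W.T @ W
    kappa = np.min(np.real(np.linalg.eigvals(np.linalg.solve(A, B))))
    G = Y.T @ (np.eye(n) - kappa * M_Z)
    return np.linalg.solve(G @ Y, G @ y)

# usage: one endogenous regressor, many instruments (assumed toy design)
rng = np.random.default_rng(3)
n, m = 500, 20
Z = rng.standard_normal((n, m))
v = rng.standard_normal(n)
u = 0.6 * v + rng.standard_normal(n)               # endogeneity through v
Y = Z @ np.full(m, 0.2) + v
y = 1.5 * Y + u
print(liml(y, Y[:, None], Z))                      # should be close to 1.5
```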

