Comparison of Methods for Estimating Mixed Stock Fishery Composition

1990 ◽  
Vol 47 (11) ◽  
pp. 2235-2241 ◽  
Author(s):  
R. B. Millar

Given information on fish of known origin, and a random sample from the mixed stock fishery, the composition of that mixed fishery may be estimated in a number of ways. This study compares the performance of four classification-based estimators and a maximum likelihood estimator. Theoretical considerations show that the maximum likelihood estimator makes better use of the information contained in the mixed fishery sample. However, the classification estimators are shown to be more robust to violations in some of the model assumptions. Scale data from four regional stock groups of chinook salmon (Oncorhynchus tshawytscha) were used in an applied comparison of the five estimators. The results suggest that the maximum likelihood estimator performs best in practice.
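The contrast between classification-based and maximum likelihood estimators of mixture composition can be sketched in a toy setting (not the paper's data or model): two hypothetical stocks with a single Gaussian feature per fish, whose stock-specific parameters stand in for the known-origin baseline. The classification estimator assigns each fish to its most likely stock and counts; the MLE of the proportions is computed here with EM iterations over the mixture likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stock baseline: one Gaussian feature per fish with
# known stock-specific means and standard deviations.
mu, sd = np.array([0.0, 2.0]), np.array([1.0, 1.0])
true_p = np.array([0.3, 0.7])          # true mixed-fishery composition

# Simulated mixed-fishery sample
z = rng.choice(2, size=2000, p=true_p)
x = rng.normal(mu[z], sd[z])

def density(x):
    # n x 2 matrix of stock-conditional normal densities
    return np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

f = density(x)

# Classification estimator: assign each fish to its most likely stock,
# then take the class frequencies (biased when stocks overlap).
p_class = np.bincount(f.argmax(axis=1), minlength=2) / len(x)

# Maximum likelihood estimator of the proportions via EM iterations.
p = np.array([0.5, 0.5])
for _ in range(200):
    w = f * p                          # E-step: posterior stock membership
    w /= w.sum(axis=1, keepdims=True)
    p = w.mean(axis=0)                 # M-step: update mixture proportions
```

With overlapping stocks the classification estimator is pulled toward equal proportions by misclassification, while the EM-based MLE stays close to the true composition; this mirrors the theoretical point that the MLE makes fuller use of the mixed-sample information.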

2003 ◽  
Vol 2003 (34) ◽  
pp. 2147-2156 ◽  
Author(s):  
Rasul A. Khan

Let X_1, X_2, …, X_n be a random sample from a normal N(θ, σ^2) distribution with an unknown mean θ = 0, ±1, ±2, …. Hammersley (1950) proposed the maximum likelihood estimator (MLE) d = [X̄_n], the nearest integer to the sample mean, as an unbiased estimator of θ and extended the Cramér–Rao inequality. The Hammersley lower bound for the variance of any unbiased estimator of θ is significantly improved, and the asymptotic (as n → ∞) limit of the Fraser–Guttman–Bhattacharyya bounds is also determined. A limiting property of a suitable distance is used to give some plausible explanations of why such bounds cannot be attained. An almost uniformly minimum variance unbiased (UMVU)-like property of d is exhibited.
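A quick numerical check of Hammersley's estimator (with made-up values of θ, σ, and n): over integer candidates, the normal log-likelihood depends on θ only through -(X̄_n - θ)^2, so it is maximized by the integer nearest the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n = 3, 2.0, 25           # illustrative true values
x = rng.normal(theta, sigma, size=n)
xbar = x.mean()

# Log-likelihood over integer candidates, up to an additive constant:
# it depends on theta only through -(xbar - theta)^2, so the maximizer
# is the integer nearest the sample mean.
candidates = np.arange(-10, 11)
loglik = -n * (xbar - candidates) ** 2 / (2 * sigma**2)
d = candidates[loglik.argmax()]
```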


Author(s):  
Hazim Mansour Gorgees ◽  
Bushra Abdualrasool Ali ◽  
Raghad Ibrahim Kathum

In this paper, the maximum likelihood estimator and the Bayes estimator of the reliability function for the negative exponential distribution were derived, and a Monte Carlo simulation technique was employed to compare the performance of these estimators. The integral mean square error (IMSE) was used as the criterion for this comparison. The simulation results showed that the Bayes estimator performed better than the maximum likelihood estimator across the different sample sizes considered.
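A minimal sketch of such a comparison, under an assumed gamma prior on the exponential rate (the prior, sample size, and time grid are illustrative choices, not the paper's): the MLE plugs λ̂ = n/Σx_i into R(t) = e^(-λt), while the Bayes estimator under squared-error loss is the posterior mean of e^(-λt), which has a closed form; the IMSE is approximated by averaging the squared error over a time grid:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 1.0, 10, 2000
a, b = 2.0, 2.0                        # gamma prior on the rate (illustrative)
t = np.linspace(0.1, 3.0, 30)          # time grid for the IMSE approximation
R_true = np.exp(-lam * t)

imse_mle = imse_bayes = 0.0
for _ in range(reps):
    x = rng.exponential(1 / lam, size=n)
    s = x.sum()
    R_mle = np.exp(-t * n / s)         # plug-in MLE: lambda_hat = n / s
    # Posterior of the rate is Gamma(a + n, rate b + s); the posterior
    # mean of exp(-lambda * t) is ((b + s) / (b + s + t)) ** (a + n).
    R_bayes = ((b + s) / (b + s + t)) ** (a + n)
    imse_mle += ((R_mle - R_true) ** 2).mean()
    imse_bayes += ((R_bayes - R_true) ** 2).mean()
imse_mle /= reps
imse_bayes /= reps
```

With a prior centered near the true rate, the Bayes estimator's shrinkage typically yields the smaller approximate IMSE at this small sample size, consistent with the reported simulation finding.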


2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center while a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
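The outlier sensitivity of the standard MLE can be seen with a small profile-likelihood sketch (a simplified Yeo–Johnson restricted to nonnegative data, on simulated values; this illustrates the problem, not the authors' robust estimator): adding a single extreme outlier pulls the estimated λ downward, as the likelihood tries to move the outlier inward:

```python
import numpy as np

def yeojohnson_pos(x, lam):
    # Yeo-Johnson transform for nonnegative x only
    if abs(lam) < 1e-8:
        return np.log1p(x)
    return ((1 + x) ** lam - 1) / lam

def profile_loglik(x, lam):
    # Gaussian profile log-likelihood of the transformed data, including
    # the Jacobian term (lam - 1) * sum(log(1 + x)) for nonnegative x.
    y = yeojohnson_pos(x, lam)
    return -0.5 * len(x) * np.log(y.var()) + (lam - 1) * np.log1p(x).sum()

def mle_lambda(x, grid=np.linspace(-2, 2, 401)):
    # Grid-search MLE of the transformation parameter
    ll = [profile_loglik(x, l) for l in grid]
    return grid[int(np.argmax(ll))]

rng = np.random.default_rng(3)
x = rng.lognormal(0.0, 0.6, size=500)       # right-skewed, nonnegative data

lam_clean = mle_lambda(x)
lam_out = mle_lambda(np.append(x, 1000.0))  # one extreme outlier appended
```

The drop from lam_clean to lam_out shows the MLE sacrificing the fit of the central data to accommodate one point, which is the failure mode the proposed robust estimator addresses.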


2013 ◽  
Vol 55 (3) ◽  
pp. 643-652
Author(s):  
Gauss M. Cordeiro ◽  
Denise A. Botter ◽  
Alexsandro B. Cavalcanti ◽  
Lúcia P. Barroso

2020 ◽  
Vol 28 (3) ◽  
pp. 183-196
Author(s):  
Kouacou Tanoh ◽  
Modeste N’zi ◽  
Armel Fabrice Yodé

We are interested in bounds on the large deviations probability and Berry–Esseen type inequalities for the maximum likelihood estimator and the Bayes estimator of the parameter appearing linearly in the drift of a nonhomogeneous stochastic differential equation driven by fractional Brownian motion.

