The mean squared errors of the maximum likelihood and natural-conjugate Bayes regression estimators

1979 ◽  
Vol 11 (2-3) ◽  
pp. 319-334 ◽  
Author(s):  
D.E.A. Giles ◽  
A.C. Rayner
2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Usman Shahzad ◽  
Nadia H. Al-Noor ◽  
Noureen Afshan ◽  
David Anekeya Alilah ◽  
Muhammad Hanif ◽  
...  

Robust regression tools are commonly used to develop regression-type ratio estimators with traditional measures of location whenever data are contaminated with outliers. Recently, researchers extended this idea and developed regression-type ratio estimators based on robust minimum covariance determinant (MCD) estimation. In this study, quantile regression with MCD-based measures of location is utilized and a class of quantile regression-type mean estimators is proposed. The mean squared errors (MSEs) of the proposed estimators are derived. The proposed estimators are compared with the reviewed class of estimators through a simulation study. Two real-life applications are also included; to assess the presence of outliers in these applications, the Dixon chi-squared test is used. The quantile regression estimators are found to perform better than several existing estimators.
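As an illustration of a regression-type mean estimator built on quantile regression, the sketch below fits a median (tau = 0.5) regression slope by minimizing the pinball loss and plugs it into the familiar regression estimator ybar + b(Xbar - xbar). The simulated data, contamination scheme, and sample sizes are illustrative assumptions, not those of the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated population with outliers in the study variable y.
N = 1000
x = rng.gamma(shape=4.0, scale=2.0, size=N)
y = 5.0 + 1.5 * x + rng.normal(0.0, 2.0, size=N)
y[rng.choice(N, size=20, replace=False)] += 40.0   # contamination

Xbar = x.mean()                                    # known population mean of x
n = 100
idx = rng.choice(N, size=n, replace=False)
xs, ys = x[idx], y[idx]

def pinball_loss(params, tau):
    """Quantile regression objective for the line a + b*x."""
    a, b = params
    r = ys - (a + b * xs)
    return np.sum(np.maximum(tau * r, (tau - 1.0) * r))

# Median (tau = 0.5) regression slope, robust to the outliers;
# the OLS fit provides a starting value.
x0 = np.polyfit(xs, ys, 1)[::-1]                   # [intercept, slope]
res = minimize(pinball_loss, x0=x0, args=(0.5,), method="Nelder-Mead")
b_q = res.x[1]

# Regression-type mean estimator using the quantile-regression slope.
y_qreg = ys.mean() + b_q * (Xbar - xs.mean())
print(y_qreg)
```

The same template accepts any robust slope (e.g. an MCD-based one) in place of `b_q`, which is the sense in which the paper's estimators generalize this form.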


2013 ◽  
Vol 23 (1) ◽  
pp. 117-129 ◽  
Author(s):  
Jiawen Bian ◽  
Huiming Peng ◽  
Jing Xing ◽  
Zhihui Liu ◽  
Hongwei Li

This paper considers parameter estimation of superimposed exponential signals in multiplicative and additive noise, both of which are independent and identically distributed. A modified Newton-Raphson algorithm is used to estimate the frequencies of the considered model, and these frequency estimates are then used to estimate the remaining linear parameters. It is proved that the modified Newton-Raphson algorithm is robust and that the corresponding frequency estimators attain the same convergence rate as the least squares estimators (LSEs) under the same noise conditions, while outperforming the LSEs in terms of mean squared error. Finally, the effectiveness of the algorithm is verified through numerical experiments.
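A minimal sketch of Newton-Raphson frequency refinement, here for a single sinusoid in additive noise only (a simplification of the paper's multiplicative-noise model): a coarse periodogram grid search supplies the starting value, and Newton steps with finite-difference derivatives sharpen it well below the grid resolution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = np.arange(n)
omega0 = 0.7                                   # true frequency (illustrative)
y = 2.0 * np.cos(omega0 * t + 0.3) + rng.normal(0.0, 1.0, size=n)

def periodogram(omega):
    """I(omega) = |sum_t y_t e^{-i omega t}|^2 / n."""
    return np.abs(np.sum(y * np.exp(-1j * omega * t))) ** 2 / n

# Coarse grid search gives the starting value.
grid = np.linspace(0.01, np.pi - 0.01, 2000)
omega = grid[np.argmax([periodogram(w) for w in grid])]

# Newton-Raphson refinement of the periodogram peak
# (finite-difference first and second derivatives).
h = 1e-5
for _ in range(20):
    d1 = (periodogram(omega + h) - periodogram(omega - h)) / (2 * h)
    d2 = (periodogram(omega + h) - 2 * periodogram(omega)
          + periodogram(omega - h)) / h ** 2
    if d2 >= 0:                                # keep it a maximization step
        break
    omega -= d1 / d2
print(omega)
```

The paper's modification concerns how the Newton step is damped so the iteration stays robust; this sketch uses the plain step for brevity.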


2009 ◽  
Vol 6 (4) ◽  
pp. 705-710
Author(s):  
Baghdad Science Journal

This research investigates the problem of estimating the reliability of the two-parameter Weibull distribution using the maximum likelihood method and White's method. The comparison is carried out through a simulation study based on three choices of parameters, (α=0.8, β=0.9), (α=1.2, β=1.5), and (α=2.5, β=2), and sample sizes n = 10, 70, 150. The mean squared error (MSE) is used as the statistical criterion for comparison among the methods.
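A sketch of the maximum likelihood side of such a comparison (White's method is omitted), using SciPy's built-in two-parameter Weibull fit with the location fixed at zero; the mission time `t0` is an illustrative assumption:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)
alpha_true, beta_true = 1.2, 1.5        # shape, scale (one of the abstract's choices)
t0 = 1.0                                # mission time for the reliability R(t0)

n = 150                                 # largest sample size in the abstract
data = weibull_min.rvs(alpha_true, scale=beta_true, size=n, random_state=rng)

# Maximum likelihood fit; floc=0 pins the location, giving the
# two-parameter Weibull.
alpha_hat, loc, beta_hat = weibull_min.fit(data, floc=0)

def reliability(t, alpha, beta):
    """R(t) = exp(-(t/beta)^alpha) for the two-parameter Weibull."""
    return np.exp(-(t / beta) ** alpha)

R_true = reliability(t0, alpha_true, beta_true)
R_mle = reliability(t0, alpha_hat, beta_hat)
print(R_true, R_mle)
```

Repeating the fit over many simulated samples and averaging (R_mle - R_true)^2 yields the MSE criterion the study uses for its comparison.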


Symmetry ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 1738
Author(s):  
Selvi Mardalena ◽  
Purhadi Purhadi ◽  
Jerry Dwi Trijoyo Purnomo ◽  
Dedy Dwi Prastyo

Multivariate Poisson regression is used to model two or more count response variables. Poisson regression carries a strict assumption: the mean and the variance of each response variable are equal (equidispersion). In practice, the variance can exceed the mean (overdispersion), so a suitable method for modelling such data needs to be developed. One alternative model that overcomes the overdispersion issue for multiple count responses is the Multivariate Poisson Inverse Gaussian Regression (MPIGR) model, which is extended here with an exposure variable. Additionally, a modification of the Bessel function that contains factorial functions is proposed in this work to make it computable. The objective of this study is to develop the parameter estimation and hypothesis testing of the MPIGR model. The parameter estimation uses the Maximum Likelihood Estimation (MLE) method, followed by Newton–Raphson iteration. The hypothesis testing is constructed using the Maximum Likelihood Ratio Test (MLRT) method. The MPIGR model is then applied to regress three response variables, i.e., the number of infant deaths, the number of under-five child deaths, and the number of maternal deaths, on eight predictors. The observation units are the cities and municipalities of Java Island, Indonesia. The empirical results show that all three response variables are significantly affected by all predictors.
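The estimation machinery named above, maximum likelihood followed by Newton-Raphson iteration, can be sketched for the much simpler univariate Poisson regression with a known exposure term; the simulated data and design below are illustrative assumptions, and this is not the MPIGR likelihood itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one predictor
beta_true = np.array([0.5, 0.8])
exposure = rng.uniform(0.5, 2.0, size=n)                # known exposure per unit
mu = exposure * np.exp(X @ beta_true)
y = rng.poisson(mu)

# Newton-Raphson for the Poisson log-likelihood with a log-exposure offset:
#   score   U(b) = X' (y - mu),  with  mu = exposure * exp(X b)
#   Hessian H(b) = -X' diag(mu) X
beta = np.zeros(2)
for _ in range(25):
    mu_hat = exposure * np.exp(X @ beta)
    score = X.T @ (y - mu_hat)
    hess = -(X * mu_hat[:, None]).T @ X
    step = np.linalg.solve(hess, score)
    beta = beta - step                                  # Newton update
    if np.max(np.abs(step)) < 1e-10:
        break
print(beta)
```

The MPIGR estimation replaces this log-likelihood with the multivariate Poisson inverse Gaussian one (involving the modified Bessel function), but the iteration has the same score/Hessian structure.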


1978 ◽  
Vol 80 ◽  
pp. 49-52
Author(s):  
André Heck

Our algorithm for stellar luminosity calibrations (based on the principle of maximum likelihood) allows the calibration of relations of the type

M_i = Σ_{j=1}^{N} q_j C_ij,  i = 1, …, n,

where n is the size of the sample at hand, the M_i are the individual absolute magnitudes, the C_ij are observational quantities (j = 1, …, N), and the q_j are the coefficients to be determined. If we put N = 1 and C_iN = 1, we have q_1 = M̄, the mean absolute magnitude of the sample. As additional output, the algorithm also provides the dispersion in magnitude of the sample σ_M, the mean solar motion (U, V, W) and the corresponding velocity ellipsoid (σ_U, σ_V, σ_W).
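Under a Gaussian-scatter assumption the maximum likelihood fit of such a linear calibration coincides with ordinary least squares, which the sketch below illustrates together with the N = 1, C_i1 = 1 special case; the data and coefficient values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Hypothetical observational quantities C_ij (columns j = 1..N, with C_i1 = 1)
# and true coefficients q_j.
C = np.column_stack([np.ones(n), rng.normal(size=n)])
q_true = np.array([4.8, -0.6])
M = C @ q_true + rng.normal(0.0, 0.3, size=n)    # absolute magnitudes with scatter

# With Gaussian scatter the ML solution for q is the least-squares solution.
q_hat, *_ = np.linalg.lstsq(C, M, rcond=None)

# Special case N = 1, C_i1 = 1: the fitted q_1 reduces to the mean
# absolute magnitude of the sample.
q_mean, *_ = np.linalg.lstsq(np.ones((n, 1)), M, rcond=None)
print(q_hat, q_mean[0], M.mean())
```

The algorithm in the paper additionally models the kinematic quantities (solar motion and velocity ellipsoid), which this sketch leaves out.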


2014 ◽  
Vol 2 (1) ◽  
pp. 13-74 ◽  
Author(s):  
Mark J. van der Laan

Suppose that we observe a population of causally connected units. On each unit at each time-point on a grid we observe a set of other units the unit is potentially connected with, and a unit-specific longitudinal data structure consisting of baseline and time-dependent covariates, a time-dependent treatment, and a final outcome of interest. The target quantity of interest is defined as the mean outcome for this group of units if the exposures of the units were probabilistically assigned according to a known specified mechanism, where the latter is called a stochastic intervention. Causal effects of interest are defined as contrasts of the mean of the unit-specific outcomes under the different stochastic interventions one wishes to evaluate. This covers a large range of estimation problems, from independent units, to independent clusters of units, to a single cluster of units in which each unit has a limited number of connections to other units. The allowed dependence includes treatment allocation in response to data on multiple units and so-called causal interference as special cases. We present a few motivating classes of examples, propose a structural causal model, define the desired causal quantities, address the identification of these quantities from the observed data, and define maximum likelihood based estimators based on cross-validation. In particular, we present maximum likelihood based super-learning for this network data. Nonetheless, such smoothed/regularized maximum likelihood estimators are not targeted and will thereby be overly biased with respect to the target parameter and, as a consequence, will generally not result in asymptotically normally distributed estimators of the statistical target parameter.

To formally develop estimation theory, we focus on the simpler case in which the longitudinal data structure is a point-treatment data structure. We formulate a novel targeted maximum likelihood estimator of this estimand and show that the double robustness of the efficient influence curve implies that the bias of the targeted minimum loss-based estimation (TMLE) will be a second-order term involving squared differences of two nuisance parameters. In particular, the TMLE will be consistent if either one of these nuisance parameters is consistently estimated. Due to the causal dependencies between units, the data set may correspond to the realization of a single experiment, so establishing a (e.g. normal) limit distribution for the targeted maximum likelihood estimators, and the corresponding statistical inference, is a challenging topic. We prove two formal theorems establishing asymptotic normality using advances in weak-convergence theory. We conclude with a discussion and refer to an accompanying technical report for extensions to general longitudinal data structures.
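A point-treatment TMLE can be sketched in the simplest i.i.d. setting (no network dependence), with a binary covariate W, treatment A, and outcome Y, targeting the average treatment effect. The initial estimators, the clever covariate, and the data-generating mechanism below are all illustrative assumptions, not the paper's network estimator:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

rng = np.random.default_rng(5)
n = 2000
W = rng.binomial(1, 0.5, size=n)                 # binary baseline covariate
gW = np.where(W == 1, 0.7, 0.3)                  # true treatment mechanism
A = rng.binomial(1, gW)
pY = expit(-0.5 + A + 0.8 * W)                   # true outcome regression
Y = rng.binomial(1, pY)

# Initial estimators: saturated empirical means within (A, W) strata.
Qbar = np.zeros((2, 2))
g_hat = np.zeros(2)
for w in (0, 1):
    g_hat[w] = A[W == w].mean()
    for a in (0, 1):
        Qbar[a, w] = Y[(A == a) & (W == w)].mean()

Q_AW = Qbar[A, W]
H = A / g_hat[W] - (1 - A) / (1 - g_hat[W])      # clever covariate

def neg_loglik(eps):
    """Log-likelihood of the one-dimensional logistic fluctuation."""
    p = expit(logit(Q_AW) + eps * H)
    return -np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))

eps = minimize_scalar(neg_loglik, bounds=(-1.0, 1.0), method="bounded").x

# Targeted (fluctuated) outcome regressions and the plug-in ATE.
Q1 = expit(logit(Qbar[1, W]) + eps / g_hat[W])
Q0 = expit(logit(Qbar[0, W]) - eps / (1 - g_hat[W]))
ate_tmle = np.mean(Q1 - Q0)
print(ate_tmle)
```

The fluctuation step is what makes the estimator "targeted": it solves the efficient influence curve equation, which is the source of the double robustness discussed above.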

