Best Linear Unbiased Estimators
Recently Published Documents

TOTAL DOCUMENTS: 46 (FIVE YEARS 8)
H-INDEX: 10 (FIVE YEARS 1)

2021 ◽  
Author(s):  
Peter Teunissen

Best integer equivariant (BIE) estimators provide minimum mean squared error (MMSE) solutions to the problem of GNSS carrier-phase ambiguity resolution for a wide range of distributions. The associated BIE estimators are universally optimal in the sense that they have an accuracy which is never poorer than that of any integer estimator and any linear unbiased estimator. Their accuracy is therefore always at least as good as that of Integer Least-Squares (ILS) estimators and Best Linear Unbiased Estimators (BLUEs).

Current theory is based on using BIE for the multivariate normal distribution. In this contribution this will be generalized to the contaminated normal distribution and the multivariate t-distribution, both of which have heavier tails than the normal. Their computational formulae are presented and discussed in relation to that of the normal distribution. In addition, a GNSS real-data based analysis is carried out to demonstrate the universal MMSE properties of the BIE estimators for GNSS baselines and associated parameters.

Keywords: Integer equivariant (IE) estimation · Best integer equivariant (BIE) · Integer Least-Squares (ILS) · Best linear unbiased estimation (BLUE) · Multivariate contaminated normal · Multivariate t-distribution · Global Navigation Satellite Systems (GNSSs)
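For orientation, the normal-based BIE estimate can be viewed as a weighted average of integer candidates, with weights decaying in the Mahalanobis distance to the float solution. The sketch below illustrates this idea numerically; the function name, the finite enumeration radius, and the multivariate-t kernel used for the heavier-tailed variant are illustrative assumptions, not the computational formulae derived in the paper.

```python
import numpy as np
from itertools import product

def bie_estimate(a_hat, Q, radius=3, df=None):
    """Sketch of a best integer equivariant (BIE) ambiguity estimate.

    a_hat : float ambiguity solution (n,)
    Q     : its covariance matrix (n, n)
    radius: enumerate integer candidates within +/- radius of round(a_hat)
    df    : if given, use a multivariate-t kernel with df degrees of freedom
            instead of the normal kernel (heavier tails); an assumed weight choice.
    """
    n = len(a_hat)
    Qinv = np.linalg.inv(Q)
    center = np.round(a_hat).astype(int)

    num, den = np.zeros(n), 0.0
    for off in product(range(-radius, radius + 1), repeat=n):
        z = center + np.array(off)
        d2 = (a_hat - z) @ Qinv @ (a_hat - z)        # squared Mahalanobis distance
        if df is None:
            w = np.exp(-0.5 * d2)                     # normal weight
        else:
            w = (1.0 + d2 / df) ** (-(df + n) / 2.0)  # t-kernel weight (slower decay)
        num += w * z
        den += w
    return num / den                                  # weighted average of integer candidates

# toy example: 2-dimensional float ambiguities
a_hat = np.array([1.3, -0.7])
Q = np.array([[0.09, 0.02], [0.02, 0.04]])
print(bie_estimate(a_hat, Q))          # normal-based BIE
print(bie_estimate(a_hat, Q, df=4))    # heavier-tailed variant
```

The heavier-tailed kernel decays polynomially rather than exponentially, so distant integer candidates retain relatively more weight, which is the qualitative effect the generalized formulae quantify.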


2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Bo Jiang ◽  
Yongge Tian

This paper is concerned with solving some fundamental estimation, prediction, and inference problems for a linear random-effects model whose parameter vector satisfies certain exact linear restrictions. Our work includes deriving analytical formulas for calculating the best linear unbiased predictors (BLUPs) and the best linear unbiased estimators (BLUEs) of all unknown parameters in the model by solving certain constrained quadratic matrix optimization problems, characterizing various mathematical and statistical properties of the predictors and estimators, establishing various fundamental rank and inertia formulas associated with the covariance matrices of the predictors and estimators, and presenting necessary and sufficient conditions for several equalities and inequalities of these covariance matrices to hold.
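As a concrete illustration of one building block, the sketch below computes the BLUE of the fixed parameter vector under exact linear restrictions in the full-rank case, via a Lagrange-type correction of the unrestricted GLS estimator. The function name and toy data are assumptions; the paper's results cover the far more general random-effects setting, including singular matrices, via generalized inverses.

```python
import numpy as np

def restricted_gls(y, X, Sigma, A, b):
    """BLUE of beta in y = X beta + e, Cov(e) = Sigma, subject to A beta = b.

    Sketch for the full-rank case only.
    """
    Si = np.linalg.inv(Sigma)
    M = np.linalg.inv(X.T @ Si @ X)          # covariance factor of the unrestricted GLS
    beta_gls = M @ X.T @ Si @ y              # unrestricted BLUE (GLS estimator)
    C = A @ M @ A.T
    correction = M @ A.T @ np.linalg.solve(C, A @ beta_gls - b)
    return beta_gls - correction             # restricted BLUE, satisfies A beta = b exactly

# toy example: two regression coefficients restricted to sum to one
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([0.3, 0.7]) + rng.normal(scale=0.5, size=50)
Sigma = 0.25 * np.eye(50)
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
print(restricted_gls(y, X, Sigma, A, b))
```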


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Mukhtar M. Salah

In this paper, the two-parameter α-power exponential (μ, λ) distribution with location parameter μ > 0 and scale parameter λ > 0 is studied under progressive Type-II censored data with a fixed shape parameter α. We obtain the maximum likelihood estimators of the unknown parameters numerically, since the likelihood equations cannot be solved analytically, using the approximate best linear unbiased estimators μ* and λ* as initial guesses for the MLEs μ̂ and λ̂. We also derive interval estimates of the unknown parameters. Monte Carlo simulations are performed and data examples are provided for illustration and comparison.
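The two-step pattern described above (a simple closed-form initial guess followed by numerical likelihood maximization) can be sketched as follows. The density form assumed for the α-power exponential distribution, the function names, and the optimizer choice are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def ape_logpdf(x, mu, lam, alpha):
    # assumed alpha-power exponential density: alpha-power transform of Exp(lam), shifted by mu
    # (alpha is fixed, alpha > 0 and alpha != 1)
    z = lam * (x - mu)
    return (np.log(np.log(alpha) / (alpha - 1.0)) + np.log(lam)
            - z + (1.0 - np.exp(-z)) * np.log(alpha))

def ape_logsf(x, mu, lam, alpha):
    # log survival function, needed for the progressively censored terms
    z = lam * (x - mu)
    F = (alpha ** (1.0 - np.exp(-z)) - 1.0) / (alpha - 1.0)
    return np.log1p(-F)

def mle_progressive(x, R, alpha, init):
    """Numerical MLE of (mu, lam) under progressive Type-II censoring.

    x    : ordered observed failure times
    R    : number of units removed at each failure
    init : initial guess, e.g. the approximate BLUEs (mu*, lam*)
    """
    def negloglik(theta):
        mu, lam = theta
        if lam <= 0 or np.any(x <= mu):
            return np.inf
        return -(np.sum(ape_logpdf(x, mu, lam, alpha))
                 + np.sum(R * ape_logsf(x, mu, lam, alpha)))
    return minimize(negloglik, init, method="Nelder-Mead").x
```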


Author(s):  
Arfang Badji ◽  
Lewis Machida ◽  
Daniel Bomet Kwemoi ◽  
Frank Kumi ◽  
Dennis Okii ◽  
...  

Genomic selection (GS) can accelerate variety release by shortening the variety development phase when factors that influence the prediction accuracies (PA) of genomic prediction (GP) models, such as training set (TS) size and its relationship with the breeding set (BS), are optimized beforehand. In this study, PAs for resistance to fall armyworm (FAW) and maize weevil (MW) in a diverse tropical maize panel of 341 doubled haploid and inbred lines were estimated using 16 parametric, semi-parametric, and nonparametric algorithms under a 10-fold cross-validation strategy with five repetitions. For MW resistance, 126 lines that had both genotypic and phenotypic data were used as the TS (37% of the panel) and the remaining lines, with only genotypic data, as the BS. For FAW damage resistance, two TS determination strategies were used: random-based TS (RBTS) with increasing sizes (37, 63, 75, and 85%) and pedigree-based TS (PBTS). For both the MW and FAW resistance datasets with an RBTS of 37%, PAs achieved with phenotypic best linear unbiased predictors (BLUPs) were at least twice as high as those realized with best linear unbiased estimators (BLUEs). The PAs achieved with BLUPs for MW resistance traits varied from 0.66 to 0.82. The PAs with BLUPs for the FAW resistance datasets ranged from 0.694 to 0.714 for the RBTS of 37%, and from 0.843 to 0.844 for the RBTS of 85%. The PAs with BLUPs for FAW resistance with PBTS were generally high, varying from 0.83 to 0.86, except for the third dataset, which had the largest TS (86.22% of the panel) and PAs ranging from 0.11 to 0.75. GP models showed generally similar predictive abilities for each trait, while the TS designation was determinant. There was a highly positive correlation (R = 0.92***) between TS size and PAs for the RBTS approach, while for the PBTS these parameters were highly negatively correlated (R = -0.44***), indicating the importance of the relationship between the TS and the BS, with the smallest TS (31%) achieving the highest PAs (0.86). This study paves the way towards the use of GS for maize resistance to insect pests in sub-Saharan Africa.
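The core evaluation loop (repeated 10-fold cross-validation, reporting the correlation between observed and predicted values as PA) can be sketched as below. Ridge regression stands in for the 16 GP algorithms compared in the study, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold

def prediction_accuracy(X, y, n_splits=10, n_repeats=5, seed=1):
    """Repeated k-fold cross-validation of a ridge (GBLUP-like) genomic prediction model.

    X : marker matrix (lines x markers); y : adjusted phenotypes (e.g. BLUPs).
    Returns the mean Pearson correlation between observed and predicted values,
    the usual definition of prediction accuracy (PA).
    """
    cv = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=seed)
    accs = []
    for train, test in cv.split(X):
        model = Ridge(alpha=1.0).fit(X[train], y[train])
        accs.append(np.corrcoef(y[test], model.predict(X[test]))[0, 1])
    return float(np.mean(accs))
```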


2019 ◽  
Vol 17 (1) ◽  
pp. 979-989 ◽  
Author(s):  
Jian Hou ◽  
Yong Zhao

Abstract Linear regression models are a foundation of current statistical theory and have been a prominent object of study in statistical data analysis and inference. A special class of linear regression models is the class of seemingly unrelated regression models (SURMs), which allow correlated observations between different regression equations. In this article, we present a general approach to SURMs under some general assumptions, including establishing closed-form expressions for the best linear unbiased predictors (BLUPs) and the best linear unbiased estimators (BLUEs) of all unknown parameters in the models, and establishing necessary and sufficient conditions for a family of equalities of the predictors and estimators under the individual models and the combined model to hold. Some fundamental and valuable properties of the BLUPs and BLUEs under the SURMs are also presented.
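A common computational route to estimators of this kind is two-step feasible GLS for the stacked SUR system, sketched below under the simplifying assumption of equal sample sizes across equations. The function name is illustrative; with a known cross-equation covariance matrix, the second step yields the BLUE of the combined model.

```python
import numpy as np
from scipy.linalg import block_diag

def sur_fgls(ys, Xs):
    """Two-step feasible GLS for a seemingly unrelated regression system.

    ys : list of response vectors (one per equation, same length n)
    Xs : list of design matrices (one per equation)
    """
    n, m = len(ys[0]), len(ys)
    # step 1: equation-by-equation OLS and residuals
    betas = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in zip(Xs, ys)]
    E = np.column_stack([y - X @ b for y, X, b in zip(ys, Xs, betas)])
    S = E.T @ E / n                                # estimated cross-equation covariance
    # step 2: GLS on the stacked system with covariance S kron I_n
    X = block_diag(*Xs)
    y = np.concatenate(ys)
    Omega_inv = np.kron(np.linalg.inv(S), np.eye(n))
    XtOi = X.T @ Omega_inv
    return np.linalg.solve(XtOi @ X, XtOi @ y)     # stacked coefficient estimate
```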


Author(s):  
Sandra S. Ferreira ◽  
Dário Ferreira ◽  
Célia Nunes ◽  
Francisco Carvalho ◽  
João Tiago Mexia

2017 ◽  
Vol 15 (1) ◽  
pp. 1300-1322 ◽  
Author(s):  
Bo Jiang ◽  
Yongge Tian ◽  
Xuan Zhang

Abstract A general linear model can be given in certain multiple partitioned forms, and there exist submodels associated with the given full model. In this situation, we can make statistical inferences from the full model and the submodels, respectively. It has been recognized that links do exist between inference results obtained from the full model and its submodels, and it is therefore of interest to establish such links among estimators of parameter spaces under these models. In this approach, the methodology of additive matrix decompositions plays an important role in obtaining satisfactory conclusions. In this paper, we consider the problem of establishing additive decompositions of estimators in the context of a general linear model with partial parameter restrictions. We demonstrate how to decompose best linear unbiased estimators (BLUEs) under the constrained general linear model (CGLM) as sums of estimators under submodels with parameter restrictions, using a variety of effective tools in matrix analysis. The derivation of our main results relies on extensive algebraic operations on the given matrices and their generalized inverses in the CGLM, and the contributions as a whole illustrate various skillful uses of state-of-the-art matrix analysis techniques in the statistical inference of linear regression models.
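For orientation, one elementary way to compute a BLUE under a CGLM in the full-rank, consistent-restriction case is a null-space reparameterization of the restriction, sketched below. This is an illustrative sketch under stated assumptions, not the additive decomposition into submodel estimators that the paper derives via generalized inverses.

```python
import numpy as np
from scipy.linalg import null_space

def cglm_blue(y, X, Sigma, A, b):
    """BLUE of beta under the constrained GLM y = X beta + e, Cov(e) = Sigma, A beta = b.

    Sketch: assumes the restriction is consistent and X restricted to the
    feasible set has full column rank.
    """
    beta0 = np.linalg.lstsq(A, b, rcond=None)[0]   # a particular solution of A beta = b
    N = null_space(A)                              # columns span {beta : A beta = 0}
    Si = np.linalg.inv(Sigma)
    Z = X @ N
    gamma = np.linalg.solve(Z.T @ Si @ Z, Z.T @ Si @ (y - X @ beta0))
    return beta0 + N @ gamma                       # constrained BLUE
```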


2017 ◽  
Vol 15 (1) ◽  
pp. 126-150 ◽  
Author(s):  
Yongge Tian

Abstract Matrix mathematics provides a powerful tool set for addressing statistical problems; in particular, the theory of matrix ranks and inertias has been developed as an effective methodology for simplifying various complicated matrix expressions and for establishing equalities and inequalities that occur in statistical analysis. This paper describes how to establish exact formulas for calculating the ranks and inertias of covariances of predictors and estimators of parameter spaces in general linear models (GLMs), and how to use these formulas in the statistical analysis of GLMs. We first derive analytical expressions of the best linear unbiased predictors/best linear unbiased estimators (BLUPs/BLUEs) of all unknown parameters in the model by solving a constrained quadratic matrix-valued function optimization problem, and present some well-known results on ordinary least-squares predictors/ordinary least-squares estimators (OLSPs/OLSEs). We then establish some fundamental rank and inertia formulas for covariance matrices related to BLUPs/BLUEs and OLSPs/OLSEs, and use these formulas to characterize a variety of equalities and inequalities for the covariance matrices of BLUPs/BLUEs and OLSPs/OLSEs. As applications, we use these equalities and inequalities in the comparison of the covariance matrices of BLUPs/BLUEs and OLSPs/OLSEs. The work on the formulations of BLUPs/BLUEs and OLSPs/OLSEs and their covariance matrices under GLMs provides direct access, as a standard example, to a very simple algebraic treatment of predictors and estimators in linear regression analysis, which leads to a deep insight into the linear nature of GLMs and gives an efficient way of summarizing the results.
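A small numerical check of the kind of covariance comparison discussed above: under an Aitken model with error covariance Σ, Cov(OLSE) − Cov(BLUE) is nonnegative definite, so its inertia contains no negative eigenvalues. The helper name and simulated data below are illustrative.

```python
import numpy as np

def inertia(M, tol=1e-10):
    """Numbers of positive, negative and (numerically) zero eigenvalues of a symmetric matrix."""
    w = np.linalg.eigvalsh(M)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)), int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(3)
n, p = 30, 4
X = rng.normal(size=(n, p))
L = rng.normal(size=(n, n))
Sigma = L @ L.T + n * np.eye(n)                    # a positive definite error covariance

XtX_inv = np.linalg.inv(X.T @ X)
cov_olse = XtX_inv @ X.T @ Sigma @ X @ XtX_inv     # covariance of the OLS estimator
cov_blue = np.linalg.inv(X.T @ np.linalg.inv(Sigma) @ X)  # covariance of the BLUE (GLS)

# Aitken/Gauss-Markov: Cov(OLSE) - Cov(BLUE) is nonnegative definite,
# so its inertia reports zero negative eigenvalues.
print(inertia(cov_olse - cov_blue))
```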

