Comparison of Orthogonal Regression and Least Squares in Measurement Error Modeling for Prediction of Material Property

2013, Vol 661, pp. 166-170
Author(s): Guo Liang Ding, Biao Chu, Yi Jin, Chang An Zhu

A critical challenge in material property prediction is accurately estimating the regression coefficient linking the structure or processing of a material to its macroscopic property. One source of estimation error is measurement error, which is common in practice. To provide guidance on the use of simple linear regression methods in measurement error modeling for material property prediction, we theoretically investigated and compared least squares (LS) and orthogonal regression (OR), and we illustrated their application with the prediction of tensile strength for quenched and tempered steel 45. Under certain conditions, OR outperforms LS in predicting material properties in the presence of measurement errors.
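
For illustration only, here is a minimal sketch (not the authors' implementation, with arbitrary assumed noise levels) contrasting ordinary least squares with orthogonal regression when the predictor carries measurement error; scipy.odr fits the orthogonal-distance model.

```python
# Minimal sketch, not the authors' code: LS vs. orthogonal regression (OR)
# when the predictor x is observed with error. Noise levels are assumed.
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)
n = 200
true_slope, true_intercept = 2.0, 1.0
x_true = rng.uniform(0.0, 10.0, n)
x_obs = x_true + rng.normal(0.0, 0.8, n)                  # error in the predictor
y_obs = true_intercept + true_slope * x_true + rng.normal(0.0, 0.8, n)

# Ordinary least squares: the slope is attenuated because x is noisy
slope_ls, intercept_ls = np.polyfit(x_obs, y_obs, 1)

# Orthogonal (orthogonal distance) regression via scipy.odr
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
fit = odr.ODR(odr.Data(x_obs, y_obs), linear, beta0=[1.0, 0.0]).run()
slope_or, intercept_or = fit.beta

print(f"true slope {true_slope:.2f}  LS {slope_ls:.2f}  OR {slope_or:.2f}")
```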

2009, Vol 7, pp. 95
Author(s): David L Borchers, Daniel G Pike, Thorvaldur Gunnlaugsson, Gísli A Víkingsson

We estimate the abundance of minke whales (Balaenoptera acutorostrata) from the Icelandic coastal shelf aerial surveys carried out as part of the 1987 and 2001 North Atlantic Sightings Surveys (NASS). In the 1987 survey, the probability of detecting animals at distance zero (g(0)) is very close to 1, but there is substantial random measurement error in the estimated distances. To estimate abundance from these data, we use methods that assume g(0)=1 but include a distance measurement error model. In the 2001 survey, measurement errors were small enough to be negligible, and we use double-platform methods that estimate g(0) and assume no measurement error. From the 1987 survey, we estimate abundance to be 24,532 animals, with 95% CI (13,399; 44,916). From the 2001 NASS survey data, minke whale abundance is estimated to be 43,633 animals, with 95% CI (30,148; 63,149).
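
As context for the conventional approach the paper extends, the sketch below (my illustration, not the authors' estimator, with invented truncation distance, transect length, and study area) fits a half-normal detection function by maximum likelihood under g(0)=1 and no measurement error, then converts the effective strip half-width into an abundance estimate.

```python
# Illustrative sketch only: conventional line-transect estimation with a
# half-normal detection function, g(0)=1, and no measurement-error model.
# Survey constants (w, L, A) are invented for the example.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)
w = 2.0                                          # truncation distance (km)
x = np.abs(rng.normal(0.0, 0.7, 5000))           # simulated detection distances
x = x[x <= w][:300]

def negloglik(sigma):
    g = np.exp(-x**2 / (2.0 * sigma**2))                              # detection prob., g(0)=1
    mu = sigma * np.sqrt(2.0 * np.pi) * (norm.cdf(w / sigma) - 0.5)   # integral of g on [0, w]
    return -np.sum(np.log(g / mu))

sigma_hat = minimize_scalar(negloglik, bounds=(0.05, 5.0), method="bounded").x
esw = sigma_hat * np.sqrt(2.0 * np.pi) * (norm.cdf(w / sigma_hat) - 0.5)  # effective strip half-width
L, A = 1000.0, 50000.0                           # transect length (km), study area (km^2)
N_hat = len(x) * A / (2.0 * esw * L)             # Horvitz-Thompson-style abundance
print(f"sigma ≈ {sigma_hat:.2f} km, ESW ≈ {esw:.2f} km, N ≈ {N_hat:.0f}")
```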


2019, Vol 29 (3), pp. 448-463
Author(s): Manuel E. Rademaker, Florian Schuberth, Theo K. Dijkstra

Purpose. The purpose of this paper is to enhance consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors.
Design/methodology/approach. Correction for attenuation as originally applied by PLSc is modified to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted.
Findings. In the presence of population measurement error correlation, estimated parameter bias is generally small for original and modified PLSc, with the latter outperforming the former for large sample sizes. In terms of the root mean squared error, the results are virtually identical for both original and modified PLSc. Only for relatively large sample sizes, high population measurement error correlation, and low population composite reliability are the increased standard errors associated with the modification outweighed by a smaller bias. These findings are regarded as initial evidence that original PLSc is comparatively robust with respect to misspecification of the structure of measurement error correlations within blocks of indicators.
Originality/value. Introducing and investigating a new approach to address measurement error correlation within blocks of indicators in PLSc, this paper contributes to the ongoing development and assessment of recent advancements in partial least squares path modeling.
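
Since PLSc's consistency rests on correction for attenuation, a bare-bones sketch of that correction (my illustration with assumed reliabilities and sample size, not the authors' modified PLSc) may help fix ideas: an observed correlation between two error-prone scores is divided by the square root of the product of their reliabilities.

```python
# Bare-bones sketch of correction for attenuation (the idea underlying PLSc),
# not the authors' modified PLSc; reliabilities and sample size are assumed.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
eta1 = rng.normal(size=n)
eta2 = 0.5 * eta1 + np.sqrt(1.0 - 0.5**2) * rng.normal(size=n)   # true correlation 0.5

def noisy_score(eta, reliability):
    # add error so that Var(true)/Var(observed) equals the stated reliability
    err_sd = np.sqrt((1.0 - reliability) / reliability)
    return eta + rng.normal(scale=err_sd, size=eta.shape)

rel1, rel2 = 0.7, 0.8
y1, y2 = noisy_score(eta1, rel1), noisy_score(eta2, rel2)

r_obs = np.corrcoef(y1, y2)[0, 1]
r_disattenuated = r_obs / np.sqrt(rel1 * rel2)        # correction for attenuation
print(f"observed r {r_obs:.3f}  corrected r {r_disattenuated:.3f}  (true 0.5)")
```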


Author(s): Анастасия Юрьевна Тимофеева

The problem of estimating monomer reactivity ratios from the differential copolymerization equation is considered. The inclusion of the input variable's measurement error in the model as a Berkson error is justified. An algorithm is proposed for simultaneously estimating the copolymerization constants and the error variances by the maximum likelihood method. Different methods for estimating the copolymerization constants are compared using the copolymerization of vinyl esters as an example. It is shown that the method based on symmetric equations gives incorrect results. The estimates obtained with the proposed algorithm are closest to those obtained by nonlinear least squares.

Purpose. The purpose of this paper is to study methods for estimating copolymerization reactivity ratios based on the differential composition equation.
Methodology. Most estimation methods reduce the differential composition equation to a linear form. They are based on the least squares method and do not take into account the measurement error in the input variable; therefore, they lead to statistically incorrect results. When the problem is analyzed with a classical errors-in-variables model, additional information is required to determine the magnitude of the errors in measuring the concentration of monomers in the mixture and in the copolymer. Including the measurement error of the input variable in the model as a Berkson error is more consistent with the actual conditions of the experiments and allows the reactivity ratios and the measurement error variances to be estimated simultaneously using the maximum likelihood method.
Results. An algorithm has been developed for estimating reactivity ratios with no additional information. An empirical study of estimation methods has been carried out using the copolymerization of vinyl esters as an example.
Findings. It is shown that the method based on symmetric equations gives incorrect results. Estimation results using the proposed algorithm are closest to the estimates obtained by the nonlinear least squares method.
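
As a point of reference for the nonlinear least squares benchmark mentioned above, here is a hedged sketch (synthetic data and invented reactivity ratios, not the paper's Berkson-error maximum likelihood algorithm) fitting r1 and r2 in the Mayo-Lewis differential composition equation.

```python
# Hedged sketch: nonlinear least squares fit of reactivity ratios r1, r2 in
# the Mayo-Lewis composition equation. Data are synthetic; this is the
# benchmark method, not the proposed Berkson-error ML algorithm.
import numpy as np
from scipy.optimize import curve_fit

def mayo_lewis(f1, r1, r2):
    # instantaneous copolymer composition F1 as a function of feed fraction f1
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2)

rng = np.random.default_rng(3)
f1 = np.linspace(0.1, 0.9, 9)                                      # monomer feed fractions
F1 = mayo_lewis(f1, 0.30, 1.80) + rng.normal(0.0, 0.01, f1.size)   # noisy copolymer compositions

(r1_hat, r2_hat), _ = curve_fit(mayo_lewis, f1, F1, p0=[1.0, 1.0])
print(f"r1 ≈ {r1_hat:.2f}, r2 ≈ {r2_hat:.2f}")
```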


Author(s): Sergio Mendoza, Ji Liu, Partha Mishra, Hosam K. Fathy

This paper derives analytic expressions for both the mean and variance of battery state of charge (SOC) estimation error, assuming a least squares estimation law. The paper examines three sources of estimation error, namely: (i) voltage measurement errors (both bias and noise), (ii) current measurement bias, and (iii) mismatch between the order of the battery model used for estimation and the true order of the battery's dynamics. There is already a rich literature on quantifying battery SOC estimation errors for different estimator designs. The novelty of this paper stems from its examination of both the expected SOC estimation bias and variance, for a least squares estimation algorithm, in the presence of these three fundamental sources of error. We show, both analytically and by Monte Carlo simulation, that under reasonable operating conditions the expected bias in SOC estimation for lithium-ion batteries dominates the expected estimation variance. This leads to the important insight that quantifying SOC estimation variance using Fisher information furnishes overly optimistic predictions of achievable SOC estimation accuracy.
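
To make the bias-versus-variance comparison concrete, the following Monte Carlo sketch (my simplification with an assumed linear OCV curve and made-up sensor specs, not the paper's battery model or derivations) inverts a voltage measurement for SOC by least squares and compares squared bias with variance.

```python
# Simplified Monte Carlo sketch, not the paper's model: least squares SOC
# estimation from a linear OCV(SOC) curve with voltage bias and noise.
# OCV parameters and sensor specs below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(4)
V0, k = 3.0, 1.0                  # OCV(SOC) = V0 + k*SOC  [V]
soc_true = 0.6
v_bias, v_noise = 0.01, 0.01      # 10 mV sensor bias, 10 mV noise std
N, trials = 100, 2000             # voltage samples per estimate, MC trials

soc_hat = np.empty(trials)
for t in range(trials):
    v_meas = V0 + k * soc_true + v_bias + rng.normal(0.0, v_noise, N)
    soc_hat[t] = (v_meas.mean() - V0) / k        # least squares inversion

bias = soc_hat.mean() - soc_true
print(f"squared bias {bias**2:.1e}  vs  variance {soc_hat.var():.1e}")
# With these numbers the squared bias (~1e-4) dominates the variance (~1e-6),
# echoing the paper's qualitative conclusion.
```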


2017, Vol 928 (10), pp. 58-63
Author(s): V.I. Salnikov

The initial subject of study is consistent sums of measurement errors. The errors are assumed to follow the normal law, but with a limit on the marginal error, Δpred = 2m. It is commonly held that for each confidence interval there is a number of terms ni at which the value of the sum equals zero. The paradox is that the probability of such an event is zero; it is therefore impossible to determine the value of ni at which the sum becomes zero. The article proposes instead to consider the event that a sum of errors varies within the 2m limits with a confidence level of 0.954. Within this group, all the sums have a limiting error. These tolerances are proposed for use with discrepancies in geodesy instead of 2m√(ni). It is suggested to introduce the concept of "the law of the truncated normal distribution with Δpred = 2m".
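
A small numerical sketch (my illustration, not the article's derivation) compares the empirical 0.954-level tolerance of a sum of n errors drawn from a normal law truncated at ±2m with the conventional tolerance 2m√n.

```python
# Illustrative sketch, not the article's derivation: tolerance of a sum of n
# errors from a normal law truncated at ±2m vs. the conventional 2m*sqrt(n).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)
m = 1.0                                   # standard error of one measurement
a, b = -2.0, 2.0                          # truncation at ±2m (in units of m)
for n in (4, 9, 16, 25):
    sums = truncnorm.rvs(a, b, scale=m, size=(100_000, n),
                         random_state=rng).sum(axis=1)
    tol = np.quantile(np.abs(sums), 0.954)        # empirical 95.4% tolerance
    print(f"n={n:2d}  empirical tolerance {tol:5.2f}   2m*sqrt(n) = {2*m*np.sqrt(n):5.2f}")
```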


2021, pp. 1-22
Author(s): Daisuke Kurisu, Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.
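
For intuition only, here is a deliberately simplified characteristic-function deconvolution sketch with two repeated measurements; it takes a shortcut valid for symmetric errors rather than implementing LV's Kotlarski-based estimator or the Comte and Kappus regularization, and the spectral cutoff T is picked by hand.

```python
# Simplified sketch for intuition: deconvolution from repeated measurements
# W1 = X + U1, W2 = X + U2. Uses a symmetric-error shortcut, not LV's
# Kotlarski-based estimator; the spectral cutoff T is chosen by hand.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(0.0, 1.0, n)                  # latent error-free variable
w1 = x + rng.normal(0.0, 0.5, n)             # repeated noisy measurements
w2 = x + rng.normal(0.0, 0.5, n)

T = 3.0
t = np.linspace(0.0, T, 200)
dt = t[1] - t[0]

phi_w = np.mean(np.exp(1j * np.outer(t, w1)), axis=1)        # empirical cf of W1
phi_u_sq = np.mean(np.cos(np.outer(t, w1 - w2)), axis=1)     # |phi_U|^2 from the difference
phi_u = np.sqrt(np.clip(phi_u_sq, 1e-12, None))              # symmetric U: real, positive near 0
phi_x = phi_w / phi_u                                        # deconvolved cf of X

grid = np.linspace(-3.0, 3.0, 61)
dens = np.array([np.sum(np.real(np.exp(-1j * t * s) * phi_x)) * dt / np.pi
                 for s in grid])                             # inverse Fourier, |t| <= T
print(f"density estimate at 0: {dens[30]:.3f}  (true N(0,1) value {1/np.sqrt(2*np.pi):.3f})")
```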


2000, Vol 30 (2), pp. 306-310
Author(s): M S Williams, H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that only using tree diameter, rather than both diameter and height, is more reliable for predicting tree volumes. Based on data for different tree species of excurrent form, we conclude that measurement errors up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.
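
A rough Monte Carlo sketch of the trade-off (with invented allometric constants and error levels, not the paper's volume equations or its ±40% threshold) compares volume predictions from diameter alone against diameter plus an error-prone height.

```python
# Rough Monte Carlo sketch of the trade-off, with invented allometric
# constants and error levels; not the paper's volume equations or threshold.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
D = rng.uniform(20.0, 60.0, n)                         # diameter at breast height [cm]
H_trend = 1.3 + 25.0 * (1.0 - np.exp(-0.05 * D))       # mean height-diameter relation [m]
H = H_trend * rng.lognormal(0.0, 0.15, n)              # individual deviation from the trend
V = 4e-5 * D**2 * H * rng.lognormal(0.0, 0.05, n)      # combined-variable volume, multiplicative error

V_d_only = 4e-5 * D**2 * H_trend                       # prediction from diameter alone
rmse_d = np.sqrt(np.mean((V_d_only - V) ** 2))

for h_err in (0.1, 0.2, 0.4, 0.6):                     # relative height measurement error
    H_meas = H * (1.0 + rng.uniform(-h_err, h_err, n))
    V_dh = 4e-5 * D**2 * H_meas                        # prediction from diameter and measured height
    rmse_dh = np.sqrt(np.mean((V_dh - V) ** 2))
    print(f"height error ±{h_err:.0%}: RMSE(D,H) {rmse_dh:.3f}  vs  RMSE(D only) {rmse_d:.3f}")
```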

