Estimation of copolymerization reactivity ratios in the Berkson error regression model

Author(s):  
Анастасия Юрьевна Тимофеева

Purpose. The purpose of this paper is to study methods for estimating copolymerization reactivity ratios based on the differential composition equation. Methodology. Most estimation methods reduce the differential composition equation to a linear form. They are based on the least squares method and do not take into account the measurement error in the input variable, and therefore lead to statistically incorrect results. When the problem is analysed on the basis of the classical errors-in-variables model, additional information is required to determine the magnitude of the errors in measuring the concentrations of the monomers in the mixture and in the copolymer. Including the measurement error in the input variable in the model as a Berkson error is more consistent with the actual conditions of the experiments, and it allows the reactivity ratios and the variances of the measurement errors to be estimated simultaneously by the maximum likelihood method. Results. An algorithm has been developed for estimating reactivity ratios without additional information. An empirical study of the estimation methods has been carried out using the example of copolymerization of vinyl esters. Findings. It is shown that the method based on symmetric equations gives incorrect results. The estimates produced by the proposed algorithm are closest to those obtained by the nonlinear least squares method.
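The nonlinear least squares baseline referred to above can be sketched by fitting the Mayo-Lewis differential composition equation directly. The data below are synthetic (the "true" ratios 0.8 and 0.3 and the noise level are invented for illustration), not the vinyl-ester measurements from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def mayo_lewis(f1, r1, r2):
    """Instantaneous copolymer composition F1 from monomer feed fraction f1."""
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2)

rng = np.random.default_rng(0)
f1 = np.linspace(0.05, 0.95, 19)                               # feed compositions
F1 = mayo_lewis(f1, 0.8, 0.3) + rng.normal(0, 0.005, f1.size)  # noisy "measurements"

# Nonlinear least squares estimate of the reactivity ratios (r1, r2)
(r1_hat, r2_hat), _ = curve_fit(mayo_lewis, f1, F1, p0=(1.0, 1.0))
```

Because the fit works on the untransformed equation, it avoids the distortion of the error structure that linearizations introduce.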

2021
Vol 21 (3)
pp. 659-668
Author(s):
Caner Tanış
Kadir Karakaya

In this paper, we compare methods of estimation for a one-parameter lifetime distribution that is a special case of the inverse Gompertz distribution. We discuss five estimation methods: the maximum likelihood method, the least-squares method, the weighted least-squares method, the method of Anderson–Darling, and the method of Cramér–von Mises. The performances of these estimators are evaluated via Monte Carlo simulations in terms of bias and mean-squared error. Furthermore, two real-data applications are presented.
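The inverse Gompertz special case is not reproduced here; as a hedged illustration of two of the listed methods, maximum likelihood and Cramér–von Mises minimum distance, applied to a simpler one-parameter lifetime distribution (the exponential, with an invented scale of 2.0):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = np.sort(rng.exponential(scale=2.0, size=200))  # simulated lifetimes
n = x.size

mle = x.mean()  # ML estimate of the exponential scale is the sample mean

def cvm_distance(scale):
    """Cramér-von Mises distance between the model CDF and the empirical CDF."""
    F = 1.0 - np.exp(-x / scale)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((F - (2 * i - 1) / (2 * n)) ** 2)

# Minimum-distance estimate: the scale that minimizes the CvM statistic
cvm = minimize_scalar(cvm_distance, bounds=(0.1, 10.0), method="bounded").x
```

Both estimators recover the true scale here; simulation studies like the one in the abstract compare how quickly they do so as the sample size and parameter values vary.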


2013
Vol 278-280
pp. 1323-1326
Author(s):
Yan Hua Yu
Li Xia Song
Kun Lun Zhang

Fuzzy linear regression has been studied extensively since its inception, symbolized by the work of Tanaka et al. in 1982. As one of the main estimation methods, the fuzzy least squares approach is appealing because it corresponds, to some extent, to well-known statistical regression analysis. In this article, a restricted least squares method is proposed to fit fuzzy linear models with crisp inputs and symmetric fuzzy output. The paper puts forward a fuzzy linear regression model based on the structured element; the model has crisp input data and fuzzy output data. The regression coefficients and the fuzziness-degree function are determined using the least squares method, and the degree of agreement between the observed and predicted values is studied.
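The structured-element formulation itself is not reproduced here. As a rough sketch of the fuzzy least squares idea for crisp inputs and symmetric triangular fuzzy outputs: under a Diamond-style squared distance, the center term and the spread term separate, so the two component models can be fitted by ordinary least squares (all numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 30)                         # crisp inputs
c = 1.5 * x + 2.0 + rng.normal(0, 0.3, x.size)     # observed fuzzy-output centers
s = 0.4 * np.abs(x) + 0.5                          # observed symmetric spreads

# Under a squared distance that decomposes into a center term and a spread
# term, the center line and the spread line can be fitted separately by OLS.
b1, b0 = np.polyfit(x, c, 1)           # center model  c ≈ b0 + b1*x
g1, g0 = np.polyfit(np.abs(x), s, 1)   # spread model  s ≈ g0 + g1*|x|
```

A restricted method such as the one in the abstract would additionally constrain the fitted spreads to stay nonnegative; that constraint is omitted in this sketch.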


1999
Vol 56 (7)
pp. 1234-1240
Author(s):
W R Gould
L A Stefanski
K H Pollock

All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that in some cases were substantially smaller than the naive estimates, which ignore measurement error. In a simulation of the procedure, we compared SIMEX estimators with naive estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of the two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
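SIMEX's simulation-and-extrapolation idea can be sketched on a toy linear regression rather than the catch-effort models of the paper (all parameter values are invented; the true slope is 2.0 and the covariate is observed with known error variance):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(0.0, 1.0, n)                 # true covariate (unobserved)
y = 2.0 * x + rng.normal(0.0, 0.5, n)       # true slope is 2.0
sigma_u = 0.8
w = x + rng.normal(0.0, sigma_u, n)         # error-prone measurement of x

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

naive = slope(w, y)                         # attenuated toward zero

# Simulation step: add extra measurement error with variance lam * sigma_u**2
lams = [0.5, 1.0, 1.5, 2.0]
sims = [np.mean([slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
                 for _ in range(200)]) for lam in lams]

# Extrapolation step: fit a quadratic in lam and evaluate at lam = -1,
# the hypothetical "no measurement error" point
coef = np.polyfit([0.0] + lams, [naive] + sims, 2)
simex = np.polyval(coef, -1.0)
```

The extrapolant recovers most, though not all, of the attenuation bias, mirroring the paper's finding that SIMEX reduces bias at the cost of some extra variability.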


2000
Vol 122 (4)
pp. 482-487
Author(s):
M. Zuo
S. Chiovelli
Y. Nonaka

This paper comments on using the Larson-Miller parameter to fit the creep-rupture life distribution as a function of temperature and stress. The commonly used least-squares linear regression method assumes that the creep-rupture life follows the lognormal distribution. Most engineering literature does not discuss the validity of this assumption. In this paper, we outline the procedure for validating two critical assumptions when the least-squares method is used. The maximum likelihood method is suggested as an alternative and more powerful method for fitting creep-rupture life distributions. Examples are given to demonstrate the use of these two methods using Microsoft Excel and the LIFEREG procedure in SAS. [S0094-9930(00)00504-7]
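One reason maximum likelihood is the more powerful option for rupture-life fitting is that it handles run-outs (tests stopped before failure), which a naive least-squares summary of the logged lives cannot. A minimal sketch, with invented lognormal parameters and an invented test cut-off rather than any creep data from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
log_t = rng.normal(3.0, 0.4, 60)        # true log10 rupture lives (lognormal life)
censor = 3.3                            # hypothetical test cut-off: 10**3.3 hours
obs = np.minimum(log_t, censor)
failed = log_t <= censor                # False = run-out (censored observation)

naive_mu = obs.mean()                   # treats run-outs as failures: biased low

def negll(p):
    """Negative log-likelihood of the censored lognormal (normal in log10-life)."""
    mu, log_s = p
    s = np.exp(log_s)                   # log-parameterized to keep s positive
    ll = norm.logpdf(obs[failed], mu, s).sum()
    ll += norm.logsf(censor, mu, s) * (~failed).sum()  # survival term for run-outs
    return -ll

res = minimize(negll, x0=(naive_mu, np.log(obs.std())), method="Nelder-Mead")
mle_mu = res.x[0]
```

The MLE pulls the mean log-life back up toward its true value because the run-outs contribute through the survival function instead of being treated as exact failures.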


Author(s):  
G. Navratil
E. Heer
J. Hahn

Geodetic survey data are typically analysed under the assumption that measurement errors can be modelled as noise. The least squares method models noise with the normal distribution and is based on the assumption that it selects the measurement values with the highest probability (Ghilani, 2010, p. 179f). There are environmental situations in which no clear maximum for a measurement can be detected. This can happen, for example, if surveys take place in foggy conditions that cause diffusion of the light signals. This presents a problem for automated systems because the standard assumption of the least squares method does not hold. A measurement system trying to return a crisp value will produce an arbitrary value lying within the area of maximum value. However, repeating the measurement is unlikely to yield values that follow a normal distribution, as would be the case if the measurement errors could be modelled as noise. In this article we describe a laboratory experiment that reproduces conditions similar to a foggy situation and present measurement data gathered with this setup. Furthermore, we propose methods based on fuzzy set theory to evaluate the data from our measurements.


2014
Vol 2014
pp. 1-8
Author(s):
N. Rahemi
M. R. Mosavi
A. A. Abedi
S. Mirzakuchaki

GPS is a satellite-based navigation system able to determine the exact position of objects on the Earth, in the sky, or in space. As the velocity of a moving object increases, positioning accuracy decreases; at the same time, calculating the exact position at high velocities, such as that of an airplane, or very high velocities, such as that of a satellite, is very important. In this paper, seven methods for solving the navigation equations at very high velocities are studied, based on the least squares method and its combination with variance estimation methods that weight observations according to their quality. Simulations on different data sets with velocities from 100 m/s to 7000 m/s show that the proposed method can improve positioning accuracy by more than 50%.
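The weighting idea, independent of the GPS specifics, can be sketched on a generic linearized observation model. Everything below is synthetic: a made-up design matrix stands in for the linearized navigation equations, and the per-observation sigmas stand in for the estimated observation qualities:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 3))                 # linearized observation model y = A x
x_true = np.array([1.0, -2.0, 0.5])
sigma = rng.uniform(0.1, 2.0, size=40)       # per-observation noise levels (quality)
y = A @ x_true + rng.normal(0.0, sigma)

# Ordinary least squares ignores observation quality
x_ols = np.linalg.lstsq(A, y, rcond=None)[0]

# Weighted least squares: W = diag(1/sigma^2) down-weights poor observations
W = np.diag(1.0 / sigma**2)
x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

When the noise levels genuinely differ across observations, as in the abstract's variance-estimation schemes, the weighted solution is the more precise of the two.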


Mathematics
2020
Vol 8 (1)
pp. 62
Author(s):  
Autcha Araveeporn

This paper compares two frequentist methods, the least-squares method and the maximum likelihood method, for estimating an unknown parameter of the Random Coefficient Autoregressive (RCA) model. Both draw conclusions from the observed data through the likelihood function. The least-squares estimator minimizes the sum of squared residuals, which is found by setting the gradient to zero. The maximum likelihood estimator maximizes the likelihood function of the observed data under the statistical model and is obtained by differentiating the likelihood function with respect to the parameter. The efficiency of the two methods is assessed by the average mean square error for simulated data and by the mean square error for real data. The simulated data are generated only from first-order RCA models. The results show that the least-squares method performs better than maximum likelihood: its average mean square error attains the minimum value in all cases. Finally, the methods are applied to real data. The monthly averages of the Stock Exchange of Thailand (SET) index and the daily volume of the Baht/Dollar exchange rate are used for estimation and forecasting based on the RCA model. The results again show that the least-squares method outperforms the maximum likelihood method.
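As a hedged sketch of the least-squares side only: for a first-order RCA model y_t = (beta + b_t) y_{t-1} + e_t, the conditional least-squares estimate of the mean coefficient beta has a closed form. The parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
beta, sb, se, n = 0.5, 0.1, 1.0, 5000    # mean coefficient, coef. noise, obs. noise
y = np.zeros(n)
for t in range(1, n):
    # random coefficient (beta + b_t) varies at every step
    y[t] = (beta + rng.normal(0, sb)) * y[t - 1] + rng.normal(0, se)

# Conditional least squares for the mean coefficient beta
beta_ls = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
```

The maximum likelihood counterpart has no closed form, which is part of why the two estimators can behave differently in the simulations the abstract describes.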


2019
Vol 29 (3)
pp. 448-463
Author(s):
Manuel E. Rademaker
Florian Schuberth
Theo K. Dijkstra

Purpose. The purpose of this paper is to enhance consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors.

Design/methodology/approach. Correction for attenuation as originally applied by PLSc is modified to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted.

Findings. In the presence of population measurement error correlation, estimated parameter bias is generally small for original and modified PLSc, with the latter outperforming the former for large sample sizes. In terms of the root mean squared error, the results are virtually identical for both original and modified PLSc. Only for relatively large sample sizes, high population measurement error correlation, and low population composite reliability are the increased standard errors associated with the modification outweighed by a smaller bias. These findings are regarded as initial evidence that original PLSc is comparatively robust with respect to misspecification of the structure of measurement error correlations within blocks of indicators.

Originality/value. Introducing and investigating a new approach to address measurement error correlation within blocks of indicators in PLSc, this paper contributes to the ongoing development and assessment of recent advancements in partial least squares path modeling.


2017
Vol 24 (2)
pp. 3-12
Author(s):
Marek Hubert Zienkiewicz
Krzysztof Czaplewski

The main aim of this paper is to assess the possibility of using non-conventional geodetic estimation methods in maritime navigation. The research subject concerns the robust determination of a vessel's position using a method of parameter estimation in the split functional model (Msplit estimation). The studies performed will help to establish whether, and in which situations, the application of Msplit estimation as the method for determining a vessel's position is beneficial from the perspective of navigation safety. The results obtained were compared with those of traditional estimation methods, i.e. the least squares method and robust M-estimation.

