Comparison of several non-linear-regression methods for fitting the Michaelis-Menten equation

1985 ◽  
Vol 231 (1) ◽  
pp. 171-177 ◽  
Author(s):  
L Matyska ◽  
J Kovář

The known jackknife methods (i.e. standard jackknife, weighted jackknife, linear jackknife and weighted linear jackknife) for determining the parameters (as well as their confidence regions) were tested and compared with the simple Marquardt technique (with confidence intervals calculated from the variance-covariance matrix). Simulated data corresponding to the Michaelis-Menten equation, with a defined structure and magnitude of error in the dependent variable, were used for fitting. There were no essential differences between the tested methods in either point or interval parameter estimation. Marquardt's procedure yielded slightly better results than the jackknives for five scattered data points (the use of this method is advisable for routine analyses). The classical jackknife was slightly superior to the other methods for 20 data points (this method can be recommended for very precise calculations if large numbers of data are available). Weighting does not seem to be necessary for this type of equation, because the parameter estimates obtained by all methods with constant weights were comparable to those calculated with weights corresponding exactly to the real error structure, whereas relative weighting led to rather worse results.
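For readers who want to reproduce the comparison on their own data, the sketch below is a minimal Python illustration (not the authors' code) of the two approaches discussed: a Levenberg-Marquardt fit with confidence intervals taken from the variance-covariance matrix, and a standard leave-one-out jackknife. The substrate concentrations and rates are invented placeholder values.

```python
# Minimal sketch: Marquardt fit vs. standard jackknife for the Michaelis-Menten equation.
import numpy as np
from scipy.optimize import curve_fit
from scipy import stats

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # substrate concentrations (illustrative)
v = np.array([0.42, 0.70, 1.05, 1.45, 1.68])    # observed rates (illustrative)

# Marquardt (Levenberg-Marquardt) fit; confidence intervals from the covariance matrix
p, cov = curve_fit(michaelis_menten, s, v, p0=(2.0, 2.0), method="lm")
se = np.sqrt(np.diag(cov))
t = stats.t.ppf(0.975, df=len(s) - 2)
print("Vmax, Km:", p, "95% CI half-widths:", t * se)

# Standard jackknife: refit with each point left out, then form pseudo-values
n = len(s)
pseudo = []
for i in range(n):
    mask = np.arange(n) != i
    p_i, _ = curve_fit(michaelis_menten, s[mask], v[mask], p0=p, method="lm")
    pseudo.append(n * p - (n - 1) * p_i)
pseudo = np.array(pseudo)
jack_est = pseudo.mean(axis=0)
jack_se = pseudo.std(axis=0, ddof=1) / np.sqrt(n)
print("Jackknife estimate:", jack_est, "SE:", jack_se)
```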

2022 ◽  
Author(s):  
THEODORE MODIS

Look-up tables and graphs are provided for determining the uncertainties, during logistic fits, on the three parameters M, α and t0 describing an S-curve of the form S(t) = M/(1 + exp(-α(t - t0))). The uncertainties and the associated confidence levels are given as a function of the uncertainty on the data points and the length of the historical period. Correlations between these variables are also examined; they make "what-if" games possible even before doing the fit. The study is based on some 35,000 S-curve fits on simulated data covering a variety of conditions and carried out via a χ2 minimization technique. A rule-of-thumb general result is that, given at least half of the S-curve range and a precision of better than 10% on each historical point, the uncertainty on M will be less than 20% with a 90% confidence level.
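A minimal sketch of the kind of χ2 fit described, assuming scipy's weighted least squares as the minimizer; the historical period, the 10% point uncertainty and the starting values are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: chi^2 fit of S(t) = M / (1 + exp(-alpha (t - t0))) to noisy data.
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, M, alpha, t0):
    return M / (1.0 + np.exp(-alpha * (t - t0)))

t = np.arange(2000, 2013)                              # historical period (illustrative)
true = s_curve(t, M=100.0, alpha=0.5, t0=2012.0)
rng = np.random.default_rng(0)
y = true * (1 + 0.10 * rng.standard_normal(t.size))    # ~10% noise on each point
sigma = 0.10 * y                                       # per-point uncertainties

# curve_fit minimizes chi^2 = sum(((y - model) / sigma)^2) when sigma is supplied
p, cov = curve_fit(s_curve, t, y, p0=(2 * max(y), 0.3, t.mean()),
                   sigma=sigma, absolute_sigma=True)
M_fit, alpha_fit, t0_fit = p
M_err = np.sqrt(cov[0, 0])
print(f"M = {M_fit:.1f} ± {M_err:.1f}, alpha = {alpha_fit:.2f}, t0 = {t0_fit:.1f}")
```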


Author(s):  
Salvatore D. Tomarchio ◽  
Paul D. McNicholas ◽  
Antonio Punzo

Abstract Finite mixtures of regressions with fixed covariates are a commonly used model-based clustering methodology to deal with regression data. However, they assume assignment independence, i.e., the allocation of data points to the clusters is made independently of the distribution of the covariates. To take into account the latter aspect, finite mixtures of regressions with random covariates, also known as cluster-weighted models (CWMs), have been proposed in the univariate and multivariate literature. In this paper, the CWM is extended to matrix data, e.g., those data where a set of variables are simultaneously observed at different time points or locations. Specifically, the cluster-specific marginal distribution of the covariates and the cluster-specific conditional distribution of the responses given the covariates are assumed to be matrix normal. Maximum likelihood parameter estimates are derived using an expectation-conditional maximization algorithm. Parameter recovery, classification assessment, and the capability of the Bayesian information criterion to detect the underlying groups are investigated using simulated data. Finally, two real data applications concerning educational indicators and the Italian non-life insurance market are presented.
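As a point of reference for the distributional assumption, the snippet below is a hedged sketch of the matrix normal log-density (X ~ MN(M, U, V), with row covariance U and column covariance V) that the cluster-specific distributions are built from; it is an illustration, not the authors' ECM implementation.

```python
# Illustrative matrix normal log-density for a p x q observation X.
import numpy as np

def matrix_normal_logpdf(X, M, U, V):
    p, q = X.shape
    Uinv = np.linalg.inv(U)                 # row covariance (p x p)
    Vinv = np.linalg.inv(V)                 # column covariance (q x q)
    D = X - M
    quad = np.trace(Vinv @ D.T @ Uinv @ D)
    logdet = q * np.linalg.slogdet(U)[1] + p * np.linalg.slogdet(V)[1]
    return -0.5 * (p * q * np.log(2 * np.pi) + logdet + quad)

# In an ECM fit, each E-step would evaluate densities of this form (for the covariates
# and for the responses given the covariates) to compute cluster responsibilities.
```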


1997 ◽  
Vol 78 (02) ◽  
pp. 855-858 ◽  
Author(s):  
Armando Tripodi ◽  
Veena Chantarangkul ◽  
Marigrazia Clerici ◽  
Barbara Negri ◽  
Pier Mannuccio Mannucci

Summary A key issue for the reliable use of new devices for the laboratory control of oral anticoagulant therapy with the INR is their conformity to the calibration model. In the past, their adequacy has mostly been assessed empirically, without reference to the calibration model and the use of International Reference Preparations (IRP) for thromboplastin. In this study we reviewed the requirements to be fulfilled and applied them to the calibration of a new near-patient testing device (TAS, Cardiovascular Diagnostics) which uses thromboplastin-containing test cards for determination of the INR. On each of 10 working days, citrated whole blood and plasma samples were obtained from 2 healthy subjects and 6 patients on oral anticoagulants. PT testing on whole blood and plasma was done with the TAS, and parallel testing for plasma by the manual technique with the IRP CRM 149S. Conformity to the calibration model was judged satisfactory if the following requirements were met: (i) there was a linear relationship between paired log-PTs (TAS vs CRM 149S); (ii) the regression line drawn through the patients' data points passed through those of the normals; (iii) the precision of the calibration, expressed as the CV of the slope, was <3%. A good linear relationship was observed for the calibration plots for plasma and whole blood (r = 0.98). Regression lines drawn through the patients' data points passed through those of the normals. The CVs of the slope were in both cases 2.2%, and the ISIs were 0.965 and 1.000 for whole blood and plasma, respectively. In conclusion, our study shows that near-patient testing devices can be considered reliable tools to measure the INR in patients on oral anticoagulants, and it provides guidelines for their evaluation.
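The checks listed above can be expressed compactly; the sketch below is an illustrative Python version (not the study's software) of the calibration requirements: an ordinary regression on paired log-PTs, the CV of the slope, and an ISI derived from the slope. The PT values and the reference ISI of 1.0 are invented for illustration, and formal ISI calibration uses orthogonal rather than ordinary regression.

```python
# Hedged sketch of the calibration checks: log-PT regression, slope CV, ISI from slope.
import numpy as np
from scipy import stats

pt_ref = np.array([12.0, 13.5, 18.0, 24.0, 30.0, 36.0, 42.0, 48.0])   # reference PTs, s (illustrative)
pt_dev = np.array([12.5, 14.0, 19.0, 25.5, 31.0, 38.0, 44.5, 50.0])   # device PTs, s (illustrative)

x, y = np.log(pt_dev), np.log(pt_ref)
res = stats.linregress(x, y)
cv_slope = 100 * res.stderr / res.slope        # requirement (iii): CV of slope < 3%

# ISI of the device = ISI of the reference preparation * slope of the
# log(reference PT) vs log(device PT) line (orthogonal regression in the formal procedure).
isi_ref = 1.0                                  # illustrative; use the certified ISI of the IRP
isi_dev = isi_ref * res.slope
print(f"r = {res.rvalue:.3f}, slope CV = {cv_slope:.1f}%, ISI ≈ {isi_dev:.3f}")
```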


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 384
Author(s):  
Rocío Hernández-Sanjaime ◽  
Martín González ◽  
Antonio Peñalver ◽  
Jose J. López-Espín

The presence of unaccounted heterogeneity in simultaneous equation models (SEMs) is frequently problematic in many real-life applications. Under the usual assumption of homogeneity, the model can be seriously misspecified, and it can potentially induce an important bias in the parameter estimates. This paper focuses on SEMs in which data are heterogeneous and tend to form clustering structures in the endogenous-variable dataset. Because the identification of different clusters is not straightforward, a two-step strategy that first forms groups among the endogenous observations and then uses the standard simultaneous equation scheme is provided. Methodologically, the proposed approach is based on a variational Bayes learning algorithm and does not need to be executed for varying numbers of groups in order to identify the one that adequately fits the data. We describe the statistical theory, evaluate the performance of the suggested algorithm by using simulated data, and apply the two-step method to a macroeconomic problem.
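The two-step strategy can be sketched with off-the-shelf tools; the example below is an illustration under stated assumptions, not the paper's algorithm: a variational Bayes mixture (scikit-learn's BayesianGaussianMixture, which empties superfluous components rather than requiring runs over several group counts) clusters the endogenous block, and a plain 2SLS estimator is then applied within each recovered group. The simulated data, variable names and the single structural equation are assumptions made for the sketch.

```python
# Illustrative two-step sketch: VB clustering of endogenous observations, then per-group 2SLS.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def two_sls(y, y_endog, x_exog, z_instr):
    """Two-stage least squares for one structural equation."""
    W = np.column_stack([y_endog, x_exog])           # included regressors
    Z = np.column_stack([z_instr, x_exog])           # instruments + exogenous variables
    P = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T            # projection onto the instrument space
    return np.linalg.solve(W.T @ P @ W, W.T @ P @ y)

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))                          # exogenous variables / instruments
shift = np.repeat([0.0, 4.0], n // 2)                # two latent groups in the endogenous block
y2 = shift + X[:, 1] + rng.normal(scale=0.5, size=n)
y1 = shift + 0.8 * y2 + X[:, 0] + rng.normal(scale=0.5, size=n)
Y = np.column_stack([y1, y2])                        # endogenous observations

# Step 1: VB mixture with more components than needed; superfluous ones are emptied,
# so the number of groups does not have to be fixed in advance.
vb = BayesianGaussianMixture(n_components=8,
                             weight_concentration_prior_type="dirichlet_process",
                             max_iter=500, random_state=0).fit(Y)
labels = vb.predict(Y)

# Step 2: standard simultaneous-equation estimation within each recovered group
for g in np.unique(labels):
    idx = labels == g
    if idx.sum() < 10:                               # skip spurious tiny components
        continue
    beta = two_sls(Y[idx, 0], Y[idx, 1], X[idx, 0], X[idx, 1])
    print(f"group {g}: {idx.sum()} obs, structural coefficients {beta.round(2)}")
```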


Author(s):  
Zachary R. McCaw ◽  
Hanna Julienne ◽  
Hugues Aschard

Abstract Although missing data are prevalent in applications, existing implementations of Gaussian mixture models (GMMs) require complete data. Standard practice is to perform complete case analysis or imputation prior to model fitting. Both approaches have serious drawbacks, potentially resulting in biased and unstable parameter estimates. Here we present MGMM, an R package for fitting GMMs in the presence of missing data. Using three case studies on real and simulated data sets, we demonstrate that, when the underlying distribution is close to a GMM, MGMM is more effective at recovering the true cluster assignments than state-of-the-art imputation followed by a standard GMM. Moreover, MGMM provides an accurate assessment of cluster assignment uncertainty even when the generative distribution is not a GMM. This assessment may be used to identify unassignable observations. MGMM is available as an R package on CRAN: https://CRAN.R-project.org/package=MGMM.
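MGMM itself is an R package; as a hedged Python illustration of the "standard practice" baseline the abstract compares against, the sketch below imputes missing values, fits an ordinary GMM, and flags observations whose cluster assignment is uncertain via the posterior probabilities. The data, missingness rate and the 0.8 threshold are illustrative.

```python
# Illustrative baseline: mean imputation followed by a standard GMM.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (150, 3)), rng.normal(2, 1, (150, 3))])
X[rng.random(X.shape) < 0.15] = np.nan               # ~15% of values missing at random

X_imp = SimpleImputer(strategy="mean").fit_transform(X)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_imp)

post = gmm.predict_proba(X_imp)
uncertain = post.max(axis=1) < 0.8                   # candidate "unassignable" observations
print(f"{uncertain.sum()} of {len(X)} observations have max posterior < 0.8")
```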


Author(s):  
A. S. Ustinov

The paper describes the existing problems in determining all scheduled estimates of missile warhead performance during flight tests and puts forward one possible method of solving them. It also presents the results of an investigation into the properties of the factor of dynamic relation between the velocity-vector modulus and the longitudinal acceleration of missile warheads on the atmospheric passive flight leg; this dynamic relation factor is constant across different flight test conditions. The notion of a reference dynamic relation factor is introduced for both parameters under study in order to provide reliable determination of parameter estimates and, hence, a complete analysis of experimental launch results.


Author(s):  
W. H. ElMaraghy ◽  
Z. Wu ◽  
H. A. ElMaraghy

Abstract This paper focuses on the development of a procedure and algorithms for the systematic comparison of geometric variations of measured features with their specified geometric tolerances. To automate the inspection of mechanical parts, it is necessary to analyze the measurement data captured by coordinate measuring machines (CMMs) in order to detect out-of-tolerance conditions. A procedure for determining the geometric tolerances from the measured three-dimensional coordinates on the surface of a cylindrical feature is presented. This procedure follows the definitions of the geometric tolerances used in the current Standards and is capable of determining the value of each geometric tolerance from the composite 3-D data. The developed algorithms adopt the minimum tolerance zone criterion. Nonlinear numerical optimization techniques are used to fit the data to the minimum tolerance zone. Two test cases that demonstrate the successful determination of geometric tolerances from simulated data are given in the paper.
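A minimal sketch of the minimum-zone idea, assuming a cylindricity evaluation with scipy's Nelder-Mead optimizer; the axis parameterization, the simulated CMM points and the nominal radius are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: minimum-zone cylindricity from simulated CMM points.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
z = rng.uniform(0, 50, 200)
r = 10.0 + 0.02 * rng.standard_normal(200)           # nominal R = 10 mm with small form error
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

def zone_width(params, pts):
    x0, y0, a, b = params                             # axis through (x0, y0, 0) with direction (a, b, 1)
    p0 = np.array([x0, y0, 0.0])
    d = np.array([a, b, 1.0])
    d = d / np.linalg.norm(d)
    v = pts - p0
    radial = np.linalg.norm(np.cross(v, d), axis=1)   # distance of each point to the axis
    return radial.max() - radial.min()                # minimum-zone objective

res = minimize(zone_width, x0=[0.0, 0.0, 0.0, 0.0], args=(pts,), method="Nelder-Mead")
print(f"cylindricity (minimum zone) ≈ {res.fun:.4f} mm")
```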


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 418 ◽  
Author(s):  
Alexander Erler ◽  
Daniel Riebe ◽  
Toralf Beitz ◽  
Hans-Gerd Löhmannsröben ◽  
Robin Gebbers

Precision agriculture (PA) strongly relies on spatially differentiated sensor information. Handheld instruments based on laser-induced breakdown spectroscopy (LIBS) are a promising sensor technique for the in-field determination of various soil parameters. In this work, the potential of handheld LIBS for the determination of the total mass fractions of the major nutrients Ca, K, Mg, N, P and the trace nutrients Mn, Fe was evaluated. Additionally, other soil parameters, such as humus content, soil pH value and plant-available P content, were determined. Since the quantification of nutrients by LIBS depends strongly on the soil matrix, various multivariate regression methods were used for calibration and prediction. These include partial least squares regression (PLSR), least absolute shrinkage and selection operator regression (Lasso), and Gaussian process regression (GPR). The best prediction results were obtained for Ca, K, Mg and Fe. The coefficients of determination obtained for the other nutrients were smaller. This is due to the much lower concentrations in the case of Mn, while the small number of emission lines and their very weak intensities account for the deviations of N and P. Soil parameters that are not directly related to a single element, such as pH, could also be predicted. Lasso and GPR yielded slightly better results than PLSR. Additionally, several methods of data pretreatment were investigated.
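A hedged sketch of such a calibration workflow, using scikit-learn stand-ins for PLSR, Lasso and GPR on simulated spectra; the spectral dimensions, hyperparameters and the cross-validated R2 comparison are illustrative assumptions rather than the authors' settings.

```python
# Illustrative comparison of PLSR, Lasso and GPR calibrations on simulated LIBS spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LassoCV
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))                       # 120 soil samples x 200 spectral channels
true_coef = np.zeros(200)
true_coef[[20, 80, 150]] = [2.0, -1.0, 1.5]           # a few informative emission lines
y = X @ true_coef + 0.5 * rng.standard_normal(120)    # e.g. a nutrient mass fraction (arbitrary units)

models = {
    "PLSR": PLSRegression(n_components=10),
    "Lasso": LassoCV(cv=5),
    "GPR": GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {r2.mean():.2f}")
```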


1998 ◽  
Vol 37 (3) ◽  
pp. 41-49 ◽  
Author(s):  
Gerard Blom ◽  
R. Hans Aalderink

Three resuspension and sedimentation models (Blom, Lick, and Partheniades and Krone) are calibrated and evaluated on data from flume experiments with sediments from Lake Ketel and on in situ suspended-solids measurements. We applied a formal parameter estimation technique in combination with a statistical evaluation of the model fit and parameter estimates. All three models produce a reasonable reconstruction of the data from the flume experiments and the in situ observations. The differences in model fit between the three models are small, except for the in situ observations: here the sum of squared residuals for Partheniades and Krone's model is about twice that for Blom's and Lick's models. The correlation between parameters in resuspension/sedimentation models can be very high, leading to an uncertainty in parameter estimates of 25-50. The parameter estimates based on the flume data are up to orders of magnitude higher than those estimated from field observations.
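A minimal sketch of a formal parameter estimation of this kind, assuming a Partheniades-type erosion rate E = M(τ/τcr - 1) fitted by least squares, with the parameter correlation read off the estimated covariance matrix; the values and error model are invented for illustration.

```python
# Hedged sketch: fit a Partheniades-type erosion rate and inspect parameter correlation.
import numpy as np
from scipy.optimize import curve_fit

def erosion_rate(tau, M, tau_cr):
    # erosion only occurs above the critical shear stress
    return M * np.clip(tau / tau_cr - 1.0, 0.0, None)

rng = np.random.default_rng(0)
tau = np.linspace(0.15, 0.6, 12)                       # bed shear stress, N/m^2 (illustrative)
E_obs = erosion_rate(tau, M=5.0, tau_cr=0.12) + 0.5 * rng.standard_normal(12)

p, cov = curve_fit(erosion_rate, tau, E_obs, p0=(1.0, 0.1))
se = np.sqrt(np.diag(cov))
corr = cov[0, 1] / (se[0] * se[1])                     # correlation between M and tau_cr
print(f"M = {p[0]:.2f} ± {se[0]:.2f}, tau_cr = {p[1]:.3f} ± {se[1]:.3f}, corr = {corr:.2f}")
```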

