Background Correction in Raman Spectroscopic Determination of Dimethylsulfone, Sulfate, and Bisulfate

1985 ◽  
Vol 39 (3) ◽  
pp. 463-470 ◽  
Author(s):  
Yong-Chien Ling ◽  
Thomas J. Vickers ◽  
Charles K. Mann

A study has been made to compare the effectiveness of thirteen methods of spectroscopic background correction in quantitative measurements. These include digital filters, least-squares fitting, and cross-correlation, as well as peak area and height measurements. Simulated data sets with varying S/N and degrees of background curvature were used. The results were compared with those of corresponding treatments of Raman spectra of dimethyl sulfone, sulfate, and bisulfate. The range of variation of the simulated sets was greater than was possible with the experimental data, but where conditions were comparable, the agreement between them was good. This supports the conclusion that the simulations were valid. Best results were obtained by a least-squares fit using simple polynomials to generate the background correction. Under the conditions employed, limits of detection were about 80 ppm for dimethyl sulfone and sulfate and 420 ppm for bisulfate.
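
As a rough illustration of the best-performing approach, the sketch below fits a simple polynomial background by least squares to a simulated band and integrates the corrected peak area. The band shape, polynomial order, and noise level are illustrative assumptions, not the authors' exact conditions.

```python
# Minimal sketch of least-squares polynomial background correction
# applied to a simulated Raman band (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
shift = np.linspace(900, 1100, 400)            # Raman shift axis, cm^-1

# Simulated spectrum: curved background + Gaussian analyte band + noise
background = 1e-4 * (shift - 900) ** 2 + 0.05 * (shift - 900) + 20.0
band = 50.0 * np.exp(-0.5 * ((shift - 1000.0) / 5.0) ** 2)
spectrum = background + band + rng.normal(0.0, 1.0, shift.size)

# Fit a simple polynomial to background-only regions flanking the band
bg_mask = (shift < 975) | (shift > 1025)
coeffs = np.polyfit(shift[bg_mask], spectrum[bg_mask], deg=2)
corrected = spectrum - np.polyval(coeffs, shift)

# Peak area of the corrected band serves as the quantitative measure
peak = (shift >= 975) & (shift <= 1025)
area = np.trapz(corrected[peak], shift[peak])
print(f"background-corrected peak area: {area:.1f}")
```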

Geophysics ◽  
1982 ◽  
Vol 47 (10) ◽  
pp. 1460-1460 ◽  
Author(s):  
B. A. Sissons

Although the Tokaanu experiment does contradict the proposal that the gravitational constant G increases with scale, the result is not significant. The standard error in the least-squares adjustment is at least 1 percent, which exceeds the predicted variation in G, and the uncertainty in mean density is nearer 5 percent. Gravity data with sufficient precision to test for a scale effect in G are obtainable; the main problem appears to be the uncertainty in density determinations. Stacey et al. (1981) made a least-squares determination of G using gravity and density measurements from a mine. However, the pattern of residuals obtained indicated the presence of anomalous masses not adequately accounted for by their density averaging. The method I have used, which models the spatial variation in density, offers the possibility of obtaining a least-squares fit for G with a satisfactory residual distribution. However, the problem remains of the effect on bulk density of joints and voids not sampled in hand specimens.
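
To make the least-squares step concrete, the sketch below fits a single parameter G to observed gravity effects against modelled attractions per unit G and inspects the residuals. The forward-model values, noise level, and station count are purely illustrative and are not taken from the Tokaanu data or from Stacey et al. (1981).

```python
# Minimal sketch of a one-parameter least-squares estimate of G,
# assuming a forward model that predicts the attraction per unit G
# from a density model (all numbers are synthetic).
import numpy as np

rng = np.random.default_rng(1)
G_TRUE = 6.674e-11                       # m^3 kg^-1 s^-2, used only to make synthetic data

# a[i]: modelled attraction per unit G at station i (from the density model)
a = rng.uniform(1e8, 5e8, size=12)       # arbitrary illustrative magnitudes
g_obs = G_TRUE * a + rng.normal(0.0, 2e-4, size=a.size)   # "observed" gravity effects, m s^-2

# One-parameter least squares: minimise sum (g_obs - G * a)^2
G_hat = np.dot(a, g_obs) / np.dot(a, a)
residuals = g_obs - G_hat * a
se = np.sqrt(np.sum(residuals ** 2) / (a.size - 1) / np.dot(a, a))

print(f"G estimate: {G_hat:.4e} +/- {se:.1e}")
print("residuals:", np.round(residuals, 5))   # a structured pattern here would flag unmodelled masses
```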


2019 ◽  
Vol 1 (1) ◽  
Author(s):  
Mariël F. van Stee ◽  
Shaji Krishnan ◽  
Albert K. Groen ◽  
Albert A. de Graaf

Background: Triple tracer meal experiments used to investigate organ glucose-insulin dynamics, such as endogenous glucose production (EGP) of the liver, are labor-intensive and expensive. A procedure was developed to obtain individual liver-related parameters that describe EGP dynamics without the need for tracers. Results: The development used an existing formula describing the EGP dynamics, comprising 4 parameters defined from the glucose, insulin, and C-peptide dynamics arising from triple meal studies. The method employs a set of partial differential equations to estimate the parameters for EGP dynamics. Tracer-derived and simulated data sets were used to develop and test the procedure. The predicted EGP dynamics showed an overall mean R2 of 0.91. Conclusions: In summary, a method was developed for predicting the hepatic EGP dynamics for healthy, pre-diabetic, and type 2 diabetic individuals without applying tracer experiments.
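
A hedged sketch of the parameter-estimation step is given below: a generic EGP expression (basal production suppressed by glucose and a delayed insulin signal) is fitted by least squares to a reference EGP trace. The authors' four-parameter formula, which also involves C-peptide dynamics, differs from this stand-in, and all time courses here are synthetic.

```python
# Minimal sketch of fitting EGP-related parameters to glucose/insulin
# dynamics. The EGP form below is a generic stand-in, not the paper's
# formula; the "reference" EGP trace plays the role of tracer-derived data.
import numpy as np
from scipy.optimize import curve_fit

# Synthetic post-meal time courses (illustrative only)
t = np.linspace(0.0, 300.0, 61)                               # min
glucose = 5.0 + 3.0 * np.exp(-((t - 60.0) / 50.0) ** 2)        # mmol/L
insulin = 60.0 + 250.0 * np.exp(-((t - 45.0) / 40.0) ** 2)     # pmol/L
Gb, Ib = glucose[0], insulin[0]

def delayed_insulin(t, k):
    """Remote insulin signal X(t): dX/dt = -k*(X - I(t)), solved by Euler steps."""
    X = np.empty_like(t)
    X[0] = Ib
    for i in range(1, t.size):
        dt = t[i] - t[i - 1]
        X[i] = X[i - 1] + dt * (-k * (X[i - 1] - insulin[i - 1]))
    return X

def egp_model(t, p1, p2, p3, k):
    """Generic EGP form: basal production suppressed by glucose and delayed insulin."""
    return p1 - p2 * (glucose - Gb) - p3 * (delayed_insulin(t, k) - Ib)

# Reference EGP trace (would come from a tracer study during validation)
egp_ref = egp_model(t, 2.0, 0.15, 0.004, 0.05) + np.random.default_rng(2).normal(0, 0.05, t.size)

popt, _ = curve_fit(egp_model, t, egp_ref, p0=[2.5, 0.1, 0.002, 0.02])
pred = egp_model(t, *popt)
r2 = 1 - np.sum((egp_ref - pred) ** 2) / np.sum((egp_ref - egp_ref.mean()) ** 2)
print("fitted parameters:", np.round(popt, 4))
print(f"R2 = {r2:.3f}")
```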


Author(s):  
M Perzyk ◽  
R Biernacki ◽  
J Kozlowski

Determination of the most significant manufacturing process parameters from collected past data can be very helpful in solving important industrial problems, such as detecting the root causes of deteriorating product quality, selecting the most efficient parameters to control the process, and predicting breakdowns of machines and equipment. A methodology for determining the relative significances of process variables, and the possible interactions between them, based on interrogations of generalized regression models, is proposed and tested. The performance of several types of data-mining tools, such as artificial neural networks, support vector machines, regression trees, classification trees, and a naïve Bayesian classifier, is compared. Some simple non-parametric statistical methods, based on analysis of variance (ANOVA) and contingency tables, are also evaluated for comparison purposes. The tests were performed on simulated data sets with assumed hidden relationships, as well as on real data collected in the foundry industry. It was found that the significance and interaction factors obtained from regression models, and in particular from neural networks, perform satisfactorily, while the other methods appeared to be less accurate and/or less reliable.
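
The sketch below illustrates the general idea of interrogating a fitted regression model for variable significance: a neural network is trained on simulated data with a hidden relationship, and each input is permuted to measure its contribution. The model settings and the permutation-based interrogation step are assumptions for illustration, not the authors' exact procedure.

```python
# Minimal sketch of ranking process-variable significance by interrogating
# a fitted regression model on simulated data (illustrative setup only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 4))               # four hypothetical process parameters
# Hidden relationship: x0 and x1 matter (with an interaction), x2 weakly, x3 not at all
y = 3.0 * X[:, 0] + 2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)

# Interrogate the model: increase in error when each input is permuted
baseline = np.mean((model.predict(X) - y) ** 2)
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    increase = np.mean((model.predict(Xp) - y) ** 2) - baseline
    print(f"x{j}: MSE increase when permuted = {increase:.3f}")
```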


1996 ◽  
Vol 26 (4) ◽  
pp. 590-600 ◽  
Author(s):  
Katherine L. Bolster ◽  
Mary E. Martin ◽  
John D. Aber

Further evaluation of near infrared reflectance spectroscopy as a method for the determination of nitrogen, lignin, and cellulose concentrations in dry, ground, temperate forest woody foliage is presented. A comparison is made between two regression methods, stepwise multiple linear regression and partial least squares regression. The partial least squares method showed consistently lower standard error of calibration and higher R2 values with first and second difference equations. The first difference partial least squares regression equation resulted in standard errors of calibration of 0.106%, with an R2 of 0.97 for nitrogen, 1.613% with an R2 of 0.88 for lignin, and 2.103% with an R2 of 0.89 for cellulose. The four most highly correlated wavelengths in the near infrared region, and the chemical bonds represented, are shown for each constituent and both regression methods. Generalizability of both methods for prediction of protein, lignin, and cellulose concentrations on independent data sets is discussed. Prediction accuracy for independent data sets and species from other sites was increased using partial least squares regression, but was poor for sample sets containing tissue types or laboratory-measured concentration ranges beyond those of the calibration set.
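
As a rough analogue of the calibration step, the sketch below builds a partial least squares model on first-difference synthetic spectra and reports the standard error of calibration and R2. The spectra, noise level, and number of PLS components are illustrative assumptions, not the study's NIR data.

```python
# Minimal sketch of a PLS calibration on first-difference spectra
# (synthetic data standing in for NIR reflectance measurements).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_samples, n_wavelengths = 60, 200
concentration = rng.uniform(0.5, 3.0, n_samples)           # e.g. % nitrogen

# Synthetic spectra: analyte band scaled by concentration + sloping baseline + noise
wl = np.linspace(0.0, 1.0, n_wavelengths)
band = np.exp(-0.5 * ((wl - 0.5) / 0.05) ** 2)
spectra = (concentration[:, None] * band
           + rng.uniform(0.5, 1.5, (n_samples, 1)) * wl     # varying baseline slope
           + rng.normal(0, 0.01, (n_samples, n_wavelengths)))

# First-difference preprocessing suppresses the baseline
d1 = np.diff(spectra, axis=1)

pls = PLSRegression(n_components=5)
pls.fit(d1, concentration)
pred = pls.predict(d1).ravel()

sec = np.sqrt(np.sum((pred - concentration) ** 2) / (n_samples - 1))
r2 = 1 - np.sum((pred - concentration) ** 2) / np.sum((concentration - concentration.mean()) ** 2)
print(f"SEC = {sec:.3f}, R2 = {r2:.2f}")
```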


1988 ◽  
Vol 66 (11) ◽  
pp. 2329-2339 ◽  
Author(s):  
B. H. McArdle

Most biologists are now aware that ordinary least squares regression is not appropriate when the X and Y variables are both subject to random error. When there is no information about their error variances, there is no correct unbiased solution. Although the major axis and reduced major axis (geometric mean) methods are widely recommended for this situation, they make different, equally restrictive assumptions about the error variances. Using simulated data sets that violate these assumptions, the reduced major axis method is shown to be generally more efficient and less biased than the major axis method. It is concluded that if the error rate on the X variable is thought to be more than a third of that on the Y variable, then the reduced major axis method is preferable; otherwise the least squares technique is acceptable. An analogous technique, the standard minor axis method, is described for use in place of least squares multiple regression when all of the variables are subject to error.
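
For illustration, the sketch below compares the ordinary least squares, major axis, and reduced major axis slope estimates on simulated data in which both variables carry measurement error. The true slope and error variances are arbitrary choices, not those used in the paper's simulations.

```python
# Minimal sketch comparing OLS, major axis, and reduced major axis slopes
# when both X and Y are measured with error (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
n, true_slope = 200, 0.75
x_true = rng.normal(0.0, 1.0, n)
x = x_true + rng.normal(0.0, 0.4, n)                  # error on X
y = true_slope * x_true + rng.normal(0.0, 0.4, n)     # error on Y

sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]

b_ols = sxy / sxx                                                            # attenuated by X error
b_ma = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)    # major axis
b_rma = np.sign(sxy) * np.sqrt(syy / sxx)                                    # reduced major axis

print(f"OLS: {b_ols:.3f}  MA: {b_ma:.3f}  RMA: {b_rma:.3f}  (true slope: {true_slope})")
```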

