Characteristics of statistical parameters used to interpret least-squares results.

1978 ◽  
Vol 24 (4) ◽  
pp. 611-620 ◽  
Author(s):  
R B Davis ◽  
J E Thompson ◽  
H L Pardue

This paper discusses properties of several statistical parameters that are useful in judging the quality of least-squares fits of experimental data and in interpreting least-squares results. The presentation includes simplified equations that emphasize similarities and dissimilarities among the standard error of estimate, the standard deviations of slopes and intercepts, the correlation coefficient, and the degree of correlation between the least-squares slope and intercept. The equations are used to illustrate dependencies of these parameters upon experimentally controlled variables such as the number of data points and the range and average value of the independent variable. Results are interpreted in terms of which parameters are most useful for different kinds of applications. The paper also includes a discussion of joint confidence intervals that should be used when slopes and intercepts are highly correlated and presents equations that can be used to judge the degree of correlation between these coefficients and to compute the elliptical joint confidence intervals. The parabolic confidence intervals for calibration curves are also discussed briefly.
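For readers who want to compute these quantities for a straight-line fit, the following minimal Python sketch (not the paper's own notation; function and variable names are illustrative) evaluates the standard error of estimate, the standard deviations of the slope and intercept, the correlation coefficient, and the correlation between the fitted slope and intercept, which depends on the average value of the independent variable.

```python
import numpy as np

def linear_fit_stats(x, y):
    """Straight-line least squares y = a + b*x with common diagnostic parameters.

    Minimal sketch: returns the standard error of estimate, the standard
    deviations of intercept and slope, the correlation coefficient r, and the
    correlation between the fitted slope and intercept.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    Sxx = np.sum((x - x.mean()) ** 2)
    b = np.sum((x - x.mean()) * (y - y.mean())) / Sxx      # slope
    a = y.mean() - b * x.mean()                            # intercept
    resid = y - (a + b * x)
    s_e = np.sqrt(np.sum(resid ** 2) / (n - 2))            # standard error of estimate
    s_b = s_e / np.sqrt(Sxx)                               # std. dev. of slope
    s_a = s_e * np.sqrt(np.sum(x ** 2) / (n * Sxx))        # std. dev. of intercept
    r = np.corrcoef(x, y)[0, 1]                            # correlation coefficient
    r_ab = -x.mean() / np.sqrt(np.sum(x ** 2) / n)         # slope/intercept correlation
    return dict(intercept=a, slope=b, s_e=s_e, s_a=s_a, s_b=s_b, r=r, r_ab=r_ab)

# Example data (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
print(linear_fit_stats(x, y))
```

Because the slope-intercept correlation equals minus the mean of x divided by the root-mean-square of x, centering the x-values drives this correlation toward zero, which is one way to reduce the need for elliptical joint confidence regions.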

2012 ◽  
Vol 80 (1) ◽  
Author(s):  
P. Collet ◽  
G. Gary ◽  
B. Lundberg

Methods for estimation of the complex modulus generally produce data from which discrete results can be obtained for a set of frequencies. As these results are normally afflicted by noise, they are not necessarily consistent with the principle of causality and requirements of thermodynamics. A method is established for noise-corrected estimation of the complex modulus, subject to the constraints of causality, positivity of dissipation rate and reality of relaxation function, given a finite set of angular frequencies and corresponding complex moduli obtained experimentally. Noise reduction is achieved by requiring that two self-adjoint matrices formed from the experimental data should be positive semidefinite. The method provides a rheological model that corresponds to a specific configuration of springs and dashpots. The poles of the complex modulus on the positive imaginary frequency axis are determined by a subset of parameters obtained as the common positive zeros of certain rational functions, while the remaining parameters are obtained from a least squares fit. If the set of experimental data is sufficiently large, the level of refinement of the rheological model is in accordance with the material behavior and the quality of the experimental data. The method was applied to an impact test with a Nylon bar specimen. In this case, data at the 29 lowest resonance frequencies resulted in a rheological model with 14 parameters. The method offers the following improvements to the identification of rheological models: (1) Noise reduction is fully integrated. (2) A rheological model is provided with a number of elements in accordance with the complexity of the material behavior and the quality of the experimental data. (3) Parameters determining poles of the complex modulus are obtained without use of a least squares fit.
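As a rough illustration of the final least-squares step only (this is not the constrained, noise-corrected method of the paper): once the poles, i.e. the relaxation times of the spring-dashpot elements, are fixed, the remaining moduli enter the complex modulus linearly, and a non-negative least-squares fit keeps every element stiffness, and hence the dissipation, non-negative. The generalized Maxwell form and the names below are assumptions made for this sketch.

```python
import numpy as np
from scipy.optimize import nnls

def fit_maxwell_modulus(omega, E_star, tau):
    """Fit E*(w) ~ E_inf + sum_k E_k * (i w tau_k)/(1 + i w tau_k) by least squares.

    Illustrative sketch only: the relaxation times tau_k are assumed fixed, so
    the moduli enter linearly, and non-negative least squares keeps E_k >= 0.
    """
    omega = np.asarray(omega, float)
    E_star = np.asarray(E_star, complex)
    terms = [(1j * omega * t) / (1.0 + 1j * omega * t) for t in tau]
    A = np.column_stack([np.ones_like(omega, dtype=complex)] + terms)
    # Stack real and imaginary parts to obtain a real least-squares problem.
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([E_star.real, E_star.imag])
    coef, residual = nnls(A_ri, b_ri)
    return coef[0], coef[1:], residual   # equilibrium modulus, element moduli, residual norm

# Synthetic example: two Maxwell elements plus an equilibrium spring.
omega = np.logspace(1, 5, 40)
true = (2.0 + 1.5 * (1j * omega * 1e-3) / (1 + 1j * omega * 1e-3)
            + 0.8 * (1j * omega * 1e-4) / (1 + 1j * omega * 1e-4))
print(fit_maxwell_modulus(omega, true, tau=[1e-3, 1e-4]))
```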


Geophysics ◽  
1977 ◽  
Vol 42 (6) ◽  
pp. 1265-1276 ◽  
Author(s):  
Anthony F. Gangi ◽  
James N. Shapiro

An algorithm is described which iteratively solves for the coefficients of successively higher‐order, least‐squares polynomial fits in terms of the results for the previous, lower‐order polynomial fit. The technique takes advantage of the special properties of the least‐squares or Hankel matrix, for which $A_{i,j} = A_{i+1,\,j-1}$ (each element depends only on the sum of its indices). Only the first and last column vectors of the inverse matrix are needed at each stage to continue the iteration to the next higher stage. An analogous procedure may be used to determine the inverse of such least‐squares type matrices. The inverse of each square submatrix is determined from the inverse of the previous, lower‐order submatrix. The results using this algorithm are compared with the method of fitting orthogonal polynomials to data points. While the latter method gives higher accuracy when high‐order polynomials are fitted to the data, it requires many more computations. The increased accuracy of the orthogonal‐polynomial fit is valuable when high precision of fitting is required; however, for experimental data with inherent inaccuracies, the added computations outweigh the possible benefit derived from the more accurate fitting. A Fortran listing of the algorithm is given.
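The Python sketch below is not the propagating recursion itself, but it shows the structure the algorithm exploits: the normal-equations matrix of a polynomial fit is built entirely from the moments $\sum_k x_k^{p}$, with entry $(i, j)$ depending only on $i + j$, so each higher-order fit reuses the moments and right-hand side already accumulated for the lower orders. Function names are illustrative.

```python
import numpy as np

def successive_poly_fits(x, y, max_order):
    """Least-squares polynomial fits of order 0..max_order via the moment (Hankel) matrix.

    Sketch of the structure used by the propagating algorithm (not the recursion
    itself): M[i, j] = sum_k x_k**(i + j) depends only on i + j, so every order
    reuses the same vector of moments.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    moments = np.array([np.sum(x ** p) for p in range(2 * max_order + 1)])
    rhs = np.array([np.sum(y * x ** p) for p in range(max_order + 1)])
    fits = []
    for order in range(max_order + 1):
        M = np.array([[moments[i + j] for j in range(order + 1)]
                      for i in range(order + 1)])
        fits.append(np.linalg.solve(M, rhs[:order + 1]))   # coefficients a_0 .. a_order
    return fits

# Illustrative data: a quadratic with a little noise.
x = np.linspace(0.0, 1.0, 21)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + 0.01 * np.random.default_rng(0).standard_normal(x.size)
for c in successive_poly_fits(x, y, 3):
    print(np.round(c, 3))
```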


1995 ◽  
Vol 23 (4) ◽  
pp. 315-326
Author(s):  
Ronald D. Flack

Uncertainties in least squares curve fits to data with uncertainties are examined. First, experimental data with nominal curve shapes, representing property profiles between boundaries, are simulated by adding known uncertainties to individual points. Next, curve fits to the simulated data are performed and compared to the nominal curves. By using a large number of different sets of data, statistical differences between the two curves are quantified and, thus, the uncertainty of the curve fit is derived. Studies for linear, quadratic, and higher-order nominal curves with curve fits up to fourth order are presented herein. Typically, the curve fits have uncertainties that are 50% or less of those of the individual data points. These uncertainties increase with increasing order of the least squares curve fit. The uncertainties decrease with increasing number of data points on the curves.
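A minimal Monte Carlo sketch of the simulation idea (the nominal linear profile, noise level, and mid-range evaluation point are illustrative assumptions, not the paper's cases): known Gaussian uncertainties are added to a nominal curve, a polynomial is fitted, and the spread of the fitted value across many trials is compared with the spread of the individual points.

```python
import numpy as np

def curve_fit_uncertainty(order, sigma=1.0, n_points=20, n_trials=2000, seed=0):
    """Monte Carlo ratio of curve-fit uncertainty to individual-point uncertainty.

    Sketch only: perturb a nominal linear profile with Gaussian noise of known
    sigma, fit a polynomial of the given order, and measure the spread of the
    fitted curve at mid-range across many trials.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_points)
    nominal = 2.0 + 3.0 * x                       # nominal "property profile"
    mid = np.empty(n_trials)
    for t in range(n_trials):
        y = nominal + rng.normal(0.0, sigma, n_points)
        coeffs = np.polyfit(x, y, order)
        mid[t] = np.polyval(coeffs, 0.5)          # fitted value at mid-range
    return np.std(mid) / sigma                    # curve-fit / point uncertainty

for order in (1, 2, 3, 4):
    print(order, round(curve_fit_uncertainty(order), 3))
```

With these settings the ratio should stay well below one and grow with the fit order, consistent with the trends reported above.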


2015 ◽  
Vol 235 ◽  
pp. 1-8
Author(s):  
Jacek Pietraszek ◽  
Ewa Skrzypczak-Pietraszek

Experimental studies very often lead to datasets with a large number of recorded attributes (observed properties) and a relatively small number of records (observed objects). Classic analysis cannot explain the recorded attributes in the form of regression relationships because there are too few data points. One method that makes it possible to filter out unimportant attributes is the approach known as 'dimensionality reduction'. A well-known example of this approach is principal component analysis (PCA), which transforms the data from the high-dimensional space to a space of fewer dimensions and provides heuristics for selecting the smallest necessary number of dimensions. The authors have used this technique successfully in previous investigations, but a question arose: is PCA robust and stable? This paper tries to answer this question by re-sampling experimental data and observing the empirical confidence intervals of the parameters used for decision making in the PCA heuristics.
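A minimal Python sketch of the resampling idea (the bootstrap size, the standardization step, and the use of explained-variance ratios are assumptions of this illustration, not necessarily the authors' exact procedure): records are resampled with replacement, PCA is recomputed each time, and empirical confidence intervals are collected for the quantities a PCA selection heuristic relies on.

```python
import numpy as np

def bootstrap_pca_explained_variance(X, n_boot=1000, seed=0):
    """Empirical confidence intervals for PCA explained-variance ratios.

    Sketch of the resampling idea: rows (records) are resampled with replacement,
    PCA is recomputed on the standardized sample, and the 2.5/97.5 percentiles of
    each component's explained-variance ratio are reported.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    n, _ = X.shape
    ratios = []
    for _ in range(n_boot):
        sample = X[rng.integers(0, n, n)]
        Z = (sample - sample.mean(axis=0)) / sample.std(axis=0, ddof=1)
        eigvals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]
        ratios.append(eigvals / eigvals.sum())
    ratios = np.array(ratios)
    return np.percentile(ratios, [2.5, 97.5], axis=0)

# Example: few records, several attributes (synthetic data for illustration).
rng = np.random.default_rng(1)
X = rng.normal(size=(15, 6))
lo, hi = bootstrap_pca_explained_variance(X)
print(np.round(lo, 2), np.round(hi, 2))
```

Wide intervals for the leading components would indicate that the PCA-based selection heuristic is not stable for the given sample size.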


Geophysics ◽  
1979 ◽  
Vol 44 (9) ◽  
pp. 1589-1591
Author(s):  
A. F. Gangi ◽  
J. N. Shapiro

We are pleased to hear that Ohta and Saito found our “Propagating‐least‐Squares” algorithm (PROLSQ) for fitting polynomials, $y = \sum_{n=0}^{N} a_n x^n$ (1), simple to use and efficient in execution. We appreciate their pointing out that there can be a difficulty with the algorithm under very special (but easily determined) circumstances; that is, when the independent‐variable values $x_i$ at the data points are so distributed that the odd‐order moments $\sum_i x_i^{2k+1}$ are zero. We did not experience this difficulty because we never treated a case in which all these odd moments were zero.
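For illustration (an example added here, not taken from the reply itself): when the $x_i$ are symmetric about the origin, say $x_i \in \{-2, -1, 0, 1, 2\}$, every odd-order moment vanishes, e.g. $\sum_i x_i = 0$ and $\sum_i x_i^{3} = (-8) + (-1) + 0 + 1 + 8 = 0$, so the moment (Hankel) matrix $M_{jk} = \sum_i x_i^{\,j+k}$ contains zeros wherever $j + k$ is odd, which is the special circumstance referred to above.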


1987 ◽  
Vol 41 (3) ◽  
pp. 447-449 ◽  
Author(s):  
Juwhan Liu ◽  
Jack L. Koenig

A baseline correction algorithm using a least-squares procedure is developed. Linear or quadratic types of baselines are obtained through successive fitting and rejection of data points on a statistical basis. After the entire spectrum or a subsection is fitted to a least-squares line, the standard error of estimate is utilized as a criterion to determine if the fluctuation of each data point about the line can be thought of as the baseline fluctuation. Comments on various baseline correction procedures are also made.
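A minimal Python sketch of the fit-and-reject idea (the rejection threshold k and the synthetic spectrum are assumptions of this illustration, not the paper's statistical criterion): fit a low-order polynomial, compute the standard error of estimate, keep only points that fall within a few standard errors of the fit, and repeat until the retained set stabilizes.

```python
import numpy as np

def baseline_by_rejection(x, y, order=1, k=2.0, max_iter=20):
    """Estimate a linear or quadratic baseline by iterative least-squares rejection.

    Sketch only: fit a polynomial, compute the standard error of estimate, keep
    points within k standard errors of the fit, and refit until the mask is stable.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mask = np.ones(x.size, dtype=bool)
    for _ in range(max_iter):
        coeffs = np.polyfit(x[mask], y[mask], order)
        resid = y - np.polyval(coeffs, x)
        dof = max(mask.sum() - (order + 1), 1)
        s_e = np.sqrt(np.sum(resid[mask] ** 2) / dof)   # standard error of estimate
        new_mask = np.abs(resid) < k * s_e              # reject peaks above the noise band
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return np.polyval(coeffs, x), mask

# Example: a sloping, noisy baseline with two peaks riding on it.
x = np.linspace(0, 100, 501)
y = (0.02 * x + 1.0
     + np.exp(-0.5 * ((x - 30) / 2) ** 2)
     + np.exp(-0.5 * ((x - 70) / 3) ** 2)
     + 0.01 * np.random.default_rng(0).normal(size=x.size))
baseline, kept = baseline_by_rejection(x, y, order=1)
print(kept.sum(), "points retained for the baseline fit")
```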


2020 ◽  
Author(s):  
Marc Cerdà-Domènech ◽  
Jaime Frigola ◽  
Anna Sanchez-Vidal ◽  
Miquel Canals

X-ray fluorescence core scanners (XRF-CS) allow rapid, non-destructive and continuous high-resolution analyses of the elemental composition of sediment cores. Since XRF-CS analyses are usually performed on fresh, untreated materials, elemental intensities can be affected by the physical properties of the sediment (e.g. pore water content, grain size, sediment irregularities and changes in matrix) and by the selected excitation parameters. Accordingly, the records of measured elemental intensity cannot be considered quantitative. Nonetheless, these data can be converted to quantitative data through a linear regression approach using a relatively small number of discrete samples analyzed by techniques providing absolute concentrations. Such conversion constitutes a powerful tool to determine pollution levels in sediments at very high resolution. However, a precise characterization of the errors associated with the linear function is required to evaluate the quality of the calibrated element concentrations.

Here we present a novel calibration of high-resolution XRF-CS data for Ti, Mn, Fe, Zn, Pb and As measured in heavily contaminated marine deposits. Three widely applied regression methods have been tested to determine the best linear function for XRF data conversion: the ordinary least-squares (OLS) method, which does not consider the standard error in either variable (x or y); the weighted ordinary least-squares (WOLS) method, which considers the weighted standard error of the vertical variable (y); and the weighted least-squares (WLS) method, which incorporates the standard error in both the x and y variables.

The results, derived from the analysis of metal-polluted sediments from offshore Portmán Bay and Barcelona, in the Mediterranean Sea off Spain, demonstrate that the applied calibration procedure improves the quality of the linear regression for any of the three regression methods (OLS, WOLS and WLS), increasing the correlation coefficients, which are higher than r² = 0.94, and reducing the data deviation from the linear function. Nonetheless, WLS emerges as the best regression method to minimize errors in the calibrated element concentrations. Our results open the door to using calibrated XRF-CS data to evaluate marine sediment pollution according to the sediment quality guidelines (SQG), with errors lower than 0.4% to 2% for Fe, 1% to 7% for Zn, 3% to 14% for Pb, and 5% to 16% for Mn, which highlights the robustness of the calibration procedure presented here. Our study incorporates and evaluates for the first time the analytical and statistical errors of XRF-CS data calibration, and demonstrates that the errors of the calibrated element concentrations must be properly assessed in future calibration efforts.
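A minimal sketch of the regression comparison in Python (variable names and the synthetic data are assumptions): an ordinary least-squares calibration line ignores the measurement errors, whereas passing weights $w = 1/\sigma_y$ to numpy.polyfit yields a fit weighted by the y-uncertainties, comparable in spirit to the WOLS case above; a fit that also accounts for errors in x (the WLS case) would require an errors-in-variables routine such as scipy.odr.

```python
import numpy as np

def calibrate(xrf_counts, concentrations, sigma_y=None):
    """Linear calibration of XRF-CS intensities against reference concentrations.

    Illustrative sketch: the unweighted fit ignores measurement errors, while
    w = 1/sigma_y weights the fit by the uncertainty of each reference value.
    """
    x = np.asarray(xrf_counts, float)
    y = np.asarray(concentrations, float)
    ols = np.polyfit(x, y, 1)                                  # slope, intercept
    if sigma_y is None:
        return ols, None
    wols = np.polyfit(x, y, 1, w=1.0 / np.asarray(sigma_y, float))
    return ols, wols

# Synthetic example: larger concentration errors at higher count rates.
rng = np.random.default_rng(0)
counts = np.linspace(100, 1000, 12)
sigma = 0.02 * counts
conc = 0.05 * counts + 2.0 + rng.normal(0.0, sigma)
print(calibrate(counts, conc, sigma))
```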


Author(s):  
Uppuluri Sirisha ◽  
G. Lakshme Eswari

This paper briefly introduces the Internet of Things (IoT) as an intelligent connectivity among physical objects or devices that is delivering massive gains in areas such as efficiency, quality of life, and business growth. The IoT is a global network that interconnects around 46 million smart meters in the U.S. alone, producing 1.1 billion data points per day [1]. The total installed base of IoT-connected devices is expected to grow to 75.44 billion globally by 2025, with corresponding growth in business, productivity, government efficiency, lifestyle, etc. This paper also addresses serious concerns such as effective security and privacy to ensure confidentiality, integrity, authentication, and access control among the devices.

