A PROPAGATING ALGORITHM FOR DETERMINING NTH-ORDER POLYNOMIAL, LEAST‐SQUARES FITS

Geophysics ◽  
1977 ◽  
Vol 42 (6) ◽  
pp. 1265-1276 ◽  
Author(s):  
Anthony F. Gangi ◽  
James N. Shapiro

An algorithm is described which iteratively solves for the coefficients of successively higher-order, least-squares polynomial fits in terms of the results for the previous, lower-order polynomial fit. The technique takes advantage of the special properties of the least-squares or Hankel matrix, whose (i, j) element depends only on i + j, so that the elements are constant along the antidiagonals. Only the first and last column vectors of the inverse matrix are needed at each stage to continue the iteration to the next higher stage. An analogous procedure may be used to determine the inverse of such least-squares type matrices. The inverse of each square submatrix is determined from the inverse of the previous, lower-order submatrix. The results using this algorithm are compared with the method of fitting orthogonal polynomials to data points. While the latter method gives higher accuracy when high-order polynomials are fitted to the data, it requires many more computations. The increased accuracy of the orthogonal-polynomial fit is valuable when high precision of fitting is required; however, for experimental data with inherent inaccuracies, the added computations outweigh the possible benefit derived from the more accurate fitting. A Fortran listing of the algorithm is given.
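As a rough illustration of the structure that the recursion exploits, the sketch below (plain NumPy, not the authors' propagating algorithm or their Fortran listing) builds the normal equations for an nth-order fit and shows that each matrix element depends only on the sum of its row and column indices:

```python
import numpy as np

def hankel_normal_equations(x, y, n):
    """Build the (n+1) x (n+1) least-squares (Hankel) matrix and right-hand
    side for an nth-order polynomial fit; the (i, j) entry is sum_k x_k**(i+j),
    so it is constant along the antidiagonals."""
    moments = np.array([np.sum(x**p) for p in range(2 * n + 1)])
    A = np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])
    b = np.array([np.sum(y * x**i) for i in range(n + 1)])
    return A, b

x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2
A, b = hankel_normal_equations(x, y, 2)
coeffs = np.linalg.solve(A, b)   # c[0] + c[1]*x + c[2]*x**2, recovers (1, 2, -3)
```

Gangi and Shapiro's algorithm avoids solving this system from scratch at each order; it propagates the order n-1 solution to order n, using only the first and last columns of the inverse at each stage.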

Geophysics ◽  
1966 ◽  
Vol 31 (1) ◽  
pp. 253-259 ◽  
Author(s):  
E. L. Dougherty ◽  
S. T. Smith

The procedure used to discover subsurface formations where mineral resources may exist normally requires the accumulation and processing of large amounts of data concerning the earth’s fields. Data errors may strongly affect the conclusions drawn from the analysis. Thus, a method of checking for errors is essential. Since the field should be relatively smooth locally, a typical approach is to fit the data to a surface described by a low‐order polynomial. Deviations of data points from this surface can then be used to detect errors. Frequently a least‐squares approximation is used to determine the surface, but results could be misleading. Linear programming can be applied to give more satisfactory results. In this approach, the sum of the absolute values of the deviations is minimized rather than the squares of the deviations as in least squares. This paper describes in detail the formulation of the linear programming problem and cites an example of its application to error detection. Through this formulation, once errors are removed, the results are meaningful physically and, hence, can be used for detecting subsurface phenomena directly.
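A minimal sketch of this linear programming formulation, assuming a plane surface z = b0 + b1*x1 + b2*x2 and using SciPy's general-purpose LP solver rather than the paper's own formulation and example:

```python
import numpy as np
from scipy.optimize import linprog

def l1_plane_fit(x1, x2, z):
    """Fit z = b0 + b1*x1 + b2*x2 minimizing the sum of absolute deviations.
    Variables are [b0, b1, b2, e_plus (m), e_minus (m)], with the constraint
    X @ b + e_plus - e_minus = z and e_plus, e_minus >= 0."""
    m = z.size
    X = np.column_stack([np.ones(m), x1, x2])
    p = X.shape[1]
    c = np.concatenate([np.zeros(p), np.ones(2 * m)])   # minimize sum of slacks
    A_eq = np.hstack([X, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    coeffs = res.x[:p]
    deviations = z - X @ coeffs
    return coeffs, deviations

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=(2, 30))
z = 5.0 + 0.3 * x1 - 0.8 * x2 + rng.normal(0.0, 0.02, 30)
z[7] += 1.0                                             # one gross error
coeffs, dev = l1_plane_fit(x1, x2, z)
print(np.argmax(np.abs(dev)))                           # flags point 7
```

Large absolute deviations from the fitted surface then flag suspect data points, as described in the abstract.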


1978 ◽  
Vol 24 (4) ◽  
pp. 611-620 ◽  
Author(s):  
R B Davis ◽  
J E Thompson ◽  
H L Pardue

Abstract This paper discusses properties of several statistical parameters that are useful in judging the quality of least-squares fits of experimental data and in interpreting least-squares results. The presentation includes simplified equations that emphasize similarities and dissimilarities among the standard error of estimate, the standard deviations of slopes and intercepts, the correlation coefficient, and the degree of correlation between the least-squares slope and intercept. The equations are used to illustrate dependencies of these parameters upon experimentally controlled variables such as the number of data points and the range and average value of the independent variable. Results are interpreted in terms of which parameters are most useful for different kinds of applications. The paper also includes a discussion of joint confidence intervals that should be used when slopes and intercepts are highly correlated and presents equations that can be used to judge the degree of correlation between these coefficients and to compute the elliptical joint confidence intervals. The parabolic confidence intervals for calibration curves are also discussed briefly.
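For the straight-line case y = a + b*x, the textbook forms of these parameters (standard OLS expressions, not necessarily the paper's simplified equations) can be computed as in this sketch:

```python
import numpy as np

def linear_fit_diagnostics(x, y):
    """Ordinary least-squares line y = a + b*x with standard diagnostics:
    standard error of estimate, standard deviations of intercept and slope,
    correlation coefficient, and the correlation between a and b."""
    n = x.size
    xbar = x.mean()
    Sxx = np.sum((x - xbar) ** 2)
    b = np.sum((x - xbar) * (y - y.mean())) / Sxx        # slope
    a = y.mean() - b * xbar                              # intercept
    resid = y - (a + b * x)
    s_e = np.sqrt(np.sum(resid**2) / (n - 2))            # standard error of estimate
    s_b = s_e / np.sqrt(Sxx)                             # std dev of slope
    s_a = s_e * np.sqrt(np.sum(x**2) / (n * Sxx))        # std dev of intercept
    r = np.corrcoef(x, y)[0, 1]                          # correlation coefficient
    r_ab = -xbar / np.sqrt(np.sum(x**2) / n)             # correlation of a with b
    return a, b, s_e, s_a, s_b, r, r_ab
```

When |r_ab| is close to 1 (for example, when all x values are far from zero), the slope and intercept are highly correlated and an elliptical joint confidence region, rather than separate intervals, is appropriate.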


Geophysics ◽  
1979 ◽  
Vol 44 (9) ◽  
pp. 1588-1589
Author(s):  
Yoich Ohta ◽  
Masanori Saito

Gangi and Shapiro (1977) proposed a recursive algorithm for determining the coefficients of least-squares polynomials. The algorithm is simpler and more efficient than Trench’s (1965) algorithm or Phillips’ (1971) triangular decomposition algorithm, and it has the advantage that, by monitoring the mean-square error at each iteration, one can find an optimum order of polynomial fit. We have tried their algorithm and encountered a difficulty; it may be worth recording its source.
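A minimal sketch of that order-selection idea, using plain numpy.polyfit at each order rather than the recursive update (the stopping tolerance is an illustrative choice):

```python
import numpy as np

def pick_order(x, y, max_order=10, tol=1e-3):
    """Raise the polynomial order until the mean-square error stops improving
    by more than a relative tolerance, then return the previous order."""
    prev_mse = np.inf
    for n in range(1, max_order + 1):
        c = np.polyfit(x, y, n)
        mse = np.mean((y - np.polyval(c, x)) ** 2)
        if prev_mse - mse < tol * prev_mse:
            return n - 1, prev_mse
        prev_mse = mse
    return max_order, prev_mse
```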


1995 ◽  
Vol 23 (4) ◽  
pp. 315-326
Author(s):  
Ronald D. Flack

Uncertainties in least squares curve fits to data with uncertainties are examined. First, experimental data with nominal curve shapes, representing property profiles between boundaries, are simulated by adding known uncertainties to individual points. Next, curve fits to the simulated data are obtained and compared to the nominal curves. By using a large number of different sets of data, statistical differences between the two curves are quantified and, thus, the uncertainty of the curve fit is derived. Studies for linear, quadratic, and higher-order nominal curves with curve fits up to fourth order are presented herein. Typically, curve fits have uncertainties that are 50% or less of those of the individual data points. These uncertainties increase with increasing order of the least squares curve fit and decrease with increasing number of data points on the curve.
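A minimal sketch of this simulation procedure, with an illustrative linear nominal curve, point uncertainty, and trial count rather than the paper's cases:

```python
import numpy as np

def curve_fit_uncertainty(x, nominal, sigma, order, n_trials=2000, seed=0):
    """Monte Carlo estimate of curve-fit uncertainty: perturb a nominal
    profile with known point uncertainty sigma, refit many times, and return
    the standard deviation of the fitted curve at each x."""
    rng = np.random.default_rng(seed)
    fits = np.empty((n_trials, x.size))
    for k in range(n_trials):
        noisy = nominal + rng.normal(0.0, sigma, x.size)
        fits[k] = np.polyval(np.polyfit(x, noisy, order), x)
    return fits.std(axis=0)

x = np.linspace(0.0, 1.0, 21)
nominal = 2.0 * x + 1.0                      # linear nominal profile
spread = curve_fit_uncertainty(x, nominal, sigma=0.1, order=1)
print(spread.max() / 0.1)                    # roughly 0.4, i.e. well under the point sigma
```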


2019 ◽  
Author(s):  
Liwei Cao ◽  
Danilo Russo ◽  
Vassilios S. Vassiliadis ◽  
Alexei Lapkin

A mixed-integer nonlinear programming (MINLP) formulation for symbolic regression was proposed to identify physical models from noisy experimental data. The formulation was tested using numerical models and was found to be more efficient than the previous literature example with respect to the number of predictor variables and training data points. The globally optimal search was extended to identify physical models and to cope with noise in the predictor variable of the experimental data. The methodology was coupled with the collection of experimental data in an automated fashion and was proven successful in identifying the correct physical models describing the relationship between shear stress and shear rate for both Newtonian and non-Newtonian fluids, as well as simple kinetic laws of reactions. Future work will focus on addressing the limitations of the formulation presented in this work by extending it to larger, more complex physical models.


2011 ◽  
Vol 291-294 ◽  
pp. 1015-1020 ◽  
Author(s):  
Chong Jin ◽  
Hong Wang ◽  
Xiao Zhou Xia

Because fitting functions constructed from an orthogonal basis avoid ill-conditioned matrix equations, Legendre orthogonal polynomials are adopted to fit experimental data for concrete uniaxial compression stress-strain curves within a least-squares framework. With the help of a FORTRAN program, three series of experimental data are fitted, and the fit is very satisfactory when the number of orthogonal basis terms is not less than 5. Moreover, compared with piecewise fitting functions, the Legendre orthogonal polynomial fit can represent the nonlinear hardening-softening character of the concrete constitutive law more conveniently because of its uniform functional form and continuous derivatives. The idea of fitting with orthogonal basis functions thus provides a broad avenue for studying the constitutive law of concrete materials.
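A minimal sketch of such a fit using NumPy's Legendre module, with a hypothetical stress-strain shape standing in for the paper's three experimental series (its FORTRAN program is not reproduced):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical uniaxial compression data: a toy harden-soften shape in MPa.
strain = np.linspace(0.0, 0.004, 40)
stress = 30e3 * strain / (1.0 + (strain / 0.002) ** 2)

# Map strain to [-1, 1], where the Legendre basis is orthogonal, then fit.
t = 2.0 * (strain - strain.min()) / (strain.max() - strain.min()) - 1.0
coeffs = L.legfit(t, stress, deg=5)          # degree 5, i.e. 6 basis terms
fitted = L.legval(t, coeffs)
print(np.max(np.abs(fitted - stress)))       # maximum fitting error
```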


2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains very strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline; some good baseline data points of the spectrum might then be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline might have significant error due to overfitting. This study proposes a search algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts peak removal into a search problem, in the artificial intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. The minimum points of each section are then collected to form the dataset from which peak points are removed by the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with those of three other methods: the Lieber and Mahadevan-Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that, for these spectra, the baseline estimated by SA has less error than those estimated by the three other methods.
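For reference, a minimal sketch of the conventional IPF step that the paper critiques (not the proposed SA method):

```python
import numpy as np

def iterative_polyfit_baseline(x, y, order=4, n_iter=50):
    """Conventional iterative polynomial fitting: fit a polynomial, clip the
    spectrum to the temporary baseline so that points above it (treated as
    peaks) are re-assigned a lower value, and repeat.  Strong peaks bias the
    early fits, which is the failure mode the paper aims to avoid."""
    work = y.copy()
    for _ in range(n_iter):
        baseline = np.polyval(np.polyfit(x, work, order), x)
        work = np.minimum(work, baseline)
    return baseline
```

SA instead works with the minimum points of Chebyshev-node sections of the smoothed spectrum and removes peak points through a search that minimizes the MAE objective.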


1936 ◽  
Vol 15 (1) ◽  
pp. 141-176
Author(s):  
Duncan C. Fraser

Synopsis: The paper is intended as an elementary introduction and companion to the paper on “Orthogonal Polynomials,” by G. J. Lidstone, J.I.A., vol. lxiv., p. 128, and the paper on the “Sum and Integral of the Product of Two Functions,” by A. W. Joseph, J.I.A., vol. lxiv., p. 329; and also to Dr. Aitken's paper on the “Graduation of Data by the Orthogonal Polynomials of Least Squares,” Proc. Roy. Soc. Edin., vol. liii., p. 54. Following Dr. Aitken, Σu_x is defined for the immediate purpose to be u_0 + … + u_(x−1). The scheme of successive summations is set out in the form of a difference diagram and is extended to negative arguments. The special point to which attention is drawn is the existence of a wedge of zeros between the sums for positive arguments and those for negative arguments. The rest of the paper is for the greater part a study of the table of binomial coefficients for positive and for negative arguments. The Tchebychef polynomials are simple functions of the binomial coefficients, and after a description of a particular example and of its properties, general methods are given of forming the polynomials by means of tables of differences. These tables furnish examples of simple differences, of divided differences, of adjusted differences, and of a system of special adjusted differences which gives a very easy scheme for the formation of the Tchebychef polynomials.


Author(s):  
M. Martinez ◽  
B. Rocha ◽  
M. Li ◽  
G. Shi ◽  
A. Beltempo ◽  
...  

The National Research Council of Canada has developed Structural Health Monitoring (SHM) test platforms for load and damage monitoring, sensor system testing, and validation. One of the SHM platforms consists of two 2.25 m long simple cantilever aluminium beams that provide an ideal scenario for evaluating the capability of a load monitoring system to measure bending, torsion, and shear loads. In addition to static and quasi-static loading procedures, these structures can be fatigue loaded using a realistic aircraft usage spectrum while SHM and load monitoring systems are assessed for their performance and accuracy. In this study, Micro-Electro-Mechanical Systems (MEMS), consisting of triads of gyroscopes, accelerometers, and magnetometers, were used to compute changes in angles at discrete stations along the structure. A least-squares-based algorithm was developed for polynomial fitting of the data obtained from the MEMS installed at several spatial locations on the structure. The angles obtained from the MEMS sensors were fitted with a second-, third-, and/or fourth-order polynomial surface, enabling the calculation of displacements at every point. A novel Kalman filter architecture was evaluated for accurate angle and, subsequently, displacement estimation. The outputs of the newly developed algorithms were then compared to the displacements obtained from the Linear Variable Displacement Transducers (LVDTs) connected to the structures. Determining the best least-squares polynomial fit order enabled the application of derivative operators with enough accuracy to permit the calculation of strains along the structure. The calculated strain values were subsequently compared to the measurements obtained from reference strain gauges installed at different locations on the structure. This new approach to load monitoring was able to provide accurate estimates of applied strains and loads.
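A minimal sketch of the angle-to-displacement-and-strain step, with illustrative station positions, polynomial order, and fibre distance rather than the NRC rig's values (the Kalman filter stage is omitted):

```python
import numpy as np

def beam_from_angles(x_stations, theta, order=3, z=0.01):
    """Fit measured bending angles theta(x) with a least-squares polynomial,
    integrate once for displacement (clamped end at x = 0) and differentiate
    once for curvature; surface strain is approximately -z * d(theta)/dx."""
    p_theta = np.polyfit(x_stations, theta, order)
    p_disp = np.polyint(p_theta)                 # w(x) with w(0) = 0
    p_curv = np.polyder(p_theta)                 # d(theta)/dx
    displacement = np.polyval(p_disp, x_stations)
    strain = -z * np.polyval(p_curv, x_stations)
    return displacement, strain

x_stations = np.linspace(0.0, 2.25, 6)                 # hypothetical MEMS stations, m
theta = 1e-3 * x_stations * (2 * 2.25 - x_stations)    # toy slope profile, rad
disp, strain = beam_from_angles(x_stations, theta)
```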

