Curve-Fitting by the Method of Grouping

1952 ◽  
Vol 5 (2) ◽  
pp. 238
Author(s):  
PG Guest

A method of fitting polynomials is described in which the "normal" equations are obtained much more rapidly than the corresponding equations in the least-squares method. Efficiencies are found to be about 90 per cent. The method is illustrated by an example.
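As an illustration of the grouping idea (the paper's exact scheme is not reproduced here), the sketch below fits a polynomial by summing the model over contiguous groups of the ordered data, one group per coefficient, and compares the result with ordinary least squares; the function name and data are illustrative assumptions.

```python
# Hedged sketch: a "group sums" fit of a degree-d polynomial. Guest's 1952
# grouping scheme may differ in detail; this only illustrates the idea of
# replacing the least-squares normal equations by equations built from
# group sums of the observations.
import numpy as np

def fit_by_grouping(x, y, degree=1):
    """Fit y ~ sum_k c_k x^k by summing the model over contiguous groups."""
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    groups = np.array_split(np.arange(len(x)), degree + 1)   # d+1 groups
    # One equation per group: sum_i y_i = sum_k c_k * sum_i x_i**k  (i in group)
    A = np.empty((degree + 1, degree + 1))
    b = np.empty(degree + 1)
    for g, idx in enumerate(groups):
        A[g] = [np.sum(x[idx] ** k) for k in range(degree + 1)]
        b[g] = np.sum(y[idx])
    return np.linalg.solve(A, b)          # coefficients c_0 .. c_d

# Example: noisy line, compared with ordinary least squares.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)
print("grouping     :", fit_by_grouping(x, y, 1))
print("least squares:", np.polyfit(x, y, 1)[::-1])
```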

2013 ◽  
Vol 333-335 ◽  
pp. 1456-1460 ◽  
Author(s):  
Wen Bo Na ◽  
Zhi Wei Su ◽  
Ping Zhang

A new method is put forward that combines least-squares fitting with a BP neural network improved by the Levenberg-Marquardt (LM) algorithm. The LM algorithm is used to overcome the weaknesses of the traditional BP neural network, namely its tendency to fall into local minima and its slow convergence. Because least-squares curve fitting captures the overall trend of the data, the least-squares method is applied first to fit the sample data. The fitting error is then corrected by the improved BP neural network, which has the advantage of reflecting external factors. Finally, the fitted values and the error-correction values are added to obtain the oilfield production forecast. The results show that the forecast error is significantly lower than with curve fitting, a BP neural network, or LM-BP alone.
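A minimal sketch of the two-stage structure described above, under stated assumptions: the trend is fitted by least squares, and the residuals are then corrected by a small nonlinear model fitted with SciPy's Levenberg-Marquardt solver rather than the paper's LM-trained BP network; the data and the sinusoidal correction form are illustrative.

```python
# Hedged sketch of the two-stage forecast: least-squares trend + residual
# correction. The paper corrects residuals with an LM-trained BP network;
# here the correction is a small sinusoidal model fitted with SciPy's
# Levenberg-Marquardt solver, purely to illustrate the structure.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.arange(60, dtype=float)                         # e.g. months
production = 100 * np.exp(-0.02 * t) + 3 * np.sin(0.5 * t) + rng.normal(0, 1, t.size)

# Stage 1: least-squares curve fit of the overall trend (quadratic here).
trend_coef = np.polyfit(t, production, 2)
trend = np.polyval(trend_coef, t)
residual = production - trend

# Stage 2: correct the fitting error with a nonlinear model, fitted by LM.
def resid_model(p, t):
    a, w, phi = p
    return a * np.sin(w * t + phi)

def objective(p):
    return resid_model(p, t) - residual

fit = least_squares(objective, x0=[1.0, 0.5, 0.0], method="lm")

# Final forecast = fitted trend + residual correction.
forecast = trend + resid_model(fit.x, t)
print("RMS error, trend only:", np.sqrt(np.mean(residual ** 2)))
print("RMS error, corrected :", np.sqrt(np.mean((production - forecast) ** 2)))
```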


2013 ◽  
Vol 699 ◽  
pp. 885-892
Author(s):  
Le Min Gu

The P-Least Squares (P-LS) method is a generalization of the Least Squares (LS) method: the fitted curve is chosen by the criterion of minimizing the sum of the p-th powers of the errors for a selected parameter p. Because p can be chosen freely, the P-LS method has a wide field of application; as p tends to infinity, the P-LS approximation passes into the Chebyshev optimal approximation. This paper discusses the general principles of the P-LS method and provides a way to compute the general solution of the P-LS approximation. The P-LS method not only reduces the maximum error significantly, it also resolves cases in which the Chebyshev approximation has no solution for some complex nonlinear approximations; moreover, it is computationally convenient and can handle large-scale, multi-point data processing. The method is illustrated by examples from materials science, chemical engineering, and changes in living organisms.
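A hedged sketch of the P-LS idea as read from the abstract: choose polynomial coefficients that minimize the p-th power norm of the errors, so that p = 2 recovers ordinary least squares and large p approaches the Chebyshev (minimax) fit; the optimizer and starting point below are assumptions, not the paper's algorithm.

```python
# Hedged sketch of a p-power least-squares (P-LS) polynomial fit: pick the
# coefficients minimising (sum_i |y_i - poly(x_i)|**p)**(1/p). For p = 2 this
# is ordinary least squares; as p grows the fit approaches the Chebyshev
# (minimax) approximation.
import numpy as np
from scipy.optimize import minimize

def p_ls_fit(x, y, degree, p):
    def cost(c):
        return np.sum(np.abs(np.polyval(c, x) - y) ** p) ** (1.0 / p)
    c0 = np.polyfit(x, y, degree)            # start from the LS solution
    return minimize(cost, c0, method="Nelder-Mead").x

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 40)
y = np.exp(x) + rng.normal(0, 0.05, x.size)

for p in (2, 4, 10):
    c = p_ls_fit(x, y, degree=3, p=p)
    print(f"p={p:2d}  max |error| = {np.max(np.abs(np.polyval(c, x) - y)):.4f}")
```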


Geophysics ◽  
1957 ◽  
Vol 22 (1) ◽  
pp. 9-21 ◽  
Author(s):  
A. E. Scheidegger ◽  
P. L. Willmore

During large-scale seismic surveys it is often impossible to arrange shot points and seismometers in a simple pattern, so the data cannot be treated as simply as those of small-scale prospecting arrays. It is shown that the problem of reducing seismic observations from m shot points and n seismometers (where there is no simple pattern of arranging these) is equivalent to solving (m+n) normal equations with (m+n) unknowns. These normal equations are linear, and the matrix of their coefficients is symmetric. The problem of inverting that matrix is solved here by the calculus of “Cracovians,” mathematical entities similar to matrices. When all the shots have been observed at all the seismometers, the solution can even be given in general form; otherwise, a certain amount of computation is necessary. An example is given.
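A minimal sketch of the reduction described above, assuming each observation is modelled as the sum of a shot term and a station term; the Cracovian algebra is replaced here by an ordinary least-squares matrix solve, and the data are synthetic.

```python
# Hedged sketch: reducing observations from m shot points and n seismometers
# to (m + n) unknowns. Assume each available observation is modelled as
# obs[i, j] ~ shot_term[i] + station_term[j]; the (m + n) normal equations
# are then solved with ordinary matrix algebra.
import numpy as np

m, n = 4, 6
rng = np.random.default_rng(3)
true_shot, true_station = rng.normal(0, 1, m), rng.normal(0, 1, n)
have = rng.random((m, n)) < 0.8                       # not all pairs observed
observed = np.zeros((m, n))
observed[have] = (true_shot[:, None] + true_station[None, :] +
                  rng.normal(0, 0.05, (m, n)))[have]

# Build the design matrix: one row per available (shot, station) observation.
rows, rhs = [], []
for i in range(m):
    for j in range(n):
        if have[i, j]:
            row = np.zeros(m + n)
            row[i] = 1.0          # shot term
            row[m + j] = 1.0      # station term
            rows.append(row)
            rhs.append(observed[i, j])
A, b = np.array(rows), np.array(rhs)

# A constant can shift freely between shot and station terms, so take the
# minimum-norm least-squares solution of the (m + n)-unknown system.
solution, *_ = np.linalg.lstsq(A, b, rcond=None)
print("shot terms   :", solution[:m].round(2))
print("station terms:", solution[m:].round(2))
```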


2013 ◽  
Vol 339 ◽  
pp. 602-607
Author(s):  
Chun Li Song ◽  
Di Chen Liu ◽  
Jun Wu ◽  
Fei Fei Dong ◽  
Lian Tu ◽  
...  

Identification and calculation of static frequency characteristics are of great significance for maintaining power system stability. In this paper, the coefficient of static frequency characteristics is fitted by the least-squares method. The frequency-deviation restriction point under different capacitances is forecast from the fitted trend of the coefficient of static frequency characteristics. The new method is then simulated and its calculation error compared.
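A minimal sketch, assuming the coefficient of static frequency characteristics is the linear slope relating steady-state power imbalance to frequency deviation; the fitting step is then a straightforward least-squares slope estimate on synthetic data.

```python
# Hedged sketch: fitting the coefficient of static frequency characteristics
# by least squares, assuming the usual linear relation dP = -K * df between
# power imbalance and steady-state frequency deviation. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
df = np.linspace(-0.3, 0.3, 20)                 # frequency deviation, Hz
K_true = 800.0                                  # MW per Hz (illustrative)
dP = -K_true * df + rng.normal(0, 10, df.size)  # measured power imbalance, MW

# Least-squares estimate of K from the slope of dP against df.
slope, intercept = np.polyfit(df, dP, 1)
print(f"fitted K = {-slope:.1f} MW/Hz (true {K_true})")
```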


1979 ◽  
Vol 101 (4) ◽  
pp. 286-291
Author(s):  
P. P. Pizzo

Observations concerning the statistical evaluation of creep data are presented. Methods currently employed in the determination of stress rupture regression lines can result in conflicting and necessarily invalid results. Anomalous behavior is principally associated with the selection of the dependent variable. However, it is the least squares method of curve fitting which introduces regression bias. Methods to improve the validity of least squares regressions are suggested.
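The dependent-variable effect mentioned above can be illustrated directly: regressing life on stress and stress on life by least squares gives two different lines, because each choice minimizes errors in one variable only. The data below are synthetic log-stress/log-life pairs, not creep data from the paper.

```python
# Hedged illustration of the dependent-variable effect: ordinary least squares
# regressing log-life on log-stress yields a different line than regressing
# log-stress on log-life, since each minimises errors in one variable only.
import numpy as np

rng = np.random.default_rng(5)
log_stress = np.linspace(1.5, 2.5, 30) + rng.normal(0, 0.05, 30)
log_life = 10.0 - 3.0 * log_stress + rng.normal(0, 0.2, 30)

# Regress life on stress, then stress on life, and compare the implied slopes.
slope_y_on_x = np.polyfit(log_stress, log_life, 1)[0]
slope_x_on_y = 1.0 / np.polyfit(log_life, log_stress, 1)[0]
print(f"slope, life regressed on stress: {slope_y_on_x:.3f}")
print(f"slope, stress regressed on life: {slope_x_on_y:.3f}")
```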


1988 ◽  
Vol 110 (4) ◽  
pp. 429-434 ◽  
Author(s):  
Hui Cheng ◽  
K. C. Gupta

Based on the design equation of mechanisms and least-squares techniques, a rapidly convergent iteration method and simple direct methods for the synthesis of mechanisms are presented. It is proved that the so-called linear superposition method is a special case of the direct methods, whose effectiveness depends upon the singular nature of the normal equations of the least-squares method as well as the smallness of the Lagrange multipliers of the compatibility equations for the mechanism. While the significance of the latter has been recognized in the literature, that of the former has not been documented. By examining the correlation matrix and the condition number of the normal equations, we show that they are near-singular. This property provides a fundamental basis for the direct methods presented in this paper. The sensitivity of solutions to the design specifications and to the precision of floating-point computations is also discussed. The theory and associated algorithms can be applied to the synthesis of any planar or spatial mechanism where the use of the least-squares technique is contemplated.
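The near-singularity argument can be illustrated with a generic ill-conditioned design matrix (a Vandermonde matrix is used here as a stand-in for the mechanism design equations): forming the normal equations squares the condition number, which is what pushes them toward singularity.

```python
# Hedged sketch of the near-singularity check: compare the condition number
# of a design matrix A with that of its normal-equations matrix A^T A.
# Squaring the condition number is what makes the normal equations of an
# ill-conditioned problem nearly singular.
import numpy as np

t = np.linspace(0, 1, 50)
A = np.vander(t, 8)                      # an ill-conditioned design matrix
print("cond(A)     = %.3e" % np.linalg.cond(A))
print("cond(A^T A) = %.3e" % np.linalg.cond(A.T @ A))
```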


2019 ◽  
Vol 1345 ◽  
pp. 042044
Author(s):  
Feihong Shen ◽  
Hanfeng Zhang ◽  
Kaichao Lin ◽  
Tong Zhang ◽  
Dongwei Guo ◽  
...  

2013 ◽  
Vol 2013 ◽  
pp. 1-4 ◽  
Author(s):  
Louis M. Houston

The least-squares method is the most popular method for fitting a polynomial curve to data. It is based on minimizing the total squared error between a polynomial model and the data. In this paper we develop a different approach that exploits the autocorrelation function. In particular, we use the nonzero-lag autocorrelation terms to produce a system of quadratic equations that can be solved together with a linear equation derived from summing the data. The maximum number of solutions depends on the degree of the polynomial. For the linear case, there are generally two solutions. Each solution is consistent with a total error of zero. Either visual examination or measurement of the total squared error is required to determine which solution fits the data. A comparison between the comparable autocorrelation-term solution and linear least squares shows negligible difference.
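One plausible reading of the linear case, offered only as a sketch: require the fitted line to match the sum of the data and the lag-1 autocorrelation term of the data, giving one linear and one quadratic equation and hence, in general, two candidate lines; the one with the smaller total squared error is kept. The paper's exact equations may differ.

```python
# Heavily hedged sketch of one reading of the linear case: match the data sum
# (linear equation in a, b) and the lag-1 autocorrelation term (quadratic
# equation), then keep the candidate with the smaller total squared error.
import numpy as np

rng = np.random.default_rng(7)
N = 40
x = np.arange(N, dtype=float)
d = 1.5 + 0.3 * x + rng.normal(0, 0.2, N)          # data with a linear trend

# Linear equation from summing the data: N*a + b*sum(x) = sum(d).
alpha, beta = d.sum() / N, x.sum() / N             # so a = alpha - beta*b

# Quadratic equation from the lag-1 term:
#   sum_i (a + b*x[i]) * (a + b*x[i+1]) = sum_i d[i] * d[i+1]
x0, x1, M = x[:-1], x[1:], N - 1
target = np.sum(d[:-1] * d[1:])
c2 = beta ** 2 * M - beta * np.sum(x0 + x1) + np.sum(x0 * x1)
c1 = -2 * alpha * beta * M + alpha * np.sum(x0 + x1)
c0 = alpha ** 2 * M - target

# Two candidate slopes; real parts taken in case noise prevents an exact match.
slopes = np.roots([c2, c1, c0]).real
candidates = [(alpha - beta * b, b) for b in slopes]
best = min(candidates, key=lambda ab: np.sum((d - ab[0] - ab[1] * x) ** 2))
print("autocorrelation-style fit: a=%.3f  b=%.3f" % best)
print("ordinary least squares   : a=%.3f  b=%.3f" % tuple(np.polyfit(x, d, 1)[::-1]))
```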

