AUTOMATIC CONTOURING OF IRREGULARLY SPACED DATA

Geophysics ◽  
1968 ◽  
Vol 33 (3) ◽  
pp. 424-430 ◽  
Author(s):  
Chester R. Pelto ◽  
Thomas A. Elkins ◽  
H. A. Boyd

Machine contouring of irregularly spaced observations can be performed in three basic steps: (1) In large areas with no data points, control values are interpolated by a specified mathematical rule. These values keep the next step “well behaved.” (2) A regional polynomial surface is fitted by least squares to the original and interpolated points. (3) The surface of step (2) is deformed smoothly to pass through the original observations. The final product is similar in appearance to hand‐drawn maps. The complete mathematical theory is developed in an appendix.
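A minimal sketch of the three-step scheme described above, assuming scattered (x, y, z) observations; the nearest-neighbour fill-in of empty regions, the quadratic trend order, and the cubic residual gridding are illustrative choices, not the paper's exact rules.

```python
import numpy as np
from scipy.interpolate import griddata

def fit_quadratic_trend(x, y, z):
    """Step 2: least-squares fit of a low-order (quadratic) regional surface."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def eval_quadratic(coef, x, y):
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return A @ coef

def contour_grid(x, y, z, nx=50, ny=50):
    xi, yi = np.meshgrid(np.linspace(x.min(), x.max(), nx),
                         np.linspace(y.min(), y.max(), ny))
    # Step 1: interpolate control values on a coarse grid so that large empty
    # regions keep the trend fit "well behaved" (nearest-neighbour fill-in).
    cx, cy = np.meshgrid(np.linspace(x.min(), x.max(), 8),
                         np.linspace(y.min(), y.max(), 8))
    cz = griddata((x, y), z, (cx, cy), method='nearest')
    xa = np.concatenate([x, cx.ravel()])
    ya = np.concatenate([y, cy.ravel()])
    za = np.concatenate([z, cz.ravel()])
    # Step 2: regional polynomial surface through original + control points.
    coef = fit_quadratic_trend(xa, ya, za)
    trend = eval_quadratic(coef, xi.ravel(), yi.ravel()).reshape(xi.shape)
    # Step 3: deform the trend smoothly so it honours the original data,
    # by gridding the residuals and adding them back to the trend.
    resid = z - eval_quadratic(coef, x, y)
    bump = griddata((x, y), resid, (xi, yi), method='cubic', fill_value=0.0)
    return xi, yi, trend + bump
```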

2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains significantly strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline, so some good baseline data points may be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search algorithm-based baseline correction method (SA) that compresses the raw spectrum by sampling it down to a dataset with a small number of data points and then converts the peak removal process into solving a search problem, in the artificial intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. Finally, the minimum points of each section are collected to form the dataset from which peaks are removed by the search algorithm. SA selects the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with those of three other baseline correction methods: the Lieber and Mahadevan–Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
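A rough sketch of the idea described above, assuming a 1-D spectrum y sampled at positions x; the section count, polynomial order, and greedy removal rule are illustrative choices, not the exact procedure from the paper.

```python
import numpy as np

def chebyshev_section_minima(x, y, n_sections=40, window=5):
    """Smooth y, split the x-range into unequal sections bounded by Chebyshev
    nodes, and keep the minimum point of each section as a baseline candidate."""
    y_smooth = np.convolve(y, np.ones(window) / window, mode='same')
    a, b = x.min(), x.max()
    k = np.arange(n_sections + 1)
    nodes = np.sort((a + b) / 2 + (b - a) / 2 * np.cos(np.pi * k / n_sections))
    xs, ys = [], []
    for lo, hi in zip(nodes[:-1], nodes[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.any():
            i = np.argmin(y_smooth[mask])
            xs.append(x[mask][i])
            ys.append(y_smooth[mask][i])
    return np.array(xs), np.array(ys)

def greedy_peak_removal(xs, ys, order=5):
    """Greedily delete the candidate with the largest positive residual while
    the mean absolute error (MAE) of the polynomial fit keeps decreasing."""
    keep = np.ones(len(xs), dtype=bool)

    def mae(mask):
        c = np.polyfit(xs[mask], ys[mask], order)
        return np.mean(np.abs(np.polyval(c, xs[mask]) - ys[mask]))

    best = mae(keep)
    while keep.sum() > order + 2:
        c = np.polyfit(xs[keep], ys[keep], order)
        resid = ys - np.polyval(c, xs)
        resid[~keep] = -np.inf
        trial = keep.copy()
        trial[np.argmax(resid)] = False
        m = mae(trial)
        if m >= best:
            break
        keep, best = trial, m
    return np.polyfit(xs[keep], ys[keep], order)  # baseline coefficients
```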


Author(s):  
Vassilios E. Theodoracatos ◽  
Vasudeva Bobba

Abstract In this paper an approach is presented for the generation of a NURBS (Non-Uniform Rational B-splines) surface from a large set of 3D data points. The main advantage of the NURBS surface representation is its ability to describe analytically both precise quadratic primitives and free-form curves and surfaces. An existing three-dimensional laser-based vision system is used to obtain the spatial point coordinates of an object surface with respect to a global coordinate system. The least-squares approximation technique is applied in both the image and world space of the digitized physical object to calculate the homogeneous vector and the control net of the NURBS surface. A new non-uniform knot vectorization process is developed on the basis of five data parametrization techniques: four existing techniques (uniform, chord length, centripetal, and affine invariant angle) and a new technique, based on surface area, developed in this study. Least-squares error distribution and surface interrogation are used to evaluate the quality of surface fairness for a minimum number of NURBS control points.
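A minimal sketch of two of the data parametrization techniques named above, chord length and centripetal, which assign parameter values to ordered 3D data points before least-squares B-spline/NURBS fitting. These are the standard textbook definitions; the helper names are illustrative.

```python
import numpy as np

def chord_length_params(points):
    """Parameters proportional to the cumulative chord length between points."""
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)])
    return t / t[-1]

def centripetal_params(points):
    """Like chord length, but using the square root of each chord length,
    which damps the influence of unusually long chords."""
    d = np.sqrt(np.linalg.norm(np.diff(points, axis=0), axis=1))
    t = np.concatenate([[0.0], np.cumsum(d)])
    return t / t[-1]

# Example: parameter values for a short polyline of 3D data points.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0], [3.0, 0.5, 0.2], [4.0, 2.0, 0.2]])
print(chord_length_params(pts))
print(centripetal_params(pts))
```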


Author(s):  
Bo Wang ◽  
Chen Sun ◽  
Keming Zhang ◽  
Jubing Chen

Abstract As a representative type of outlier, abnormal data in displacement measurement inevitably occur in full-field optical metrology and significantly affect further evaluation, especially when the strain field is calculated by differencing the displacement. In this study, an outlier removal method is proposed that recognizes and removes abnormal data in an optically measured displacement field. An iterative critical factor least squares (CFLS) algorithm is developed that uses the distance between the data points and the least-squares plane to identify the outliers. A successive boundary point algorithm is proposed to divide the measurement domain, improving the applicability and effectiveness of the CFLS algorithm. The feasibility and precision of the proposed method are discussed in detail through simulations and experiments. Results show that the outliers are reliably recognized and that the precision of the strain estimation is greatly improved by these methods.
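A simplified sketch in the spirit of the CFLS idea described above: fit a least-squares plane to a local patch of displacement data, flag points whose distance from the plane exceeds a critical factor times the residual spread, and iterate. The critical factor value and the convergence rule are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def plane_residuals(x, y, u):
    """Fit u ~ a + b*x + c*y by least squares and return the residuals."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    return u - A @ coef

def remove_outliers(x, y, u, critical_factor=3.0, max_iter=20):
    """Iteratively drop points far from the least-squares plane."""
    keep = np.ones(len(u), dtype=bool)
    for _ in range(max_iter):
        r = plane_residuals(x[keep], y[keep], u[keep])
        thresh = critical_factor * r.std()
        bad = np.zeros_like(keep)
        bad[np.flatnonzero(keep)[np.abs(r) > thresh]] = True
        if not bad.any():
            break
        keep &= ~bad
    return keep  # boolean mask of points retained as valid data
```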


1977 ◽  
Vol 14 (02) ◽  
pp. 411-415 ◽  
Author(s):  
E. J. Hannan ◽  
Marek Kanter

The least squares estimators β_j(N), j = 1, …, p, from N data points, of the autoregressive constants for a stationary autoregressive model are considered when the disturbances have a distribution attracted to a stable law of index α < 2. It is shown that N^{1/δ}(β_j(N) − β_j) converges almost surely to zero for any δ > α. Some comments are made on alternative definitions of the β_j(N).
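For concreteness, the estimators β_j(N) discussed above are the usual least-squares coefficients obtained by regressing the series on its p lags; a brief sketch of that standard construction follows, shown only to fix notation.

```python
import numpy as np

def ar_least_squares(x, p):
    """Return least-squares estimates of beta_1..beta_p for an AR(p) model."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Design matrix of lagged values: row t holds x[t-1], ..., x[t-p].
    X = np.column_stack([x[p - j - 1:N - j - 1] for j in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```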


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Fengqin Chen ◽  
Jinbo Huang ◽  
Xianjun Wu ◽  
Xiaoli Wu ◽  
Arash Arabmarkadeh

Biosurfactants are a series of organic compounds composed of two parts, a hydrophobic part and a hydrophilic part, and because they have properties such as low toxicity and biodegradability, they are widely used in the food industry. Important applications include health products, oil recycling, and biological refining. In this research, the least-squares support vector machine algorithm has been used to calculate the curves of rhamnolipid adsorption onto Amberlite XAD-2. The model is built from 204 adsorption data points. Various graphical and statistical approaches are applied to verify the correctness of the model output. The findings of this study are compared with studies that used artificial neural network (ANN) and group method of data handling (GMDH) models. The model used in this study has a lower absolute mean deviation percentage than the ANN and GMDH models, estimated at 1.71%. The least-squares support vector machine (LSSVM) is very valuable for investigating the breakthrough curve of rhamnolipid, and it can also help chemists working on biosurfactants. Moreover, our graphical interface program can help users easily determine the curves of rhamnolipid adsorption on Amberlite XAD-2.
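A minimal LSSVM regression sketch with an RBF kernel, of the general kind applied above to the adsorption data; the kernel width and regularization values are placeholders, not the tuned parameters from the study.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of feature rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LSSVM linear system for the bias b and dual weights alpha."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Evaluate the trained LSSVM model at new input points."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```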


Geophysics ◽  
1966 ◽  
Vol 31 (1) ◽  
pp. 253-259 ◽  
Author(s):  
E. L. Dougherty ◽  
S. T. Smith

The procedure used to discover subsurface formations where mineral resources may exist normally requires the accumulation and processing of large amounts of data concerning the earth’s fields. Data errors may strongly affect the conclusions drawn from the analysis. Thus, a method of checking for errors is essential. Since the field should be relatively smooth locally, a typical approach is to fit the data to a surface described by a low‐order polynomial. Deviations of data points from this surface can then be used to detect errors. Frequently a least‐squares approximation is used to determine the surface, but results could be misleading. Linear programming can be applied to give more satisfactory results. In this approach, the sum of the absolute values of the deviations is minimized rather than the squares of the deviations as in least squares. This paper describes in detail the formulation of the linear programming problem and cites an example of its application to error detection. Through this formulation, once errors are removed, the results are meaningful physically and, hence, can be used for detecting subsurface phenomena directly.
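A compact sketch of the linear-programming formulation described above: minimize the sum of absolute deviations of a low-order (here planar) surface from the data by introducing auxiliary variables e_i ≥ |residual_i|. The use of scipy.optimize.linprog is purely for illustration and is not from the original paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_plane_fit(x, y, z):
    """Fit z ~ a + b*x + c*y minimizing the sum of absolute deviations."""
    n = len(z)
    A = np.column_stack([np.ones_like(x), x, y])
    # Variables: [a, b, c, e_1..e_n]; coefficients free, deviations e >= 0.
    c_obj = np.concatenate([np.zeros(3), np.ones(n)])
    # Constraints  A@coef - e <= z  and  -(A@coef) - e <= -z.
    A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
    b_ub = np.concatenate([z, -z])
    bounds = [(None, None)] * 3 + [(0, None)] * n
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    coef, deviations = res.x[:3], res.x[3:]
    return coef, deviations  # unusually large deviations flag suspect data points
```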


1979 ◽  
Vol 25 (3) ◽  
pp. 432-438 ◽  
Author(s):  
P J Cornbleet ◽  
N Gochman

Abstract The least-squares method is frequently used to calculate the slope and intercept of the best line through a set of data points. However, least-squares regression slopes and intercepts may be incorrect if the underlying assumptions of the least-squares model are not met. Two factors in particular that may result in incorrect least-squares regression coefficients are: (a) imprecision in the measurement of the independent (x-axis) variable and (b) inclusion of outliers in the data analysis. We compared the methods of Deming, Mandel, and Bartlett in estimating the known slope of a regression line when the independent variable is measured with imprecision, and found the method of Deming to be the most useful. Significant error in the least-squares slope estimation occurs when the ratio of the standard deviation of measurement of a single x value to the standard deviation of the x-data set exceeds 0.2. Errors in the least-squares coefficients attributable to outliers can be avoided by eliminating data points whose vertical distance from the regression line exceeds four times the standard error of the estimate.
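A brief sketch of Deming regression, the method recommended above when the x values are measured with imprecision, using the standard closed-form slope; lam is the ratio of the y-error variance to the x-error variance (lam = 1.0 assumes equal imprecision on both axes).

```python
import numpy as np

def deming_slope_intercept(x, y, lam=1.0):
    """Deming regression of y on x with error-variance ratio lam."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```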


1978 ◽  
Vol 24 (4) ◽  
pp. 611-620 ◽  
Author(s):  
R B Davis ◽  
J E Thompson ◽  
H L Pardue

Abstract This paper discusses properties of several statistical parameters that are useful in judging the quality of least-squares fits of experimental data and in interpreting least-squares results. The presentation includes simplified equations that emphasize similarities and dissimilarities among the standard error of estimate, the standard deviations of slopes and intercepts, the correlation coefficient, and the degree of correlation between the least-squares slope and intercept. The equations are used to illustrate dependencies of these parameters upon experimentally controlled variables such as the number of data points and the range and average value of the independent variable. Results are interpreted in terms of which parameters are most useful for different kinds of applications. The paper also includes a discussion of joint confidence intervals that should be used when slopes and intercepts are highly correlated and presents equations that can be used to judge the degree of correlation between these coefficients and to compute the elliptical joint confidence intervals. The parabolic confidence intervals for calibration curves are also discussed briefly.
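A small sketch of several of the parameters discussed above for a straight-line least-squares fit: the standard error of estimate, the standard deviations of the slope and intercept, and the correlation between them. Standard textbook formulas are used here for illustration, not the paper's simplified equations.

```python
import numpy as np

def line_fit_statistics(x, y):
    """Compute common quality-of-fit statistics for a straight-line fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s_yx = np.sqrt(np.sum(resid ** 2) / (n - 2))      # standard error of estimate
    sxx = np.sum((x - x.mean()) ** 2)
    sd_slope = s_yx / np.sqrt(sxx)
    sd_intercept = s_yx * np.sqrt(np.sum(x ** 2) / (n * sxx))
    # Correlation between slope and intercept grows with the mean of x
    # relative to its spread, which is why centring x decouples them.
    corr_slope_intercept = -x.mean() / np.sqrt(np.sum(x ** 2) / n)
    return dict(slope=slope, intercept=intercept, s_yx=s_yx,
                sd_slope=sd_slope, sd_intercept=sd_intercept,
                corr_slope_intercept=corr_slope_intercept)
```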


2011 ◽  
Vol 90-93 ◽  
pp. 2907-2912
Author(s):  
Yu Sheng Gong ◽  
Qian Han ◽  
Li Ping Zhang

To make full use of geodetic heights measured by GNSS and to improve the accuracy of converting GNSS geodetic height to normal height, polynomial surface fitting is selected in this article to study elevation fitting. Because least-squares estimation cannot resist gross errors, robust estimation is first introduced into the data preprocessing, which effectively solves the problem of model distortion; the method is then combined with a specific engineering project to compare and analyze the accuracy of polynomial surface fitting at different orders. MATLAB is used for programming throughout, realizing automatic processing of the data.
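A brief sketch of the approach described above, assuming known height anomalies zeta (geodetic minus normal height) at benchmark points (x, y): a quadratic surface is fitted by iteratively reweighted least squares so that gross errors are down-weighted. The Huber-style weight function is an illustrative choice of robust estimator, not necessarily the one used in the article (which is implemented in MATLAB); Python is used here only for the sketch.

```python
import numpy as np

def robust_surface_fit(x, y, zeta, k=1.5, n_iter=10):
    """Quadratic surface fit of height anomalies with robust reweighting."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    w = np.ones(len(zeta))
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], zeta * sw, rcond=None)
        r = zeta - A @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12     # robust residual scale
        w = np.where(np.abs(r) <= k * s, 1.0, k * s / np.abs(r))
    return coef  # evaluate A_new @ coef to convert GNSS heights at new points
```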

