THE USE OF LINEAR PROGRAMMING TO FILTER DIGITIZED MAP DATA

Geophysics ◽  
1966 ◽  
Vol 31 (1) ◽  
pp. 253-259 ◽  
Author(s):  
E. L. Dougherty ◽  
S. T. Smith

The procedure used to discover subsurface formations where mineral resources may exist normally requires the accumulation and processing of large amounts of data concerning the earth's fields. Data errors may strongly affect the conclusions drawn from the analysis, so a method of checking for errors is essential. Since the field should be relatively smooth locally, a typical approach is to fit the data to a surface described by a low-order polynomial; deviations of data points from this surface can then be used to detect errors. Frequently a least-squares approximation is used to determine the surface, but its results can be misleading. Linear programming can be applied to give more satisfactory results: the sum of the absolute values of the deviations is minimized, rather than the sum of their squares as in least squares. This paper describes in detail the formulation of the linear programming problem and cites an example of its application to error detection. With this formulation, once errors are removed, the results are physically meaningful and hence can be used for detecting subsurface phenomena directly.
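
To make the formulation concrete, the sketch below poses the L1 polynomial fit as a linear program in the standard way: each absolute deviation is bounded by a slack variable, and the sum of the slacks is minimized. This is a minimal illustration using SciPy, not the authors' original formulation or code; the test data and flagging threshold are invented for the example.

```python
# Minimal sketch: least-absolute-deviations (L1) polynomial fit via linear
# programming, with residuals used to flag suspect data points.
import numpy as np
from scipy.optimize import linprog

def l1_polyfit(x, y, degree):
    """Fit a polynomial by minimizing the sum of absolute deviations."""
    n, p = len(x), degree + 1
    V = np.vander(x, p)                      # Vandermonde design matrix
    # Variables: p coefficients c, then n slacks t with t_i >= |y_i - (Vc)_i|.
    cost = np.concatenate([np.zeros(p), np.ones(n)])   # minimize sum(t)
    I = np.eye(n)
    A_ub = np.block([[ V, -I],               #  V c - t <= y
                     [-V, -I]])              # -V c - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n      # c free, t >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p]                         # coefficients, highest power first

# Illustrative data: a smooth trend with one gross error inserted.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * x + 0.01 * np.random.randn(20)
y[7] += 5.0                                  # the "bad" data point
coeffs = l1_polyfit(x, y, degree=1)
residuals = y - np.polyval(coeffs, x)
print("suspect points:", np.where(np.abs(residuals) > 1.0)[0])
```

Because the L1 objective is far less sensitive to gross errors than least squares, the corrupted point stands out cleanly in the residuals instead of dragging the fitted surface toward itself.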

Geophysics ◽  
1977 ◽  
Vol 42 (6) ◽  
pp. 1265-1276 ◽  
Author(s):  
Anthony F. Gangi ◽  
James N. Shapiro

An algorithm is described which iteratively solves for the coefficients of successively higher-order, least-squares polynomial fits in terms of the results for the previous, lower-order polynomial fit. The technique takes advantage of the special properties of the least-squares or Hankel matrix, for which $A_{i,j} = A_{i+1,\,j-1}$ (entries are constant along each antidiagonal). Only the first and last column vectors of the inverse matrix are needed at each stage to continue the iteration to the next higher stage. An analogous procedure may be used to determine the inverse of such least-squares-type matrices. The inverse of each square submatrix is determined from the inverse of the previous, lower-order submatrix. The results using this algorithm are compared with the method of fitting orthogonal polynomials to data points. While the latter method gives higher accuracy when high-order polynomials are fitted to the data, it requires many more computations. The increased accuracy of the orthogonal-polynomial fit is valuable when high precision of fitting is required; however, for experimental data with inherent inaccuracies, the added computations outweigh the possible benefit derived from the more accurate fitting. A Fortran listing of the algorithm is given.
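
The special property in question is the Hankel structure of the normal-equations matrix: its $(i, j)$ entry is the power sum $s_{i+j} = \sum_k x_k^{i+j}$, so it depends only on $i + j$. The sketch below is not the paper's Fortran recursion; it simply builds the system explicitly to show that moving from degree m to m + 1 reuses every previously computed power sum and adds only two new ones, which is the structure the paper's inverse-updating recursion exploits.

```python
# Sketch of the Hankel structure of the least-squares normal equations
# for polynomial fitting: A[i][j] = s_{i+j}, the (i+j)-th power sum of x.
import numpy as np

def normal_equations(x, y, degree):
    """Build the Hankel normal-equations system for a degree-m fit."""
    m = degree
    moments = np.array([np.sum(x**k) for k in range(2 * m + 1)])  # s_0..s_2m
    A = np.array([[moments[i + j] for j in range(m + 1)]
                  for i in range(m + 1)])    # constant antidiagonals: Hankel
    b = np.array([np.sum(y * x**i) for i in range(m + 1)])
    return A, b

# Each higher degree reuses the lower degree's moments plus two new ones.
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2
for m in (1, 2, 3):
    A, b = normal_equations(x, y, m)
    coeffs = np.linalg.solve(A, b)           # lowest power first
    print(m, np.round(coeffs, 6))
```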


Geophysics ◽  
1966 ◽  
Vol 31 (4) ◽  
pp. 828-829 ◽  
Author(s):  
Norman S. Neidell

I wish to comment on the short note by E. L. Dougherty and S. T. Smith titled "The Use of Linear Programming to Filter Digitized Map Data" which appeared in the February, 1966, issue of Geophysics. First, I would like to congratulate the authors for showing explicitly how Taylor's theorem can be used to justify a local polynomial fit. I think, however, that they should have made clear that this same justification is valid for the least-squares method. In essence these authors have chosen to make an approximation in the $L_1$ norm instead of the $L_2$, or least-squares, norm (see Rice, 1964).
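
For concreteness, the contrast Neidell draws is between the two fitting objectives (the notation here is supplied, not his):

$$\min_{c}\;\sum_{i}\bigl|y_i - p_c(x_i)\bigr| \quad (L_1) \qquad \text{versus} \qquad \min_{c}\;\sum_{i}\bigl(y_i - p_c(x_i)\bigr)^2 \quad (L_2),$$

where $p_c$ is the local polynomial with coefficient vector $c$. Both objectives rest on the same Taylor-theorem justification; they differ only in the norm applied to the deviations.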


2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains significantly strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline; good baseline data points may then be mistakenly identified as peak data points and artificially reassigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts the peak-removal process into a search problem, in the artificial-intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. The minimal points of each section are then collected to form a dataset for peak removal by the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with that of three other baseline correction methods: the Lieber and Mahadevan–Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that for these spectra, the baseline estimated by SA has less error than those estimated by the three other methods.
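
The preprocessing stage described above is simple enough to sketch. The code below smooths a spectrum with a moving average, splits it into unequally spaced sections at Chebyshev nodes (denser near the endpoints, which is what protects the endpoint regions), and keeps each section's minimum as a candidate baseline point. The section count and window width are assumed values for illustration; the subsequent search that minimizes the MAE objective is not reproduced here.

```python
# Sketch of SA-style preprocessing: smooth, section at Chebyshev nodes,
# and collect per-section minima as candidate baseline points.
import numpy as np

def candidate_baseline_points(wavenumber, intensity, n_sections=40, window=9):
    # Moving-average smoothing to suppress noise.
    kernel = np.ones(window) / window
    smooth = np.convolve(intensity, kernel, mode="same")
    # Chebyshev nodes cluster near the interval ends, so the sections are
    # unequally spaced and finest in the endpoint regions.
    a, b = wavenumber[0], wavenumber[-1]
    k = np.arange(1, n_sections + 1)
    nodes = np.sort((a + b) / 2
                    + (b - a) / 2 * np.cos((2 * k - 1) * np.pi / (2 * n_sections)))
    edges = np.concatenate([[a], nodes, [b]])
    # Keep the minimum of each section as a candidate baseline point.
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (wavenumber >= lo) & (wavenumber <= hi)
        if mask.any():
            i = np.argmin(np.where(mask, smooth, np.inf))
            xs.append(wavenumber[i]); ys.append(smooth[i])
    return np.array(xs), np.array(ys)

# Example: a synthetic spectrum with a quadratic baseline and one peak.
w = np.linspace(400.0, 1800.0, 1400)
s = 1e-6 * (w - 1100)**2 + np.exp(-0.5 * ((w - 1000) / 8.0)**2) \
    + 0.01 * np.random.randn(w.size)
bx, by = candidate_baseline_points(w, s)
```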


2017 ◽  
Vol 27 (3) ◽  
pp. 563-573 ◽  
Author(s):  
Rajendran Vidhya ◽  
Rajkumar Irene Hepzibah

In a real-world situation, whenever ambiguity exists in the modeling of intuitionistic fuzzy numbers (IFNs), interval-valued intuitionistic fuzzy numbers (IVIFNs) are often used to represent a range of IFNs, from the most pessimistic evaluation to the most optimistic one. IVIFNs are a construction that helps avoid such prohibitive complexity. This paper focuses on two types of arithmetic operations on interval-valued intuitionistic fuzzy numbers (IVIFNs) for solving the interval-valued intuitionistic fuzzy multi-objective linear programming problem with pentagonal intuitionistic fuzzy numbers (PIFNs), assuming different α and β cut values in a comparative manner. The objective functions involved in the problem are ranked by the ratio ranking method, and the problem is solved by the preemptive optimization method. An illustrative example with MATLAB outputs is presented to clarify the proposed approach.
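
As a crisp illustration of the solution strategy named above, the sketch below applies preemptive (lexicographic) optimization to a two-objective LP: the top-priority objective is optimized first, and its optimal value is then imposed as a constraint while the next objective is optimized. The coefficients are invented stand-ins; in the paper they would come from ranking the fuzzy objectives by the ratio ranking method, a step omitted here.

```python
# Sketch of preemptive (lexicographic) optimization on a crisp two-objective
# LP. Objectives are written as minimizations, so maximization is negated.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 2.0], [3.0, 1.0]])
b_ub = np.array([8.0, 9.0])
c1 = np.array([-3.0, -2.0])   # priority 1: maximize 3x + 2y
c2 = np.array([-1.0, -4.0])   # priority 2: maximize  x + 4y

# Stage 1: optimize the top-priority objective alone (default bounds x >= 0).
r1 = linprog(c1, A_ub=A_ub, b_ub=b_ub)
# Stage 2: optimize the next objective while holding stage 1 at its optimum,
# enforced by the extra constraint c1·x <= r1.fun.
A2 = np.vstack([A_ub, c1])
b2 = np.append(b_ub, r1.fun)
r2 = linprog(c2, A_ub=A2, b_ub=b2)
print("lexicographic optimum:", r2.x)
```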

