Gradient-boosted equivalent sources for gridding large gravity and magnetic datasets

Author(s):  
Santiago Rubén Soler ◽  
Leonardo Uieda

The equivalent source technique is a well-known method for interpolating gravity and magnetic data. It consists of defining a set of finite sources that generate the same observed field and using them to predict the values of the field at unobserved locations. The equivalent source technique has some advantages over general-purpose interpolators: the variation of the field with the height of the observation points is taken into account, and the predicted values belong to a harmonic field. This makes equivalent sources a better-suited interpolator for any data deriving from a harmonic field (such as gravity disturbances and magnetic anomalies). Nevertheless, it has one drawback: the computational cost. Estimating the coefficients of the sources that best fit the observed values is very computationally demanding: a Jacobian matrix with as many elements as the number of observation points times the number of sources must be built and then used to fit the source coefficients through a least-squares method. Increasing the number of data points can make the Jacobian matrix grow so large that it no longer fits in computer memory.

We present a gradient-boosted equivalent source method for interpolating large datasets. In it, we define small subsets of equivalent sources that are fitted against neighbouring data points. The process is carried out iteratively, fitting one subset of sources on each iteration to the residual field left by the previous iterations. This new method is inspired by the gradient-boosting technique widely used in machine learning.

We show that gradient-boosted equivalent sources are capable of producing accurate predictions by testing them against synthetic surveys. Moreover, we were able to grid a gravity dataset from Australia with more than 1.7 million points on a modest personal computer in less than half an hour.
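The iterative subset-fitting idea can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' implementation: their subsets are fitted against neighbouring data points and use a proper point-mass kernel, whereas here sequential blocks of sources and a simple inverse-distance kernel stand in. Each block fit solves only a small least-squares problem, so the full Jacobian is never held in memory at once.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic survey: observation points (x, y, z) and a "true" field from a
# single deep source, using an inverse-distance (monopole) kernel.
n_data = 200
x = rng.uniform(0, 100, n_data)
y = rng.uniform(0, 100, n_data)
z = rng.uniform(1, 5, n_data)  # uneven observation heights

def kernel(xo, yo, zo, xs, ys, zs):
    """Inverse-distance Green's function between observations and sources."""
    return 1.0 / np.sqrt((xo[:, None] - xs)**2
                         + (yo[:, None] - ys)**2
                         + (zo[:, None] - zs)**2)

data = kernel(x, y, z, np.array([50.0]), np.array([50.0]),
              np.array([-20.0])) @ np.array([500.0])

# One equivalent source beneath each data point at a constant relative depth.
src_x, src_y, src_z = x.copy(), y.copy(), z - 10.0

# Boosting-style loop: fit one small block of sources per step to the
# residual field left by the previous fits.
coeffs = np.zeros(n_data)
residual = data.copy()
subset_size = 50
for _ in range(20):
    for start in range(0, n_data, subset_size):
        idx = slice(start, min(start + subset_size, n_data))
        jac = kernel(x, y, z, src_x[idx], src_y[idx], src_z[idx])
        # Small least-squares fit against the current residual only.
        delta, *_ = np.linalg.lstsq(jac, residual, rcond=None)
        coeffs[idx] += delta
        residual -= jac @ delta

predicted = kernel(x, y, z, src_x, src_y, src_z) @ coeffs
print(np.abs(data - predicted).max())  # final misfit
```

Because each block fit minimizes the residual norm over its own coefficients (and zero update is always admissible), the residual norm can never increase from one step to the next; the sketch is a block-coordinate-descent view of the boosting idea.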

Geophysics ◽  
2014 ◽  
Vol 79 (6) ◽  
pp. J81-J90 ◽  
Author(s):  
Yaoguo Li ◽  
Misac Nabighian ◽  
Douglas W. Oldenburg

We present a reformulation of reduction to the pole (RTP) of magnetic data at low latitudes and the equator using equivalent sources. The proposed method addresses both the theoretical difficulty of low-latitude instability and the practical issue of computational cost. We prove that a positive equivalent source exists when the magnetic data are produced by normal induced magnetization, and we show that this positivity is sufficient to overcome the low-latitude instability in the space domain. We further apply a regularization term directly to the recovered RTP field to improve the solution. The use of equivalent sources also naturally enables the processing of data acquired on an uneven surface. The result is a practical algorithm that is effective in the equatorial region and can process large-scale data sets with uneven observation heights.
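The positivity constraint at the heart of the method can be illustrated with SciPy's non-negative least squares. This is only a toy sketch: the true RTP magnetic kernel is replaced by an inverse-distance one, and the geometry and values are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
x_obs = np.linspace(0, 10, 60)   # observation profile
x_src = np.linspace(0, 10, 60)   # buried equivalent-source layer
depth = 1.0

# Toy kernel: inverse distance between observation points and buried sources
# (stands in for the magnetic Green's function).
A = 1.0 / np.sqrt((x_obs[:, None] - x_src[None, :])**2 + depth**2)

true_m = np.abs(rng.normal(size=60))          # positive "magnetization"
data = A @ true_m + 0.01 * rng.normal(size=60)

# Non-negative least squares enforces the positivity that stabilizes the fit.
m_pos, misfit = nnls(A, data)
print(m_pos.min(), misfit)  # m_pos >= 0 by construction
```

The constrained solution stays non-negative even when the unconstrained least-squares solution would oscillate wildly on this smooth, ill-conditioned kernel, which is the stabilizing role the abstract attributes to positivity.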


2021 ◽  
Author(s):  
Duan Li ◽  
Jinsong Du ◽  
Chao Chen ◽  
Qing Liang ◽  
Shida Sun

Abstract. Marine magnetic surveys over oceanic ridge regions are of great interest for investigations of the structure and evolution of oceanic crust, and they have played a key role in developing the theory of plate tectonics (Dyment, 1993; Maus et al., 2007; Vine and Matthews, 1963). In this study, we propose an interpolation approach based on a dual-layer equivalent source model for generating a magnetic anomaly map from sparse survey-line data over oceanic ridge areas. In this approach, information from an ocean crust age model is used as a constraint in the inversion procedure. The constraint shapes the magnetization distribution of the equivalent sources according to crustal age. Synthetic tests show that the obtained magnetic anomalies have higher accuracy than those obtained by other interpolation methods. Moreover, even when the true magnetization directions of the sources and the background field of the synthetic model are uncertain, a good interpolation result can still be obtained. We applied the approach to magnetic data from five survey lines east of the Southeast Indian Ridge. The predicted result can help improve the lithospheric magnetic field models WDMAMv2 and EMAG2v3, in terms of both spatial resolution and consistency with the observed data.


Geophysics ◽  
1992 ◽  
Vol 57 (4) ◽  
pp. 629-636 ◽  
Author(s):  
Lindrith Cordell

Potential‐field geophysical data observed at scattered discrete points in three dimensions can be interpolated (gridded, for example, onto a level surface) by relating the point data to a continuous function of equivalent discrete point sources. The function used here is the inverse‐distance Newtonian potential. The sources, located beneath some of the data points at a depth proportional to distance to the nearest neighboring data point, are determined iteratively. Areas of no data are filled by minimum curvature. For two‐dimensional (2-D) data (all data points at the same elevation), grids calculated by minimum curvature and by equivalent sources are similar, but the equivalent‐source method can be tuned to reduce aliasing. Gravity data in an area of high topographic relief in southwest U.S.A. were gridded by minimum curvature (a 2-D algorithm) and also by equivalent sources (3-D). The minimum‐curvature grid shows strong correlation with topography, as expected, because variation in gravity effect due to variation in observation‐point elevation (topography) is ignored. However, the data gridded and reduced to a level surface at the mean observation‐point elevation, by means of the equivalent‐source method, also show strong correlation with topography even though variation in observation‐point elevation is accounted for. This can be attributed mostly to the inadequacy of constant‐density terrain correction or to data error. Three‐dimensional treatment in this example is required as a means of calculating the data onto a level surface, above regions where data and geologic sources overlap, as a necessary first step for making geologic correction, variable‐density terrain correction, and evaluating data error. Better spectral estimates are obtained by direct calculation of the Fourier transform of the equivalent‐source function than by the discrete fast Fourier transform computer algorithm.
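The scattered-point equivalent-source construction above can be sketched as follows. This is not Cordell's exact iteration: plain gradient descent on the least-squares misfit stands in for his iterative source determination, and the synthetic field is illustrative. The inverse-distance potential and the nearest-neighbour depth rule do follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 50, n)
y = rng.uniform(0, 50, n)
z = rng.uniform(0, 2, n)  # scattered stations at different elevations

# Source beneath each station at a depth proportional to the distance to the
# nearest neighbouring station, as described in the abstract.
dxy = np.hypot(x[:, None] - x, y[:, None] - y)
np.fill_diagonal(dxy, np.inf)
zs = z - dxy.min(axis=1)

# Inverse-distance Newtonian potential between stations and sources.
A = 1.0 / np.sqrt((x[:, None] - x)**2 + (y[:, None] - y)**2
                  + (z[:, None] - zs)**2)

data = np.sin(x / 8.0) + 0.5 * np.cos(y / 5.0)  # synthetic smooth field

# Iteratively determine the source strengths q (gradient descent on the
# least-squares misfit; the step size guarantees monotone convergence).
q = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2)**2
for _ in range(500):
    residual = data - A @ q
    q += step * A.T @ residual
print(np.abs(residual).max())
```

Once the strengths `q` are found, the same continuous potential can be evaluated on any surface (a level grid, for example), which is the gridding step the abstract describes.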


Geophysics ◽  
2009 ◽  
Vol 74 (5) ◽  
pp. L67-L73 ◽  
Author(s):  
Fernando Guspí ◽  
Iván Novara

We have developed an equivalent-source method for performing reduction to the pole and related transforms from magnetic data measured on unevenly spaced stations at different elevations. The equivalent source is composed of points located vertically beneath the measurement stations, and their magnetic properties are chosen in such a way that the reduced-to-the-pole magnetic field generated by them is represented by an inverse-distance Newtonian potential. This function, which attenuates slowly with distance, provides better coverage for discrete data points. The magnetization intensity is determined iteratively until the observed field is fitted within a certain tolerance related to the level of noise; thus, advantages in computer time are gained over solving large systems of equations. In the case of induced magnetization, the iteration converges well for vertical or horizontal inclinations, and results are stable if noise is taken into account properly. However, for a range of intermediate inclinations near 35°, a factor tending to zero makes it necessary to perform the reduction through a two-stage procedure, using an auxiliary magnetization direction, without significantly affecting the speed and stability of the method. The performance of the procedure was tested on a synthetic example based on a field generated on randomly scattered stations by a random set of magnetic dipoles, contaminated with noise, which is reduced to the pole for three different magnetization directions. Results provide a good approximation to the theoretical reduced-to-the-pole field using a one- or a two-stage reduction, showing minor noise artifacts when the direction is nearly horizontal. In a geophysical example with real data, the reduction to the pole was used to correct the estimated magnetization direction that produces an isolated anomaly over Sierra de San Luis, Argentina.


Geophysics ◽  
2010 ◽  
Vol 75 (3) ◽  
pp. L51-L59 ◽  
Author(s):  
Yaoguo Li ◽  
Douglas W. Oldenburg

We have developed a fast algorithm for generating an equivalent source by using fast wavelet transforms based on orthonormal, compactly supported wavelets. We apply a 2D wavelet transform to each row and column of the coefficient matrix and subsequently threshold the transformed matrix to generate a sparse representation in the wavelet domain. The algorithm then uses this sparse matrix to construct the equivalent source directly in the wavelet domain. Performing an inverse wavelet transform then yields the equivalent source in the space domain. Using upward continuation of total-field magnetic data between uneven surfaces as examples, we have compared this approach with the direct solution using the dense matrix in the space domain. We have shown that the wavelet approach can reduce the CPU time by as many as two orders of magnitude.
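The transform-and-threshold step can be demonstrated with a hand-rolled one-level Haar transform, a stand-in for the orthonormal, compactly supported wavelets used in the paper; the kernel matrix and threshold below are illustrative. The point is that a smooth potential-field coefficient matrix becomes largely sparse in the wavelet domain with negligible reconstruction error.

```python
import numpy as np

def haar2d(M):
    """One level of an orthonormal 2D Haar transform (rows, then columns)."""
    def haar1d(a):
        avg = (a[..., 0::2] + a[..., 1::2]) / np.sqrt(2)
        diff = (a[..., 0::2] - a[..., 1::2]) / np.sqrt(2)
        return np.concatenate([avg, diff], axis=-1)
    return haar1d(haar1d(M).T).T

def ihaar2d(W):
    """Exact inverse of haar2d."""
    def ihaar1d(a):
        n = a.shape[-1] // 2
        avg, diff = a[..., :n], a[..., n:]
        out = np.empty_like(a)
        out[..., 0::2] = (avg + diff) / np.sqrt(2)
        out[..., 1::2] = (avg - diff) / np.sqrt(2)
        return out
    return ihaar1d(ihaar1d(W.T).T)

# Smooth, dense stand-in for an equivalent-source coefficient matrix.
n = 64
xg = np.linspace(0, 1, n)
G = 1.0 / ((xg[:, None] - xg[None, :])**2 + 0.05**2)

W = haar2d(G)
# Threshold small wavelet coefficients to obtain a sparse representation.
W_sparse = np.where(np.abs(W) > 1e-3 * np.abs(W).max(), W, 0.0)
G_approx = ihaar2d(W_sparse)

sparsity = (W_sparse == 0).mean()
rel_err = np.abs(G - G_approx).max() / G.max()
print(sparsity, rel_err)
```

In the paper the sparse matrix is then used directly in the wavelet domain; here only the compression step itself is shown. A production code would use a multi-level transform, which sparsifies far more aggressively than this single level.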


2020 ◽  
Vol 1 (3) ◽  
Author(s):  
Maysam Abedi

The presented work examines the application of an Augmented Iteratively Re-weighted and Refined Least Squares (AIRRLS) method to construct a 3D magnetic susceptibility model from potential-field magnetic anomalies. The algorithm replaces an lp minimization problem by a sequence of weighted linear systems in which the retrieved magnetic susceptibility model successively converges to an optimum solution, with the number of iterations acting as the regularization parameter. To avoid the natural tendency of causative magnetic sources to concentrate at shallow depth, a prior depth-weighting function is incorporated into the original formulation of the objective function. The lp minimization is accelerated by inserting a preconditioned conjugate gradient (PCCG) method to solve the central system of equations in the case of large-scale magnetic field data. It is assumed that there is no remanent magnetization, since this study focuses on inversion of a geological structure with low magnetic susceptibility. The method is first applied to multi-source, noise-corrupted synthetic magnetic field data to demonstrate its suitability for 3D inversion, and then to real data pertaining to a geologically plausible porphyry copper unit. The real case study, located in Semnan province of Iran, consists of an arc-shaped porphyry andesite covered by sedimentary units which may have potential for mineral occurrences, especially porphyry copper. It is demonstrated that this structure extends down to depth, and consequently exploratory drilling is highly recommended to acquire more information about its potential for ore-bearing mineralization.
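The core reweighting loop behind an lp-minimizing inversion of this kind can be sketched as follows. This is a generic iteratively reweighted least squares (IRLS) loop, not the paper's AIRRLS code: the depth weighting and the PCCG solver are omitted, and the sensitivity matrix and sparse "susceptibility" model are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
n_data, n_model = 40, 80
G = rng.normal(size=(n_data, n_model))   # stand-in sensitivity matrix
m_true = np.zeros(n_model)
m_true[[10, 30, 55]] = [1.0, -2.0, 1.5]  # compact "susceptibility" anomalies
d = G @ m_true

# lp model norm (p=1 here) approximated by a sequence of weighted
# quadratic problems; eps keeps the weights finite, mu is the trade-off.
p, eps, mu = 1.0, 1e-6, 1e-3
m = np.zeros(n_model)
for _ in range(30):
    w = (m**2 + eps)**(p / 2 - 1)        # weights from the current model
    # Solve (G^T G + mu * diag(w)) m = G^T d; the paper uses PCCG here.
    m = np.linalg.solve(G.T @ G + mu * np.diag(w), G.T @ d)

print(np.round(m[[10, 30, 55]], 2))      # recovered anomaly amplitudes
```

Each pass re-linearizes the lp penalty around the current model, so small entries are penalized ever more strongly and the solution converges toward a compact model, which is why stopping at a chosen iteration acts as regularization.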


2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains very strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline, so some good baseline data points may be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts peak removal into a search problem, in the artificial intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections on the basis of Chebyshev nodes. The minimum point of each section is then collected to form the dataset from which peak points are removed by the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with those of three other baseline correction methods: the Lieber and Mahadevan–Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. Results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
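The section-minima construction and a greedy version of the peak-deleting search can be sketched as follows. This is a simplification: the paper's actual search strategy and parameter choices are not reproduced, and a greedy delete-while-MAE-improves loop stands in for the search algorithm.

```python
import numpy as np

# Synthetic spectrum: quadratic baseline + two peaks + noise.
x = np.linspace(0, 1, 1000)
baseline = 2 + 3 * x - 4 * x**2
peaks = 5 * np.exp(-((x - 0.3) / 0.01)**2) + 8 * np.exp(-((x - 0.7) / 0.02)**2)
rng = np.random.default_rng(7)
spectrum = baseline + peaks + 0.05 * rng.normal(size=x.size)

# 1) Smooth with a moving average to reduce noise.
k = 9
smooth = np.convolve(spectrum, np.ones(k) / k, mode="same")

# 2) Unequally spaced sections from Chebyshev nodes; 3) keep each minimum.
n_sec = 40
nodes = np.sort(0.5 * (1 + np.cos(np.pi * np.arange(n_sec + 1) / n_sec)))
mins_x, mins_y = [], []
for a, b in zip(nodes[:-1], nodes[1:]):
    mask = (x >= a) & (x <= b)
    if mask.any():
        i = np.argmin(smooth[mask])
        mins_x.append(x[mask][i])
        mins_y.append(smooth[mask][i])
mins_x, mins_y = np.array(mins_x), np.array(mins_y)

# 4) Greedy search: delete the most peak-like point while MAE improves.
def fit_mae(xs, ys):
    c = np.polyfit(xs, ys, 2)
    res = ys - np.polyval(c, xs)
    return c, np.mean(np.abs(res)), res

c, mae, res = fit_mae(mins_x, mins_y)
while len(mins_x) > 15:
    j = np.argmax(res)  # largest positive residual = most peak-like
    c2, mae2, _ = fit_mae(np.delete(mins_x, j), np.delete(mins_y, j))
    if mae2 >= mae:
        break
    mins_x, mins_y = np.delete(mins_x, j), np.delete(mins_y, j)
    c, mae, res = fit_mae(mins_x, mins_y)

est_baseline = np.polyval(c, x)
print(np.mean(np.abs(est_baseline - baseline)))  # error vs. true baseline
```

Working on a few dozen section minima instead of the full spectrum is what makes the search cheap, and the MAE objective penalizes any retained peak point heavily, which drives its deletion.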


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain in potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, because most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; the method is, however, also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
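The adaptive downsampling idea, dense coverage only where the signal varies rapidly, can be sketched on a 1D profile. The gradient threshold and the synthetic anomalies below are illustrative, not the authors' scheme.

```python
import numpy as np

# Synthetic profile: two magnetic-style anomalies on a smooth regional trend.
x = np.linspace(0, 100, 5000)
anomaly = 40 * np.exp(-((x - 30) / 2.0)**2) - 25 * np.exp(-((x - 65) / 3.0)**2)
data = anomaly + 0.2 * np.sin(0.5 * x)

# Keep every point where the local gradient is large (the anomalies), plus a
# coarse background grid in the smooth/quiet regions.
grad = np.abs(np.gradient(data, x))
keep = grad > 0.1 * grad.max()
keep[::50] = True  # coarse grid everywhere else

x_kept, d_kept = x[keep], data[keep]
print(keep.mean())  # fraction of data retained

# Verify the anomalies survive the compression: reconstruct by interpolation.
recon = np.interp(x, x_kept, d_kept)
print(np.abs(recon - data).max())
```

The retained fraction is far below 100% while the interpolated reconstruction stays close to the original profile, which mirrors the abstract's claim that the relevant signal survives despite heavy data-domain compression.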

