The equivalent data concept applied to the interpolation of potential field data

Geophysics ◽  
1994 ◽  
Vol 59 (5) ◽  
pp. 722-732 ◽  
Author(s):  
Carlos Alberto Mendonça ◽  
João B. C. Silva

The equivalent-layer calculation becomes more efficient by first converting the observed potential-field data set into a much smaller equivalent data set, thus saving considerable CPU time. This makes the equivalent-source method of data interpolation very competitive with traditional gridding techniques that ignore the fact that potential anomalies are harmonic functions. The equivalent data set is obtained with an iterative least-squares algorithm that, at each iteration, solves an underdetermined system fitting all observations selected in previous iterations plus the observation with the greatest residual from the preceding iteration. The residuals are obtained by computing a set of “predicted observations” from the parameters estimated at the current iteration and subtracting them from the observations. The use of Cholesky decomposition to implement the algorithm leads to an efficient solution update every time a new datum is processed. In addition, when applied to interpolation problems using equivalent layers, the method is optimized by approximating dot products with the discrete form of an analytic integration that can be evaluated with much less computational effort. Finally, the technique is applied to gravity data from the Equant-2 marine gravity survey offshore northern Brazil, a 2° × 2° area containing 3137 observations. Only 294 equivalent data are selected and used to interpolate the anomalies onto a regular grid with the equivalent-layer technique. For comparison, an interpolation using the minimum-curvature method was also computed, producing equivalent results. The number of equivalent observations is usually one order of magnitude smaller than the total number of observations. As a result, the saving in computer time and memory is at least two orders of magnitude compared with interpolation by an equivalent layer using all observations.
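The greedy selection loop described above can be summarized in a minimal sketch. This is not the authors' implementation: it re-solves a minimum-norm least-squares problem from scratch at each step instead of using the incremental Cholesky update, and the starting datum and stopping rule (`tol`, the assumed noise level) are illustrative assumptions.

```python
import numpy as np

def select_equivalent_data(G, d, tol):
    """Greedy 'equivalent data' selection sketch.
    G : (n_obs, n_sources) sensitivity of the equivalent layer (assumed built elsewhere)
    d : observed potential-field anomaly
    tol : residual level at which the remaining observations are considered fit
    """
    selected = [int(np.argmax(np.abs(d)))]        # assumed start: largest anomaly value
    p = np.zeros(G.shape[1])
    for _ in range(len(d)):
        Gs, ds = G[selected], d[selected]
        p, *_ = np.linalg.lstsq(Gs, ds, rcond=None)   # minimum-norm fit of the selected data
        r = d - G @ p                                  # residuals at all observations
        worst = int(np.argmax(np.abs(r)))
        if np.abs(r[worst]) <= tol:                    # every datum fit within the noise level
            break
        selected.append(worst)                         # add the worst-fit datum and iterate
    return np.array(selected), p
```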

Geophysics ◽  
2017 ◽  
Vol 82 (4) ◽  
pp. G57-G69 ◽  
Author(s):  
Fillipe C. L. Siqueira ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed a new iterative scheme for processing gravity data using a fast equivalent-layer technique. This scheme estimates a 2D mass distribution on a fictitious layer with finite horizontal dimensions, located below the observation surface and composed of a set of point masses, one directly beneath each gravity station. Our method starts from an initial mass distribution that is proportional to the observed gravity data. Iteratively, our approach updates the mass distribution by adding mass corrections that are proportional to the gravity residuals. At each iteration, the residual is computed by forward modeling the vertical component of the gravitational attraction produced by all point masses making up the equivalent layer. Our method is grounded on the excess of mass and on the positive correlation between the observed gravity data and the masses on the equivalent layer. Mathematically, the algorithm is formulated as an iterative least-squares method that requires neither matrix multiplications nor the solution of linear systems, which makes it suitable for processing large data sets. The time spent on the forward modeling accounts for much of the total computation time, but this modeling demands a small computational effort. We numerically verify the stability of our method by comparing our solution with the one obtained via the classic equivalent-layer technique with zeroth-order Tikhonov regularization. After estimating the mass distribution, we obtain the desired processed data by multiplying the matrix of Green’s functions associated with the desired processing by the estimated mass distribution. We have applied the proposed method to interpolate gravity data, calculate their horizontal components, and continue them upward (or downward). A test on field data from the Vinton salt dome, Louisiana, USA, confirms the potential of our approach for processing a large gravity data set acquired over an undulating surface.
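A minimal sketch of the iteration described above is given below. The proportionality constant `scale`, the layer depth, and the fixed iteration count are assumptions made for illustration; the paper derives the scaling from the excess-of-mass relation, and the loop-based forward modeling here simply stands in for the matrix-free computation.

```python
import numpy as np

G_SI = 6.674e-11   # gravitational constant (SI)
SI2MGAL = 1e5      # m/s^2 to mGal

def forward_gz(xs, ys, zs, x, y, z, m):
    """Vertical attraction (mGal) at stations (x, y, z) from point masses m at
    (xs, ys, zs); z is positive downward.  The explicit loop keeps memory O(N)."""
    gz = np.zeros_like(x)
    for j in range(len(m)):
        dx, dy, dz = xs[j] - x, ys[j] - y, zs[j] - z
        gz += m[j] * dz / (dx**2 + dy**2 + dz**2) ** 1.5
    return G_SI * SI2MGAL * gz

def iterative_equivalent_layer(x, y, z, dobs, depth=300.0, scale=1.0, n_iter=50):
    """Sketch of the iterative scheme: start from masses proportional to the
    data, then repeatedly add corrections proportional to the gravity residuals."""
    xs, ys, zs = x.copy(), y.copy(), z + depth   # one point mass beneath each station
    m = scale * dobs                             # initial guess proportional to the data
    for _ in range(n_iter):
        r = dobs - forward_gz(xs, ys, zs, x, y, z, m)   # gravity residuals
        m += scale * r                                   # mass correction
    return m
```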


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G129-G141
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed an efficient and very fast equivalent-layer technique for gravity data processing by modifying an iterative method grounded on an excess-mass constraint that does not require the solution of linear systems. Taking advantage of the symmetric block-Toeplitz Toeplitz-block (BTTB) structure of the sensitivity matrix that arises when regular grids of observation points and equivalent sources (point masses) are used to set up a fictitious equivalent layer, we develop an algorithm that greatly reduces the computational complexity and the RAM necessary to estimate a 2D mass distribution over the equivalent layer. The symmetric BTTB matrix is fully defined by the elements of the first column of the sensitivity matrix, which, in turn, can be embedded into a symmetric block-circulant with circulant-block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix. From the first column of the BCCB matrix, its eigenvalues can be calculated with the 2D fast Fourier transform (2D FFT), which allows the matrix-vector product of the forward modeling in the fast equivalent-layer technique to be computed readily. As a result, our method is efficient for processing very large data sets. Tests with synthetic data demonstrate the ability of our method to satisfactorily upward- and downward-continue gravity data. Our results show very small border effects and little noise amplification compared with those produced by the classic approach in the Fourier domain. In addition, they show that, whereas the running time of our method is [Formula: see text] s for processing [Formula: see text] observations, the fast equivalent-layer technique takes [Formula: see text] s with [Formula: see text]. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method when processing a large data set composed of [Formula: see text] observations.
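The core trick named above, multiplying a BTTB sensitivity matrix by a vector through its circulant embedding and the 2D FFT, can be sketched generically as follows. The kernel construction and index conventions are assumptions for illustration: `kernel[a, b]` is taken to hold the Green's-function value for grid lags `a-(ny-1)` and `b-(nx-1)`, which plays the role of the first column of the BCCB embedding.

```python
import numpy as np

def bttb_matvec(kernel, v, ny, nx):
    """FFT-based product of a BTTB matrix with a vector.
    kernel : (2*ny-1, 2*nx-1) array of Green's-function values at all grid lags
    v      : vector defined on the ny-by-nx observation grid (flattened or 2D)
    """
    V = np.zeros((2 * ny - 1, 2 * nx - 1))
    V[:ny, :nx] = np.asarray(v).reshape(ny, nx)        # zero-padded grid vector
    # circular convolution via 2D FFTs; on the indices extracted below it
    # reproduces the BTTB matrix-vector product exactly (no wrap-around)
    C = np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(V)).real
    return C[ny - 1:2 * ny - 1, nx - 1:2 * nx - 1].ravel()
```

Storage is O(N) for the kernel instead of O(N^2) for the full sensitivity matrix, and each product costs a few 2D FFTs, which is what makes the iteration fast for very large grids.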


2014 ◽  
Vol 112 (11) ◽  
pp. 2729-2744 ◽  
Author(s):  
Carlo J. De Luca ◽  
Joshua C. Kline

Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data obtained from surface electromyographic signal decomposition with >95% accuracy were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to apply sufficiently rigorous statistical tests for detecting synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization in 100% of the motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization.
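To illustrate why testing many latencies without statistical control inflates detections, here is a generic z-score test on a cross-correlation histogram of two firing-time trains. This is a textbook-style construction, not the specific implementations compared in the paper; bin width, window, and threshold are assumed values.

```python
import numpy as np

def zscore_synchronization(t1, t2, bin_ms=1.0, window_ms=100.0, z_crit=1.96):
    """Flag latency bins whose cross-correlation count exceeds the baseline mean
    by z_crit standard deviations.  Every bin is tested independently, with no
    multiple-comparison control, which is what produces spurious detections."""
    lags = np.subtract.outer(t1, t2).ravel()              # all pairwise latencies (ms)
    lags = lags[np.abs(lags) <= window_ms]
    edges = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(lags, edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    baseline = counts[np.abs(centers) > window_ms / 2]    # bins far from zero lag as baseline
    z = (counts - baseline.mean()) / baseline.std()
    return centers[z > z_crit]                            # latencies flagged as 'synchronized'
```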


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, because most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
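The idea of keeping dense coverage over anomalies and sparse coverage over quiet regions can be sketched as below. The variation measure (an along-line gradient proxy) and the thresholding are assumptions for illustration, not the adaptive criterion defined by the authors.

```python
import numpy as np

def adaptive_downsample(x, y, data, grad_threshold, quiet_stride=10):
    """Data-domain compression sketch: keep every station where the local field
    variation is large, and only every quiet_stride-th station elsewhere.
    Assumes the stations are ordered along survey lines."""
    variation = np.abs(np.gradient(data))     # crude along-line variation proxy
    keep = variation >= grad_threshold        # dense sampling over anomalies
    keep[::quiet_stride] = True               # sparse sampling of quiet regions
    return x[keep], y[keep], data[keep]
```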


2014 ◽  
Vol 37 (4) ◽  
pp. 419-439 ◽  
Author(s):  
Wenjin Chen ◽  
Robert Tenzer ◽  
Xiang Gu

2014 ◽  
Vol 644-650 ◽  
pp. 2670-2673
Author(s):  
Jun Wang ◽  
Xiao Hong Meng ◽  
Fang Li ◽  
Jun Jie Zhou

With the continuing growth in influence of near-surface geophysics, research into subsurface structure is of great significance. Geophysical imaging is one of the efficient computational tools that can be applied. This paper utilizes the inversion of potential-field data for subsurface imaging. Here, gravity data and magnetic data are inverted together with a structurally coupled inversion algorithm. The subsurface (model space) is divided into a set of rectangular cells by an orthogonal 2D mesh, and a constant property value (density and magnetic susceptibility) is assumed within each cell. The inversion matrix equation is solved as an unconstrained optimization problem with the conjugate gradient (CG) method. This imaging method is applied to synthetic data for typical models of gravity and magnetic anomalies and is tested on field data.
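A minimal sketch of the CG solve mentioned above is given here for one property alone; the structural-coupling term linking density and susceptibility is omitted, and the damping parameter `lam` is an assumed regularization, not part of the paper's formulation.

```python
import numpy as np

def cg_inversion(G, d, lam=1e-2, n_iter=200, tol=1e-8):
    """Solve the regularized normal equations (G^T G + lam*I) m = G^T d with a
    plain conjugate-gradient loop for a single-property cell model."""
    A = lambda m: G.T @ (G @ m) + lam * m      # normal-equation operator
    b = G.T @ d
    m = np.zeros(G.shape[1])
    r = b - A(m)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        m += alpha * p                          # update the cell property values
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:               # converged on the residual norm
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m
```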


Geophysics ◽  
1989 ◽  
Vol 54 (4) ◽  
pp. 497-507 ◽  
Author(s):  
Jorge W. D. Leão ◽  
João B. C. Silva

We present a new approach to perform any linear transformation of gridded potential-field data using the equivalent-layer principle. It is particularly efficient for processing areas with a large amount of data. An N × N data window is inverted using an M × M equivalent layer, with M greater than N so that the equivalent sources extend beyond the data window. Only the transformed field at the center of the data window is computed, by premultiplying the equivalent-source matrix by the row of the Green’s matrix (associated with the desired transformation) corresponding to the center of the data window. Since the inversion and the multiplication by the Green’s matrix are independent of the data, they are performed beforehand and just once for given values of N, M, and the depth of the equivalent layer. As a result, a grid operator for the desired transformation is obtained, which is applied to the data by a procedure similar to discrete convolution. The application of this procedure to reducing synthetic anomalies to the pole and computing magnetization-intensity maps shows that grid operators with N = 7 and M = 15 are sufficient to process large areas containing several interfering sources. The use of a damping factor allows the computation of meaningful maps even for unstable transformations in the presence of noise. Also, an equivalent layer larger than the data window accounts for part of the interfering sources, so that a smaller damping factor can be employed compared with other damped inversion methods. Transformations of real data from the Xingú River Basin and the Amazon Basin, Brazil, demonstrate the contribution of this procedure to improving a preliminary geologic interpretation with minimal a priori information.
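The two data-independent steps described above, building the grid operator once and then sweeping it over the grid, can be sketched as follows. The sensitivity matrix `A` and the Green's-function row `g_center` are assumed to have been built elsewhere for a chosen layer depth; `eps` is the damping factor mentioned in the abstract.

```python
import numpy as np
from scipy.signal import convolve2d

def build_grid_operator(A, g_center, eps):
    """Precompute the N*N grid operator.
    A        : (N*N, M*M) equivalent-layer sensitivity for one data window (M > N)
    g_center : (M*M,) Green's-function row of the desired transformation at the window center
    eps      : damping factor
    """
    # damped minimum-norm inversion folded into a single row of weights:
    # transformed value at the center = w @ d for any N*N data window d
    w = g_center @ A.T @ np.linalg.inv(A @ A.T + eps * np.eye(A.shape[0]))
    return w                                    # length N*N, independent of the data

def apply_grid_operator(w, grid, N):
    """Apply the precomputed operator to a gridded anomaly by a moving-window
    weighted sum (a discrete 2D correlation, done here as a flipped convolution)."""
    W = w.reshape(N, N)
    return convolve2d(grid, W[::-1, ::-1], mode='same')
```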


Author(s):  
F. Ma ◽  
J. H. Hwang

Abstract In analyzing a nonclassically damped linear system, one common procedure is to neglect those damping terms which are nonclassical and retain the classical ones. This approach is termed the method of approximate decoupling. For large-scale systems, the computational effort of approximate decoupling is at least an order of magnitude smaller than that of the method of complex modes. In this paper, the error introduced by approximate decoupling is evaluated. A tight error bound, which can be computed with relative ease, is given for this method of approximate solution. The role that modal coupling plays in the control of error is clarified. If the normalized damping matrix is strongly diagonally dominant, it is shown that adequate frequency separation is not necessary to ensure small errors.
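The decoupling step itself is simple to state in code: transform to the undamped modal coordinates and discard the off-diagonal (nonclassical) entries of the modal damping matrix. The sketch below shows only that step, not the error bound derived in the paper; the returned coupling ratio is an illustrative diagnostic, not the paper's bound.

```python
import numpy as np
from scipy.linalg import eigh

def approximate_decoupling(M, C, K):
    """Given mass, damping, and stiffness matrices, return the undamped modal
    frequencies, the classical (diagonal) part of the modal damping matrix, and
    a crude measure of how much nonclassical damping was neglected."""
    w2, Phi = eigh(K, M)                  # undamped modes, mass-normalized (Phi^T M Phi = I)
    Cm = Phi.T @ C @ Phi                  # modal damping matrix (generally full)
    C_classical = np.diag(np.diag(Cm))    # neglect the nonclassical off-diagonal terms
    coupling = np.linalg.norm(Cm - C_classical) / np.linalg.norm(Cm)
    return np.sqrt(w2), C_classical, coupling
```

Each mode can then be integrated as an independent single-degree-of-freedom oscillator using the diagonal damping, which is what makes the approach an order of magnitude cheaper than complex modes for large systems.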


2021 ◽  
Vol 77 (1) ◽  
pp. 19-27
Author(s):  
Hamish Todd ◽  
Paul Emsley

Biological macromolecules have complex three-dimensional shapes that are experimentally examined using X-ray crystallography and electron cryo-microscopy. Interpreting the data that these methods yield involves building 3D atomic models. With almost every data set, some portion of the time put into creating these models must be spent manually modifying the model in order to make it consistent with the data; this is difficult and time-consuming, in part because the data are 'blurry' in three dimensions. This paper describes the design and assessment of CootVR (available at http://hamishtodd1.github.io/cvr), a prototype computer program for performing this task in virtual reality, allowing structural biologists to build molecular models into cryo-EM and crystallographic data using their hands. CootVR was timed against Coot for a very specific model-building task, and was found to give an order-of-magnitude speedup for this task. A from-scratch model build using CootVR was also attempted; from this experience it is concluded that currently CootVR does not give a speedup over Coot overall.

