Fast iterative equivalent-layer technique for gravity data processing: A method grounded on excess mass constraint

Geophysics ◽  
2017 ◽  
Vol 82 (4) ◽  
pp. G57-G69 ◽  
Author(s):  
Fillipe C. L. Siqueira ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed a new iterative scheme for processing gravity data using a fast equivalent-layer technique. This scheme estimates a 2D mass distribution on a fictitious layer located below the observation surface and with finite horizontal dimensions, composed of a set of point masses, one directly beneath each gravity station. Our method starts from an initial mass distribution that is proportional to the observed gravity data. Iteratively, our approach updates the mass distribution by adding mass corrections that are proportional to the gravity residuals. At each iteration, the residual is computed by forward modeling the vertical component of the gravitational attraction produced by all point masses setting up the equivalent layer. Our method is grounded on the excess mass constraint and on the positive correlation between the observed gravity data and the masses on the equivalent layer. Mathematically, the algorithm is formulated as an iterative least-squares method that requires neither matrix multiplications nor the solution of linear systems, enabling the processing of large data sets. The time spent on forward modeling accounts for much of the total computation time, but this modeling demands a small computational effort. We numerically demonstrate the stability of our method by comparing our solution with the one obtained via the classic equivalent-layer technique with zeroth-order Tikhonov regularization. After estimating the mass distribution, we obtain the desired processed data by multiplying the matrix of Green's functions associated with the desired processing by the estimated mass distribution. We have applied the proposed method to interpolate gravity data, calculate their horizontal components, and continue them upward (or downward). Testing on field data from the Vinton salt dome, Louisiana, USA, confirms the potential of our approach for processing large gravity data sets over an undulating surface.
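A minimal Python sketch of this iteration, under stated assumptions: SI units with a mGal conversion, one point mass at a fixed depth below each station, and the excess-mass factor Δs/(2πG) as the data-to-mass scale. Function and variable names are ours, and the dense sensitivity is built here only for clarity; the published method evaluates the forward model on the fly instead.

```python
import numpy as np

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
SI2MGAL = 1.0e5    # m/s^2 -> mGal

def sensitivity(xs, ys, zs, xm, ym, zm):
    """N x N vertical attraction (mGal per kg) at stations (xs, ys, zs)
    from unit point masses at (xm, ym, zm); z is positive downward."""
    dx = xs[:, None] - xm[None, :]
    dy = ys[:, None] - ym[None, :]
    dz = zm[None, :] - zs[:, None]
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return SI2MGAL * G * dz / r**3

def iterative_excess_mass(d, xs, ys, zs, ds, depth=300.0, n_iter=50):
    """d: observed gz (mGal); ds: horizontal area per station (m^2);
    depth: layer depth below each station (m). Returns point masses (kg)."""
    A = sensitivity(xs, ys, zs, xs, ys, zs + depth)
    k = ds / (2.0 * np.pi * G * SI2MGAL)  # excess-mass scale factor
    m = k * d                             # initial masses ~ observed data
    for _ in range(n_iter):
        r = d - A @ m                     # residual via forward modeling
        m = m + k * r                     # mass correction ~ residual
    return m
```

Once the masses are estimated, any desired processing is a single product A_desired @ m, where A_desired holds the Green's functions of the target quantity (for example, the same kernel with stations raised by h for upward continuation).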

Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G129-G141 ◽  
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed an efficient and very fast equivalent-layer technique for gravity data processing by modifying an iterative method grounded on an excess mass constraint that does not require the solution of linear systems. Taking advantage of the symmetric block-Toeplitz Toeplitz-block (BTTB) structure of the sensitivity matrix that arises when regular grids of observation points and equivalent sources (point masses) are used to set up a fictitious equivalent layer, we develop an algorithm that greatly reduces the computational complexity and the RAM needed to estimate a 2D mass distribution over the equivalent layer. The symmetric BTTB matrix is fully determined by the elements of the first column of the sensitivity matrix, which, in turn, can be embedded into a symmetric block-circulant with circulant-block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix completely. From the first column of the BCCB matrix, its eigenvalues can be calculated using the 2D fast Fourier transform (2D FFT), which in turn allows the matrix-vector product of the forward modeling in the fast equivalent-layer technique to be computed readily. As a result, our method is efficient for processing very large data sets. Tests with synthetic data demonstrate the ability of our method to satisfactorily upward- and downward-continue gravity data. Our results show very small border effects and noise amplification compared with those produced by the classic approach in the Fourier domain. In addition, they show that, whereas the running time of our method is [Formula: see text] s for processing [Formula: see text] observations, the fast equivalent-layer technique used [Formula: see text] s with [Formula: see text]. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method when processing a large data set composed of [Formula: see text] observations.
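A sketch of the BCCB embedding and the FFT-based matrix-vector product, assuming a regular nx-by-ny grid of stations with equivalent sources at a constant vertical offset dz below them; the grid ordering, names, and mGal conversion are our choices:

```python
import numpy as np

G, SI2MGAL = 6.674e-11, 1.0e5

def bccb_eigenvalues(nx, ny, dx, dy, dz):
    """Eigenvalues of the BCCB embedding of the BTTB sensitivity via one
    2D FFT of its first column, stored as a 2nx-by-2ny array."""
    lx = np.arange(2 * nx); lx[nx:] -= 2 * nx   # wrapped lags 0..nx-1, -nx..-1
    ly = np.arange(2 * ny); ly[ny:] -= 2 * ny
    X = lx[:, None] * dx
    Y = ly[None, :] * dy
    r3 = (X**2 + Y**2 + dz**2) ** 1.5
    c = SI2MGAL * G * dz / r3                   # kernel sampled at all lags
    return np.fft.fft2(c)

def bttb_matvec(lam, v, nx, ny):
    """Sensitivity times v (stations ordered with y varying fastest),
    computed as a zero-padded 2D circular convolution."""
    V = np.zeros((2 * nx, 2 * ny))
    V[:nx, :ny] = v.reshape(nx, ny)
    out = np.fft.ifft2(lam * np.fft.fft2(V)).real
    return out[:nx, :ny].ravel()
```

Plugged into the excess-mass iteration, each residual d - A @ m becomes d - bttb_matvec(lam, m, nx, ny): one padded 2D FFT pair per iteration, O(N log N), with only the 2nx-by-2ny eigenvalue array kept in memory instead of the N-by-N matrix.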


Geophysics ◽  
1994 ◽  
Vol 59 (5) ◽  
pp. 722-732 ◽  
Author(s):  
Carlos Alberto Mendonça ◽  
João B. C. Silva

The equivalent-layer calculation becomes more efficient by first converting the observed potential-field data set to a much smaller equivalent data set, thus saving considerable CPU time. This makes the equivalent-source method of data interpolation very competitive with traditional gridding techniques that ignore the fact that potential-field anomalies are harmonic functions. The equivalent data set is obtained with an iterative least-squares algorithm that, at each iteration, solves an underdetermined system fitting all observations selected in previous iterations plus the observation with the greatest residual in the preceding iteration. The residuals are obtained by computing a set of "predicted observations" from the parameters estimated at the current iteration and subtracting them from the observations. The use of Cholesky decomposition to implement the algorithm leads to an efficient solution update every time a new datum is processed. In addition, when applied to interpolation problems using equivalent layers, the method is optimized by approximating dot products with the discrete form of an analytic integration that can be evaluated with much less computational effort. Finally, the technique is applied to gravity data in a 2° × 2° area containing 3137 observations from the Equant-2 marine gravity survey offshore northern Brazil. Only 294 equivalent data are selected and used to interpolate the anomalies onto a regular grid using the equivalent-layer technique. For comparison, an interpolation using the minimum-curvature method was also computed, producing equivalent results. The number of equivalent observations is usually one order of magnitude smaller than the total number of observations. As a result, the saving in computer time and memory is at least two orders of magnitude compared with interpolation by an equivalent layer using all observations.
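A sketch of the greedy selection loop, with our simplifications: the full N-by-M sensitivity is available, and the minimum-norm solution of the underdetermined system is recomputed from scratch each pass (the paper instead updates a Cholesky factor of the small Gram matrix as each datum is added). Tolerance and names are ours:

```python
import numpy as np

def select_equivalent_data(A, d, tol):
    """A: N x M equivalent-layer sensitivity; d: N observations.
    Returns the indices of the equivalent data and the layer masses."""
    sel = [int(np.argmax(np.abs(d)))]          # seed with the largest datum
    while True:
        Ak, dk = A[sel], d[sel]
        gram = Ak @ Ak.T                       # small k x k system
        m = Ak.T @ np.linalg.solve(gram, dk)   # minimum-norm fit of the
                                               # selected observations
        resid = d - A @ m                      # residuals at all stations
        worst = int(np.argmax(np.abs(resid)))
        if abs(resid[worst]) < tol or len(sel) == len(d):
            return np.array(sel), m
        sel.append(worst)                      # add the worst-fit datum
```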


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. ID59-ID71 ◽  
Author(s):  
Kyle Basler-Reeder ◽  
John Louie ◽  
Satish Pullammanappallil ◽  
Graham Kent

Joint seismic and gravity analyses of the San Emidio geothermal field in the northwest Basin and Range province of Nevada demonstrate that joint optimization changes interpretation outcomes. The prior interpretation of a 0.3–0.5 km deep basin gives way to a basin model deeper than 1.3 km. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts, flattening antiformal reflectors that could have been interpreted as folds. Furthermore, joint optimization provides a clearer picture of the range-front fault by increasing the depth of constrained velocities, which improves reflector coherency at depth. This technique provides new insight when applied to existing data sets and could replace the existing strategy of forward modeling to match gravity data. We have achieved stable joint optimization through simulated annealing, a global optimization algorithm that does not require an accurate initial model. The combined seismic-gravity objective function is balanced by a new approach based on the analysis of Pareto charts. Gravity modeling uses an efficient convolution model, and seismic modeling is based on the highly efficient Vidale eikonal-equation traveltime technique. Synthetic tests found that joint optimization improves velocity-model accuracy and provides velocity control below the deepest headwave raypath. Restricted offset-range migration analysis provides insights into precritical and gradient reflections in the data set.
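A minimal simulated-annealing sketch of the joint-optimization idea: a weighted sum of the two misfits is minimized with a Metropolis acceptance rule and geometric cooling. The two misfit functions below are placeholders (the paper uses eikonal traveltimes and a convolution gravity model), and the weight w stands in for the balance the authors chose from the Pareto-chart analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def seismic_misfit(m):
    # placeholder for a traveltime misfit from an eikonal solver
    return float(np.sum((m - 1.5) ** 2))

def gravity_misfit(m):
    # placeholder for a convolution-model gravity misfit
    return float(np.sum((m - 2.0) ** 2))

def joint_anneal(m0, w=0.5, t0=1.0, cooling=0.995, n_iter=5000, step=0.1):
    """Minimize w*E_seis + (1-w)*E_grav by simulated annealing; no
    accurate starting model is required."""
    m, t = m0.copy(), t0
    e = w * seismic_misfit(m) + (1 - w) * gravity_misfit(m)
    for _ in range(n_iter):
        trial = m + step * rng.standard_normal(m.shape)
        e_t = w * seismic_misfit(trial) + (1 - w) * gravity_misfit(trial)
        # Metropolis rule: downhill always, uphill with Boltzmann probability
        if e_t < e or rng.random() < np.exp((e - e_t) / t):
            m, e = trial, e_t
        t *= cooling                          # geometric cooling schedule
    return m, e
```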


Geophysics ◽  
2003 ◽  
Vol 68 (6) ◽  
pp. 1909-1916 ◽  
Author(s):  
Juan García‐Abdeslem

A method is developed for 2D forward modeling and nonlinear inversion of gravity data. The forward modeling calculates the gravity anomaly caused by a 2D source body with an assumed depth-dependent density contrast given by a cubic polynomial. The source body is bounded at depth by a smooth, curvilinear surface given by a Fourier series, which represents the basement. The weighted and damped discrete nonlinear inverse method presented here can invert gravity data to infer the geometry of the source body. The use of a Fourier series to define the basement geometry allows the interpreter to reconstruct a broad variety of geometries for the geologic structures using a small number of free parameters. Both the modeling and the inversion methods are illustrated with examples using field gravity data across the San Jacinto graben in southern California and across the Sayula basin in Jalisco, Mexico. The inversion of the San Jacinto graben residual Bouguer gravity data yields results compatible with previous interpretations of the same data set, suggesting that this geologic structure accommodates about 2.5 km of sediments. The inversion of the residual Bouguer gravity data across the Sayula basin suggests a sedimentary infill at most 1 km thick.
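A sketch of such a forward model, with our own discretization: the basement as a truncated Fourier series, the density contrast as a cubic polynomial in depth, and the body summed as 2D (infinitely long) line elements, dg = 2Gρz dA/(x² + z²). All names and the cell counts are assumptions:

```python
import numpy as np

G, SI2MGAL = 6.674e-11, 1.0e5

def basement_depth(x, L, c0, a, b):
    """Truncated Fourier series for the basement; c0, a, b are the
    free parameters of the inversion."""
    k = np.arange(1, len(a) + 1)[:, None]
    arg = 2.0 * np.pi * k * x[None, :] / L
    return c0 + a @ np.cos(arg) + b @ np.sin(arg)

def gz_2d(xo, x_edges, L, c0, a, b, poly, nz=200):
    """Gravity anomaly (mGal) at stations xo (on z = 0, z positive down)
    of a body bounded below by the basement, with density contrast
    poly[0] + poly[1]*z + poly[2]*z^2 + poly[3]*z^3 (kg/m^3).
    Assumes the basement stays below the surface (depths > 0)."""
    xc = 0.5 * (x_edges[:-1] + x_edges[1:])
    dx = np.diff(x_edges)
    h = basement_depth(xc, L, c0, a, b)
    out = np.zeros_like(xo, dtype=float)
    for xj, dxj, hj in zip(xc, dx, h):
        z_edges = np.linspace(0.0, hj, nz + 1)
        zc = 0.5 * (z_edges[:-1] + z_edges[1:])
        rho = np.polyval(poly[::-1], zc)       # cubic density contrast
        dA = dxj * np.diff(z_edges)
        out += 2.0 * G * SI2MGAL * np.sum(
            rho * zc * dA / ((xj - xo[:, None]) ** 2 + zc**2), axis=1)
    return out
```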


Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. J71-J80 ◽  
Author(s):  
Maria A. Annecchione ◽  
Pierre Keating ◽  
Michel Chouteau

Airborne gravimeters based on inertial navigation system (INS) technology are capable, in theory, of providing direct observations of the horizontal components of anomalous gravity. However, their accuracy and usefulness in geophysical and geological applications are unknown. Determining the accuracy of airborne horizontal-component data is complicated by the lack of ground-surveyed control data. We determine the accuracy of airborne vector gravity data internally, using repeatedly flown line data. Multilevel wavelet analyses of the raw vector gravity data elucidate the limiting error source for the horizontal components. We demonstrate the usefulness of the airborne horizontal-component data by performing Euler deconvolutions on real vector gravity data. The accuracy of the horizontal components is lower than that of the vertical component. Wavelet analyses of data from a test flight over Alexandria, Ontario, Canada, show that the main error source limiting the accuracy of the horizontal components is time-dependent platform alignment error. Euler deconvolutions performed on the Timmins data set show that the horizontal components help constrain the 3D locations of regional geological features. We thus conclude that the quality of the airborne horizontal-component data is sufficient to motivate their use in resource exploration and geological applications.
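A single-window Euler-deconvolution sketch of the kind used to exploit such data; the gradients would come from the measured vector components or numerical differentiation, and the structural index si, like all names here, is an assumption of this illustration:

```python
import numpy as np

def euler_window(x, y, z, f, fx, fy, fz, si):
    """Least-squares Euler solution in one data window. Solves
    (x - x0) fx + (y - y0) fy + (z - z0) fz = si (B - f)
    for the source position (x0, y0, z0) and a background level B."""
    A = np.column_stack([fx, fy, fz, si * np.ones_like(f)])
    rhs = x * fx + y * fy + z * fz + si * f
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B
```

With vector gravity, each horizontal component can supply additional equations of this form per station, which is one way the horizontal components tighten the estimated 3D source locations.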


Author(s):  
Sauro Mocetti

Abstract: This paper contributes to the growing number of studies on intergenerational mobility by providing a measure of earnings elasticity for Italy. The absence of an appropriate data set is overcome by adopting the two-sample two-stage least squares method. The analysis, based on the Survey of Household Income and Wealth, shows that intergenerational mobility is lower in Italy than in other developed countries. We also examine the reasons why the long-term labor-market success of children is related to that of their fathers.


Geophysics ◽  
2012 ◽  
Vol 77 (2) ◽  
pp. V41-V59 ◽  
Author(s):  
Olena Tiapkina ◽  
Martin Landrø ◽  
Yuriy Tyapkin ◽  
Brian Link

The advent of single-receiver-point, multicomponent geophones has necessitated that ground roll be removed in the processing flow rather than through acquisition design. Polarization filtering is a wide class of processing methods for ground-roll elimination. A number of these methods use singular value decomposition (SVD) or related transformations. We focus on a single-station SVD-based polarization filter that we consider one of the best in the industry. The method comprises two stages: (1) ground-roll detection and (2) ground-roll estimation and filtering. To detect the ground roll, a special attribute dependent on the singular values of a three-column matrix formed within a sliding time window is used. The ground roll is approximated and subtracted using the first two eigenimages of this matrix. To limit possible damage to the signal, the filter operates only within the record intervals where ground roll is detected and only within the ground-roll frequency bandwidth. We improve the ground-roll detector to make it theoretically insensitive to ambient noise and more sensitive to the presence of ground roll. The advantage of the new detector is demonstrated on synthetic and field data sets. We estimate, theoretically and with synthetic data, the attenuation of the underlying reflections that the polarization filter can cause. We show that the underlying signal always loses almost all of its energy on the vertical component and on the horizontal component in the ground-roll propagation plane, within the ground-roll frequency bandwidth. The only signal component that can retain a significant part of its energy, if it exists, is the horizontal component orthogonal to that plane. When 2D 3C field operations are conducted, the signal particle motion can deviate from the ground-roll propagation plane and can therefore retain some of its energy due to a set of offline reflections. In the case of 3D 3C seismic surveys, the reflected signal always deviates from the ground-roll propagation plane on receiver lines that do not contain the source. This is confirmed with a 2.5D 3C synthetic data set. We discuss when the filter's ability to effectively subtract the ground roll may, or may not, allow us to ignore the inevitable harm done to the underlying reflected waves.
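A sketch of the two-stage scheme on a single station, with our simplifications: non-overlapping windows, a basic singular-value linearity attribute as the detector (the paper's improved detector differs in detail), and no band-limiting to the ground-roll frequencies, which the real filter applies:

```python
import numpy as np

def svd_polarization_filter(data, win, thresh=0.9):
    """data: (ns, 3) three-component record from one station.
    Where the detector fires, subtract the first two eigenimages of the
    windowed win x 3 matrix (the ground-roll estimate)."""
    out = data.copy()
    for i0 in range(0, data.shape[0] - win + 1, win):
        M = data[i0:i0 + win]
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        # fraction of window energy captured by the first two eigenimages;
        # high values indicate elliptically polarized ground roll
        attr = (s[0]**2 + s[1]**2) / np.sum(s**2)
        if attr > thresh:
            gr = (U[:, :2] * s[:2]) @ Vt[:2]   # rank-2 ground-roll estimate
            out[i0:i0 + win] = M - gr
    return out
```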


Geophysics ◽  
2005 ◽  
Vol 70 (1) ◽  
pp. J1-J12 ◽  
Author(s):  
Lopamudra Roy ◽  
Mrinal K. Sen ◽  
Donald D. Blankenship ◽  
Paul L. Stoffa ◽  
Thomas G. Richter

Interpretation of gravity data warrants uncertainty estimation because of its inherent nonuniqueness. Although the uncertainties in model parameters cannot be completely removed, they can aid in the meaningful interpretation of results. Here we employ a simulated annealing (SA)-based technique in the inversion of gravity data to derive multilayered earth models consisting of two- and three-dimensional bodies. In our approach, we assume that the density contrast is known, and we solve for the coordinates or shapes of the causative bodies, resulting in a nonlinear inverse problem. We attempt to sample the model space extensively so as to estimate several equally likely models. We then use all the models sampled by SA to construct an approximate marginal posterior probability density function (PPD) in model space, along with moments of several orders. The correlation matrix clearly shows the interdependence of different model parameters and the corresponding trade-offs. Such correlation plots are used to study the effect of a priori information in reducing the uncertainty in the solutions. We also investigate the use of derivative information to obtain better depth resolution and to reduce the underlying uncertainties. We applied the technique to two synthetic data sets and to an airborne gravity data set collected over Lake Vostok, East Antarctica, for which a priori constraints were derived from available seismic and radar profiles. The inversion produced depths of the lake throughout the survey area along with the thickness of the sediments. The resulting uncertainties are interpreted in terms of the experimental geometry and the data errors.
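A sketch of turning SA-sampled models into posterior summaries. The Gibbs-type weighting by exp(-E/T) is one simple choice, not necessarily the paper's estimator; names and the temperature are assumptions:

```python
import numpy as np

def ppd_statistics(models, energies, temperature=1.0):
    """models: (n_samples, n_params) visited by SA; energies: their
    misfits. Returns the weighted mean, covariance, and correlation
    matrix whose off-diagonal entries expose parameter trade-offs."""
    w = np.exp(-(energies - energies.min()) / temperature)
    w /= w.sum()
    mean = w @ models
    dm = models - mean
    cov = (dm * w[:, None]).T @ dm
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)
    return mean, cov, corr, w

def marginal_ppd(models, w, i, bins=50):
    """Approximate marginal PPD of parameter i as a weighted histogram."""
    return np.histogram(models[:, i], bins=bins, weights=w, density=True)
```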


2021 ◽  
Author(s):  
Sara Sayyadi ◽  
Magnús T. Gudmundsson ◽  
Thórdís Högnadóttir ◽  
James White ◽  
Joaquín M.C. Belart ◽  
...  

The formation of the oceanic island Surtsey in the shallow ocean off the south coast of Iceland in 1963-1967 remains one of the best-studied examples of basaltic emergent volcanism to date. The island was built both by explosive, phreatomagmatic phases and by effusive activity forming lava shields that cover parts of the explosively formed tuff cones. Constraints on the subsurface structure of Surtsey come mainly from the evolution documented during the eruption and from drill cores obtained in 1979 and during the ICDP-supported SUSTAIN drilling expedition in 2017 (an inclined hole, directed 35° from the vertical). The 2017 drilling confirmed the existence of a diatreme cut into the sedimentary pre-eruption seafloor (Jackson et al., 2019).

We use 3D gravity modeling, constrained by the stratigraphy from the drillholes, to study the structure of the island and the underlying diatreme. Detailed gravity data were obtained on Surtsey in July 2014 with a station spacing of ~100 m. Density measurements on the seafloor sediments and on tephra samples from the surface were carried out using the ASTM1 protocol. By comparing the results with specific-gravity measurements of cores from the 2017 drillhole, a density contrast of about 200 kg m⁻³ was found between the lapilli tuffs of the diatreme and the seafloor sediments. Our approach is to divide the island into four main units of distinct density: (1) tuffs above sea level, (2) tuffs below sea level, (3) lavas above sea level, and (4) a lava delta below sea level, composed of breccias over which the lava advanced during the effusive eruption. The boundaries between the bodies are defined from the eruption history and the mapping done during the eruption, aided by the drill cores.

A complete Bouguer anomaly map is obtained by calculating a total terrain correction, applying the Nagy formula to dense DEMs (5 m spacing out to 1.2 km from each station, 200 m spacing between 1.2 km and 50 km) of both the island topography and the ocean bathymetry. Through both forward and inverse modeling with the GM-SYS 3D software, the results provide a 3D model of the island itself, as well as constraints on the diatreme's shape and depth.
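The terrain correction described above sums closed-form attractions of rectangular prisms built from DEM cells. A minimal sketch of that building block, one common form of the Nagy prism expression (our variable names; z positive downward, and stations assumed above all prism tops so the arctangent branch is unambiguous):

```python
import numpy as np

G, SI2MGAL = 6.674e-11, 1.0e5

def prism_gz(x1, x2, y1, y2, z1, z2, rho, xo, yo, zo):
    """Vertical attraction (mGal) of a rectangular prism of density rho
    (kg/m^3) at stations (xo, yo, zo); the closed-form kernel is
    evaluated at the eight corners with alternating signs."""
    out = np.zeros_like(np.asarray(xo, dtype=float))
    for i, xb in enumerate((x1, x2)):
        for j, yb in enumerate((y1, y2)):
            for k, zb in enumerate((z1, z2)):
                s = (-1.0) ** (i + j + k)
                X, Y, Z = xb - xo, yb - yo, zb - zo   # corner minus station
                r = np.sqrt(X**2 + Y**2 + Z**2)
                out += s * (X * np.log(Y + r) + Y * np.log(X + r)
                            - Z * np.arctan(X * Y / (Z * r)))
    return SI2MGAL * G * rho * out
```

In a terrain correction of the kind described, each DEM cell would contribute one such prism spanning its footprint and the elevation difference to the reference surface, summed over both the topography and bathymetry grids.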

