Structural inversion of gravity data using linear programming

Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. J41-J50 ◽  
Author(s):  
Tim van Zon ◽  
Kabir Roy-Chowdhury

Structural inversion of gravity data — deriving robust images of the subsurface by delineating lithotype boundaries using density anomalies — is an important goal in a range of exploration settings (e.g., ore bodies, salt flanks). Application of conventional inversion techniques in such cases, using L2-norms and regularization, produces smooth results and is thus suboptimal. We investigate an L1-norm-based approach which yields structural images without the need for explicit regularization. The density distribution of the subsurface is modeled with a uniform grid of cells. The density of each cell is inverted by minimizing the L1-norm of the data misfit using linear programming (LP) while satisfying a priori density constraints. The estimate of the noise level in a given data set is used to qualitatively determine an appropriate parameterization. The 2.5D and 3D synthetic tests adequately reconstruct the structure of the test models. The quality of the inversion depends upon a good prior estimate of the minimum depth of the anomalous body. A comparison of our results with one using truncated singular value decomposition (TSVD) on a noisy synthetic data set favors the LP-based method. There are two advantages in using LP for structural inversion of gravity data. First, it offers a natural way to incorporate a priori information regarding the model parameters. Second, it produces subsurface images with sharp boundaries (structure).
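The L1 misfit minimization with a priori density bounds maps directly onto a linear program. A minimal sketch using SciPy's `linprog` — the kernel matrix `G`, data `d`, and density bounds here are toy stand-ins, not the paper's 2.5D/3D gravity kernels:

```python
import numpy as np
from scipy.optimize import linprog

def l1_inversion(G, d, rho_min, rho_max):
    """Minimize ||G m - d||_1 subject to rho_min <= m <= rho_max.

    Auxiliary variables t bound each residual (-t <= G m - d <= t),
    so minimizing sum(t) minimizes the L1 data misfit.
    """
    n_data, n_cells = G.shape
    c = np.concatenate([np.zeros(n_cells), np.ones(n_data)])  # objective: sum t
    I = np.eye(n_data)
    A_ub = np.block([[G, -I], [-G, -I]])
    b_ub = np.concatenate([d, -d])
    bounds = [(rho_min, rho_max)] * n_cells + [(0, None)] * n_data
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n_cells]

# Toy example: cell densities recoverable exactly from noise-free data.
G = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.2]])
m_true = np.array([0.4, 0.9])
m_est = l1_inversion(G, G @ m_true, 0.0, 1.0)
```

The density bounds enter as ordinary variable bounds, which is the "natural way to incorporate a priori information" the abstract refers to.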

Geophysics ◽  
2005 ◽  
Vol 70 (1) ◽  
pp. J1-J12 ◽  
Author(s):  
Lopamudra Roy ◽  
Mrinal K. Sen ◽  
Donald D. Blankenship ◽  
Paul L. Stoffa ◽  
Thomas G. Richter

Interpretation of gravity data warrants uncertainty estimation because of its inherent nonuniqueness. Although the uncertainties in the model parameters cannot be eliminated, estimates of them can aid in the meaningful interpretation of results. Here we have employed a simulated annealing (SA)–based technique in the inversion of gravity data to derive multilayered earth models consisting of two- and three-dimensional bodies. In our approach, we assume that the density contrast is known, and we solve for the coordinates or shapes of the causative bodies, resulting in a nonlinear inverse problem. We attempt to sample the model space extensively so as to estimate several equally likely models. We then use all the models sampled by SA to construct an approximate, marginal posterior probability density function (PPD) in model space and several orders of moments. The correlation matrix clearly shows the interdependence of different model parameters and the corresponding trade-offs. Such correlation plots are used to study the effect of a priori information in reducing the uncertainty in the solutions. We also investigate the use of derivative information to obtain better depth resolution and to reduce the underlying uncertainties. We applied the technique to two synthetic data sets and an airborne-gravity data set collected over Lake Vostok, East Antarctica, for which a priori constraints were derived from available seismic and radar profiles. The inversion results produced depths of the lake in the survey area along with the thickness of sediments. The resulting uncertainties are interpreted in terms of the experimental geometry and data error.
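The SA sampling idea can be sketched for a single model parameter — here the depth of a buried point mass. The gravity formula, cooling schedule, and proposal width are illustrative assumptions, not the paper's multilayer parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 21)            # surface observation profile (km)

def gravity(z):
    # Vertical attraction of a unit point mass at depth z (constants dropped).
    return z / (x**2 + z**2) ** 1.5

z_true = 2.0
d_obs = gravity(z_true)

def misfit(z):
    return np.sum((gravity(z) - d_obs) ** 2)

# Metropolis-style SA: propose depth perturbations, keep every visited
# model so the collection can later be binned into an approximate PPD.
z, T, samples = 1.0, 1e-3, []
for _ in range(5000):
    z_new = abs(z + 0.1 * rng.standard_normal())   # keep depth positive
    dE = misfit(z_new) - misfit(z)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        z = z_new
    samples.append(z)
    T *= 0.999                                     # slow cooling

# Approximate marginal PPD: histogram of sampled depths after burn-in.
ppd, edges = np.histogram(samples[1000:], bins=40, density=True)
z_best = min(samples, key=misfit)
```

With more parameters the same histogramming over the sampled models yields the marginal PPDs and, from cross-products, the correlation matrix discussed above.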


Geophysics ◽  
1986 ◽  
Vol 51 (1) ◽  
pp. 90-97 ◽  
Author(s):  
Alan G. Jones ◽  
John H. Foster

A scheme is described whereby the error associated with the least well-resolved model eigenparameter in a magnetotelluric survey is reduced by focusing data collection on a specific range of frequencies. The scheme also gives a quantitative estimate of the statistical error associated with the least well-resolved model parameter, and thus provides the operator with an objective criterion for when to cease data collection at that location. The scheme is based on a linearization of the relationship between variations in the model parameters and the changes thereby introduced into the computed response function. The matrix of partial derivatives describing this linearization is factored orthogonally by a singular value decomposition. The scheme is illustrated by applying it to a synthetic data set. Also, the algorithm has been coded in BASIC on an HP9845 and employed in the field. An example is given of its field operation in a sedimentary-basin environment.
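The core of the scheme — factoring the matrix of partial derivatives with an SVD and attaching an error estimate to each eigenparameter — can be sketched in a few lines. The Jacobian and data error below are illustrative stand-ins; the least well-resolved eigenparameter is the one associated with the smallest singular value:

```python
import numpy as np

# Toy Jacobian: sensitivities of 6 response values to 3 model parameters.
J = np.array([[1.0, 0.2, 0.01],
              [0.9, 0.3, 0.02],
              [0.8, 0.5, 0.03],
              [0.4, 0.9, 0.05],
              [0.2, 1.0, 0.04],
              [0.1, 0.8, 0.02]])
sigma_d = 0.05                        # assumed standard error of the data

U, s, Vt = np.linalg.svd(J, full_matrices=False)

# Linearized standard error of each eigenparameter scales as sigma_d / s_i.
eigen_errors = sigma_d / s
worst = int(np.argmax(eigen_errors))  # least well-resolved eigenparameter
direction = Vt[worst]                 # its direction in model space
```

New frequencies would be chosen to increase the sensitivity of the data along `direction`, shrinking `eigen_errors[worst]` until it falls below a target threshold.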


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. E293-E299
Author(s):  
Jorlivan L. Correa ◽  
Paulo T. L. Menezes

Synthetic data provided by geoelectric earth models are a powerful tool for evaluating, a priori, the effectiveness of a controlled-source electromagnetic (CSEM) workflow. Marlim R3D (MR3D) is an open-source, complex, and realistic geoelectric model for CSEM simulations of the postsalt turbiditic reservoirs at the Brazilian offshore margin. We have developed a 3D CSEM finite-difference time-domain forward study to generate the full-azimuth CSEM data set for the MR3D earth model. To that end, we designed a full-azimuth survey with 45 towlines striking the north–south and east–west directions over a total of 500 receivers evenly spaced at 1 km intervals along the rugged seafloor of the MR3D model. To correctly represent the thin, disconnected, and complex geometries of the studied reservoirs, we built a finely discretized mesh of [Formula: see text] cells, leading to a large mesh with a total of approximately 90 million cells. We computed the six electromagnetic field components (Ex, Ey, Ez, Hx, Hy, and Hz) at six frequencies in the range of 0.125–1.25 Hz. To mimic the noise in real CSEM data, we added multiplicative noise with a 1% standard deviation to the data. Both CSEM data sets (noise free and noise added), with inline and broadside geometries, are distributed for research or commercial use, under the Creative Commons license, on the Zenodo platform.
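The noise model described above — multiplicative, with a 1% standard deviation — takes only a couple of lines; the field array below is a stand-in for any of the six computed components:

```python
import numpy as np

rng = np.random.default_rng(42)
ex_field = rng.uniform(1e-14, 1e-10, size=100_000)  # stand-in amplitudes

# Multiplicative noise: each sample is perturbed by N(0, 1%) of its own value,
# so strong and weak field values carry the same relative error.
noise_std = 0.01
ex_noisy = ex_field * (1.0 + noise_std * rng.standard_normal(ex_field.shape))

rel_err = (ex_noisy - ex_field) / ex_field
```

Multiplicative (rather than additive) noise is the usual choice for CSEM amplitudes, which span many orders of magnitude with offset.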


1997 ◽  
Vol 43 (143) ◽  
pp. 180-191 ◽  
Author(s):  
E. M. Morris ◽  
H. -P. Bader ◽  
P. Weilenmann

A physics-based snow model has been calibrated using data collected at Halley Bay, Antarctica, during the International Geophysical Year. Variations in snow temperature and density are well simulated using values for the model parameters within the range reported from other polar field experiments. The effect of uncertainty in the parameter values on the accuracy of the predictions is no greater than the effect of instrumental error in the input data. Thus, this model can be used with parameters determined a priori rather than by optimization. The model has been validated using an independent data set from Halley Bay and then used to estimate 10 m temperatures on the Antarctic Peninsula plateau over the last half-century.
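At its core, the temperature part of such a physics-based snow model is heat diffusion through the snowpack. A minimal explicit finite-difference sketch — uniform thermal properties and fixed boundary temperatures are simplifying assumptions; a real model couples density, conductivity, and the surface energy balance:

```python
import numpy as np

# 1D heat diffusion through a snow column, explicit finite differences.
n = 11
T = np.full(n, -15.0)          # initial temperature profile (deg C)
T[0], T[-1] = -20.0, -10.0     # fixed surface and base temperatures
r = 0.25                       # r = alpha*dt/dz^2, must be <= 0.5 for stability

for _ in range(5000):
    # Interior update: T_i += r * (T_{i+1} - 2 T_i + T_{i-1}).
    T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# Steady state of pure diffusion with fixed ends is a linear profile.
T_steady = np.linspace(-20.0, -10.0, n)
```

Calibration then amounts to adjusting parameters such as the diffusivity until simulated profiles match measured snow temperatures.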


Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. G1-G21 ◽  
Author(s):  
William J. Titus ◽  
Sarah J. Titus ◽  
Joshua R. Davis

We apply a Bayesian Markov chain Monte Carlo formalism to the gravity inversion of a single localized 2D subsurface object. The object is modeled as a polygon described by five parameters: the number of vertices, a density contrast, a shape-limiting factor, and the width and depth of an encompassing container. We first constrain these parameters with an interactive forward model and explicit geologic information. Then, we generate an approximate probability distribution of polygons for a given set of parameter values. From these, we determine statistical distributions such as the variance between the observed and model fields, the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the subsurface object). We introduce replica exchange to mitigate trapping in local optima and to compute model probabilities and their uncertainties. We apply our techniques to synthetic data sets and a natural data set collected across the Rio Grande Gorge Bridge in New Mexico. On the basis of our examples, we find that the occupancy probability is useful in visualizing the results, giving a “hazy” cross section of the object. We also find that the role of the container is important in making predictions about the subsurface object.
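The occupancy probability has a simple estimator: for each spatial point, the fraction of sampled polygons that contain it. A sketch using an even-odd ray-casting containment test, with two squares standing in for MCMC-sampled polygons:

```python
def point_in_polygon(px, py, poly):
    """Even-odd ray-casting test for a point against a closed polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):                    # edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def occupancy(px, py, polygons):
    """Fraction of sampled polygons containing the point (px, py)."""
    return sum(point_in_polygon(px, py, p) for p in polygons) / len(polygons)

# Two 'sampled' squares overlapping in the region [1,2] x [1,2].
samples = [[(0, 0), (2, 0), (2, 2), (0, 2)],
           [(1, 1), (3, 1), (3, 3), (1, 3)]]
```

Evaluating `occupancy` on a grid and shading by value produces the "hazy" cross section described above: 1 where every sampled polygon covers a point, fading toward 0 at poorly constrained edges.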


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G129-G141
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed an efficient and very fast equivalent-layer technique for gravity data processing by modifying an iterative method grounded on an excess mass constraint that does not require the solution of linear systems. Taking advantage of the symmetric block-Toeplitz Toeplitz-block (BTTB) structure of the sensitivity matrix that arises when regular grids of observation points and equivalent sources (point masses) are used to set up a fictitious equivalent layer, we developed an algorithm that greatly reduces the computational complexity and RAM required to estimate a 2D mass distribution over the equivalent layer. The symmetric BTTB matrix is defined entirely by the elements of the first column of the sensitivity matrix, which, in turn, can be embedded into a symmetric block-circulant with circulant-block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix. From the first column of the BCCB matrix, its eigenvalues can be calculated using the 2D fast Fourier transform (2D FFT), which can then be used to readily compute the matrix-vector product of the forward modeling in the fast equivalent-layer technique. As a result, our method is efficient for processing very large data sets. Tests with synthetic data demonstrate the ability of our method to satisfactorily upward- and downward-continue gravity data. Our results show very small border effects and noise amplification compared to those produced by the classic approach in the Fourier domain. In addition, they show that, whereas the running time of our method is [Formula: see text] s for processing [Formula: see text] observations, the fast equivalent-layer technique used [Formula: see text] s with [Formula: see text]. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method to process a large data set composed of [Formula: see text] observations.
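The FFT trick is easiest to see in one dimension: a symmetric Toeplitz matrix is fully defined by its first column and can be embedded in a circulant matrix twice its size, whose matrix-vector product diagonalizes under the FFT. A 1D sketch of the idea the paper applies blockwise to BTTB/BCCB matrices:

```python
import numpy as np

def toeplitz_matvec_fft(c, x):
    """Compute T @ x, where T is symmetric Toeplitz with first column c,
    by embedding T in a circulant matrix of size 2n and using the FFT."""
    n = len(c)
    col = np.concatenate([c, [0.0], c[-1:0:-1]])       # circulant first column
    fx = np.fft.fft(np.concatenate([x, np.zeros(n)]))  # zero-padded input
    y = np.fft.ifft(np.fft.fft(col) * fx)              # diagonalized product
    return y[:n].real                                  # top block equals T @ x

# Check against an explicitly built symmetric Toeplitz matrix.
c = np.array([4.0, 2.0, 1.0, 0.5])
x = np.array([1.0, -1.0, 2.0, 0.5])
T = np.array([[c[abs(i - j)] for j in range(4)] for i in range(4)])
y_fft = toeplitz_matvec_fft(c, x)
y_direct = T @ x
```

The cost drops from O(n²) to O(n log n) per product, and only the first column is ever stored, which is the source of the RAM savings claimed above.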


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
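The back-projection step has the flavor of a classic SIRT update in traveltime tomography: residuals are spread back into the model cells in proportion to path length. A toy sketch — the path-length matrix below is illustrative, and the paper back projects residual velocity corrections along migrated raypaths rather than plain traveltime residuals:

```python
import numpy as np

# Rows: rays; columns: cells; entries: path length of ray i in cell j.
L = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
s_true = np.array([1.0, 2.0, 3.0])    # true slowness perturbations
t_obs = L @ s_true                    # observed residual times

s = np.zeros(3)
row = L.sum(axis=1)                   # total path length per ray
col = L.sum(axis=0)                   # total path length per cell
for _ in range(2000):
    r = t_obs - L @ s                 # residuals for the current model
    s = s + (L.T @ (r / row)) / col   # back project along the rays
```

Each iteration distributes every ray's residual over the cells it crosses and averages the contributions per cell, converging to the model that reconciles all rays.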


Geophysics ◽  
1984 ◽  
Vol 49 (10) ◽  
pp. 1781-1793 ◽  
Author(s):  
Vincent Richard ◽  
Roger Bayer ◽  
Michel Cuer

The aim of this paper is to use linear inverse theory to interpret gravity surveys in mining exploration by incorporating a priori information on the densities and data in terms of Gaussian or uniform probability laws. The Bayesian approach and linear programming techniques lead to the solution of well‐posed questions resulting from the exploration process. In particular, we develop a method of measuring the possible heterogeneity within a given domain by using linear programming. These techniques are applied to gravity data taken over the massive sulfide deposit of Neves Corvo (Portugal). We show how crude constraints on the densities lead to a first estimation of the location of sources, while further geologic constraints allow us to estimate the heterogeneity and to put definite bounds on the ore masses.
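The "definite bounds" on the ore masses come from solving two linear programs: minimize and then maximize the total anomalous mass subject to the density bounds and to fitting the data within its uniform error bars. A toy sketch with SciPy's `linprog` — the kernel, cell volumes, and error level are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def mass_bounds(G, d, eps, rho_min, rho_max, vol):
    """Min and max of total mass vol @ m subject to |G m - d| <= eps
    (uniform data error law) and rho_min <= m <= rho_max."""
    n_data, n_cells = G.shape
    A_ub = np.vstack([G, -G])                      # G m <= d + eps
    b_ub = np.concatenate([d + eps, -(d - eps)])   # -G m <= -(d - eps)
    bounds = [(rho_min, rho_max)] * n_cells
    lo = linprog(vol, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    hi = linprog(-vol, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return lo.fun, -hi.fun

G = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.2]])
m_true = np.array([0.5, 0.8])
vol = np.array([1.0, 1.0])                         # cell volumes
m_lo, m_hi = mass_bounds(G, G @ m_true, 0.05, 0.0, 2.0, vol)
```

Tightening the density constraints with geologic information narrows the interval `[m_lo, m_hi]`, which is how further constraints "put definite bounds on the ore masses."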


Geophysics ◽  
2002 ◽  
Vol 67 (6) ◽  
pp. 1753-1768 ◽  
Author(s):  
Yuji Mitsuhata ◽  
Toshihiro Uchida ◽  
Hiroshi Amano

Interpretation of controlled-source electromagnetic (CSEM) data is usually based on 1-D inversions, whereas data of direct current (dc) resistivity and magnetotelluric (MT) measurements are commonly interpreted by 2-D inversions. We have developed an algorithm to invert frequency-domain vertical magnetic data generated by a grounded-wire source for a 2-D model of the earth — a so-called 2.5-D inversion. To stabilize the inversion, we adopt a smoothness constraint for the model parameters and adjust the regularization parameter objectively using a statistical criterion. A test using synthetic data from a realistic model reveals that a single source is insufficient to recover an acceptable result. In contrast, the joint use of data generated by a left-side source and a right-side source dramatically improves the inversion result. We applied our inversion algorithm to a field data set, which was transformed from long-offset transient electromagnetic (LOTEM) data acquired in a Japanese oil and gas field. As with the synthetic data set, the inversion of the joint data set automatically converged and provided a better resultant model than that of the data generated by each source alone. In addition, our 2.5-D inversion accounted for the reversals in the LOTEM measurements, which is impossible using 1-D inversions. The shallow parts (above about 1 km depth) of the final model obtained by our 2.5-D inversion agree well with those of a 2-D inversion of MT data.
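The smoothness constraint amounts to Tikhonov regularization with a roughness operator, where the regularization parameter trades data fit against model smoothness. A minimal linear sketch — the paper adjusts the parameter with a statistical criterion, whereas here it is simply scanned, and the forward operator is a toy stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
i = np.arange(n)
G = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)  # toy smoothing kernel
m_true = np.sin(2.0 * np.pi * i / n)
d = G @ m_true + 0.01 * rng.standard_normal(n)

# Second-difference roughness operator (the smoothness constraint).
L = np.diff(np.eye(n), n=2, axis=0)

def invert(lam):
    """Minimize ||G m - d||^2 + lam * ||L m||^2 via the normal equations."""
    return np.linalg.solve(G.T @ G + lam * (L.T @ L), G.T @ d)

# Scan the regularization parameter: as lam grows, the data misfit grows
# and the model roughness shrinks; a criterion picks the compromise.
lams = [1e-4, 1e-2, 1.0]
misfits = [np.linalg.norm(G @ invert(l) - d) for l in lams]
roughness = [np.linalg.norm(L @ invert(l)) for l in lams]
```

A statistical criterion such as the one used in the paper automates the choice along this trade-off curve instead of picking it by eye.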

