3-D resistivity inversion using the finite‐element method

Geophysics ◽  
1994 ◽  
Vol 59 (12) ◽  
pp. 1839-1848 ◽  
Author(s):  
Yutaka Sasaki

With the increased availability of faster computers, it is now practical to employ numerical modeling techniques to invert resistivity data for 3-D structure. Full and approximate 3-D inversion methods using the finite‐element solution for the forward problem have been developed. Both methods use reciprocity for efficient evaluation of the partial derivatives of apparent resistivity with respect to model resistivities. In the approximate method, the partial derivatives are approximated by those for a homogeneous half‐space, which further reduces the computation time and memory requirements. The methods are applied to synthetic data sets from 3-D models to illustrate their effectiveness. They give a good approximation of the actual 3-D structure after several iterations in practical situations where model inadequacy and topographic effects are present. Comparisons of numerical examples show that the full inversion method gives better resolution, particularly for near‐surface features, than does the approximate method. Since the full derivatives are more sensitive to local resistivity variations than are the approximate derivatives, the resolution of the full method may be improved further as the finite‐element solutions are computed more accurately and more efficiently.
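As an illustration of the difference between the two approaches, the sketch below contrasts a Gauss‐Newton update that recomputes the Jacobian at every iteration (the "full" method) with one that freezes the Jacobian of the starting model (in the spirit of the approximate, homogeneous‐half‐space derivatives). The forward operator, model values, and damping are invented toy stand-ins, not Sasaki's finite‐element formulation.

```python
import numpy as np

def forward(m):
    # hypothetical, mildly nonlinear forward operator standing in for the
    # finite-element solver (invented for illustration only)
    return np.array([2.0 * m[0] + 0.2 * m[1] ** 2,
                     0.3 * m[0] + 3.0 * m[1],
                     m[0] + m[1]])

def jacobian(m, h=1e-6):
    # finite-difference sensitivities; the paper computes these via reciprocity
    J = np.zeros((3, 2))
    for j in range(2):
        dm = np.zeros(2)
        dm[j] = h
        J[:, j] = (forward(m + dm) - forward(m)) / h
    return J

def gauss_newton(d_obs, m0, full=True, n_iter=10, damp=1e-3):
    m = m0.copy()
    J = jacobian(m)              # "approximate" variant freezes this Jacobian
    for _ in range(n_iter):
        if full:
            J = jacobian(m)      # "full" variant recomputes it each iteration
        r = d_obs - forward(m)
        m = m + np.linalg.solve(J.T @ J + damp * np.eye(2), J.T @ r)
    return m

m_true = np.array([2.0, 3.0])
d_obs = forward(m_true)
m_full = gauss_newton(d_obs, np.array([1.0, 1.0]), full=True)
m_appr = gauss_newton(d_obs, np.array([1.0, 1.0]), full=False)
```

For this mildly nonlinear toy both variants converge; the abstract's point is that the frozen (approximate) derivatives trade some resolution for large savings in computation and memory.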

Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. U109-U119
Author(s):  
Pengyu Yuan ◽  
Shirui Wang ◽  
Wenyi Hu ◽  
Xuqing Wu ◽  
Jiefu Chen ◽  
...  

A deep-learning-based workflow is proposed in this paper to solve the first-arrival picking problem for near-surface velocity model building. Traditional methods, such as the short-term average/long-term average method, perform poorly when the signal-to-noise ratio is low or near-surface geologic structures are complex. This challenging task is formulated as a segmentation problem accompanied by a novel postprocessing approach to identify pickings along the segmentation boundary. The workflow includes three parts: a deep U-net for segmentation, a recurrent neural network (RNN) for picking, and a weight adaptation approach to be generalized for new data sets. In particular, we have evaluated the importance of selecting a proper loss function for training the network. Instead of taking an end-to-end approach to solve the picking problem, we emphasize the performance gain obtained by using an RNN to optimize the picking. Finally, we adopt a simple transfer learning scheme and test its robustness via a weight adaptation approach to maintain the picking performance on new data sets. Our tests on synthetic data sets reveal the advantage of our workflow compared with existing deep-learning methods that focus only on segmentation performance. Our tests on field data sets illustrate that a good postprocessing picking step is essential for correcting the segmentation errors and that the overall workflow is efficient in minimizing human interventions for the first-arrival picking task.
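The short-term average/long-term average baseline mentioned above can be sketched in a few lines; the window lengths, threshold, and synthetic trace are illustrative choices, not settings from the paper.

```python
import numpy as np

def sta_lta_pick(trace, n_sta=5, n_lta=50, threshold=3.0):
    # classical STA/LTA first-break picker: trigger where the ratio of a
    # short-term energy average (ahead) to a long-term average (behind)
    # first exceeds the threshold
    energy = trace.astype(float) ** 2
    csum = np.cumsum(np.concatenate(([0.0], energy)))
    for i in range(n_lta, len(trace) - n_sta):
        sta = (csum[i + n_sta] - csum[i]) / n_sta
        lta = (csum[i] - csum[i - n_lta]) / n_lta
        if lta > 0 and sta / lta > threshold:
            return i
    return None

# synthetic trace: weak noise, then an "arrival" at sample 200
rng = np.random.default_rng(0)
trace = 0.01 * rng.standard_normal(400)
trace[200:] += np.sin(0.3 * np.arange(200))
pick = sta_lta_pick(trace, threshold=10.0)  # conservative threshold for this clean example
```

On noisy or complex field data this ratio test breaks down, which is exactly the failure mode the segmentation-plus-RNN workflow is designed to avoid.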


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. S197-S205 ◽  
Author(s):  
Zhaolun Liu ◽  
Abdullah AlTheyab ◽  
Sherif M. Hanafy ◽  
Gerard Schuster

We have developed a methodology for detecting the presence of near-surface heterogeneities by naturally migrating backscattered surface waves in controlled-source data. The near-surface heterogeneities must be located within a depth of approximately one-third the dominant wavelength [Formula: see text] of the strong surface-wave arrivals. This natural migration method does not require knowledge of the near-surface phase-velocity distribution because it uses the recorded data to approximate the Green’s functions for migration. Prior to migration, the backscattered data are separated from the original records, and the band-passed filtered data are migrated to give an estimate of the migration image at a depth of approximately one-third [Formula: see text]. Each band-passed data set gives a migration image at a different depth. Results with synthetic data and field data recorded over known faults validate the effectiveness of this method. Migrating the surface waves in recorded 2D and 3D data sets accurately reveals the locations of known faults. The limitation of this method is that it requires a dense array of receivers with a geophone interval less than approximately one-half [Formula: see text].
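The depth mapping described above, in which each band-passed image sits at roughly one-third the dominant wavelength, is a one-line conversion. The phase velocity here is an assumed interpretation input; the migration itself never requires it.

```python
def pseudodepth(phase_velocity_m_s, center_freq_hz):
    # depth of investigation ~ one-third of the dominant surface-wave
    # wavelength for a given band-pass center frequency
    wavelength = phase_velocity_m_s / center_freq_hz
    return wavelength / 3.0

# e.g., a 900 m/s surface wave band-passed around 15 Hz images ~20 m depth
depth_m = pseudodepth(900.0, 15.0)
```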


Geophysics ◽  
1985 ◽  
Vol 50 (11) ◽  
pp. 1701-1720 ◽  
Author(s):  
Glyn M. Jones ◽  
D. B. Jovanovich

A new technique is presented for the inversion of head‐wave traveltimes to infer near‐surface structure. Traveltimes computed along intersecting pairs of refracted rays are used to reconstruct the shape of the first refracting horizon beneath the surface and variations in refractor velocity along this boundary. The information derived can be used as the basis for further processing, such as the calculation of near‐surface static delays. One advantage of the method is that the shape of the refractor is determined independently of the refractor velocity. With multifold coverage, rapid lateral changes in refractor geometry or velocity can be mapped. Two examples of the inversion technique are presented: one uses a synthetic data set; the other is drawn from field data shot over a deep graben filled with sediment. The results obtained using the synthetic data validate the method and support the conclusions of an error analysis, in which errors in the refractor velocity determined using receivers to the left and right of the shots are of opposite sign. The true refractor velocity therefore falls between the two sets of estimates. The refraction image obtained by inversion of the set of field data is in good agreement with a constant‐velocity reflection stack and illustrates that the ray inversion method can handle large lateral changes in refractor velocity or relief.
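A simpler cousin of the ray-pair method, the classical two-layer intercept-time calculation, shows how a head-wave branch yields refractor velocity from its slope and depth from its intercept; the velocities and depth below are made-up values.

```python
import numpy as np

def head_wave_times(x, v1, v2, h):
    # two-layer head-wave traveltime: t = x/v2 + t_i,
    # with intercept time t_i = 2*h*cos(theta_c)/v1, theta_c = asin(v1/v2)
    theta_c = np.arcsin(v1 / v2)
    t_i = 2.0 * h * np.cos(theta_c) / v1
    return x / v2 + t_i

def invert_head_waves(x, t, v1):
    # least-squares line fit: slope gives 1/v2, intercept gives refractor depth
    slope, intercept = np.polyfit(x, t, 1)
    v2 = 1.0 / slope
    theta_c = np.arcsin(v1 / v2)
    h = intercept * v1 / (2.0 * np.cos(theta_c))
    return v2, h

x = np.linspace(200.0, 1000.0, 17)   # offsets beyond the crossover distance (m)
t = head_wave_times(x, v1=1000.0, v2=2500.0, h=50.0)
v2_est, h_est = invert_head_waves(x, t, v1=1000.0)
```

The paper's contribution goes well beyond this flat-layer case: by pairing intersecting refracted rays it separates refractor shape from refractor velocity and tolerates rapid lateral changes in both.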


Geophysics ◽  
2009 ◽  
Vol 74 (1) ◽  
pp. E75-E91 ◽  
Author(s):  
Gong Li Wang ◽  
Carlos Torres-Verdín ◽  
Jesús M. Salazar ◽  
Benjamin Voss

In addition to reliability and stability, the efficiency and expediency of inversion methods have long been a strong concern for their routine application by well-log interpreters. We have developed and successfully validated a new inversion method to estimate 2D parametric spatial distributions of electrical resistivity from array-induction measurements acquired in a vertical well. The central component of the method is an efficient approximation to Fréchet derivatives in which both the incident and adjoint fields are precomputed and kept unchanged during inversion. To further enhance the overall efficiency of the inversion, we combined the new approximation with both the improved numerical mode-matching method and domain decomposition. Examples of application with synthetic data sets show that the new method is computationally efficient and capable of retrieving the original model resistivities even in the presence of noise, performing equally well at both high and low contrasts of formation resistivity. In thin resistive beds, the new inversion method estimates more accurate resistivities than standard commercial deconvolution software. We also considered examples of application with field data sets that confirm the new method can successfully process a large data set that includes 200 beds in approximately [Formula: see text] of CPU time on a desktop computer. In addition to 2D parametric spatial distributions of electrical resistivity, the new inversion method provides a qualitative indicator of the uncertainty of the estimated parameters based on the estimator’s covariance matrix. The uncertainty estimator provides a qualitative measure of the nonuniqueness of the estimated resistivity parameters when the data misfit lies within the measurement error (noise).


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. M29-M41 ◽  
Author(s):  
Mahdi H. Almutlaq ◽  
Gary F. Margrave

We evaluated the concept of surface-consistent matching filters for processing time-lapse seismic data, in which matching filters are convolutional filters that minimize the sum-squared error between two signals. Because in the Fourier domain a matching filter is the spectral ratio of the two signals, we extended the well-known surface-consistent hypothesis such that the data term is a trace-by-trace spectral ratio of two data sets instead of only one (i.e., surface-consistent deconvolution). To avoid unstable division of spectra, we computed the spectral ratios in the time domain by first designing trace-sequential, least-squares matching filters, then Fourier transforming them. A subsequent least-squares solution then factored the trace-sequential matching filters into four operators: two surface-consistent (source and receiver) and two subsurface-consistent (offset and midpoint). We evaluated a time-lapse synthetic data set with nonrepeatable acquisition parameters, complex near-surface geology, and a variable subsurface reservoir layer. We computed the four-operator surface-consistent matching filters from two surveys, baseline and monitor, then applied these matching filters to the monitor survey to match it to the baseline survey over a temporal window where changes were not expected. This algorithm significantly reduced the effect of most of the nonrepeatable parameters, such as differences in source strength, receiver coupling, wavelet bandwidth and phase, and static shifts. We computed the normalized root mean square difference on raw stacked data (baseline and monitor) and obtained a mean value of 70%. This value was significantly reduced after applying the 4C surface-consistent matching filters to about 13.6% computed from final stacks.
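The trace-by-trace, time-domain matching-filter design described above can be sketched as an ordinary least-squares problem; `numpy.linalg.lstsq` stands in for whatever solver the authors used, and the three-sample distortion filter is an invented example.

```python
import numpy as np

def matching_filter(a, b, nf):
    # find f of length nf minimizing ||b - f * a||^2, so that the monitor
    # trace a, after filtering, matches the baseline trace b; in the Fourier
    # domain this filter is the spectral ratio of the two traces
    n = len(a) + nf - 1
    A = np.zeros((n, nf))          # convolution matrix of a
    for j in range(nf):
        A[j:j + len(a), j] = a
    b_pad = np.concatenate([b, np.zeros(n - len(b))])
    f, *_ = np.linalg.lstsq(A, b_pad, rcond=None)
    return f

rng = np.random.default_rng(1)
a = rng.standard_normal(100)            # "monitor" trace
true_f = np.array([0.5, 1.0, -0.3])     # invented distortion between surveys
b = np.convolve(a, true_f)              # "baseline" = distorted monitor
f_est = matching_filter(a, b, nf=3)
```

The paper then goes a step further: it factors a whole survey of such filters into surface-consistent source, receiver, offset, and midpoint operators, rather than applying them trace by trace.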


Geophysics ◽  
2000 ◽  
Vol 65 (4) ◽  
pp. 1128-1141 ◽  
Author(s):  
Juan García‐Abdeslem

A description is given of numerical methods for 2-D gravity modeling and nonlinear inversion. The forward model solution is suitable for calculating the gravity anomaly caused by a 2-D source body with depth‐dependent density that is laterally bounded by continuous surfaces and can easily accommodate different kinds of geologic structures. The weighted and damped discrete nonlinear inverse method addressed here can invert both the density and the geometry of the source body. Both the modeling and inversion methods are illustrated with several examples using synthetic data and two field gravity data sets—one over a sulfide ore body and the other across a sedimentary basin. A sensitivity analysis is carried out for the resulting solutions by means of the resolution, covariance, and correlation matrices, providing insight into the capabilities and limitations of the inversion method. The inversion of synthetic data provides meaningful results, showing that the method is robust in the presence of noise. Its sensitivity analysis indicates almost perfect resolution and small covariance, but high correlation between some parameters. Differences in the asperity of the inverted field data sets turned out to be important for the inversion capabilities of the algorithm, making a significant difference in the resolution achieved, its covariance, and the degree of correlation among parameters.
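A minimal 2-D gravity forward model in the same family, the anomaly of an infinite horizontal cylinder with uniform density contrast, illustrates the kind of calculation involved; the geometry and density contrast are invented, and the paper's body is more general (depth-dependent density, arbitrary lateral bounds).

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cylinder_gz(x, depth, radius, drho):
    # vertical gravity of an infinite horizontal cylinder (line-mass
    # equivalence): gz = 2*pi*G*R^2*drho * z / (x^2 + z^2), in m/s^2
    return 2.0 * np.pi * G * radius**2 * drho * depth / (x**2 + depth**2)

x = np.linspace(-500.0, 500.0, 101)                        # profile positions (m)
gz_mgal = cylinder_gz(x, depth=100.0, radius=50.0, drho=500.0) * 1e5  # to mGal
```

A nonlinear inversion of the kind described would iterate on depth, radius, or density contrast until such a predicted profile fits the observed anomaly.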


Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. KS77-KS85 ◽  
Author(s):  
Yangyang Yu ◽  
Chuntao Liang ◽  
Furong Wu ◽  
Xuben Wang ◽  
Gang Yu ◽  
...  

We have developed the joint source scanning algorithm (JSSA) to determine the locations and focal mechanisms (FMs) of microseismic events simultaneously. However, the computational expense of JSSA is too high to meet the requirements of real-time monitoring in industrial production. We have therefore developed several scanning schemes to reduce computation time. A multistage scanning scheme can significantly improve efficiency while retaining accuracy. For the optimized joint inversion method, a series of tests has been carried out using actual field data and synthetic data to evaluate the accuracy of the method, as well as its dependence on the noise level, source depths, FMs, and other factors. The surface-based arrays better constrain horizontal location errors ([Formula: see text]) and angular errors of P-axes (within 10° for [Formula: see text]). For sources with varying rakes, dips, strikes, and depths, the errors are mostly controlled by the partition of positive and negative polarities among the quadrants. More evenly partitioned polarities in different quadrants yield better results for locations and FMs. Nevertheless, even when some FMs are poorly resolved, the optimized JSSA method can still significantly improve location accuracies.
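The core source-scanning idea, stacking amplitudes along predicted moveouts over a grid of candidate locations, can be sketched as follows. JSSA additionally scans over focal mechanisms and uses multistage scanning to cut cost; this location-only toy with a constant velocity and impulsive traces omits both.

```python
import numpy as np

def scan_locations(traces, dt, receivers, candidates, v):
    # for each candidate location, stack |amplitude| at the predicted
    # straight-ray traveltime on every trace; the best-stacking candidate wins
    best, best_val = None, -np.inf
    for c in candidates:
        val = 0.0
        for trace, r in zip(traces, receivers):
            idx = int(round(np.linalg.norm(c - r) / v / dt))
            if idx < len(trace):
                val += abs(trace[idx])
        if val > best_val:
            best, best_val = c, val
    return best

# synthetic: impulsive source at (500, 500, 300) m, four surface receivers
v, dt = 3000.0, 0.002
receivers = np.array([[x, y, 0.0] for x in (0.0, 1000.0) for y in (0.0, 1000.0)])
src = np.array([500.0, 500.0, 300.0])
traces = []
for r in receivers:
    tr = np.zeros(1000)
    tr[int(round(np.linalg.norm(src - r) / v / dt))] = 1.0  # unit spike arrival
    traces.append(tr)
candidates = np.array([[x, y, z] for x in (400.0, 500.0, 600.0)
                                 for y in (400.0, 500.0, 600.0)
                                 for z in (200.0, 300.0, 400.0)])
loc = scan_locations(traces, dt, receivers, candidates, v)
```

The multistage scheme in the paper amounts to running such a scan coarsely first, then refining only around the best candidates.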


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield for approximately two orders of magnitude less in cost; but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
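Regularization-as-inversion can be illustrated with a toy in which a plain Fourier-synthesis operator stands in for the wavefield extrapolator: irregular samples are inverted for model coefficients by damped least squares, and the model is then evaluated on a regular grid. All numbers below are invented; the paper's operator, weights, and diagonal Hessian approximation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_coef = 9
x_irr = np.sort(rng.uniform(0.0, 1.0, 40))   # irregular recording positions
signal = lambda x: np.sin(2 * np.pi * 2 * x) + 0.5 * np.cos(2 * np.pi * 3 * x)
d = signal(x_irr)                            # data on the irregular array

k = np.arange(n_coef) - n_coef // 2          # wavenumbers -4 .. 4
A = np.exp(2j * np.pi * np.outer(x_irr, k))  # synthesis operator at irregular x
eps = 1e-6                                   # damping stabilizes the Hessian A^H A
m = np.linalg.solve(A.conj().T @ A + eps * np.eye(n_coef), A.conj().T @ d)

x_reg = np.linspace(0.0, 1.0, 101)           # regular output grid
d_reg = (np.exp(2j * np.pi * np.outer(x_reg, k)) @ m).real
```

The cost issue the abstract addresses appears even here: forming the full Hessian `A.conj().T @ A` is the expensive step, which motivates approximating it by a limited number of diagonals.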


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
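The BS idea in miniature, applied to an ordinary least-squares fit rather than PMF: resample the data with replacement, refit, and read uncertainty from the spread of the refitted parameters. This sketches only the generic bootstrap principle, not EPA PMF's block-bootstrap implementation or the DISP and BS-DISP methods.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.standard_normal(50)   # noisy line, true slope 2

def fit(xs, ys):
    return np.polyfit(xs, ys, 1)              # returns (slope, intercept)

slopes = []
for _ in range(500):
    idx = rng.integers(0, len(x), len(x))     # resample cases with replacement
    slopes.append(fit(x[idx], y[idx])[0])
slope_sd = np.std(slopes)                     # bootstrap uncertainty of the slope
```

DISP-type uncertainty works differently: instead of resampling, it displaces each fitted element and records how far it can move before the fit degrades, which is what lets it probe rotational ambiguity.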


2010 ◽  
Vol 26 (11) ◽  
pp. 115010 ◽  
Author(s):  
Amer Zakaria ◽  
Colin Gilmore ◽  
Joe LoVetri
