2.5‐D inversion of frequency‐domain electromagnetic data generated by a grounded‐wire source

Geophysics ◽  
2002 ◽  
Vol 67 (6) ◽  
pp. 1753-1768 ◽  
Author(s):  
Yuji Mitsuhata ◽  
Toshihiro Uchida ◽  
Hiroshi Amano

Interpretation of controlled‐source electromagnetic (CSEM) data is usually based on 1‐D inversions, whereas direct current (dc) resistivity and magnetotelluric (MT) data are commonly interpreted by 2‐D inversions. We have developed an algorithm to invert frequency‐domain vertical magnetic data generated by a grounded‐wire source for a 2‐D model of the earth, a so‐called 2.5‐D inversion. To stabilize the inversion, we adopt a smoothness constraint on the model parameters and adjust the regularization parameter objectively using a statistical criterion. A test using synthetic data from a realistic model shows that data from a single source are insufficient to recover an acceptable result. In contrast, the joint use of data generated by a left‐side source and a right‐side source dramatically improves the inversion result. We applied our inversion algorithm to a field data set transformed from long‐offset transient electromagnetic (LOTEM) data acquired in a Japanese oil and gas field. As with the synthetic data set, the inversion of the joint data set converged automatically and produced a better model than the inversion of the data from either source alone. In addition, our 2.5‐D inversion accounted for the reversals in the LOTEM measurements, which is impossible with 1‐D inversions. The shallow parts (above about 1 km depth) of the final model obtained by our 2.5‐D inversion agree well with those of a 2‐D inversion of MT data.
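The smoothness-constrained, regularized least-squares formulation described above can be illustrated with a generic toy problem. This is a minimal sketch only, with a random stand-in sensitivity matrix; it is not the authors' 2.5-D EM code, and the regularization parameter is fixed rather than chosen by their statistical criterion.

```python
import numpy as np

# Toy smoothness-constrained linear inversion:
# minimize ||G m - d||^2 + lam * ||L m||^2, with L a first-difference
# (roughening) operator. Generic sketch, not the authors' 2.5-D EM code.

rng = np.random.default_rng(0)
n = 50                                   # number of model cells
G = rng.standard_normal((30, n))         # stand-in sensitivity matrix
m_true = np.sin(np.linspace(0, np.pi, n))
d = G @ m_true + 0.01 * rng.standard_normal(30)

# First-difference roughening matrix implementing the smoothness constraint
L = np.diff(np.eye(n), axis=0)

lam = 1.0                                # regularization parameter (fixed here)
A = G.T @ G + lam * (L.T @ L)            # normal equations of the penalized problem
m_est = np.linalg.solve(A, G.T @ d)
print(np.linalg.norm(G @ m_est - d))     # data misfit of the regularized solution
```

Without the `L.T @ L` term, the 30-by-50 system is underdetermined; the smoothness penalty selects the smooth solution among the many that fit the data.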

2020 ◽  
Vol 223 (2) ◽  
pp. 1378-1397
Author(s):  
Rosemary A Renaut ◽  
Jarom D Hogue ◽  
Saeed Vatankhah ◽  
Shuang Liu

SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block Toeplitz Toeplitz block (BTTB) structure for each block of columns related to a fixed depth layer of the subsurface. Then, all forward operations with the sensitivity matrix, or its transpose, are performed using the 2-D fast Fourier transform. Simulations are provided to show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10⁶. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use the Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
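The FFT trick behind the fast forward operations can be shown in 1-D: a Toeplitz matrix is embedded in a circulant matrix, which the FFT diagonalizes, so a matrix-vector product costs O(n log n) instead of O(n²). The paper applies the 2-D analogue to each BTTB block; the sketch below is the generic 1-D version, not the paper's implementation.

```python
import numpy as np

# Fast Toeplitz matrix-vector product via circulant embedding and the FFT.
# Generic 1-D illustration of the technique used for the BTTB blocks above.

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x."""
    n = len(x)
    # First column of the (2n-1)-point circulant embedding:
    # [c[0], ..., c[n-1], r[n-1], ..., r[1]]
    emb = np.concatenate([c, r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(emb) * np.fft.fft(x, len(emb)))
    return y[:n].real

rng = np.random.default_rng(1)
c = rng.standard_normal(6)                            # first column
r = np.concatenate([c[:1], rng.standard_normal(5)])   # first row, r[0] == c[0]
x = rng.standard_normal(6)

# Dense reference Toeplitz matrix for comparison
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(6)]
              for i in range(6)])
print(np.allclose(T @ x, toeplitz_matvec(c, r, x)))  # True
```

The transpose product needed by the inversion uses the same embedding with the roles of `c` and `r` swapped.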


Geophysics ◽  
1995 ◽  
Vol 60 (3) ◽  
pp. 796-809 ◽  
Author(s):  
Zhong‐Min Song ◽  
Paul R. Williamson ◽  
R. Gerhard Pratt

In full‐wave inversion of seismic data in complex media it is desirable to use finite differences or finite elements for the forward modeling, but such methods are still prohibitively expensive when implemented in 3-D. Full‐wave 2-D inversion schemes are of limited utility even in 2-D media because they do not model 3-D dynamics correctly. Many seismic experiments effectively assume that the geology varies in two dimensions only but generate 3-D (point source) wavefields; that is, they are “two‐and‐one‐half‐dimensional” (2.5-D), and this configuration can be exploited to model 3-D propagation efficiently in such media. We propose a frequency domain full‐wave inversion algorithm which uses a 2.5-D finite difference forward modeling method. The calculated seismogram can be compared directly with real data, which allows the inversion to be iterated. We use a descent‐related method to minimize a least‐squares measure of the wavefield mismatch at the receivers. The acute nonlinearity caused by phase‐wrapping, which corresponds to time‐domain cycle‐skipping, is avoided by the strategy of either starting the inversion using a low frequency component of the data or constructing a starting model using traveltime tomography. The inversion proceeds by stages at successively higher frequencies across the observed bandwidth. The frequency domain is particularly efficient for crosshole configurations and also allows easy incorporation of attenuation, via complex velocities, in both forward modeling and inversion. This also requires the introduction of complex source amplitudes into the inversion as additional unknowns. Synthetic studies show that the iterative scheme enables us to achieve the theoretical maximum resolution for the velocity reconstruction and that strongly attenuative zones can be recovered with reasonable accuracy. Preliminary results from the application of the method to a real data set are also encouraging.
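The rationale for starting at low frequency can be seen in a toy 1-D experiment: a least-squares waveform misfit as a function of a single time-shift parameter develops spurious local minima (cycle skipping) once the shift exceeds a period, but stays unimodal at low frequency. This is an illustrative sketch of the phenomenon only, not the paper's 2.5-D algorithm.

```python
import numpy as np

# Cycle-skipping illustration: count interior local minima of the
# least-squares misfit between a "data" sinusoid shifted by tau_true
# and trial sinusoids shifted by candidate taus.

t = np.linspace(0.0, 1.0, 1001)
tau_true = 0.12                        # true time shift of the "observed" data
taus = np.linspace(0.0, 0.25, 251)     # candidate shifts (the "model")

def n_local_minima(f):
    obs = np.sin(2 * np.pi * f * (t - tau_true))
    misfit = np.array([np.sum((obs - np.sin(2 * np.pi * f * (t - tau))) ** 2)
                       for tau in taus])
    interior = (misfit[1:-1] < misfit[:-2]) & (misfit[1:-1] < misfit[2:])
    return int(np.count_nonzero(interior))

# At 1 Hz the misfit has a single minimum; at 10 Hz (period < tau_true)
# extra minima appear one period away from the true shift.
print(n_local_minima(1.0), n_local_minima(10.0))
```

Inverting the low-frequency band first keeps the descent method inside the correct basin, after which higher frequencies refine the model.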


Geophysics ◽  
2009 ◽  
Vol 74 (2) ◽  
pp. R1-R14 ◽  
Author(s):  
Wenyi Hu ◽  
Aria Abubakar ◽  
Tarek M. Habashy

We present a simultaneous multifrequency inversion approach for seismic data interpretation. This algorithm inverts all frequency data components simultaneously. A data-weighting scheme balances the contributions from different frequency data components so the inversion process does not become dominated by high-frequency data components, which produce a velocity image with many artifacts. A Gauss-Newton minimization approach achieves a high convergence rate and an accurate reconstructed velocity image. By introducing a modified adjoint formulation, we can calculate the Jacobian matrix efficiently, allowing the material properties in the perfectly matched layers (PMLs) to be updated automatically during the inversion process. This feature ensures the correct behavior of the inversion and implies that the algorithm is appropriate for realistic applications where a priori information of the background medium is unavailable. Two different regularization schemes, an [Formula: see text]-norm and a weighted [Formula: see text]-norm function, are used in this algorithm for smooth profiles and profiles with sharp boundaries, respectively. The regularization parameter is determined automatically and adaptively by the so-called multiplicative regularization technique. To test the algorithm, we implement the inversion to reconstruct the Marmousi velocity model using synthetic data generated by a finite-difference time-domain code. These numerical simulation results indicate that this inversion algorithm is robust in terms of starting model and noise suppression. Under some circumstances, it is more robust than a traditional sequential inversion approach.
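One simple way to balance frequency contributions, as the data-weighting idea above suggests, is to scale each frequency's residual by the inverse norm of its observed data so that every band enters the misfit on an equal footing. The sketch below uses hypothetical per-frequency arrays and is a generic illustration, not the authors' exact weighting scheme.

```python
import numpy as np

# Frequency-balancing data weights: each frequency's residual is scaled by
# 1/||d_obs(f)||, so bands with large amplitudes cannot dominate the misfit.

rng = np.random.default_rng(2)
d_obs = {f: rng.standard_normal(100) * amp           # observed data per frequency
         for f, amp in [(2.0, 1.0), (8.0, 10.0), (32.0, 100.0)]}
d_syn = {f: d + 0.1 * np.abs(d).mean() * rng.standard_normal(100)
         for f, d in d_obs.items()}                  # toy "synthetic" data

weights = {f: 1.0 / np.linalg.norm(d) for f, d in d_obs.items()}
terms = {f: (weights[f] * np.linalg.norm(d_syn[f] - d_obs[f])) ** 2
         for f in d_obs}
total = sum(terms.values())
# Each band contributes a comparable fraction despite amplitudes spanning 100x
print({f: round(v / total, 2) for f, v in terms.items()})
```

Without the weights, the 32 Hz band (amplitude 100) would contribute essentially all of the misfit and steer every model update.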


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology to combine the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed by means of stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected in a metric space computed by MDS. This projection allowed similar models to be distinguished from variable ones and the convergence of inverted models toward the real impedance models to be assessed. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that the MDS is a valuable tool to evaluate the convergence of the inverse methodology and the impedance model variability between iterations of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
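The metric-space projection above can be sketched with classical (Torgerson) MDS, which embeds a set of models in 2-D from their pairwise Euclidean distances via a double-centered Gram matrix and its eigendecomposition. This is a minimal generic implementation, not the authors' workflow; the random matrix stands in for a set of simulated impedance models.

```python
import numpy as np

# Classical MDS: embed rows of X into k dimensions so that pairwise
# Euclidean distances are preserved as well as possible.

def classical_mds(X, k=2):
    # Squared pairwise distances
    D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ D2 @ J                      # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]              # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(3)
models = rng.standard_normal((20, 500))        # 20 simulated "models"
coords = classical_mds(models)                 # 2-D projection for plotting
print(coords.shape)  # (20, 2)
```

Plotting `coords` iteration by iteration, together with the projection of the known true model, shows whether the simulated ensemble is collapsing toward it.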


Geophysics ◽  
1999 ◽  
Vol 64 (1) ◽  
pp. 162-181 ◽  
Author(s):  
Philippe Thierry ◽  
Stéphane Operto ◽  
Gilles Lambaré

In this paper, we evaluate the capacity of a fast 2-D ray+Born migration/inversion algorithm to recover the true amplitude of the model parameters in 2-D complex media. The method is based on a quasi‐Newtonian linearized inversion of the scattered wavefield. Asymptotic Green’s functions are computed in a smooth reference model with a dynamic ray tracing based on the wavefront construction method. The model is described by velocity perturbations associated with diffractor points. Both the first traveltime and the strongest arrivals can be inverted. The algorithm is implemented with several numerical approximations, such as interpolations and aperture limitation around common midpoints, to speed up the algorithm. Both theoretical and numerical aspects of the algorithm are assessed with three synthetic and real data examples including the 2-D Marmousi example. Comparison between logs extracted from the exact Marmousi perturbation model and the computed images shows that the amplitudes of the velocity perturbations are recovered accurately in the regions of the model where the ray field is single valued. In the presence of caustics, neither the first traveltime nor the most energetic arrival inversion allows for a full recovery of the amplitudes, although the latter improves the results. We conclude that all the arrivals associated with multipathing through transmission caustics must be taken into account if the true amplitude of the perturbations is to be found. Only 22 minutes of CPU time is required to migrate the full 2-D Marmousi data set on a Sun SPARC 20 workstation. The amplitude loss induced by the numerical approximations on the first traveltime and the most energetic migrated images is evaluated quantitatively and does not exceed 8% of the energy of the image computed without numerical approximation. Computational evaluation shows that extension to a 3-D ray+Born migration/inversion algorithm is realistic.


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. J47-J60 ◽  
Author(s):  
Nathan Leon Foks ◽  
Yaoguo Li

Boundary extraction is a collective term that we use for the process of extracting the locations of faults, lineaments, and lateral boundaries between geologic units using geophysical observations, such as measurements of the magnetic field. The process typically begins with a preprocessing stage, where the data are transformed to enhance the visual clarity of pertinent features and hence improve the interpretability of the data. The majority of the existing methods are based on raster grid enhancement techniques, and the boundaries are extracted as a series of points or line segments. In contrast, we set out a methodology for boundary extraction from magnetic data, in which we represent the transformed data as a surface in 3D using a mesh of triangular facets. After initializing the mesh, we modify the node locations, such that the mesh smoothly represents the transformed data and that facet edges are aligned with features in the data that approximate the horizontal locations of subsurface boundaries. To illustrate our boundary extraction algorithm, we first apply it to a synthetic data set. We then apply it to identify boundaries in a magnetic data set from the McFaulds Lake area in Ontario, Canada. The extracted boundaries are in agreement with known boundaries and several of the regions that are completely enclosed by extracted boundaries coincide with regions of known mineralization.
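The preprocessing stage mentioned above typically applies a transform such as the total horizontal gradient, whose ridges approximate the horizontal locations of lateral boundaries. The sketch below computes it for a toy step-like magnetic grid; it illustrates only the data-transform step, not the authors' triangular-mesh boundary extraction itself.

```python
import numpy as np

# Total horizontal gradient (THG) of gridded data: sqrt(gx^2 + gy^2).
# Ridges of the THG roughly track lateral boundaries between units.

def total_horizontal_gradient(grid, dx=1.0, dy=1.0):
    gy, gx = np.gradient(grid, dy, dx)   # derivatives along rows (y) and columns (x)
    return np.hypot(gx, gy)

# Toy anomaly: a vertical contact (smoothed step) in the x direction
x = np.linspace(-1, 1, 101)
grid = np.tile(np.tanh(x / 0.1), (50, 1))      # step centered at column 50
thg = total_horizontal_gradient(grid)
peak_col = int(np.argmax(thg[25]))             # THG ridge marks the contact
print(peak_col)  # 50
```

A mesh-based extractor like the one described above would then align facet edges with such ridges instead of thresholding them on the raster grid.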


2019 ◽  
Vol 68 (1) ◽  
pp. 29-46 ◽  
Author(s):  
Elisabeth Dietze ◽  
Michael Dietze

Abstract. The analysis of grain-size distributions has a long tradition in Quaternary Science and disciplines studying Earth surface and subsurface deposits. The decomposition of multi-modal grain-size distributions into inherent subpopulations, commonly termed end-member modelling analysis (EMMA), is increasingly recognised as a tool to infer the underlying sediment sources, transport and (post-)depositional processes. Most of the existing deterministic EMMA approaches are only able to deliver one out of many possible solutions, thereby shortcutting uncertainty in model parameters. Here, we provide user-friendly computational protocols that support deterministic as well as robust (i.e. explicitly accounting for incomplete knowledge about input parameters in a probabilistic approach) EMMA, in the free and open software framework of R. In addition, and going beyond previous validation tests, we compare the performance of available grain-size EMMA algorithms using four real-world sediment types, covering a wide range of grain-size distribution shapes (alluvial fan, dune, loess and floodplain deposits). These were randomly mixed in the lab to produce a synthetic data set. Across all algorithms, the original data set was modelled with mean R² values of 0.868 to 0.995 and mean absolute deviation (MAD) values of 0.06 % vol to 0.34 % vol. The original grain-size distribution shapes were modelled as end-member loadings with mean R² values of 0.89 to 0.99 and MAD of 0.04 % vol to 0.17 % vol. End-member scores reproduced the original mixing ratios in the synthetic data set with mean R² values of 0.68 to 0.93 and MAD of 0.1 % vol to 1.6 % vol. Depending on the validation criteria, all models provided reliable estimates of the input data, and each of the models exhibits individual strengths and weaknesses. Only robust EMMA allowed uncertainties of the end-members to be objectively estimated and expert knowledge to be included in the end-member definition.
Yet, end-member interpretation should carefully consider the geological and sedimentological meaningfulness in terms of sediment sources, transport and deposition as well as post-depositional alteration of grain sizes. EMMA might also be powerful in other geoscientific contexts where the goal is to unmix sources and processes from compositional data sets.
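The mixing model underlying EMMA treats each measured distribution as a combination of end-member loadings weighted by scores. The sketch below runs that model forward with two hypothetical Gaussian-shaped end members and recovers the scores by least squares; real EMMA must also estimate the loadings themselves (the authors' protocols are in R, this generic illustration uses Python).

```python
import numpy as np

# Forward/backward check of the linear mixing model behind EMMA:
# sample = loadings @ scores, with loadings the end-member distributions.

phi = np.linspace(0, 10, 200)                   # grain-size axis (phi units)

def em(mu, sig):                                # hypothetical Gaussian end member
    g = np.exp(-0.5 * ((phi - mu) / sig) ** 2)
    return g / g.sum()                          # normalize to unit mass

loadings = np.column_stack([em(2.5, 0.6),       # e.g. a dune-like subpopulation
                            em(6.0, 1.2)])      # e.g. a loess-like subpopulation
scores_true = np.array([0.3, 0.7])              # mixing ratios
sample = loadings @ scores_true                 # synthetic mixed sample

# With known loadings, the scores follow from least squares
scores_est, *_ = np.linalg.lstsq(loadings, sample, rcond=None)
print(np.round(scores_est, 3))  # [0.3 0.7]
```

The robust variant discussed above would repeat such estimates over resampled inputs to attach uncertainties to both scores and loadings.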


Geophysics ◽  
2000 ◽  
Vol 65 (3) ◽  
pp. 783-790 ◽  
Author(s):  
Shashi P. Sharma ◽  
Pertti Kaikkonen

A platelike conducting body in free space is used as a model to invert transient electromagnetic data using the very fast simulated annealing procedure as a global optimization tool. When the host rock conductivity is non‐zero, acceptable fits between the observed and computed responses are difficult to obtain. In general, the conducting body is assigned a lower conductance, larger dimensions (strike length and depth extent) and a smaller depth than the true values. We approximate the response of a conducting host to yield reliable estimates of model parameters as well as a good fit between the observed and computed responses. Our procedure is based on the assumption that the observed electromagnetic response is the sum of the response due to the conductive target and the response due to conducting surroundings (host and overburden). It is also assumed that the host response is laterally invariant, implying a layered earth and fixed source‐receiver geometry. The validity of the superposition assumption is tested against the full solution for a conductive plate in a finite conducting host. The efficacy of our approach is demonstrated using noise‐free and noisy synthetic data and two field examples measured in different geological conditions.
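The global optimizer named above, very fast simulated annealing, accepts uphill moves with a temperature-dependent probability and cools rapidly. The sketch below is a minimal generic annealer with a 1/k cooling schedule on a toy two-parameter misfit; it conveys the flavor of the search only and is not the plate-model inversion code.

```python
import numpy as np

# Minimal simulated annealing with a fast (1/k) cooling schedule.
# The toy misfit is multimodal; annealing escapes poor local minima early
# (high T) and refines the best basin as T shrinks.

rng = np.random.default_rng(4)

def misfit(m):                                   # toy multimodal objective
    return np.sum(m ** 2) + np.sin(5 * m).sum() + 2.0

m = rng.uniform(-2, 2, size=2)                   # random starting model
best, best_e = m.copy(), misfit(m)
T0 = 1.0
for k in range(1, 3001):
    T = T0 / k                                   # fast cooling schedule
    trial = m + T * rng.standard_normal(2)       # step size shrinks with T
    dE = misfit(trial) - misfit(m)
    if dE < 0 or rng.uniform() < np.exp(-dE / T):
        m = trial
        if misfit(m) < best_e:
            best, best_e = m.copy(), misfit(m)
print(best_e)   # well below misfit(0, 0) = 2.0
```

In the paper's setting, `misfit` would compare observed responses against the plate response plus the laterally invariant host response, per the superposition assumption.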


Geophysics ◽  
2021 ◽  
pp. 1-34
Author(s):  
Roland Karcol ◽  
Roman Pašteka

The Tikhonov regularized approach to the downward continuation of potential fields is a partial but strong answer to the instability and ambiguity of the inverse problem solution in applied gravimetry and magnetometry. The task is described by two functionals that incorporate the properties of the desired solution, and it is solved as a minimization problem in the Fourier domain. The result is a filter in which the high-pass component is damped by a stabilizing condition controlled by a regularization parameter (RP); setting this parameter is the crucial step in the regularization approach. We demonstrate that the values of the functionals themselves can be used as the tool for RP setting, in comparison with commonly used tools such as various types of Lp norms, as well as their possible role in estimating the source's upper boundary. The presented method is tested on a complex synthetic data set and then applied to detailed magnetic data from an unexploded-ordnance survey and to regional gravity data to verify its usability.
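The damped filter can be written down for a 1-D profile with the textbook identity stabilizer: upward continuation multiplies the spectrum by exp(-h|k|), and the Tikhonov solution replaces the unstable inverse exp(h|k|) by exp(-h|k|)/(exp(-2h|k|) + α). This is the generic regularized form, with an arbitrarily fixed α, not the authors' specific functionals or RP-setting rule.

```python
import numpy as np

# Tikhonov-regularized downward continuation in the Fourier domain (1-D).
# A = exp(-h|k|) is the upward-continuation operator; the regularized
# inverse A/(A^2 + alpha) damps the exponential noise amplification.

n, dx, h = 256, 10.0, 50.0                # samples, spacing (m), height (m)
k = np.abs(2 * np.pi * np.fft.fftfreq(n, dx))

x = np.arange(n) * dx
true_low = np.exp(-((x - 1280.0) / 150.0) ** 2)           # field at the lower level
obs = np.fft.ifft(np.fft.fft(true_low) * np.exp(-h * k)).real
obs += 1e-3 * np.random.default_rng(5).standard_normal(n)  # measurement noise

A = np.exp(-h * k)
alpha = 1e-4                               # regularization parameter (fixed here)
down_reg = np.fft.ifft(np.fft.fft(obs) * A / (A ** 2 + alpha)).real
down_naive = np.fft.ifft(np.fft.fft(obs) * np.exp(h * k)).real

err_reg = np.max(np.abs(down_reg - true_low))
err_naive = np.max(np.abs(down_naive - true_low))
print(err_reg < err_naive)  # the naive filter blows the noise up
```

The choice of α is exactly the RP-setting problem the paper addresses; here it is simply hard-coded.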


Geophysics ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. J17-J29 ◽  
Author(s):  
Jiajia Sun ◽  
Yaoguo Li

The unknown magnetization directions in the presence of remanence have posed great challenges for interpreting magnetic data. Estimating magnetization directions based on magnetic measurements, therefore, has been an active area of research within the applied geophysics community. Despite the availability of several methods for estimating magnetization directions, quantifying the uncertainty of such estimates has remained untackled. We have investigated the use of the magnetization-clustering inversion (MCI) method for the purpose of assessing the uncertainty of the recovered magnetization directions. Specifically, we have leveraged the fact that the number of clusters one expects to see among the magnetization directions recovered from MCI must be supplied by the user. We propose to implement a sequence of MCIs assuming a series of different cluster numbers and, subsequently, to calculate the standard deviations of the recovered magnetization directions at each location in a model as a practical way of quantifying the uncertainty of the estimated magnetization directions. We have developed two different methods for calculating the standard deviations and have also investigated the maximum number of clusters one needs to consider to reliably assess the uncertainty. After the proof-of-concept study on a synthetic data set, we applied our methods to a field data set from an iron-oxide-copper-gold deposit exploration in the Carajás Mineral Province, Brazil. The high-confidence (i.e., low-uncertainty) zones show a high spatial correspondence with the mineralization zones inferred from the drillholes and geology.
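The uncertainty recipe above, rerunning an inversion for a range of assumed cluster numbers and taking the per-location standard deviation of the recovered quantity, can be mimicked on synthetic data. In the sketch below, plain 1-D k-means on hypothetical "magnetization angles" stands in for the full MCI inversion; it shows only the bookkeeping, not the geophysical inversion.

```python
import numpy as np

# Uncertainty from repeated clustering: rerun with k = 2..5 assumed clusters
# and take, at each location, the std of the cluster mean it is assigned to.

rng = np.random.default_rng(6)

def kmeans_labels_means(x, k, iters=50):
    """Tiny 1-D k-means; returns each point's assigned cluster mean."""
    centers = np.sort(rng.choice(x, k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers[labels]

# Two well-separated angle populations plus scatter (degrees, hypothetical)
angles = np.concatenate([rng.normal(20, 3, 40), rng.normal(70, 3, 40)])

# Recovered "cluster angle" per location, for several assumed cluster numbers
runs = np.vstack([kmeans_labels_means(angles, k) for k in range(2, 6)])
uncertainty = runs.std(axis=0)            # per-location spread across runs
print(uncertainty.mean() < angles.std())  # spread across runs << total spread
```

Locations whose assigned cluster mean is stable across the assumed cluster numbers get low uncertainty, which is the behavior the paper exploits to flag high-confidence zones.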

