Amplitude-phase decomposition of the magnetotelluric impedance tensor

Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. E301-E310 ◽  
Author(s):  
Maik Neukirch ◽  
Daniel Rudolf ◽  
Xavier Garcia ◽  
Savitri Galiana

The introduction of the phase tensor marked a breakthrough in the understanding and analysis of electric galvanic distortion effects. It has been used for (distortion-free) dimensionality analysis, distortion analysis, mapping, and subsurface model inversion. However, the phase tensor can represent only half of the information contained in a complete impedance data set. Nevertheless, to avoid uncertainty due to galvanic distortion effects, practitioners often choose to discard half of the measured data and concentrate interpretation efforts on the phase tensor part. Our work assesses the information loss incurred by interpreting only the phase tensor part of a complete impedance data set. To achieve this, a new MT impedance tensor decomposition into the known phase tensor and a newly defined amplitude tensor is motivated and established, and the existence and uniqueness of the amplitude tensor are proven. Synthetic data are used to illustrate the information content of the amplitude tensor compared with that of the phase tensor. Although the phase tensor describes only the inductive effects within the subsurface, the amplitude tensor holds information about both inductive and galvanic effects, which can help to identify the conductivity or thickness of (conductive) anomalies more accurately than the phase tensor alone. Furthermore, the amplitude and phase tensors sense anomalies at different periods, so combining both provides a means to evaluate and differentiate anomaly top depths when data at extended period ranges are unavailable, e.g., due to severe noise.
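
As a concrete anchor for the decomposition, recall that the phase tensor of Caldwell et al. (2004) is Φ = X⁻¹Y for an impedance Z = X + iY, and any amplitude-phase split must recombine with Φ to reproduce Z. The Python sketch below uses synthetic impedance values and verifies the exact identity Z = X(I + iΦ); the amplitude tensor defined in the paper has its own construction, which is not reproduced here.

```python
import numpy as np

def phase_tensor(Z):
    """Phase tensor of Caldwell et al. (2004): Phi = X^-1 Y with
    X = Re(Z), Y = Im(Z). Phi is real and free of galvanic distortion."""
    return np.linalg.solve(Z.real, Z.imag)

# A synthetic 2x2 complex impedance (values for illustration only)
Z = np.array([[0.5 + 0.4j, 10.0 + 8.0j],
              [-9.0 - 7.5j, -0.3 - 0.2j]])

Phi = phase_tensor(Z)
# Exact identity: Z = X (I + i Phi), so the real part carries the
# amplitude information that a pure phase tensor interpretation discards.
assert np.allclose(Z.real @ (np.eye(2) + 1j * Phi), Z)
print(Phi)
```

A real distortion matrix C cancels in the phase tensor, since (CX)⁻¹(CY) = X⁻¹Y; the real (amplitude) part absorbs C, which is why it carries the galvanic information discussed above.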

Geophysics ◽  
2020 ◽  
Vol 85 (3) ◽  
pp. E79-E98 ◽  
Author(s):  
Maik Neukirch ◽  
Savitri Galiana ◽  
Xavier Garcia

The introduction of the phase tensor marked a major breakthrough in the analysis and treatment of electric field galvanic distortion in the magnetotelluric method. Recently, the phase tensor formulation has been extended to a complete impedance tensor decomposition by introducing the complementary amplitude tensor, and both tensors can be further parameterized to represent geometric properties such as dimensionality, strike angle, and macroscopic anisotropy. Both tensors are characteristic of the electromagnetic induction phenomenon in the conductive subsurface with its specific geometric structure. The central hypothesis is that this coupling should result in similarities in the two tensors' geometric parameters: skew, strike, and anisotropy. A synthetic example illustrates that undistorted amplitude tensor parameters are more similar to the phase tensor's than increasingly distorted ones are, providing empirical evidence for the proposed hypothesis. These conclusions are reverse engineered into an objective function that is minimized when the dissimilarity between amplitude and phase tensor parameters, and with it any present distortion, is minimal. A genetic algorithm with this objective function is used to systematically seek the distortion parameters needed to correct an affected amplitude tensor and, thus, the impedance data. The successful correction of a large synthetic impedance data set with random distortion further supports the central hypothesis and serves as a comparison with the state of the art. The classic BC87 data set sites lit007/lit008 and lit901/lit902 have been noted by various authors to contain significant distortion and a 3D regional response, invalidating current distortion analysis methods and eluding geologic interpretation. Correction of the BC87 responses based on the present hypothesis conforms to the regional geology.
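
A minimal sketch of the correction idea, under assumed simplifications: correct the observed impedance with a trial real distortion matrix C, extract generic skew/strike/anisotropy parameters from the real (amplitude-like) part and from the phase tensor, and minimize their dissimilarity. SciPy's differential_evolution stands in for the paper's genetic algorithm, and the parameter definitions below are generic stand-ins rather than the paper's exact parametrization.

```python
import numpy as np
from scipy.optimize import differential_evolution

def tensor_params(T):
    """Generic geometric parameters of a real 2x2 tensor: a skew angle,
    a strike-like angle, and the anisotropy of its singular values."""
    skew = 0.5 * np.arctan2(T[0, 1] - T[1, 0], T[0, 0] + T[1, 1])
    strike = 0.5 * np.arctan2(T[0, 1] + T[1, 0], T[0, 0] - T[1, 1])
    s = np.linalg.svd(T, compute_uv=False)
    aniso = (s[0] - s[1]) / (s[0] + s[1])
    return np.array([skew, strike, aniso])

def dissimilarity(c, Z):
    """Objective: parameter distance between the amplitude-like (real)
    and phase tensor parts of the distortion-corrected impedance."""
    C = np.array([[c[0], c[1]], [c[2], c[3]]])
    Zc = np.linalg.solve(C, Z)                  # undo Z_obs = C @ Z
    Phi = np.linalg.solve(Zc.real, Zc.imag)     # phase tensor
    return np.sum((tensor_params(Zc.real) - tensor_params(Phi)) ** 2)

Z_obs = np.array([[2.0 + 0.5j, 12.0 + 9.0j],
                  [-7.0 - 6.0j, 1.0 - 0.1j]])   # synthetic, distorted
bounds = [(0.1, 3.0), (-1.0, 1.0), (-1.0, 1.0), (0.1, 3.0)]
result = differential_evolution(dissimilarity, bounds, args=(Z_obs,), seed=0)
print(result.x)          # distortion estimate, up to a static gain factor
```

The overall gain of C remains indeterminable (the familiar static shift), so only the shape of the distortion can be recovered this way.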


Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 158-173 ◽  
Author(s):  
Gary W. McNeice ◽  
Alan G. Jones

Accurate interpretation of magnetotelluric data requires an understanding of the directionality and dimensionality inherent in the data, as well as valid implementation of an appropriate method for removing the effects of shallow, small-scale galvanic scatterers on the data to yield responses representative of regional-scale structures. The galvanic distortion analysis approach advocated by Groom and Bailey has become the most widely adopted method, rightly so given that the approach decomposes the magnetotelluric impedance tensor into determinable and indeterminable parts and statistically tests the validity of the galvanic distortion assumption. As proposed by Groom and Bailey, one must determine the appropriate frequency-independent telluric distortion parameters and geoelectric strike by fitting the seven-parameter model on a frequency-by-frequency and site-by-site basis independently. Although this approach has the attraction that one gains a more intimate understanding of the data set, it is rather time-consuming and requires repetitive application. We propose an extension to Groom-Bailey decomposition in which a global minimum is sought to determine the most appropriate strike direction and telluric distortion parameters for a range of frequencies and a set of sites. We also show how an analytically derived approximate Hessian of the objective function can reduce the required computing time. We illustrate application of the analysis to two synthetic data sets and to real data. Finally, we show how the analysis can be extended to cover the case of frequency-dependent distortion caused by the magnetic effects of the galvanic charges.
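
The flavor of the proposed global fit can be sketched as follows: for a trial strike shared by all sites and a twist/shear pair per site, undo the distortion and penalize any energy left on the diagonal of the implied 2-D impedance, summed over all frequencies and sites. This is a simplified stand-in, assuming Nelder-Mead in place of the analytic-Hessian scheme and absorbing the indeterminable gain and anisotropy into the 2-D impedances.

```python
import numpy as np
from scipy.optimize import minimize

def gb_residual(params, Z_all):
    """Stacked Groom-Bailey-style misfit: for trial strike (shared) and
    twist/shear (per site), the implied regional 2-D impedance should be
    purely anti-diagonal, so its diagonal energy measures the misfit."""
    theta = params[0]
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    misfit = 0.0
    for i, Zs in enumerate(Z_all):                  # sites
        t, e = params[1 + 2 * i: 3 + 2 * i]
        C = np.array([[1.0, -t], [t, 1.0]]) @ np.array([[1.0, e], [e, 1.0]])
        for Z in Zs:                                # frequencies
            Z2d = np.linalg.solve(C, R.T @ Z @ R)   # undo distortion
            misfit += abs(Z2d[0, 0]) ** 2 + abs(Z2d[1, 1]) ** 2
    return misfit

# Synthetic single-site data from a known model: strike 30 deg,
# twist 0.2, shear -0.1 applied to anti-diagonal 2-D impedances.
th0 = np.radians(30.0)
R0 = np.array([[np.cos(th0), np.sin(th0)], [-np.sin(th0), np.cos(th0)]])
C0 = np.array([[1.0, -0.2], [0.2, 1.0]]) @ np.array([[1.0, -0.1], [-0.1, 1.0]])
Z2d = np.array([[[0, 5 + 4j], [-4 - 3j, 0]],
                [[0, 3 + 2j], [-2 - 2j, 0]]])
Z_all = [np.array([R0 @ C0 @ Z @ R0.T for Z in Z2d])]

x0 = np.zeros(1 + 2 * len(Z_all))               # strike, then (twist, shear)
res = minimize(gb_residual, x0, args=(Z_all,), method="Nelder-Mead")
print(np.degrees(res.x[0]) % 90.0, res.x[1:])   # 90-degree strike ambiguity
```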


Geophysical Journal International ◽ 
2020 ◽ 
Vol 223 (3) ◽  
pp. 1565-1583
Author(s):  
Hoël Seillé ◽  
Gerhard Visser

SUMMARY Bayesian inversion of magnetotelluric (MT) data is a powerful but computationally expensive approach to estimating the subsurface electrical conductivity distribution and its associated uncertainty. Approximating the Earth's subsurface with 1-D physics considerably speeds up calculation of the forward problem, making the Bayesian approach tractable, but can lead to biased results when the assumption is violated. We propose a methodology to quantitatively compensate for the bias caused by the 1-D Earth assumption within a 1-D trans-dimensional Markov chain Monte Carlo sampler. Our approach determines site-specific likelihood functions calculated using a dimensionality discrepancy error model derived by a machine learning algorithm trained on a set of synthetic 3-D conductivity training images. This is achieved by exploiting known geometric dimensionality properties of the MT phase tensor. A complex synthetic model that mimics a sedimentary basin environment is used to illustrate the ability of our workflow to reliably estimate uncertainty in the inversion results, even in the presence of strong 2-D and 3-D effects. Using this dimensionality discrepancy error model, we demonstrate that on this synthetic data set our workflow performs better in 80 per cent of cases than the existing practice of using constant errors. Finally, our workflow is benchmarked against real data acquired in Queensland, Australia, and demonstrates its ability to accurately detect the depth to basement.
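
The dimensionality information that drives the site-specific likelihoods comes from the MT phase tensor. Below is a minimal sketch in which the standard phase tensor skew angle flags departures from 1-D/2-D behavior; the error mapping is a placeholder for the paper's trained machine-learning discrepancy model.

```python
import numpy as np

def phase_tensor_skew(Z):
    """Phase tensor skew angle beta in degrees (Caldwell et al., 2004),
    a standard indicator of 3-D effects; beta = 0 for 1-D and 2-D data."""
    Phi = np.linalg.solve(Z.real, Z.imag)
    return np.degrees(0.5 * np.arctan2(Phi[0, 1] - Phi[1, 0],
                                       Phi[0, 0] + Phi[1, 1]))

def dimensionality_error_floor(beta_deg, base=0.05, scale=0.02):
    """Placeholder mapping from skew to an inflated relative error floor.
    The paper instead derives this from a model trained on 3-D synthetics."""
    return base + scale * abs(beta_deg)

Z = np.array([[1.0 + 0.8j, 12.0 + 9.0j],
              [-10.0 - 8.0j, -0.5 - 0.4j]])     # one frequency, synthetic
beta = phase_tensor_skew(Z)
print(beta, dimensionality_error_floor(beta))   # site/frequency error floor
```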


Geophysical Journal International ◽ 
2020 ◽ 
Vol 222 (3) ◽  
pp. 1620-1638 ◽  
Author(s):  
M Moorkamp ◽  
A Avdeeva ◽  
Ahmet T Basokur ◽  
Erhan Erdogan

SUMMARY Galvanic distortion of magnetotelluric (MT) data is a common effect that can impede reliable imaging of subsurface structures. Recently, we presented an inversion approach that includes a mathematical description of the effect of galvanic distortion as inversion parameters and demonstrated its efficiency with real data. We now systematically investigate the stability of this inversion approach with respect to different inversion strategies, starting models, and model parametrizations. We utilize a data set of 310 MT sites that were acquired for geothermal exploration. In addition to impedance tensor estimates over a broad frequency range, the data set also comprises transient electromagnetic measurements to determine near-surface conductivity and estimates of distortion at each site. We can therefore compare our inversion approach to these distortion estimates and the resulting inversion models. Our experiments show that inversion with distortion correction produces stable results for various inversion strategies and different starting models. Compared with inversions without distortion correction, we can reproduce the observed data better and reduce subsurface artefacts. In contrast, shifting the impedance curves at high frequencies to match the transient electromagnetic measurements reduces the misfit of the starting model but does not have a strong impact on the final results. Our results thus suggest that including a description of distortion in the inversion is more effective and should become a standard approach for MT inversion.
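
The essence of the parametrization is to treat the four real entries of a frequency-independent distortion matrix C, with Z_obs = C Z_regional, as additional unknowns alongside the conductivity model. A toy sketch with a 1-D half-space forward model in place of the paper's 3-D inversion:

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi

def halfspace_Z(rho, freqs):
    """1-D half-space impedance tensor [[0, z], [-z, 0]] per frequency."""
    z = np.sqrt(1j * 2 * np.pi * freqs * MU0 * rho)
    Z = np.zeros((freqs.size, 2, 2), dtype=complex)
    Z[:, 0, 1], Z[:, 1, 0] = z, -z
    return Z

def residuals(p, freqs, Z_obs):
    """Joint parameters: log-resistivity plus the four entries of C."""
    rho, C = np.exp(p[0]), p[1:5].reshape(2, 2)
    Z_pred = np.einsum('ij,fjk->fik', C, halfspace_Z(rho, freqs))
    r = (Z_pred - Z_obs).ravel()
    return np.concatenate([r.real, r.imag])

freqs = np.logspace(-2, 2, 20)
C_true = np.array([[1.3, 0.2], [-0.1, 0.8]])
Z_obs = np.einsum('ij,fjk->fik', C_true, halfspace_Z(100.0, freqs))
p0 = np.concatenate([[np.log(10.0)], np.eye(2).ravel()])
sol = least_squares(residuals, p0, args=(freqs, Z_obs))
print(np.exp(sol.x[0]), sol.x[1:5].reshape(2, 2))
```

Note the inherent trade-off: scaling C by a gain g and the resistivity by g^2 predicts identical data, so the toy recovers the true model only up to this static-shift factor.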


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust the HSM's SPFs for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the results of calibrating multiple intersection SPFs to a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of a calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended for comprehensively assessing the quality of calibrated intersection SPFs.
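
For concreteness, the leading metrics in the index can be computed as in the sketch below, using synthetic crash counts. The HSM calibration factor and the +/- 2 sigma CURE limits are standard; the weights combining the metrics into the paper's index come from its factor analysis and are not reproduced here.

```python
import numpy as np

def calibration_factor(obs, pred):
    """HSM calibration factor: total observed over total predicted crashes."""
    return obs.sum() / pred.sum()

def cure_exceedance(obs, pred):
    """Fraction of the CURE (cumulative residual) plot outside the
    95% (+/- 2 sigma) limits; near 0 indicates a well-behaved fit."""
    order = np.argsort(pred)
    resid = obs[order] - pred[order]
    cure = np.cumsum(resid)
    cum_sq = np.cumsum(resid ** 2)
    limit = 2.0 * np.sqrt(cum_sq * np.maximum(1 - cum_sq / cum_sq[-1], 0))
    return np.mean(np.abs(cure) > limit)

def mean_absolute_deviation(obs, pred):
    return np.mean(np.abs(obs - pred))

rng = np.random.default_rng(0)
pred = rng.gamma(2.0, 2.0, 500)        # SPF-predicted crash frequencies
obs = rng.poisson(pred * 1.2)          # sites experience 20% more crashes
print(calibration_factor(obs, pred))   # ~1.2, as constructed
print(cure_exceedance(obs, pred), mean_absolute_deviation(obs, pred))
```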


Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence water flux in a dike, and potentially the dike's stability. A comprehensive numerical simulation is computationally too expensive to be used for near real-time analysis of a dike network. This study therefore investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set, comprising features that can be observed from a dike surface, with the calculated factor of safety (FoS) as the target variable. The data set before 2018 is split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data in the test set (before 2018). However, the trained model shows lower performance for data in the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can be used to determine dike stability for conditions similar to the training data, and could be used to identify vulnerable locations in a dike network for further examination.
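
A minimal sketch of the surrogate workflow with scikit-learn, using invented surface-observable feature names (rainfall, evaporation, leaf area index) and a toy FoS relation in place of the coupled numerical simulation; a step change after 2018 mimics the surface cracking that degrades performance on the evaluation set.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
dates = pd.date_range("2009-01-01", "2019-12-31", freq="D")
X = pd.DataFrame({"rainfall": rng.gamma(1.5, 2.0, len(dates)),
                  "evaporation": rng.normal(2.0, 0.5, len(dates)),
                  "leaf_area_index": rng.uniform(0.5, 4.0, len(dates))},
                 index=dates)
fos = 1.5 - 0.02 * X["rainfall"] + 0.03 * X["evaporation"]  # toy target
fos[X.index >= "2018-01-01"] -= 0.2    # regime change (surface cracking)

pre = X.index < "2018-01-01"           # train/test only on pre-2018 data
X_tr, X_te, y_tr, y_te = train_test_split(X[pre], fos[pre], random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test R^2:", rf.score(X_te, y_te))            # high: same regime
print("eval R^2:", rf.score(X[~pre], fos[~pre]))    # degrades after 2018
```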


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared with a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
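
The cost-saving idea can be sketched in a few lines: form the weighted, damped least-squares solution, and optionally replace the Hessian A^H A by a banded approximation that keeps only a few central diagonals. The operator below is a random stand-in for a wavefield-extrapolation operator, so the banded variant only illustrates the mechanics, not the accuracy reported for the real operators.

```python
import numpy as np

def damped_lsq(A, b, eps=1e-2, ndiag=None):
    """Damped least squares x = (A^H A + eps I)^-1 A^H b. If ndiag is
    given, only the central 2*ndiag+1 diagonals of the Hessian are kept,
    mimicking the paper's limited-diagonal approximation."""
    H = A.conj().T @ A
    if ndiag is not None:
        n = H.shape[0]
        mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= ndiag
        H = H * mask
    return np.linalg.solve(H + eps * np.eye(H.shape[0]), A.conj().T @ b)

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 60)) + 1j * rng.normal(size=(80, 60))
x_true = rng.normal(size=60)
b = A @ x_true
print(np.linalg.norm(damped_lsq(A, b) - x_true))           # full Hessian
print(np.linalg.norm(damped_lsq(A, b, ndiag=5) - x_true))  # banded approx
```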


Atmospheric Measurement Techniques ◽ 
2014 ◽ 
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
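
Of the three methods, classical BS is the simplest to sketch. The snippet below uses scikit-learn's NMF as a stand-in for PMF/ME-2 (no Python PMF implementation is assumed) and resamples samples (rows) to propagate random errors into the factor profiles; DISP and BS-DISP additionally displace factor elements to probe rotational ambiguity and are not shown.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
W_true = rng.gamma(2.0, 1.0, (200, 3))                 # contributions
H_true = rng.gamma(2.0, 1.0, (3, 10))                  # factor profiles
X = np.clip(W_true @ H_true + rng.normal(0, 0.1, (200, 10)), 0, None)

H0 = NMF(n_components=3, init="nndsvda", max_iter=500,
         random_state=0).fit(X).components_            # base-case run

profiles = []
for _ in range(100):                                   # classical BS
    Xb = X[rng.integers(0, len(X), len(X))]            # resample rows
    Hb = NMF(n_components=3, init="nndsvda", max_iter=500,
             random_state=0).fit(Xb).components_
    # Map each bootstrap factor to its best-correlated base factor
    # (assumed one-to-one; EPA PMF reports unmapped factors separately).
    order = [int(np.argmax([np.corrcoef(h, h0)[0, 1] for h0 in H0]))
             for h in Hb]
    profiles.append(Hb[np.argsort(order)])
print(np.array(profiles).std(axis=0).round(3))         # BS spread per element
```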


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. To utilize information about pressure- and saturation-related changes in reservoir modeling and simulation, the uncertainty in the estimates must be quantified. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF) providing estimates of the uncertainties as well as of the properties themselves. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model, and PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each variable. In addition, bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
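
For the linear-Gaussian core of such a scheme, the posterior PDF is available in closed form. The sketch below assumes a hypothetical sensitivity matrix G linking pressure and saturation changes (dP, dS) to PP reflection-coefficient differences at a few incidence angles; the paper derives these sensitivities from rock-physics relations and additionally models spatial correlation, both omitted here.

```python
import numpy as np

angles = np.radians([5.0, 15.0, 25.0, 35.0])
G = np.column_stack([0.02 * np.cos(angles),               # dR/dP (assumed)
                     0.05 * np.sin(angles) ** 2 + 0.03])  # dR/dS (assumed)

m_true = np.array([0.8, 0.3])                  # true (dP, dS), synthetic
Sigma_d = 0.002 ** 2 * np.eye(len(angles))     # data noise covariance
Sigma_m = np.diag([1.0, 0.5]) ** 2             # prior model covariance
d = G @ m_true + np.random.default_rng(3).multivariate_normal(
        np.zeros(len(angles)), Sigma_d)

# Gaussian posterior (zero prior mean): covariance and mean in closed form
Sigma_post = np.linalg.inv(G.T @ np.linalg.inv(Sigma_d) @ G
                           + np.linalg.inv(Sigma_m))
m_post = Sigma_post @ (G.T @ np.linalg.inv(Sigma_d) @ d)
print(m_post, np.sqrt(np.diag(Sigma_post)))    # estimates and 1-sigma bars
```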


Geophysical Journal International ◽ 
2019 ◽ 
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

SUMMARY We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid search methods, a series of full waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers, and by velocity model errors. The waveform-based method is found to outperform one based on the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
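
A hedged sketch of the location principle, replacing the full waveform simulations and phase picking with straight-ray traveltimes in a constant-velocity medium: the shifted, time-reversed fields t_arr - T_s(x) agree at the true source, so their dispersion across stations is minimized there.

```python
import numpy as np

rng = np.random.default_rng(7)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 101),
                            np.linspace(0, 10, 101), indexing="ij"), -1)
stations = rng.uniform(0, 10, size=(12, 2))    # km, random network
v = 3.0                                        # km/s, assumed constant

src, t0 = np.array([4.2, 6.7]), 1.5            # true event and origin time
arrivals = t0 + np.linalg.norm(stations - src, axis=1) / v
arrivals += rng.normal(0, 0.02, arrivals.shape)   # picking noise

# One traveltime field per station (by reciprocity, as if the source sat
# at the station), then shifted, time-reversed fields at every grid node.
T = np.linalg.norm(grid[:, :, None, :] - stations, axis=-1) / v
shifted = arrivals - T                         # (nx, ny, nstations)
spread = shifted.std(axis=-1)                  # dispersion across stations
ix, iy = np.unravel_index(np.argmin(spread), spread.shape)
print(grid[ix, iy], shifted[ix, iy].mean())    # location and origin time
```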

