Bayesian inversion of magnetotelluric data considering dimensionality discrepancies

2020 ◽  
Vol 223 (3) ◽  
pp. 1565-1583
Author(s):  
Hoël Seillé ◽  
Gerhard Visser

SUMMARY Bayesian inversion of magnetotelluric (MT) data is a powerful but computationally expensive approach to estimate the subsurface electrical conductivity distribution and associated uncertainty. Approximating the Earth's subsurface with 1-D physics considerably speeds up calculation of the forward problem, making the Bayesian approach tractable, but can lead to biased results when the assumption is violated. We propose a methodology to quantitatively compensate for the bias caused by the 1-D Earth assumption within a 1-D trans-dimensional Markov chain Monte Carlo sampler. Our approach determines site-specific likelihood functions, which are calculated using a dimensionality discrepancy error model derived by a machine learning algorithm trained on a set of synthetic 3-D conductivity training images. This is achieved by exploiting known geometrical dimensional properties of the MT phase tensor. A complex synthetic model that mimics a sedimentary basin environment is used to illustrate the ability of our workflow to reliably estimate uncertainty in the inversion results, even in the presence of strong 2-D and 3-D effects. Using this dimensionality discrepancy error model, we demonstrate that on this synthetic data set our workflow performs better in 80 per cent of cases than the existing practice of using constant errors. Finally, our workflow is benchmarked against real data acquired in Queensland, Australia, and shows its ability to accurately detect the depth to basement.
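
A minimal sketch of the sampling idea, not the authors' implementation: a fixed-dimension Metropolis sampler over a 1-D layered resistivity model in which the data errors are inflated by a site-specific dimensionality-discrepancy term. The quadrature form of the error inflation, the layer setup, and all numerical values are illustrative assumptions (the paper uses a trans-dimensional sampler and a learned error model).

```python
import numpy as np

MU0 = 4e-7 * np.pi

def mt1d_impedance(freqs, rho, h):
    """Surface impedance of a 1-D layered half-space (standard recursion)."""
    omega = 2 * np.pi * freqs
    Z = np.sqrt(1j * omega * MU0 * rho[-1])           # bottom half-space
    for j in range(len(h) - 1, -1, -1):               # recurse upward through the layers
        k = np.sqrt(1j * omega * MU0 / rho[j])
        Z0 = 1j * omega * MU0 / k
        t = np.tanh(k * h[j])
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
    return Z

def log_likelihood(d_obs, d_pred, sig_data, sig_dim):
    # Assumed form: dimensionality-discrepancy error added in quadrature to data error.
    sig2 = sig_data**2 + sig_dim**2
    return -0.5 * np.sum(np.abs(d_obs - d_pred)**2 / sig2)

def metropolis(freqs, d_obs, sig_data, sig_dim, h, n_iter=20000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    log_rho = np.full(len(h) + 1, 2.0)                # start at 100 ohm-m everywhere
    ll = log_likelihood(d_obs, mt1d_impedance(freqs, 10**log_rho, h), sig_data, sig_dim)
    samples = []
    for _ in range(n_iter):
        prop = log_rho.copy()
        prop[rng.integers(len(prop))] += step * rng.standard_normal()
        ll_prop = log_likelihood(d_obs, mt1d_impedance(freqs, 10**prop, h), sig_data, sig_dim)
        if np.log(rng.random()) < ll_prop - ll:       # Metropolis acceptance (flat prior)
            log_rho, ll = prop, ll_prop
        samples.append(log_rho.copy())
    return np.array(samples)
```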

Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 158-173 ◽  
Author(s):  
Gary W. McNeice ◽  
Alan G. Jones

Accurate interpretation of magnetotelluric data requires an understanding of the directionality and dimensionality inherent in the data, and valid implementation of an appropriate method for removing the effects of shallow, small‐scale galvanic scatterers on the data to yield responses representative of regional‐scale structures. The galvanic distortion analysis approach advocated by Groom and Bailey has become the most adopted method, rightly so given that the approach decomposes the magnetotelluric impedance tensor into determinable and indeterminable parts, and tests statistically the validity of the galvanic distortion assumption. As proposed by Groom and Bailey, one must determine the appropriate frequency‐independent telluric distortion parameters and geoelectric strike by fitting the seven‐parameter model on a frequency‐by‐frequency and site‐by‐site basis independently. Although this approach has the attraction that one gains a more intimate understanding of the data set, it is rather time‐consuming and requires repetitive application. We propose an extension to Groom‐Bailey decomposition in which a global minimum is sought to determine the most appropriate strike direction and telluric distortion parameters for a range of frequencies and a set of sites. Also, we show how an analytically‐derived approximate Hessian of the objective function can reduce the required computing time. We illustrate application of the analysis to two synthetic data sets and to real data. Finally, we show how the analysis can be extended to cover the case of frequency‐dependent distortion caused by the magnetic effects of the galvanic charges.
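
The following is only a structural illustration of the multi-site, multi-frequency idea, not the seven-parameter Groom-Bailey fit itself: a single strike angle is sought jointly over a collection of impedance tensors by minimizing the squared diagonal of the rotated tensors, a crude proxy for the 2-D misfit. Array names and the synthetic example are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rotate(Z, theta):
    """Rotate a 2x2 impedance tensor by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])
    return R @ Z @ R.T

def joint_strike_misfit(theta, tensors):
    """Sum of squared diagonal elements over all sites and frequencies."""
    return sum(np.sum(np.abs(np.diag(rotate(Z, -theta)))**2) for Z in tensors)

def global_strike(tensors):
    # One strike for the whole data set, found by a bounded 1-D minimization
    # (the full decomposition also solves for twist and shear at each site).
    res = minimize_scalar(joint_strike_misfit, bounds=(0, np.pi / 2),
                          args=(tensors,), method='bounded')
    return np.degrees(res.x)

# Synthetic check: a 2-D impedance rotated to a 30 degree strike, plus noise.
rng = np.random.default_rng(1)
Z2d = np.array([[0, 100 + 80j], [-90 - 70j, 0]])
tensors = [rotate(Z2d, np.radians(30)) + 2 * rng.standard_normal((2, 2)) for _ in range(40)]
print(global_strike(tensors))   # close to 30, up to the usual 90 degree ambiguity
```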


Geophysics ◽  
1990 ◽  
Vol 55 (12) ◽  
pp. 1613-1624 ◽  
Author(s):  
C. deGroot‐Hedlin ◽  
S. Constable

Magnetotelluric (MT) data are inverted for smooth 2-D models using an extension of the existing 1-D algorithm, Occam’s inversion. Since an MT data set consists of a finite number of imprecise data, an infinity of solutions to the inverse problem exists. Fitting field or synthetic electromagnetic data as closely as possible results in theoretical models with a maximum amount of roughness, or structure. However, by relaxing the misfit criterion only a small amount, models which are maximally smooth may be generated. Smooth models are less likely to result in overinterpretation of the data and reflect the true resolving power of the MT method. The models are composed of a large number of rectangular prisms, each having a constant conductivity. A priori information, in the form of boundary locations only or both boundary locations and conductivity, may be included, providing a powerful tool for improving the resolving power of the data. Joint inversion of TE and TM synthetic data generated from known models allows comparison of smooth models with the true structure. In most cases, smoothed versions of the true structure may be recovered in 12–16 iterations. However, resistive features with a size comparable to depth of burial are poorly resolved. Real MT data present problems of non‐Gaussian data errors, the breakdown of the two‐dimensionality assumption and the large number of data in broadband soundings; nevertheless, real data can be inverted using the algorithm.
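
A hedged sketch of the Occam principle on a linear toy problem, not the 2-D MT implementation: among all models that fit the data to a chosen target misfit, keep the one with minimum first-difference roughness by sweeping the regularization weight. The operator G, the data weighting, and the target chi-squared are assumed inputs.

```python
import numpy as np

def occam_linear(G, d, sigma, target_chi2, mus=np.logspace(-4, 4, 81)):
    W = np.diag(1.0 / sigma)                       # data weighting by standard errors
    n = G.shape[1]
    R = np.diff(np.eye(n), axis=0)                 # first-difference roughness operator
    best = None
    for mu in mus:                                 # sweep the smoothness weight
        A = mu * R.T @ R + (W @ G).T @ (W @ G)
        m = np.linalg.solve(A, (W @ G).T @ W @ d)
        chi2 = np.sum(((G @ m - d) / sigma) ** 2)
        # keep the smoothest (largest mu) model that still reaches the target misfit
        if chi2 <= target_chi2 and (best is None or mu > best[0]):
            best = (mu, m)
    return best                                    # (mu, model) or None if unreachable
```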


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
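
A minimal sketch of the least-squares formulation with a band-limited Hessian, under assumed inputs: the extrapolation operator A, data weights, and damping are placeholders, and here the band is simply cut out of a dense Hessian for clarity, whereas in practice only the selected diagonals would be computed.

```python
import numpy as np

def banded(H, k):
    """Keep the main diagonal and k diagonals on each side of H; zero the rest."""
    i, j = np.indices(H.shape)
    return np.where(np.abs(i - j) <= k, H, 0.0)

def regularized_datuming(A, d, w, eps=1e-3, k_diags=2):
    """Solve (A^H W A + eps I) m = A^H W d with a band-limited Hessian approximation."""
    W = np.diag(w)
    H = A.conj().T @ W @ A                          # full Hessian (illustration only)
    H_approx = banded(H, k_diags) + eps * np.eye(A.shape[1])
    rhs = A.conj().T @ W @ d
    return np.linalg.solve(H_approx, rhs)
```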


2019 ◽  
Vol 11 (3) ◽  
pp. 249 ◽  
Author(s):  
Pejman Rasti ◽  
Ali Ahmad ◽  
Salma Samiei ◽  
Etienne Belin ◽  
David Rousseau

In this article, we assess the interest of the recently introduced multiscale scattering transform for texture classification, applied for the first time in plant science. The scattering transform is shown to outperform monoscale approaches (gray-level co-occurrence matrix, local binary patterns) but also multiscale approaches (wavelet decomposition) which do not include combinatory steps. The regime in which the scattering transform also outperforms a standard CNN architecture in terms of data set size is evaluated (10^4 instances). An approach to optimally designing the scattering transform based on energy contrast is provided. This is illustrated on the hard and open problem of weed detection in high-density crop cultures viewed from the top in intensity images. An annotated synthetic data set, available in the form of a data challenge, and a simulator are proposed for reproducible science (https://uabox.univ-angers.fr/index.php/s/iuj0knyzOUgsUV9). The scattering transform trained only on synthetic data shows an accuracy of 85% when tested on real data.
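
A hedged sketch of a two-layer scattering-style texture descriptor built from a small Gabor filter bank; it only illustrates the cascade-of-modulus structure, not the paper's full multiscale scattering transform. Filter sizes, scales, orientations, and the pooling are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def gabor(size, scale, theta):
    """Complex Gabor filter of a given size, scale and orientation."""
    x, y = np.meshgrid(np.arange(size) - size // 2, np.arange(size) - size // 2)
    u = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * scale**2))
    return env * np.exp(1j * np.pi * u / scale)

def scattering_features(img, scales=(2, 4), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    feats = [gaussian_filter(img, 4).mean()]               # zeroth order: local average
    for s1 in scales:
        for t1 in thetas:
            u1 = np.abs(fftconvolve(img, gabor(9, s1, t1), mode='same'))
            feats.append(gaussian_filter(u1, 4).mean())    # first-order coefficient
            for s2 in scales:
                if s2 <= s1:
                    continue                               # cascade to coarser scales only
                for t2 in thetas:
                    u2 = np.abs(fftconvolve(u1, gabor(9, s2, t2), mode='same'))
                    feats.append(gaussian_filter(u2, 4).mean())  # second-order coefficient
    return np.array(feats)   # feed these features to any standard classifier
```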


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
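
A minimal sketch of the tomographic back-projection step only, under assumed inputs: the raypath matrix L (segment lengths per velocity cell) and the traveltime residuals would come from the migration and wavefront-analysis stages described above, which are not reproduced here.

```python
import numpy as np

def backproject_residuals(L, dt, damping=1.0):
    """Solve (L^T L + damping I) ds = L^T dt for slowness updates per velocity cell."""
    n = L.shape[1]
    ds = np.linalg.solve(L.T @ L + damping * np.eye(n), L.T @ dt)
    return ds

def update_velocity(v, ds):
    """Convert the slowness update into an updated velocity model."""
    return 1.0 / (1.0 / v + ds)
```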


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology to combine the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed using stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected in a metric space computed by MDS. This projection allowed distinguishing similar from variable models and assessing the convergence of inverted models toward the real impedance ones. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that the MDS is a valuable tool to evaluate the convergence of the inverse methodology and the impedance model variability across iterations of the inversion process. Particularly for the geostatistical inversion algorithm we evaluated, it retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
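
A hedged sketch of the projection step: an ensemble of inverted impedance models (plus the reference model, when known) is embedded in a 2-D metric space with MDS so that convergence and variability across iterations can be inspected. Model shapes and the Euclidean dissimilarity are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def project_models(models, reference=None):
    """models: (n_models, n_cells) array of flattened impedance models."""
    X = models if reference is None else np.vstack([models, reference[None, :]])
    D = squareform(pdist(X, metric='euclidean'))      # pairwise model dissimilarities
    coords = MDS(n_components=2, dissimilarity='precomputed',
                 random_state=0).fit_transform(D)
    return coords   # last row is the reference model when one was supplied
```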


2011 ◽  
Vol 19 (4) ◽  
pp. 409-433 ◽  
Author(s):  
Francisco Cantú ◽  
Sebastián M. Saiegh

In this paper, we introduce an innovative method to diagnose electoral fraud using vote counts. Specifically, we use synthetic data to develop and train a fraud detection prototype. We employ a naive Bayes classifier as our learning algorithm and rely on digital analysis to identify the features that are most informative about class distinctions. To evaluate the detection capability of the classifier, we use authentic data drawn from a novel data set of district-level vote counts in the province of Buenos Aires (Argentina) between 1931 and 1941, a period with a checkered history of fraud. Our results corroborate the validity of our approach: The elections considered to be irregular (legitimate) by most historical accounts are unambiguously classified as fraudulent (clean) by the learner. More generally, our findings demonstrate the feasibility of generating and using synthetic data for training and testing an electoral fraud detection system.
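
A minimal sketch of the pipeline shape only: digit-based features extracted from district-level vote counts feed a naive Bayes classifier trained on labeled synthetic elections. Which digits are used and how the synthetic training data are generated are assumptions, not the authors' exact recipe.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def digit_features(counts):
    """Relative frequency of each last digit (0-9) across a set of district counts."""
    last = (np.asarray(counts) % 10).astype(int)
    return np.bincount(last, minlength=10) / len(last)

def train_detector(synthetic_elections, labels):
    """synthetic_elections: list of arrays of district-level vote counts."""
    X = np.array([digit_features(e) for e in synthetic_elections])
    return GaussianNB().fit(X, labels)    # labels: 1 = fraudulent, 0 = clean

def classify(model, election_counts):
    return model.predict(digit_features(election_counts)[None, :])[0]
```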


Geophysics ◽  
1998 ◽  
Vol 63 (6) ◽  
pp. 2035-2041 ◽  
Author(s):  
Zhengping Liu ◽  
Jiaqi Liu

We present a data‐driven method of joint inversion of well‐log and seismic data, based on the power of adaptive mapping of artificial neural networks (ANNs). We use the ANN technique to find and approximate the inversion operator guided by the data set consisting of well data and seismic recordings near the wells. Then we directly map seismic recordings to well parameters, trace by trace, to extrapolate the wide‐band profiles of these parameters using the approximation operator. Compared to traditional inversions, which are based on a few prior theoretical operators, our inversion is novel because (1) it inverts for multiple parameters and (2) it is nonlinear with a high degree of complexity. We first test our algorithm with synthetic data and analyze its sensitivity and robustness. We then invert real data to obtain two extrapolation profiles of sonic log (DT) and shale content (SH), the latter a unique parameter of the inversion and significant for the detailed evaluation of stratigraphic traps. The high‐frequency components of the two profiles are significantly richer than those of the original seismic section.
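
A hedged sketch of the mapping idea, with assumed window sizes, scaling, and network size: a small feed-forward network is trained on seismic traces adjacent to wells as input and the corresponding well parameters (e.g. DT and SH) as output, then applied trace by trace to the rest of the section.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def train_mapping(traces_near_wells, well_params, hidden=(64, 64)):
    """traces_near_wells: (n_wells, n_samples); well_params: (n_wells, 2) for DT, SH."""
    scaler = StandardScaler().fit(traces_near_wells)
    net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
    net.fit(scaler.transform(traces_near_wells), well_params)
    return scaler, net

def extrapolate(scaler, net, section):
    """Apply the learned operator to every trace of the seismic section."""
    return net.predict(scaler.transform(section))    # (n_traces, 2): DT, SH per trace
```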


2021 ◽  
Author(s):  
Muhammad Haris Naveed ◽  
Umair Hashmi ◽  
Nayab Tajved ◽  
Neha Sultan ◽  
Ali Imran

This paper explores whether Generative Adversarial Networks (GANs) can produce realistic network load data that can be utilized to train machine learning models in lieu of real data. In this regard, we evaluate the performance of three recent GAN architectures on the Telecom Italia data set across a set of qualitative and quantitative metrics. Our results show that GAN-generated synthetic data is indeed similar to real data, and forecasting models trained on this data achieve similar performance to those trained on real data.
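
A hedged sketch of the general setup, not any of the three architectures evaluated in the paper: a small fully connected GAN trained on fixed-length network-load vectors; samples drawn from the trained generator can then be used to train a forecasting model in place of the real traces. Data shapes, network sizes, and the training schedule are assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=32, out_dim=144):          # e.g. one day of 10-minute bins
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, in_dim=144):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.LeakyReLU(0.2),
                                 nn.Linear(128, 1))
    def forward(self, x):
        return self.net(x)

def train_gan(real_loads, z_dim=32, epochs=200, batch=64, lr=2e-4):
    G = Generator(z_dim, real_loads.shape[1])
    D = Discriminator(real_loads.shape[1])
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    loader = torch.utils.data.DataLoader(
        torch.as_tensor(real_loads, dtype=torch.float32), batch_size=batch, shuffle=True)
    for _ in range(epochs):
        for x in loader:
            z = torch.randn(x.shape[0], z_dim)
            fake = G(z)
            # discriminator update: push real toward 1, generated toward 0
            loss_d = bce(D(x), torch.ones(x.shape[0], 1)) + \
                     bce(D(fake.detach()), torch.zeros(x.shape[0], 1))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # generator update: make the discriminator label generated samples as real
            loss_g = bce(D(G(z)), torch.ones(x.shape[0], 1))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G
```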

