Tomographic filtering of mantle circulation models via the generalised inverse: A way to account for seismic data uncertainty

Author(s):  
Bernhard S.A. Schuberth ◽  
Roman Freissler ◽  
Christophe Zaroli ◽  
Sophie Lambotte

For a comprehensive link between seismic tomography and geodynamic models, uncertainties in the seismic model space play a non-negligible role. More specifically, knowledge of the tomographic uncertainties is important for obtaining meaningful estimates of the present-day thermodynamic state of Earth's mantle, which form the basis of retrodictions of past mantle evolution using the geodynamic adjoint method. A standard tool in tomographic-geodynamic model comparisons nowadays is tomographic filtering of mantle circulation models using the resolution operator R associated with the particular seismic inversion of interest. In this classical approach, however, it is not possible to account for tomographic uncertainties and their impact on the geodynamic interpretation.

Here, we present a new method for 'filtering' synthetic Earth models that makes use of the generalised inverse operator G†, instead of R. In our case, G† is taken from a recent global SOLA Backus–Gilbert S-wave tomography. In contrast to classical tomographic filtering, the 'imaged' model is constructed by computing the Generalised-Inverse Projection (GIP) of synthetic data calculated in an Earth model of choice. This way, it is possible to include the effects of noise in the seismic data and thus to analyse uncertainties in the resulting model parameters. To demonstrate the viability of the method, we compute a set of travel times in an existing mantle circulation model, add specific realisations of Gaussian, zero-mean seismic noise to the synthetic data and apply G†.

Our results show that the GIP model without noise is equivalent to the mean model of all GIP realisations from the suite of synthetic 'noisy' data and also closely resembles the model tomographically filtered using R. Most importantly, GIP models that include noise in the data show significant variability in the shape and amplitude of seismic anomalies in the mantle. The significant differences between the various GIP realisations highlight the importance of interpreting and assessing tomographic images in a prudent and cautious manner. With the GIP approach, we can moreover investigate the effect of systematic errors in the data, which we demonstrate by adding an extra term to the noise component that mimics the effects of uncertain crustal corrections. In our presentation, we will finally discuss ways to construct the model covariance matrix based on the GIP approach and point out possible research directions on how to make use of this information in future geodynamic modelling efforts.
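To make the GIP idea concrete, the following toy sketch applies the pseudoinverse of a small random matrix in place of the actual SOLA generalised inverse; all sizes, the noise level and the operator itself are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real SOLA generalised inverse is far larger.
n_data, n_model = 60, 20
G = rng.normal(size=(n_data, n_model))       # toy sensitivity matrix
m_true = rng.normal(size=n_model)            # "synthetic Earth model"
d_clean = G @ m_true                         # noise-free synthetic travel times

G_dagger = np.linalg.pinv(G)                 # stand-in generalised inverse

# GIP of the noise-free data
m_gip = G_dagger @ d_clean

# Suite of GIP realisations with Gaussian, zero-mean noise added to the data
sigma = 0.1
realizations = np.array([
    G_dagger @ (d_clean + rng.normal(scale=sigma, size=n_data))
    for _ in range(500)
])

# The mean over realisations recovers the noise-free GIP model, while the
# scatter of individual realisations measures the model-space uncertainty.
m_mean = realizations.mean(axis=0)
spread = realizations.std(axis=0)
```

The per-parameter spread is the quantity of interest here: even when the mean matches the noise-free projection, individual realisations can differ noticeably in shape and amplitude.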

2020 ◽  
Vol 223 (1) ◽  
pp. 254-269
Author(s):  
Roman Freissler ◽  
Christophe Zaroli ◽  
Sophie Lambotte ◽  
Bernhard S A Schuberth

SUMMARY Tomographic-geodynamic model comparisons are a key component in studies of the present-day state and evolution of Earth’s mantle. To account for the limited seismic resolution, ‘tomographic filtering’ of the geodynamically predicted mantle structures is a standard processing step in this context. The filtered model provides valuable information on how heterogeneities are smeared and modified in amplitude given the available seismic data and underlying inversion strategy. An important aspect that has so far not been taken into account is the effect of data uncertainties. We present a new method for ‘tomographic filtering’ in which it is possible to include the effects of random and systematic errors in the seismic measurements and to analyse the associated uncertainties in the tomographic model space. The ‘imaged’ model is constructed by computing the generalized-inverse projection (GIP) of synthetic data calculated in an earth model of choice. An advantage of this approach is that a reparametrization onto the tomographic grid can be avoided, depending on how the synthetic data are calculated. To demonstrate the viability of the method, we compute traveltimes in an existing mantle circulation model (MCM), add specific realizations of random seismic ‘noise’ to the synthetic data and apply the generalized inverse operator of a recent Backus–Gilbert-type global S-wave tomography. GIP models based on different noise realizations show a significant variability of the shape and amplitude of seismic anomalies. This highlights the importance of interpreting tomographic images in a prudent and cautious manner. Systematic errors, such as event mislocation or imperfect crustal corrections, can be investigated by introducing an additional term to the noise component so that the resulting noise distributions are biased. In contrast to Gaussian zero-mean noise, this leads to a bias in model space; that is, the mean of all GIP realizations is also non-zero. 
Knowledge of the statistical properties of model uncertainties together with tomographic resolution is crucial for obtaining meaningful estimates of Earth’s present-day thermodynamic state. A practicable treatment of error propagation and uncertainty quantification will therefore be increasingly important, especially in view of geodynamic inversions that aim at ‘retrodicting’ past mantle evolution based on tomographic images.
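The effect of a biased noise distribution can be sketched in a toy linear setting: a constant shift added to every datum (a crude stand-in for, e.g., imperfect crustal corrections) propagates through the generalised inverse into a non-zero mean of the GIP realizations, whereas zero-mean noise averages out. All sizes and values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem standing in for the traveltime setup (hypothetical sizes)
n_data, n_model = 60, 20
G = rng.normal(size=(n_data, n_model))
m_true = rng.normal(size=n_model)
d_clean = G @ m_true
G_dagger = np.linalg.pinv(G)

# Systematic error: a constant shift on every datum, plus random
# zero-mean noise, giving a biased noise distribution overall.
bias = 0.2 * np.ones(n_data)
sigma = 0.1
realizations = np.array([
    G_dagger @ (d_clean + bias + rng.normal(scale=sigma, size=n_data))
    for _ in range(500)
])

# Zero-mean noise averages out over realizations, but the data bias
# maps into model space: the mean is shifted by G_dagger @ bias.
model_bias = realizations.mean(axis=0) - G_dagger @ d_clean
```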


Geophysics ◽  
2016 ◽  
Vol 81 (6) ◽  
pp. A17-A21 ◽  
Author(s):  
Juan I. Sabbione ◽  
Mauricio D. Sacchi

The coefficients that synthesize seismic data via the hyperbolic Radon transform (HRT) are estimated by solving a linear-inverse problem. In the classical HRT, the computational cost of the inverse problem is proportional to the size of the data and the number of Radon coefficients. We have developed a strategy that significantly speeds up the implementation of time-domain HRTs. For this purpose, we have defined a restricted model space of coefficients applying hard thresholding to an initial low-resolution Radon gather. Then, an iterative solver that operated on the restricted model space was used to estimate the group of coefficients that synthesized the data. The method is illustrated with synthetic data and tested with a marine data example.
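A schematic of the restricted-model-space strategy, with a small random matrix standing in for the hyperbolic Radon operator and a plain least-squares solve in place of the iterative solver; sizes, threshold and coefficient values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the Radon operator: data d = L m, with a sparse
# coefficient vector m (only a few "hyperbolae" are active).
n_data, n_coef = 200, 100
L = rng.normal(size=(n_data, n_coef))
m_true = np.zeros(n_coef)
m_true[[5, 40, 77]] = [2.0, -2.0, 1.5]
d = L @ m_true

# Step 1: cheap low-resolution estimate (here: one adjoint application)
m_low = L.T @ d

# Step 2: hard thresholding defines the restricted model space
keep = np.abs(m_low) > 0.3 * np.abs(m_low).max()

# Step 3: solve only for the coefficients on the restricted support,
# which is a much smaller problem than the full inversion
m_rest, *_ = np.linalg.lstsq(L[:, keep], d, rcond=None)
m_hat = np.zeros(n_coef)
m_hat[keep] = m_rest
```

The cost saving comes from step 3: the solver touches only the retained columns rather than the full coefficient space.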


2021 ◽  
Vol 40 (10) ◽  
pp. 751-758
Author(s):  
Fabien Allo ◽  
Jean-Philippe Coulon ◽  
Jean-Luc Formento ◽  
Romain Reboul ◽  
Laure Capar ◽  
...  

Deep neural networks (DNNs) have the potential to streamline the integration of seismic data for reservoir characterization by providing estimates of rock properties that are directly interpretable by geologists and reservoir engineers instead of elastic attributes like most standard seismic inversion methods. However, they have yet to be applied widely in the energy industry because training DNNs requires a large amount of labeled data that is rarely available. Training set augmentation, routinely used in other scientific fields such as image recognition, can address this issue and open the door to DNNs for geophysical applications. Although this approach has been explored in the past, creating realistic synthetic well and seismic data representative of the variable geology of a reservoir remains challenging. Recently introduced theory-guided techniques can help achieve this goal. A key step in these hybrid techniques is the use of theoretical rock-physics models to derive elastic pseudologs from variations of existing petrophysical logs. Rock-physics theories are already commonly relied on to generalize and extrapolate the relationship between rock and elastic properties. Therefore, they are a useful tool to generate a large catalog of alternative pseudologs representing realistic geologic variations away from the existing well locations. While not directly driven by rock physics, neural networks trained on such synthetic catalogs extract the intrinsic rock-physics relationships and are therefore capable of directly estimating rock properties from seismic amplitudes. Neural networks trained on purely synthetic data are applied to a set of 2D poststack seismic lines to characterize a geothermal reservoir located in the Dogger Formation northeast of Paris, France. The goal of the study is to determine the extent of porous and permeable layers encountered at existing geothermal wells and ultimately guide the location and design of future geothermal wells in the area.
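As a rough illustration of the augmentation step, the sketch below perturbs a hypothetical porosity log and maps each variant to a velocity pseudolog through the Wyllie time-average equation, a deliberately simple stand-in for the rock-physics models used in such workflows; all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical porosity log and end-member velocities (assumed values)
phi_log = np.clip(0.15 + 0.05 * rng.standard_normal(200), 0.01, 0.35)
v_fluid, v_matrix = 1500.0, 5500.0    # m/s

def wyllie_vp(phi):
    # Wyllie time-average equation: slowness mixes linearly with porosity
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

# Training-set augmentation: many plausible geological variants of the
# petrophysical log, each mapped to an elastic pseudolog via rock physics
catalog = []
for _ in range(100):
    shift = rng.uniform(-0.05, 0.05)                  # porosity trend change
    phi_variant = np.clip(phi_log + shift, 0.01, 0.35)
    catalog.append((phi_variant, wyllie_vp(phi_variant)))
```

A network trained on pairs like these sees only synthetic examples, yet implicitly learns the rock-physics relationship embedded in the catalog.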


Geosciences ◽  
2019 ◽  
Vol 9 (1) ◽  
pp. 45
Author(s):  
Marwan Charara ◽  
Christophe Barnes

Full-waveform inversion of borehole seismic data is an ill-posed problem, and constraining the problem is crucial. Constraints can be imposed on the data and model spaces through covariance matrices, which are usually set to diagonal matrices. For the data space, signal polarization information can be used to evaluate the data uncertainties; the inversion then forces the synthetic data to fit the polarization of the observed data. A synthetic inversion of 2D-2C data estimating a 1D elastic model shows a clear improvement, especially at the level of the receivers. For the model space, horizontal and vertical spatial correlations following a Laplace distribution can be used to fill the model-space covariance matrix. This approach reduces the degrees of freedom of the inverse problem, which can be quantitatively evaluated. Strong horizontal spatial correlation distances favor a tabular geological model wherever this does not contradict the data. Relaxing the spatial correlation distances from large to small during the iterative inversion allows the recovery of geological objects of correspondingly smaller size, which regularizes the inverse problem. Synthetic constrained and unconstrained inversions of 2D-2C crosswell data show the clear improvement of the inversion results when constraints are used.
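A minimal sketch of the model-space side, assuming a Laplace (exponential) correlation with separate horizontal and vertical correlation lengths; the grid, parameter values and the effective-parameter proxy (trace over largest eigenvalue) are illustrative choices, not the authors' implementation.

```python
import numpy as np

# Hypothetical 2-D model grid of elastic parameters
nx, nz = 20, 15
x, z = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
x, z = x.ravel(), z.ravel()

sigma_m = 0.05          # a priori standard deviation (assumed)
L_x, L_z = 8.0, 2.0     # horizontal/vertical correlation lengths (assumed)

# Laplace (exponential) spatial correlation: strong horizontal smoothing
# favours tabular structures unless the data demand otherwise.
dist = np.abs(x[:, None] - x[None, :]) / L_x \
     + np.abs(z[:, None] - z[None, :]) / L_z
C_M = sigma_m**2 * np.exp(-dist)

# One crude proxy for the reduced degrees of freedom: the ratio of total
# variance to the largest eigenvalue shrinks as correlations strengthen.
eigvals = np.linalg.eigvalsh(C_M)
n_eff = np.trace(C_M) / eigvals.max()
print(round(n_eff, 1), "effective parameters out of", nx * nz)
```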


Geophysics ◽  
2021 ◽  
pp. 1-56
Author(s):  
Saber Jahanjooy ◽  
Mohammad Ali Riahi ◽  
Hamed Ghanbarnejad Moghanloo

The acoustic impedance (AI) model is key data for seismic interpretation, usually obtained from its nonlinear relation with seismic reflectivity. Common approaches use initial geological and seismic information to constrain the AI model estimation. When no accurate prior information is available, these approaches may impose false results in parts of the model. The regularization of ill-posed underdetermined problems requires constraints to restrict the possible results. Available seismic inversion methods mostly use Tikhonov or total variation (TV) regularization with some adjustments. Tikhonov regularization assumes smooth variation in the AI model and is insensitive to rapid changes in the model. TV allows rapid changes and is more stable in the presence of noisy data. In a detailed, realistic earth model in which AI changes gradually, TV creates a staircasing effect, which could lead to misinterpretation. This can be avoided by applying TV and Tikhonov regularization sequentially within the alternating direction method of multipliers (ADMM) to create the AI model. The results of implementing the proposed algorithm (STTVR) on 2D synthetic and real seismic sections show that the smaller details in the lithological variations are accounted for, as well as the general trend. STTVR can recover major AI variations without any additional low-frequency constraints. The temporal and spatial transitions of the calculated AI in real seismic data are gradual and close to a real geological setting.
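The contrast between the two regularizers can be sketched on a toy 1-D profile containing both a sharp jump and a gradual ramp. The snippet below is not the authors' STTVR algorithm; it only contrasts a Tikhonov-smoothed solution with a TV solution obtained via a standard ADMM soft-thresholding loop, with all sizes and penalty weights chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D impedance profile with a sharp jump and a gradual ramp
n = 120
m_true = np.concatenate([np.full(40, 1.0), np.full(40, 2.0),
                         np.linspace(2.0, 3.0, 40)])
d = m_true + rng.normal(scale=0.05, size=n)    # noisy observation

D = np.diff(np.eye(n), axis=0)                 # first-difference operator

def tikhonov(d, lam):
    # Smooth solution: min 0.5||m - d||^2 + lam ||D m||^2
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, d)

def tv_admm(d, lam, rho=1.0, n_iter=200):
    # Blocky solution: min 0.5||m - d||^2 + lam ||D m||_1 via ADMM
    m, z, u = d.copy(), np.zeros(n - 1), np.zeros(n - 1)
    A = np.linalg.inv(np.eye(n) + rho * D.T @ D)
    for _ in range(n_iter):
        m = A @ (d + rho * D.T @ (z - u))
        w = D @ m + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u += D @ m - z
    return m

m_tik = tikhonov(d, lam=20.0)   # smears the jump across many samples
m_tv = tv_admm(d, lam=0.2)      # keeps the jump, staircases the ramp
```

Running both and comparing the largest sample-to-sample change shows the trade-off the abstract describes: TV preserves the sharp jump that Tikhonov smooths away, while the ramp is where TV's staircasing appears.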


2015 ◽  
Vol 3 (3) ◽  
pp. SS65-SS71
Author(s):  
Rui Zhang ◽  
Thomas M. Daley ◽  
Donald Vasco

The In Salah carbon dioxide storage project in Algeria has injected more than 3 million tons of carbon dioxide into a thin, water-filled, tight-sand formation. Interferometric synthetic aperture radar range-change data revealed a double-lobe pattern of surface uplift, which has been interpreted as the existence of a subvertical fracture, or damage, zone. The reflection seismic data revealed a subtle linear push-down feature located along the depression between the two lobes, thought to be due to the injection of carbon dioxide. Understanding the CO2 distribution within the injection interval and its migration within the fracture zone requires a precise subsurface layer model from the injection interval to above the top of the fracture zone. To improve the resolution of the existing seismic model, we applied a sparse-layer seismic inversion, with basis pursuit decomposition, to the 3D seismic data between 1.0 and 1.5 s. The inversion results, including reflection coefficients and band-limited impedance cubes, provide improved subsurface imaging for two key layers (seismic horizons) above the injection interval. These horizons can be used as part of a more detailed earth model to study CO2 storage at In Salah.


2017 ◽  
Vol 5 (1) ◽  
pp. T1-T9 ◽  
Author(s):  
Rui Zhang ◽  
Kui Zhang ◽  
Jude E. Alekhue

More and more seismic surveys produce 3D seismic images in the depth domain by using prestack depth migration methods, which present subsurface structure directly in depth rather than in time. This leads to an increasing need for seismic inversion applied to depth-imaged data for reservoir characterization. To address this need, we have developed a depth-domain seismic inversion method using the compressed sensing technique, which outputs reflectivity and band-limited impedance without conversion to the time domain. The formulations of the seismic inversion in the depth domain are similar to those of time-domain methods, but all elements are implemented in the depth domain, for example, a depth-domain seismic well tie. The developed method was first tested on synthetic data, showing greatly improved resolution of the inverted reflectivity. We then applied the method to depth-migrated field data and validated the results against well-log data, which showed a good fit and improved resolution of the inversion results, demonstrating the feasibility and reliability of the proposed method for depth-domain seismic data.


1989 ◽  
Vol 20 (2) ◽  
pp. 169
Author(s):  
J.A. Young

Diffraction tomography is an approach to seismic inversion analogous to f-k migration. It differs from f-k migration in that it attempts to obtain a more quantitative, rather than qualitative, image of the Earth's subsurface. Diffraction tomography is based on the generalized projection-slice theorem, which relates the scattered wavefield to the Fourier spectrum of the scatterer. Factors such as the survey geometry and the source bandwidth determine the data coverage in the spatial Fourier domain, which in turn determines the image resolution. Limited view angles result in regions of the spatial Fourier domain with no data coverage, making the solution to the tomographic reconstruction problem nonunique. The simplistic approach is to assume the missing samples are zero and perform a standard reconstruction, but this can result in images with severe artefacts. Additional a priori information can be introduced to reduce the nonuniqueness and increase the stability of the reconstruction. This is the standard approach in ray tomography, but it is not commonly used in diffraction tomography applied to seismic data.

This paper shows the application of diffraction tomography to crosshole and VSP seismic data. Using synthetic data, the effects of the survey geometry and the finite source bandwidth on image resolution are examined, and techniques for improving image quality are discussed.
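The nonuniqueness caused by limited view angles can be mimicked in a few lines: keep only a wedge of the scatterer's spatial Fourier spectrum, zero the rest, and invert. The grid size and wedge angle below are arbitrary illustrative choices, not survey parameters from the paper.

```python
import numpy as np

# Toy scatterer: a small square anomaly on a 64 x 64 grid
n = 64
model = np.zeros((n, n))
model[28:36, 28:36] = 1.0

# Full spatial-Fourier spectrum of the scatterer
spec = np.fft.fftshift(np.fft.fft2(model))

# Limited view angles: keep only a wedge of wavenumbers, zero the rest
ky, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                     indexing="ij")
angle = np.degrees(np.arctan2(ky, kx))
mask = np.abs(angle) < 30        # crude +/- 30 degree wedge
mask |= np.abs(angle) > 150      # opposite wedge (conjugate symmetry)

# Simplistic reconstruction: assume the missing samples are zero
recon = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

# The zero-filled reconstruction deviates strongly from the true model,
# i.e. it is smeared along the directions with no data coverage.
err = np.abs(recon - model).max()
```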


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA185-WA200
Author(s):  
Yuqing Chen ◽  
Gerard T. Schuster

We present a wave-equation inversion method that inverts skeletonized seismic data for the subsurface velocity model. The skeletonized representation of the seismic traces consists of the low-rank latent-space variables predicted by a well-trained autoencoder neural network. The input to the autoencoder consists of seismic traces, and the implicit function theorem is used to determine the Fréchet derivative, i.e., the perturbation of the skeletonized data with respect to the velocity perturbation. The gradient is computed by migrating the shifted observed traces weighted by the skeletonized data residual, and the final velocity model is the one that best predicts the observed latent-space parameters. We denote this as inversion by Newtonian machine learning (NML) because it inverts for the model parameters by combining the forward and backward modeling of Newtonian wave propagation with the dimensional reduction capability of machine learning. Empirical results suggest that inversion by NML can sometimes mitigate the cycle-skipping problem of conventional full-waveform inversion (FWI). Numerical tests with synthetic and field data demonstrate the success of NML inversion in recovering a low-wavenumber approximation to the subsurface velocity model. The advantage of this method over other skeletonized data methods is that no manual picking of important features is required because the skeletal data are automatically selected by the autoencoder. The disadvantage is that the inverted velocity model has less resolution compared with the FWI result, but it can serve as a good initial model for FWI. Our most significant contribution is that we provide a general framework for using wave-equation inversion to invert skeletal data generated by any type of neural network. In other words, we have combined the deterministic modeling of Newtonian physics and the pattern matching capabilities of machine learning to invert seismic data by NML.


Geophysics ◽  
2009 ◽  
Vol 74 (1) ◽  
pp. L7-L15 ◽  
Author(s):  
Mark Pilkington

I have developed an inversion approach that determines a 3D susceptibility distribution that produces a given magnetic anomaly. The subsurface model consists of a 3D, equally spaced array of dipoles. The inversion incorporates a model norm that enforces sparseness and depth weighting of the solution. Sparseness is imposed by using the Cauchy norm on the model parameters. The inverse problem is posed in the data space, leading to a linear system of equations with dimensions based on the number of data, N. This contrasts with the standard least-squares solution, derived through operations within the M-dimensional model space (M being the number of model parameters). Hence, the data-space method combined with a conjugate gradient algorithm leads to computational efficiency by dealing with an N × N system versus an M × M one, where N ≪ M. Tests on synthetic data show that sparse inversion produces a much more focused solution compared with a standard model-space, least-squares inversion. The inversion of aeromagnetic data collected over a Precambrian Shield area again shows that including the sparseness constraint leads to a simpler and better resolved solution. The degree of improvement in model resolution for the sparse case is quantified using the resolution matrix.
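The data-space/model-space equivalence rests on the matrix identity (GᵀG + λI)⁻¹Gᵀ = Gᵀ(GGᵀ + λI)⁻¹, so the same damped solution can be obtained from an N × N system instead of an M × M one. A minimal numerical check with arbitrary sizes follows; this toy uses plain damping rather than the paper's Cauchy-norm, depth-weighted functional.

```python
import numpy as np

rng = np.random.default_rng(4)

# Underdetermined problem: few data (N), many dipoles (M)
N, M = 50, 500
G = rng.normal(size=(N, M))
d = rng.normal(size=N)
lam = 1.0

# Model-space (standard least-squares) solution: an M x M system
m_model = np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ d)

# Data-space solution: an N x N system, much cheaper when N << M
m_data = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(N), d)
```

Both vectors agree to numerical precision, but the data-space solve factors a 50 × 50 matrix instead of a 500 × 500 one, which is the source of the efficiency gain the abstract describes.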

