Restricted model domain time Radon transforms

Geophysics ◽  
2016 ◽  
Vol 81 (6) ◽  
pp. A17-A21 ◽  
Author(s):  
Juan I. Sabbione ◽  
Mauricio D. Sacchi

The coefficients that synthesize seismic data via the hyperbolic Radon transform (HRT) are estimated by solving a linear-inverse problem. In the classical HRT, the computational cost of the inverse problem is proportional to the size of the data and the number of Radon coefficients. We have developed a strategy that significantly speeds up the implementation of time-domain HRTs. For this purpose, we have defined a restricted model space of coefficients applying hard thresholding to an initial low-resolution Radon gather. Then, an iterative solver that operated on the restricted model space was used to estimate the group of coefficients that synthesized the data. The method is illustrated with synthetic data and tested with a marine data example.
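
To make the workflow concrete, here is a minimal Python sketch of the restricted-model-space idea: a toy time-domain hyperbolic Radon forward/adjoint pair, hard thresholding of the initial low-resolution gather, and a CGLS solver confined to the surviving coefficients. All grid sizes, the velocity range and the threshold level are illustrative assumptions, not the authors' parameterization.

```python
import numpy as np

# Toy time-domain hyperbolic Radon pair (nearest-sample mapping) plus CGLS on a
# hard-thresholded, restricted model space.
nt, dt = 500, 0.004
offsets = np.arange(0.0, 2000.0, 50.0)        # receiver offsets (m)
vels = np.arange(1400.0, 3000.0, 50.0)        # trial velocities (m/s)
taus = np.arange(nt) * dt                     # zero-offset intercept times (s)

def forward(m):
    """Synthesize data: d[t(tau, x, v), x] += m[tau, v], t = sqrt(tau^2 + (x/v)^2)."""
    d = np.zeros((nt, offsets.size))
    for iv, v in enumerate(vels):
        it = np.rint(np.sqrt(taus[:, None] ** 2 + (offsets[None, :] / v) ** 2) / dt).astype(int)
        for ix in range(offsets.size):
            ok = it[:, ix] < nt
            np.add.at(d[:, ix], it[ok, ix], m[ok, iv])
    return d

def adjoint(d):
    """Exact adjoint of forward(): spray data samples back along the hyperbolae."""
    m = np.zeros((nt, vels.size))
    for iv, v in enumerate(vels):
        it = np.rint(np.sqrt(taus[:, None] ** 2 + (offsets[None, :] / v) ** 2) / dt).astype(int)
        for ix in range(offsets.size):
            ok = it[:, ix] < nt
            m[ok, iv] += d[it[ok, ix], ix]
    return m

# Synthetic gather containing a single hyperbolic event.
m_true = np.zeros((nt, vels.size))
m_true[120, 10] = 1.0
d = forward(m_true)

# 1) Initial low-resolution Radon gather from the adjoint.
# 2) Hard thresholding defines the restricted model space (a boolean mask).
m0 = adjoint(d)
mask = np.abs(m0) >= 0.5 * np.abs(m0).max()

# 3) CGLS applied only to the coefficients kept in the restricted model space.
A = lambda m: forward(m * mask)
At = lambda r: adjoint(r) * mask
x, r = np.zeros_like(m0), d.copy()
s = At(r); p = s.copy(); gam = np.sum(s * s)
for _ in range(20):
    q = A(p)
    alpha = gam / np.sum(q * q)
    x += alpha * p
    r -= alpha * q
    s = At(r)
    gam_new = np.sum(s * s)
    if gam_new < 1e-12 * gam:
        break
    p = s + (gam_new / gam) * p
    gam = gam_new

print("coefficients kept by thresholding:", int(mask.sum()), "of", mask.size)
```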

Geosciences ◽  
2019 ◽  
Vol 9 (1) ◽  
pp. 45
Author(s):  
Marwan Charara ◽  
Christophe Barnes

Full-waveform inversion for borehole seismic data is an ill-posed problem and constraining the problem is crucial. Constraints can be imposed on the data and model space through covariance matrices. Usually, they are set to diagonal matrices. For the data space, signal polarization information can be used to evaluate the data uncertainties. The inversion forces the synthetic data to fit the polarization of observed data. A synthetic inversion of 2D-2C data estimating a 1D elastic model shows a clear improvement, especially at the level of the receivers. For the model space, horizontal and vertical spatial correlations using a Laplace distribution can be used to fill the model space covariance matrix. This approach reduces the degrees of freedom of the inverse problem, which can be quantitatively evaluated. Strong horizontal spatial correlation distances favor a tabular geological model whenever it does not contradict the data. The relaxation of the spatial correlation distances from large to small during the iterative inversion process allows the recovery of geological objects of comparable size, which regularizes the inverse problem. Synthetic constrained and unconstrained inversions for 2D-2C crosswell data show the clear improvement of the inversion results when constraints are used.
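
The following short sketch illustrates how a model-space covariance matrix with Laplace (exponential) horizontal and vertical correlations of the kind described above might be assembled; the grid, standard deviation and correlation lengths are placeholder values rather than those of the paper.

```python
import numpy as np

# Model grid (placeholder dimensions and spacings).
nx, nz, dx, dz = 20, 30, 10.0, 5.0
X, Z = np.meshgrid(np.arange(nx) * dx, np.arange(nz) * dz, indexing="ij")
xf, zf = X.ravel(), Z.ravel()              # flattened cell coordinates

sigma_m = 100.0                            # a priori model standard deviation (assumed)
Lx, Lz = 200.0, 20.0                       # horizontal / vertical correlation lengths (m)

# Laplace (exponential) correlation: a large Lx favours tabular structures; relaxing
# Lx and Lz from large to small during the iterations admits smaller objects.
Cm = sigma_m ** 2 * np.exp(-np.abs(xf[:, None] - xf[None, :]) / Lx
                           - np.abs(zf[:, None] - zf[None, :]) / Lz)

# A rough proxy for the reduction in degrees of freedom: the number of significant
# eigenvalues of the correlation matrix relative to the number of cells.
eigvals = np.linalg.eigvalsh(Cm / sigma_m ** 2)
print("significant eigenvalues:", int((eigvals > 0.01 * eigvals.max()).sum()), "of", eigvals.size)
```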


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. N15-N27 ◽  
Author(s):  
Carlos A. M. Assis ◽  
Henrique B. Santos ◽  
Jörg Schleicher

Acoustic impedance (AI) is a widely used seismic attribute in stratigraphic interpretation. Because of the frequency-band-limited nature of seismic data, seismic amplitude inversion cannot determine AI itself, but it can only provide an estimate of its variations, the relative AI (RAI). We have revisited and compared two alternative methods to transform stacked seismic data into RAI. One is colored inversion (CI), which requires well-log information, and the other is linear inversion (LI), which requires knowledge of the seismic source wavelet. We start by formulating the two approaches in a theoretically comparable manner. This allows us to conclude that both procedures are theoretically equivalent. We proceed to check whether the use of the CI results as the initial solution for LI can improve the RAI estimation. In our experiments, combining CI and LI cannot provide superior RAI results to those produced by each approach applied individually. Then, we analyze the LI performance with two distinct solvers for the associated linear system. Moreover, we investigate the sensitivity of both methods regarding the frequency content present in synthetic data. The numerical tests using the Marmousi2 model demonstrate that the CI and LI techniques can provide an RAI estimate of similar accuracy. A field-data example confirms the analysis using synthetic-data experiments. Our investigations confirm the theoretical and practical similarities of CI and LI regardless of the numerical strategy used in LI. An important result of our tests is that an increase in the low-frequency gap in the data leads to slightly deteriorated CI quality. In this case, LI required more iterations for the conjugate-gradient least-squares solver, but the final results were not much affected. Both methodologies provided interesting RAI profiles compared with well-log data, at low computational cost and with a simple parameterization.
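
As a rough illustration of the LI route (convolutional forward model, iterative least-squares solve, running sum to obtain relative AI), here is a hedged sketch; the Ricker wavelet, damping and the synthetic reflectivity are assumed for the example and do not reproduce the authors' parameterization.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import lsqr

# Convolutional model for the stacked trace: d = W r, with W a wavelet matrix.
dt, nt = 0.004, 300

def ricker(f0, dt, half=0.1):
    tw = np.arange(-half, half + dt, dt)
    return (1 - 2 * (np.pi * f0 * tw) ** 2) * np.exp(-(np.pi * f0 * tw) ** 2)

w = ricker(30.0, dt)

# Causal convolution matrix (the constant wavelet delay is ignored in this sketch).
col = np.zeros(nt)
col[: w.size] = w
W = toeplitz(col, np.zeros(nt))

r_true = np.zeros(nt)
r_true[[60, 120, 180]] = [0.08, -0.05, 0.10]
d = W @ r_true + 0.002 * np.random.randn(nt)

# Band-limited reflectivity by damped least squares (LSQR; CGLS behaves similarly).
r_est = lsqr(W, d, damp=0.05, iter_lim=200)[0]

# Relative acoustic impedance: ln(AI) variations ~ running sum of 2 * reflectivity.
rai = 2.0 * np.cumsum(r_est)
```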


2020 ◽  
Author(s):  
Bernhard S.A. Schuberth ◽  
Roman Freissler ◽  
Christophe Zaroli ◽  
Sophie Lambotte

For a comprehensive link between seismic tomography and geodynamic models, uncertainties in the seismic model space play a non-negligible role. More specifically, knowledge of the tomographic uncertainties is important for obtaining meaningful estimates of the present-day thermodynamic state of Earth's mantle, which form the basis of retrodictions of past mantle evolution using the geodynamic adjoint method. A standard tool in tomographic-geodynamic model comparisons nowadays is tomographic filtering of mantle circulation models using the resolution operator R associated with the particular seismic inversion of interest. However, in this classical approach it is not possible to consider tomographic uncertainties and their impact on the geodynamic interpretation.

Here, we present a new method for 'filtering' synthetic Earth models, which makes use of the generalised inverse operator G†, instead of using R. In our case, G† is taken from a recent global SOLA Backus–Gilbert S-wave tomography. In contrast to classical tomographic filtering, the 'imaged' model is constructed by computing the Generalised-Inverse Projection (GIP) of synthetic data calculated in an Earth model of choice. This way, it is possible to include the effects of noise in the seismic data and thus to analyse uncertainties in the resulting model parameters. In order to demonstrate the viability of the method, we compute a set of travel times in an existing mantle circulation model, add specific realisations of Gaussian, zero-mean seismic noise to the synthetic data and apply G†.

Our results show that the resulting GIP model without noise is equivalent to the mean model of all GIP realisations from the suite of synthetic 'noisy' data and also closely resembles the model tomographically filtered using R. Most important, GIP models that include noise in the data show a significant variability of the shape and amplitude of seismic anomalies in the mantle. The significant differences between the various GIP realisations highlight the importance of interpreting and assessing tomographic images in a prudent and cautious manner. With the GIP approach, we can moreover investigate the effect of systematic errors in the data, which we demonstrate by adding an extra term to the noise component that aims at mimicking the effects of uncertain crustal corrections. In our presentation, we will finally discuss ways to construct the model covariance matrix based on the GIP approach and point out possible research directions on how to make use of this information in future geodynamic modelling efforts.
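
A schematic of the GIP mechanics described above, with the generalized inverse and the synthetic travel-time data replaced by random stand-ins purely to show how noise realisations propagate into model-space variability (the actual operator comes from the SOLA tomography and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes: n_data travel-time residuals, n_par tomographic parameters.
n_data, n_par, n_real = 2000, 300, 100

# In the actual workflow G_dagger is the generalised inverse of the SOLA tomography
# and d_syn contains travel times computed in a mantle circulation model; random
# stand-ins are used here only to demonstrate the mechanics of the GIP.
G_dagger = rng.normal(scale=1e-2, size=(n_par, n_data))
d_syn = rng.normal(size=n_data)

sigma_noise = 0.5                      # assumed data standard deviation (s)

# GIP of the noise-free data and of many noisy realisations.
m_clean = G_dagger @ d_syn
m_noisy = np.array([
    G_dagger @ (d_syn + rng.normal(scale=sigma_noise, size=n_data))
    for _ in range(n_real)
])

# The mean over zero-mean noise realisations recovers the noise-free GIP model,
# while the spread over realisations quantifies the parameter uncertainties.
print("max |mean - noise-free| :", np.abs(m_noisy.mean(axis=0) - m_clean).max())
print("typical parameter spread:", m_noisy.std(axis=0).mean())
```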


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 497
Author(s):  
Fedor Krasnov ◽  
Alexander Butorin

Sparse spike deconvolution is one of the oldest inverse problems and a stylized version of the recovery problem in seismic imaging. The goal of sparse spike deconvolution is to recover an approximation of the reflectivity from a given noisy measurement T = W ∗ r + W₀. Since the convolution destroys many low and high frequencies, this requires some prior information to regularize the inverse problem. In this paper, the authors continue to study the problem of searching for positions and amplitudes of the reflection coefficients of the medium (SP&ARCM). In previous research, the authors proposed a practical algorithm, named A₀, for solving the inverse problem of obtaining geological information from the seismic trace. In the current paper, the authors improve the A₀ algorithm and apply it to real (non-synthetic) data. Firstly, the authors consider a matrix approach and a Differential Evolution approach to the SP&ARCM problem and show that their efficiency is limited in this case. Secondly, the authors show that the way to improve A₀ lies in the direction of optimization with sequential regularization. The authors present calculations of the accuracy of A₀ for that case and experimental results on convergence. The authors also consider different initialization parameters of the optimization process from the point of view of accelerating convergence. Finally, the authors successfully validate the A₀ algorithm on synthetic and real data. Further practical development of the A₀ algorithm will be aimed at increasing the robustness of its operation, as well as at its application to more complex models of real seismic data. The practical value of the research is to increase the resolving power of the wave field by reducing the contribution of interference, which gives new information for seismic-geological modeling.
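
For orientation, the following is a generic sparse-spike deconvolution sketch for the model T = W ∗ r + W₀ using iterative soft thresholding (ISTA); it is not the A₀ algorithm of the paper, and the wavelet, noise level and regularization weight are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse-spike model T = W * r + W0: wavelet convolved with a sparse reflectivity
# plus noise.  Wavelet, spike positions, noise level and lambda are placeholders.
dt, nt = 0.002, 400
tw = np.arange(-0.05, 0.05 + dt, dt)
w = (1 - 2 * (np.pi * 40.0 * tw) ** 2) * np.exp(-(np.pi * 40.0 * tw) ** 2)

r_true = np.zeros(nt)
r_true[[90, 150, 230, 300]] = [0.12, -0.08, 0.10, -0.06]
T = np.convolve(r_true, w, mode="same") + 0.005 * rng.normal(size=nt)

Wop = lambda r: np.convolve(r, w, mode="same")          # forward: W * r
WopT = lambda d: np.convolve(d, w[::-1], mode="same")   # adjoint (correlation)

lam = 0.02                                  # sparsity weight (assumed)
L = np.sum(np.abs(w)) ** 2                  # safe upper bound on ||W||^2
soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# ISTA: gradient step on the data misfit followed by soft thresholding.
r = np.zeros(nt)
for _ in range(500):
    r = soft(r + WopT(T - Wop(r)) / L, lam / L)
```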


Geophysics ◽  
1993 ◽  
Vol 58 (6) ◽  
pp. 873-882 ◽  
Author(s):  
Roelof Jan Versteeg

To get a correct earth image from seismic data acquired over complex structures, it is essential to use prestack depth migration. A necessary condition for obtaining a correct image is that the prestack depth migration is done with an accurate velocity model. In cases where we need to use prestack depth migration, determination of such a model using conventional methods does not give satisfactory results. Thus, new iterative methods for velocity model determination have been developed. The convergence of these methods can be accelerated by defining constraints on the model in such a way that the method only looks for those components of the true earth velocity field that influence the migrated image. In order to determine these components, the sensitivity of the prestack depth migration result to the velocity model is examined using a complex synthetic data set (the Marmousi data set) for which the exact model is known. The images obtained with increasingly smoothed versions of the true model are compared, and it is shown that the minimal spatial wavelength that needs to be in the model to obtain an accurate depth image from the data set is of the order of 200 m. The model space that has to be examined to find an accurate velocity model from complex seismic data can thus be constrained. This will increase the speed and probability of convergence of iterative velocity model determination methods.
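
A small sketch of the kind of model smoothing used in such sensitivity tests: removing spatial wavelengths shorter than roughly 200 m from a gridded velocity model with a Gaussian filter. The grid, the synthetic model and the sigma-to-wavelength conversion are assumptions for illustration, not the procedure of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder gridded velocity model: a vertical gradient plus fine-scale noise.
dx = 4.0                                   # grid spacing (m), assumed
nz, nx = 400, 600
v = 1500.0 + 2.0 * np.arange(nz)[:, None] * np.ones((1, nx))
v = v + 100.0 * np.random.randn(nz, nx)    # short-wavelength structure

# Suppress spatial wavelengths shorter than ~lambda_min with a Gaussian filter.
# The conversion sigma ~ lambda_min / (2*pi*dx) is a rule of thumb used here as
# an assumption, not a prescribed recipe.
lambda_min = 200.0                         # minimal wavelength to retain (m)
sigma = lambda_min / (2.0 * np.pi * dx)    # smoothing length in grid points
v_smooth = gaussian_filter(v, sigma=sigma)
```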


Geophysics ◽  
2003 ◽  
Vol 68 (2) ◽  
pp. 641-655 ◽  
Author(s):  
Anders Sollid ◽  
Bjørn Ursin

Scattering‐angle migration maps seismic prestack data directly into angle‐dependent reflectivity at the image point. The method automatically accounts for triplicated rayfields and is easily extended to handle anisotropy. We specify scattering‐angle migration integrals for PP and PS ocean‐bottom seismic (OBS) data in 3D and 2.5D elastic media exhibiting weak contrasts and weak anisotropy. The derivation is based on the anisotropic elastic Born‐Kirchhoff‐Helmholtz surface scattering integral. The true‐amplitude weights are chosen such that the amplitude versus angle (AVA) response of the angle gather is equal to the Born scattering coefficient or, alternatively, the linearized reflection coefficient. We implement scattering‐angle migration by shooting a fan of rays from the subsurface point to the acquisition surface, followed by integrating the phase‐ and amplitude‐corrected seismic data over the migration dip at the image point while keeping the scattering‐angle fixed. A dense summation over migration dip only adds a minor additional cost and enhances the coherent signal in the angle gathers. The 2.5D scattering‐angle migration is demonstrated on synthetic data and on real PP and PS data from the North Sea. In the real data example we use a transversely isotropic (TI) background model to obtain depth‐consistent PP and PS images. The aim of the succeeding AVA analysis is to predict the fluid type in the reservoir sand. Specifically, the PS stack maps the contrasts in lithology while being insensitive to the fluid fill. The PP large‐angle stack maps the oil‐filled sand but shows no response in the brine‐filled zones. A comparison to common‐offset Kirchhoff migration demonstrates that, for the same computational cost, scattering‐angle migration provides common image gathers with less noise and fewer artifacts.


Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. U87-U97 ◽  
Author(s):  
Mohammad Javad Khoshnavaz

Oriented time-domain imaging can be orders of magnitude faster than the routine techniques, which rely on velocity analysis. The term “oriented” refers to those techniques that use the information carried by local slopes. Time-domain dip moveout (DMO) correction, which has often been ignored by the seismic imaging community, has been coming back to attention within the last few years. I have developed an oriented time-domain DMO correction workflow that does not face the problematic loop between the dip-dependent and/or dip-independent velocities existing in the classic DMO correction algorithms. Use of the proposed approach is also advantageous over the previous oriented techniques; the proposed technique is independent of the wavefront curvature, and the input seismic data do not need to be sorted in two different domains. The application of the technique is limited to reflectors with a small curvature. The theory of the proposed technique is investigated on a simple synthetic data example and then applied to a 2D marine data set.


Geophysics ◽  
2001 ◽  
Vol 66 (3) ◽  
pp. 871-882 ◽  
Author(s):  
D. Lebrun ◽  
V. Richard ◽  
D. Mace ◽  
M. Cuer

Acquisition of the full elastic response (compressional and shear) of the subsurface is an important technology in the seismic industry because of its potential to improve the quality of seismic data and to infer accurate information about rock properties (fluid type and rock lithology). In the framework of 3-D propagation in 1-D media, we propose a computational tool to analyze the information about elastic parameters contained in the amplitudes of reflected waves with offset. The approach is based on singular value decomposition (SVD) analysis of the linearized elastic inversion problem and can be applied to any particular seismic data. We applied this tool to examine the type of information in the model space that can be retrieved from sea‐bottom multicomponent measurements. The results are compared with those obtained from conventional streamer acquisition techniques. We also present multiparameter linearized inversion results obtained from synthetic data that illustrate the resolution of elastic parameters. This approach allows us to investigate the reliability of the elastic parameters estimated for different offset ranges, wave modes, data types, and noise levels involved in data space.
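
To illustrate the kind of SVD analysis described above, here is a sketch that builds a linearized PP amplitude-versus-angle Jacobian (an Aki-Richards-type form) and inspects its singular values and vectors; the angle range and background Vs/Vp ratio are assumed values, and the actual study uses a more complete multicomponent formulation.

```python
import numpy as np

# Jacobian of a linearized PP reflection coefficient (Aki-Richards-type form) with
# respect to (dVp/Vp, dVs/Vs, drho/rho).  Angle range and the background Vs/Vp
# ratio are placeholder choices for this sketch.
angles = np.deg2rad(np.arange(0.0, 41.0, 2.0))
k = 0.5                                          # background Vs/Vp ratio
s2 = np.sin(angles) ** 2

J = np.column_stack([
    1.0 / (2.0 * np.cos(angles) ** 2),           # dR / d(dVp/Vp)
    -4.0 * k ** 2 * s2,                          # dR / d(dVs/Vs)
    0.5 * (1.0 - 4.0 * k ** 2 * s2),             # dR / d(drho/rho)
])

# The singular values indicate how many parameter combinations the offset (angle)
# range constrains; the right singular vectors give those combinations.
U, svals, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", svals)
print("best-resolved combination of (dVp/Vp, dVs/Vs, drho/rho):", Vt[0])
```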


2020 ◽  
Vol 223 (1) ◽  
pp. 254-269
Author(s):  
Roman Freissler ◽  
Christophe Zaroli ◽  
Sophie Lambotte ◽  
Bernhard S A Schuberth

SUMMARY Tomographic-geodynamic model comparisons are a key component in studies of the present-day state and evolution of Earth’s mantle. To account for the limited seismic resolution, ‘tomographic filtering’ of the geodynamically predicted mantle structures is a standard processing step in this context. The filtered model provides valuable information on how heterogeneities are smeared and modified in amplitude given the available seismic data and underlying inversion strategy. An important aspect that has so far not been taken into account is the effect of data uncertainties. We present a new method for ‘tomographic filtering’ in which it is possible to include the effects of random and systematic errors in the seismic measurements and to analyse the associated uncertainties in the tomographic model space. The ‘imaged’ model is constructed by computing the generalized-inverse projection (GIP) of synthetic data calculated in an earth model of choice. An advantage of this approach is that a reparametrization onto the tomographic grid can be avoided, depending on how the synthetic data are calculated. To demonstrate the viability of the method, we compute traveltimes in an existing mantle circulation model (MCM), add specific realizations of random seismic ‘noise’ to the synthetic data and apply the generalized inverse operator of a recent Backus–Gilbert-type global S-wave tomography. GIP models based on different noise realizations show a significant variability of the shape and amplitude of seismic anomalies. This highlights the importance of interpreting tomographic images in a prudent and cautious manner. Systematic errors, such as event mislocation or imperfect crustal corrections, can be investigated by introducing an additional term to the noise component so that the resulting noise distributions are biased. In contrast to Gaussian zero-mean noise, this leads to a bias in model space; that is, the mean of all GIP realizations is also non-zero. Knowledge of the statistical properties of model uncertainties together with tomographic resolution is crucial for obtaining meaningful estimates of Earth’s present-day thermodynamic state. A practicable treatment of error propagation and uncertainty quantification will therefore be increasingly important, especially in view of geodynamic inversions that aim at ‘retrodicting’ past mantle evolution based on tomographic images.
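
Complementing the zero-mean example given earlier, this short sketch shows how a biased noise term (mimicking, e.g., imperfect crustal corrections) maps into a bias of the mean GIP model; the operator and data are again random stand-ins, not the tomographic operator of the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_data, n_par, n_real = 2000, 300, 100

# Random stand-ins for the generalized inverse and the synthetic traveltimes;
# only the mechanics of a systematic (biased) error term are illustrated.
G_dagger = rng.normal(scale=1e-2, size=(n_par, n_data))
d_syn = rng.normal(size=n_data)
m_clean = G_dagger @ d_syn

sigma, bias = 0.5, 0.3     # random scatter and a systematic shift (assumed values)
m_reals = np.array([
    G_dagger @ (d_syn + rng.normal(scale=sigma, size=n_data) + bias)
    for _ in range(n_real)
])

# With biased noise the mean of all GIP realizations no longer coincides with the
# noise-free GIP model: the bias in data space maps into a bias in model space.
print("model-space bias (max):", np.abs(m_reals.mean(axis=0) - m_clean).max())
```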


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 599
Author(s):  
Danilo Cruz ◽  
João de Araújo ◽  
Carlos da Costa ◽  
Carlos da Silva

Full waveform inversion is an advantageous technique for obtaining high-resolution subsurface information. In the petroleum industry, mainly in reservoir characterisation, it is common to use information from wells as prior information to decrease the ambiguity of the obtained results. For this, we propose adding a relative entropy term to the formalism of full waveform inversion. In this context, entropy is simply a name for the regularisation term, whose role is to help convergence towards the global minimum. The application of entropy in inverse problems usually involves formulating the problem so that statistical concepts can be used. To avoid this step, we propose a deterministic application to full waveform inversion. We discuss some aspects of relative entropy and show three different ways of using it to add prior information to the inverse problem, employing a dynamic weighting scheme. The idea is that the prior information can help to find the path towards the global minimum at the beginning of the inversion process. In all cases, the prior information can be incorporated very quickly into the full waveform inversion and lead the inversion to the desired solution. When we include the logarithmic weighting that constitutes entropy in the inverse problem, the low-intensity ripples are suppressed and the point events are sharpened. Thus, the addition of relative entropy to full waveform inversion can provide a result with better resolution. In regions of the BP 2004 model where salt is present, we obtained a significant improvement for synthetic data by adding prior information through the relative entropy term. We show that prior information added through entropy in the full-waveform inversion formalism provides a way to avoid local minima.
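
One possible deterministic reading of the relative-entropy regularization described above is sketched below: a generalized Kullback-Leibler term between the current model and a well-derived prior, added to the data misfit with a dynamic weight that decays over iterations. The forward operator, the specific KL form and the weighting schedule are assumptions for illustration, not the authors' exact formalism.

```python
import numpy as np

def relative_entropy(m, m_prior):
    """Generalized Kullback-Leibler divergence between positive model vectors
    (e.g. velocities); one possible deterministic form of the entropy term."""
    return np.sum(m * np.log(m / m_prior) - m + m_prior)

def relative_entropy_grad(m, m_prior):
    return np.log(m / m_prior)

def regularized_misfit(m, d_obs, forward, grad_data, m_prior, alpha):
    """Data misfit plus a dynamically weighted relative-entropy term.  `forward`
    and `grad_data` stand in for wave-equation modelling and the adjoint-state
    gradient; both are placeholders in this sketch."""
    res = forward(m) - d_obs
    J = 0.5 * np.sum(res ** 2) + alpha * relative_entropy(m, m_prior)
    g = grad_data(m, res) + alpha * relative_entropy_grad(m, m_prior)
    return J, g

# Dynamic weighting: let the prior dominate early (guiding the iterates towards
# the basin of the global minimum) and relax it as iterations proceed.
alphas = np.geomspace(1e-1, 1e-4, num=30)

# Tiny toy usage with a linear "forward model" standing in for wave propagation.
rng = np.random.default_rng(3)
A = rng.normal(size=(50, 20))
m_prior = np.full(20, 1950.0)
d_obs = A @ np.full(20, 2000.0)
J, g = regularized_misfit(np.full(20, 1900.0), d_obs,
                          lambda m: A @ m, lambda m, res: A.T @ res,
                          m_prior, alphas[0])
```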

