SVD for multioffset linearized inversion: Resolution analysis in multicomponent acquisition

Geophysics ◽  
2001 ◽  
Vol 66 (3) ◽  
pp. 871-882 ◽  
Author(s):  
D. Lebrun ◽  
V. Richard ◽  
D. Mace ◽  
M. Cuer

Acquisition of the full elastic response (compressional and shear) of the subsurface is an important technology in the seismic industry because of its potential to improve the quality of seismic data and to infer accurate information about rock properties (fluid type and rock lithology). In the framework of 3-D propagation in 1-D media, we propose a computational tool to analyze the information about elastic parameters contained in the amplitudes of reflected waves with offset. The approach is based on singular value decomposition (SVD) analysis of the linearized elastic inversion problem and can be applied to any particular seismic data set. We applied this tool to examine the type of information in the model space that can be retrieved from sea‐bottom multicomponent measurements. The results are compared with those obtained from conventional streamer acquisition techniques. We also present multiparameter linearized inversion results obtained from synthetic data that illustrate the resolution of elastic parameters. This approach allows us to investigate the reliability of the elastic parameters estimated for the different offset ranges, wave modes, data types, and noise levels involved in the data space.
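The core of this approach, SVD of the linearized forward operator, can be illustrated with a toy computation. The sketch below assumes a three-parameter Aki-Richards-type linearization over a small angle range with an assumed background Vs/Vp ratio; it is not the authors' exact operator or acquisition geometry:

```python
import numpy as np

# Toy linearized forward operator: relative perturbations of Vp, Vs and
# density mapped to reflection amplitudes over a range of incidence
# angles (Aki-Richards-type linearization; angle range and background
# Vs/Vp ratio g are illustrative assumptions).
angles = np.deg2rad(np.linspace(0.0, 40.0, 20))
g = 0.5  # assumed background Vs/Vp ratio
G = np.column_stack([
    0.5 / np.cos(angles) ** 2,               # d(Vp)/Vp term
    -4.0 * g**2 * np.sin(angles) ** 2,       # d(Vs)/Vs term
    0.5 - 2.0 * g**2 * np.sin(angles) ** 2,  # d(rho)/rho term
])

# SVD of the forward operator: singular values quantify how well each
# model-space combination (rows of Vt) is constrained by the amplitudes.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
print("singular values:", s)
print("best-resolved parameter combination:", Vt[0])
print("condition number:", s[0] / s[-1])
```

Restricting the offset (angle) range shrinks the rows of G and widens the singular-value spread, which is exactly the kind of resolution trade-off the abstract proposes to quantify.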

Geophysics ◽  
2016 ◽  
Vol 81 (6) ◽  
pp. A17-A21 ◽  
Author(s):  
Juan I. Sabbione ◽  
Mauricio D. Sacchi

The coefficients that synthesize seismic data via the hyperbolic Radon transform (HRT) are estimated by solving a linear-inverse problem. In the classical HRT, the computational cost of the inverse problem is proportional to the size of the data and the number of Radon coefficients. We have developed a strategy that significantly speeds up the implementation of time-domain HRTs. For this purpose, we have defined a restricted model space of coefficients applying hard thresholding to an initial low-resolution Radon gather. Then, an iterative solver that operated on the restricted model space was used to estimate the group of coefficients that synthesized the data. The method is illustrated with synthetic data and tested with a marine data example.
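The restricted-model-space strategy can be sketched as follows; the random matrix, the threshold fraction, and the direct least-squares solve are illustrative stand-ins for the actual time-domain HRT operator and the authors' iterative solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the time-domain HRT operator: a tall matrix L mapping
# Radon coefficients to data (the real operator stacks data along
# hyperbolic trajectories).
n_data, n_coeff = 200, 500
L = rng.standard_normal((n_data, n_coeff)) / np.sqrt(n_data)

# A sparse Radon model, as a gather of a few hyperbolic events would be.
m_true = np.zeros(n_coeff)
m_true[[40, 180, 310]] = [2.0, -2.0, 1.5]
d = L @ m_true

# Step 1: low-resolution gather from the adjoint, then hard thresholding
# to define the restricted model space (the 0.3 threshold fraction is a
# tunable assumption of this sketch).
m_low = L.T @ d
support = np.abs(m_low) >= 0.3 * np.abs(m_low).max()

# Step 2: solve the inverse problem only over the restricted
# coefficients -- far cheaper than inverting over all n_coeff unknowns.
m_est = np.zeros(n_coeff)
m_est[support] = np.linalg.lstsq(L[:, support], d, rcond=None)[0]
print("restricted model size:", support.sum(), "of", n_coeff)
```

The speed-up comes from the second step operating on a model space whose size is set by the thresholded support rather than by the full Radon grid.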


2021 ◽  
Vol 40 (10) ◽  
pp. 751-758
Author(s):  
Fabien Allo ◽  
Jean-Philippe Coulon ◽  
Jean-Luc Formento ◽  
Romain Reboul ◽  
Laure Capar ◽  
...  

Deep neural networks (DNNs) have the potential to streamline the integration of seismic data for reservoir characterization by providing estimates of rock properties that are directly interpretable by geologists and reservoir engineers, rather than the elastic attributes produced by most standard seismic inversion methods. However, they have yet to be applied widely in the energy industry because training DNNs requires a large amount of labeled data that is rarely available. Training set augmentation, routinely used in other scientific fields such as image recognition, can address this issue and open the door to DNNs for geophysical applications. Although this approach has been explored in the past, creating realistic synthetic well and seismic data representative of the variable geology of a reservoir remains challenging. Recently introduced theory-guided techniques can help achieve this goal. A key step in these hybrid techniques is the use of theoretical rock-physics models to derive elastic pseudologs from variations of existing petrophysical logs. Rock-physics theories are already commonly relied on to generalize and extrapolate the relationship between rock and elastic properties. Therefore, they are a useful tool to generate a large catalog of alternative pseudologs representing realistic geologic variations away from the existing well locations. While not directly driven by rock physics, neural networks trained on such synthetic catalogs extract the intrinsic rock-physics relationships and are therefore capable of directly estimating rock properties from seismic amplitudes. Neural networks trained on purely synthetic data are applied to a set of 2D poststack seismic lines to characterize a geothermal reservoir located in the Dogger Formation northeast of Paris, France. The goal of the study is to determine the extent of porous and permeable layers encountered at existing geothermal wells and ultimately guide the location and design of future geothermal wells in the area.
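The pseudolog-generation step described above can be sketched in a few lines; Wyllie's time-average equation and the water-saturated-sandstone end-member values used here are illustrative stand-ins for whichever calibrated rock-physics model fits the reservoir under study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Existing petrophysical log: porosity (fraction) along depth.
porosity = np.clip(0.15 + 0.05 * np.sin(np.linspace(0.0, 6.0, 300)), 0.01, 0.35)

# Assumed end-member properties (illustrative values only).
V_MATRIX, V_FLUID = 5500.0, 1500.0       # velocities in m/s
RHO_MATRIX, RHO_FLUID = 2650.0, 1000.0   # densities in kg/m3

def elastic_pseudolog(phi):
    """Map a porosity log to Vp and density pseudologs.

    Wyllie's time-average equation stands in here for whichever
    rock-physics model is calibrated for the reservoir."""
    slowness = (1.0 - phi) / V_MATRIX + phi / V_FLUID
    vp = 1.0 / slowness
    rho = (1.0 - phi) * RHO_MATRIX + phi * RHO_FLUID
    return vp, rho

# Training-set augmentation: perturb the porosity log to mimic plausible
# geologic variation away from the well, then rebuild the elastic logs.
catalog = []
for _ in range(100):
    phi_alt = np.clip(porosity + rng.normal(0.0, 0.03, porosity.size), 0.01, 0.35)
    catalog.append(elastic_pseudolog(phi_alt))

print(len(catalog), "pseudolog pairs generated")
```

Each catalog entry is a physically consistent (Vp, density) pair, so a network trained on synthetics built from them implicitly absorbs the rock-physics relationship.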


Geosciences ◽  
2019 ◽  
Vol 9 (1) ◽  
pp. 45
Author(s):  
Marwan Charara ◽  
Christophe Barnes

Full-waveform inversion for borehole seismic data is an ill-posed problem and constraining the problem is crucial. Constraints can be imposed on the data and model space through covariance matrices. Usually, they are set to a diagonal matrix. For the data space, signal polarization information can be used to evaluate the data uncertainties. The inversion forces the synthetic data to fit the polarization of observed data. A synthetic inversion of 2D-2C data estimating a 1D elastic model shows a clear improvement, especially at the level of the receivers. For the model space, horizontal and vertical spatial correlations using a Laplace distribution can be used to fill the model space covariance matrix. This approach reduces the degree of freedom of the inverse problem, which can be quantitatively evaluated. Strong horizontal spatial correlation distances favor a tabular geological model whenever it does not contradict the data. The relaxation of the spatial correlation distances from large to small during the iterative inversion process allows the recovery of geological objects of correspondingly decreasing size, which regularizes the inverse problem. Synthetic constrained and unconstrained inversions for 2D-2C crosswell data show the clear improvement of the inversion results when constraints are used.
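A minimal sketch of such a model covariance matrix with separable Laplace (exponential) spatial correlations follows; the grid sizes are illustrative, and the participation ratio used to quantify the degrees of freedom is one possible measure, not necessarily the authors':

```python
import numpy as np

# Grid of model cells (z: depth, x: horizontal); hypothetical sizes.
nz, nx = 20, 20
dz, dx = 10.0, 10.0  # cell spacing in metres
z, x = np.meshgrid(np.arange(nz) * dz, np.arange(nx) * dx, indexing="ij")
zf, xf = z.ravel(), x.ravel()

def model_covariance(corr_z, corr_x, sigma=1.0):
    """Model covariance with separable Laplace (exponential) correlation.

    A large corr_x relative to corr_z favours tabular (layered) models."""
    dzij = np.abs(zf[:, None] - zf[None, :])
    dxij = np.abs(xf[:, None] - xf[None, :])
    return sigma**2 * np.exp(-dzij / corr_z - dxij / corr_x)

def effective_dof(C):
    """Effective number of free parameters from the covariance spectrum
    (participation ratio: one way to quantify the reduced DOF)."""
    lam = np.clip(np.linalg.eigvalsh(C), 0.0, None)
    return lam.sum() ** 2 / (lam**2).sum()

C_tab = model_covariance(corr_z=15.0, corr_x=150.0)  # strongly layered prior
C_iso = model_covariance(corr_z=15.0, corr_x=15.0)   # isotropic prior
print("DOF tabular prior:", effective_dof(C_tab))
print("DOF isotropic prior:", effective_dof(C_iso))
```

Relaxing corr_x from 150 m toward 15 m over iterations, as the abstract describes, progressively re-opens degrees of freedom and lets smaller geological objects enter the model.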


2020 ◽  
Author(s):  
Bernhard S.A. Schuberth ◽  
Roman Freissler ◽  
Christophe Zaroli ◽  
Sophie Lambotte

For a comprehensive link between seismic tomography and geodynamic models, uncertainties in the seismic model space play a non-negligible role. More specifically, knowledge of the tomographic uncertainties is important for obtaining meaningful estimates of the present-day thermodynamic state of Earth's mantle, which form the basis of retrodictions of past mantle evolution using the geodynamic adjoint method. A standard tool in tomographic-geodynamic model comparisons nowadays is tomographic filtering of mantle circulation models using the resolution operator R associated with the particular seismic inversion of interest. However, in this classical approach it is not possible to consider tomographic uncertainties and their impact on the geodynamic interpretation.

Here, we present a new method for 'filtering' synthetic Earth models, which makes use of the generalised inverse operator G†, instead of using R. In our case, G† is taken from a recent global SOLA Backus–Gilbert S-wave tomography. In contrast to classical tomographic filtering, the 'imaged' model is constructed by computing the Generalised-Inverse Projection (GIP) of synthetic data calculated in an Earth model of choice. This way, it is possible to include the effects of noise in the seismic data and thus to analyse uncertainties in the resulting model parameters. In order to demonstrate the viability of the method, we compute a set of travel times in an existing mantle circulation model, add specific realisations of Gaussian, zero-mean seismic noise to the synthetic data and apply G†.

Our results show that the resulting GIP model without noise is equivalent to the mean model of all GIP realisations from the suite of synthetic 'noisy' data and also closely resembles the model tomographically filtered using R. Most important, GIP models that include noise in the data show a significant variability of the shape and amplitude of seismic anomalies in the mantle. The significant differences between the various GIP realisations highlight the importance of interpreting and assessing tomographic images in a prudent and cautious manner. With the GIP approach, we can moreover investigate the effect of systematic errors in the data, which we demonstrate by adding an extra term to the noise component that aims at mimicking the effects of uncertain crustal corrections. In our presentation, we will finally discuss ways to construct the model covariance matrix based on the GIP approach and point out possible research directions on how to make use of this information in future geodynamic modelling efforts.
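The GIP workflow can be illustrated numerically; here a small random matrix stands in for the tomographic forward operator and its pseudo-inverse for the SOLA generalised inverse, which is only a sketch of the structure of the method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tomographic setup: G maps model parameters to travel-time data.
# A random overdetermined matrix stands in for the real operator.
n_data, n_model = 120, 40
G = rng.standard_normal((n_data, n_model))
G_dag = np.linalg.pinv(G)  # stand-in for the SOLA generalised inverse

m_true = rng.standard_normal(n_model)  # synthetic 'Earth model of choice'
d_syn = G @ m_true                     # noise-free synthetic data

# Generalised-Inverse Projection without and with seismic noise.
m_gip = G_dag @ d_syn
noise_level = 0.5  # assumed data noise standard deviation
realisations = np.array([
    G_dag @ (d_syn + rng.normal(0.0, noise_level, n_data))
    for _ in range(500)
])

# The mean over noisy realisations converges to the noise-free GIP model,
# while their spread quantifies uncertainty in the imaged parameters.
print("max deviation of mean:", np.abs(realisations.mean(axis=0) - m_gip).max())
print("mean per-parameter spread:", realisations.std(axis=0).mean())
```

Adding a constant offset to the noise term (mimicking uncertain crustal corrections) would bias the mean of the realisations, which is the systematic-error experiment the abstract describes.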


Geophysics ◽  
1993 ◽  
Vol 58 (6) ◽  
pp. 873-882 ◽  
Author(s):  
Roelof Jan Versteeg

To get a correct earth image from seismic data acquired over complex structures it is essential to use prestack depth migration. A necessary condition for obtaining a correct image is that the prestack depth migration is done with an accurate velocity model. In cases where we need to use prestack depth migration, determination of such a model using conventional methods does not give satisfactory results. Thus, new iterative methods for velocity model determination have been developed. The convergence of these methods can be accelerated by defining constraints on the model in such a way that the method only looks for those components of the true earth velocity field that influence the migrated image. In order to determine these components, the sensitivity of the prestack depth migration result to the velocity model is examined using a complex synthetic data set (the Marmousi data set) for which the exact model is known. The images obtained with increasingly smoothed versions of the true model are compared, and it is shown that the minimal spatial wavelength that needs to be in the model to obtain an accurate depth image from the data set is of the order of 200 m. The model space that has to be examined to find an accurate velocity model from complex seismic data can thus be constrained. This will increase the speed and probability of convergence of iterative velocity model determination methods.
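The idea of constraining the model space to long spatial wavelengths can be sketched with a simple low-pass filter; the 4 m grid spacing and the 200 m cutoff below are illustrative, not the paper's exact processing:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1D slice of a velocity model on an assumed 4 m grid: a linear trend
# plus rough short-wavelength variations.
dx = 4.0
n = 512
x = np.arange(n) * dx
v = 1500.0 + 0.7 * x + 200.0 * rng.standard_normal(n)

def lowpass(model, dx, min_wavelength):
    """Keep only spatial wavelengths >= min_wavelength via FFT truncation."""
    k = np.fft.rfftfreq(model.size, d=dx)  # spatial frequencies (1/m)
    spec = np.fft.rfft(model)
    spec[k > 1.0 / min_wavelength] = 0.0
    return np.fft.irfft(spec, n=model.size)

# Restricting the search to wavelengths above ~200 m shrinks the model
# space: only the first few Fourier coefficients remain free unknowns.
v_smooth = lowpass(v, dx, min_wavelength=200.0)
kept = np.count_nonzero(np.fft.rfftfreq(n, d=dx) <= 1.0 / 200.0)
print(f"free coefficients: {kept} of {n // 2 + 1}")
```

The same counting argument is what makes iterative velocity-model determination faster once the 200 m resolution limit is established.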


2019 ◽  
Vol 7 (3) ◽  
pp. SE161-SE174 ◽  
Author(s):  
Reetam Biswas ◽  
Mrinal K. Sen ◽  
Vishal Das ◽  
Tapan Mukerji

An inversion algorithm is commonly used to estimate the elastic properties, such as P-wave velocity (Vp), S-wave velocity (Vs), and density (ρ), of the earth’s subsurface. Generally, the seismic inversion problem is solved using one of the traditional optimization algorithms. These algorithms start with a given model and update the model at each iteration, following a physics-based rule. The algorithm is applied at each common depth point (CDP) independently to estimate the elastic parameters. Here, we have developed a technique using the convolutional neural network (CNN) to solve the same problem. We perform two critical steps to take advantage of the generalization capability of CNN and of the physics used to generate synthetic data for a meaningful representation of the subsurface. First, rather than using CNN as in a classification type of problem, which is the standard approach, we modified the CNN to solve a regression problem to estimate the elastic properties. Second, again unlike the conventional CNN, which is trained by supervised learning with predetermined label (elastic parameter) values, we use the physics of our forward problem to train the weights. There are two parts of the network: The first is the convolution network, which takes seismic data as input and predicts the elastic parameters, which are the desired intermediate result. In the second part of the network, we use wave-propagation physics applied to the output of the CNN to generate the predicted seismic data for comparison with the actual data and calculation of the error. This error between the true and predicted seismograms is then used to calculate gradients, and update the weights in the CNN. After the network is trained, only the first part of the network can be used to estimate elastic properties at remaining CDPs directly. We demonstrate the application of the physics-guided CNN to prestack and poststack inversion problems.
To explain how the algorithm works, we compare it against a conventional CNN workflow without any physics guidance. We first implement the algorithm on a synthetic data set for prestack and poststack data and then apply it to a real data set from the Cana field. In all the training examples, we use a maximum of 20% of the data. Our approach offers a distinct advantage over a conventional machine-learning approach in that we circumvent the need for labeled data sets for training.
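The two-part, physics-guided training loop can be sketched in miniature. In the sketch below the 'network' is reduced to a single linear layer, and a simple convolutional forward model (reflectivity from the impedance derivative, convolved with an assumed 25 Hz Ricker wavelet) stands in for the wave-propagation physics; only the structure of the loss (no elastic-parameter labels, misfit measured on re-modelled seismic data) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

# Physics operator A: impedance profile -> seismic trace (derivative
# followed by convolution with a Ricker wavelet; sizes are illustrative).
n = 64
t = np.arange(-16, 17) * 0.004
f0 = 25.0  # assumed wavelet peak frequency, Hz
ricker = (1.0 - 2.0 * (np.pi * f0 * t) ** 2) * np.exp(-((np.pi * f0 * t) ** 2))
D = np.diff(np.eye(n), axis=0)  # derivative operator, shape (n-1, n)
A = np.array([np.convolve(row, ricker, mode="same") for row in D])

m_true = np.cumsum(rng.normal(0.0, 1.0, n))  # toy impedance log
d_obs = A @ m_true                           # 'observed' seismic trace

# 'Network' W maps seismic to impedance. The loss compares the
# re-modelled seismic A @ (W @ d) with the data itself, so no labelled
# elastic parameters are needed: this is the physics-guided idea.
W = np.zeros((n, n - 1))
s_max = np.linalg.svd(A, compute_uv=False)[0]
lr = 1.0 / (2.0 * np.dot(d_obs, d_obs) * s_max**2)  # stable step size
for _ in range(2000):
    resid = A @ (W @ d_obs) - d_obs                 # seismic-domain misfit
    W -= lr * 2.0 * np.outer(A.T @ resid, d_obs)    # gradient step on W
rel_misfit = np.linalg.norm(A @ (W @ d_obs) - d_obs) / np.linalg.norm(d_obs)
print("relative data misfit after training:", rel_misfit)
```

In the real method the linear layer is a CNN trained over many CDPs, but the gradient flows through the physics operator in exactly this way.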


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. V227-V233
Author(s):  
Jitao Ma ◽  
Xiaohong Chen ◽  
Mrinal K. Sen ◽  
Yaru Xue

Blended data sets are now being acquired because of improved efficiency and reduction in cost compared with conventional seismic data acquisition. We have developed two methods for blended data free-surface multiple attenuation. The first method is based on an extension of surface-related multiple elimination (SRME) theory, in which free-surface multiples of the blended data can be predicted by a multidimensional convolution of the seismic data with the inverse of the blending operator. A least-squares inversion method is used, which indicates that crosstalk noise existed in the prediction result due to the approximate inversion. An adaptive subtraction procedure similar to that used in conventional SRME is then applied to obtain the blended primaries; this step can damage the energy of primaries. The second method is based on inverse data processing (IDP) theory adapted to blended data. We derived a formula similar to that used in conventional IDP, and we attenuated free-surface multiples by simple muting of the focused points in the inverse data space (IDS). The location of the focused points in the IDS for blended data, which can be calculated, is also related to the blending operator. We chose a singular value decomposition-based inversion algorithm to stabilize the inversion in the IDP method. The advantage of IDP compared with SRME is that it does not suffer from crosstalk noise and better preserves the primary energy. The outputs of our methods are all blended primaries, and they can be further processed using blended data-based algorithms. Synthetic data examples show that the SRME and IDP algorithms for blended data are successful in attenuating free-surface multiples.
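The origin of the crosstalk in the first method can be sketched per frequency, where the SRME multidimensional convolution reduces to a matrix product; the two-sources-per-blend layout and random delays below are illustrative assumptions, not the authors' acquisition design:

```python
import numpy as np

rng = np.random.default_rng(5)

# Single-frequency data matrix D[receiver, source].
n_src = 30
D = rng.standard_normal((n_src, n_src)) + 1j * rng.standard_normal((n_src, n_src))

# Blending operator Gamma: each blended experiment fires two sources,
# the second with a random time delay (a phase shift at this frequency).
n_blend = 15
Gamma = np.zeros((n_src, n_blend), dtype=complex)
for j in range(n_blend):
    Gamma[2 * j, j] = 1.0
    Gamma[2 * j + 1, j] = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))

D_blend = D @ Gamma                # blended data
Gamma_inv = np.linalg.pinv(Gamma)  # least-squares inverse of the blending

# Free-surface multiples predicted from blended data alone: convolve the
# blended data with its approximately deblended version. The imperfect
# right inverse (Gamma @ Gamma_inv != I) is what leaves crosstalk.
M_pred = (D_blend @ Gamma_inv) @ D_blend
M_exact = D @ D @ Gamma            # prediction if unblended data were known
crosstalk = np.linalg.norm(M_pred - M_exact) / np.linalg.norm(M_exact)
print("relative crosstalk level:", crosstalk)
```

Because Gamma has fewer columns than rows, its pseudo-inverse is only a left inverse, so the prediction inherits a crosstalk term; that is the residual the abstract's IDP alternative avoids.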


2020 ◽  
pp. 36-52
Author(s):  
I. A. Kopysova ◽  
A. S. Shirokov ◽  
D. V. Grandov ◽  
S. A. Eremin ◽  
E. N. Zhilin

The use of acoustic inversion of seismic data in the presence of a thick gas cap can lead to difficulties when building background models of elastic parameters. For this reason, under the acoustically contrasting thin-layered conditions within the perimeter of the Russkoye oil and gas condensate field, the authors considered a number of modified background-model techniques ("block" and "flat" models) in addition to the standard version based on well data. The use of these background models provided the best results and made it possible to significantly improve the quality of rock-property prediction; based on the drilling results, effective penetration reached 66 %, which was 102 % of the plan. The inversion results also made it possible to predict reservoir properties using the Bayesian lithotype classification method.
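The Bayesian lithotype classification mentioned at the end can be sketched for two lithotypes; the class names, impedance statistics, and prior proportions below are entirely hypothetical:

```python
import numpy as np

# Hypothetical two-lithotype classification from inverted acoustic
# impedance: Gaussian likelihood per lithotype plus prior proportions.
classes = {
    "reservoir sand": {"mean": 6.5e6, "std": 0.4e6, "prior": 0.35},
    "shale":          {"mean": 7.8e6, "std": 0.5e6, "prior": 0.65},
}

def lithotype_posterior(impedance):
    """Posterior probability of each lithotype given an impedance value
    (Bayes' rule with Gaussian likelihoods; normalising constants cancel)."""
    weighted = {
        name: p["prior"]
        * np.exp(-0.5 * ((impedance - p["mean"]) / p["std"]) ** 2) / p["std"]
        for name, p in classes.items()
    }
    total = sum(weighted.values())
    return {name: w / total for name, w in weighted.items()}

post = lithotype_posterior(6.6e6)
print(post)
```

Applying this pointwise to an inverted impedance volume yields the lithotype probability cubes typically used to map porous and permeable intervals.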


2020 ◽  
Vol 223 (1) ◽  
pp. 254-269
Author(s):  
Roman Freissler ◽  
Christophe Zaroli ◽  
Sophie Lambotte ◽  
Bernhard S A Schuberth

SUMMARY Tomographic-geodynamic model comparisons are a key component in studies of the present-day state and evolution of Earth’s mantle. To account for the limited seismic resolution, ‘tomographic filtering’ of the geodynamically predicted mantle structures is a standard processing step in this context. The filtered model provides valuable information on how heterogeneities are smeared and modified in amplitude given the available seismic data and underlying inversion strategy. An important aspect that has so far not been taken into account is the effect of data uncertainties. We present a new method for ‘tomographic filtering’ in which it is possible to include the effects of random and systematic errors in the seismic measurements and to analyse the associated uncertainties in the tomographic model space. The ‘imaged’ model is constructed by computing the generalized-inverse projection (GIP) of synthetic data calculated in an earth model of choice. An advantage of this approach is that a reparametrization onto the tomographic grid can be avoided, depending on how the synthetic data are calculated. To demonstrate the viability of the method, we compute traveltimes in an existing mantle circulation model (MCM), add specific realizations of random seismic ‘noise’ to the synthetic data and apply the generalized inverse operator of a recent Backus–Gilbert-type global S-wave tomography. GIP models based on different noise realizations show a significant variability of the shape and amplitude of seismic anomalies. This highlights the importance of interpreting tomographic images in a prudent and cautious manner. Systematic errors, such as event mislocation or imperfect crustal corrections, can be investigated by introducing an additional term to the noise component so that the resulting noise distributions are biased. In contrast to Gaussian zero-mean noise, this leads to a bias in model space; that is, the mean of all GIP realizations is also non-zero.
Knowledge of the statistical properties of model uncertainties together with tomographic resolution is crucial for obtaining meaningful estimates of Earth’s present-day thermodynamic state. A practicable treatment of error propagation and uncertainty quantification will therefore be increasingly important, especially in view of geodynamic inversions that aim at ‘retrodicting’ past mantle evolution based on tomographic images.


2021 ◽  
Vol 11 (11) ◽  
pp. 4874
Author(s):  
Milan Brankovic ◽  
Eduardo Gildin ◽  
Richard L. Gibson ◽  
Mark E. Everett

Seismic data provides integral information in geophysical exploration for locating hydrocarbon-rich areas as well as for fracture monitoring during well stimulation. Because of its high acquisition rate and dense spatial sampling, distributed acoustic sensing (DAS) has seen increasing application in microseismic monitoring. Given large volumes of data to be analyzed in real-time and impractical memory and storage requirements, fast compression and accurate interpretation methods are necessary for real-time monitoring campaigns using DAS. In response to the developments in data acquisition, we have created shifted-matrix decomposition (SMD) to compress seismic data by storing it into pairs of singular vectors coupled with shift vectors. This is achieved by shifting the columns of a matrix of seismic data before applying singular value decomposition (SVD) to it to extract a pair of singular vectors. The purpose of SMD is data denoising as well as compression, as reconstructing seismic data from its compressed form creates a denoised version of the original data. By analyzing the data in its compressed form, we can also run signal detection and velocity estimation analysis. Therefore, the developed algorithm can simultaneously compress and denoise seismic data while also analyzing compressed data to estimate signal presence and wave velocities. To show its efficiency, we compare SMD to local SVD and structure-oriented SVD, which are similar SVD-based methods used only for denoising seismic data. While the development of SMD is motivated by the increasing use of DAS, SMD can be applied to any seismic data obtained from a large number of receivers. For example, here we present initial applications of SMD to readily available marine seismic data.
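The shift-then-SVD idea can be sketched on a toy gather. Integer shifts estimated by cross-correlation against a reference trace are one simple way to build the shift vector; the published SMD may estimate shifts differently, so this is only a structural sketch:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy gather: one waveform arriving with linear moveout across traces
# (columns), plus noise. Rows are time samples.
n_t, n_x = 200, 40
wavelet = np.exp(-0.5 * ((np.arange(n_t) - 60) / 4.0) ** 2)
shifts_true = np.arange(n_x)  # one sample of moveout per trace
data = np.column_stack([np.roll(wavelet, s) for s in shifts_true])
noisy = data + 0.2 * rng.standard_normal((n_t, n_x))

def smd_rank1(M, ref=0):
    """One step of a shifted-matrix decomposition (simplified sketch):
    estimate per-column shifts by cross-correlation with a reference
    trace, align the columns, extract a rank-1 SVD pair, and unshift."""
    n = M.shape[0]
    shifts = np.array([
        int(np.argmax(np.correlate(M[:, ref], M[:, j], mode="full")) - (n - 1))
        for j in range(M.shape[1])
    ])
    aligned = np.column_stack(
        [np.roll(M[:, j], shifts[j]) for j in range(M.shape[1])]
    )
    U, s, Vt = np.linalg.svd(aligned, full_matrices=False)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])  # compressed representation
    denoised = np.column_stack(
        [np.roll(rank1[:, j], -shifts[j]) for j in range(M.shape[1])]
    )
    return denoised, shifts

denoised, shifts = smd_rank1(noisy)
print("estimated shifts (first traces):", shifts[:5])
```

The compressed form is just one pair of singular vectors plus the shift vector, yet reconstructing from it suppresses the incoherent noise, which is the joint compression-and-denoising property the abstract describes.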

