Semisupervised sequence modeling for elastic impedance inversion

2019, Vol. 7 (3), pp. SE237-SE249
Author(s): Motaz Alfarraj, Ghassan AlRegib

Recent applications of machine learning algorithms in the seismic domain have shown great potential in areas such as seismic inversion and interpretation. However, such algorithms rarely enforce geophysical constraints, the lack of which can lead to undesirable results. To overcome this issue, we have developed a semisupervised sequence-modeling framework based on recurrent neural networks for elastic impedance inversion from multiangle seismic data. Specifically, seismic traces and elastic impedance (EI) traces are modeled as time series. Then, a neural-network-based inversion model comprising convolutional and recurrent neural layers is used to invert seismic data for EI. The proposed workflow uses well-log data to guide the inversion. In addition, it uses seismic forward modeling to regularize the training and to serve as a geophysical constraint for the inversion. On a synthetic data set, the proposed workflow achieves an average correlation of 98% between the estimated and target EI using only 10 well logs for training.
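
The training objective described above can be summarized as a two-term loss: a supervised misfit against well-log EI and an unsupervised misfit between the input seismic and the seismic re-synthesized from the predicted EI. Below is a minimal PyTorch sketch of that idea; the network layout, the forward_model callable, and the 0.5 trade-off weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InversionNet(nn.Module):
    """Convolutional + recurrent layers mapping seismic traces to EI traces."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(1, hidden, kernel_size=5, padding=2)
        self.gru = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                            # x: (batch, samples)
        h = torch.relu(self.conv(x.unsqueeze(1)))    # (batch, hidden, samples)
        h, _ = self.gru(h.transpose(1, 2))           # (batch, samples, hidden)
        return self.out(h).squeeze(-1)               # (batch, samples)

def semisupervised_loss(net, forward_model, seismic_well, ei_well, seismic_all):
    """Supervised misfit at wells + forward-modeling misfit on all traces."""
    supervised = nn.functional.mse_loss(net(seismic_well), ei_well)
    # Unsupervised term: re-synthesize seismic from predicted EI and compare.
    ei_pred_all = net(seismic_all)
    unsupervised = nn.functional.mse_loss(forward_model(ei_pred_all), seismic_all)
    return supervised + 0.5 * unsupervised   # 0.5 is an assumed trade-off weight
```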

Geophysics, 2018, Vol. 83 (2), pp. R173-R187
Author(s): Huaizhen Chen, Kristopher A. Innanen, Tiansheng Chen

P- and S-wave inverse quality factors quantify seismic wave attenuation, which is related to several key reservoir parameters (porosity, saturation, and viscosity). Estimating the inverse quality factors from observed seismic data provides additional, useful information for gas-bearing reservoir prediction. We first derive an approximate reflection coefficient and an attenuative elastic impedance (QEI) expressed in terms of the inverse quality factors, and we then establish an approach to estimate elastic properties (P- and S-wave impedances and density) and attenuation (P- and S-wave inverse quality factors) from seismic data at different incidence angles and frequencies. The approach is implemented as a two-step inversion: a model-based, damped least-squares inversion for QEI, followed by a Bayesian Markov chain Monte Carlo inversion for the inverse quality factors. Synthetic data tests confirm that P- and S-wave impedances and inverse quality factors are reasonably estimated in the presence of moderate data error or noise. Application of the approach to a real data set suggests that it is robust and that physically meaningful inverse quality factors can be estimated from seismic data acquired over a gas-bearing reservoir.
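
The first step of the two-step inversion is a model-based, damped least-squares fit for QEI. Here is a minimal sketch of that step, assuming the usual linearized forward model d = Gm + noise; G, d, the prior model, and the damping weight are placeholders for illustration, not the paper's operators.

```python
import numpy as np

def damped_least_squares(G, d, mu=0.1, m_prior=None):
    """Solve min ||G m - d||^2 + mu ||m - m_prior||^2 for the QEI model m."""
    n = G.shape[1]
    m_prior = np.zeros(n) if m_prior is None else m_prior
    lhs = G.T @ G + mu * np.eye(n)        # damped normal equations
    rhs = G.T @ d + mu * m_prior
    return np.linalg.solve(lhs, rhs)
```

The second step, a Bayesian Markov chain Monte Carlo inversion for the inverse quality factors, would then sample around this damped solution rather than replace it.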


Geophysics, 2006, Vol. 71 (5), pp. U67-U76
Author(s): Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because computing the Hessian is expensive, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared with a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, although it is dip limited, in a controllable way, relative to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data are highly irregularly sampled along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
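
A minimal sketch of the diagonal-band idea: form the damped normal equations but keep only a few diagonals of the Hessian before solving. For clarity the sketch forms the full product first, whereas the savings in the paper come from computing only the retained diagonals; the operator A, the band width, and the damping are illustrative assumptions.

```python
import numpy as np

def banded_normal_solve(A, d, n_diagonals=5, damping=1e-3):
    """Approximately solve (A^H A + damping I) m = A^H d with a banded Hessian."""
    H = A.conj().T @ A                     # full Hessian, shown for clarity only
    # Keep only a band of diagonals around the main diagonal.
    rows, cols = np.indices(H.shape)
    H_band = np.where(np.abs(rows - cols) < n_diagonals, H, 0.0)
    H_band = H_band + damping * np.eye(H.shape[1])
    return np.linalg.solve(H_band, A.conj().T @ d)
```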


Geophysics, 2006, Vol. 71 (5), pp. C81-C92
Author(s): Helene Hafslund Veire, Hilde Grude Borgos, Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. To make pressure- and saturation-related changes usable in reservoir modeling and simulation, the uncertainty of the estimates must also be quantified. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF) that provides estimates of the uncertainties as well as of the properties themselves. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model, and PP reflection-coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each variable. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
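
Stripped to its linear-Gaussian core, the Bayesian setup combines a Gaussian prior on the pressure and saturation changes with a linearized likelihood tying them to PP reflection-coefficient differences. A minimal sketch under those assumptions follows; G, the covariances, and d are placeholders, and the paper's rock-physics link and spatial correlation model are considerably richer.

```python
import numpy as np

def gaussian_posterior(G, d, mu_prior, C_prior, C_noise):
    """Posterior mean/covariance of m for d = G m + e, with e ~ N(0, C_noise)
    and prior m ~ N(mu_prior, C_prior); m collects (dPressure, dSaturation)."""
    C_post = np.linalg.inv(G.T @ np.linalg.solve(C_noise, G)
                           + np.linalg.inv(C_prior))
    mu_post = C_post @ (G.T @ np.linalg.solve(C_noise, d)
                        + np.linalg.solve(C_prior, mu_prior))
    return mu_post, C_post   # diag(C_post) quantifies estimation uncertainty
```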


Geophysics, 2017, Vol. 82 (3), pp. R199-R217
Author(s): Xintao Chai, Shangxu Wang, Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth's Q-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse Q filtering and generates superior Q-compensation results. Applying NSRI to data sets that contain multiples (here, surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and would treat the multiples as interference convolved with incorrect Q values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI, using the feedback model and NSRI theory, to nonstationary seismic data with surface-related multiples. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse Q filtering) retained, but multiples are also accounted for. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input Q values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider the Q-filtering effects explicitly. Nevertheless, there are benefits to NSRI considering multiples: the periodicity and amplitude of the multiples constrain the positions of the reflectivities and the amplitude of the wavelet, helping to overcome the scaling and shifting ambiguities of conventional formulations in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.
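
At the heart of NSRI is a nonstationary forward model in which each reflectivity sample is convolved with a wavelet shaped by the earth's Q filter for that sample's traveltime. Here is a minimal sketch of such an operator, keeping only the amplitude-decay part of the Q filter (phase dispersion is omitted, and all parameters are illustrative assumptions rather than the authors' formulation).

```python
import numpy as np

def nonstationary_matrix(wavelet, times, dt, Q):
    """Columns are causal, Q-attenuated wavelets; reflectivity r gives d = W @ r."""
    n = len(times)
    spec0 = np.fft.rfft(wavelet, n=n)            # wavelet spectrum, padded to n
    freqs = np.fft.rfftfreq(n, d=dt)
    W = np.zeros((n, n))
    for j, t in enumerate(times):
        # Kjartansson-style amplitude decay exp(-pi f t / Q) at traveltime t.
        spec = spec0 * np.exp(-np.pi * freqs * t / Q)
        w_t = np.fft.irfft(spec, n=n)
        W[j:, j] = w_t[: n - j]                  # place attenuated wavelet at sample j
    return W
```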


2021, Vol. 40 (10), pp. 751-758
Author(s): Fabien Allo, Jean-Philippe Coulon, Jean-Luc Formento, Romain Reboul, Laure Capar, ...

Deep neural networks (DNNs) have the potential to streamline the integration of seismic data for reservoir characterization by providing estimates of rock properties that are directly interpretable by geologists and reservoir engineers, rather than the elastic attributes delivered by most standard seismic inversion methods. However, DNNs have yet to be applied widely in the energy industry because training them requires a large amount of labeled data that is rarely available. Training-set augmentation, routinely used in other scientific fields such as image recognition, can address this issue and open the door to DNNs for geophysical applications. Although this approach has been explored in the past, creating realistic synthetic well and seismic data representative of the variable geology of a reservoir remains challenging. Recently introduced theory-guided techniques can help achieve this goal. A key step in these hybrid techniques is the use of theoretical rock-physics models to derive elastic pseudologs from variations of existing petrophysical logs. Rock-physics theories are already commonly relied on to generalize and extrapolate the relationship between rock and elastic properties, so they are a useful tool for generating a large catalog of alternative pseudologs representing realistic geologic variations away from the existing well locations. Although not directly driven by rock physics, neural networks trained on such synthetic catalogs extract the intrinsic rock-physics relationships and are therefore capable of estimating rock properties directly from seismic amplitudes. Neural networks trained on purely synthetic data are applied to a set of 2D poststack seismic lines to characterize a geothermal reservoir located in the Dogger Formation northeast of Paris, France. The goal of the study is to determine the extent of the porous and permeable layers encountered at existing geothermal wells and ultimately to guide the location and design of future geothermal wells in the area.
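
A minimal sketch of the pseudolog-catalog idea: perturb a measured porosity log and push it through standard rock-physics relations to obtain elastic pseudologs for training. Wyllie's time average (for Vp) and Gardner's relation (for density) are used here as generic, well-known stand-ins for the paper's rock-physics models; the constants and the perturbation scheme are illustrative assumptions.

```python
import numpy as np

def make_pseudolog(phi_log, rng, v_matrix=5500.0, v_fluid=1500.0):
    """Return (phi, vp, rho) for one geologically perturbed pseudo-well (SI units)."""
    # Smooth random perturbation mimicking geologic variation away from the well.
    phi = np.clip(phi_log + 0.03 * rng.standard_normal(phi_log.size), 0.01, 0.40)
    vp = 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)   # Wyllie time average
    rho = 310.0 * vp ** 0.25                              # Gardner's relation
    return phi, vp, rho

rng = np.random.default_rng(0)
phi_well = np.full(200, 0.15)          # placeholder measured porosity log
catalog = [make_pseudolog(phi_well, rng) for _ in range(1000)]
```

Each catalog entry can then be converted to a synthetic seismic trace, giving the labeled pairs needed to train the DNN.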


Geophysics, 2020, Vol. 85 (4), pp. WA41-WA52
Author(s): Dario Grana, Leonardo Azevedo, Mingliang Liu

Among the large variety of mathematical and computational methods for estimating reservoir properties such as facies and petrophysical variables from geophysical data, deep machine-learning algorithms have gained significant popularity for their ability to obtain accurate solutions for geophysical inverse problems in which the physical models are partially unknown. Solutions of classification and inversion problems are generally nonunique, and uncertainty quantification studies are required to quantify the uncertainty in the model predictions and determine the precision of the results. Probabilistic methods, such as Monte Carlo approaches, provide a reliable way of capturing the variability of the set of possible models that match the measured data. Here, we focus on the classification of facies from seismic data and benchmark the performance of three algorithms: recurrent neural network, Monte Carlo acceptance/rejection sampling, and Markov chain Monte Carlo. We test and validate these approaches at the well locations by comparing the classification predictions to the reference facies profile; the accuracy of the classification results is measured by the mismatch between the predictions and the log facies profile. Our study finds that when the training data set of the neural network is large enough and the prior information about the transition probabilities of the facies in the Monte Carlo approach is uninformative, machine-learning methods lead to more accurate solutions, although the uncertainty of the solution might be underestimated. When some prior knowledge of the facies model is available, for example, from nearby wells, Monte Carlo methods provide solutions with similar accuracy to the neural network and allow a more robust quantification of the uncertainty of the solution.
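
A minimal sketch of the Monte Carlo acceptance/rejection benchmark: facies profiles are drawn from a first-order Markov prior (the transition matrix encodes the prior knowledge mentioned above) and accepted with probability proportional to a Gaussian likelihood of the observed trace. The forward model, noise level, and parameters are illustrative assumptions.

```python
import numpy as np

def sample_profile(P, pi0, n, rng):
    """Draw one facies profile from a Markov-chain prior with transition matrix P."""
    f = [rng.choice(len(pi0), p=pi0)]
    for _ in range(n - 1):
        f.append(rng.choice(P.shape[1], p=P[f[-1]]))
    return np.array(f)

def rejection_sample(d_obs, forward, P, pi0, n, sigma, n_draws, rng):
    """Accepted profiles approximate the posterior p(facies | d_obs)."""
    accepted = []
    for _ in range(n_draws):
        f = sample_profile(P, pi0, n, rng)
        misfit = np.sum((d_obs - forward(f)) ** 2) / (2.0 * sigma ** 2)
        if rng.random() < np.exp(-misfit):   # accept w.p. proportional to likelihood
            accepted.append(f)
    return accepted
```

Facies proportions across the accepted profiles give both a classification (the modal facies per sample) and its uncertainty.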


Geophysics, 2018, Vol. 83 (3), pp. MR187-MR198
Author(s): Yi Shen, Jack Dvorkin, Yunyue Li

Our goal is to accurately estimate attenuation from seismic data using model regularization in the seismic inversion workflow. One way to achieve this goal is by finding an analytical relation linking Q to Vp. We derive an approximate closed-form solution relating Q to Vp using rock-physics modeling. This relation is tested on well data from a clean clastic gas reservoir, for which the Q values are computed from the log data. Next, we create a 2D synthetic gas-reservoir section populated with Vp and Q and generate the respective synthetic seismograms. The goal is then to invert this synthetic seismic section for Q. If we use standard seismic inversion based solely on seismic data, the inverted attenuation model has low resolution, incorrect positioning, and distortion. However, by adding our relation between velocity and attenuation, we obtain an attenuation model very close to the original section. The method is also tested on a 2D field seismic data set from the Gulf of Mexico. The resulting Q model matches the geologic shape of an absorption body interpreted from the seismic section. Using this Q model in seismic migration makes the seismic events below the high-absorption layer clearly visible, with improved frequency content and coherency of the events.
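
The role of the velocity-attenuation relation is to supply a prior for the attenuation inversion. Below is a minimal sketch of that regularization, with a hypothetical power law standing in for the paper's closed-form Q-Vp relation (which is not reproduced here) and a generic linear forward operator G; all names and parameters are placeholders.

```python
import numpy as np

def q_of_vp(vp, a=1e-5, b=2.0):
    """Hypothetical power-law stand-in for the derived Q-Vp relation (vp in m/s)."""
    return a * vp ** b

def regularized_q_inversion(G, d, vp, alpha=0.5):
    """Solve min ||G q_inv - d||^2 + alpha ||q_inv - q_prior||^2, where the
    prior inverse-Q model comes from the velocity via the Q-Vp relation."""
    q_prior = 1.0 / q_of_vp(vp)
    n = G.shape[1]
    lhs = G.T @ G + alpha * np.eye(n)
    rhs = G.T @ d + alpha * q_prior
    return np.linalg.solve(lhs, rhs)
```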


2017, Vol. 5 (3), pp. SJ81-SJ90
Author(s): Kainan Wang, Jesse Lomask, Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often tied to seismic data manually, a process that can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We describe a modification of DTW, the blocked dynamic warping (BDW) method, that generates an automatic, optimal well tie honoring geologically consistent velocity constraints and consequently yields more realistic updated velocities than other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval velocity model that retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting the interval velocity changes to coincide with the initial input blocking. We demonstrate the BDW technique on a synthetic example and a field data set.
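
For reference, the dynamic-programming core that BDW builds on looks like the following sketch: a cumulative-cost table over (synthetic, seismic) sample pairs, followed by a backtrace that yields the tie path. BDW's blocking constraint (constant or linear velocity updates per geologic layer) is only noted in a comment here, not implemented.

```python
import numpy as np

def dtw_path(synthetic, seismic):
    """Classic DTW alignment; BDW would additionally restrict the path so the
    implied interval-velocity updates are constant or linear per block."""
    n, m = len(synthetic), len(seismic)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (synthetic[i - 1] - seismic[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrace the optimal alignment from the end of both traces.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]   # sample pairs defining the time-depth tie
```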


2020, Vol. 8 (1), pp. T141-T149
Author(s): Ritesh Kumar Sharma, Satinder Chopra, Larry R. Lines

Multicomponent seismic data offer several advantages for characterizing reservoirs with the use of vertical-component (PP) and mode-converted (PS) data. Joint impedance inversion inverts both of these data sets simultaneously and hence is considered superior to simultaneous impedance inversion. However, the success of joint impedance inversion depends on how accurately the PS data are mapped into the PP time domain. Normally, this is attempted by performing well-to-seismic ties for the PP and PS data sets and matching different horizons picked on the PP and PS data. Although this seems to be a straightforward approach, a few issues are associated with it. One is the lower resolution of the PS data compared with the PP data, which makes it difficult to correlate equivalent reflection events in the two data sets. Even after a few consistent horizons have been tracked, the horizon-matching process introduces artifacts into the PS data when they are mapped into PP time. We evaluate such challenges using a data set from the Western Canadian Sedimentary Basin and develop a novel workflow for addressing them. The value of the workflow is demonstrated by comparing data examples generated with and without its adoption.
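
The registration step at the center of this problem amounts to resampling each PS trace onto a PP time axis using an interval Vp/Vs (gamma) profile, via the standard relation dt_PS = dt_PP (1 + gamma) / 2. Here is a minimal sketch of that basic mapping; the gamma log and sampling are illustrative assumptions, and the paper's workflow refines this registration well beyond it.

```python
import numpy as np

def ps_to_pp_time(ps_trace, dt, gamma_log):
    """Resample a PS-time trace onto the PP-time axis implied by gamma_log,
    where gamma_log holds interval Vp/Vs per PP sample."""
    n = len(ps_trace)
    # PS traveltime corresponding to each PP sample: dt_ps = dt_pp*(1+gamma)/2.
    t_ps = np.cumsum(dt * (1.0 + gamma_log) / 2.0)
    t_native = np.arange(n) * dt              # the PS trace's own time axis
    return np.interp(t_ps, t_native, ps_trace, left=0.0, right=0.0)
```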


Geophysics, 1989, Vol. 54 (2), pp. 181-190
Author(s): Jakob B. U. Haldorsen, Paul A. Farmer

Occasionally, seismic data contain transient noise that ranges from a nuisance to intolerable when several seismic vessels try to collect data simultaneously in an area. The traditional solution has been to allocate time slots to the different acquisition crews; this procedure, although effective, is very expensive. In this paper, a statistical method called the "trimmed mean stack" is evaluated as a tool for reducing the detrimental effects of noise from interfering seismic crews. Synthetic data, as well as field data, are used to illustrate the efficacy of the technique. Although a conventional stack gives a marginally better signal-to-noise ratio (S/N) for data without interference noise, typical usage of the trimmed mean stack reduces the S/N by an amount equivalent to a fold reduction of only about 1 to 2 percent. On the other hand, for a data set containing high-energy transient noise, trimming produces stacked sections without visible high-amplitude contaminating energy, whereas equivalent sections produced with conventional processing techniques would be totally unacceptable. The application of a trimming procedure could therefore significantly reduce data-acquisition costs by allowing several seismic crews to work simultaneously.
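
A minimal sketch of the trimmed mean stack: at each time sample, discard a fraction of the prestack amplitudes from each tail of the distribution, where high-energy transient interference lives, and average the rest. The 10 percent trim fraction below is an illustrative choice, not the paper's recommendation.

```python
import numpy as np
from scipy.stats import trim_mean

def trimmed_stack(gather, trim=0.1):
    """gather: (n_traces, n_samples) NMO-corrected amplitudes for one CMP.
    Returns one stacked trace, trimming `trim` from each tail per sample."""
    return trim_mean(gather, proportiontocut=trim, axis=0)

# Conventional stack for comparison: gather.mean(axis=0).
```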

