Simulating migrated and inverted seismic data by filtering a geologic model

Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. T1-T10 ◽  
Author(s):  
Gerrit Toxopeus ◽  
Jan Thorbecke ◽  
Kees Wapenaar ◽  
Steen Petersen ◽  
Evert Slob ◽  
...  

The simulation of migrated and inverted data is hampered by the high computational cost of generating 3D synthetic data, followed by processes of migration and inversion. For example, simulating the migrated seismic signature of subtle stratigraphic traps demands the expensive exercise of 3D forward modeling, followed by 3D migration of the synthetic seismograms. This computational cost can be overcome using a strategy for simulating migrated and inverted data by filtering a geologic model with 3D spatial-resolution and angle filters, respectively. A key property of the approach is that the geologic model describing a target zone is decoupled from the macrovelocity model used to compute the filters. This enables a target-oriented approach, by which a geologically detailed earth model describing a reservoir can be adjusted without recalculating the filters. Because a spatial-resolution filter combines the results of the modeling and migration operators, the simulated images can be compared directly to a real migration image. We decompose the spatial-resolution filter into two parts and show that applying one of those parts produces output directly comparable to 1D inverted real data. Two-dimensional synthetic examples that include seismic uncertainties demonstrate the usefulness of the approach. Results from a real data example show that horizontal smearing, which the 1D convolution model does not simulate, is essential to understanding the seismic expression of the deformation related to sulfate dissolution and karst collapse.
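The core idea, replacing the modeling-plus-migration chain by a single convolution of the reflectivity model with a spatial-resolution filter, can be sketched as follows. The Gaussian point-spread function here is a stand-in assumption; the actual filters are computed from the modeling and migration operators for a given macrovelocity model.

```python
import numpy as np

def gaussian_psf(nz, nx, sz, sx):
    """Toy spatial-resolution filter: a 2D Gaussian point-spread function
    (assumption; the real filter comes from the macrovelocity model)."""
    z = np.arange(nz) - nz // 2
    x = np.arange(nx) - nx // 2
    Z, X = np.meshgrid(z, x, indexing="ij")
    psf = np.exp(-0.5 * ((Z / sz) ** 2 + (X / sx) ** 2))
    return psf / psf.sum()

def simulate_migrated(reflectivity, psf):
    """Simulated migrated image = reflectivity model convolved with the
    resolution filter, via zero-padded FFT convolution."""
    nz = reflectivity.shape[0] + psf.shape[0] - 1
    nx = reflectivity.shape[1] + psf.shape[1] - 1
    img = np.fft.irfft2(np.fft.rfft2(reflectivity, (nz, nx)) *
                        np.fft.rfft2(psf, (nz, nx)), (nz, nx))
    # trim back to model size, centred on the filter peak
    z0, x0 = psf.shape[0] // 2, psf.shape[1] // 2
    return img[z0:z0 + reflectivity.shape[0], x0:x0 + reflectivity.shape[1]]

# geologic model: a single gently dipping reflector
model = np.zeros((64, 64))
for ix in range(64):
    model[32 + ix // 8, ix] = 1.0
image = simulate_migrated(model, gaussian_psf(21, 21, 2.0, 4.0))
```

Because the filter absorbs both operators, `image` can be compared directly with a real migrated section, and the geologic model can be edited and refiltered without recomputing anything expensive.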

Geophysics ◽  
1984 ◽  
Vol 49 (3) ◽  
pp. 250-264 ◽  
Author(s):  
L. R. Lines ◽  
A. Bourgeois ◽  
J. D. Covey

Traveltimes from an offset vertical seismic profile (VSP) are used to estimate subsurface two‐dimensional dip by applying an iterative least‐squares inverse method. Tests on synthetic data demonstrate that inversion techniques are capable of estimating dips in the vicinity of a wellbore by using the traveltimes of the direct arrivals and the primary reflections. The inversion method involves a “layer stripping” approach in which the dips of the shallow layers are estimated before proceeding to estimate deeper dips. Examples demonstrate that the primary reflections become essential whenever the ratio of source offset to layer depth becomes small. Traveltime inversion also requires careful estimation of layer velocities and proper statics corrections. Aside from these difficulties and the ubiquitous nonuniqueness problem, the VSP traveltime inversion was able to produce a valid earth model for tests on a real data case.
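A minimal sketch of this kind of iterative least-squares traveltime inversion, assuming a single constant-velocity layer over a plane dipping interface, a surface source, a downhole receiver, and the image-source method for reflection traveltimes. The geometry, step sizes, and Gauss-Newton loop are illustrative, not the authors' implementation.

```python
import numpy as np

V = 2000.0  # layer velocity, m/s (assumed known, as the abstract stresses)

def reflection_time(params, xs, zr):
    """Reflection traveltime for interface z = z0 + x*tan(theta), using the
    image-source method: mirror the source across the interface plane.
    Source at (xs, 0) on the surface, receiver at (0, zr) in the well."""
    z0, theta = params
    n = np.array([-np.sin(theta), np.cos(theta)])   # unit normal, n.p = d
    d = z0 * np.cos(theta)
    src = np.stack([xs, np.zeros_like(xs)], axis=-1)
    mirror = src - 2.0 * (src @ n - d)[..., None] * n
    rec = np.array([0.0, zr])
    return np.linalg.norm(mirror - rec, axis=-1) / V

# synthetic "observed" traveltimes from a true model
xs = np.linspace(200.0, 1200.0, 11)     # source offsets
zr = 300.0                              # receiver depth in the well
true = np.array([800.0, np.deg2rad(8.0)])
t_obs = reflection_time(true, xs, zr)

# iterative least squares (Gauss-Newton, finite-difference Jacobian)
m = np.array([600.0, 0.0])              # starting model: flat interface
for _ in range(10):
    r = t_obs - reflection_time(m, xs, zr)
    J = np.empty((xs.size, 2))
    for j, h in enumerate([1.0, 1e-4]):
        dm = np.zeros(2); dm[j] = h
        J[:, j] = (reflection_time(m + dm, xs, zr) - reflection_time(m, xs, zr)) / h
    m += np.linalg.lstsq(J, r, rcond=None)[0]
```

In a layer-stripping scheme, this update would be run for the shallowest layer first, then repeated for deeper layers with the shallow parameters held fixed.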


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. V183-V197 ◽  
Author(s):  
Tim T.Y. Lin ◽  
Felix J. Herrmann

We have solved the estimation of primaries by sparse inversion problem for a seismic record with large near-offset gaps and other contiguous holes in the acquisition grid without relying on explicit reconstruction of the missing data. Eliminating the unknown data as an explicit inversion variable is desirable because it sidesteps possible issues arising from overfitting the primary model to the estimated data. Instead, we have simulated their multiple contributions by augmenting the forward prediction model for the total wavefield with a scattering series that mimics the action of the free surface reflector within the area of the unobserved trace locations. Each term in this scattering series involves convolution of the total predicted wavefield once more with the current estimated Green’s function for a medium without the free surface at these unobserved locations. It is important to note that our method cannot by itself mitigate regular undersampling issues that result in significant aliases when computing the multiple contributions, such as source-receiver sampling differences or crossline spacing issues in 3D acquisition. We have investigated algorithms that handle the nonlinearity in the modeling operator due to the scattering terms, and we also determined that just a few of the terms can be enough to satisfactorily mitigate the effects of near-offset data gaps during the inversion process. Numerical experiments on synthetic data found that the final derived method can significantly outperform explicit data reconstruction for large near-offset gaps, with a similar computational cost and better memory efficiency. We have also found on real data that our scheme outperforms the unmodified primary estimation method that uses an existing Radon-based interpolation of the near-offset gap.
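The geometric-series structure of the free-surface scattering terms can be illustrated with a single-frequency scalar toy model. Scalars stand in here for the full data matrices, and the restriction of the series to unobserved trace locations is omitted; only the convergence of the truncated series toward the closed-form total wavefield is shown.

```python
import numpy as np

# toy scalar model per frequency (assumption: scalars replace data matrices)
nf = 64
w = np.linspace(1.0, 30.0, nf)
g = 0.8 * np.exp(-1j * w * 0.2)   # "Green's function" without the free surface
s = np.ones(nf)                   # source signature
r = -1.0                          # free-surface reflection coefficient

# closed form: total wavefield p = g s + r g p  =>  p = g s / (1 - r g)
p_exact = g * s / (1.0 - r * g)

# truncated scattering series: p_{k} = g s + r g p_{k-1}, starting from g s;
# each term convolves the current total wavefield once more with g
p = g * s
for _ in range(60):
    p = g * s + r * g * p
```

In the paper's method, the analogue of `r * g * p` is evaluated only within the unobserved trace locations, so the missing data never appear as an explicit inversion variable.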


Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. R15-R27 ◽  
Author(s):  
Hassan Khaniani ◽  
John C. Bancroft ◽  
Eric von Lunen

We have studied elastic wave scattering and iterative inversion in the context of the Kirchhoff approximation. The approach is more consistent with the weak-contrast reflectivity functions of Zoeppritz equations as compared to the Born approximation. To reduce the computational cost associated with inversion, we demonstrated the use of amplitude-variation-with-offset (AVO) analysis, prestack time migrations (PSTMs), and the corresponding forward modeling in an iterative scheme. Forward modeling and migration/inversion operators are based on the double-square-root (DSR) equations of PSTM and linearized reflectivity functions. All operators involved in the inversion, including the background model for DSR and AVO, are defined in P-to-P traveltime and are updated at each iteration. Our method is practical for real data applications because all operators of the inversion are known to be applicable for standard methods. We have evaluated the inversion on synthetic and real data using the waveform characteristics of P-to-P and P-to-S data.
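The linearized reflectivity functions referred to here are of Aki-Richards type. A common three-term weak-contrast form can be sketched as follows; this is a generic approximation, not necessarily the exact parameterization used in the paper.

```python
import numpy as np

def aki_richards(vp1, vs1, rho1, vp2, vs2, rho2, theta):
    """Linearized (weak-contrast) P-P reflection coefficient,
    three-term Aki-Richards form; theta in radians (average angle)."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2
    a = 0.5 * (dvp / vp + drho / rho)                      # intercept
    b = 0.5 * dvp / vp - 4 * k * dvs / vs - 2 * k * drho / rho
    c = 0.5 * dvp / vp
    s2 = np.sin(theta) ** 2
    return a + b * s2 + c * (np.tan(theta) ** 2 - s2)

# AVO curve for a weak shale/sand-like contrast (made-up illustrative values)
theta = np.deg2rad(np.linspace(0.0, 30.0, 7))
r = aki_richards(3000.0, 1500.0, 2300.0, 3200.0, 1650.0, 2400.0, theta)
```

At normal incidence the expression reduces to the familiar impedance-contrast reflectivity, which is what ties the AVO analysis to the migrated amplitudes in the iterative scheme.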


Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 494-503 ◽  
Author(s):  
Wenjie Dong

The [Formula: see text] of hydrocarbon‐bearing sediments normally deviates from the [Formula: see text] trend of the background rocks. This causes anomalous reflection amplitude variation with offset (AVO) in the seismic data. The estimation of these AVO responses is inevitably affected by wave-propagation effects and inversion-algorithm limitations, such as thin-bed tuning and migration stretch. A natural question is therefore the minimum [Formula: see text] change required for an anomalous AVO response to be detectable beyond the background tuning and stretching effects. Assuming a Ricker wavelet for the seismic data, this study addresses that question by quantifying the errors in the intercept/slope estimates. Using these results, two detectability conditions are derived. Denoting the background [Formula: see text] by γ and its variation by δγ, the thin-bed parameter (thickness/wavelength) by ξ, the maximum background intercept closest to the AVO anomaly by |A|_max, and the thin-bed intercept value by |A|_thin, the two conditions are [Formula: see text] and [Formula: see text] for detectability against stretching and against tuning plus stretching, respectively. Tests on synthetic data confirm their validity and accuracy. These conditions provide a quantitative guideline for evaluating AVO applicability and effectiveness in seismic exploration. They can eliminate some of the subjectivity in interpreting AVO results in different attribute spaces. To improve AVO detectability, a procedure is suggested for removing the tuning and stretching effects.
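The thin-bed tuning that drives these detectability limits is easy to reproduce with a Ricker wavelet. In the sketch below (the 30 Hz peak frequency and bed thicknesses are arbitrary illustrative choices), a bed near the tuning thickness, roughly the wavelet's peak-to-trough time sqrt(6)/(2*pi*f), boosts the composite amplitude above that of an isolated interface, while a much thinner bed suppresses it.

```python
import numpy as np

def ricker(t, f):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

f = 30.0
t = np.linspace(-0.2, 0.2, 4001)

def thin_bed_peak(dt_bed):
    """Peak amplitude of a thin bed: equal-strength top (+) and base (-)
    reflections dt_bed seconds apart, each carrying the wavelet."""
    return np.abs(ricker(t, f) - ricker(t - dt_bed, f)).max()

tuned = thin_bed_peak(0.013)   # near the ~13 ms tuning thickness for 30 Hz
thin = thin_bed_peak(0.001)    # far below tuning: destructive interference
```

An AVO anomaly must exceed this purely geometric amplitude variation before it can be attributed to a genuine property change, which is exactly what the two detectability conditions quantify.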


Geophysics ◽  
2000 ◽  
Vol 65 (5) ◽  
pp. 1364-1371 ◽  
Author(s):  
Shuki Ronen ◽  
Christopher L. Liner

Conventional processes, such as Kirchhoff dip moveout (DMO) and prestack full migration, are based on independent imaging of subsets of the data before stacking or amplitude variation with offset (AVO) analysis. Least‐squares DMO (LSDMO) and least‐squares migration (LSMig) are a family of developing processing methods based on inversion of reverse DMO and demigration operators. LSDMO and LSMig find the earth model that best fits the data and the a priori assumptions that can be imposed as constraints. Such inversions are more computer intensive, but they have significant advantages over conventional processing when applied to irregularly sampled data. Various conventional processes are approximations of the inversions in LSDMO and LSMig. Often, processing is equivalent to applying the transpose of a matrix that LSDMO/LSMig inverts. Such transpose processing is accurate when the data sampling is adequate. In practice, costly survey design, real‐time coverage quality control, in‐fill acquisition, redundancy editing, and prestack interpolation are used to create a survey geometry such that the transpose is a good approximation of the inverse. Normalized DMO and migration are approximately equivalent to applying the above transpose processing followed by a diagonal correction. However, in most cases the required correction is not actually diagonal. In such cases, LSDMO and LSMig can produce earth models with higher resolution and higher fidelity than normalized DMO and migration. The promise of LSMig and LSDMO is reduced acquisition cost, improved resolution, and reduced acquisition footprint. The computational cost, and more importantly the turnaround time, is a major factor in the commercialization of these methods. With parallel computing, they are now becoming practical.
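The distinction between transpose processing and true inversion can be seen on a small synthetic operator. Below, a toy "demigration" operator blurs a reflectivity model and records it at irregularly distributed traces (all sizes and the Gaussian blur are illustrative assumptions); the diagonally corrected transpose plays the role of normalized migration, while `lstsq` plays the role of LSMig.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# toy "demigration" operator L: blur the reflectivity, then keep only an
# irregular subset of traces (imperfect acquisition geometry)
lag = np.arange(n)[:, None] - np.arange(n)[None, :]
blur = np.exp(-0.5 * (lag / 2.0) ** 2)
keep = np.sort(rng.choice(n, size=35, replace=False))
L = blur[keep, :]

m_true = np.zeros(n)
m_true[[15, 30, 44]] = [1.0, -0.7, 0.5]   # three reflectors
d = L @ m_true                            # observed data

# "transpose processing" plus the diagonal correction of normalized migration
m_adj = (L.T @ d) / np.diag(L.T @ L)

# least-squares migration: invert L rather than apply its transpose
m_ls, *_ = np.linalg.lstsq(L, d, rcond=None)
```

With adequate regular sampling the two answers nearly coincide; with irregular gaps the required correction is no longer diagonal, and only the least-squares solution actually fits the data.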


Geophysics ◽  
2003 ◽  
Vol 68 (2) ◽  
pp. 641-655 ◽  
Author(s):  
Anders Sollid ◽  
Bjørn Ursin

Scattering‐angle migration maps seismic prestack data directly into angle‐dependent reflectivity at the image point. The method automatically accounts for triplicated rayfields and is easily extended to handle anisotropy. We specify scattering‐angle migration integrals for PP and PS ocean‐bottom seismic (OBS) data in 3D and 2.5D elastic media exhibiting weak contrasts and weak anisotropy. The derivation is based on the anisotropic elastic Born‐Kirchhoff‐Helmholtz surface scattering integral. The true‐amplitude weights are chosen such that the amplitude versus angle (AVA) response of the angle gather is equal to the Born scattering coefficient or, alternatively, the linearized reflection coefficient. We implement scattering‐angle migration by shooting a fan of rays from the subsurface point to the acquisition surface, followed by integrating the phase‐ and amplitude‐corrected seismic data over the migration dip at the image point while keeping the scattering‐angle fixed. A dense summation over migration dip only adds a minor additional cost and enhances the coherent signal in the angle gathers. The 2.5D scattering‐angle migration is demonstrated on synthetic data and on real PP and PS data from the North Sea. In the real data example we use a transversely isotropic (TI) background model to obtain depth‐consistent PP and PS images. The aim of the succeeding AVA analysis is to predict the fluid type in the reservoir sand. Specifically, the PS stack maps the contrasts in lithology while being insensitive to the fluid fill. The PP large‐angle stack maps the oil‐filled sand but shows no response in the brine‐filled zones. A comparison to common‐offset Kirchhoff migration demonstrates that, for the same computational cost, scattering‐angle migration provides common image gathers with less noise and fewer artifacts.


2019 ◽  
Vol 214 ◽  
pp. 06003 ◽  
Author(s):  
Kamil Deja ◽  
Tomasz Trzciński ◽  
Łukasz Graczykowski

Simulating the detector response is a key component of every high-energy physics experiment. The methods currently used for this purpose provide high-fidelity results, but this precision comes at the price of a high computational cost. In this work, we introduce our research aiming at fast generation of the possible responses of detector clusters to particle collisions. We present results for the real-life example of the Time Projection Chamber in the ALICE experiment at CERN. The essential component of our solution is a generative model that allows us to simulate synthetic data points that bear high similarity to the real data. Leveraging recent advancements in machine learning, we propose to use conditional Generative Adversarial Networks. We present a method to simulate the data samples that could be recorded in the detector, based on the initial information about the particles. We propose and evaluate several models based on convolutional or recursive networks. The main advantage of the proposed method is a significant speed-up in execution time, reaching up to a factor of 10² with respect to the currently used simulation tool. This speed-up, however, comes at the price of lower simulation quality. We adapt available methods and show their quantitative and qualitative limitations.
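A conditional GAN couples a generator and a discriminator that both see the conditioning variables (here, the particle parameters). The numpy sketch below shows only the shape of the objective: tiny dense layers and random data stand in for the convolutional or recursive networks and real detector responses of the paper, and no training step is performed.

```python
import numpy as np

rng = np.random.default_rng(1)

def dense(x, W, b):
    return x @ W + b

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_noise, n_cond, n_out = 8, 3, 16
Wg = rng.normal(0, 0.1, (n_noise + n_cond, n_out)); bg = np.zeros(n_out)
Wd = rng.normal(0, 0.1, (n_out + n_cond, 1));       bd = np.zeros(1)

def generate(z, c):
    """Generator: synthetic detector response conditioned on particle info c."""
    return np.tanh(dense(np.concatenate([z, c], axis=1), Wg, bg))

def discriminate(x, c):
    """Discriminator: probability that the (response, condition) pair is real."""
    return sigmoid(dense(np.concatenate([x, c], axis=1), Wd, bd))

batch = 32
cond = rng.normal(size=(batch, n_cond))       # particle kinematics (toy)
x_real = rng.normal(size=(batch, n_out))      # recorded cluster response (toy)
x_fake = generate(rng.normal(size=(batch, n_noise)), cond)

# standard (non-saturating) conditional GAN losses
d_loss = -np.mean(np.log(discriminate(x_real, cond)) +
                  np.log(1.0 - discriminate(x_fake, cond)))
g_loss = -np.mean(np.log(discriminate(x_fake, cond)))
```

Once trained, only the generator is kept: one forward pass replaces the expensive detector simulation, which is the source of the reported speed-up.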


2020 ◽  
Author(s):  
Brydon Lowney ◽  
Ivan Lokmer ◽  
Gareth Shane O'Brien ◽  
Christopher Bean

<p>Diffractions are a useful aspect of the seismic wavefield and are often underutilised. By separating the diffractions from the rest of the wavefield, they can be used for various applications such as velocity analysis, structural imaging, and wavefront tomography. However, separating the diffractions is a challenging task due to the comparatively low amplitudes of diffractions as well as the overlap between reflection and diffraction energy. Whilst there are existing analytical methods for separation, these act to remove reflections, leaving a volume which contains diffractions and noise. On top of this, analytical separation techniques can be computationally costly and require manual parameterisation. To alleviate these issues, a deep neural network has been trained to automatically identify and separate diffractions from reflections and noise on pre-migration data.</p><p>Here, a Generative Adversarial Network (GAN) has been trained for the automated separation. This is a type of deep neural network architecture which contains two neural networks that compete against one another. One neural network acts as a generator, creating new data which appears visually similar to the real data, while a second neural network acts as a discriminator, trying to identify whether the given data is real or fake. As the generator improves, so too does the discriminator, giving a deeper understanding of the data. To avoid overfitting to a specific dataset, and to improve the cross-data applicability of the network, data from several different seismic datasets from geologically distinct locations have been used in training. When comparing a network trained on a single dataset to one trained on several datasets, it is seen that providing additional data improves the separation on both the original and new datasets.</p><p>The automatic separation technique is then compared with a conventional, analytical separation technique: plane-wave destruction (PWD).
The computational cost of the GAN separation is far lower than that of PWD, performing a separation in minutes on a 3-D dataset rather than hours. Although in some complex areas the GAN separation is of higher quality than the PWD separation, as it does not rely on the dip, there are also areas where PWD outperforms the GAN. The GAN may be enhanced by adding more training data and by improving the initial separation used to create the training data, which is based around PWD and is thus imperfect, potentially introducing bias into the network. One possibility is to train the GAN entirely on synthetic data, which allows for a perfect separation because the diffraction points are known; however, the synthetic data must be of sufficient volume for training and of sufficient quality for real-data applicability.</p>
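Plane-wave destruction itself is easy to illustrate in a toy setting: predict each trace from its neighbour shifted along the local dip and subtract. A plane reflection with that dip cancels exactly, while a diffraction, whose moveout changes from trace to trace, survives. The integer-dip `np.roll` shift and the spike events below are simplifications; real PWD estimates fractional dips with local filters.

```python
import numpy as np

nt, nx = 100, 40
refl = np.zeros((nt, nx))
diff = np.zeros((nt, nx))
for ix in range(nx):
    refl[20 + ix, ix] = 1.0                       # plane event, dip 1 sample/trace
    diff[60 + ((ix - 20) ** 2) // 15, ix] = 1.0   # crude hyperbola-like diffraction

def pwd(data, p):
    """Plane-wave destruction for an integer dip p (samples per trace):
    predict each trace by shifting its left neighbour along the dip,
    then subtract the prediction."""
    res = data.copy()
    res[:, 1:] -= np.roll(data, p, axis=0)[:, :-1]
    return res

res_refl = pwd(refl, 1)   # reflection is destroyed (beyond the first trace)
res_diff = pwd(diff, 1)   # diffraction leaves strong residuals near its apex
```

The residual volume contains diffractions plus noise, which is exactly the kind of output the GAN is trained to improve upon.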


Geophysics ◽  
2003 ◽  
Vol 68 (6) ◽  
pp. 1984-1999 ◽  
Author(s):  
M. M. Saggaf ◽  
M. Nafi Toksöz ◽  
M. I. Marhoon

We present an approach based on competitive neural networks for the classification and identification of reservoir facies from seismic data. This approach can be adapted to perform either classification of the seismic facies based entirely on the characteristics of the seismic response, without requiring the use of any well information, or automatic identification and labeling of the facies where well information is available. The former is of prime use for oil prospecting in new regions, where few or no wells have been drilled, whereas the latter is most useful in development fields, where the information gained at the wells can be conveniently extended to the interwell regions. Cross‐validation tests on synthetic and real seismic data demonstrated that the method can be an effective means of mapping the reservoir heterogeneity. For synthetic data, the output of the method showed considerable agreement with the actual geologic model used to generate the seismic data; for the real data application, the predicted facies accurately matched those observed at the wells. Moreover, the resulting map corroborates our existing understanding of the reservoir and shows substantial similarity to the low‐frequency geologic model constructed by interpolating the well information, while adding significant detail and enhanced resolution to that model.
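Competitive networks are trained winner-take-all: for each presented input, only the closest prototype (here standing in for a seismic-facies template) is moved toward it. The sketch below uses synthetic waveforms and a plain Euclidean winner rule; the sizes, frequencies, and two-facies setup are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def competitive_train(X, n_classes, lr=0.1, epochs=20):
    """Winner-take-all competitive learning: only the winning prototype
    moves toward each presented trace."""
    W = X[rng.choice(len(X), n_classes, replace=False)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            k = np.argmin(np.linalg.norm(W - x, axis=1))   # winning unit
            W[k] += lr * (x - W[k])
    return W

# synthetic "seismic responses": two facies with distinct waveforms
t = np.linspace(0, 1, 32)
facies_a = np.sin(2 * np.pi * 4 * t)
facies_b = np.sin(2 * np.pi * 9 * t)
X = np.vstack([facies_a + 0.1 * rng.normal(size=(30, 32)),
               facies_b + 0.1 * rng.normal(size=(30, 32))])

W = competitive_train(X, 2)
labels = np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X])
```

In classification mode the cluster labels are used directly; in identification mode each prototype would additionally be labeled with the facies observed at nearby wells.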


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, so that the image must be rotated before the text can be read. This type of text is found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the orientation of the text before recognizing it. The article proposes a deep neural network for determining the text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network functioning on real data, are presented.
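Synthetic training pairs for such a task are cheap to produce, because a 180-degree rotation is simply a flip of both image axes. In the sketch below, random "text line" images and a trivial ink-mass baseline stand in for the rendered covers and the proposed CNN; both are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)

def rotate180(img):
    """A 180-degree rotation is a flip of both image axes."""
    return img[::-1, ::-1]

def make_synthetic(n, h=16, w=48):
    """Toy stand-in for rendered text images: a band of 'ink' near the top
    for upright text (label 0); rotated copies get label 1."""
    X, y = [], []
    for _ in range(n):
        img = np.zeros((h, w))
        img[2:6, :] = rng.uniform(0.5, 1.0, (4, w))   # 'text line' near top
        if rng.random() < 0.5:
            img, label = rotate180(img), 1            # upside-down
        else:
            label = 0                                 # upright
        X.append(img); y.append(label)
    return np.array(X), np.array(y)

def predict(img):
    """Trivial baseline in place of the CNN: compare ink mass in the
    top and bottom halves of the image."""
    h = img.shape[0] // 2
    return int(img[h:].sum() > img[:h].sum())

X, y = make_synthetic(200)
acc = np.mean([predict(im) == lab for im, lab in zip(X, y)])
```

A real CNN is needed because actual cover text does not obey such a simple top-heavy prior, but the data-generation pattern (render, randomly rotate, label) carries over directly.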

