Nonhyperbolic normal moveout stretch correction with deep learning automation

Geophysics ◽  
2021 ◽  
pp. 1-60
Author(s):  
Mohammad Mahdi Abedi ◽  
David Pardo

Normal moveout (NMO) correction is a fundamental step in seismic data processing. It consists of mapping seismic data from recorded traveltimes to corresponding zero-offset times. This process produces wavelet stretching as an undesired byproduct. We address the NMO stretching problem with two methods: 1) an exact stretch-free NMO correction that prevents the stretching of primary reflections, and 2) an approximate post-NMO stretch correction. Our stretch-free NMO produces parallel moveout trajectories for primary reflections. Our post-NMO stretch correction calculates the moveout of stretched wavelets as a function of offset. Both methods are based on the generalized moveout approximation and are suitable for application in complex anisotropic or heterogeneous environments. We use new moveout equations and modify the original parameter functions to be constant over the primary reflections, then interpolate the seismogram amplitudes at the calculated traveltimes. For fast and automatic modification of the parameter functions, we use deep learning. We design a deep neural network (DNN) using convolutional layers and residual blocks. To train the DNN, we generate a set of 40,000 synthetic NMO-corrected common midpoint gathers and the corresponding desired DNN outputs. The data set is generated using different velocity profiles, wavelets, and offset vectors, and includes multiples, ground roll, and band-limited random noise. The simplicity of the DNN task, a 1D identification of primary reflections, improves generalization in practice. Using the trained DNN, we show successful applications of our stretch-correction method on synthetic and several real data sets.
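
The mapping step described here, interpolating recorded amplitudes at traveltimes computed from a moveout equation, can be illustrated with a conventional hyperbolic NMO correction. The following sketch is a minimal stand-in, assuming a simple hyperbolic moveout rather than the generalized moveout approximation used in the paper; the gather shapes and velocity profile are hypothetical.

```python
# Minimal hyperbolic NMO correction by amplitude interpolation (illustrative only).
import numpy as np

def nmo_correct(gather, offsets, dt, v_nmo):
    """gather: (n_samples, n_traces) CMP gather; v_nmo: NMO velocity per sample (m/s)."""
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt                   # zero-offset times (s)
    out = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # hyperbolic moveout: t(x) = sqrt(t0^2 + x^2 / v^2)
        t_x = np.sqrt(t0**2 + (x / v_nmo)**2)
        # interpolate the recorded trace at the computed traveltimes
        out[:, j] = np.interp(t_x, t0, gather[:, j], left=0.0, right=0.0)
    return out

# toy usage: one reflector at t0 = 0.4 s in a 5-trace gather
dt, nt = 0.004, 301
offsets = np.linspace(0, 2000, 5)
v = np.full(nt, 2000.0)                              # constant 2000 m/s (assumption)
gather = np.zeros((nt, len(offsets)))
t0_ref = 0.4
for j, x in enumerate(offsets):
    idx = int(round(np.sqrt(t0_ref**2 + (x / 2000.0)**2) / dt))
    gather[idx, j] = 1.0
flat = nmo_correct(gather, offsets, dt, v)           # event aligns near t0 = 0.4 s
```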

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing a Hessian, so an efficient approximation is introduced that computes only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, though it is dip limited, in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
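
The weighted, damped least-squares machinery named above can be written down compactly. The sketch below assumes a generic linear operator A as a stand-in for the wavefield-extrapolation operator and a diagonal weight matrix W for the irregular array; it illustrates the normal-equations solve whose Hessian the paper approximates by keeping only a limited number of diagonals.

```python
# Weighted, damped least squares: m = (A^H W^T W A + eps^2 I)^{-1} A^H W^T W d.
# A, W, and the sizes are illustrative assumptions, not Ferguson's operators.
import numpy as np

rng = np.random.default_rng(0)
n_model, n_data = 50, 30                 # underdetermined, as in regularization
A = rng.standard_normal((n_data, n_model)) + 1j * rng.standard_normal((n_data, n_model))
m_true = rng.standard_normal(n_model)
d = A @ m_true + 0.01 * rng.standard_normal(n_data)

w = rng.uniform(0.5, 1.0, n_data)        # per-trace weights (irregular sampling)
W = np.diag(w)
eps = 0.1                                # damping trades stability for resolution

# full Hessian; the paper's speedup would keep only a few of its diagonals
hessian = A.conj().T @ W.T @ W @ A + eps**2 * np.eye(n_model)
rhs = A.conj().T @ W.T @ W @ d
m_est = np.linalg.solve(hessian, rhs)    # regularized/datumed model estimate
```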


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 944 ◽  
Author(s):  
Heesin Lee ◽  
Joonwhoan Lee

X-ray scattering significantly limits image quality. Conventional scatter-reduction strategies based on physical equipment or measurements inevitably increase the dose needed to improve image quality, and scatter reduction based on computational algorithms can take a large amount of time. We propose a deep learning-based scatter correction method that adopts a convolutional neural network (CNN) to restore degraded images. Because it is hard to obtain real data from an X-ray imaging system for training the network, Monte Carlo (MC) simulation was performed to generate the training data. To simulate X-ray images of a human chest, a cone-beam CT (CBCT) system was designed and modeled as an example. Pairs of simulated images, corresponding to scattered and scatter-free images, respectively, were then obtained from the model at different doses. The scatter components, calculated by taking the differences of the pairs, were used as targets to train the weight parameters of the CNN. Compared with an MC-based iterative method, the proposed method shows better results in projected images, with as much as a 58.5% reduction in root-mean-square error (RMSE) and average increases of 18.1% in peak signal-to-noise ratio (PSNR) and 3.4% in structural similarity index measure (SSIM).
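
The training setup, scatter components as regression targets for a CNN, can be sketched as residual learning. The tiny network, tensor shapes, and optimizer settings below are illustrative assumptions, not the authors' architecture; random tensors stand in for the Monte Carlo simulated image pairs.

```python
# Residual learning for scatter correction: predict the scatter map, subtract it.
import torch
import torch.nn as nn

class ScatterNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),          # predicted scatter component
        )
    def forward(self, x):
        return self.net(x)

# toy training pair standing in for the MC-simulated images
scattered = torch.rand(4, 1, 64, 64)                 # simulated with scatter
scatter_free = 0.8 * scattered                       # simulated without scatter
target = scattered - scatter_free                    # scatter component = target

model = ScatterNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                                  # a few illustrative steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(scattered), target)
    loss.backward()
    opt.step()

corrected = scattered - model(scattered)             # scatter-corrected image
```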


2019 ◽  
Vol 38 (11) ◽  
pp. 872a1-872a9 ◽  
Author(s):  
Mauricio Araya-Polo ◽  
Stuart Farris ◽  
Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail, and gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used as input by DL-driven seismic tomography. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
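
The augmentation idea, training a GAN so its generator can synthesize additional velocity-model-like training examples, can be sketched as follows. Network sizes, the latent dimension, and the 32x32 "velocity model" shape are illustrative assumptions, and random tensors stand in for the analog velocity models.

```python
# Minimal GAN loop for augmenting a velocity-model training set (illustrative).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32 * 32), nn.Sigmoid())
D = nn.Sequential(nn.Linear(32 * 32, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real_models = torch.rand(16, 32 * 32)        # stand-in for analog velocity models

for _ in range(5):                           # a few illustrative steps
    z = torch.randn(16, 64)
    fake = G(z)
    # discriminator: real -> 1, fake -> 0
    d_loss = (bce(D(real_models), torch.ones(16, 1))
              + bce(D(fake.detach()), torch.zeros(16, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: fool the discriminator
    g_loss = bce(D(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# synthetic labeled models to enlarge the tomography training set
augmented = G(torch.randn(100, 64)).detach().reshape(-1, 1, 32, 32)
```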


Geophysics ◽  
2012 ◽  
Vol 77 (1) ◽  
pp. A5-A8 ◽  
Author(s):  
David Bonar ◽  
Mauricio Sacchi

The nonlocal means algorithm is a noise attenuation filter that was originally developed for the purposes of image denoising. This algorithm denoises each sample or pixel within an image by utilizing other similar samples or pixels regardless of their spatial proximity, making the process nonlocal. Such a technique places no assumptions on the data except that structures within the data contain a degree of redundancy. Because this is generally true for reflection seismic data, we propose to adopt the nonlocal means algorithm to attenuate random noise in seismic data. Tests with synthetic and real data sets demonstrate that the nonlocal means algorithm does not smear seismic energy across sharp discontinuities or curved events when compared to seismic denoising methods such as f-x deconvolution.
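
The core of nonlocal means is easy to state: each sample is replaced by a weighted average of all samples whose surrounding patches look similar, regardless of spatial proximity. Below is a minimal 1D sketch; the patch size, the exhaustive search window, and the smoothing parameter h are illustrative choices.

```python
# Minimal 1D nonlocal means: average samples weighted by patch similarity.
import numpy as np

def nlm_1d(trace, half_patch=3, h=0.1):
    n = trace.size
    padded = np.pad(trace, half_patch, mode="reflect")
    # all patches, shape (n, 2*half_patch + 1)
    patches = np.stack([padded[i:i + 2 * half_patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i])**2, axis=1)  # patch distances
        w = np.exp(-d2 / h**2)                           # similarity weights
        out[i] = np.sum(w * trace) / np.sum(w)           # nonlocal average
    return out

# toy usage: a noisy step keeps its sharp edge after denoising
x = np.concatenate([np.zeros(100), np.ones(100)])
noisy = x + 0.2 * np.random.default_rng(1).standard_normal(200)
denoised = nlm_1d(noisy)
```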


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by event. We also extend our approach to subtract noise whose approximation requires several templates. By itself, the method can correct only small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
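
The coefficient-domain adaptation described, scaling the amplitude and rotating the phase of each template coefficient toward the data's coefficients before subtracting, can be illustrated with a stand-in transform. The sketch below uses an FFT in place of the complex-valued curvelet transform (which is not available in standard libraries), and the clipping bounds that keep the adaptation gentle are illustrative assumptions.

```python
# Coefficient-domain template adaptation, with the FFT standing in for the CCT.
import numpy as np

rng = np.random.default_rng(2)
n = 512
noise = np.sin(2 * np.pi * 30 * np.arange(n) / n)    # "actual" noise in the data
signal = 0.1 * rng.standard_normal(n)
data = signal + noise
template = 0.8 * np.roll(noise, 3)                   # mis-scaled, misaligned template

D, T = np.fft.rfft(data), np.fft.rfft(template)
# per-coefficient amplitude scaling and phase rotation, both clipped so the
# template is only gently adapted (large errors are left to LS adaptation)
amp = np.clip(np.abs(D) / (np.abs(T) + 1e-12), 0.5, 2.0)
phase = np.clip(np.angle(D) - np.angle(T), -np.pi / 4, np.pi / 4)
T_adapted = T * amp * np.exp(1j * phase)

estimate = data - np.fft.irfft(T_adapted, n)         # adapted subtraction
```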


Geophysics ◽  
1992 ◽  
Vol 57 (12) ◽  
pp. 1623-1632 ◽  
Author(s):  
Richard E. Duren ◽  
Stan V. Morris

Null steering refers to the removal (or zeroing) of interferences at specified dips by creating receiving patterns with nulls that are aligned on the interferences. This type of beamforming is more effective than forming a simple crossline array and can be applied to both multistreamer and swath data for reducing out-of-plane interferences (sideswipe, boat interference, etc.) that corrupt two-dimensional (2-D) data (the desired signal). Many beamforming techniques lead to signal cancellation when the interferences are correlated with the desired signal. However, a beamforming technique has been developed that is effective in the presence of signal-correlated interferences. The technique can be effectively extended to prestack and poststack seismic data. The number of interferences and their dips are identified by a visual examination of the plotted data. This information can be used to design filters that are applied to the total data set. The resulting 2-D data set is free from the crossline interferences, with the inline 2-D data remaining unaltered. Model and real data comparisons between null steering and simple crossline array summation show that: (1) null steering significantly attenuates crossline interference, and (2) 2-D inline data, masked by sideswipe, can be revealed once sideswipe is attenuated by null steering. The real data examples show the identification and effective attenuation of interferences that could easily be interpreted as inline 2-D data: (1) an apparent steeply dipping event, and (2) an apparent flat "bright spot."
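
Null-steering weights for a simple uniform array can be computed by constraining the response to unity at the signal dip and to exactly zero at each interference dip, via w = C (C^H C)^{-1} e1. The array geometry, frequency, and slownesses below are illustrative assumptions, not the paper's filter design.

```python
# Null-steering beamformer for a uniform crossline array (illustrative).
import numpy as np

n_sensors = 8
x = np.arange(n_sensors) * 25.0                  # receiver positions, 25 m apart
freq = 20.0                                      # Hz

def steering(slowness):
    # plane-wave phase delays across the array for apparent slowness (s/m)
    return np.exp(-2j * np.pi * freq * slowness * x)

s_signal = 0.0                                   # inline signal: zero crossline dip
s_interf = [2e-4, -3e-4]                         # sideswipe apparent slownesses

# columns: signal steering vector first, then one per interference
C = np.column_stack([steering(s_signal)] + [steering(s) for s in s_interf])
e1 = np.zeros(C.shape[1]); e1[0] = 1.0           # pass signal, null the rest
w = C @ np.linalg.solve(C.conj().T @ C, e1)      # constrained weights

print(abs(w.conj() @ steering(s_signal)))        # ~1: signal passes unaltered
print(abs(w.conj() @ steering(2e-4)))            # ~0: interference nulled
```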


Author(s):  
Jian Zhang ◽  
Jingye Li ◽  
Xiaohong Chen ◽  
Yuanqiang Li ◽  
Guangtan Huang ◽  
...  

Summary Seismic inversion is one of the most commonly used methods in the oil and gas industry for reservoir characterization from observed seismic data. Deep learning (DL) is emerging as a data-driven approach that can effectively solve the inverse problem. However, existing deep learning-based methods for seismic inversion use only seismic data as input, which often leads to poor stability of the inversion results. In addition, training a robust network has always been challenging because real surveys provide limited labeled data pairs. To partially overcome these issues, we develop a neural network framework with an a priori initial model constraint to perform seismic inversion. Our network takes two parts as one input for training: the seismic data and the subsurface background model. The label for each input is the actual model. The proposed method follows a log-to-log strategy. The training dataset is first generated by forward modeling. The network is then pre-trained on this synthetic training dataset and validated on synthetic data not used in the training step. After obtaining the pre-trained network, we introduce a transfer learning strategy that fine-tunes the pre-trained network using labeled data pairs from a real survey to obtain better inversion results there. The validity of the proposed framework is demonstrated on synthetic 2D data, including both post-stack and pre-stack examples, as well as a real 3D post-stack seismic data set from the western Canadian sedimentary basin.
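
The two-part input described, seismic data alongside a background model, amounts to a multi-channel network with a transfer-learning stage. The sketch below is a minimal stand-in with hypothetical layer sizes and random tensors in place of forward-modeled and real-survey pairs; it is not the authors' architecture.

```python
# Two-input (seismic + background model) inversion network with fine-tuning.
import torch
import torch.nn as nn

class InversionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # channel 0 = seismic trace, channel 1 = background model
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, 5, padding=2),
        )
    def forward(self, seismic, background):
        return self.net(torch.stack([seismic, background], dim=1)).squeeze(1)

model = InversionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# pre-training pairs from forward modeling (random stand-ins here)
seis, bg, true_model = torch.rand(8, 128), torch.rand(8, 128), torch.rand(8, 128)
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(seis, bg), true_model)
    loss.backward(); opt.step()

# transfer learning: fine-tune on a few labeled real-survey pairs
opt_ft = torch.optim.Adam(model.parameters(), lr=1e-4)   # smaller learning rate
real_seis, real_bg, real_logs = torch.rand(2, 128), torch.rand(2, 128), torch.rand(2, 128)
for _ in range(5):
    opt_ft.zero_grad()
    loss = nn.functional.mse_loss(model(real_seis, real_bg), real_logs)
    loss.backward(); opt_ft.step()
```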


2022 ◽  
Vol 14 (2) ◽  
pp. 263
Author(s):  
Haixia Zhao ◽  
Tingting Bai ◽  
Zhiqiang Wang

Seismic field data are usually contaminated by random or complex noise, which seriously degrades data quality and contaminates seismic imaging and interpretation. Improving the signal-to-noise ratio (SNR) of seismic data has always been a key step in seismic data processing. Deep learning approaches have been successfully applied to suppress seismic random noise. Training examples are essential in deep learning methods, especially for geophysical problems, where complete training data are difficult to acquire because of high acquisition costs. In this work, we propose a deep learning method pre-trained on natural images to suppress seismic random noise, drawing on the idea of transfer learning. Our framework contains a pre-trained and a post-trained network: the former is trained on natural images to obtain preliminary denoising results, while the latter is trained on a small number of seismic images by semi-supervised learning to fine-tune the denoising and enhance the continuity of geological structures. Results on four types of synthetic seismic data and six field data sets demonstrate that our network performs well in seismic random noise suppression in terms of both quantitative metrics and visual appearance.
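
The two-stage scheme, pre-train on abundant natural images and then fine-tune on scarce seismic images, can be sketched as follows. The three-layer denoiser is hypothetical, the paper's semi-supervised fine-tuning is simplified to supervised MSE, and freezing the first block is one illustrative way to preserve the natural-image features.

```python
# Transfer learning for denoising: pre-train on natural images, fine-tune on seismic.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),       # generic features (frozen later)
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# stage 1: pre-train on abundant natural-image patches
clean_nat = torch.rand(16, 1, 32, 32)
noisy_nat = clean_nat + 0.1 * torch.randn_like(clean_nat)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    nn.functional.mse_loss(denoiser(noisy_nat), clean_nat).backward()
    opt.step()

# stage 2: freeze the first conv layer, fine-tune on scarce seismic patches
for p in denoiser[0].parameters():
    p.requires_grad = False
clean_seis = torch.rand(4, 1, 32, 32)                # small labeled seismic set
noisy_seis = clean_seis + 0.1 * torch.randn_like(clean_seis)
opt_ft = torch.optim.Adam(
    (p for p in denoiser.parameters() if p.requires_grad), lr=1e-4)
for _ in range(5):
    opt_ft.zero_grad()
    nn.functional.mse_loss(denoiser(noisy_seis), clean_seis).backward()
    opt_ft.step()
```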


Geophysics ◽  
2021 ◽  
pp. 1-67
Author(s):  
Hossein Jodeiri Akbari Fam ◽  
Mostafa Naghizadeh ◽  
Oz Yilmaz

Two-dimensional seismic surveys are often conducted along crooked-line traverses due to the inaccessibility of rugged terrains, logistical and environmental restrictions, and budget limitations. The crookedness of line traverses, irregular topography, and complex subsurface geology with steeply dipping and curved interfaces can adversely affect the signal-to-noise ratio of the data. The crooked-line geometry violates the straight-line survey assumption that is a basic principle behind the 2D multifocusing (MF) method and leads to crossline spread of midpoints. Additionally, the crooked-line geometry can give rise to potential pitfalls and artifacts, thus leading to difficulties in imaging and velocity-depth model estimation. We develop a novel multifocusing algorithm for crooked-line seismic data and revise the traveltime equation accordingly to achieve better signal alignment before stacking. Specifically, we present a 2.5D multifocusing reflection traveltime equation that explicitly takes into account the midpoint dispersion and cross-dip effects. The new formulation corrects for normal, inline, and crossline dip moveouts simultaneously, which is significantly more accurate than removing these effects sequentially; applying NMO, DMO, and CDMO separately tends to produce significant errors, especially at large offsets. The 2.5D multifocusing method runs automatically via a coherence-based global optimization search on the data. We investigated the accuracy of the new formulation by testing it on different synthetic models and a real seismic data set. Applying the proposed approach to the real data produced a high-resolution seismic image with significantly better quality than the conventional method. Numerical tests show that the new formula can accurately focus the primary reflections at their correct locations, remove anomalous dip-dependent velocities, and extract true dips from seismic data for structural interpretation. The proposed method efficiently projects and extracts valuable 3D structural information when applied to crooked-line seismic surveys.
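
The role of the cross-dip term can be illustrated with a toy traveltime that adds a linear crossline-slowness shift to hyperbolic NMO. This t(x, y) is an assumption for illustration only; the paper's 2.5D multifocusing traveltime equation is not reproduced in the abstract.

```python
# Toy traveltime showing residual moveout from crossline midpoint scatter.
import numpy as np

def traveltime(t0, offset, y_mid, v_nmo, p_cross):
    """t0: zero-offset time (s); offset: source-receiver offset (m);
    y_mid: crossline midpoint deviation from the projected line (m);
    v_nmo: NMO velocity (m/s); p_cross: crossline slowness (s/m)."""
    return np.sqrt(t0**2 + (offset / v_nmo)**2) + p_cross * y_mid

t0, v = 1.0, 2500.0
offsets = np.linspace(0, 3000, 7)
y_scatter = np.array([0, 40, -25, 60, -10, 80, -50])    # crooked-line midpoints

flat_line = traveltime(t0, offsets, 0.0, v, 5e-4)        # straight-line survey
crooked = traveltime(t0, offsets, y_scatter, v, 5e-4)    # with cross-dip shifts
print(crooked - flat_line)   # residual moveout left if cross-dip is ignored
```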

