Can learning from natural image denoising be used for seismic data interpolation?

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA115-WA136 ◽  
Author(s):  
Hao Zhang ◽  
Xiuyan Yang ◽  
Jianwei Ma

We have developed an interpolation method for seismic data based on a denoising convolutional neural network (CNN). It provides a simple and efficient way to address the scarcity of the geophysical training labels often required by deep learning methods. The new method consists of two steps: (1) training a set of CNN denoisers to learn denoising from noisy-clean natural image pairs and (2) integrating the trained CNN denoisers into the projection onto convex sets (POCS) framework to perform seismic data interpolation. We call it the CNN-POCS method. This method relaxes the demand, common in end-to-end deep learning for seismic data interpolation, that training and test data share similar features. Additionally, the adopted method is flexible and applicable to different types of missing traces because the missing or downsampling locations are not involved in the training step; thus, it is plug-and-play in nature. These properties indicate the high generalizability of the proposed method and a reduced need for problem-specific training. Preliminary results on synthetic and field data show promising interpolation performance for the adopted CNN-POCS method in terms of signal-to-noise ratio, dealiasing, and weak-feature reconstruction, in comparison with traditional [Formula: see text]-[Formula: see text] prediction filtering, curvelet transform, and block-matching 3D filtering methods.
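The plug-and-play POCS loop the abstract describes can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: a simple local-averaging denoiser replaces the trained CNN denoisers, and all function names are ours.

```python
import numpy as np

def pocs_interpolate(observed, mask, denoise, n_iter=50):
    """Plug-and-play POCS: alternate a denoising step with re-insertion
    of the observed traces (the data-consistency projection)."""
    x = observed.copy()
    for _ in range(n_iter):
        x = denoise(x)                        # regularization (a CNN denoiser in the paper)
        x = mask * observed + (1 - mask) * x  # keep the known traces unchanged
    return x

def smooth(x):
    """Stand-in denoiser: 5-point local averaging instead of a trained CNN."""
    pad = np.pad(x, 1, mode="edge")
    return (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:] + x) / 5.0

# Toy gather: a smooth wavefield with roughly 30% of its traces zeroed out.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 3 * np.pi, 64))[:, None] * np.ones((1, 32))
mask = (rng.random(32) > 0.3).astype(float)[None, :]  # 1 = observed trace
recovered = pocs_interpolate(clean * mask, mask, smooth)
```

Because the missing-trace mask enters only in the projection step, any denoiser can be dropped in without retraining, which is the plug-and-play property the abstract emphasizes.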

2020 ◽  
Vol 172 ◽  
pp. 103894
Author(s):  
Hua Zhang ◽  
Hengqi Zhang ◽  
Junhu Zhang ◽  
Yaju Hao ◽  
Benfeng Wang

Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. R989-R1001 ◽  
Author(s):  
Oleg Ovcharenko ◽  
Vladimir Kazei ◽  
Mahesh Kalita ◽  
Daniel Peter ◽  
Tariq Alkhalifah

Low-frequency seismic data are crucial for convergence of full-waveform inversion (FWI) to reliable subsurface properties. However, it is challenging to acquire field data with an appropriate signal-to-noise ratio in the low-frequency part of the spectrum. We have extrapolated low-frequency data from the respective higher frequency components of the seismic wavefield by using deep learning. Through wavenumber analysis, we find that extrapolation per shot gather has broader applicability than per-trace extrapolation. We numerically simulate marine seismic surveys for random subsurface models and train a deep convolutional neural network to derive a mapping between high and low frequencies. The trained network is then tested on sections from the BP and SEAM Phase I benchmark models. Our results indicate that we are able to recover 0.25 Hz data from the 2 to 4.5 Hz frequencies. We also determine that the extrapolated data are accurate enough for FWI application.


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. A1-A5 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio Sacchi

We tested a strategy for beyond-alias interpolation of seismic data using Cadzow reconstruction. The strategy enables Cadzow reconstruction to be used for interpolation of regularly sampled seismic records. First, in the frequency-space ([Formula: see text]) domain, we generated a Hankel matrix from the spatial samples of the low frequencies. To perform interpolation at a given frequency, the spatial samples were interlaced with zero samples and another Hankel matrix was generated from the zero-interlaced data. Next, the rank-reduced eigen-decomposition of the Hankel matrix at low frequencies was used for beyond-alias preconditioning of the Hankel matrix at the given frequency. Finally, antidiagonal averaging of the conditioned Hankel matrix produced the final interpolated data. In addition, the multidimensional extension of the proposed algorithm was explained. The proposed method provides a unifying thread between reduced-rank Cadzow reconstruction and beyond-alias [Formula: see text] prediction error interpolation. Synthetic and real data examples were provided to examine the performance of the proposed interpolation method.
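The rank-reduction core of a Cadzow pass can be sketched for a single real-valued signal. This is only an illustration of the Hankel / rank-reduction / antidiagonal-averaging chain: the paper operates per frequency on complex f-x Hankel matrices with zero-interlaced samples and low-frequency preconditioning, none of which is reproduced here, and the function names are ours.

```python
import numpy as np

def hankel(v, k):
    """Build the k x (len(v) - k + 1) Hankel matrix of vector v."""
    return np.array([v[i:i + len(v) - k + 1] for i in range(k)])

def antidiag_average(H):
    """Recover a vector by averaging the antidiagonals of H."""
    k, m = H.shape
    out = np.zeros(k + m - 1)
    cnt = np.zeros(k + m - 1)
    for i in range(k):
        for j in range(m):
            out[i + j] += H[i, j]
            cnt[i + j] += 1
    return out / cnt

def cadzow_denoise(v, rank, k=None):
    """One Cadzow pass: Hankel embedding -> SVD rank reduction -> antidiagonal averaging."""
    k = k or len(v) // 2
    U, s, Vt = np.linalg.svd(hankel(v, k), full_matrices=False)
    return antidiag_average((U[:, :rank] * s[:rank]) @ Vt[:rank])

# A single sinusoid has a rank-2 Hankel matrix, so rank=2 separates it from noise.
rng = np.random.default_rng(1)
signal = np.cos(0.3 * np.arange(64))
noisy = signal + 0.2 * rng.standard_normal(64)
denoised = cadzow_denoise(noisy, rank=2)
```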


Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. V283-V292 ◽  
Author(s):  
Chao Zhang ◽  
Mirko van der Baan

Microseismic and seismic data with a low signal-to-noise ratio affect the accuracy and reliability of processing results and their subsequent interpretation. Thus, denoising is of great importance. We have developed an effective denoising framework for surface (micro)-seismic data using block matching. The novel idea of the proposed framework is to enhance coherent features by grouping similar 2D data blocks into 3D data arrays. The high similarities in the 3D data arrays benefit any filtering strategy suitable for multidimensional noise suppression. We test the performance of this framework on synthetic and field data with different noise levels. The results demonstrate that the block-matching-based framework achieves state-of-the-art denoising performance in terms of incoherent-noise attenuation and signal preservation.
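The grouping step at the heart of the framework can be sketched minimally: locate the 2D blocks most similar to a reference block and stack them into a 3D array. A plain average along the group axis stands in here for the multidimensional filtering strategy; the function names and parameters are illustrative, not the authors'.

```python
import numpy as np

def match_blocks(image, ref_xy, size=8, n_match=8):
    """Return the n_match blocks most similar (in L2 distance) to the
    reference block, stacked into a 3D array of shape (n_match, size, size)."""
    ry, rx = ref_xy
    ref = image[ry:ry + size, rx:rx + size]
    candidates = []
    for y in range(0, image.shape[0] - size + 1, 2):      # coarse search grid
        for x in range(0, image.shape[1] - size + 1, 2):
            blk = image[y:y + size, x:x + size]
            candidates.append((np.sum((blk - ref) ** 2), y, x))
    candidates.sort(key=lambda t: t[0])
    return np.stack([image[y:y + size, x:x + size]
                     for _, y, x in candidates[:n_match]])

# Toy gather: a laterally coherent event (period 16 rows) plus random noise.
rng = np.random.default_rng(2)
clean = np.sin(2 * np.pi * np.arange(32) / 16.0)[:, None] * np.ones((1, 32))
noisy = clean + 0.3 * rng.standard_normal((32, 32))
group = match_blocks(noisy, (4, 4))
# Simplest collaborative filter: average the group along its first axis.
filtered_block = group.mean(axis=0)
```

Because the grouped blocks are highly similar, even this naive average attenuates incoherent noise while preserving the coherent event, which is the rationale the abstract gives for the 3D grouping.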


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB189-WB202 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio D. Sacchi

We propose a robust interpolation scheme for aliased regularly sampled seismic data that uses the curvelet transform. In a first pass, the curvelet transform is used to compute the curvelet coefficients of the aliased seismic data. The aforementioned coefficients are divided into two groups of scales: alias-free and alias-contaminated scales. The alias-free curvelet coefficients are upscaled to estimate a mask function that is used to constrain the inversion of the alias-contaminated scale coefficients. The mask function is incorporated into the inversion via a minimum norm least-squares algorithm that determines the curvelet coefficients of the desired alias-free data. Once the alias-free coefficients are determined, the curvelet synthesis operator is used to reconstruct seismograms at new spatial positions. The proposed method can be used to reconstruct regularly and irregularly sampled seismic data. We believe that our exposition leads to a clear unifying thread between [Formula: see text] and [Formula: see text] beyond-alias interpolation methods and curvelet reconstruction. As in [Formula: see text] and [Formula: see text] interpolation, we stress the necessity of examining seismic data at different scales (frequency bands) to come up with viable and robust interpolation schemes. Synthetic and real data examples are used to illustrate the performance of the proposed curvelet interpolation method.


2022 ◽  
Vol 14 (2) ◽  
pp. 263
Author(s):  
Haixia Zhao ◽  
Tingting Bai ◽  
Zhiqiang Wang

Seismic field data are usually contaminated by random or complex noise, which seriously degrades data quality and compromises seismic imaging and interpretation. Improving the signal-to-noise ratio (SNR) of seismic data has always been a key step in seismic data processing. Deep learning approaches have been successfully applied to suppress seismic random noise. Training examples are essential in deep learning methods, especially for geophysical problems, where complete training data are not easy to acquire owing to the high cost of acquisition. In this work, we propose a deep learning method pre-trained on natural images to suppress seismic random noise through the insight of transfer learning. Our network contains pre-trained and post-trained subnetworks: the former is trained on natural images to obtain preliminary denoising results, while the latter is trained on a small number of seismic images by semi-supervised learning to fine-tune the denoising effects and enhance the continuity of geological structures. Results on four types of synthetic seismic data and six field datasets demonstrate that our network performs well in seismic random noise suppression in terms of both quantitative metrics and visual effects.


Geophysics ◽  
2021 ◽  
pp. 1-52
Author(s):  
Nanying Lan ◽  
Fanchang Zhang ◽  
Chuanhui Li

Due to the limitations imposed by acquisition cost, obstacles, and inaccessible regions, the originally acquired seismic data are often sparsely or irregularly sampled in space, which seriously affects the ability of seismic data to image underground structures. Fortunately, compressed sensing provides theoretical support for interpolating and recovering irregularly or under-sampled data. Under the framework of compressed sensing, we propose a robust interpolation method for high-dimensional seismic data based on elastic half norm regularization and tensor dictionary learning. Inspired by the Elastic-Net, we first develop the elastic half norm regularization as a sparsity constraint and establish a robust high-dimensional interpolation model with this technique. Then, considering the multi-dimensional structure and spatial correlation of seismic data, we introduce a tensor dictionary learning algorithm to train a high-dimensional adaptive tensor dictionary from the original data. This tensor dictionary serves as the sparse transform for seismic data interpolation because it captures more detailed seismic features and thereby achieves an optimal and fast sparse representation of high-dimensional seismic data. Finally, we solve the robust interpolation model with an efficient iterative thresholding algorithm in the transform space and perform the space conversion with a modified imputation algorithm to recover the wavefields at the unobserved spatial positions. We conduct high-dimensional interpolation experiments on model and field seismic data on a regular data grid. Experimental results demonstrate that this method achieves superior performance and higher computational efficiency for both noise-free and noisy seismic data interpolation, compared with widely used dictionary learning-based interpolation methods.
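The iterative thresholding step relies on a proximal (threshold-then-shrink) operator for the elastic penalty. The sketch below uses an L1-based elastic-net penalty purely to illustrate that structure; the paper's elastic *half* norm (L1/2) has a different closed-form threshold that is not reproduced here, and the function name and parameters are ours.

```python
import numpy as np

def elastic_soft_threshold(c, lam, alpha):
    """Proximal step for the penalty lam * |c|_1 + (alpha / 2) * |c|^2:
    soft-threshold the coefficients, then shrink them toward zero.
    (Illustrative stand-in for the paper's half-norm threshold.)"""
    soft = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    return soft / (1.0 + alpha)

# Small coefficients are zeroed (sparsity); survivors are uniformly shrunk (ridge).
coeffs = np.array([3.0, -0.2, 1.5, 0.05, -2.0])
shrunk = elastic_soft_threshold(coeffs, lam=0.5, alpha=0.25)
# shrunk -> [2.0, 0.0, 0.8, 0.0, -1.2]
```

In the interpolation loop, this operator is applied to the transform-domain (dictionary) coefficients at each iteration, and the imputation step then restores the observed samples in the data domain.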


Geophysics ◽  
2021 ◽  
pp. 1-57
Author(s):  
Yang Liu ◽  
Geng Wu ◽  
Zhisheng Zheng

Although the amount of seismic data acquired with wide-azimuth geometry is increasing, it is difficult to achieve regular data distributions in the spatial directions owing to limitations imposed by the surface environment and economic factors. To address this issue, interpolation is an economical solution. The current state-of-the-art methods for seismic data interpolation are iterative, but iterative methods tend to incur high computational costs, which restricts their application to large, high-dimensional datasets. Hence, we developed a two-step non-iterative method to interpolate nonstationary seismic data based on streaming prediction filters (SPFs) with varying smoothness in the time-space domain, and we extended these filters to two spatial dimensions. Streaming computation, the kernel of the method, directly calculates the coefficients of the nonstationary SPF from an overdetermined equation with local smoothness constraints. In addition to the traditional streaming prediction-error filter (PEF), we proposed a similarity matrix to improve the constraint condition, so that the smoothness characteristics of adjacent filter coefficients change with the varying data. We also designed filters that are non-causal in space, which use several neighboring traces around the target traces to predict the signal and thereby obtain more accurate interpolated results than the causal-in-space version. Compared with the Fourier projection onto convex sets (POCS) interpolation method, the proposed method offers advantages such as fast computation and nonstationary event reconstruction. Applications to synthetic and nonstationary field data show that the method can successfully interpolate high-dimensional data with low computational cost and reasonable accuracy, even in the presence of aliased and conflicting events.
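The streaming update that makes such filters non-iterative admits a compact closed form: each new sample yields a rank-one correction to the previous coefficients under a local smoothness penalty. The sketch below is a 1D, causal-in-time toy under that assumption; the paper's filters are multidimensional, non-causal in space, and use a similarity-based constraint not shown here, and the names are illustrative.

```python
import numpy as np

def streaming_pef_predict(d, order=4, eps=1.0):
    """Streaming prediction filter: at each sample t, minimize
    |d[t] - x_t . a|^2 + eps * |a - a_prev|^2, whose solution is the
    rank-one update a = a_prev + x_t * r / (eps + x_t . x_t).
    eps controls how smoothly the coefficients vary in time."""
    a = np.zeros(order)
    pred = np.zeros_like(d)
    for t in range(order, len(d)):
        x = d[t - order:t][::-1]           # most recent samples first
        pred[t] = x @ a                    # a priori prediction
        r = d[t] - pred[t]                 # prediction residual
        a = a + x * r / (eps + x @ x)      # smoothness-constrained streaming update
    return pred

# A pure sinusoid is exactly predictable, so the residual decays as the
# coefficients stream toward an exact predictor.
t = np.arange(400)
d = np.sin(0.2 * t)
pred = streaming_pef_predict(d)
```

No matrix inversion or iteration over the whole dataset is needed, which is the source of the low computational cost the abstract claims.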

