Robust high-dimensional seismic data interpolation based on elastic half norm regularization and tensor dictionary learning

Geophysics ◽  
2021 ◽  
pp. 1-52
Author(s):  
Nanying Lan ◽  
Zhang Fanchang ◽  
Chuanhui Li

Due to the limitations imposed by acquisition cost, obstacles, and inaccessible regions, the originally acquired seismic data are often sparsely or irregularly sampled in space, which seriously degrades the ability of seismic data to image underground structures. Fortunately, compressed sensing provides theoretical support for interpolating and recovering irregularly or under-sampled data. Under the framework of compressed sensing, we propose a robust interpolation method for high-dimensional seismic data based on elastic half norm regularization and tensor dictionary learning. Inspired by the Elastic-Net, we first develop the elastic half norm regularization as a sparsity constraint and establish a robust high-dimensional interpolation model with this technique. Then, considering the multidimensional structure and spatial correlation of seismic data, we introduce a tensor dictionary learning algorithm to train a high-dimensional adaptive tensor dictionary from the original data. This tensor dictionary serves as the sparse transform for seismic data interpolation because it can capture more detailed seismic features and thus achieve an optimal, fast sparse representation of high-dimensional seismic data. Finally, we solve the robust interpolation model with an efficient iterative thresholding algorithm in the transform space and perform the space conversion with a modified imputation algorithm to recover the wavefields at the unobserved spatial positions. We conduct high-dimensional interpolation experiments on model and field seismic data on a regular grid. Experimental results demonstrate that this method achieves superior performance and higher computational efficiency than widely used dictionary-learning-based interpolation methods, for both noise-free and noisy seismic data.
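The abstract does not spell out the elastic half norm penalty, but its two ingredients have standard forms. As a rough illustration only (not the authors' exact model), the sketch below combines the closed-form half-thresholding operator of Xu et al. (2012) for the L1/2 quasi-norm with an L2 (ridge) term in Elastic-Net fashion; the weights lam1 and lam2 and the elementwise treatment are assumptions.

```python
import numpy as np

def half_threshold(y, lam):
    """Closed-form half-thresholding operator for the L1/2 quasi-norm
    (Xu et al., 2012): proximal map of lam * ||x||_{1/2}^{1/2}, elementwise."""
    thr = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)   # hard cutoff
    out = np.zeros_like(y)
    mask = np.abs(y) > thr
    ym = y[mask]
    phi = np.arccos((lam / 8.0) * (np.abs(ym) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * ym * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def elastic_half_prox(y, lam1, lam2):
    """Proximal map of lam1 * ||x||_{1/2}^{1/2} + lam2 * ||x||_2^2.
    Completing the square shows the ridge term only rescales the input
    before half-thresholding (an assumption mirroring Elastic-Net algebra)."""
    scale = 1.0 + 2.0 * lam2
    return half_threshold(y / scale, lam1 / scale)
```

In an iterative thresholding loop, an operator of this kind would be applied to the tensor-dictionary coefficients at every iteration, followed by the data re-insertion (imputation) step.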

Geophysics ◽  
2021 ◽  
pp. 1-57
Author(s):  
Yang Liu ◽  
Geng WU ◽  
Zhisheng Zheng

Although the amount of seismic data acquired with wide-azimuth geometry is increasing, it is difficult to achieve regular spatial sampling owing to limitations imposed by the surface environment and economic factors. Interpolation is an economical remedy. The current state-of-the-art methods for seismic data interpolation are iterative, but iterative methods tend to incur a high computational cost, which restricts their application to large, high-dimensional datasets. Hence, we develop a two-step, non-iterative method for interpolating nonstationary seismic data based on streaming prediction filters (SPFs) with varying smoothness in the time-space domain, and we extend these filters to two spatial dimensions. Streaming computation, the kernel of the method, directly calculates the coefficients of the nonstationary SPF from an overdetermined equation with local smoothness constraints. In addition to the traditional streaming prediction-error filter (PEF), we propose a similarity matrix that improves the constraint condition by letting the smoothness of adjacent filter coefficients change with the varying data. We also design filters that are non-causal in space, using several neighboring traces around the target trace to predict the signal, which yields more accurate interpolation than the causal-in-space version. Compared with the Fourier projection onto convex sets (POCS) interpolation method, the proposed method offers fast computation and nonstationary event reconstruction. Applications to synthetic and nonstationary field data show that it can successfully interpolate high-dimensional data with low computational cost and reasonable accuracy, even in the presence of aliased and conflicting events.
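The streaming idea itself fits in a few lines. The following 1-D sketch (not the authors' 2-D, similarity-weighted implementation) updates the prediction-error filter coefficients sample by sample using the closed-form local-smoothness solution associated with streaming PEFs (Fomel and Claerbout, 2016); the filter length na and the smoothness weight eps are illustrative parameters.

```python
import numpy as np

def streaming_pef_1d(d, na=4, eps=1.0):
    """1-D streaming prediction-error filter: each new sample updates the
    filter coefficients in closed form under a local smoothness constraint."""
    a = np.zeros(na)              # filter coefficients, evolving along the trace
    r = np.zeros(len(d))          # prediction-error output
    for i in range(na, len(d)):
        dvec = d[i - na:i][::-1]  # the na samples preceding d[i]
        e = d[i] + a @ dvec       # prediction error under the previous filter
        # closed-form minimizer of (d[i] + a.dvec)^2 + eps^2 * |a - a_prev|^2
        a = a - (e / (eps ** 2 + dvec @ dvec)) * dvec
        r[i] = d[i] + a @ dvec    # error under the updated filter
    return r, a
```

Because each coefficient update is a single rank-1 correction rather than a linear solve, the cost grows linearly with the data size, which is the source of the speed advantage over iterative schemes.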


2012 ◽  
Vol 588-589 ◽  
pp. 1312-1315
Author(s):  
Yi Kun Zhang ◽  
Ming Hui Zhang ◽  
Xin Hong Hei ◽  
Deng Xin Hua ◽  
Hao Chen

Aiming at building a Lidar data interpolation model, this paper designs and implements a GA-BP interpolation method. The proposed method uses a genetic algorithm to optimize a BP neural network, which greatly improves the calculation accuracy and convergence rate of the BP network. Experimental results show that the proposed method achieves higher interpolation accuracy than both the BP neural network and linear interpolation.
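The abstract gives no implementation details, so the following numpy-only sketch illustrates the generic GA-BP idea on a toy 1-D task: a small genetic algorithm searches for good initial weights of a one-hidden-layer tanh network, and plain backpropagation then refines the best individual. All sizes, rates, and the toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D interpolation task standing in for the Lidar samples (illustrative)
x = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(2.0 * np.pi * x)

H = 8                         # hidden units
NW = 3 * H + 1                # weights/biases of a 1-H-1 tanh network

def unpack(w):
    return (w[:H].reshape(1, H), w[H:2 * H],
            w[2 * H:3 * H].reshape(H, 1), w[3 * H])

def forward(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(w):
    return float(np.mean((forward(w)[1] - y) ** 2))

# --- Genetic search over initial weight vectors ---
pop = rng.normal(0.0, 1.0, (30, NW))
for gen in range(50):
    order = np.argsort([mse(w) for w in pop])
    parents = pop[order[:10]]                         # elitist selection
    children = []
    for _ in range(20):
        p, q = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(NW) < 0.5, p, q)  # uniform crossover
        child = child + rng.normal(0.0, 0.1, NW) * (rng.random(NW) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

w = min(pop, key=mse).copy()   # GA-optimized initial weights

# --- BP refinement starting from the GA solution ---
lr = 0.05
for _ in range(2000):
    W1, b1, W2, b2 = unpack(w)
    h, pred = forward(w)
    e = 2.0 * (pred - y) / len(x)        # dMSE/dpred
    dh = (e @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    grad = np.concatenate([(x.T @ dh).ravel(), dh.sum(0),
                           (h.T @ e).ravel(), [float(e.sum())]])
    w = w - lr * grad

print("final MSE:", mse(w))
```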


2020 ◽  
Vol 222 (3) ◽  
pp. 1717-1727 ◽  
Author(s):  
Yangkang Chen

SUMMARY The K-SVD algorithm has been successfully utilized for adaptively learning a sparse dictionary in 2-D seismic denoising. Because of the high computational cost of the many singular value decompositions (SVDs) in the K-SVD algorithm, it is not practical in many situations, especially for 3-D or 5-D problems. In this paper, I extend the dictionary-learning-based denoising approach from 2-D to 3-D. To address the computational efficiency problem of K-SVD, I propose a fast dictionary learning approach based on the sequential generalized K-means (SGK) algorithm for denoising multidimensional seismic data. The SGK algorithm updates each dictionary atom by taking an arithmetic average of several training signals instead of calculating an SVD as in the K-SVD algorithm. I summarize the sparse dictionary learning algorithm using K-SVD and introduce the SGK algorithm together with its detailed mathematical implications. 3-D synthetic, 2-D field, and 3-D field data examples demonstrate the performance of both the K-SVD and SGK algorithms. The SGK algorithm significantly increases computational efficiency while only slightly degrading the denoising performance.
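To make the contrast concrete, here is a hedged sketch of the two atom-update rules side by side, with the sparse coding simplified to a hard one-atom-per-patch assignment so the averaging is easy to see; `assign` and the normalization are simplifications of the full SGK and K-SVD updates, not the paper's code.

```python
import numpy as np

def update_atoms_sgk(D, X, assign):
    """SGK-style update under a hard assignment: each atom becomes the
    (normalized) arithmetic average of the patches that use it - no SVD."""
    for k in range(D.shape[1]):
        members = X[:, assign == k]
        if members.shape[1] > 0:
            atom = members.mean(axis=1)
            D[:, k] = atom / (np.linalg.norm(atom) + 1e-12)
    return D

def update_atoms_ksvd(D, X, assign):
    """K-SVD-style update for the same setting: one SVD per atom, taking the
    leading left singular vector of the patches that use atom k - this per-atom
    SVD is the cost the SGK average removes."""
    for k in range(D.shape[1]):
        members = X[:, assign == k]
        if members.shape[1] > 0:
            U, s, Vt = np.linalg.svd(members, full_matrices=False)
            D[:, k] = U[:, 0]
    return D
```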


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA115-WA136 ◽  
Author(s):  
Hao Zhang ◽  
Xiuyan Yang ◽  
Jianwei Ma

We have developed an interpolation method for seismic data based on denoising convolutional neural networks (CNNs). It provides a simple and efficient way around the scarcity of the labeled geophysical training data often required by deep learning methods. The method consists of two steps: (1) training a set of CNN denoisers to learn denoising from noisy-clean pairs of natural images and (2) integrating the trained CNN denoisers into the projection onto convex sets (POCS) framework to perform seismic data interpolation. We call it the CNN-POCS method. This method alleviates the demand that seismic data share similar features with the training set, as required by end-to-end deep learning for seismic data interpolation. Additionally, the method is flexible and applicable to different types of missing traces because the missing or down-sampling locations are not involved in the training step; it is thus of a plug-and-play nature. These properties indicate the high generalizability of the proposed method and reduce the need for problem-specific training. Results on synthetic and field data show promising interpolation performance of the CNN-POCS method in terms of signal-to-noise ratio, dealiasing, and weak-feature reconstruction, in comparison with traditional f-x prediction filtering, the curvelet transform, and block-matching 3D filtering methods.
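The plug-and-play structure is simple enough to sketch. Below, `denoiser` is any trained denoising operator (the paper's CNN denoisers would slot in here), and the mask re-insertion plays the role of the POCS data-consistency projection; the relaxation and threshold-scheduling details of the actual CNN-POCS scheme are omitted.

```python
import numpy as np

def cnn_pocs(d_obs, mask, denoiser, n_iter=50):
    """Minimal plug-and-play POCS loop for trace interpolation.
    d_obs: data with zeros at missing traces; mask: 1 = observed, 0 = missing;
    denoiser: any callable standing in for a trained CNN denoiser."""
    d = d_obs.copy()
    for _ in range(n_iter):
        d = denoiser(d)                        # denoising step (learned prior)
        d = mask * d_obs + (1 - mask) * d      # re-insert the observed traces
    return d

# Usage with a trivial stand-in denoiser (mild lateral smoothing):
# rec = cnn_pocs(d_obs, mask,
#                lambda x: 0.5 * x + 0.25 * (np.roll(x, 1, 1) + np.roll(x, -1, 1)))
```

Because the loop never uses the missing-trace locations during training, any sampling pattern can be handled at inference time, which is the plug-and-play property the abstract emphasizes.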


Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. V385-V396 ◽  
Author(s):  
Mohammad Amir Nazari Siahsar ◽  
Saman Gholtashi ◽  
Amin Roshandel Kahoo ◽  
Wei Chen ◽  
Yangkang Chen

Representing a signal sparsely is a useful and popular methodology in signal-processing applications. Among the widely used sparse transforms, dictionary learning (DL) algorithms attract the most attention owing to their data-driven, nonanalytic (nonfixed) atoms. Various DL methods are well established in seismic data processing because of the inherent low-rank property of such data. We introduce a novel data-driven 3D DL algorithm, extended from the 2D nonnegative DL scheme via a multitasking strategy, for random noise attenuation of seismic data. In addition to providing parts-based learning, the nonnegativity constraint induces sparsity in the data transformation and reduces the solution space and, consequently, the computational cost. In 3D data, we consider each slice as a task. Because 3D seismic data exhibit high correlation between slices, a multitask learning approach enhances the performance of the method by sharing a common sparse coefficient matrix across all related tasks. In the learning process, each task helps the others learn better, so a sparser representation is obtained. Furthermore, unlike other DL methods that learn a dictionary from a limited random subset of patches, the proposed algorithm can take the whole data into account at a reasonable time cost and thus achieves efficient and effective denoising. Applied to synthetic and real 3D data, the method demonstrates superior random noise attenuation compared with state-of-the-art denoising methods such as MSSA, BM4D, and FXY predictive filtering, especially in amplitude and continuity preservation in low signal-to-noise-ratio cases and fault zones.
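A minimal numerical analogue of the shared-coefficient idea is a multitask nonnegative factorization with Lee-Seung multiplicative updates, where every slice (task) keeps its own dictionary W_j but all tasks share one coefficient matrix H. This sketch assumes nonnegative input (e.g., data shifted to be nonnegative, or amplitude envelopes) and omits the explicit sparsity machinery of the paper; it shows only the coupling across tasks.

```python
import numpy as np

def multitask_nmf(X_tasks, K=32, n_iter=100, eps=1e-9):
    """Multitask nonnegative factorization: X_j ~ W_j @ H for each slice j,
    with per-task dictionaries W_j and one shared coefficient matrix H
    (standard multiplicative updates, extended across tasks)."""
    rng = np.random.default_rng(0)
    m, n = X_tasks[0].shape
    W = [rng.random((m, K)) for _ in X_tasks]
    H = rng.random((K, n))
    for _ in range(n_iter):
        for j, Xj in enumerate(X_tasks):
            W[j] *= (Xj @ H.T) / (W[j] @ H @ H.T + eps)     # per-task dictionary
        num = sum(W[j].T @ Xj for j, Xj in enumerate(X_tasks))
        den = sum(W[j].T @ W[j] @ H for j in range(len(X_tasks)))
        H *= num / (den + eps)                               # shared coefficients
    return W, H
```

The shared H is where one slice's structure informs the others, which is the mechanism behind the "each task helps other tasks" claim.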


Geophysics ◽  
2021 ◽  
pp. 1-83
Author(s):  
Mohammed Outhmane Faouzi Zizi ◽  
Pierre Turquais

For a marine seismic survey, the recorded and processed data can reach several terabytes. Storing seismic data sets is costly, and transferring them between storage devices can be challenging. Dictionary learning has been shown to provide representations with a high level of sparsity: it stores the shape of each redundant event once and represents every occurrence of that event with a single sparse coefficient. Therefore, an efficient dictionary-learning-based compression workflow, specifically designed for seismic data, is developed here. This compression method differs from conventional methods in three respects: 1) the transform domain is not predefined but data-driven; 2) the redundancy in seismic data is fully exploited by learning small dictionaries from local windows of the seismic shot gathers; and 3) two modes are proposed, depending on the geophysical application. On a test seismic data set, the proposed workflow achieves a superior compression ratio for a wide range of signal-to-residual ratios compared with standard seismic data compression methods, such as the zfp software or algorithms from the Seismic Unix package. Using a more realistic marine acquisition data set, we evaluate the capability of the workflow to preserve the seismic signal for different applications. For applications such as near-real-time transmission and long-term data storage, we observe insignificant signal leakage on a 2D line stack when the dictionary learning method reaches a compression ratio of 24.85. For other applications, such as visual QC of shot gathers, our method preserves the visual aspect of the data even at a compression ratio of 95.
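The storage arithmetic behind the compression ratio is easy to illustrate: if each n-sample window is represented by s nonzero coefficients, only s (index, value) pairs plus one shared dictionary need be stored. The sketch below uses a plain OMP-style greedy coder as a stand-in for the paper's coding stage; the dictionary learning itself and the entropy coding are not shown.

```python
import numpy as np

def omp_code(p, D, s):
    """Greedy sparse coding of one window p against dictionary D, s atoms."""
    r, idx = p.copy(), []
    coef = np.zeros(0)
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(D.T @ r))))     # most correlated atom
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, p, rcond=None)  # refit on chosen atoms
        r = p - sub @ coef                              # update the residual
    return np.array(idx), coef

def compress(windows, D, s):
    """Each column of `windows` becomes s (atom index, coefficient) pairs."""
    return [omp_code(p, D, s) for p in windows.T]

def decompress(codes, D):
    return np.stack([D[:, i] @ c for i, c in codes], axis=1)

# Rough, entropy-free compression ratio: n raw samples vs. s indices + s values
# per window, i.e. about n / (2 * s), ignoring the one-off cost of storing D.
```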


Geophysics ◽  
2021 ◽  
pp. 1-102
Author(s):  
Murad Almadani ◽  
Umair bin Waheed ◽  
Mudassir Masood ◽  
Yangkang Chen

Seismic data inevitably suffer from random noise and missing traces in field acquisition, which limits their utilization in subsequent imaging and inversion applications. Recently, dictionary learning has achieved remarkable success in seismic data denoising and interpolation. Variants of the patch-based learning technique, such as the K-SVD algorithm, improve denoising and interpolation performance compared with analytic transform-based methods. However, patch-based learning algorithms work on overlapping patches of data and do not take the full data into account during reconstruction. By contrast, the convolutional sparse coding (CSC) model treats signals globally and has therefore shown superior performance over patch-based methods in several image-processing applications. Consequently, we test the CSC model for seismic data denoising and interpolation. In particular, we use the local block coordinate descent (LoBCoD) algorithm to reconstruct missing traces and to recover clean seismic data from noisy input. The denoising and interpolation performance of the LoBCoD algorithm is compared with that of the K-SVD and orthogonal matching pursuit (OMP) algorithms on synthetic and field data examples. We use three quality measures to assess denoising accuracy: the peak signal-to-noise ratio (PSNR), the relative L2-norm of the error (RLNE), and the structural similarity index (SSIM). LoBCoD performs better than K-SVD and OMP in all test cases, improving PSNR and SSIM and reducing RLNE. These observations suggest the enormous potential of the CSC model in seismic data denoising and interpolation applications.
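Two of the three quality measures are compact enough to state directly; assuming a clean reference section is available, they can be written as below (SSIM is left to skimage.metrics.structural_similarity rather than re-derived here, and the peak convention in PSNR is one common choice).

```python
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB; peak taken from the reference."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(np.max(np.abs(ref)) ** 2 / mse)

def rlne(ref, est):
    """Relative L2-norm of the error: ||ref - est|| / ||ref||."""
    return np.linalg.norm(ref - est) / np.linalg.norm(ref)

# SSIM: skimage.metrics.structural_similarity(ref, est, data_range=...)
```

Higher PSNR and SSIM and lower RLNE indicate better reconstruction, which is the direction of the reported LoBCoD improvements.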


2018 ◽  
Author(s):  
Xie Junfa ◽  
Wang Xiaowei ◽  
Wang Yuchao ◽  
Hu Ziduo ◽  
Zhang Tao
