Seismic Data Interpolation Using Streaming Prediction Filter in Time-Space Domain

Geophysics ◽  
2021 ◽  
pp. 1-57
Author(s):  
Yang Liu ◽  
Geng Wu ◽
Zhisheng Zheng

Although the amount of seismic data acquired with wide-azimuth geometry is increasing, it is difficult to achieve regular spatial data distributions owing to limitations imposed by the surface environment and economic factors. Interpolation is an economical solution to this problem. The current state-of-the-art methods for seismic data interpolation are iterative, but iterative methods tend to incur a high computational cost, which makes them expensive for large, high-dimensional datasets. Hence, we developed a two-step, non-iterative method to interpolate nonstationary seismic data based on streaming prediction filters (SPFs) with varying smoothness in the time-space domain, and we extended these filters to two spatial dimensions. Streaming computation, the kernel of the method, directly calculates the coefficients of the nonstationary SPF from an overdetermined equation with local-smoothness constraints. In addition to the traditional streaming prediction-error filter (PEF), we propose a similarity matrix that improves the constraint condition, allowing the smoothness characteristics of adjacent filter coefficients to change with the varying data. We also designed filters that are non-causal in space, which use several neighboring traces around the target trace to predict the signal and thus yield more accurate interpolated results than the causal-in-space version. Compared with the Fourier projection-onto-convex-sets (POCS) interpolation method, the proposed method offers advantages such as fast computation and nonstationary event reconstruction. Applications to synthetic and nonstationary field data showed that the method can successfully interpolate high-dimensional data with low computational cost and reasonable accuracy, even in the presence of aliased and conflicting events.
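The streaming idea can be illustrated with a minimal sketch (our own, not the authors' code): for each new sample, the filter coefficients are nudged from their previous values so that the prediction error shrinks while the coefficients stay close to the previous ones — the local-smoothness constraint. The closed-form single-sample update used below follows from the Sherman-Morrison formula; all variable names and values are illustrative.

```python
# Hypothetical sketch of a streaming prediction-filter update.
# For each new sample s, given the vector d of neighboring samples and the
# previous coefficients a_bar, solve
#     minimize (s + d.a)^2 + eps2 * ||a - a_bar||^2
# in closed form, so no iteration over the whole dataset is needed.

def streaming_pef_update(d, a_bar, s, eps2):
    """d: neighboring samples, a_bar: previous coefficients, s: current sample."""
    dot_da = sum(di * ai for di, ai in zip(d, a_bar))
    dot_dd = sum(di * di for di in d)
    scale = (s + dot_da) / (eps2 + dot_dd)
    a = [ai - di * scale for di, ai in zip(d, a_bar)]
    residual = s + sum(di * ai for di, ai in zip(d, a))
    return a, residual

# The residual is the raw error scaled by eps2 / (eps2 + d.d):
a, r = streaming_pef_update([1.0, 2.0], [0.0, 0.0], 3.0, 1.0)
# a == [-0.5, -1.0], r == 0.5  (raw error 3.0 scaled by 1/6)
```

Because each update touches only one sample's worth of data, the cost grows linearly with the dataset, which is the source of the speed advantage over iterative schemes.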


Geophysics ◽  
2021 ◽  
pp. 1-92
Author(s):  
Yangkang Chen ◽  
Sergey Fomel ◽  
Hang Wang ◽  
Shaohuan Zu

The prediction-error filter (PEF) assumes that seismic data can be annihilated by convolving the target data with a prediction filter in either the time-space or the frequency-space domain. Here, we extend the commonly used 2D and 3D PEF to its 5D version. To handle the nonstationarity of seismic data, we formulate the PEF in a nonstationary way, yielding the nonstationary prediction-error filter (NPEF), in which the coefficients of a fixed-size PEF vary across the whole dataset. Estimating the NPEF requires solving a highly ill-posed inverse problem, which we do via computationally efficient iterative shaping regularization. The NPEF can be used to denoise multidimensional seismic data and, more importantly, to restore highly incomplete, aliased 5D seismic data. We compare the proposed NPEF method with the state-of-the-art rank-reduction method for 5D seismic data interpolation, in cases of both irregularly and regularly missing traces, on several synthetic and real datasets. The results show that although the NPEF method is less effective than the rank-reduction method in interpolating irregularly missing traces, especially at low signal-to-noise ratio (S/N), it outperforms the rank-reduction method in interpolating aliased 5D datasets with regularly missing traces.
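The shaping-regularization iteration can be sketched in one dimension (a toy of our own devising, far simpler than the NPEF inversion in the paper): known samples are reinserted each pass via the sampling mask, and a smoothing "shaping" operator fills the gaps.

```python
# Toy sketch of iterative shaping regularization for trace interpolation:
#     m <- S[ m + M(d - M m) ],   M = sampling mask, S = shaping operator.
# Illustrative only; operator choices and parameters are our own assumptions.

def smooth(m):
    """Three-point triangle smoothing as a simple shaping operator S."""
    n = len(m)
    return [(m[max(i - 1, 0)] + 2 * m[i] + m[min(i + 1, n - 1)]) / 4.0
            for i in range(n)]

def shaping_interp(d, mask, niter=50):
    m = [di * mi for di, mi in zip(d, mask)]
    for _ in range(niter):
        # data-misfit step restricted to known traces, then shaping
        m = [mi + ma * (di - ma * mi) for mi, ma, di in zip(m, mask, d)]
        m = smooth(m)
    return m

d = [1.0, 0.0, 1.0, 0.0, 1.0]   # zeros mark the missing traces
mask = [1, 0, 1, 0, 1]
m = shaping_interp(d, mask)
# the gaps converge toward the known neighboring values
```

The shaping operator plays the role of the regularizer: each iteration enforces data fidelity at known traces and smoothness everywhere, so the missing traces are pulled toward values consistent with their neighbors.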


Geophysics ◽  
2020 ◽  
Vol 85 (1) ◽  
pp. V99-V118
Author(s):  
Yi Lin ◽  
Jinhai Zhang

Random noise attenuation plays an important role in seismic data processing. Most traditional methods suppress random noise either in the time-space domain or in a transformed domain and may have difficulty retaining detailed structures. We introduce a progressive denoising method to suppress random noise in seismic data. The method estimates the random noise at each sample independently by imposing suitable constraints on locally windowed data, first in the time-space domain and then in the transformed domain, and the denoised result for the whole dataset is gradually improved over many iterations. First, we apply an unnormalized bilateral kernel in the time-space domain to reject large-amplitude signals; then, we apply a range kernel in the frequency-wavenumber domain to reject medium-amplitude signals; finally, we obtain a total estimate of the random noise by repeating these steps approximately 30 times. Numerical examples indicate that the progressive denoising method achieves better results than two typical single-domain methods: f-x deconvolution and curvelet-domain thresholding. As an edge-preserving method, progressive denoising can greatly reduce random noise without harming the useful signals, especially the high-frequency components, which is crucial for high-resolution imaging and interpretation in later stages.
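The first pass — the unnormalized bilateral kernel in the time-space domain — can be sketched as follows (our own illustration; kernel widths and values are hypothetical, not the paper's). Because the range weight decays with amplitude difference, a strong event contributes almost nothing to the estimate at neighboring weak samples, which is how the kernel "rejects" large-amplitude signals.

```python
import math

# Unnormalized bilateral kernel in one spatial dimension (sketch).
# w_space penalizes distance, w_range penalizes amplitude difference;
# omitting the usual normalization keeps strong events out of the estimate.

def bilateral_unnormalized(u, sigma_s, sigma_r):
    n = len(u)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(n):
            w_space = math.exp(-((j - i) ** 2) / (2 * sigma_s ** 2))
            w_range = math.exp(-((u[j] - u[i]) ** 2) / (2 * sigma_r ** 2))
            acc += w_space * w_range * u[j]
        out.append(acc)
    return out

u = [0.0, 0.0, 10.0, 0.0, 0.0]      # one strong event in a weak background
est = bilateral_unnormalized(u, sigma_s=1.0, sigma_r=1.0)
# est[2] == 10.0 (the event survives at its own sample), while
# est[1] stays tiny: the strong event is rejected at neighboring samples.
```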


Geophysics ◽  
1995 ◽  
Vol 60 (6) ◽  
pp. 1887-1896 ◽  
Author(s):  
Ray Abma ◽  
Jon Claerbout

Attenuating random noise with a prediction filter in the time-space domain generally produces results similar to those of predictions done in the frequency-space domain. However, in the presence of moderate- to high-amplitude noise, time-space, or t-x, prediction passes less random noise than does frequency-space, or f-x, prediction. The f-x prediction may also produce false events in the presence of parallel events, where t-x prediction does not. These advantages of t-x prediction result from its ability to control the length of the prediction filter in time: an f-x prediction produces an effective t-x domain filter that is as long in time as the input data. Gulunay's f-x domain prediction tends to bias the predictions toward the traces nearest the output trace, allowing somewhat more noise to pass, but this bias may be overcome by modifying the system of equations used to calculate the filter. The 3-D extension of the 2-D t-x and f-x prediction techniques allows improved noise attenuation because more samples are used in the predictions and the requirement that events be strictly linear is relaxed.
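A minimal sketch (ours, not Abma and Claerbout's code) shows the core of t-x prediction: a short temporal filter predicts the target trace from a time-shifted neighbor, so its length in time is explicitly controlled. For a perfectly linear event with one sample of moveout per trace, a single least-squares coefficient predicts the neighbor exactly and the residual — the noise estimate — vanishes.

```python
# One-lag t-x prediction (sketch): find the least-squares coefficient c
# predicting trace B at time t from trace A at time t-1.

def one_lag_predictor(a_trace, b_trace):
    num = sum(a_trace[t - 1] * b_trace[t] for t in range(1, len(b_trace)))
    den = sum(a_trace[t - 1] ** 2 for t in range(1, len(b_trace)))
    return num / den

a = [0.0, 1.0, 2.0, 1.0, 0.0]        # event on trace A
b = [0.0, 0.0, 1.0, 2.0, 1.0]        # same event, shifted one sample
c = one_lag_predictor(a, b)          # -> 1.0 for a pure linear event
residual = [b[t] - c * a[t - 1] for t in range(1, len(b))]
# residual == [0.0, 0.0, 0.0, 0.0]: the coherent event is fully predicted
```

Longer filters in time and space generalize this to dipping and curved events; the key point is that the filter's temporal extent is a design choice here, whereas in f-x prediction it is effectively as long as the data.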


Geophysics ◽  
2003 ◽  
Vol 68 (2) ◽  
pp. 745-750 ◽  
Author(s):  
Wenkai Lu ◽  
Xuegong Zhang ◽  
Yanda Li

The removal of multiples without simultaneously distorting primaries is a difficult aspect of demultiple techniques. We present a new demultiple approach based on detecting and estimating localized coherent signals (multiples and primaries) on stacking velocity spectra. The trajectories of localized coherent signals are determined from the velocity and zero-offset traveltime; amplitude variations with offset (AVO) are modeled from known AVO properties. We estimate primaries and multiples from stacking velocities by polynomial approximation of the amplitude along the hyperbolic path. This procedure lets us predict multiples at near offsets using the multiples at other offsets. To further preserve the amplitude of the rebuilt primaries, stronger multiples, detected according to a multiple-to-primary energy ratio, are estimated and subtracted from the seismic data before the primaries are estimated. Because the estimation of primaries and multiples is done locally in the time-space domain, our method is efficient. Comparisons are made between our method and conventional Radon filters. The results on both synthetic and field seismic data show that our method is very promising in practical applications because it suppresses multiples efficiently while preserving primary amplitudes well.
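The core step — a polynomial amplitude fit along a hyperbolic trajectory — can be sketched as follows. This is our own toy (velocities, times, and the linear AVO trend are hypothetical), using a degree-1 fit solved in closed form: amplitudes read along the hyperbola at the far offsets determine the trend, which then predicts the near-offset amplitude.

```python
import math

# Amplitudes are picked along t(x) = sqrt(t0^2 + x^2 / v^2) and a low-order
# polynomial in offset is fitted, so near-offset values can be predicted
# from the other offsets (sketch; all numbers are illustrative).

def hyperbola_time(t0, v, x):
    return math.sqrt(t0 ** 2 + (x / v) ** 2)

def fit_line(xs, ys):
    """Closed-form least-squares fit y = a + b*x."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

offsets = [500.0, 1000.0, 1500.0, 2000.0]       # far offsets only
times = [hyperbola_time(1.0, 2000.0, x) for x in offsets]  # pick locations
amps = [1.0 - 2e-4 * x for x in offsets]        # amplitudes read at 'times'
a, b = fit_line(offsets, amps)
near_amp = a + b * 100.0                        # predicted near-offset amplitude
# a linear AVO trend is recovered exactly: a ~= 1.0, b ~= -2e-4
```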


Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. V11-V20 ◽  
Author(s):  
Benfeng Wang ◽  
Ning Zhang ◽  
Wenkai Lu ◽  
Jialin Wang

Seismic data interpolation is a longstanding problem. Most current methods are suitable only for randomly missing traces; to handle regularly missing traces, an antialiasing strategy must be included. However, designing a seismic survey with randomly distributed shots and receivers is operationally challenging and often impractical. We use a deep-learning approach for antialiasing seismic data interpolation, which extracts deep features of the training data nonlinearly by self-learning and avoids the linear-event, sparsity, and low-rank assumptions of traditional interpolation methods. Based on convolutional neural networks, an eight-layer residual learning network (ResNet), whose skip connections improve back-propagation through deep layers, is designed for interpolation, and a detailed training analysis is performed. A set of simulated data is used to train the designed ResNet, and its performance is assessed on several synthetic and field datasets. Numerical examples indicate that the trained ResNet can reconstruct regularly missing traces with high accuracy. The interpolated results in the time-space domain and the frequency-wavenumber (f-k) domain demonstrate the validity of the trained ResNet. Even though the accuracy decreases as the feature difference between the test and training data grows, the proposed method still provides reasonable interpolation results. Finally, the trained ResNet is used to reconstruct dense data with halved trace intervals for synthetic and field data. The reconstructed dense data are more continuous along the spatial direction, and the spatial aliasing disappears in the f-k domain. Such reconstructed dense data have the potential to improve the accuracy of subsequent seismic data processing and inversion.
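The residual connection behind the "better back-propagation property" can be sketched framework-free (our own illustration, not the paper's network): each block computes y = x + F(x), so with small or zero weights the block defaults to the identity and gradients flow through the skip path unchanged. A real implementation would stack eight such blocks of 2D convolutions.

```python
# Minimal 1D residual block (sketch): conv -> ReLU -> conv, plus skip.

def conv1d(x, w, b):
    """'Same'-length 1D convolution with a 3-tap kernel (zero padding)."""
    n, out = len(x), []
    for i in range(n):
        acc = b
        for k in (-1, 0, 1):
            if 0 <= i + k < n:
                acc += w[k + 1] * x[i + k]
        out.append(acc)
    return out

def residual_block(x, w1, b1, w2, b2):
    h = [max(v, 0.0) for v in conv1d(x, w1, b1)]   # conv + ReLU
    f = conv1d(h, w2, b2)
    return [xi + fi for xi, fi in zip(x, f)]       # skip connection

x = [0.5, -1.0, 2.0, 0.0]
y = residual_block(x, w1=[0.0, 0.0, 0.0], b1=0.0,
                   w2=[0.0, 0.0, 0.0], b2=0.0)
# with zero weights the block is exactly the identity: y == x
```

This identity-by-default behavior is why stacking many such blocks remains trainable: each layer only has to learn a residual correction rather than a full mapping.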


2020 ◽  
Vol 1631 ◽  
pp. 012110
Author(s):  
Xiaoguo Xie ◽  
Shuling Pan ◽  
Bing Luo ◽  
Cailing Chen ◽  
Kai Chen
