A dictionary learning method with atom splitting for seismic footprint suppression

Geophysics ◽  
2021 ◽  
pp. 1-97
Author(s):  
Dawei Liu ◽  
Lei Gao ◽  
Xiaokai Wang ◽  
Wenchao Chen

The acquisition footprint causes serious interference with seismic attribute analysis, which severely hinders accurate reservoir characterization. Therefore, acquisition footprint suppression has become increasingly important in industry and academia. In this work, we assume that a time slice of 3D post-stack migrated seismic data mainly comprises two components, i.e., useful signals and the acquisition footprint. Useful signals describe the spatial distributions of geological structures with locally piecewise-smooth morphological features, whereas the acquisition footprint often behaves as periodic artifacts in the time-slice domain. In particular, the local morphological features of the acquisition footprint in marine seismic acquisition appear as stripes. Because useful signals and the acquisition footprint have different morphological features, we can train an adaptive dictionary and divide its atoms into two sub-dictionaries to reconstruct these two components. We propose an adaptive dictionary learning method for acquisition footprint suppression in time slices of 3D post-stack migrated seismic data. To obtain an adaptive dictionary, we use the K-singular value decomposition (K-SVD) algorithm to sparsely represent the patches of the time slice. Each atom of the trained dictionary represents certain local morphological features of the time slice. According to the difference in the variation level between the horizontal and vertical directions, the atoms of the trained dictionary are divided into two types: one type mainly represents the local morphological features of the acquisition footprint, whereas the other represents those of the useful signals. The two components are then reconstructed with morphological component analysis using the corresponding sub-dictionaries. Synthetic and field data examples indicate that the proposed method can effectively suppress the acquisition footprint while remaining faithful to the original data.
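As a rough illustration of the atom-splitting idea, the sketch below classifies the atoms of an already-trained dictionary by comparing their variation levels in the horizontal and vertical directions: stripe-like footprint atoms vary strongly in one direction only. The patch size, the ratio threshold, and the random dictionary standing in for a K-SVD result are assumptions for illustration, not values from the paper.

```python
# A minimal sketch of atom splitting by directional variation, assuming a
# dictionary D (patch_size**2 x n_atoms) has already been trained with K-SVD.
# The threshold `ratio_thresh` and patch size are illustrative choices.
import numpy as np

def split_atoms(D, patch_size, ratio_thresh=3.0):
    """Divide dictionary atoms into footprint-like and signal-like groups."""
    footprint_idx, signal_idx = [], []
    for k in range(D.shape[1]):
        atom = D[:, k].reshape(patch_size, patch_size)
        # Mean absolute variation along the two spatial directions.
        var_x = np.mean(np.abs(np.diff(atom, axis=1)))  # horizontal variation
        var_y = np.mean(np.abs(np.diff(atom, axis=0)))  # vertical variation
        # Stripe-like footprint atoms vary strongly in one direction only.
        ratio = max(var_x, var_y) / (min(var_x, var_y) + 1e-12)
        (footprint_idx if ratio > ratio_thresh else signal_idx).append(k)
    return np.array(footprint_idx), np.array(signal_idx)

# Example with a random dictionary (a stand-in for a K-SVD result).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
fp, sig = split_atoms(D, patch_size=8)
print(len(fp), "footprint-like atoms,", len(sig), "signal-like atoms")
```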

Geophysics ◽  
2021 ◽  
pp. 1-86
Author(s):  
Wei Chen ◽  
Omar M. Saad ◽  
Yapo Abolé Serge Innocent Oboué ◽  
Liuqing Yang ◽  
Yangkang Chen

Most traditional seismic denoising algorithms cause damage to useful signals, which is visible in the removed noise profiles and is known as signal leakage. The local signal-and-noise orthogonalization method is effective for retrieving the leaked signals from the removed noise. However, the trade-off between retrieving leaked signals and rejecting noise is governed by the smoothing-radius parameter of the local orthogonalization method, which is inconvenient to tune because it is a global parameter while seismic data are highly variable locally. To retrieve the leaked signals adaptively, we propose a new dictionary learning method. Because of its patch-based nature, dictionary learning can adapt to the local features of seismic data. We train a dictionary of atoms that represent the features of the useful signals from the initially denoised data. Based on the learned features, we retrieve the weak leaked signals from the noise via a sparse coding step. Considering the large computational cost of training a dictionary from high-dimensional seismic data, we leverage a fast dictionary updating algorithm, in which the singular value decomposition (SVD) is replaced by an algebraic mean to update each dictionary atom. We test the performance of the proposed method on several synthetic and field data examples and compare it with the state-of-the-art local orthogonalization method.
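A minimal sketch of the fast dictionary-update step is given below: the rank-1 SVD of the restricted residual matrix used in standard K-SVD is replaced by an algebraic mean of its columns. The sign alignment, normalization, and coefficient refit are illustrative assumptions rather than the authors' exact update rule.

```python
# One pass of atom-by-atom updating without SVD, assuming the sparse codes X
# were produced by a prior sparse coding step (e.g., OMP).
import numpy as np

def fast_atom_update(Y, D, X):
    """Y: data patches (n_features x n_patches), D: dictionary, X: sparse codes."""
    for k in range(D.shape[1]):
        users = np.nonzero(X[k, :])[0]          # patches that use atom k
        if users.size == 0:
            continue
        # Residual with atom k removed, restricted to those patches.
        E_k = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        # Sign-aligned algebraic mean of the residual columns replaces the
        # leading singular vector of E_k used in standard K-SVD.
        d_new = E_k @ np.sign(X[k, users])
        norm = np.linalg.norm(d_new)
        if norm > 1e-12:
            D[:, k] = d_new / norm
            X[k, users] = D[:, k] @ E_k          # refit the coefficients
    return D, X
```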


2021 ◽  
Vol 11 (11) ◽  
pp. 4874
Author(s):  
Milan Brankovic ◽  
Eduardo Gildin ◽  
Richard L. Gibson ◽  
Mark E. Everett

Seismic data provide integral information in geophysical exploration, both for locating hydrocarbon-rich areas and for fracture monitoring during well stimulation. Because of its high-frequency acquisition rate and dense spatial sampling, distributed acoustic sensing (DAS) has seen increasing application in microseismic monitoring. Given the large volumes of data to be analyzed in real time and the impractical memory and storage requirements, fast compression and accurate interpretation methods are necessary for real-time monitoring campaigns using DAS. In response to these developments in data acquisition, we have created shifted-matrix decomposition (SMD) to compress seismic data by storing it as pairs of singular vectors coupled with shift vectors. This is achieved by shifting the columns of a matrix of seismic data before applying singular value decomposition (SVD) to extract a pair of singular vectors. The purpose of SMD is data denoising as well as compression, since reconstructing seismic data from its compressed form yields a denoised version of the original data. By analyzing the data in its compressed form, we can also run signal detection and velocity estimation. The developed algorithm can therefore simultaneously compress and denoise seismic data while also analyzing the compressed data to estimate signal presence and wave velocities. To show its efficiency, we compare SMD to local SVD and structure-oriented SVD, similar SVD-based methods used only for denoising seismic data. While the development of SMD is motivated by the increasing use of DAS, it can be applied to any seismic data obtained from a large number of receivers; for example, we present initial applications of SMD to readily available marine seismic data.
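The sketch below illustrates the core shift-then-SVD step of SMD on a toy example: each trace is shifted by a given amount, a single pair of singular vectors is extracted, and the data are reconstructed by undoing the shifts. The shift vector is assumed to be known here; the paper's shift estimation and handling of multiple events are not reproduced.

```python
# A minimal sketch of the shift-then-rank-1-SVD idea behind SMD.
import numpy as np

def smd_compress(data, shifts):
    """data: (n_time, n_traces); shifts: integer time shift per trace."""
    aligned = np.column_stack(
        [np.roll(data[:, j], -shifts[j]) for j in range(data.shape[1])])
    U, s, Vt = np.linalg.svd(aligned, full_matrices=False)
    # Keep only the leading pair of singular vectors plus the shift vector.
    return U[:, 0], s[0] * Vt[0, :], shifts

def smd_reconstruct(u, v, shifts):
    rank1 = np.outer(u, v)
    return np.column_stack(
        [np.roll(rank1[:, j], shifts[j]) for j in range(rank1.shape[1])])

# Toy example: a single event with one sample of moveout per trace.
n_t, n_x = 200, 50
t0, shifts = 60, np.arange(n_x)
data = np.zeros((n_t, n_x))
for j in range(n_x):
    data[t0 + shifts[j], j] = 1.0
u, v, sh = smd_compress(data, shifts)
rec = smd_reconstruct(u, v, sh)
print(np.allclose(rec, data, atol=1e-8))   # exact for this rank-1 toy case
```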


Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. V407-V414
Author(s):  
Yanghua Wang ◽  
Xiwu Liu ◽  
Fengxia Gao ◽  
Ying Rao

The 3D seismic data in the prestack domain are contaminated by impulse noise. We have adopted a robust vector median filter (VMF) for attenuating the impulse noise from 3D seismic data cubes. The proposed filter has two attractive features. First, it is robust; the vector median that is the output of the filter not only has a minimum distance to all input data vectors, but it also has a high similarity to the original data vector. Second, it is structure adaptive; the filter is implemented following the local structure of coherent seismic events. The application of the robust and structure-adaptive VMF is demonstrated using an example data set acquired from an area with strong sedimentary rhythmites composed of steep-dipping thin layers. This robust filter significantly improves the signal-to-noise ratio of seismic data while preserving any discontinuity of reflections and maintaining the fidelity of amplitudes, which will facilitate the reservoir characterization that follows.
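For context, a plain vector median filter can be sketched as below: within a local window, the output is the member vector with the smallest total distance to all other vectors. The robustness weighting and structure-adaptive windowing of the proposed filter are not reproduced; the toy data and distance norm are illustrative choices.

```python
# A minimal sketch of a plain vector median filter over a window of data
# vectors (e.g., local trace segments).
import numpy as np

def vector_median(vectors, p=2):
    """vectors: (n_vectors, n_samples); returns the vector median."""
    diff = vectors[:, None, :] - vectors[None, :, :]
    dist = np.linalg.norm(diff, ord=p, axis=2).sum(axis=1)
    return vectors[np.argmin(dist)]

# Toy example: one impulsive outlier among otherwise similar traces.
rng = np.random.default_rng(1)
window = rng.standard_normal((5, 20)) * 0.1
window[2, 7] += 10.0                      # impulse noise on one trace
vm = vector_median(window)
print(np.abs(vm).max() < 1.0)             # the outlier trace is not selected
```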


2020 ◽  
Vol 39 (10) ◽  
pp. 727-733
Author(s):  
Haibin Di ◽  
Leigh Truelove ◽  
Cen Li ◽  
Aria Abubakar

Accurate mapping of structural faults and stratigraphic sequences is essential to the success of subsurface interpretation, geologic modeling, reservoir characterization, stress history analysis, and resource recovery estimation. In past decades, manual interpretation assisted by computational tools such as seismic attribute analysis has been commonly used to deliver the most reliable seismic interpretation. Because of the dramatic increase in seismic data size, the efficiency of this process is challenged; it has also become overly time-intensive and subject to bias from seismic interpreters. In this study, we implement deep convolutional neural networks (CNNs) for automating the interpretation of faults and stratigraphy on the Opunake-3D seismic data set over the Taranaki Basin of New Zealand. Both the fault and stratigraphy interpretations are formulated as image segmentation problems, and each workflow integrates two deep CNNs. Their specific implementations differ in three aspects. First, the fault detection is binary, whereas the stratigraphy interpretation targets multiple classes depending on the sequences of interest to seismic interpreters. Second, while the fault CNN utilizes only the seismic amplitude for its learning, the stratigraphy CNN additionally utilizes the fault probability as a structural constraint in near-fault zones. Third, and more innovatively, to enhance lateral consistency and reduce artifacts of machine prediction, the fault workflow incorporates a component of horizontal fault grouping, while the stratigraphy workflow incorporates a component of feature self-learning from the seismic data set. With seven of 765 inlines and 23 of 2233 crosslines manually annotated, only about 1% of the available seismic data, the faults and four sequences are well interpreted throughout the entire seismic survey. The results not only match the seismic images but, more importantly, support the graben structure documented in the Taranaki Basin.
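A minimal PyTorch sketch of the two-network arrangement described above is shown below: a binary fault network takes the seismic amplitude alone, and a multi-class stratigraphy network takes the amplitude together with the predicted fault probability as a second input channel. The tiny architecture, class count, and tensor sizes are illustrative assumptions, not the networks used in the study.

```python
import torch
import torch.nn as nn

def small_segnet(in_ch, out_ch):
    # A deliberately tiny per-pixel classifier standing in for the deep CNNs.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 1))             # per-pixel class logits

fault_net = small_segnet(in_ch=1, out_ch=1)   # binary fault probability
strat_net = small_segnet(in_ch=2, out_ch=5)   # e.g., 4 sequences + background

amplitude = torch.randn(1, 1, 128, 128)       # one seismic section (toy size)
fault_prob = torch.sigmoid(fault_net(amplitude))
strat_logits = strat_net(torch.cat([amplitude, fault_prob], dim=1))
print(fault_prob.shape, strat_logits.shape)
```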


Geosciences ◽  
2018 ◽  
Vol 8 (11) ◽  
pp. 426
Author(s):  
Kristina Novak Zelenika ◽  
Karolina Novak Mavar ◽  
Stipica Brnada

The sweetness seismic attribute is a very useful tool for describing the depositional environment, reservoir quality, and lithofacies discrimination. This paper shows that depositional channels and turbidite sandstones deposited during the Upper Pannonian and Lower Pontian in the Sava Depression can be described using porosity–thickness and sweetness seismic attribute maps. The two studied reservoirs are of Neogene age (the "UP" reservoir of Upper Pannonian age and the "LP" reservoir of Lower Pontian age) and are located in the Sava Depression, Croatia. Both reservoirs contain medium- to fine-grained sandstones intercalated with basinal marls. A comparison of the sweetness seismic attribute and porosity–thickness maps shows a good visual match, with a correlation coefficient of approximately 0.85. A mismatch was observed in areas with small reservoir thickness. This work demonstrates the importance of using porosity–thickness maps for reservoir characterization. The workflow presented here has wider applications in frontier areas with poor seismic data or coverage.
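For reference, the sweetness attribute is commonly computed as the instantaneous envelope divided by the square root of the instantaneous frequency; the paper does not detail its attribute computation, so the sketch below is a generic illustration with an assumed toy trace and sample interval.

```python
# A generic sweetness computation via the analytic signal (Hilbert transform).
import numpy as np
from scipy.signal import hilbert

def sweetness(trace, dt):
    analytic = hilbert(trace)
    envelope = np.abs(analytic)                               # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.abs(np.gradient(phase, dt)) / (2 * np.pi)  # Hz
    return envelope / np.sqrt(inst_freq + 1e-6)

# Toy trace: a 30 Hz wavelet burst sampled at 2 ms.
dt = 0.002
t = np.arange(0, 1, dt)
trace = np.exp(-((t - 0.5) / 0.05) ** 2) * np.cos(2 * np.pi * 30 * (t - 0.5))
print(sweetness(trace, dt).max())
```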


Geophysics ◽  
2021 ◽  
pp. 1-83
Author(s):  
Mohammed Outhmane Faouzi Zizi ◽  
Pierre Turquais

For a marine seismic survey, the recorded and processed data size can reach several terabytes. Storing seismic data sets is costly, and transferring them between storage devices can be challenging. Dictionary learning has been shown to provide representations with a high level of sparsity: it stores the shape of redundant events once and represents each occurrence of these events with a single sparse coefficient. Therefore, an efficient dictionary-learning-based compression workflow, specifically designed for seismic data, is developed here. This compression method differs from conventional compression methods in three respects: 1) the transform domain is not predefined but data-driven; 2) the redundancy in seismic data is fully exploited by learning small dictionaries from local windows of the seismic shot gathers; and 3) two modes are proposed depending on the geophysical application. On a test seismic data set, we demonstrate the superior performance of the proposed workflow in terms of compression ratio over a wide range of signal-to-residual ratios, compared with standard seismic data compression methods such as the zfp software or algorithms from the Seismic Unix package. Using a more realistic marine seismic acquisition data set, we evaluate the capability of the proposed workflow to preserve the seismic signal for different applications. For applications such as near-real-time transmission and long-term data storage, we observe insignificant signal leakage on a 2D line stack when the dictionary learning method reaches a compression ratio of 24.85. For other applications, such as visual QC of shot gathers, our method preserves the visual aspect of the data even at a compression ratio of 95.
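The sketch below illustrates the general compression idea on stand-in data: learn a small dictionary from local windows of a gather, sparse-code each window, and keep only the nonzero coefficients together with the dictionary. The window size, atom count, sparsity level, and the use of MiniBatchDictionaryLearning and OMP are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
gather = rng.standard_normal((256, 128))          # stand-in for a shot gather

patches = extract_patches_2d(gather, (8, 8), max_patches=2000, random_state=0)
Y = patches.reshape(len(patches), -1)             # (n_patches, 64)

dl = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
D = dl.fit(Y).components_                         # (32, 64) learned atoms

# Sparse-code each patch with a fixed number of nonzero coefficients; only
# these values (plus their indices and the dictionary) need to be stored.
codes = orthogonal_mp(D.T, Y.T, n_nonzero_coefs=3)   # (32, n_patches)
print("stored values per patch:", np.count_nonzero(codes) / Y.shape[0])
```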


2013 ◽  
Vol 448-453 ◽  
pp. 3772-3775
Author(s):  
Wen Long Zang

Data mining technology extracts previously unknown, practical, and effective information from large amounts of data, thereby providing richer information and decision support. We use data mining techniques to handle large amounts of seismic data and to analyze the intrinsic links among reservoir characteristics, which are scattered in their distribution and not easy to extract. A Bayesian learning method is applied to extract the hidden reservoir characteristics, and accurate reservoir characterization models are then built from the completed reservoir parameters. Experimental results show that this method can effectively extract the reservoir characteristics from the data, accurately complete the reservoir modeling, and achieve satisfactory results.


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. V215-V231 ◽  
Author(s):  
Lina Liu ◽  
Jianwei Ma ◽  
Gerlind Plonka

We have developed a new regularization method for the sparse representation and denoising of seismic data. Our approach is based on two components: a sparse data representation in a learned dictionary and a similarity measure for image patches that is evaluated using the Laplacian matrix of a graph. Dictionary-learning (DL) methods aim to find a data-dependent basis or a frame that admits a sparse data representation while capturing the characteristics of the given data. We have developed two algorithms for DL based on clustering and singular-value decomposition, called the first and second dictionary constructions. Besides using an adapted dictionary, we also consider a similarity measure for the local geometric structures of the seismic data using the Laplacian matrix of a graph. Our method achieves better denoising performance than existing denoising methods, in terms of peak signal-to-noise ratio values and visual estimation of weak-event preservation. Comparisons of experimental results on field data using traditional f-x deconvolution (FX-Decon) and curvelet thresholding methods are also provided.
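As an illustration of the graph-based similarity component, the sketch below builds a weighted graph over image patches with Gaussian affinities and forms its Laplacian, whose quadratic form penalizes differences between similar patches and can serve as a regularizer in the sparse-coding objective. The bandwidth choice, patch extraction, and stand-in data are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
section = rng.standard_normal((64, 64))            # stand-in for seismic data

patches = extract_patches_2d(section, (8, 8), max_patches=200, random_state=0)
P = patches.reshape(len(patches), -1)

# Pairwise squared distances and Gaussian affinity weights between patches.
d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
sigma2 = np.median(d2)
W = np.exp(-d2 / sigma2)
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W                      # graph Laplacian
# The usual Laplacian smoothness identity:
# sum_ij W_ij ||p_i - p_j||^2 == 2 * trace(P.T @ L @ P)
print(np.allclose(2 * np.trace(P.T @ L @ P), (W * d2).sum()))
```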

