A dictionary learning method for seismic data compression

Geophysics ◽  
2021 ◽  
pp. 1-83
Author(s):  
Mohammed Outhmane Faouzi Zizi ◽  
Pierre Turquais

For a marine seismic survey, the recorded and processed data size can reach several terabytes. Storing seismic data sets is costly, and transferring them between storage devices can be challenging. Dictionary learning has been shown to provide representations with a high level of sparsity: the method stores the shape of redundant events once and represents each occurrence of these events with a single sparse coefficient. Therefore, an efficient dictionary-learning-based compression workflow, designed specifically for seismic data, is developed here. This compression method differs from conventional compression methods in three respects: 1) the transform domain is not predefined but data-driven; 2) the redundancy in seismic data is fully exploited by learning small dictionaries from local windows of the seismic shot gathers; 3) two modes are proposed, depending on the geophysical application. On a test seismic data set, we demonstrate superior performance of the proposed workflow in terms of compression ratio over a wide range of signal-to-residual ratios, compared with standard seismic data compression methods such as the zfp software or algorithms from the Seismic Unix package. Using a more realistic marine seismic acquisition data set, we evaluate the capability of the proposed workflow to preserve the seismic signal for different applications. For applications such as near-real-time transmission and long-term data storage, we observe insignificant signal leakage on a 2D line stack when the dictionary learning method reaches a compression ratio of 24.85. For other applications, such as visual QC of shot gathers, our method preserves the visual aspect of the data even at a compression ratio of 95.
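The sparse-coding step at the heart of such a compression scheme can be illustrated with a toy orthogonal matching pursuit (OMP) over a fixed dictionary: a data window that is a combination of a few learned shapes is stored as a handful of coefficients rather than as raw samples. This is a generic sketch of the principle only; the dictionary size, window length, and sparsity level below are illustrative, not the authors' settings.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate y with a few atoms of D."""
    residual = y.copy()
    idx = []
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # least-squares fit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
# toy stand-in for a learned dictionary: 64-sample windows, 128 unit-norm atoms
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
# a window built from exactly 3 atoms: the sparse code stores 3 coefficients
# (plus their indices) instead of 64 raw samples
y = D[:, [5, 40, 99]] @ np.array([2.0, -1.0, 0.5])
x = omp(D, y, n_nonzero=3)
```

The compression ratio then follows from how few nonzero coefficients are needed per window at the target signal-to-residual ratio.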

Author(s):  
A. Ogbamikhumi ◽  
T. Tralagba ◽  
E. E. Osagiede

Field ‘K’ is a mature field in the coastal swamp of the onshore Niger Delta that has been producing since 1960. As a large producing field with potential for further sustainable production, field monitoring is important for identifying areas of unproduced hydrocarbon. This can be achieved by comparing production data with the corresponding changes in acoustic impedance observed in maps generated from the base survey (initial 3D seismic) and the monitor survey (4D seismic) across the field, enabling the 4D seismic data set to be used for mapping reservoir details such as advancing water fronts and unswept zones. The availability of good-quality onshore time-lapse seismic data for Field ‘K’, acquired in 1987 and 2002, provided the opportunity to evaluate the effect of changes in reservoir fluid saturations on time-lapse amplitudes. Rock physics modelling and fluid substitution studies on well logs were carried out, and the acoustic impedance change in the reservoir was estimated to be in the range of 0.25% to about 8%. Changes in reservoir fluid saturations were confirmed with time-lapse amplitudes within the crest area of the reservoir structure, where reservoir porosity is 0.25%. In this paper, we demonstrate the use of repeat seismic to delineate swept zones and areas affected by water override in a producing onshore reservoir.


Geophysics ◽  
2021 ◽  
pp. 1-97
Author(s):  
Dawei Liu ◽  
Lei Gao ◽  
Xiaokai Wang ◽  
Wenchao Chen

Acquisition footprint causes serious interference with seismic attribute analysis, which severely hinders accurate reservoir characterization. Therefore, acquisition footprint suppression has become increasingly important in industry and academia. In this work, we assume that a time slice of 3D post-stack migrated seismic data mainly comprises two components: useful signals and acquisition footprint. Useful signals describe the spatial distributions of geological structures with locally piecewise-smooth morphological features, whereas acquisition footprint often behaves as periodic artifacts in the time-slice domain. In particular, the local morphological features of the acquisition footprint in marine seismic acquisition appear as stripes. Because useful signals and acquisition footprint have different morphological features, we can train an adaptive dictionary and divide its atoms into two sub-dictionaries to reconstruct the two components. We propose an adaptive dictionary learning method for acquisition footprint suppression in time slices of 3D post-stack migrated seismic data. To obtain an adaptive dictionary, we use the K-singular value decomposition (K-SVD) algorithm to sparsely represent the patches in the time slice. Each atom of the trained dictionary represents certain local morphological features of the time slice. According to the difference in variation level between the horizontal and vertical directions, the atoms of the trained dictionary are divided into two types: one type predominantly represents the local morphological features of the acquisition footprint, whereas the other represents those of the useful signals. The two components are then reconstructed separately via morphological component analysis, each using the corresponding type of atoms.
Synthetic and field data examples indicate that the proposed method can effectively suppress the acquisition footprint with fidelity to the original data.
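The atom-splitting criterion described above, comparing variation levels along the two patch axes, can be sketched as follows. The threshold value and the toy atoms are hypothetical illustrations, not the paper's actual parameters.

```python
import numpy as np

def split_atoms(atoms, ratio_thresh=4.0):
    """Split patch atoms into 'footprint-like' (stripe) and 'signal-like' sets
    by comparing total variation along the two patch axes."""
    footprint, signal = [], []
    for a in atoms:
        v_var = np.abs(np.diff(a, axis=0)).sum()  # variation down the patch
        h_var = np.abs(np.diff(a, axis=1)).sum()  # variation across the patch
        # a stripe-shaped atom varies strongly in one direction only
        if max(v_var, h_var) > ratio_thresh * max(min(v_var, h_var), 1e-12):
            footprint.append(a)
        else:
            signal.append(a)
    return footprint, signal

# toy atoms: a vertical-stripe "footprint" atom and a smooth blob "signal" atom
x = np.linspace(-1, 1, 8)
stripe = np.tile(np.sign(np.sin(8 * x)), (8, 1))      # varies horizontally only
blob = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))   # smooth in both directions
fp, sig = split_atoms([stripe, blob])
```

Morphological component analysis then reconstructs each component from its own sub-dictionary.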


Geophysics ◽  
2021 ◽  
pp. 1-86
Author(s):  
Wei Chen ◽  
Omar M. Saad ◽  
Yapo Abolé Serge Innocent Oboué ◽  
Liuqing Yang ◽  
Yangkang Chen

Most traditional seismic denoising algorithms damage the useful signals; the damage is visible in the removed-noise profiles and is known as signal leakage. Local signal-and-noise orthogonalization is an effective method for retrieving leaked signals from the removed noise. However, the balance between retrieving leaked signals and rejecting noise is governed by the smoothing-radius parameter, which is inconvenient to adjust because it is a global parameter while seismic data are highly variable locally. To retrieve the leaked signals adaptively, we propose a new dictionary learning method. Because of its patch-based nature, dictionary learning can adapt to the local features of seismic data. We train a dictionary of atoms that represent the features of the useful signals from the initially denoised data. Based on the learned features, we retrieve the weak leaked signals from the noise via a sparse coding step. Considering the large computational cost of training a dictionary from high-dimensional seismic data, we leverage a fast dictionary updating algorithm, in which the singular value decomposition (SVD) is replaced by an algebraic mean to update each dictionary atom. We test the performance of the proposed method on several synthetic and field data examples and compare it with the state-of-the-art local orthogonalization method.
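The idea of replacing the rank-1 SVD in the atom-update step with a cheap algebraic mean can be sketched generically: when the residual restricted to one atom's patches is close to rank one, a sign-aligned column mean lands near the first singular vector at a fraction of the cost. This is a toy illustration of that trade-off, not the authors' exact update formula.

```python
import numpy as np

def update_atom_svd(E):
    """Classic K-SVD-style atom update: first left singular vector of the residual."""
    u, s, vt = np.linalg.svd(E, full_matrices=False)
    return u[:, 0]

def update_atom_mean(E, coeffs):
    """Cheap surrogate: sign-aligned algebraic mean of the residual columns."""
    d = (E * np.sign(coeffs)).mean(axis=1)  # align columns before averaging
    return d / np.linalg.norm(d)

rng = np.random.default_rng(1)
# near-rank-1 residual: one true atom scaled by sparse coefficients, plus noise
true_atom = rng.standard_normal(32)
true_atom /= np.linalg.norm(true_atom)
coeffs = rng.uniform(0.5, 2.0, 50) * rng.choice([-1.0, 1.0], 50)
E = np.outer(true_atom, coeffs) + 0.05 * rng.standard_normal((32, 50))
d_mean = update_atom_mean(E, coeffs)
d_svd = update_atom_svd(E)
```

On near-rank-1 residuals the mean update lands close to the SVD answer, which is what makes the fast algorithm viable.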


Geophysics ◽  
2020 ◽  
pp. 1-104
Author(s):  
Volodya Hlebnikov ◽  
Thomas Elboth ◽  
Vetle Vinje ◽  
Leiv-J. Gelius

The presence of noise in towed marine seismic data is a long-standing problem. The various types of noise present in marine seismic records are never truly random; instead, seismic noise is more complex and often challenging to attenuate in seismic data processing. Therefore, we examine a wide range of real data examples contaminated by different types of noise, including swell noise, seismic interference noise, strumming noise, passing-vessel noise, vertical particle velocity noise, streamer-hit and fishing-gear noise, snapping shrimp noise, spike-like noise, cross-feed noise, and noise from streamer-mounted devices. The noise examples investigated focus only on data acquired with analogue group-forming. Each noise type is classified based on its origin, coherency, and frequency content. We then demonstrate how the noise component can be effectively attenuated through industry-standard seismic processing techniques. In this tutorial, we avoid presenting the finest details of either the physics of the different types of noise or the noise attenuation algorithms applied. Rather, we focus on presenting the noise problems themselves and show how well the community is able to address them. Our aim is that, based on the provided insights, the geophysical community will gain an appreciation of some of the most common types of noise encountered in towed marine seismic, in the hope of inspiring more researchers to focus their attention on noise problems with greater potential industry impact.


2011 ◽  
Vol 51 (1) ◽  
pp. 549 ◽  
Author(s):  
Chris Uruski

Around the end of the twentieth century, awareness grew that, in addition to the Taranaki Basin, other unexplored basins in New Zealand’s large exclusive economic zone (EEZ) and extended continental shelf (ECS) may contain petroleum. GNS Science initiated a program to assess the prospectivity of more than 1 million square kilometres of sedimentary basins in New Zealand’s marine territories. The first project, in 2001, acquired, with TGS-NOPEC, a 6,200 km reconnaissance 2D seismic survey in deep-water Taranaki. This showed a large Late Cretaceous delta built out into a northwest-trending basin above a thick succession of older rocks. Many deltas around the world are petroleum provinces, and the new data showed that the deep-water part of the Taranaki Basin may also be prospective. Since the 2001 survey, a further 9,000 km of infill 2D seismic data have been acquired and exploration continues. The New Zealand government recognised the potential of its frontier basins and, in 2005, Crown Minerals acquired a 2D survey in the East Coast Basin, North Island. This was followed by surveys in the Great South, Raukumara and Reinga basins. Petroleum exploration permits were awarded in most of these basins, and licence rounds in the Northland/Reinga Basin closed recently. New data have since been acquired from the Pegasus, Great South and Canterbury basins. The New Zealand government, through Crown Minerals, funds all or part of each survey. GNS Science interprets each new data set, and the data, along with reports, are packaged for free dissemination prior to a licensing round. The strategy has worked well, as indicated by the entry of ExxonMobil, OMV and Petrobras into New Zealand. Anadarko, another new entrant, farmed into the previously licensed Canterbury and deep-water Taranaki basins.
One of the main results of the surveys has been to show that the geology and prospectivity of New Zealand’s frontier basins may be similar to those of eastern Australia, as older, apparently unmetamorphosed successions are preserved. Extrapolating from the results in the Taranaki Basin, ultimate prospectivity is likely to be a resource of some tens of billions of barrels of oil equivalent. New Zealand’s largely submerged continent may yield continent-sized resources.


2020 ◽  
Vol 39 (10) ◽  
pp. 727-733
Author(s):  
Haibin Di ◽  
Leigh Truelove ◽  
Cen Li ◽  
Aria Abubakar

Accurate mapping of structural faults and stratigraphic sequences is essential to the success of subsurface interpretation, geologic modeling, reservoir characterization, stress history analysis, and resource recovery estimation. In the past decades, manual interpretation assisted by computational tools — i.e., seismic attribute analysis — has been commonly used to deliver the most reliable seismic interpretation. Because of the dramatic increase in seismic data size, the efficiency of this process is challenged; it has also become overly time-intensive and subject to bias from seismic interpreters. In this study, we implement deep convolutional neural networks (CNNs) for automating the interpretation of faults and stratigraphy on the Opunake-3D seismic data set over the Taranaki Basin of New Zealand. Both fault and stratigraphy interpretation are formulated as image segmentation problems, and each workflow integrates two deep CNNs. Their specific implementations differ in three aspects. First, fault detection is binary, whereas stratigraphy interpretation targets multiple classes depending on the sequences of interest to seismic interpreters. Second, while the fault CNN utilizes only the seismic amplitude for its learning, the stratigraphy CNN additionally utilizes the fault probability as a structural constraint in near-fault zones. Third, and more innovatively, to enhance lateral consistency and reduce artifacts in the machine prediction, the fault workflow incorporates a component of horizontal fault grouping, while the stratigraphy workflow incorporates a component of feature self-learning of the seismic data set. With seven of 765 inlines and 23 of 2233 crosslines manually annotated, which is only about 1% of the available seismic data, the faults and four sequences are well interpreted throughout the entire seismic survey.
The results not only match the seismic images but, more importantly, support the graben structure as documented in the Taranaki Basin.


2018 ◽  
Vol 44 (2) ◽  
pp. 144-179
Author(s):  
Eric Parsons ◽  
Cory Koedel ◽  
Li Tan

We study the relative performance of two policy-relevant value-added models—a one-step fixed effect model and a two-step aggregated residuals model—using a simulated data set well grounded in the value-added literature. A key feature of our data generating process is that student achievement depends on a continuous measure of economic disadvantage. This is a realistic condition that has implications for model performance because researchers typically have access to only a noisy, binary measure of disadvantage. We find that one- and two-step value-added models perform similarly across a wide range of student and teacher sorting conditions, with the two-step model modestly outperforming the one-step model in conditions that best match observed sorting in real data. A reason for the generally superior performance of the two-step model is that it better handles the use of an error-prone, dichotomous proxy for student disadvantage.
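The contrast between the two estimators can be sketched on simulated data in the spirit of the abstract: achievement depends on a continuous disadvantage measure, but the analyst observes only a binary proxy. The data-generating parameters below are illustrative, not those of the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_teachers, n_per = 20, 100
teacher_eff = rng.normal(0, 0.2, n_teachers)             # true value-added
teacher = np.repeat(np.arange(n_teachers), n_per)
disadvantage = rng.uniform(0, 1, n_teachers * n_per)     # continuous, unobserved
proxy = (disadvantage > 0.6).astype(float)               # noisy binary stand-in
score = teacher_eff[teacher] - 0.5 * disadvantage + rng.normal(0, 0.3, teacher.size)

# one-step: regress score on the proxy and teacher indicators jointly
X1 = np.column_stack([proxy, (teacher[:, None] == np.arange(n_teachers)).astype(float)])
beta1, *_ = np.linalg.lstsq(X1, score, rcond=None)
va_one = beta1[1:]

# two-step: residualize on the proxy first, then average residuals by teacher
X2 = np.column_stack([np.ones_like(proxy), proxy])
resid = score - X2 @ np.linalg.lstsq(X2, score, rcond=None)[0]
va_two = np.array([resid[teacher == t].mean() for t in range(n_teachers)])

# both recover the simulated teacher effects up to a constant shift
r_one = np.corrcoef(va_one, teacher_eff)[0, 1]
r_two = np.corrcoef(va_two, teacher_eff)[0, 1]
```

With random assignment, as here, the two models perform similarly; the paper's point is that differences emerge under realistic sorting of students to teachers on the unobserved continuous measure.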


Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. V385-V396 ◽  
Author(s):  
Mohammad Amir Nazari Siahsar ◽  
Saman Gholtashi ◽  
Amin Roshandel Kahoo ◽  
Wei Chen ◽  
Yangkang Chen

Representation of a signal in a sparse way is a useful and popular methodology in signal-processing applications. Among several widely used sparse transforms, dictionary learning (DL) algorithms attract the most attention due to their ability to construct data-driven, nonanalytical (nonfixed) atoms. Various DL methods are well established in seismic data processing due to the inherent low-rank property of this kind of data. We have introduced a novel data-driven 3D DL algorithm that extends the 2D nonnegative DL scheme via a multitasking strategy for random noise attenuation of seismic data. In addition to providing parts-based learning, we exploit a nonnegativity constraint to induce sparsity in the data transformation and reduce the solution space and, consequently, the computational cost. In 3D data, we consider each slice as a task. Because 3D seismic data exhibit high correlation between slices, a multitask learning approach is used to enhance the performance of the method by sharing a common sparse coefficient matrix across all related tasks. In the learning process, each task helps the other tasks learn better, and thus a sparser representation is obtained. Furthermore, unlike other DL methods that use a limited random number of patches to learn a dictionary, the proposed algorithm can take the whole data set into account at a reasonable time cost and thus achieves efficient and effective denoising. We have applied the method to synthetic and real 3D data, where it demonstrated superior performance in random noise attenuation compared with state-of-the-art denoising methods such as MSSA, BM4D, and FXY predictive filtering, especially in amplitude and continuity preservation in low signal-to-noise-ratio cases and fault zones.
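The shared-coefficient multitask idea can be sketched with plain multiplicative nonnegative-factorization updates, where each slice keeps its own nonnegative dictionary but all slices share one coefficient matrix. This is a generic illustration of the structure, not the authors' algorithm.

```python
import numpy as np

def multitask_nmf(slices, n_atoms, n_iter=500, eps=1e-9):
    """Multitask nonnegative factorization sketch: each slice Y_t ~ D_t @ X,
    with per-slice dictionaries D_t but one coefficient matrix X shared by all tasks."""
    rng = np.random.default_rng(3)
    m, n = slices[0].shape
    Ds = [rng.uniform(0.1, 1, (m, n_atoms)) for _ in slices]
    X = rng.uniform(0.1, 1, (n_atoms, n))
    for _ in range(n_iter):
        for t, Y in enumerate(slices):  # standard multiplicative update per task
            Ds[t] *= (Y @ X.T) / (Ds[t] @ X @ X.T + eps)
        num = sum(D.T @ Y for D, Y in zip(Ds, slices))
        den = sum(D.T @ D @ X for D in Ds) + eps
        X *= num / den                  # shared coefficients pool evidence from all slices
    return Ds, X

rng = np.random.default_rng(4)
# correlated toy slices: the same sparse nonnegative code, different mixing per slice
X_true = rng.uniform(0, 1, (5, 40)) * (rng.uniform(0, 1, (5, 40)) > 0.6)
slices = [rng.uniform(0, 1, (20, 5)) @ X_true for _ in range(3)]
Ds, X = multitask_nmf(slices, n_atoms=5)
err = sum(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y) for D, Y in zip(Ds, slices)) / 3
```

Because the coefficient update sums contributions over all tasks, each slice effectively helps the others fit the common code, mirroring the multitask rationale in the abstract.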


2021 ◽  
Vol 944 (1) ◽  
pp. 012005
Author(s):  
G L Situmeang ◽  
H M Manik ◽  
T B Nainggolan ◽  
Susilohadi

A wide frequency bandwidth in seismic data is a necessity because of its close relation to resolution and target depth. High-frequency seismic waves provide high-resolution imaging that defines thin bed layers in shallow sediment, while low-frequency seismic waves can penetrate to deeper targets. With its wide frequency bandwidth, broadband seismic technology is therefore a suitable geophysical exploration method in the oil and gas industry. A major obstacle frequently encountered in marine seismic data acquisition is the existence of multiples. Short-period multiples and reverberations are commonly attenuated by predictive deconvolution on prestack data, whereas advanced methods are needed to suppress long-period multiples in marine seismic data. The 2D broadband marine seismic data from the deep Morowali Waters, Sulawesi, contain both short- and long-period multiples. Predictive deconvolution, applied within the processing sequence, successfully eliminates the short-period multiples on prestack data. The combination of an F-K filter and Surface-Related Multiple Elimination (SRME) successfully attenuates the long-period multiples of the 2D broadband marine seismic data. The prestack time migration section shows fine resolution of the seismic images.
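Predictive deconvolution for short-period multiples can be sketched as a gapped Wiener prediction-error filter: the filter predicts the periodic part of a trace from its own past and the prediction is subtracted, leaving the primary. This is a generic textbook construction; the gap, filter length, and prewhitening values below are illustrative, not the processing parameters actually used.

```python
import numpy as np

def predictive_decon(trace, gap, flen, eps=1e-3):
    """Gapped Wiener prediction-error filter: predict the trace `gap` samples
    ahead from `flen` past samples, then subtract the prediction."""
    n = trace.size
    # autocorrelation-based normal equations (Toeplitz system)
    ac = np.correlate(trace, trace, mode="full")[n - 1:]
    R = np.array([[ac[abs(i - j)] for j in range(flen)] for i in range(flen)])
    R += eps * ac[0] * np.eye(flen)            # prewhitening for stability
    g = ac[gap:gap + flen]
    f = np.linalg.solve(R, g)
    pred = np.zeros(n)
    for k in range(flen):                      # convolve filter with delayed trace
        pred[gap + k:] += f[k] * trace[:n - gap - k]
    return trace - pred

# toy trace: a primary at sample 10 whose multiple repeats every 50 samples,
# alternating polarity and halving in strength (water-bottom reverberation)
n, period = 400, 50
trace = np.zeros(n)
for k in range(7):
    trace[10 + k * period] = (-0.5) ** k
out = predictive_decon(trace, gap=period - 2, flen=10)
```

After deconvolution the primary at sample 10 is preserved while the periodic multiples are driven toward zero, which is the prestack behaviour the abstract relies on before F-K filtering and SRME handle the long-period multiples.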

