Data-driven edge detectors for seismic data interpretation

Geophysics ◽  
2021 ◽  
pp. 1-41
Author(s):  
Julián L. Gómez ◽  
Lucía E. N. Gelis ◽  
Danilo R. Velis

We present a novel method to assist in seismic interpretation. The algorithm learns data-driven edge detectors for structure enhancement when applied to time slices of 3D poststack seismic data. We obtain the operators by distilling the local and structural information retrieved from patches drawn randomly from the input time slices. The filters form an orthogonal family that behaves as structure-aware Sobel-like edge detectors, and the user can set their size and number. Results from 3D marine seismic data from Canada and New Zealand demonstrate that the proposed algorithm allows the semblance attribute to better delineate subsurface channels. This is further supported by testing the method on realistic synthetic 2D and 3D data sets containing channeling and meandering systems. We contrast the results with standard Sobel filtering, multidirectional Sobel filters of variable size, and the dip-oriented plane-wave destruction Sobel attribute. The proposed method gives results that are comparable or superior to those of the Sobel-based approaches. In addition, the obtained filters adapt to the geological structures present in each time slice, which reduces the number of unwanted artifacts in the final product.
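As a rough illustration of the filtering step, the sketch below applies a small bank of edge operators to a single time slice and combines their responses into one edge map. The hard-coded Sobel pair merely stands in for the paper's learned, structure-aware operators, and the array sizes are illustrative.

```python
# Minimal sketch: apply a bank of Sobel-like operators to a 2D time slice and
# combine the responses into one edge map (root-sum-square across filters).
import numpy as np
from scipy.signal import convolve2d

def edge_map(time_slice, filters):
    """Root-sum-square response of a filter bank over one time slice."""
    responses = [convolve2d(time_slice, f, mode="same", boundary="symm")
                 for f in filters]
    return np.sqrt(np.sum(np.square(responses), axis=0))

# Classical 3x3 Sobel pair; the paper's method would instead use operators
# distilled from random patches of the input slices (size and number user-set).
sobel_x = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
sobel_y = sobel_x.T

time_slice = np.random.randn(128, 128)  # placeholder for a real time slice
edges = edge_map(time_slice, [sobel_x, sobel_y])
```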

Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. IM37-IM49 ◽  
Author(s):  
Sanyi Yuan ◽  
Jinghan Wang ◽  
Tao Liu ◽  
Tao Xie ◽  
Shangxu Wang

Phase information of seismic signals is sensitive to subsurface discontinuities. However, 1D phase attributes are not robust when dealing with noisy data, and variations of seismic phase attributes with azimuth are seldom explored. To address these issues, we have developed 6D phase-difference attributes (PDAs) derived from azimuthal phase-frequency spectra. For the seismic volume of a given azimuth and frequency, we first construct stacked phase traces at each common-depth point along a decomposed trending direction. Then, the 6D PDA is extracted by calculating the complex-valued covariance in a 6D phase space. The proposed method enables characterization of subsurface discontinuities and indicates seismic anisotropy. Moreover, we derive a q-value attribute, obtained by singular value decomposition, to describe how strongly PDA varies with azimuth. Simulated and field wide-azimuth seismic data sets are used to illustrate the performance of the proposed 6D PDA and the derived q-value attribute. The results show that PDA at different frequencies can image various geologic features, including faults, fracture groups, and karst caves. Our field data example shows that PDA is able to discern the connectivity of karst caves using large-azimuth data. In addition, PDA along different azimuths and the q-value attribute provide a measure of azimuthal variation, as well as of the anisotropy. Our 6D PDA method can serve as a tool for wide-azimuth seismic data interpretation.
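The full 6D construction is beyond a short snippet, but its core ingredient, a covariance of unit-magnitude complex phasors built from instantaneous phase, can be sketched as below. The coherence measure shown is a simplification of, not the paper's exact, attribute.

```python
# Sketch of a phase-coherence measure: map traces to unit phasors exp(i*phi)
# and average them; coherent phases give values near 1, discontinuities near 0.
import numpy as np
from scipy.signal import hilbert

def phase_coherence(traces):
    """traces: (n_traces, n_samples). Per-sample phase coherence in [0, 1]."""
    phi = np.angle(hilbert(traces, axis=1))  # instantaneous phase per trace
    phasors = np.exp(1j * phi)               # unit-magnitude complex phasors
    return np.abs(phasors.mean(axis=0))      # complex average across traces

coh = phase_coherence(np.random.randn(8, 512))  # toy input
```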


Geophysics ◽  
2009 ◽  
Vol 74 (1) ◽  
pp. SI1-SI8 ◽  
Author(s):  
Yanwei Xue ◽  
Shuqian Dong ◽  
Gerard T. Schuster

Surface waves are a form of coherent noise that can obscure valuable reflection information in exploration records. It is sometimes difficult to eliminate these surface waves by traditional filtering approaches, such as an f-k filter, without damaging the useful signals. As a partial remedy, we propose an interferometric method to predict and subtract surface waves in seismic data. The removal of surface waves by the proposed interferometric method consists of three steps: (1) remove most of the surface waves with a nonlinear local filter; (2) predict the residual surface waves with the interferometric method; and (3) isolate the predicted residual surface waves with a nonlinear local filter and subtract them, via a matched filter, from the result of step 1. Field data tests on 2D and 3D data show that the method effectively suppresses surface waves while preserving the reflection information. The results suggest that the effectiveness of this method is sensitive to the parameter selection of the nonlinear local filter.
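For step (3), the matched-filter subtraction can be sketched as a short least-squares (Wiener) shaping filter that fits the predicted surface waves to the recorded trace before subtraction. The function below is an illustration under that assumption, not the authors' implementation.

```python
# Sketch: estimate a least-squares shaping filter that matches a surface-wave
# prediction to the recorded trace, then subtract the shaped prediction.
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

def matched_subtract(trace, prediction, nfilt=21):
    """trace, prediction: equal-length 1D arrays. Returns trace minus the
    matched (shaped) prediction."""
    # Full convolution matrix of the prediction, one column per filter lag.
    first_col = np.r_[prediction, np.zeros(nfilt - 1)]
    first_row = np.r_[prediction[0], np.zeros(nfilt - 1)]
    A = toeplitz(first_col, first_row)
    rhs = np.r_[trace, np.zeros(nfilt - 1)]
    f, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # minimizes ||A f - rhs||
    return trace - lfilter(f, [1.0], prediction)
```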


Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. V385-V396 ◽  
Author(s):  
Mohammad Amir Nazari Siahsar ◽  
Saman Gholtashi ◽  
Amin Roshandel Kahoo ◽  
Wei Chen ◽  
Yangkang Chen

Representing a signal sparsely is a useful and popular methodology in signal-processing applications. Among the widely used sparse transforms, dictionary learning (DL) algorithms have attracted the most attention owing to their ability to construct data-driven, nonanalytical (nonfixed) atoms. Various DL methods are well-established in seismic data processing due to the inherent low-rank property of this kind of data. We have introduced a novel data-driven 3D DL algorithm, extended from the 2D nonnegative DL scheme via a multitasking strategy, for random noise attenuation of seismic data. In addition to providing parts-based learning, we exploit the nonnegativity constraint to induce sparsity in the data transformation and to reduce the solution space and, consequently, the computational cost. In 3D data, we consider each slice a task. Because 3D seismic data exhibit high correlation between slices, a multitask learning approach enhances the performance of the method by sharing a common sparse coefficient matrix across all related tasks. In the learning process, each task helps the other tasks learn, and a sparser representation is thus obtained. Furthermore, unlike other DL methods that use a limited random number of patches to learn a dictionary, the proposed algorithm can take the entire data volume into account at a reasonable time cost, and thus achieves efficient and effective denoising. We have applied the method to synthetic and real 3D data, where it demonstrated superior performance in random noise attenuation compared with state-of-the-art denoising methods such as MSSA, BM4D, and FXY predictive filtering, especially in amplitude and continuity preservation in low signal-to-noise-ratio cases and fault zones.
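A single-slice analogue of the denoising idea can be sketched with off-the-shelf nonnegative dictionary learning; the multitask sharing of one sparse coefficient matrix across slices is the paper's extension and is not reproduced here. Patch size, atom count, and regularization below are placeholder values.

```python
# Sketch: patch-based denoising of one 2D slice with nonnegative sparse codes,
# using scikit-learn's DictionaryLearning as a stand-in for the 3D method.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

noisy = np.random.randn(64, 64)  # placeholder for a noisy seismic slice
patches = extract_patches_2d(noisy, (8, 8)).reshape(-1, 64)
mean = patches.mean(axis=1, keepdims=True)

# "cd"/"lasso_cd" solvers are used because they support the positivity
# constraint on the codes.
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=10,
                        fit_algorithm="cd", transform_algorithm="lasso_cd",
                        positive_code=True)
code = dl.fit_transform(patches - mean)          # sparse nonnegative codes
denoised_patches = code @ dl.components_ + mean  # reconstruct each patch
denoised = reconstruct_from_patches_2d(
    denoised_patches.reshape(-1, 8, 8), noisy.shape)
```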


2005 ◽  
Vol 45 (1) ◽  
pp. 407 ◽  
Author(s):  
H. Edwards ◽  
J. Crosby ◽  
N. David ◽  
C. Loader ◽  
S. Westlake

In a maturing province such as the North West Shelf, it is time-critical to find remaining hydrocarbon resources, and to develop small finds, before existing big field installations and their associated infrastructure are decommissioned. Finding the remaining smaller fields with subtle geophysical expression is a challenge, and a thorough understanding of the petroleum geology is essential. To achieve this, the subsurface structure and depositional systems must be understood in a regional as well as a local context.

To date, exploration companies' regional models have been based on a mixture of 2D and 3D seismic of varying vintages, orientations, and quality. Consequently, they have been incomplete and lacking in detail. To address this problem, PGS initiated the MegaSurvey Project, merging a number of 3D surveys into large, consistent 3D data sets. For the first time, the regional picture and prospect-size detail are both available from a single dataset.

Two MegaSurveys for the North West Shelf are now available: the Vulcan Sub-Basin MegaSurvey (VMS) and the Carnarvon MegaSurvey (CMS).

The MegaSurvey seismic data and consistent horizon interpretation (tied to released well control) enable asset-focussed oil companies to concentrate on the more detailed search for the subtle trap to find, understand, and develop remaining reserves. Interpretation of the first MegaSurvey (Vulcan Sub-Basin) was completed in 2004, and work is now focussed on the Carnarvon MegaSurvey, the interpretation of which will be completed in March 2005.

The PGS 3D MegaSurveys allow visualisation of the subsurface on a scale and at a resolution that have hitherto been unavailable. They provide an essential new tool to help fully unlock the remaining potential of the North West Shelf.


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. V271-V280
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis

We have developed an algorithm to perform structure-oriented filtering (SOF) on 3D seismic data by learning the data structure in the frequency domain. The method, called spectral SOF (SSOF), enhances the signal structures in the f-x-y domain by running a 1D edge-preserving filter along curvilinear, self-adaptive trajectories that connect points of similar characteristics. These self-adaptive paths are given by the eigenvectors of the smoothed structure tensor, which are easily computed using closed-form expressions. SSOF relies on a few easily tuned parameters and on simple 1D convolutions for tensor calculation and smoothing. It can process a 3D data volume with a 2D strategy using basic 1D edge-preserving filters. In contrast to other SOF techniques, such as anisotropic diffusion, anisotropic smoothing, and plane-wave prediction, SSOF does not require any iterative process to reach the denoised result. We demonstrate the performance of SSOF on three public-domain field data sets, which are subsets of the well-known Waipuku, Penobscot, and Teapot surveys. We use the Waipuku subset to show the signal preservation of the method in good-quality data where mostly background random noise is present. We then use the Penobscot subset to illustrate the attenuation of random noise and footprint signatures, as well as the improved delineation of faults and fractures. Finally, we analyze the Teapot stacked and depth-migrated subsets to show random and coherent noise removal, which improves the volume's structural details and overall lateral continuity. The results indicate that random noise, footprints, and other artifacts can be successfully suppressed, enhancing the delineation of geologic structures and seismic horizons while preserving the original signal bandwidth.
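The closed-form structure-tensor step translates directly into code. The sketch below computes the smoothed 2D structure tensor of a slice and the orientation of its dominant eigenvector, from which SSOF-style trajectories could be traced; it is a generic textbook construction, not the authors' code.

```python
# Sketch: smoothed 2D structure tensor and the closed-form orientation of its
# dominant eigenvector, the ingredient defining self-adaptive filter paths.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_orientation(img, sigma=2.0):
    """Return the local orientation (radians) of the gradient structure."""
    gx = sobel(img, axis=1)                # horizontal derivative
    gy = sobel(img, axis=0)                # vertical derivative
    jxx = gaussian_filter(gx * gx, sigma)  # smoothed tensor components
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Eigenvector angle of a 2x2 symmetric tensor, in closed form.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

theta = structure_orientation(np.random.randn(128, 128))  # toy slice
```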


2015 ◽  
Vol 3 (3) ◽  
pp. SX13-SX20 ◽  
Author(s):  
Gary L. Kinsland ◽  
Christoph W. Borst

Drawing on our experience, we develop an argument and present evidence for the utility of 3D virtual reality (3DVR) systems in the interpretation of 3D geologic data. Interpretation of 3D data by geoscientists is performed in "the mind." Visualization of 3D data in 3DVR environments is an efficient method of getting the data into the mind. Descriptions of the visualization and interpretation of several different geologic data sets in 3DVR environments illustrate the advantages of 3DVR. Despite these advantages, several reasons exist for the present limited use of 3DVR by geoscientists. With the relatively recent availability and affordability of smaller hardware and software systems, we believe 3DVR should become commonplace on the desktops of geoscience interpreters.


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA269-WA277
Author(s):  
Xudong Duan ◽  
Jie Zhang

Picking first breaks from seismic data is often a challenging problem that still requires significant human effort. We have developed an iterative process that applies a traditional automated picking method to obtain preliminary first breaks and then uses a machine learning (ML) method to identify, remove, and fix poor picks based on a multitrace analysis. The ML method involves constructing a convolutional neural network architecture that identifies poor picks across multiple traces and eliminates them. We then fill in the picks on the emptied traces with the help of the trained model. To make the training samples applicable to various regions and data sets, we apply a moveout correction based on the preliminary picks and work with the picks in the flattened input. We collected 11,239,800 labeled seismic traces. During training, the model's classification accuracy on the training and validation data sets reached 98.2% and 97.3%, respectively. We also evaluated the precision and recall, both of which exceed 94%. For prediction, results on 2D and 3D data sets that differ from the training data sets demonstrate the feasibility of our method.
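The moveout-correction idea, flattening each gather on its preliminary picks so that the network always sees a pick-relative window, can be sketched as below. The window length and names are illustrative, not taken from the paper.

```python
# Sketch: extract pick-centered windows from a gather so that training samples
# are comparable across regions and data sets (a simple "flattening" step).
import numpy as np

def flatten_on_picks(gather, picks, win=64):
    """gather: (n_traces, n_samples); picks: int sample index per trace.
    Returns (n_traces, 2*win) windows centered on the preliminary picks."""
    out = np.zeros((len(picks), 2 * win))
    for i, p in enumerate(picks):
        lo, hi = p - win, p + win
        seg = gather[i, max(lo, 0):hi]  # clip at the trace boundaries
        out[i, max(0, -lo):max(0, -lo) + seg.size] = seg
    return out

windows = flatten_on_picks(np.random.randn(24, 1000),
                           np.random.randint(100, 900, size=24))
```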


Author(s):  
Douglas L. Dorset

The quantitative use of electron diffraction intensity data for the determination of crystal structures represents the pioneering achievement in the electron crystallography of organic molecules, an effort largely begun by B. K. Vainshtein and his co-workers. However, despite numerous representative structure analyses yielding results consistent with X-ray determination, this entire effort was viewed with considerable mistrust by many crystallographers. This was no doubt due to the rather high crystallographic R-factors reported for some structures and, more importantly, the failure to convince many skeptics that the measured intensity data were adequate for ab initio structure determinations.

We have recently demonstrated the utility of these data sets for structure analyses by direct phase determination based on the probabilistic estimate of three- and four-phase structure invariant sums. Examples include the structure of diketopiperazine using Vainshtein's 3D data, a similar 3D analysis of the room-temperature structure of thiourea, and a zonal determination of the urea structure, the latter also based on data collected by the Moscow group.
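For reference, the probabilistic estimate behind triplet-based direct phasing is the standard Cochran distribution (a textbook result, not specific to this work): the three-phase invariant is most probably near zero when the associated normalized structure factors are large.

```latex
% Cochran distribution for the three-phase structure invariant
% \Phi_3 = \phi_{\mathbf{h}} + \phi_{\mathbf{k}} + \phi_{-\mathbf{h}-\mathbf{k}}
P(\Phi_3) = \frac{\exp(\kappa \cos \Phi_3)}{2\pi I_0(\kappa)},
\qquad
\kappa = \frac{2}{\sqrt{N}}\,
\bigl| E_{\mathbf{h}} E_{\mathbf{k}} E_{-\mathbf{h}-\mathbf{k}} \bigr|,
```

where the E's are normalized structure factors, N is the number of atoms in the unit cell, and I_0 is the modified Bessel function of order zero.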


2003 ◽  
Vol 42 (05) ◽  
pp. 215-219
Author(s):  
G. Platsch ◽  
A. Schwarz ◽  
K. Schmiedehausen ◽  
B. Tomandl ◽  
W. Huk ◽  
...  

Summary: Aim: Although the fusion of images from different modalities may improve diagnostic accuracy, it is rarely used in clinical routine work due to logistic problems. We therefore evaluated the performance of, and the time needed for, fusing MRI and SPECT images using semiautomated dedicated software. Patients, Material and Methods: In 32 patients, regional cerebral blood flow was measured using 99mTc ethyl cysteinate dimer (ECD) and the three-headed SPECT camera MultiSPECT 3. MRI scans of the brain were performed using either a 0.2 T Open or a 1.5 T Sonata scanner. Twelve of the MRI data sets were acquired with a 3D T1w MPRAGE sequence, 20 with a 2D acquisition technique and various echo sequences. Image fusion was performed on a Syngo workstation, using an entropy-minimizing algorithm, by an experienced user of the software. The fusion results were classified. We measured the time needed for the automated fusion procedure and, where necessary, the additional time for manual realignment after automated but insufficient fusion. Results: The mean time of the automated fusion procedure was 123 s; it was significantly shorter for the 2D than for the 3D MRI data sets. For four of the 2D data sets and two of the 3D data sets, an optimal fit was reached using the automated approach. The remaining 26 data sets required manual correction. The sum of the time required for automated fusion and that needed for manual correction averaged 320 s (range 50-886 s). Conclusion: The fusion of 3D MRI data sets took significantly longer than that of the 2D MRI data. The automated fusion tool delivered an optimal fit in 20% of cases; in the remaining 80%, manual correction was necessary. Nevertheless, each of the 32 SPECT data sets could be merged with the corresponding MRI data in less than 15 min, which seems acceptable for clinical routine use.
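As a rough sketch of the criterion such entropy-minimizing registration typically optimizes (the abstract does not specify the exact cost), the joint intensity entropy of the two images can be computed from a 2D histogram; an optimizer would minimize it over rigid-body alignment parameters.

```python
# Sketch: joint intensity entropy of two images, a common cost in
# entropy-based multimodal registration (illustrative, not the Syngo tool).
import numpy as np

def joint_entropy(img_a, img_b, bins=32):
    """Shannon entropy (bits) of the joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return -np.sum(p * np.log2(p))

h = joint_entropy(np.random.rand(64, 64), np.random.rand(64, 64))
```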

