Common-azimuth seismic data fault analysis using residual UNet

2020 ◽  
Vol 8 (3) ◽  
pp. SM25-SM37 ◽  
Author(s):  
Naihao Liu ◽  
Tao He ◽  
Yajun Tian ◽  
Bangyu Wu ◽  
Jinghuai Gao ◽  
...  

Seismic fault interpretation is one of the key steps in seismic structure interpretation; it is a time-consuming task that depends strongly on the experience of the interpreter. Aiming to automate fault interpretation, we treat it as an image-segmentation problem and adopt a residual UNet (ResUNet), which introduces residual units into the UNet architecture. Using the ResUNet model, we develop a fault-versus-azimuth analysis based on offset-vector-tile data, which, as common-azimuth seismic data, provide more detailed and useful information for interpreting seismic faults. To avoid the manual effort of picking training labels and the inconsistency introduced by different interpreters, we use synthetic seismic data containing a random number of faults with different locations and throws as the training and validation data sets. The ResUNet is thus trained using only synthetic data and tested on field data. Field-data applications show that the proposed fault-detection algorithm using ResUNet predicts seismic faults more accurately than coherence- and UNet-based approaches. Moreover, fault-interpretation results computed from common-azimuth data exhibit higher lateral resolution than those computed from poststack seismic data.
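The paper does not include code; purely as an illustration of the kind of architecture it describes, the sketch below (assuming PyTorch, with invented layer sizes, not the authors' implementation) shows a residual unit of the sort that ResUNet substitutes for the plain convolution blocks of a standard UNet.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual unit: two 3x3 convolutions plus a shortcut connection,
    used in place of the plain convolution blocks of a standard UNet."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        # A 1x1 convolution matches channel counts on the skip path when needed.
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1))

    def forward(self, x):
        return self.body(x) + self.shortcut(x)

# One encoder stage applied to a single-channel seismic patch
# (batch, channel, inline, crossline); sizes are arbitrary.
block = ResidualBlock(1, 16)
patch = torch.randn(4, 1, 128, 128)
print(block(patch).shape)  # torch.Size([4, 16, 128, 128])
```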


Geophysics ◽  
1991 ◽  
Vol 56 (7) ◽  
pp. 1064-1070 ◽  
Author(s):  
Ilan Bruner ◽  
Eugeny Landa

Detection and investigation of fault zones are important for tectonic analysis and geological studies. A fault zone inferred on high-resolution seismic lines has been interpreted using a method for detecting diffracted waves that exploits the main kinematic and dynamic properties of the wavefield. Application of the method to field data from the northern Negev in Israel shows that it yields good results and, when used in conjunction with the final stacked data, can give the likely location of the fault, its sense (reverse or normal), and the amount of “low-amplitude” displacement (on the order of a wavelength or less).


Solid Earth ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 1259-1286
Author(s):  
Emma A. H. Michie ◽  
Mark J. Mulrooney ◽  
Alvar Braathen

Abstract. Significant uncertainties occur through varying methodologies when interpreting faults using seismic data. These uncertainties are carried through to the interpretation of how faults may act as baffles or barriers, or increase fluid flow. How fault segments are picked when interpreting structures, i.e. which seismic line orientation, bin spacing and line spacing are specified, as well as what surface generation algorithm is used, will dictate how rugose the surface is and hence will impact any further interpretation such as fault seal or fault growth models. We observe that an optimum spacing for fault interpretation for this case study is approximately 100 m, both for accuracy of analysis and for the time invested. It appears that any additional detail through interpretation with a line spacing of ≤ 50 m adds complexity associated with sensitivities of the individual interpreter. Further, the locations of all seismic-scale fault segmentation identified on throw–distance plots using the finest line spacing are also observed when 100 m line spacing is used. Hence, interpreting at a finer scale may not necessarily improve the subsurface model and any related analysis but may in fact lead to the production of very rough surfaces, which impacts any further fault analysis. Interpreting at a spacing greater than 100 m often leads to overly smoothed fault surfaces that miss details that could be crucial, both for fault seal and for fault growth models. Uncertainty in seismic interpretation methodology will follow through to fault seal analysis, specifically to the analysis of whether in situ stresses combined with increased pressure through CO2 injection will act to reactivate the faults, leading to up-fault fluid flow. We have shown that changing the picking strategy alters the interpreted stability of the fault, where picking with an increased line spacing has been shown to increase the overall fault stability. The picking strategy has been shown to have a minor, although potentially crucial, impact on the predicted shale gouge ratio.
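For context on the last point, the shale gouge ratio at a point on a fault is commonly computed as the thickness-weighted shale (clay) fraction of the beds that have slipped past that point, taken over a window equal to the throw. A minimal numpy sketch, with invented layer values:

```python
import numpy as np

def shale_gouge_ratio(vshale, dz, throw):
    """Shale gouge ratio at one point on a fault: the thickness-weighted
    average shale fraction of the wall-rock interval (of length `throw`)
    that has slipped past that point."""
    vshale, dz = np.asarray(vshale), np.asarray(dz)
    assert np.isclose(dz.sum(), throw), "layers must span one throw window"
    return float((vshale * dz).sum() / throw)

# Illustrative (invented) layers: shale fractions and thicknesses in metres.
print(shale_gouge_ratio(vshale=[0.8, 0.2, 0.6], dz=[20.0, 30.0, 50.0], throw=100.0))
# -> 0.52, i.e. a shale gouge ratio of 52 %
```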


2012 ◽  
Vol 518-523 ◽  
pp. 5640-5643
Author(s):  
Lei Feng ◽  
Guang Ming Li

With the deepening of oil exploitation, investigation of geological structure is particularly important, especially of faults, which have an important impact on the exploration and development of oil. However, seismic data are affected by various factors during acquisition, which reduces the signal-to-noise ratio (SNR) and interferes with the accuracy of structural interpretation. This paper presents a fault-enhancement method based on image processing. It reduces the impact of random factors and depicts faults more clearly. The method combines the anisotropy and orientation information of the image and then uses a generalized Kuwahara filter to enhance faults. This technique is of significant value in seismic fault interpretation.
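The paper gives no implementation details; the sketch below illustrates the underlying edge-preserving principle using the classic four-quadrant Kuwahara filter rather than the generalized, orientation-weighted variant the authors describe: each sample is replaced by the mean of the neighboring subwindow with the lowest variance, which smooths reflections without blurring fault edges.

```python
import numpy as np

def kuwahara(img, r=2):
    """Classic Kuwahara filter: for each pixel, consider the four
    (r+1)x(r+1) quadrants touching it and output the mean of the
    quadrant with the lowest variance (edge-preserving smoothing)."""
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img.astype(float), r, mode="reflect")
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + r, j + r                        # position in padded image
            quads = [pad[ci - r:ci + 1, cj - r:cj + 1],  # upper-left
                     pad[ci - r:ci + 1, cj:cj + r + 1],  # upper-right
                     pad[ci:ci + r + 1, cj - r:cj + 1],  # lower-left
                     pad[ci:ci + r + 1, cj:cj + r + 1]]  # lower-right
            out[i, j] = min(quads, key=np.var).mean()
    return out

# Toy section: a sharp lateral "fault" edge plus random noise.
section = np.hstack([np.ones((32, 16)), -np.ones((32, 16))])
noisy = section + 0.3 * np.random.randn(*section.shape)
smoothed = kuwahara(noisy, r=2)  # noise reduced, edge preserved
```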


Geophysics ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. O73-O80 ◽  
Author(s):  
Yihuai Lou ◽  
Bo Zhang ◽  
Ruiqi Wang ◽  
Tengfei Lin ◽  
Danping Cao

Faults in the subsurface can be an avenue of, or a barrier to, hydrocarbon flow and pressure communication. Manual interpretation of discontinuities on a 3D seismic amplitude volume is the most common way to define faults within a reservoir. Unfortunately, 3D seismic fault interpretation can be a time-consuming and tedious task. Seismic attributes such as coherence help define faults but suffer from “staircase” artifacts and non-fault-related stratigraphic discontinuities. We assume that each sample of the seismic data is located at a potential fault plane. The hypothesized fault divides the seismic data centered at the analysis sample into two subwindows. We then compute the coherence for the two subwindows and for the full analysis window. We repeat the process by rotating the hypothesized fault plane over a set of user-defined discrete fault dips and azimuths. We obtain almost the same coherence values for the subwindows and the full window if the analysis point is not located at a fault plane. The “best” fault plane results in maximum coherence for the subwindows and minimum coherence for the full window if the analysis point is located at a fault plane. To improve the continuity of the fault attribute, we finally smooth the fault-probability attribute along the estimated fault plane. We illustrate the effectiveness of our workflow by applying it to a synthetic data set and two real seismic data sets. The results indicate that our workflow successfully produces a continuous fault attribute without staircase artifacts and stratigraphic discontinuities.
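A much-reduced numpy sketch of the subwindow idea follows; it tests only a single vertical fault plane through the window center and omits the scan over discrete dips and azimuths and the along-plane smoothing described above (the coherence measure and toy data are invented for illustration).

```python
import numpy as np

def coherence(win):
    """Semblance-type coherence of a (time, trace) window: energy of the
    laterally stacked trace relative to the total trace energy."""
    stack = win.mean(axis=1)
    return (stack ** 2).sum() * win.shape[1] / ((win ** 2).sum() + 1e-12)

def fault_probability(win):
    """One hypothesized (vertical) fault through the window center.
    On a fault, each subwindow is internally coherent while the full
    window is not, so subwindow coherence exceeds full-window coherence."""
    half = win.shape[1] // 2
    c_sub = min(coherence(win[:, :half]), coherence(win[:, half:]))
    c_full = coherence(win)
    # The full workflow would repeat this over a scan of discrete fault
    # dips/azimuths and keep the best-fitting plane.
    return c_sub - c_full

# Toy windows: reflectors offset by five samples across a vertical fault.
trace = np.sin(np.linspace(0, 6 * np.pi, 64))
faulted = np.stack([trace] * 4 + [np.roll(trace, 5)] * 4, axis=1)
unfaulted = np.stack([trace] * 8, axis=1)
print(fault_probability(faulted), fault_probability(unfaulted))  # large vs ~0
```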


Geophysics ◽  
2021 ◽  
Vol 86 (1) ◽  
pp. V23-V30
Author(s):  
Zhaolun Liu ◽  
Kai Lu

We have developed convolutional sparse coding (CSC) to attenuate noise in seismic data. CSC gives a data-driven set of basis functions whose coefficients form a sparse distribution. The CSC noise-attenuation method can be divided into a training phase and a denoising phase. Seismic data with a relatively high signal-to-noise ratio are chosen for training to obtain the learned basis functions. Then, we use all (or a subset) of the basis functions to attenuate the random or coherent noise in the seismic data. Numerical experiments on synthetic data show that CSC can learn a set of shift-invariant filters, which reduces the redundancy of learned filters relative to the traditional sparse-coding denoising method. CSC achieves good denoising performance when trained on the noisy data and better performance when trained on a similar but noiseless data set. The numerical results from the field-data test indicate that CSC can effectively suppress seismic noise in complex field data. By excluding filters with coherent-noise features, our method can further attenuate coherent noise and separate ground roll.
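The abstract does not restate the underlying objective; in its common form (notation assumed here, not taken from the paper), convolutional sparse coding represents the data s as a sum of learned filters d_k convolved with sparse coefficient maps x_k:

```latex
\min_{\{d_k\},\{x_k\}} \; \tfrac{1}{2}\,\Big\lVert \sum_{k=1}^{K} d_k * x_k - s \Big\rVert_2^2
  \;+\; \lambda \sum_{k=1}^{K} \lVert x_k \rVert_1 ,
  \qquad \text{subject to } \lVert d_k \rVert_2 \le 1 .
```

The filters d_k are learned in the training phase from data with relatively high signal-to-noise ratio; denoising then amounts to solving for the coefficient maps with the filters fixed (optionally excluding filters that carry coherent-noise features) and reconstructing the sum of d_k * x_k.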


2019 ◽  
Vol 7 (3) ◽  
pp. SE251-SE267 ◽  
Author(s):  
Haibin Di ◽  
Mohammod Amir Shafiq ◽  
Zhen Wang ◽  
Ghassan AlRegib

Fault interpretation is one of the routine processes used for subsurface structure mapping and reservoir characterization from 3D seismic data. Various techniques have been developed for computer-aided fault imaging in the past few decades; for example, the conventional methods of edge detection, curvature analysis, and red-green-blue rendering, and the popular machine-learning methods such as the support vector machine (SVM), the multilayer perceptron (MLP), and the convolutional neural network (CNN). However, most of the conventional methods are performed at the sample level, with the local reflection pattern ignored, and are correspondingly sensitive to the coherent noise/processing artifacts present in seismic signals. The CNN has proven its efficiency in utilizing such local seismic patterns to assist seismic fault interpretation, but it is quite computationally intensive and often demands a higher hardware configuration (e.g., a graphics processing unit). We have developed an innovative scheme for improving seismic fault detection by integrating the computationally efficient SVM/MLP classification algorithms with local seismic attribute patterns, here denoted as super-attribute-based classification. Its added value is verified through application to a 3D seismic data set over the Great South Basin (GSB) in New Zealand, where the subsurface structure is dominated by polygonal faults. A good match is observed between the original seismic images and the detected lineaments, and the generated fault volume is verified to be usable with existing advanced fault-interpretation tools/modules, such as seeded picking and automatic extraction. We conclude that the improved performance of our scheme results from its two components. First, the SVM/MLP classifier is computationally efficient in parsing as many seismic attributes as specified by interpreters and maximizing the contribution from each attribute, which helps minimize the negative effects of using a less useful or “wrong” attribute. Second, the use of super attributes incorporates local seismic patterns into training the fault classifier, which helps exclude random noise and/or artifacts with distinct reflection patterns.
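As a rough sketch of the super-attribute idea (feeding the classifier the local pattern of attributes around each sample rather than single-sample values), the scikit-learn example below uses invented attribute sections, window size, and labels; it is not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def super_attributes(attr_sections, points, half=2):
    """Build one feature vector per analysis point by flattening a small
    (2*half+1)^2 window from every attribute section around that point,
    so the classifier sees the local reflection pattern, not one sample."""
    feats = []
    for (i, j) in points:
        patch = [sec[i - half:i + half + 1, j - half:j + half + 1].ravel()
                 for sec in attr_sections]
        feats.append(np.concatenate(patch))
    return np.array(feats)

# Invented 2D "attribute sections" standing in for, e.g., coherence and curvature.
rng = np.random.default_rng(0)
coherence = rng.random((64, 64))
curvature = rng.random((64, 64))

train_pts = [(10, 10), (20, 30), (40, 12), (50, 50)]
labels = [1, 0, 1, 0]                       # 1 = fault, 0 = non-fault (made up)

X = super_attributes([coherence, curvature], train_pts)
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(super_attributes([coherence, curvature], [(30, 30)])))
```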


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB153-WB164 ◽  
Author(s):  
William Curry ◽  
Guojian Shan

Reflection seismic data typically are undersampled. Missing near offsets in reflection seismic data can be interpolated with pseudoprimaries, generated by crosscorrelating multiples and primaries in the incomplete recorded data. These pseudoprimary data can be generated at the missing near offsets but contain many artifacts, so it is undesirable simply to replace the missing data with the pseudoprimaries. A nonstationary prediction-error filter (PEF) can instead be estimated from the pseudoprimaries and used to interpolate the missing data, producing an interpolated output that is superior to direct substitution of the pseudoprimaries into the missing offsets. This approach is applied successfully to 2D synthetic and field data. Limitations of conventional acquisition geometry restrict this approach in 3D, as illustrated using a synthetic data set.
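Schematically, and in notation assumed here rather than taken from the paper, crosscorrelating multiples with primaries that share a raypath leg yields pseudoprimaries at new (e.g., missing near) offsets; in the frequency domain this can be written as

```latex
\hat{P}(x_r, x_s, \omega) \;\approx\; \sum_{x} M(x_r, x, \omega)\, \overline{P(x, x_s, \omega)} ,
```

where M denotes the multiples, P the primaries, and the overbar complex conjugation (correlation in time). As described above, the pseudoprimaries are used only to estimate the nonstationary prediction-error filter, not substituted directly into the missing offsets.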


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
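In standard least-squares notation (assumed here, not taken from the paper), with L the wavefield-extrapolation operator, W a data weighting, epsilon a damping parameter, and d the recorded data, the regularized/datumed wavefield m solves

```latex
\mathbf{m} \;=\; \left( \mathbf{L}^{H}\mathbf{W}\mathbf{L} + \epsilon^{2}\mathbf{I} \right)^{-1} \mathbf{L}^{H}\mathbf{W}\,\mathbf{d} ,
```

and the approximation described above amounts to retaining only a limited number of diagonals of the Hessian L^H W L before it is inverted.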

