Adaptive multiple subtraction based on multiband pattern coding

Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. V69-V78 ◽  
Author(s):  
Jinlin Liu ◽  
Wenkai Lu

Adaptive multiple subtraction is the key step of surface-related multiple elimination methods. The main challenge of this technique lies in removing multiples without distorting primaries. We have developed a new pattern-based method for adaptive multiple subtraction, motivated by the observation that primaries are better protected when multiples can be clearly distinguished from them. Unlike previously proposed methods, ours casts adaptive multiple subtraction as a pattern coding and decoding process. We set out to learn distinguishable patterns from the predicted multiples before estimating the multiples contained in the seismic data. Hence, we first carry out pattern coding of the predicted multiples to learn their characteristic patterns within different frequency bands; this coding step exploits the key patterns contained in the predicted multiples. The learned patterns are then used to decode (extract) the multiples contained in the seismic data, a process in which patterns similar to the learned ones are identified and extracted. Because the learned patterns are obtained from the predicted multiples only, the method is by nature insensitive to interference from primaries and shows an impressive capability for removing multiples without distorting primaries. Applications to synthetic and real data sets give promising results.
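
As a rough illustration of the multiband idea described above (not the authors' pattern coding, which is learned from the multiples alone), the sketch below band-passes the data and the predicted multiples and shapes the multiples to the data with a per-band least-squares matching filter before subtraction; all function names and parameters are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import butter, sosfiltfilt

def bandpass(trace, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

def matched_multiples(mult, data, nfilt=21, eps=1e-3):
    """Least-squares filter f such that conv(mult, f) approximates data;
    returns the shaped multiples for one frequency band."""
    col = np.r_[mult, np.zeros(nfilt - 1)]
    row = np.r_[mult[0], np.zeros(nfilt - 1)]
    M = toeplitz(col, row)                     # convolution matrix of mult
    rhs = np.r_[data, np.zeros(nfilt - 1)]
    f = np.linalg.solve(M.T @ M + eps * np.eye(nfilt), M.T @ rhs)
    return np.convolve(mult, f)[:len(mult)]

def multiband_subtract(data, mult_pred, bands, fs):
    """Shape and subtract the predicted multiples band by band."""
    est = np.zeros_like(data, dtype=float)
    for lo, hi in bands:
        d_b = bandpass(data, lo, hi, fs)
        m_b = bandpass(mult_pred, lo, hi, fs)
        est += matched_multiples(m_b, d_b)
    return data - est                          # estimated primaries
```

For example, with fs = 500 Hz one might use bands = [(5, 15), (15, 30), (30, 60)]; giving each band its own short shaping filter loosely mirrors the idea that multiples exhibit different patterns in different frequency bands.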

Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. V223-V232 ◽  
Author(s):  
Zhicheng Geng ◽  
Xinming Wu ◽  
Sergey Fomel ◽  
Yangkang Chen

The seislet transform uses the wavelet-lifting scheme and local slopes to analyze seismic data. A central issue in its definition is the design of prediction operators suited to seismic images and data. We have developed a new formulation of the seislet transform based on the relative time (RT) attribute, which uses an RT volume to construct multiscale prediction operators. With the new prediction operators, the seislet transform is accelerated because distant traces are predicted directly. We apply our method to synthetic and real data to demonstrate that the new approach reduces computational cost and obtains excellent sparse representations on the test data sets.
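
A minimal sketch of why an RT volume allows direct prediction of distant traces, which is the source of the speedup claimed above: samples that share the same RT value belong to the same event, so a trace anywhere in the gather can be predicted from a reference trace by matching RT values. The function below is a hedged illustration assuming RT increases monotonically along each trace; it is not the authors' prediction operator.

```python
import numpy as np

def predict_trace(ref_trace, rt_ref, rt_dst, t_axis):
    """Predict a (possibly distant) trace from a reference trace by
    matching values of the relative-time (RT) attribute.

    ref_trace : amplitudes of the reference trace
    rt_ref    : RT values along the reference trace (assumed monotone in t)
    rt_dst    : RT values along the destination trace
    t_axis    : common time axis
    """
    # Time in the reference trace at which each destination RT value occurs
    t_match = np.interp(rt_dst, rt_ref, t_axis)
    # Read reference amplitudes at those times (events align on constant RT)
    return np.interp(t_match, t_axis, ref_trace)
```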


Geophysics ◽  
2012 ◽  
Vol 77 (1) ◽  
pp. A5-A8 ◽  
Author(s):  
David Bonar ◽  
Mauricio Sacchi

The nonlocal means algorithm is a noise attenuation filter that was originally developed for the purposes of image denoising. This algorithm denoises each sample or pixel within an image by utilizing other similar samples or pixels regardless of their spatial proximity, making the process nonlocal. Such a technique places no assumptions on the data except that structures within the data contain a degree of redundancy. Because this is generally true for reflection seismic data, we propose to adopt the nonlocal means algorithm to attenuate random noise in seismic data. Tests with synthetic and real data sets demonstrate that the nonlocal means algorithm does not smear seismic energy across sharp discontinuities or curved events when compared to seismic denoising methods such as f-x deconvolution.
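
For concreteness, a compact (and deliberately brute-force) sketch of the nonlocal means filter applied to a 2D seismic section follows; the finite search window keeps cost manageable and is an implementation convenience, since the algorithm as described uses samples regardless of spatial proximity.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=10, h=0.1):
    """Nonlocal-means denoising of a 2D section: each sample becomes a
    weighted average of samples whose surrounding patches are similar,
    regardless of distance (within the search window)."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    nt, nx = img.shape
    for i in range(nt):
        for j in range(nx):
            p0 = padded[i:i + patch, j:j + patch]      # patch around (i, j)
            i0, i1 = max(0, i - search), min(nt, i + search + 1)
            j0, j1 = max(0, j - search), min(nx, j + search + 1)
            wsum, acc = 0.0, 0.0
            for k in range(i0, i1):
                for l in range(j0, j1):
                    pk = padded[k:k + patch, l:l + patch]
                    d2 = np.mean((p0 - pk) ** 2)       # patch dissimilarity
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * img[k, l]
            out[i, j] = acc / wsum
    return out
```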


Author(s):  
Maxim I. Protasov ◽  
Vladimir A. Tcheverda ◽  
Valery V. Shilikov ◽  
...  

The paper deals with 3D diffraction imaging and the subsequent calculation of diffraction attributes. The imaging is based on an asymmetric summation of seismic data and provides three diffraction attributes: a structural diffraction attribute, a point diffraction attribute, and the azimuth of structural diffraction. These attributes make it possible to differentiate fractured and cavernous objects and to determine fracture orientations. The approach was validated on several real data sets.


Geophysics ◽  
2009 ◽  
Vol 74 (5) ◽  
pp. R59-R67 ◽  
Author(s):  
Igor B. Morozov ◽  
Jinfeng Ma

The seismic-impedance inversion problem is inherently underconstrained and does not allow the use of rigorous joint inversion. In the absence of a true inverse, a reliable solution free from subjective parameters can be obtained by defining a set of physical constraints that should be satisfied by the resulting images. A method for constructing synthetic logs is proposed that explicitly and accurately satisfies (1) the convolutional equation, (2) time-depth constraints of the seismic data, (3) a background low-frequency model from logs or seismic/geologic interpretation, and (4) spectral amplitudes and geostatistical information from spatially interpolated well logs. The resulting synthetic log sections or volumes are interpretable in standard ways. Unlike broadly used joint-inversion algorithms, the method contains no subjectively selected user parameters, utilizes the log data more completely, and assesses intermediate results. The procedure is simple and tolerant to noise, and it leads to higher-resolution images. Separating the seismic and subseismic frequency bands also simplifies data processing for acoustic-impedance (AI) inversion. For example, zero-phase deconvolution and true-amplitude processing of seismic data are not required and are included automatically in this method. The approach is applicable to 2D and 3D data sets and to multiple pre- and poststack seismic attributes. It has been tested on inversions for AI and true-amplitude reflectivity using 2D synthetic and real-data examples.
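
Constraint (1), the convolutional equation, is easy to state concretely. Below is a minimal sketch assuming a Ricker wavelet and a normal-incidence reflectivity computed from the acoustic-impedance log; the wavelet choice and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def reflectivity_from_ai(ai):
    """Normal-incidence reflectivity from an acoustic-impedance log."""
    return (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])

def ricker(f0, dt, n):
    """Zero-phase Ricker wavelet of peak frequency f0."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(ai, f0=30.0, dt=0.002, nw=81):
    """Convolutional synthetic s = w * r, i.e. constraint (1)."""
    r = reflectivity_from_ai(ai)
    return np.convolve(r, ricker(f0, dt, nw), mode="same")
```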


Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. V79-V89 ◽  
Author(s):  
Wail A. Mousa ◽  
Abdullatif A. Al-Shuhail ◽  
Ayman Al-Lehyani

We introduce a new method for first-arrival picking based on digital color-image segmentation of energy ratios of refracted seismic data. The method uses a new color-image segmentation scheme based on projection onto convex sets (POCS), which requires a reference color for the first break and a single iteration to segment the first-break amplitudes from other arrivals. We tested the segmentation method on synthetic seismic data sets with various amounts of additive Gaussian noise. The proposed method gives performance similar to that of a modified version of Coppens’ method for traces with high signal-to-noise ratio and medium-to-large offsets. Finally, we applied both our method and the modified Coppens’ method to pick first arrivals on four real data sets, and compared both against first breaks that were picked manually and then interpolated. Using an assessment error of a 20-ms window with respect to the interpolated manual picks, we find that our method performs comparably to Coppens’ method, depending on how difficult the first arrivals in each data set are to pick. We therefore believe that our proposed method is a valuable addition to existing first-arrival picking methods.
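
The energy-ratio attribute that drives the picking is straightforward to sketch. Below is a simplified Coppens-style ratio (trailing-window energy over cumulative energy) with a crude maximum-slope pick; the paper's POCS color-image segmentation step is not reproduced here, and the window length is an assumption.

```python
import numpy as np

def energy_ratio(trace, nwin=50):
    """Coppens-style energy ratio: energy in a trailing window divided by
    the energy accumulated from the start of the trace. The ratio rises
    sharply at the first arrival."""
    e = trace.astype(float) ** 2
    num = np.convolve(e, np.ones(nwin), mode="full")[:len(e)]  # windowed energy
    den = np.cumsum(e) + 1e-12                                 # cumulative energy
    return num / den

def pick_first_break(trace, nwin=50):
    """Crude pick: the sample of maximum slope of the energy ratio."""
    er = energy_ratio(trace, nwin)
    return int(np.argmax(np.diff(er)))
```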


Geophysics ◽  
2018 ◽  
Vol 83 (1) ◽  
pp. V39-V48 ◽  
Author(s):  
Ali Gholami ◽  
Toktam Zand

The focusing power of the conventional hyperbolic Radon transform decreases for long-offset seismic data due to the nonhyperbolic behavior of moveout curves at far offsets. Furthermore, conventional Radon transforms are ineffective for processing data sets containing events of different shapes. The shifted hyperbola is a flexible three-parameter (zero-offset traveltime, slowness, and focusing-depth) function that is capable of generating linear and hyperbolic shapes and improves the accuracy of the seismic traveltime approximation at far offsets. A Radon transform based on shifted hyperbolas thus improves the focusing of seismic events in the transform domain. We have developed a new method for effective decomposition of seismic data using this three-parameter Radon transform. A very fast algorithm is constructed for high-resolution calculations of the new Radon transform using the recently proposed generalized Fourier slice theorem (GFST). The GFST establishes an analytic expression between the Fourier coefficients of the data and the Fourier coefficients of its Radon transform, with which very fast switching between the model and data spaces is possible by means of interpolation procedures and fast Fourier transforms. The high performance of the new algorithm is demonstrated on synthetic and real data sets for trace interpolation and linear (ground-roll) noise attenuation.
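
The shifted hyperbola itself can be written compactly. The sketch below uses the classical Castle (1994) parameterization with zero-offset time t0, velocity v, and shift parameter S; the paper parameterizes by zero-offset traveltime, slowness, and focusing depth, so this form is an equivalent stand-in rather than the authors' exact notation.

```python
import numpy as np

def shifted_hyperbola(x, t0, v, S):
    """Shifted-hyperbola traveltime (Castle, 1994):
        t(x) = t0*(1 - 1/S) + sqrt((t0/S)**2 + x**2/(S*v**2))
    S = 1 recovers the ordinary hyperbola; other values of S change the
    far-offset asymptote, which is linear with slope 1/(sqrt(S)*v)."""
    return t0 * (1.0 - 1.0 / S) + np.sqrt((t0 / S) ** 2 + x ** 2 / (S * v ** 2))
```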


Geophysics ◽  
2000 ◽  
Vol 65 (2) ◽  
pp. 368-376 ◽  
Author(s):  
Bruce S. Hart ◽  
Robert S. Balch

Much industry interest is centered on how to integrate well data and attributes derived from 3-D seismic data sets in the hope of defining reservoir properties in interwell areas. Unfortunately, the statistical underpinnings of the methods become less robust in areas where only a few wells are available, as might be the case in a new or small field. Especially in areas of limited well availability, we suggest that the physical basis of the attributes selected during the correlation procedure be validated by generating synthetic seismic sections from geologic models, then deriving attributes from the sections. We demonstrate this approach with a case study from Appleton field of southwestern Alabama. In this small field, dolomites of the Jurassic Smackover Formation produce from an anticlinal feature about 3800 m deep. We used available geologic information to generate synthetic seismic sections that showed the expected seismic response of the target formation; then we picked the relevant horizons in a 3-D seismic data volume that spanned the study area. Using multiple regression, we derived an empirical relationship between three seismic attributes of this 3-D volume and a log‐derived porosity indicator. Our choice of attributes was validated by deriving complex trace attributes from our seismic modeling results and confirming that the relationships between well properties and real‐data attributes were physically valid. Additionally, the porosity distribution predicted by the 3-D seismic data was reasonable within the context of the depositional model used for the area. Results from a new well drilled after our study validated our porosity prediction, although our structural prediction for the top of the porosity zone was erroneous. These results remind us that seismic interpretations should be viewed as works in progress which need to be updated when new data become available.
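
The multiple-regression step is the quantitative core of this workflow. A minimal sketch follows, assuming three attributes sampled at well locations and a log-derived porosity indicator; the specific attributes and the validation against modeled attributes are as described above and not reproduced here.

```python
import numpy as np

def fit_porosity_model(attrs, phi):
    """Least-squares multiple regression phi ~ c0 + c1*a1 + c2*a2 + c3*a3.

    attrs : (n_samples, 3) seismic attributes at well locations
    phi   : (n_samples,) log-derived porosity indicator
    """
    A = np.column_stack([np.ones(len(phi)), attrs])
    coef, *_ = np.linalg.lstsq(A, phi, rcond=None)
    return coef

def predict_porosity(coef, attrs):
    """Apply the fitted regression to attributes extracted over the volume."""
    A = np.column_stack([np.ones(len(attrs)), attrs])
    return A @ coef
```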


Geophysics ◽  
2021 ◽  
pp. 1-44
Author(s):  
Eduardo Silva ◽  
Jessé Costa ◽  
Jörg Schleicher

Eikonal solvers have found important applications in seismic data processing and inversion, the so-called image-guided methods. To this day, in image-guided applications the solution of the eikonal equation is implemented using partial-differential-equation solvers, such as fast-marching or fast-sweeping methods. We show that, alternatively, one can numerically integrate the dynamic Hamiltonian system defined by the image-guided eikonal equation and reconstruct the solution with image-guided rays. We present interesting applications of image-guided ray tracing to seismic data processing, demonstrating the use of the resulting rays in image-guided interpolation and smoothing, well-log interpolation, image flattening, and residual-moveout picking. Some of these applications make use of properties of the ray-tracing system that are not directly obtained by eikonal solvers, such as ray position, ray density, wavefront curvature, and ray curvature. These ray properties open space for a different set of applications of the image-guided eikonal equation, beyond the original motivation of accelerating the construction of minimum-distance tables. We stress that image-guided ray tracing is an embarrassingly parallel problem, which makes its implementation highly efficient on massively parallel platforms. Image-guided ray tracing is advantageous for most applications involving the tracking of seismic events and image-guided interpolation. Our numerical experiments using synthetic and real data sets show the efficiency and robustness of image-guided rays for the selected applications.
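
For the ordinary isotropic eikonal equation |grad T| = s(x), the dynamic Hamiltonian system mentioned above takes a simple form that can be integrated directly; the image-guided version modifies the metric using the image and is not reproduced here. A hedged Euler-step sketch, where the callable slowness and gradient functions are assumptions for illustration:

```python
import numpy as np

def trace_ray(x0, p0, slowness, grad_s, ds=1.0, nsteps=500):
    """Integrate the eikonal ray (Hamiltonian) system
        dx/dsigma = p,   dp/dsigma = s * grad(s),   dT/dsigma = s**2,
    derived from H(x, p) = (|p|**2 - s(x)**2) / 2, with Euler steps.

    x0, p0   : initial position and slowness vector (|p0| = s(x0))
    slowness : callable s(x)
    grad_s   : callable grad s(x)
    """
    x, p, T = np.array(x0, float), np.array(p0, float), 0.0
    path = [x.copy()]
    for _ in range(nsteps):
        s, g = slowness(x), grad_s(x)
        x = x + ds * p            # dx/dsigma = dH/dp = p
        p = p + ds * s * g        # dp/dsigma = -dH/dx = s * grad(s)
        T = T + ds * s * s        # traveltime along the ray
        path.append(x.copy())
    return np.array(path), T
```

Ray position, local ray density, and curvature can be read off the returned path, which is exactly the kind of by-product the abstract notes is unavailable from grid-based eikonal solvers.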


2020 ◽  
Vol 224 (3) ◽  
pp. 1705-1723
Author(s):  
A Lois ◽  
F Kopsaftopoulos ◽  
D Giannopoulos ◽  
K Polychronopoulou ◽  
N Martakis

In this paper, we propose a two-step procedure for the automated detection of micro-earthquakes using single-station, three-component passive seismic data. The first step consists of the computation of an appropriate characteristic function, along with an energy-based thresholding scheme, in order to attain an initial discrimination of the seismic noise from the ‘useful’ information. The three-component data matrix is factorized via the singular value decomposition by means of a properly selected moving window, and for each step of the windowing procedure a diagonal matrix containing the estimated singular values is formed. The ${L_2}$-norm of the singular values resulting from this windowing process defines the time series that serves as the characteristic function. The extraction of the seismic signals from the initial record is achieved by a histogram-based thresholding scheme: the histogram of the characteristic function, which constitutes its empirical probability density function, is estimated, and the optimum threshold is chosen as the bin that separates the histogram into two areas delineating the background noise and the outliers. Since detection algorithms often suffer from false alarms, which increase in extremely noisy environments, as a second stage we propose a new ‘decision-making’ scenario, applied to the extracted intervals, for the purpose of decreasing the probability of false alarms. In this context, we propose a methodology based on comparing autoregressive models estimated on isolated seismic noise with models estimated on the detections resulting from the first stage. The performance and efficiency of the proposed technique are supported by its application to a series of experiments based on both synthetic and real data sets. In particular, we investigate the effectiveness of the characteristic function and the thresholding scheme by subjecting them to noise-robustness tests using synthetic seismic noise with different statistical characteristics and at noise levels varying from 5 down to –5 dB. Results are compared with those obtained by applying a three-component version of the well-known STA/LTA algorithm to the same data set. Moreover, the proposed technique, and in particular its potential to distinguish seismic noise from useful information through the decision-making scheme, is evaluated by application to real data sets acquired by three-component short-period recorders installed to monitor microseismic activity in areas characterized by different noise attributes.
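
The first-stage characteristic function and a simple reading of the histogram-based threshold can be sketched directly from the description above. Note that the L2 norm of the singular values equals the Frobenius norm of the windowed matrix; the "first empty bin" rule below is one plausible interpretation of the thresholding scheme, not necessarily the authors' exact rule.

```python
import numpy as np

def svd_characteristic_function(data3c, nwin=100, step=1):
    """Slide a window over the three-component record, take the SVD of
    the 3 x nwin data matrix, and keep the L2 norm of the singular values.

    data3c : (3, n_samples) array, e.g. [Z, N, E] components
    """
    n = data3c.shape[1]
    cf = np.zeros((n - nwin) // step + 1)
    for k, i in enumerate(range(0, n - nwin + 1, step)):
        sv = np.linalg.svd(data3c[:, i:i + nwin], compute_uv=False)
        cf[k] = np.linalg.norm(sv)   # L2 norm of the singular values
    return cf

def histogram_threshold(cf, nbins=100):
    """Pick the threshold at the first empty histogram bin separating the
    dense noise mode from the outlier (event) tail -- an assumed reading
    of the paper's histogram-based scheme."""
    counts, edges = np.histogram(cf, bins=nbins)
    gaps = np.where(counts == 0)[0]
    return edges[gaps[0]] if gaps.size else edges[-1]
```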


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB113-WB120 ◽  
Author(s):  
Sheng Xu ◽  
Yu Zhang ◽  
Gilles Lambaré

Wide-azimuth seismic data sets are generally acquired more sparsely than narrow-azimuth ones. This brings new challenges to seismic data regularization algorithms, which aim to reconstruct seismic data on regularly sampled acquisition geometries from data recorded on irregularly sampled geometries. A Fourier-based seismic data regularization algorithm first estimates the spatial frequency content on the irregularly sampled input grid and then reconstructs the seismic data on any desired grid. Three main difficulties arise in this process: the “spectral leakage” problem, the accurate estimation of Fourier components, and the need for an effective antialiasing scheme inside the algorithm. The antileakage Fourier transform algorithm overcomes the spectral leakage problem and handles aliased data. To generalize it to higher dimensions, we propose an area weighting scheme to accurately estimate the Fourier components. However, the computational cost increases dramatically with the number of sampling dimensions. A windowed Fourier transform reduces the computational cost in high-dimensional applications but causes undersampling in the wavenumber domain and introduces artifacts known as the Gibbs phenomenon. As a solution, we propose a wavenumber-domain oversampling inversion scheme. The robustness and effectiveness of the proposed algorithm are demonstrated with applications to both synthetic and real data examples.
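
A 1D version of the antileakage Fourier transform conveys the core idea: greedily estimate the strongest Fourier component of the irregularly sampled data, subtract its contribution, and iterate, which suppresses spectral leakage. The paper's contributions (area weighting in higher dimensions and wavenumber-domain oversampling) are not shown, and the uniform 1/N weights below are a simplification of that weighting.

```python
import numpy as np

def alft_1d(x, d, kgrid, niter=50):
    """Antileakage Fourier transform sketch in 1D.

    x     : irregular sample coordinates
    d     : data values at x
    kgrid : candidate wavenumbers
    """
    res = d.astype(complex).copy()
    coef = np.zeros(len(kgrid), complex)
    n = len(x)
    for _ in range(niter):
        # DFT of the current residual evaluated on the irregular grid
        spec = np.exp(-2j * np.pi * np.outer(kgrid, x)) @ res / n
        k = np.argmax(np.abs(spec))          # strongest component
        coef[k] += spec[k]
        # Remove its contribution at the sample points (antileakage step)
        res -= spec[k] * np.exp(2j * np.pi * kgrid[k] * x)
    return coef

def reconstruct(coef, kgrid, xnew):
    """Evaluate the estimated spectrum on a regular output grid."""
    return np.exp(2j * np.pi * np.outer(xnew, kgrid)) @ coef
```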

