Deep denoising autoencoder for seismic random noise attenuation

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. V367-V376 ◽  
Author(s):  
Omar M. Saad ◽  
Yangkang Chen

Attenuation of seismic random noise is an important processing step for enhancing the signal-to-noise ratio of seismic data. A new approach is proposed to attenuate random noise based on a deep denoising autoencoder (DDAE). In this approach, the time-series seismic data are used as the input to the DDAE. The DDAE encodes the input seismic data into multiple levels of abstraction and then decodes those levels to reconstruct the seismic signal without noise. The DDAE is pretrained in a supervised way using synthetic data; the pretrained model is then used to denoise the field data set in an unsupervised scheme using a new customized loss function. We have assessed the proposed algorithm on four synthetic data sets and two field examples, comparing the results with several benchmark algorithms, such as f-x deconvolution (f-x deconv) and f-x singular spectrum analysis (f-x SSA). Our algorithm succeeds in attenuating the random noise effectively.
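The supervised pretraining idea can be sketched numerically. The toy below trains a single-hidden-layer *linear* autoencoder on noisy/clean trace pairs; the published DDAE is deeper and nonlinear, and all sizes, signals, and the learning rate here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "traces": clean sine bursts plus Gaussian random noise.
n, d, h = 200, 64, 16                       # samples, trace length, code size
t = np.linspace(0, 1, d)
clean = np.sin(2 * np.pi * rng.uniform(2, 5, (n, 1)) * t)
noisy = clean + 0.3 * rng.standard_normal((n, d))

W_enc = 0.1 * rng.standard_normal((d, h))   # encoder: trace -> abstraction
W_dec = 0.1 * rng.standard_normal((h, d))   # decoder: abstraction -> trace
lr = 1e-2

losses = []
for _ in range(300):
    code = noisy @ W_enc                    # encode noisy input
    recon = code @ W_dec                    # decode to a denoised trace
    err = recon - clean                     # supervised pretraining target
    losses.append(float(np.mean(err ** 2)))
    # Full-batch gradient descent on the mean-squared denoising loss.
    gW_dec = code.T @ err / n
    gW_enc = noisy.T @ (err @ W_dec.T) / n
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

print(losses[0], losses[-1])                # loss decreases during training
```

The paper's unsupervised fine-tuning stage would replace the clean target with a customized loss evaluated on field data; that step is not sketched here.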

Geophysics ◽  
1989 ◽  
Vol 54 (2) ◽  
pp. 181-190 ◽  
Author(s):  
Jakob B. U. Haldorsen ◽  
Paul A. Farmer

Occasionally, seismic data contain transient noise that can range from being a nuisance to becoming intolerable when several seismic vessels try simultaneously to collect data in an area. The traditional approach to solving this problem has been to allocate time slots to the different acquisition crews; the procedure, although effective, is very expensive. In this paper a statistical method called “trimmed mean stack” is evaluated as a tool for reducing the detrimental effects of noise from interfering seismic crews. Synthetic data, as well as field data, are used to illustrate the efficacy of the technique. Although a conventional stack gives a marginally better signal‐to‐noise ratio (S/N) for data without interference noise, typical usage of the trimmed mean stack gives a reduced S/N equivalent to a fold reduction of about 1 or 2 percent. On the other hand, for a data set containing high‐energy transient noise, trimming produces stacked sections without visible high‐amplitude contaminating energy. Equivalent sections produced with conventional processing techniques would be totally unacceptable. The application of a trimming procedure could mean a significant reduction in the costs of data acquisition by allowing several seismic crews to work simultaneously.
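The trimming operation itself is simple to state: per time sample, rank the amplitudes across the fold, discard a fraction from each extreme, and average the rest. A minimal sketch (array shapes and names are illustrative):

```python
import numpy as np

def trimmed_mean_stack(gather, trim=0.1):
    """Stack a CMP gather of shape (fold, nsamples) with trimming.

    trim is the fraction of traces discarded from EACH end of the
    ranked amplitudes at every time sample before averaging.
    """
    fold, _ = gather.shape
    k = int(np.floor(trim * fold))
    ordered = np.sort(gather, axis=0)       # rank amplitudes per sample
    kept = ordered[k:fold - k]              # drop k largest and k smallest
    return kept.mean(axis=0)

# A high-energy transient on one trace barely moves the trimmed stack,
# while the conventional stack is pulled far off.
gather = np.zeros((10, 5))
gather[3, 2] = 100.0                        # interference burst
conv = gather.mean(axis=0)
trim = trimmed_mean_stack(gather, trim=0.1)
print(conv[2], trim[2])                     # 10.0 vs 0.0
```

This illustrates the trade described in the abstract: on clean data the trim discards valid samples (a small S/N penalty), but on data with high-amplitude interference it removes the contaminating energy entirely.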


2016 ◽  
Vol 4 (4) ◽  
pp. T521-T531 ◽  
Author(s):  
Andrea Zerilli ◽  
Marco P. Buonora ◽  
Paulo T. L. Menezes ◽  
Tiziano Labruzzo ◽  
Adriano J. A. Marçal ◽  
...  

Salt basins, mainly Tertiary basins with mobilized salt, are notoriously difficult places to explore because of the traditionally poor seismic images typically obtained around and below salt bodies. In areas where the salt structures are extremely complex, the seismic signal-to-noise ratio may still be limited, complicating the estimation of the velocity field variations needed to migrate the seismic data correctly and recover an image suitable for prospect generation. We have evaluated the results of an integrated seismic-electromagnetic (EM) two-step interpretation workflow applied to a broadband marine controlled-source EM (mCSEM) research survey acquired over a selected ultra-deepwater area of Espirito Santo Basin, Brazil. The presence of shallow allochthonous salt structures makes around-salt and subsalt seismic depth imaging remarkably challenging. To illustrate the proposed workflow, we concentrated on a subdomain of the mCSEM data set in which a shallow allochthonous salt body had been interpreted previously. In the first step, we applied a 3D pixel-based inversion to the mCSEM data to recover a first estimate of the geometry and resistivity of the salt body, as well as the background resistivity. As a starting model, we used a resistivity mesh derived from seismic interpretation and resistivity information provided by available nearby wells. We then applied a structure-based inversion to the mCSEM data, using the model retrieved in the first step as input. The goal of this second inversion was to recover the base-of-salt interface; the top of the salt and the background resistivities remained fixed throughout the process. As a result, we were able to better define the base of the allochthonous salt body, which was reinterpreted approximately 300–700 m shallower than in the narrow-azimuth seismic interpretation.


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. V351-V368 ◽  
Author(s):  
Xiaojing Wang ◽  
Bihan Wen ◽  
Jianwei Ma

Weak signal preservation is critical in seismic data denoising, especially in deep seismic exploration. Weak signals are hard to separate from random noise because they are less compressible or sparsifiable, although they are usually important for seismic data analysis. Conventional sparse coding models exploit local sparsity by learning a union of bases, but they do not take into account any prior information about the internal correlation of patches. Motivated by the observation that data patches within a group are expected to share the same sparsity pattern in the transform domain, so-called group sparsity, we have developed a novel transform learning with group sparsity (TLGS) method that jointly exploits local sparsity and internal patch self-similarity. Furthermore, for weak signal preservation, we extended the TLGS method and developed transform learning with external reference. External clean or denoised patches are applied as anchored references, which are grouped together with similar corrupted patches and jointly modeled under a sparse transform that is adaptively learned. This is achieved by jointly learning a subset of the transform for each group of data. Our method achieves better denoising performance than existing denoising methods in terms of signal-to-noise ratio values and visual preservation of weak signals. Comparisons of experimental results on one synthetic data set and three field data sets with the f-x deconvolution method and the data-driven tight frame method are also provided.
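The group-sparsity idea — similar patches forced to share one sparsity pattern in a transform domain — can be sketched with a *fixed* orthonormal transform (a DCT here) in place of the adaptively learned transform of TLGS; everything below is an illustrative stand-in, not the paper's method.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis functions)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    T = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    T[0] /= np.sqrt(2.0)
    return T

def group_sparse_denoise(patches, keep):
    """Denoise a group of similar patches with one shared support."""
    T = dct_matrix(patches.shape[1])
    C = patches @ T.T                        # transform each patch
    energy = np.sum(C ** 2, axis=0)          # pool energy across the group
    support = np.argsort(energy)[-keep:]     # one shared sparsity pattern
    mask = np.zeros(C.shape[1])
    mask[support] = 1.0
    return (C * mask) @ T                    # invert orthonormal transform

rng = np.random.default_rng(3)
t = np.arange(16)
group = np.stack([np.cos(np.pi * (2 * t + 1) * 2 / 32)] * 8)  # similar patches
noisy = group + 0.2 * rng.standard_normal(group.shape)
den = group_sparse_denoise(noisy, keep=2)
print(np.mean((den - group) ** 2) < np.mean((noisy - group) ** 2))
```

Pooling coefficient energy across the group is what lets a weak but repeated feature survive a threshold that would kill it in any single patch.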


Geophysics ◽  
2015 ◽  
Vol 80 (1) ◽  
pp. V1-V11 ◽  
Author(s):  
Ke Chen ◽  
Mauricio D. Sacchi

Singular spectrum analysis (SSA), or Cadzow reduced-rank filtering, is an efficient method for random noise attenuation. SSA starts by embedding the seismic data into a Hankel matrix. Rank reduction of this Hankel matrix followed by antidiagonal averaging is used to estimate an enhanced seismic signal. Rank reduction is often implemented via the singular value decomposition (SVD). The SVD is a nonrobust matrix factorization technique that leads to suboptimal results when the seismic data are contaminated by erratic noise. The term erratic noise designates non-Gaussian noise consisting of large isolated events with known or unknown distribution. We adopted a robust low-rank factorization that permits use of the SSA filter when the data are contaminated by erratic noise. In our robust SSA method, we replaced the quadratic error criterion that yields the truncated SVD solution with a bisquare function. The Hankel matrix is then approximated by the product of two lower dimensional factor matrices, and the iteratively reweighted least-squares method is used to approximately solve for the optimal robust factorization. Our algorithm was tested with synthetic and real data. In our synthetic examples, the data were contaminated with band-limited Gaussian noise and erratic noise. Denoising was then carried out by means of f-x deconvolution, the classical SSA method, and the proposed robust SSA method. The f-x deconvolution and classical SSA methods failed to properly eliminate the noise and preserve the desired signal, whereas the robust SSA method was immune to erratic noise and able to preserve the desired signal. We also tested the robust SSA method with a data set from the Western Canadian Sedimentary Basin. The results with this data set revealed improved denoising performance in portions of the data contaminated with erratic noise.
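The classical (non-robust) SSA pipeline described above — Hankel embedding, truncated SVD, antidiagonal averaging — fits in a few lines. This sketch works on a single 1D vector; the paper's robust variant replaces the SVD with an IRLS-based bisquare factorization, which is not shown here.

```python
import numpy as np

def ssa_denoise(x, rank, L=None):
    """Classical SSA for a 1D signal: embed, rank-reduce, average."""
    N = len(x)
    L = L or N // 2 + 1
    K = N - L + 1
    H = np.array([x[i:i + K] for i in range(L)])      # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # rank reduction
    out = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(L):                                # antidiagonal averaging
        out[i:i + K] += Hr[i]
        cnt[i:i + K] += 1
    return out / cnt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 128)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
den = ssa_denoise(noisy, rank=2)        # one sinusoid -> Hankel rank 2
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The rank is the key user parameter: a single sinusoid occupies rank 2 of the Hankel matrix, and each additional linear event or harmonic raises the rank needed to preserve it.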


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the computation of the Hessian, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
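The weighted, damped least-squares machinery can be illustrated with a toy 1D problem: recover a regularly sampled field m from irregular observations d = Gm + n. The operators below are illustrative stand-ins (a point-sampling matrix for G and a second-difference regularizer in place of simple damping, so unrecorded positions are interpolated); the paper's G is a wavefield extrapolator.

```python
import numpy as np

rng = np.random.default_rng(2)
nm = 64
t = np.arange(nm)
m_true = np.sin(2 * np.pi * t / 16.0)               # "true" regular wavefield

keep = np.sort(rng.choice(nm, size=40, replace=False))  # irregular array
G = np.zeros((keep.size, nm))
G[np.arange(keep.size), keep] = 1.0                 # sampling operator
d = G @ m_true + 0.05 * rng.standard_normal(keep.size)

W = np.eye(keep.size)                  # data weights (all equal here)
lam = 0.1                              # damping strength
D = np.diff(np.eye(nm), n=2, axis=0)   # second-difference regularizer
A = G.T @ W @ G + lam * (D.T @ D)      # damped, weighted "Hessian"
m_est = np.linalg.solve(A, G.T @ W @ d)
print(np.mean((m_est - m_true) ** 2) < np.mean(m_true ** 2))
```

The paper's approximation corresponds to keeping only a few diagonals of A rather than forming and solving with the full matrix; here A is tiny, so the exact solve is used.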


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. Quantifying the uncertainty in these estimates is necessary if information about pressure- and saturation-related changes is to be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF), providing estimates of the uncertainties as well as direct estimates of the properties. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model, and PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each variable. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
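A linear-Gaussian toy version makes the "solution as a PDF" point concrete: with a Gaussian prior and likelihood, the posterior over (pressure change, saturation change) is Gaussian, so its covariance quantifies uncertainty directly. The sensitivity matrix G and all covariances below are illustrative numbers, not the paper's rock-physics model.

```python
import numpy as np

G = np.array([[0.8, 0.3],      # near-offset reflectivity-difference sensitivity
              [0.5, 0.9]])     # far-offset reflectivity-difference sensitivity
C_m = np.diag([1.0, 1.0])      # prior covariance of (dP, dS)
C_d = np.diag([0.05, 0.05])    # data noise covariance
m_true = np.array([0.4, -0.6]) # "true" pressure and saturation changes
d = G @ m_true                 # noise-free data for the sketch

# Posterior of a linear-Gaussian model: N(m_post, C_post).
C_post = np.linalg.inv(G.T @ np.linalg.inv(C_d) @ G + np.linalg.inv(C_m))
m_post = C_post @ (G.T @ np.linalg.inv(C_d) @ d)
print(m_post, np.sqrt(np.diag(C_post)))    # estimates with 1-sigma uncertainties
```

Off-diagonal structure in C_post is the toy analogue of the "bottleneck" diagnosis in the abstract: strongly correlated posterior components signal which combinations of pressure and saturation change the data cannot separate.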


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data is processed independently using complex trace analysis to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces by their envelopes and instantaneous phases, obtained via the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope while preserving the phase information. Several tests are performed to investigate the behavior of the method for resolution improvement and noise suppression. Applications to both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets; hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. Bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance, allowing easier data interpretation. We recommend applying this simple signal processing for signal enhancement prior to interpretation, especially for single-channel and low-fold seismic data.
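The complex-trace decomposition the method starts from is the analytic signal: the envelope is its magnitude and the instantaneous phase its angle. A self-contained FFT-based construction (equivalent to `scipy.signal.hilbert`) is sketched below; the signal is illustrative.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal of a real 1D trace."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0        # double the positive frequencies
    if N % 2 == 0:
        h[N // 2] = 1.0            # keep the Nyquist bin as-is
    return np.fft.ifft(X * h)      # negative frequencies are zeroed

t = np.linspace(0, 1, 256, endpoint=False)
trace = np.cos(2 * np.pi * 8 * t)  # 8 full cycles -> exactly periodic
z = analytic_signal(trace)
envelope = np.abs(z)               # instantaneous amplitude
phase = np.angle(z)                # instantaneous phase
print(np.allclose(envelope, 1.0, atol=1e-6))
```

For this pure cosine the envelope is constant and the unwrapped phase increases linearly, which is the separation of amplitude and phase information that the method then filters independently.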


2019 ◽  
Vol 217 (3) ◽  
pp. 1727-1741 ◽  
Author(s):  
D W Vasco ◽  
Seiji Nakagawa ◽  
Petr Petrov ◽  
Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location, so the mean or median value at the source location approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is provided by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by additive random noise containing a significant number of outliers and by velocity model errors. The waveform-based method is found to outperform one based upon the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing techniques.
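The location step — shift each station's stored traveltime field by its pick and find where the shifted fields agree — can be shown on a toy 2D grid. A homogeneous velocity stands in for the full-waveform-derived traveltime fields; all geometry and numbers are illustrative.

```python
import numpy as np

v = 2.0                                        # km/s, assumed constant
grid = np.stack(np.meshgrid(np.arange(0, 10.0, 0.5),
                            np.arange(0, 10.0, 0.5),
                            indexing="ij"), axis=-1)   # candidate sources
stations = np.array([[0.0, 0.0], [9.0, 1.0], [2.0, 9.0], [8.0, 8.0]])
src, t0 = np.array([4.0, 6.0]), 1.5            # true event and origin time

# Stored traveltime field from each station to every grid point
# (reciprocity: one simulation per station, not per candidate source).
tt = np.linalg.norm(grid[None] - stations[:, None, None], axis=-1) / v
picks = np.linalg.norm(stations - src, axis=-1) / v + t0   # arrival times

shifted = picks[:, None, None] - tt            # time-reversed traveltime fields
spread = shifted.std(axis=0)                   # fields agree only at the source
i, j = np.unravel_index(np.argmin(spread), spread.shape)
print(grid[i, j], shifted.mean(axis=0)[i, j])  # location and origin time
```

The minimum-spread point recovers the source, and the mean of the shifted fields there recovers the origin time; contouring `spread` around the minimum gives the uncertainty map described in the abstract.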


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. R199-R217 ◽  
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth's Q-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse Q filtering and generates superior Q-compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and will treat the multiples as interference convolved with incorrect Q values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI, via the feedback model, to nonstationary seismic data with surface-related multiples. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse Q filtering) extended, but multiples are also taken into account. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input Q values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider Q-filtering effects explicitly. Nevertheless, there are benefits to NSRI considering multiples: the periodicity and amplitude of the multiples constrain the position of the reflectivities and the amplitude of the wavelet, helping to overcome the scaling and shifting ambiguities of conventional formulations in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. V71-V80 ◽  
Author(s):  
Xiong Ma ◽  
Guofa Li ◽  
Hao Li ◽  
Wuyang Yang

Seismic absorption compensation is an important processing approach to mitigate the attenuation effects caused by the intrinsic inelasticity of subsurface media and to enhance seismic resolution. However, conventional absorption compensation approaches ignore the spatial connection along seismic traces, which makes the compensation result vulnerable to high-frequency noise amplification and thus reduces its signal-to-noise ratio (S/N). To alleviate this issue, we have developed a structurally constrained multichannel absorption compensation (SC-MAC) algorithm. In the cost function of this algorithm, we exploit an L1 norm to constrain the reflectivity series and an L2 norm to regularize the reflection structural characteristic of the compensated data. The reflection structural characteristic operator, extracted from the observed stacked seismic data, is the core of the structural regularization term. We then solve the cost function of SC-MAC by the alternating direction method of multipliers. Benefiting from the introduction of the reflection structure constraint, SC-MAC improves the stability of the compensation result and inhibits the amplification of high-frequency noise. Synthetic and field data examples demonstrate that our proposed method is more robust to random noise and can not only improve the resolution of seismic data but also maintain the S/N of the compensated data.
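The solver named in the abstract, the alternating direction method of multipliers (ADMM), is worth seeing on a stripped-down problem. The sketch below applies generic ADMM to an L1-regularized least-squares fit — a stand-in for the SC-MAC cost, which adds the L2 structural term and uses attenuated-wavelet physics in the forward operator; the operator A, sizes, and parameters are illustrative.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min_x ||Ax - b||^2 + lam * ||x||_1 (scaled dual form)."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    Q = np.linalg.inv(AtA + rho * np.eye(n))   # x-update solve, cached
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))          # quadratic subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                          # scaled dual update
    return z

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 40))              # illustrative forward operator
x_true = np.zeros(40)
x_true[[5, 17, 30]] = [1.0, -2.0, 1.5]         # sparse reflectivity-like spikes
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = admm_lasso(A, b, lam=0.5)
print(np.sum(np.abs(x_hat) > 0.5))             # recovers the sparse spikes
```

The soft-thresholding z-update is what the L1 reflectivity constraint contributes; in SC-MAC an additional quadratic term from the structural regularizer would enter the x-update.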

