Suppression of high‐energy noise using an alternative stacking procedure

Geophysics ◽  
1989 ◽  
Vol 54 (2) ◽  
pp. 181-190 ◽  
Author(s):  
Jakob B. U. Haldorsen ◽  
Paul A. Farmer

Occasionally, seismic data contain transient noise that can range from being a nuisance to becoming intolerable when several seismic vessels try simultaneously to collect data in an area. The traditional approach to solving this problem has been to allocate time slots to the different acquisition crews; the procedure, although effective, is very expensive. In this paper a statistical method called “trimmed mean stack” is evaluated as a tool for reducing the detrimental effects of noise from interfering seismic crews. Synthetic data, as well as field data, are used to illustrate the efficacy of the technique. Although a conventional stack gives a marginally better signal‐to‐noise ratio (S/N) for data without interference noise, typical usage of the trimmed mean stack gives a reduced S/N equivalent to a fold reduction of about 1 or 2 percent. On the other hand, for a data set containing high‐energy transient noise, trimming produces stacked sections without visible high‐amplitude contaminating energy. Equivalent sections produced with conventional processing techniques would be totally unacceptable. The application of a trimming procedure could mean a significant reduction in the costs of data acquisition by allowing several seismic crews to work simultaneously.
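A minimal sketch of the trimmed-mean stacking idea described above (an illustration, not the authors' implementation; function and parameter names are made up): at each time sample the amplitudes across the fold are sorted, a fixed fraction is discarded from each end, and the remainder is averaged.

```python
import numpy as np

def trimmed_mean_stack(gather, trim_fraction=0.1):
    """Stack a CMP gather (n_traces x n_samples) with a trimmed mean.

    At each time sample, the lowest and highest `trim_fraction` of the
    amplitudes across the fold are discarded before averaging, which
    suppresses high-energy transient noise such as interference from
    other seismic crews. trim_fraction=0 reduces to a conventional stack.
    """
    gather = np.asarray(gather, dtype=float)
    n_traces = gather.shape[0]
    k = int(np.floor(trim_fraction * n_traces))
    sorted_amps = np.sort(gather, axis=0)         # sort across the fold
    kept = sorted_amps[k:n_traces - k, :]         # drop k extremes at each end
    return kept.mean(axis=0)

# Example: 24-fold gather with one trace hit by a high-amplitude burst
rng = np.random.default_rng(0)
gather = rng.normal(scale=0.1, size=(24, 500))
gather[:, 250] += 1.0                # common reflection event
gather[5, 240:260] += 20.0           # interference burst on one trace
stack = trimmed_mean_stack(gather, trim_fraction=0.1)
```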

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. V367-V376 ◽  
Author(s):  
Omar M. Saad ◽  
Yangkang Chen

Attenuation of seismic random noise is considered an important processing step to enhance the signal-to-noise ratio of seismic data. A new approach is proposed to attenuate random noise based on a deep denoising autoencoder (DDAE). In this approach, the time-series seismic data are used as input for the DDAE. The DDAE encodes the input seismic data into multiple levels of abstraction and then decodes those levels to reconstruct the seismic signal without noise. The DDAE is pretrained in a supervised way using synthetic data; following this, the pretrained model is used to denoise the field data set in an unsupervised scheme using a new customized loss function. We have assessed the proposed algorithm on four synthetic data sets and two field examples, and we compare the results with several benchmark algorithms, such as f-x deconvolution (f-x deconv) and f-x singular spectrum analysis (f-x SSA). As a result, our algorithm succeeds in attenuating the random noise in an effective manner.
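A minimal Keras-style sketch of a denoising autoencoder along these lines; the layer sizes, window length, and training setup are assumptions rather than the authors' architecture, and only the supervised pretraining stage (not the unsupervised fine-tuning with the customized loss) is shown.

```python
import numpy as np
import tensorflow as tf

# Toy training pairs: noisy windows of synthetic traces -> clean windows
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 128)
clean = np.sin(2 * np.pi * 25 * t[None, :] * rng.uniform(0.5, 1.5, (2000, 1)))
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Encoder compresses each 128-sample window; decoder reconstructs the signal
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),   # bottleneck abstraction
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(128, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(noisy, clean, epochs=5, batch_size=64, verbose=0)  # supervised pretraining

denoised = model.predict(noisy[:10])  # apply the trained model to noisy windows
```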


2021 ◽  
Vol 11 (2) ◽  
pp. 790
Author(s):  
Pablo Venegas ◽  
Rubén Usamentiaga ◽  
Juan Perán ◽  
Idurre Sáez de Ocáriz

Infrared thermography is a widely used technology that has been successfully applied in many and varied applications, including its use as a non-destructive testing tool to assess the integrity of materials. This application is at a high level of development and its effectiveness has been widely verified: application protocols and methodologies exist that have demonstrated a high capacity to extract relevant information from the captured thermal signals and to guarantee the detection of anomalies in the inspected materials. However, there is still room for improvement in certain aspects, such as increasing the detection capacity and defining a detailed procedure for characterizing indications, which must be investigated further to reduce uncertainties and optimize this technology. In this work, an innovative thermographic data analysis methodology is proposed that extracts a greater amount of information from the recorded sequences by applying advanced processing techniques to the results. The extracted information is synthesized into three channels that can be represented as color images and processed with quaternion algebra techniques to improve the detection level and facilitate the classification of defects. To validate the proposed methodology, synthetic data and actual experimental sequences have been analyzed. Seven different definitions of signal-to-noise ratio (SNR) have been used to assess the increase in detection capacity, and a generalized application procedure has been proposed to extend their use to color images. The results verify the capacity of this methodology, showing significant increases in SNR compared with conventional processing techniques in thermographic NDT.
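As a rough illustration of one common SNR definition used in thermographic NDT (not necessarily one of the seven definitions evaluated in the paper), with the defect and sound-area masks assumed to be given:

```python
import numpy as np

def thermographic_snr_db(image, defect_mask, sound_mask):
    """SNR in dB between a defect region and a sound (defect-free) region.

    One common form: 20*log10(|mean(defect) - mean(sound)| / std(sound)).
    For a three-channel (color) result image, the same formula is applied
    per channel and the channel values combined, here by taking the maximum.
    """
    image = np.asarray(image, dtype=float)
    if image.ndim == 2:
        image = image[..., None]
    snrs = []
    for c in range(image.shape[-1]):
        chan = image[..., c]
        contrast = abs(chan[defect_mask].mean() - chan[sound_mask].mean())
        noise = chan[sound_mask].std()
        snrs.append(20 * np.log10(contrast / noise))
    return max(snrs)

# Example with a synthetic 64x64 frame containing a warm defect patch
frame = np.random.default_rng(5).normal(20.0, 0.5, size=(64, 64))
frame[20:30, 20:30] += 3.0
defect = np.zeros((64, 64), bool); defect[20:30, 20:30] = True
sound = np.zeros((64, 64), bool); sound[40:60, 40:60] = True
print(round(thermographic_snr_db(frame, defect, sound), 1))
```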


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing only a limited number of diagonals of the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
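A minimal numpy sketch of the weighted, damped least-squares step that underlies this kind of inversion; the operator below is a random stand-in for wavefield extrapolation, not the paper's operator, and the full (dense) Hessian is formed rather than the diagonal approximation.

```python
import numpy as np

def damped_least_squares(A, d, W, damping=1e-2):
    """Solve min || W (A m - d) ||^2 + damping * || m ||^2.

    A : forward (extrapolation) operator mapping the regular wavefield m
        to the irregularly sampled data d; W weights the data residuals.
    The normal equations involve the Hessian A^H W^H W A; the paper's
    efficiency gain comes from keeping only a few of its diagonals,
    which is not done in this dense illustration.
    """
    H = A.conj().T @ W.conj().T @ W @ A            # full (costly) Hessian
    rhs = A.conj().T @ W.conj().T @ W @ d
    return np.linalg.solve(H + damping * np.eye(H.shape[0]), rhs)

# Tiny example with 40 irregular receivers and 60 regular output positions
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 60))
m_true = rng.normal(size=60)
d = A @ m_true + 0.01 * rng.normal(size=40)
W = np.eye(40)                     # equal data weights
m_est = damped_least_squares(A, d, W, damping=0.1)
```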


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. Quantifying the uncertainty in these estimates is important if information about pressure- and saturation-related changes is to be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF) that provides estimates of the uncertainties as well as direct estimates of the properties. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
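A schematic of the Bayesian ingredient, assuming a linearized (Gaussian) relationship between time-lapse AVO differences and pressure/saturation changes; the operator G, the covariances, and the numbers below are placeholders, not the paper's rock-physics model.

```python
import numpy as np

def gaussian_posterior(G, d, m_prior, C_m, C_d):
    """Posterior mean and covariance for d = G m + noise with a Gaussian prior.

    m collects pressure and saturation changes, d the PP reflection
    coefficient differences; the posterior PDF provides both estimates
    and their uncertainties, as in the Bayesian formulation described above.
    """
    C_m_inv = np.linalg.inv(C_m)
    C_d_inv = np.linalg.inv(C_d)
    C_post = np.linalg.inv(G.T @ C_d_inv @ G + C_m_inv)
    m_post = C_post @ (G.T @ C_d_inv @ d + C_m_inv @ m_prior)
    return m_post, C_post

# Toy example: two unknowns (dP, dS), three AVO-difference measurements
G = np.array([[0.8, -0.3], [0.5, 0.6], [0.2, 0.9]])
d = np.array([0.05, 0.11, 0.13])
m_post, C_post = gaussian_posterior(G, d,
                                    m_prior=np.zeros(2),
                                    C_m=np.eye(2),
                                    C_d=0.01 * np.eye(3))
```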


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. R199-R217 ◽  
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth's Q-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse Q filtering and generates superior Q compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step, because NSRI cannot distinguish primaries from multiples and will treat them as interference convolved with incorrect Q values. However, multiples contain information about subsurface properties. To use the information carried by multiples, with the feedback model and NSRI theory, we adapt NSRI to the context of nonstationary seismic data with surface-related multiples. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse Q filtering) extended, but multiples are also taken into account. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input Q values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider Q-filtering effects explicitly. However, there are benefits to NSRI considering multiples: the periodicity and amplitude of the multiples imply the positions of the reflectivities and the amplitude of the wavelet. Multiples assist in overcoming the scaling and shifting ambiguities of conventional problems in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support the aforementioned findings and reveal the stability, capabilities, and limitations of the proposed method.
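A rough sketch of the sparse-reflectivity idea: a generic soft-thresholding (ISTA) solver applied to a convolutional model whose wavelet loses high frequencies with traveltime, as a crude stand-in for Q filtering. This is an illustration, not the authors' NSRI algorithm, and surface-related multiples are not modeled.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

def nonstationary_matrix(f0, Q, dt, n):
    """Columns are wavelets whose dominant frequency decays with traveltime,
    a crude stand-in for the earth's Q-filtering effect."""
    G = np.zeros((n, n))
    for j in range(n):
        f = f0 * np.exp(-np.pi * f0 * j * dt / Q)     # attenuated frequency
        w = ricker(max(f, 5.0), dt)
        start = j - len(w) // 2
        i0, i1 = max(0, start), min(n, start + len(w))
        G[i0:i1, j] = w[i0 - start:i1 - start]
    return G

def ista(G, d, lam=0.05, n_iter=200):
    """Iterative soft thresholding for min 0.5*||G r - d||^2 + lam*||r||_1."""
    L = np.linalg.norm(G, 2) ** 2                      # Lipschitz constant
    r = np.zeros(G.shape[1])
    for _ in range(n_iter):
        r = r + G.T @ (d - G @ r) / L                  # gradient step
        r = np.sign(r) * np.maximum(np.abs(r) - lam / L, 0.0)  # shrinkage
    return r

# Toy usage: sparse reflectivity through an attenuated wavelet, then invert
n, dt = 400, 0.002
r_true = np.zeros(n); r_true[[80, 200, 310]] = [1.0, -0.7, 0.5]
G = nonstationary_matrix(f0=40.0, Q=80.0, dt=dt, n=n)
d = G @ r_true + 0.01 * np.random.default_rng(6).normal(size=n)
r_est = ista(G, d, lam=0.05, n_iter=300)
```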


2019 ◽  
Vol 7 (3) ◽  
pp. T701-T711
Author(s):  
Jianhu Gao ◽  
Bingyang Liu ◽  
Shengjun Li ◽  
Hongqiu Wang

Hydrocarbon detection is one of the most critical steps in geophysical exploration and plays an important role in subsequent hydrocarbon production. However, because of the low signal-to-noise ratio and weak reflection amplitudes of deep seismic data, conventional methods do not always provide favorable hydrocarbon prediction results. The dolomite reservoirs of interest in central Sichuan are buried at an average depth of more than 4500 m, and well-log data show that the dolomite has a low porosity, below approximately 4%. Furthermore, the pore- and fracture-dominated storage system, together with strong lateral and vertical heterogeneity, makes the reservoir distribution difficult to describe. Spectral decomposition (SD) has been successful in illuminating subsurface features and can also be used to identify potential hydrocarbon reservoirs by detecting low-frequency shadows. However, current applications to hydrocarbon detection often suffer from low resolution for thin reservoirs, probably because of the influence of the window function and the lack of a prior constraint. To address this issue, we developed a sparse inverse SD (SISD) based on the wavelet transform, which imposes a sparsity constraint on the time-frequency spectra. We focus on applying sparse spectral attributes derived from SISD to deep marine dolomite hydrocarbon detection in a 3D real seismic data set covering an area of approximately [Formula: see text]. We predict and evaluate gas-bearing zones in two target reservoir segments by analyzing and comparing the spectral amplitude responses of relatively high- and low-frequency components. The predicted results indicate that the most favorable gas-bearing areas are located near the northeast fault zone in the upper reservoir segment and at relatively high structural positions in the lower reservoir segment, in good agreement with the gas-testing results of three wells in the study area.
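A simple window-based spectral decomposition sketch for comparing low- and high-frequency amplitude components of a trace (the low-frequency-shadow idea); the paper's SISD replaces this with a sparsity-constrained inversion of the time-frequency spectrum, which is not reproduced here, and all numbers below are illustrative.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

def spectral_components(trace, dt, freqs):
    """Amplitude of single-frequency components via convolution with Ricker
    wavelets of different dominant frequencies (a conventional, window-based
    spectral decomposition)."""
    return {f: np.abs(np.convolve(trace, ricker(f, dt), mode="same"))
            for f in freqs}

# Toy trace: two reflections convolved with a 30 Hz wavelet plus noise
dt = 0.002
r = np.zeros(1000); r[[300, 600]] = [1.0, -0.8]
trace = np.convolve(r, ricker(30.0, dt), mode="same")
trace += 0.05 * np.random.default_rng(3).normal(size=1000)

comps = spectral_components(trace, dt, freqs=[15.0, 40.0])
low_freq_component = comps[15.0]    # compare against comps[40.0] for shadows
```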


2020 ◽  
Vol 21 (S1) ◽  
Author(s):  
Daniel Ruiz-Perez ◽  
Haibin Guan ◽  
Purnima Madhivanan ◽  
Kalai Mathee ◽  
Giri Narasimhan

Abstract
Background: Partial Least-Squares Discriminant Analysis (PLS-DA) is a popular machine learning tool that is gaining increasing attention as a useful feature selector and classifier. In an effort to understand its strengths and weaknesses, we performed a series of experiments with synthetic data and compared its performance to that of its close relative, Principal Component Analysis (PCA), from which it was originally derived.
Results: We demonstrate that even though PCA ignores the information regarding the class labels of the samples, this unsupervised tool can be remarkably effective as a feature selector. In some cases, it outperforms PLS-DA, which is made aware of the class labels in its input. Our experiments range from examining the signal-to-noise ratio in the feature selection task to considering many practical distributions and models encountered when analyzing bioinformatics and clinical data. Other methods were also evaluated. Finally, we analyzed a data set of 396 vaginal microbiome samples for which the ground truth for feature selection was available. All the 3D figures shown in this paper, as well as the supplementary ones, can be viewed interactively at http://biorg.cs.fiu.edu/plsda
Conclusions: Our results highlight the strengths and weaknesses of PLS-DA in comparison with PCA for different underlying data models.
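A small scikit-learn sketch of the kind of comparison described above, on synthetic two-class data; how features are ranked from the loadings here is an assumption, not the authors' exact protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Synthetic data: 200 samples, 50 features, only the first 5 carry class signal
rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 50))
X[:, :5] += 1.5 * y[:, None]          # informative features

# PCA (unsupervised): rank features by loading magnitude on the first component
pca = PCA(n_components=2).fit(X)
pca_rank = np.argsort(-np.abs(pca.components_[0]))

# PLS-DA (supervised): PLS regression against the class labels
pls = PLSRegression(n_components=2).fit(X, y)
plsda_rank = np.argsort(-np.abs(pls.x_weights_[:, 0]))

print("Top PCA features:   ", pca_rank[:5])
print("Top PLS-DA features:", plsda_rank[:5])
```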


2017 ◽  
Vol 5 (3) ◽  
pp. SJ81-SJ90 ◽  
Author(s):  
Kainan Wang ◽  
Jesse Lomask ◽  
Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We have modified DTW to create a blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints and consequently produces updated velocities that are more realistic than those of other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval velocity model that retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting the interval velocity changes to coincide with the initial input blocking. We demonstrate the application of the BDW technique on a synthetic data example and a field data set.
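A compact dynamic-time-warping sketch for aligning a synthetic trace to a seismic trace; this is plain DTW, and the blocked velocity constraints that distinguish BDW are not implemented here.

```python
import numpy as np

def dtw_path(synthetic, seismic):
    """Classic DTW: accumulate squared misfit, then backtrack the optimal path
    pairing each synthetic sample with a seismic sample (the warp defines the
    time-shift/velocity update in a well tie)."""
    n, m = len(synthetic), len(seismic)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (synthetic[i - 1] - seismic[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to (0, 0)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Example: tie a slightly stretched synthetic to the "seismic" trace
t = np.linspace(0, 1, 200)
seismic = np.sin(2 * np.pi * 5 * t)
synthetic = np.sin(2 * np.pi * 5 * t ** 1.05)     # mild time stretch
path = dtw_path(synthetic, seismic)
```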


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. D27-D36 ◽  
Author(s):  
Andrey Bakulin ◽  
Marta Woodward ◽  
Dave Nichols ◽  
Konstantin Osypov ◽  
Olga Zdraveva

Tilted transverse isotropy (TTI) is increasingly recognized as a more geologically plausible description of anisotropy in sedimentary formations than vertical transverse isotropy (VTI). Although model-building approaches for VTI media are well understood, similar approaches for TTI media are in their infancy, even when the symmetry-axis direction is assumed known. We describe a tomographic approach that builds localized anisotropic models by jointly inverting surface-seismic and well data. We present a synthetic data example of anisotropic tomography applied to a layered TTI model with a symmetry-axis tilt of 45 degrees. We demonstrate three scenarios for constraining the solution. In the first scenario, velocity along the symmetry axis is known and tomography inverts for Thomsen’s [Formula: see text] and [Formula: see text] parameters. In the second scenario, tomography inverts for [Formula: see text], [Formula: see text], and velocity, using surface-seismic data and vertical check-shot traveltimes. In contrast to the VTI case, both these inversions are nonunique. To combat nonuniqueness, in the third scenario, we supplement check-shot and seismic data with the [Formula: see text] profile from an offset well. This allows recovery of the correct profiles for velocity along the symmetry axis and [Formula: see text]. We conclude that TTI is more ambiguous than VTI for model building. Additional well data or rock-physics assumptions may be required to constrain the tomography and arrive at geologically plausible TTI models. Furthermore, we demonstrate that VTI models with atypical Thomsen parameters can also fit the same joint seismic and check-shot data set. In this case, although imaging with VTI models can focus the TTI data and match vertical event depths, it leads to substantial lateral mispositioning of the reflections.
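For context, a small sketch of the standard weak-anisotropy P-wave phase-velocity relation in which Thomsen's parameters appear, rotated by the symmetry-axis tilt; this is a textbook formula used for illustration, not the paper's tomography operator, and the numbers are arbitrary.

```python
import numpy as np

def p_phase_velocity(theta, v0, epsilon, delta, tilt=0.0):
    """Weak-anisotropy P-wave phase velocity for a TI medium.

    v(theta) = v0 * (1 + delta*sin^2(a)*cos^2(a) + epsilon*sin^4(a)),
    where a is the angle between the propagation direction and the
    symmetry axis; for TTI the axis is tilted by `tilt` from vertical.
    """
    a = theta - tilt
    s2, c2 = np.sin(a) ** 2, np.cos(a) ** 2
    return v0 * (1.0 + delta * s2 * c2 + epsilon * s2 ** 2)

# 45-degree tilt, as in the synthetic example described above
angles = np.radians(np.arange(0, 91, 15))
v = p_phase_velocity(angles, v0=3000.0, epsilon=0.2, delta=0.1,
                     tilt=np.radians(45.0))
```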


2015 ◽  
Author(s):  
Baoqing Zhang* ◽  
Huawei Zhou ◽  
Zaiyu Ding ◽  
Ran Li ◽  
Zhaoquan He ◽  
...  
