Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data

Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. W15-W30 ◽  
Author(s):  
Gary F. Margrave ◽  
Michael P. Lamoureux ◽  
David C. Henley

We have extended the method of stationary spiking deconvolution of seismic data to the context of nonstationary signals in which the nonstationarity is due to attenuation processes. As in the stationary case, we have assumed a statistically white reflectivity and a minimum-phase source and attenuation process. This extension is based on a nonstationary convolutional model, which we have developed and related to the stationary convolutional model. To facilitate our method, we have devised a simple numerical approach to calculate the discrete Gabor transform, or complex-valued time-frequency decomposition, of any signal. Although the Fourier transform renders stationary convolution into exact, multiplicative factors, the Gabor transform, or windowed Fourier transform, induces only an approximate factorization of the nonstationary convolutional model. This factorization serves as a guide to develop a smoothing process that, when applied to the Gabor transform of the nonstationary seismic trace, estimates the magnitude of the time-frequency attenuation function and the source wavelet. By assuming that both are minimum-phase processes, their phases can be determined. Gabor deconvolution is accomplished by spectral division in the time-frequency domain. The complex-valued Gabor transform of the seismic trace is divided by the complex-valued estimates of attenuation and source wavelet to estimate the Gabor transform of the reflectivity. An inverse Gabor transform recovers the time-domain reflectivity. The technique is demonstrated on both synthetic and real data.
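A minimal sketch of the spectral-division step is given below, with a short-time Fourier transform standing in for the Gabor transform and a simple 2-D smoothing of the magnitude standing in for the combined wavelet/attenuation estimate. The minimum-phase step described above is omitted, so the division is zero-phase and the sketch is illustrative rather than a reproduction of the authors' algorithm.

```python
# Sketch of Gabor-style deconvolution via stabilized spectral division in a
# time-frequency domain. An STFT stands in for the Gabor transform; smoothing
# |S| over time and frequency stands in for the wavelet x attenuation estimate.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import gaussian_filter

def gabor_decon_sketch(trace, fs, nperseg=128, stab=0.001):
    f, t, S = stft(trace, fs=fs, nperseg=nperseg)      # "Gabor" transform
    mag = np.abs(S)
    smooth = gaussian_filter(mag, sigma=(3, 5))        # smoothed magnitude
    R = S / (smooth + stab * smooth.max())             # stabilized division
    _, r = istft(R, fs=fs, nperseg=nperseg)            # back to time domain
    return r[:len(trace)]

# Example: a crudely nonstationary trace (white reflectivity with decay).
rng = np.random.default_rng(0)
fs, n = 500.0, 2000
trace = rng.standard_normal(n) * np.exp(-np.arange(n) / (0.5 * n))
refl_est = gabor_decon_sketch(trace, fs)
```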

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the expense of computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance compared to conventional datuming.
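A minimal sketch of the damped least-squares formulation and of a diagonal-band approximation of the Hessian is given below. The forward extrapolation operator here is a hypothetical random stand-in rather than a phase-shift extrapolator, so the example only illustrates the structure of the inversion.

```python
# Damped least squares: solve (A^H A + eps I) m = A^H d for a regular model m
# given irregular data d. Keeping only a few diagonals of the Hessian imitates
# the paper's efficiency trick in a crude way.
import numpy as np

def damped_lsq(A, d, eps=1e-2, nband=None):
    H = A.conj().T @ A                        # Hessian
    if nband is not None:                     # keep only 2*nband+1 diagonals
        i, j = np.indices(H.shape)
        H = np.where(np.abs(i - j) <= nband, H, 0.0)
    rhs = A.conj().T @ d
    return np.linalg.solve(H + eps * np.eye(H.shape[0]), rhs)

rng = np.random.default_rng(1)
nrec, nmod = 40, 60
A = np.exp(2j * np.pi * rng.random((nrec, nmod)))   # hypothetical extrapolator
m_true = rng.standard_normal(nmod)
d = A @ m_true
m_full = damped_lsq(A, d)             # full Hessian
m_band = damped_lsq(A, d, nband=5)    # banded: cheaper, but band-limited
# Compare m_full.real and m_band.real against m_true.
```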


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB203-WB210 ◽  
Author(s):  
Gilles Hennenfent ◽  
Lloyd Fenelon ◽  
Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one by the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an [Formula: see text]-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct [Formula: see text]-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction the second-generation NFDCT is lossless, unlike the first-generation NFDCT. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Secondly, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts smoothly varying with limited acquisition imprint.
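The sketch below illustrates the structure of a sparsity-promoting reconstruction from irregular samples, with a DCT standing in for the curvelet transform (curvelets are not available in numpy/scipy) and plain iterative soft thresholding standing in for the solver; it is not the NCRSI implementation itself.

```python
# l1-penalized reconstruction: minimize ||M S' x - d||^2 + lam ||x||_1, where
# S' is an (orthonormal) inverse sparsifying transform and M a sampling mask.
import numpy as np
from scipy.fft import dct, idct

def reconstruct_ista(d, mask, lam=0.05, niter=200):
    """d: samples on the regular grid (zeros where missing); mask: 0/1 array."""
    x = np.zeros_like(d)                       # transform-domain coefficients
    for _ in range(niter):
        resid = mask * idct(x, norm='ortho') - d
        x = x - dct(mask * resid, norm='ortho')            # gradient step (||M S'|| <= 1)
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # soft threshold
    return idct(x, norm='ortho')

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
signal = np.cos(2 * np.pi * 0.03 * t) + 0.5 * np.cos(2 * np.pi * 0.11 * t)
mask = (rng.random(n) < 0.5).astype(float)     # irregular/jittered sampling
rec = reconstruct_ista(mask * signal, mask)
```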


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event-by-event. We also extend our approach to subtract noises that require several templates to be approximated. By itself, the method can only correct small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
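A minimal sketch of per-coefficient amplitude and phase matching of a noise template is given below, with an STFT standing in for the complex curvelet transform; the bounds on the allowed amplitude and phase corrections are arbitrary illustration values rather than the ones used in the paper.

```python
# Adapt a noise template to the data by scaling the magnitude and rotating the
# phase of each complex time-frequency coefficient, then subtract the adapted
# template from the data.
import numpy as np
from scipy.signal import stft, istft

def adapt_and_subtract(data, template, fs, nperseg=64,
                       max_gain=2.0, max_phase=np.pi / 4):
    _, _, D = stft(data, fs=fs, nperseg=nperseg)
    _, _, T = stft(template, fs=fs, nperseg=nperseg)
    eps = 1e-10
    gain = np.clip(np.abs(D) / (np.abs(T) + eps), 1.0 / max_gain, max_gain)
    dphi = np.clip(np.angle(D * np.conj(T)), -max_phase, max_phase)
    T_adapted = T * gain * np.exp(1j * dphi)        # matched template
    _, noise_est = istft(T_adapted, fs=fs, nperseg=nperseg)
    return data - noise_est[:len(data)]
```

Clipping the gain and phase corrections is what keeps the adaptation from simply annihilating the data wherever the template has energy; large-scale mismatches would still need a prior least-squares adaptation, as the abstract notes.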


Geophysics ◽  
1999 ◽  
Vol 64 (1) ◽  
pp. 251-260 ◽  
Author(s):  
Gary F. Margrave

The signal band of reflection seismic data is that portion of the temporal Fourier spectrum which is dominated by reflected source energy. The signal bandwidth directly determines the spatial and temporal resolving power and is a useful measure of the value of such data. The realized signal band, which is the signal band of seismic data as optimized in processing, may be estimated by the interpretation of appropriately constructed f-x spectra. A temporal window, whose length has a specified random fluctuation from trace to trace, is applied to an ensemble of seismic traces, and the temporal Fourier transform is computed. The resultant f-x spectra are then separated into amplitude and phase sections, viewed as conventional seismic displays, and interpreted. The signal is manifested through the lateral continuity of spectral events; noise causes lateral incoherence. The fundamental assumption is that signal is correlated from trace to trace while noise is not. A variety of synthetic data examples illustrate that reasonable results are obtained even when the signal decays with time (i.e., is nonstationary) or geologic structure is extreme. Analysis of real data from a 3-C survey shows an easily discernible signal band for both P-P and P-S reflections, with the former being roughly twice the latter. The potential signal band, which may be regarded as the maximum possible signal band, is independent of processing techniques. An estimator for this limiting case is the corner frequency (the frequency at which a decaying signal drops below background noise levels) as measured on ensemble‐averaged amplitude spectra from raw seismic data. A comparison of potential signal band with realized signal band for the 3-C data shows good agreement for P-P data, which suggests the processing is nearly optimal. For P-S data, the realized signal band is about half of the estimated potential. This may indicate a relative immaturity of P-S processing algorithms or it may be due to P-P energy on the raw radial component records.
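A minimal sketch of the corner-frequency estimate of potential signal band is given below: average the amplitude spectra of raw traces over the ensemble, take a background noise level from the upper end of the band, and report the frequency at which the decaying spectrum first falls to that level. Using the top 10% of the band for the noise estimate is an illustration choice only.

```python
# Corner frequency from ensemble-averaged amplitude spectra of raw traces.
import numpy as np

def corner_frequency(traces, dt):
    """traces: 2-D array (ntraces, nsamples); dt: sample interval in seconds."""
    spec = np.mean(np.abs(np.fft.rfft(traces, axis=1)), axis=0)   # ensemble average
    freqs = np.fft.rfftfreq(traces.shape[1], d=dt)
    noise_floor = np.median(spec[int(0.9 * len(spec)):])          # background level
    peak = int(np.argmax(spec))
    below = np.nonzero(spec[peak:] <= noise_floor)[0]             # first drop to noise
    return freqs[peak + below[0]] if below.size else freqs[-1]
```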


2020 ◽  
Author(s):  
Ana Gabriela Bravo-Osuna ◽  
Enrique Gómez-Treviño ◽  
Olaf Josafat Cortés-Arroyo ◽  
Néstor Fernando Delgadillo-Jáuregui ◽  
Rocío Fabiola Arellano-Castro

Abstract The magnetotelluric method is increasingly being used to monitor electrical resistivity changes in the subsurface. One of the preferred parameters derived from the surface impedance is the strike direction, which is very sensitive to changes in the direction of the subsurface electrical current flow. The preferred method for estimating the strike changes is that provided by the phase tensor because it is immune to galvanic distortions. However, it is also a fact that the associated analytic formula is unstable for noisy data, something that limits its applicability for monitoring purposes, because in general this involves comparison of two or more very similar data sets. One of the issues is that the noise complicates the distribution of estimates between the four quadrants. This can be handled by sending all values to the same quadrant by adding or subtracting the appropriate amount. This is justified by showing that the analytic formula is also a least squares solution. This is equivalent to defining penalty functions for the matrix of eigenvalues and then selecting the minima numerically. Contrary to the analytic formula, this numerical approach can be generalized to compute strikes using windows of any number of periods, thus providing tradeoffs between variance and resolution. The performance of the proposed approach is illustrated by its application to synthetic data and to real data from a monitoring array in the Cerro Prieto geothermal field, México.
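A minimal sketch of the two ingredients described above is given below: the standard analytic phase-tensor strike formula and the folding of all estimates into a single quadrant by adding or subtracting multiples of 90°. The formulas follow the usual phase-tensor definitions and are not the authors' least-squares generalization.

```python
# Phase-tensor strike from a 2x2 impedance tensor, plus quadrant folding.
import numpy as np

def phase_tensor_strike(Z):
    """Z: 2x2 complex impedance tensor. Returns a strike estimate in degrees."""
    X, Y = Z.real, Z.imag
    P = np.linalg.inv(X) @ Y                      # phase tensor
    alpha = 0.5 * np.arctan2(P[0, 1] + P[1, 0], P[0, 0] - P[1, 1])
    beta = 0.5 * np.arctan2(P[0, 1] - P[1, 0], P[0, 0] + P[1, 1])
    return np.degrees(alpha - beta)               # major-axis direction

def fold_to_quadrant(strikes_deg):
    """Send all estimates to the same quadrant: map into (-45, 45] degrees."""
    return (np.asarray(strikes_deg) + 45.0) % 90.0 - 45.0
```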


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 497 ◽
Author(s):  
Fedor Krasnov ◽  
Alexander Butorin

Sparse spike deconvolution is one of the oldest inverse problems, a stylized version of recovery in seismic imaging. The goal of sparse spike deconvolution is to recover an approximation of a given noisy measurement T = W ∗ r + W₀. Since the convolution destroys many low and high frequencies, this requires some prior information to regularize the inverse problem. In this paper, the authors continue to study the problem of searching for the positions and amplitudes of the reflection coefficients of the medium (SP&ARCM). In previous research, the authors proposed a practical algorithm, named A₀, for solving the inverse problem of obtaining geological information from the seismic trace. In the current paper, the authors improved the A₀ algorithm and applied it to real (non-synthetic) data. First, the authors considered the matrix and Differential Evolution approaches to the SP&ARCM problem and showed that their efficiency is limited in this case. Second, the authors showed that the way to improve A₀ lies in optimization with sequential regularization. The authors presented accuracy calculations for A₀ in this setting and experimental results on its convergence. The authors also considered different initializations of the optimization process with a view to accelerating convergence. Finally, the authors successfully validated the A₀ algorithm on synthetic and real data. Further practical development of A₀ will aim at increasing the robustness of its operation and at applying it to more complex models of real seismic data. The practical value of the research is that it increases the resolving power of the wavefield by reducing the contribution of interference, which provides new information for seismic-geological modeling.
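The sketch below writes the forward model T = W ∗ r + W₀ in matrix form and recovers spike positions and amplitudes with a generic matching-pursuit baseline. It illustrates the SP&ARCM problem only and is not the A₀ algorithm; the Ricker wavelet and the fixed spike count are illustration choices.

```python
# Sparse-spike forward model in matrix form plus a greedy (matching-pursuit)
# recovery of spike positions and amplitudes.
import numpy as np

def ricker(f, dt, length=0.128):
    t = np.arange(-length / 2, length / 2 + dt, dt)
    return (1.0 - 2.0 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

def wavelet_matrix(w, n):
    """n x n matrix whose k-th column is the wavelet centred at sample k."""
    W = np.zeros((n, n))
    half = len(w) // 2
    for k in range(n):
        lo, hi = max(0, k - half), min(n, k - half + len(w))
        W[lo:hi, k] = w[lo - (k - half): hi - (k - half)]
    return W

def matching_pursuit(T, W, nspikes):
    r, resid = np.zeros(W.shape[1]), T.copy()
    for _ in range(nspikes):
        corr = W.T @ resid
        k = np.argmax(np.abs(corr))                  # best spike position
        r[k] += corr[k] / (W[:, k] @ W[:, k])        # best spike amplitude
        resid = T - W @ r
    return r

dt, n = 0.002, 500
W = wavelet_matrix(ricker(30.0, dt), n)
r_true = np.zeros(n)
r_true[[100, 180, 300]] = [1.0, -0.7, 0.5]
T = W @ r_true + 0.02 * np.random.default_rng(3).standard_normal(n)   # T = W*r + W0
r_est = matching_pursuit(T, W, nspikes=3)
```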


2021 ◽  
Vol 73 (1) ◽  
Author(s):  
Ana G. Bravo-Osuna ◽  
Enrique Gómez-Treviño ◽  
Olaf J. Cortés-Arroyo ◽  
Nestor F. Delgadillo-Jauregui ◽  
Rocío F. Arellano-Castro

Abstract The magnetotelluric method is increasingly being used to monitor electrical resistivity changes in the subsurface. One of the preferred parameters derived from the surface impedance is the strike direction, which is very sensitive to changes in the direction of the subsurface electrical current flow. The preferred method for estimating the strike changes is that provided by the phase tensor because it is immune to galvanic distortions. However, it is also a fact that the associated analytic formula is unstable for noisy data, something that limits its applicability for monitoring purposes, because in general this involves comparison of two or more very similar datasets. One of the issues is that the noise complicates the distribution of estimates between the four quadrants. This can be handled by sending all values to the same quadrant by adding or subtracting the appropriate amount. This is justified by showing that the analytic formula is also a least squares solution. This is equivalent to defining penalty functions for the matrix of eigenvalues and then selecting the minima numerically. Contrary to the analytic formula, this numerical approach can be generalized to compute strikes using windows of any number of periods, thus providing tradeoffs between variance and resolution. The performance of the proposed approach is illustrated by its application to synthetic data and to real data from a monitoring array in the Cerro Prieto geothermal field, México.
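As a complement to the phase-tensor sketch given earlier, the snippet below illustrates the variance-resolution tradeoff of multi-period windows with a plain moving-window summary of folded strike estimates; it is not the eigenvalue-penalty minimization proposed by the authors.

```python
# Moving-window summary of folded strike estimates: wider windows lower the
# variance of each estimate but smear resolution across periods; nwin = 1
# reproduces the single-period estimates.
import numpy as np

def windowed_strike(strikes_deg, nwin):
    """strikes_deg: folded strike estimates, one per period (sorted by period).
    Returns window means and standard deviations."""
    s = np.asarray(strikes_deg, dtype=float)
    means = np.array([s[i:i + nwin].mean() for i in range(len(s) - nwin + 1)])
    stds = np.array([s[i:i + nwin].std() for i in range(len(s) - nwin + 1)])
    return means, stds
```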


2014 ◽  
Vol 490-491 ◽  
pp. 1356-1360 ◽  
Author(s):  
Shu Cong Liu ◽  
Er Gen Gao ◽  
Chen Xun

The wavelet packet transform is a new time-frequency analysis method that is superior to the traditional wavelet transform and the Fourier transform because it provides a finer time-frequency division of seismic data. A series of simulation experiments on wavelet packet decomposition and reconstruction of synthetic seismic signals at different scales, with different levels of added noise, was carried out in order to achieve noise removal at the optimal wavelet decomposition scale. Simulation results and real-data experiments showed that the wavelet packet transform can effectively remove noise from seismic signals while retaining the valid signal; wavelet packet denoising is thus very effective.
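A minimal sketch of wavelet packet denoising with PyWavelets is given below: decompose to a chosen level, soft-threshold the leaf coefficients, and reconstruct. The wavelet, decomposition level, and universal-threshold rule are generic choices rather than the optimal-scale selection studied in the paper.

```python
# Wavelet-packet denoising of a 1-D trace with PyWavelets.
import numpy as np
import pywt

def wpt_denoise(signal, wavelet='db4', level=4):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode='symmetric', maxlevel=level)
    leaves = wp.get_level(level, order='natural')
    # Noise estimate from the highest-frequency leaf, universal threshold rule.
    sigma = np.median(np.abs(leaves[-1].data)) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    for node in leaves:
        node.data = pywt.threshold(node.data, thr, mode='soft')
    return wp.reconstruct(update=False)[:len(signal)]
```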


2013 ◽  
Vol 24 (04) ◽  
pp. 1350017 ◽  
Author(s):  
JOSÉ R. A. TORREÃO ◽  
SILVIA M. C. VICTER ◽  
JOÃO L. FERNANDES

We introduce a time-frequency transform based on Gabor functions whose parameters are given by the Fourier transform of the analyzed signal. At any given frequency, the width and the phase of the Gabor function are obtained, respectively, from the magnitude and the phase of the signal's corresponding Fourier component, yielding an analyzing kernel that is a representation of the signal's content at that particular frequency. The resulting Gabor transform tunes itself to the input signal, allowing the accurate detection of time and frequency events, even in situations where the traditional Gabor and S-transform approaches tend to fail. This is the case, for instance, when considering the time-frequency representation of electroencephalogram (EEG) traces of epileptic subjects, as illustrated by the experimental study presented here.
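A minimal sketch of a signal-tuned Gabor analysis in this spirit is given below: for each analysis frequency, a Gabor kernel takes its phase from the signal's Fourier component and its width from the component's magnitude, and the kernel is then correlated with the signal. The specific magnitude-to-width mapping is an arbitrary illustration choice, not the authors' parameterization.

```python
# Signal-tuned Gabor-style time-frequency analysis (illustrative sketch).
import numpy as np

def signal_tuned_gabor(x, dt):
    n = len(x)
    t = (np.arange(n) - n // 2) * dt
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=dt)
    mag, phase = np.abs(X), np.angle(X)
    # Kernel width grows with the relative strength of the component
    # (an illustrative mapping, not the published one).
    sigma = dt * n * (0.05 + 0.5 * mag / (mag.max() + 1e-12))
    tf = np.zeros((len(freqs), n), dtype=complex)
    for k, f in enumerate(freqs):
        kernel = (np.exp(-0.5 * (t / sigma[k]) ** 2)
                  * np.exp(1j * (2 * np.pi * f * t + phase[k])))
        # Correlate kernel with signal to get the response at every time shift.
        tf[k] = np.convolve(x, np.conj(kernel[::-1]), mode='same')
    return freqs, tf
```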

