Seismic signal band estimation by interpretation of f-x spectra

Geophysics ◽  
1999 ◽  
Vol 64 (1) ◽  
pp. 251-260 ◽  
Author(s):  
Gary F. Margrave

The signal band of reflection seismic data is that portion of the temporal Fourier spectrum which is dominated by reflected source energy. The signal bandwidth directly determines the spatial and temporal resolving power and is a useful measure of the value of such data. The realized signal band, which is the signal band of seismic data as optimized in processing, may be estimated by the interpretation of appropriately constructed f-x spectra. A temporal window, whose length has a specified random fluctuation from trace to trace, is applied to an ensemble of seismic traces, and the temporal Fourier transform is computed. The resultant f-x spectra are then separated into amplitude and phase sections, viewed as conventional seismic displays, and interpreted. The signal is manifested through the lateral continuity of spectral events; noise causes lateral incoherence. The fundamental assumption is that signal is correlated from trace to trace while noise is not. A variety of synthetic data examples illustrate that reasonable results are obtained even when the signal decays with time (i.e., is nonstationary) or geologic structure is extreme. Analysis of real data from a 3-C survey shows an easily discernible signal band for both P-P and P-S reflections, with the former being roughly twice the latter. The potential signal band, which may be regarded as the maximum possible signal band, is independent of processing techniques. An estimator for this limiting case is the corner frequency (the frequency at which a decaying signal drops below background noise levels) as measured on ensemble‐averaged amplitude spectra from raw seismic data. A comparison of potential signal band with realized signal band for the 3-C data shows good agreement for P-P data, which suggests the processing is nearly optimal. For P-S data, the realized signal band is about half of the estimated potential. This may indicate a relative immaturity of P-S processing algorithms or it may be due to P-P energy on the raw radial component records.
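
To make the window-and-transform step concrete, the following sketch (illustrative Python, not the author's implementation) applies a temporal window whose length fluctuates randomly from trace to trace, Fourier transforms each windowed trace, and returns the amplitude and phase f-x panels for interpretation; the taper choice and fluctuation fraction are assumptions.

```python
import numpy as np

def fx_spectra(traces, dt, win_len=1.0, jitter=0.1, seed=0):
    """Compute f-x amplitude and phase spectra of an ensemble of traces.

    traces : 2-D array (nt, ntraces), one trace per column
    dt     : sample interval in seconds
    win_len: nominal window length in seconds
    jitter : fractional random fluctuation of the window length per trace
    """
    rng = np.random.default_rng(seed)
    nt, ntr = traces.shape
    nwin = int(round(win_len / dt))
    spectra = []
    for k in range(ntr):
        # Randomize the window length trace by trace so that window-edge
        # effects do not stack coherently across the f-x display.
        n = int(round(nwin * (1.0 + jitter * (2.0 * rng.random() - 1.0))))
        n = min(max(n, 2), nt)
        windowed = traces[:n, k] * np.hanning(n)     # simple taper (assumption)
        spectra.append(np.fft.rfft(windowed, n=nt))  # pad to a common length
    spectra = np.array(spectra).T                    # shape (nfreq, ntraces)
    freqs = np.fft.rfftfreq(nt, dt)
    return freqs, np.abs(spectra), np.angle(spectra)
```

The amplitude and phase panels returned here would then be displayed as conventional seismic sections and interpreted: laterally continuous spectral events indicate signal, while lateral incoherence indicates noise.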

Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 497
Author(s):  
Fedor Krasnov ◽  
Alexander Butorin

Sparse spike deconvolution is one of the oldest inverse problems and a stylized version of recovery in seismic imaging. Its goal is to recover an approximation of the reflectivity r from a noisy measurement T = W ∗ r + W₀. Since the convolution destroys many low and high frequencies, prior information is required to regularize the inverse problem. In this paper, the authors continue their study of searching for the positions and amplitudes of the reflection coefficients of the medium (SP&ARCM). In previous research, they proposed a practical algorithm, named A₀, for extracting geological information from the seismic trace. In the current paper, they improve the A₀ algorithm and apply it to real (non-synthetic) data. First, they consider a matrix approach and a Differential Evolution approach to the SP&ARCM problem and show that the efficiency of both is limited in this setting. Second, they show that the way to improve A₀ lies in optimization with sequential regularization, and they present accuracy calculations and experimental convergence results for that case. They also examine different initializations of the optimization with a view to accelerating convergence. Finally, they validate A₀ on synthetic and real data. Further practical development of A₀ will aim at increasing the robustness of its operation and at application to more complex models of real seismic data. The practical value of the research is an increase in the resolving power of the wavefield through a reduced contribution of interference, which provides new information for seismic-geological modeling.
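
For orientation only, a generic sparse-spike deconvolution can be posed as an ℓ1-regularized least-squares problem and solved with iterative soft thresholding (ISTA). This is not the authors' A₀ algorithm; the wavelet, penalty, and iteration count below are illustrative assumptions.

```python
import numpy as np

def ista_spike_deconv(trace, wavelet, lam=0.05, n_iter=500):
    """Recover a sparse reflectivity r from trace ~ wavelet * r + noise
    by minimizing 0.5*||W r - trace||^2 + lam*||r||_1 with ISTA."""
    n = len(trace)
    # Build the convolution operator W as a dense matrix (fine for short traces).
    W = np.zeros((n, n))
    for i in range(n):
        seg = wavelet[: n - i]
        W[i : i + len(seg), i] = seg
    L = np.linalg.norm(W, 2) ** 2            # Lipschitz constant of the gradient
    r = np.zeros(n)
    for _ in range(n_iter):
        grad = W.T @ (W @ r - trace)
        z = r - grad / L
        r = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return r

# Example with a hypothetical Ricker-like wavelet and two spikes.
t = np.arange(-0.05, 0.05, 0.002)
wav = (1 - 2 * (np.pi * 25 * t) ** 2) * np.exp(-(np.pi * 25 * t) ** 2)
r_true = np.zeros(200); r_true[60] = 1.0; r_true[120] = -0.5
trace = np.convolve(r_true, wav)[:200] + 0.01 * np.random.randn(200)
r_est = ista_spike_deconv(trace, wav)
```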


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, relative to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
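
A single-frequency toy version of the inversion formulation can illustrate the structure of the method: build an operator that links a regularly sampled, datumed wavefield to the irregular recordings, form the Hessian (optionally keeping only a limited number of diagonals, mimicking the approximation), and solve the damped normal equations. The operator, damping, and banding below are illustrative assumptions, and the data weighting of the full method is omitted.

```python
import numpy as np

def regularize_datum(d, x_irr, x_reg, phase_shift, eps=1e-2, nband=None):
    """Single-frequency sketch of regularization/datuming by damped least
    squares. The model is the wavenumber spectrum of the output wavefield on
    the regular grid x_reg; the forward operator applies an extrapolation
    phase shift (one factor per wavenumber, e.g. exp(1j*kz*dz)) and evaluates
    the result at the irregular positions x_irr."""
    nx = len(x_reg)
    dx = x_reg[1] - x_reg[0]
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    F = np.exp(1j * np.outer(x_irr, kx)) * phase_shift / nx
    H = F.conj().T @ F                            # Hessian of the misfit
    if nband is not None:
        # Cheap approximation: keep only a limited number of diagonals.
        offs = np.abs(np.subtract.outer(np.arange(nx), np.arange(nx)))
        H = H * (offs <= nband)
    m_k = np.linalg.solve(H + eps * np.eye(nx), F.conj().T @ d)
    return np.fft.ifft(m_k)                       # wavefield on x_reg
```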


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB203-WB210 ◽  
Author(s):  
Gilles Hennenfent ◽  
Lloyd Fenelon ◽  
Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one in the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an [Formula: see text]-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct [Formula: see text]-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction, the second-generation NFDCT is lossless, unlike the first generation. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Second, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows smoothly varying amplitudes along the main wavefronts, with limited acquisition imprint.
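
As a rough illustration of sparsity-promoting reconstruction of irregularly sampled data (with the FFT standing in for the curvelet transform and a simple POCS-style thresholding loop standing in for the one-norm solver used in NCRSI), consider the sketch below; the threshold schedule and sampling mask are assumptions.

```python
import numpy as np

def pocs_reconstruct(d, mask, n_iter=100, perc_final=0.01):
    """POCS-style reconstruction: repeatedly threshold the Fourier spectrum
    (threshold decaying over iterations) and re-insert the known samples."""
    x = d.astype(float).copy()
    c0 = np.max(np.abs(np.fft.fft(d)))
    for k in range(n_iter):
        c = np.fft.fft(x)
        thr = c0 * (1.0 - k / n_iter) + c0 * perc_final * (k / n_iter)
        c[np.abs(c) < thr] = 0.0              # keep only strong coefficients
        x = np.real(np.fft.ifft(c))
        x = d + (1.0 - mask) * x              # honor the recorded samples
    return x

# Jittered subsampling of a simple two-tone signal (illustrative).
n = 256
t = np.arange(n)
sig = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.11 * t)
rng = np.random.default_rng(1)
mask = np.zeros(n)
mask[np.sort(rng.choice(n, size=n // 2, replace=False))] = 1.0
rec = pocs_reconstruct(mask * sig, mask)
```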


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. V367-V376 ◽  
Author(s):  
Omar M. Saad ◽  
Yangkang Chen

Attenuation of seismic random noise is an important processing step for enhancing the signal-to-noise ratio of seismic data. A new approach is proposed to attenuate random noise based on a deep denoising autoencoder (DDAE). In this approach, the time-series seismic data are used as input to the DDAE. The DDAE encodes the input seismic data into multiple levels of abstraction, and then it decodes those levels to reconstruct the seismic signal without noise. The DDAE is pretrained in a supervised way using synthetic data; the pretrained model is then used to denoise the field data set in an unsupervised scheme using a new customized loss function. We assess the proposed algorithm on four synthetic data sets and two field examples and compare the results with several benchmark algorithms, such as f-x deconvolution (f-x deconv) and f-x singular spectrum analysis (f-x SSA). Our algorithm succeeds in attenuating the random noise in an effective manner.
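
A minimal supervised-pretraining skeleton of a denoising autoencoder is sketched below in PyTorch; it is purely illustrative, and the paper's DDAE architecture, its multiple levels of abstraction, and its customized unsupervised loss for field data are not reproduced here.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Small 1-D denoising autoencoder: the encoder maps a noisy trace window
    to a low-dimensional code, and the decoder reconstructs the clean signal."""
    def __init__(self, n_samples=128, n_code=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 64), nn.ReLU(),
            nn.Linear(64, n_code), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_code, 64), nn.ReLU(),
            nn.Linear(64, n_samples),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Supervised pretraining on synthetic noisy/clean pairs (illustrative data).
model = DenoisingAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
clean = torch.sin(torch.linspace(0, 20, 128)).repeat(32, 1)
noisy = clean + 0.3 * torch.randn_like(clean)
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```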


Geophysics ◽  
1990 ◽  
Vol 55 (12) ◽  
pp. 1613-1624 ◽  
Author(s):  
C. deGroot‐Hedlin ◽  
S. Constable

Magnetotelluric (MT) data are inverted for smooth 2-D models using an extension of the existing 1-D algorithm, Occam's inversion. Since an MT data set consists of a finite number of imprecise data, an infinity of solutions to the inverse problem exists. Fitting field or synthetic electromagnetic data as closely as possible results in theoretical models with a maximum amount of roughness, or structure. However, by relaxing the misfit criterion only a small amount, models which are maximally smooth may be generated. Smooth models are less likely to result in overinterpretation of the data and reflect the true resolving power of the MT method. The models are composed of a large number of rectangular prisms, each having a constant conductivity. A priori information, in the form of boundary locations only or both boundary locations and conductivity, may be included, providing a powerful tool for improving the resolving power of the data. Joint inversion of TE and TM synthetic data generated from known models allows comparison of smooth models with the true structure. In most cases, smoothed versions of the true structure may be recovered in 12-16 iterations. However, resistive features with a size comparable to depth of burial are poorly resolved. Real MT data present problems of non-Gaussian data errors, the breakdown of the two-dimensionality assumption, and the large number of data in broadband soundings; nevertheless, real data can be inverted using the algorithm.
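
The tradeoff at the heart of Occam's inversion, maximal smoothness subject to an acceptable misfit, can be seen in the linearized toy version below; the actual algorithm iterates this idea on a linearized MT forward problem, whereas here G is a fixed linear operator and the roughness penalty and tradeoff-parameter scan are illustrative choices.

```python
import numpy as np

def occam_linear(G, d, sd, target_misfit, mus=np.logspace(-4, 4, 81)):
    """Occam-style smooth solution of a *linear* problem G m = d:
    among models that fit the data to the target chi-squared misfit,
    keep the one with the largest smoothing weight (smallest roughness)."""
    n_model = G.shape[1]
    W = np.diag(1.0 / sd)                        # data weighting by errors
    D = np.diff(np.eye(n_model), axis=0)         # first-difference roughness
    best = None
    for mu in mus:                               # scan the tradeoff parameter
        A = mu * (D.T @ D) + G.T @ W.T @ W @ G
        m = np.linalg.solve(A, G.T @ W.T @ W @ d)
        chi2 = np.sum(((G @ m - d) / sd) ** 2)
        if chi2 <= target_misfit:                # largest mu that still fits
            best = (mu, m, chi2)
    return best                                  # None if no model fits
```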


Geophysics ◽  
2010 ◽  
Vol 75 (2) ◽  
pp. S73-S79
Author(s):  
Ørjan Pedersen ◽  
Sverre Brandsberg-Dahl ◽  
Bjørn Ursin

One-way wavefield extrapolation methods are used routinely in 3D depth migration algorithms for seismic data. Because of their efficient computer implementations, such one-way methods have become increasingly popular, and a wide variety of methods have been introduced. In salt provinces, the migration algorithms must be able to handle large velocity contrasts because the velocities in salt are generally much higher than in the surrounding sediments. This can be a challenge for one-way wavefield extrapolation methods. We present a depth migration method using one-way propagators within lateral windows for handling the large velocity contrasts associated with salt-sediment interfaces. Using adaptive windowing, we can handle large perturbations locally, in a manner similar to the beamlet propagator, thus limiting the impact of the errors on the global wavefield. We demonstrate the performance of our method by applying it to synthetic data from the 2D SEG/EAGE [Formula: see text] salt model and an offshore real data example.
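
A bare-bones, single-frequency illustration of one-way extrapolation in lateral windows is sketched below; the window shape, lack of overlap, and per-window reference velocity are simplified assumptions and do not reflect the authors' adaptive windowing.

```python
import numpy as np

def windowed_phase_shift(wavefield, x, dz, vel, freq, n_win=4):
    """One downward-continuation step at a single frequency, done in lateral
    windows so each window can use its own reference velocity (a crude,
    illustrative stand-in for adaptive windowing near salt)."""
    nx = len(x)
    dx = x[1] - x[0]
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    omega = 2 * np.pi * freq
    out = np.zeros(nx, dtype=complex)
    edges = np.linspace(0, nx, n_win + 1, dtype=int)
    for i in range(n_win):
        i0, i1 = edges[i], edges[i + 1]
        win = np.zeros(nx)
        win[i0:i1] = 1.0                       # boxcar window (no overlap/taper)
        v_ref = np.mean(vel[i0:i1])            # reference velocity per window
        kz = np.sqrt(np.maximum((omega / v_ref) ** 2 - kx ** 2, 0.0))
        out += np.fft.ifft(np.fft.fft(wavefield * win) * np.exp(1j * kz * dz))
    # A production code would taper and overlap the windows and damp the
    # evanescent wavenumbers; both are omitted for brevity.
    return out
```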


Geophysics ◽  
2003 ◽  
Vol 68 (2) ◽  
pp. 641-655 ◽  
Author(s):  
Anders Sollid ◽  
Bjørn Ursin

Scattering‐angle migration maps seismic prestack data directly into angle‐dependent reflectivity at the image point. The method automatically accounts for triplicated rayfields and is easily extended to handle anisotropy. We specify scattering‐angle migration integrals for PP and PS ocean‐bottom seismic (OBS) data in 3D and 2.5D elastic media exhibiting weak contrasts and weak anisotropy. The derivation is based on the anisotropic elastic Born‐Kirchhoff‐Helmholtz surface scattering integral. The true‐amplitude weights are chosen such that the amplitude versus angle (AVA) response of the angle gather is equal to the Born scattering coefficient or, alternatively, the linearized reflection coefficient. We implement scattering‐angle migration by shooting a fan of rays from the subsurface point to the acquisition surface, followed by integrating the phase‐ and amplitude‐corrected seismic data over the migration dip at the image point while keeping the scattering‐angle fixed. A dense summation over migration dip only adds a minor additional cost and enhances the coherent signal in the angle gathers. The 2.5D scattering‐angle migration is demonstrated on synthetic data and on real PP and PS data from the North Sea. In the real data example we use a transversely isotropic (TI) background model to obtain depth‐consistent PP and PS images. The aim of the succeeding AVA analysis is to predict the fluid type in the reservoir sand. Specifically, the PS stack maps the contrasts in lithology while being insensitive to the fluid fill. The PP large‐angle stack maps the oil‐filled sand but shows no response in the brine‐filled zones. A comparison to common‐offset Kirchhoff migration demonstrates that, for the same computational cost, scattering‐angle migration provides common image gathers with less noise and fewer artifacts.
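
A toy, constant-velocity version of mapping prestack traces into an angle gather at a single image point is sketched below; it uses straight rays, nearest-sample extraction, and no true-amplitude weights or anisotropy, so it only illustrates the idea of summing data at a fixed scattering angle, not the authors' scheme.

```python
import numpy as np

def angle_gather(data, t, xs, xr, x_img, z_img, v, angles_deg):
    """Bin prestack traces into a scattering-angle gather at one image point.

    data       : array (nt, ntraces), one trace per source-receiver pair
    t          : time axis (s); xs, xr : source/receiver x per trace (m)
    x_img, z_img : image point coordinates (m); v : constant velocity (m/s)
    angles_deg : numpy array of scattering-angle bin centers (degrees)
    """
    gather = np.zeros(len(angles_deg))
    fold = np.zeros(len(angles_deg))
    dt = t[1] - t[0]
    for trace, sx, rx in zip(data.T, xs, xr):
        ds = np.hypot(x_img - sx, z_img)        # straight-ray path lengths
        dr = np.hypot(x_img - rx, z_img)
        a_s = np.arctan2(x_img - sx, z_img)     # ray angles from vertical
        a_r = np.arctan2(x_img - rx, z_img)
        opening = np.degrees(abs(a_s - a_r))    # scattering (opening) angle
        ibin = np.argmin(np.abs(angles_deg - opening))
        it = int(round((ds + dr) / v / dt))     # two-way traveltime sample
        if 0 <= it < len(t):
            gather[ibin] += trace[it]
            fold[ibin] += 1
    return gather / np.maximum(fold, 1)
```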


2020 ◽  
Vol 222 (1) ◽  
pp. 544-559
Author(s):  
Lianqing Zhou ◽  
Xiaodong Song ◽  
Richard L Weaver

Ambient noise correlation has been used extensively to retrieve traveltimes of surface waves. However, studies of retrieving amplitude information and attenuation from ambient noise are limited. In this study, we develop methods and strategies to extract Rayleigh wave amplitude and attenuation from ambient noise correlation, based on theoretical derivation, numerical simulation, and practical considerations of real seismic data. The synthetic tests include a numerical simulation with a highly anisotropic noise source and Earth-like, temporally varying source strength. Results from the synthetic data validate that amplitudes and attenuation can indeed be extracted from noise correlations for a linear array. A temporal flattening procedure is effective in speeding up convergence while preserving relative amplitudes. The traditional one-bit normalization, and other types of temporal normalization applied to each station separately, are problematic for recovering attenuation and should be avoided. We propose an 'asynchronous' temporal flattening procedure for real data that does not require all stations to have data at the same time. Furthermore, we present a detailed procedure for amplitude retrieval from ambient noise. Tests on real data suggest that attenuation extracted with our noise-based methods is comparable with that from earthquakes. Our study shows exciting promise for retrieving amplitude and attenuation information from ambient noise correlations and suggests practical considerations for applications to real data.
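
The distinction the authors draw, a common array-wide temporal weighting versus per-station one-bit normalization, can be illustrated with a synchronous flattening sketch like the one below; the smoothing length and weighting choice are illustrative, and the asynchronous variant proposed for real data (which does not require simultaneous recordings) is not shown.

```python
import numpy as np

def temporal_flatten(records, smooth_len=2000):
    """Divide every station by one array-wide smoothed amplitude envelope so
    noisy episodes are equalized in time while relative amplitudes between
    stations are preserved (unlike per-station one-bit normalization).

    records : array (n_stations, n_samples) of simultaneous noise records
    """
    env = np.mean(np.abs(records), axis=0)          # common weighting function
    kernel = np.ones(smooth_len) / smooth_len
    env = np.convolve(env, kernel, mode="same")     # smooth it in time
    return records / np.maximum(env, 1e-12)

def noise_correlation(rec_a, rec_b, max_lag):
    """Cross-correlate two flattened records and keep lags up to max_lag."""
    full = np.correlate(rec_a, rec_b, mode="full")
    mid = len(rec_b) - 1                            # zero-lag index
    return full[mid - max_lag : mid + max_lag + 1]
```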


Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. W15-W30 ◽  
Author(s):  
Gary F. Margrave ◽  
Michael P. Lamoureux ◽  
David C. Henley

We have extended the method of stationary spiking deconvolution of seismic data to the context of nonstationary signals in which the nonstationarity is due to attenuation processes. As in the stationary case, we have assumed a statistically white reflectivity and a minimum-phase source and attenuation process. This extension is based on a nonstationary convolutional model, which we have developed and related to the stationary convolutional model. To facilitate our method, we have devised a simple numerical approach to calculate the discrete Gabor transform, or complex-valued time-frequency decomposition, of any signal. Although the Fourier transform renders stationary convolution into exact, multiplicative factors, the Gabor transform, or windowed Fourier transform, induces only an approximate factorization of the nonstationary convolutional model. This factorization serves as a guide to develop a smoothing process that, when applied to the Gabor transform of the nonstationary seismic trace, estimates the magnitude of the time-frequency attenuation function and the source wavelet. By assuming that both are minimum-phase processes, their phases can be determined. Gabor deconvolution is accomplished by spectral division in the time-frequency domain. The complex-valued Gabor transform of the seismic trace is divided by the complex-valued estimates of attenuation and source wavelet to estimate the Gabor transform of the reflectivity. An inverse Gabor transform recovers the time-domain reflectivity. The technique has been applied to both synthetic and real data.
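
A stripped-down version of the spectral-division step might look like the sketch below, which uses an STFT as the Gabor transform, simple two-dimensional smoothing as the estimate of the wavelet-times-attenuation magnitude, and a zero-phase assumption in place of the minimum-phase construction described above; the window length, smoother size, and stability constant are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter

def gabor_decon(trace, fs, win=0.2, smooth=(5, 5), stab=1e-3):
    """Time-frequency (Gabor-domain) deconvolution sketch: smooth the Gabor
    magnitude of the trace to approximate |wavelet x attenuation|, divide it
    out, and inverse transform. Phase is ignored here (zero-phase assumption)
    whereas the full method estimates a minimum phase."""
    nperseg = int(win * fs)
    f, t, G = stft(trace, fs=fs, nperseg=nperseg)
    mag = np.abs(G)
    # Smoothing over time and frequency approximates the propagating wavelet.
    propagating = uniform_filter(mag, size=smooth)
    G_refl = G / (propagating + stab * propagating.max())  # stabilized division
    _, refl = istft(G_refl, fs=fs, nperseg=nperseg)
    return refl[: len(trace)]
```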


Geophysics ◽  
1983 ◽  
Vol 48 (12) ◽  
pp. 1598-1610 ◽  
Author(s):  
J. Bee Bednar

Seismic exploration problems frequently require analysis of noisy data. Traditional processing removes or reduces noise effects by linear statistical filtering. This filtering process can be viewed as a weighted averaging with coefficients chosen to enhance the data information content. When the signal and noise components occupy separate spectral windows, or when the statistical properties of the noise are sufficiently understood, linear statistical filtering is an effective tool for data enhancement. When the noise properties are not well understood, or when the noise and signal occupy the same spectral window, linear or weighted averaging performs poorly as a signal enhancement process. One must look for alternative procedures to extract the desired information. As a nonlinear operation which is statistically similar to averaging, median filtering represents one potential alternative. This paper investigates the application of median filtering to several seismic data enhancement problems. A methodology for using median filtering as one step in cepstral deconvolution or seismic signature estimation is presented. The median filtering process is applied to statistical editing of acoustic impedance data and the removal of noise bursts from reflection data. The most surprising conclusion obtained from the empirical studies on synthetic data is that, in high‐noise situations, cepstral‐based median filtering appears to perform exceptionally well as a deconvolver but poorly as a signature estimator. For real data, the process is stable and, to the extent that the data follow the convolutional model, does a reasonable job at both pulse estimation and deconvolution.
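
A tiny example of the burst-rejection property that motivates median filtering (a plain running median applied to a spiky trace, not the cepstral-domain use developed in the paper):

```python
import numpy as np
from scipy.signal import medfilt

# A sinusoidal trace contaminated by two isolated noise bursts (illustrative).
t = np.linspace(0, 1, 500)
trace = np.sin(2 * np.pi * 15 * t)
trace[100] += 5.0            # noise burst
trace[320] -= 4.0            # another burst
despiked = medfilt(trace, kernel_size=5)

# A 5-point mean would smear each burst over its neighbors, whereas the
# 5-point median rejects it while leaving the waveform nearly intact.
```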

