Seismic data interpolation using a fast generalized Fourier transform

Geophysics ◽  
2011 ◽  
Vol 76 (1) ◽  
pp. V1-V10 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Kristopher A. Innanen

We have found a fast and efficient method for the interpolation of nonstationary seismic data. The method uses the fast generalized Fourier transform (FGFT) to identify the space-wavenumber evolution of nonstationary spatial signals at each temporal frequency. The nonredundant nature of the FGFT gives the interpolation method a significant computational advantage. A least-squares fitting scheme is then used to retrieve the optimal FGFT coefficients representative of the ideal interpolated data. For randomly sampled data on a regular grid, we seek a sparse representation of the FGFT coefficients to retrieve the missing samples. In addition, to interpolate regularly sampled seismic data at a given frequency, we use a mask function derived from the FGFT coefficients of the low frequencies. Synthetic and real data examples are used to examine the performance of the method.
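
To make the recovery step concrete, here is a minimal sketch in which an ordinary FFT stands in for the FGFT and a POCS-style iterative thresholding loop stands in for the least-squares fit; grid size, wavenumbers, and the threshold schedule are assumptions chosen for illustration, not the authors' implementation.

```python
# Minimal sketch: sparse wavenumber-domain recovery of missing traces at one
# temporal frequency. A plain FFT replaces the FGFT of the paper.
import numpy as np

rng = np.random.default_rng(0)
nx = 128
x = np.arange(nx)

# Spatial slice at one temporal frequency: two on-grid plane waves.
signal = (np.exp(2j * np.pi * 6 * x / nx) +
          0.7 * np.exp(2j * np.pi * 15 * x / nx))

# Random decimation on a regular grid (True marks known traces).
known = rng.random(nx) > 0.5
data = np.where(known, signal, 0.0)

niter = 100
estimate = data.copy()
for it in range(niter):
    coeff = np.fft.fft(estimate)
    # Threshold decays exponentially from 90% to 1% of the peak amplitude.
    tau = np.abs(coeff).max() * 0.9 * (0.01 / 0.9) ** (it / (niter - 1))
    coeff[np.abs(coeff) < tau] = 0.0
    recon = np.fft.ifft(coeff)
    # Re-insert the observed samples; keep the reconstruction in the gaps.
    estimate = np.where(known, data, recon)

err = np.linalg.norm(estimate - signal) / np.linalg.norm(signal)
print("relative reconstruction error:", err)
```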

Geophysics ◽  
2012 ◽  
Vol 77 (2) ◽  
pp. V71-V80 ◽  
Author(s):  
Mostafa Naghizadeh

I introduce a unified approach for denoising and interpolation of seismic data in the frequency-wavenumber (f-k) domain. First, an angular search in the f-k domain is carried out to identify a sparse number of dominant dips, not only using low frequencies but over the whole frequency range. Then, an angular mask function is designed based on the identified dominant dips. The mask function is utilized with the least-squares fitting principle for optimal denoising or interpolation of data. The least-squares fit is directly applied in the time-space domain. The proposed method can be used to interpolate regularly sampled data as well as randomly sampled data on a regular grid. Synthetic and real data examples are provided to examine the performance of the proposed method.
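
A minimal sketch of the dip-mask idea follows, under stated assumptions: a single dominant dip is picked from the f-k amplitude spectrum, and a small conjugate-gradient least-squares fit is run in the f-k domain rather than the paper's time-space formulation; sizes and the decimation pattern are chosen only for illustration.

```python
# Minimal sketch: angular dip search, angular mask, and a conjugate-gradient
# least-squares fit that recovers regularly decimated traces.
import numpy as np

nt, nx, dt, dx = 256, 64, 0.004, 10.0
t = np.arange(nt) * dt
rng = np.random.default_rng(1)

# One dipping linear event plus weak noise.
p_true = 0.0004                               # slowness, s/m
data = np.zeros((nt, nx))
for ix in range(nx):
    data[:, ix] = np.exp(-((t - 0.3 - p_true * ix * dx) / 0.01) ** 2)
data += 0.05 * rng.standard_normal((nt, nx))

f = np.fft.fftfreq(nt, dt)[:, None]           # temporal frequency, Hz
k = np.fft.fftfreq(nx, dx)[None, :]           # wavenumber, 1/m
dk = 1.0 / (nx * dx)

# Angular search: pick the slowness whose dip line collects the most energy.
spec = np.abs(np.fft.fft2(data))
slownesses = np.linspace(-0.001, 0.001, 81)
energy = [spec[np.abs(k + s * f) < dk].sum() for s in slownesses]
p_hat = slownesses[int(np.argmax(energy))]

# Angular mask: a narrow fan around the detected dip line k = -p_hat * f.
mask = (np.abs(k + p_hat * f) < 2 * dk).astype(float)

# Keep every other trace and set up the masked forward/adjoint operators.
known = np.zeros(nx, dtype=bool)
known[::2] = True
d_obs = data[:, known].astype(complex)

def fwd(m):        # masked f-k model -> observed traces
    return np.fft.ifft2(mask * m)[:, known]

def adj(r):        # observed-trace residual -> masked f-k model
    full = np.zeros((nt, nx), dtype=complex)
    full[:, known] = r
    return mask * np.fft.fft2(full) / (nt * nx)

# Conjugate-gradient least squares on the normal equations.
m = np.zeros((nt, nx), dtype=complex)
r = d_obs - fwd(m)
s = adj(r)
d_dir = s.copy()
gamma = np.vdot(s, s).real
for _ in range(30):
    q = fwd(d_dir)
    alpha = gamma / np.vdot(q, q).real
    m += alpha * d_dir
    r -= alpha * q
    s = adj(r)
    gamma_new = np.vdot(s, s).real
    d_dir = s + (gamma_new / gamma) * d_dir
    gamma = gamma_new

interpolated = np.real(np.fft.ifft2(mask * m))
```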


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. A1-A5 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio Sacchi

We tested a strategy for beyond-alias interpolation of seismic data using Cadzow reconstruction. The strategy enables Cadzow reconstruction to be used for interpolation of regularly sampled seismic records. First, in the frequency-space (f-x) domain, we generated a Hankel matrix from the spatial samples of the low frequencies. To perform interpolation at a given frequency, the spatial samples were interlaced with zero samples and another Hankel matrix was generated from the zero-interlaced data. Next, the rank-reduced eigen-decomposition of the Hankel matrix at low frequencies was used for beyond-alias preconditioning of the Hankel matrix at a given frequency. Finally, antidiagonal averaging of the conditioned Hankel matrix produced the final interpolated data. In addition, the multidimensional extension of the proposed algorithm was explained. The proposed method provides a unifying thread between reduced-rank Cadzow reconstruction and beyond-alias f-x prediction error interpolation. Synthetic and real data examples were provided to examine the performance of the proposed interpolation method.
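
A minimal sketch of a single Cadzow step at one temporal frequency follows: Hankel embedding, rank reduction by singular value decomposition, and antidiagonal averaging. The beyond-alias preconditioning with low-frequency eigenvectors and the multidimensional extension are omitted; sizes and rank are assumptions.

```python
# Minimal sketch: one rank-reduction (Cadzow) filtering step for a
# single-frequency spatial slice.
import numpy as np
from scipy.linalg import hankel, svd

rng = np.random.default_rng(2)
nx = 60
x = np.arange(nx)

# Spatial slice at one frequency: two plane waves buried in noise.
clean = (np.exp(2j * np.pi * 0.06 * x) +
         0.8 * np.exp(2j * np.pi * 0.15 * x))
noisy = clean + 0.3 * (rng.standard_normal(nx) + 1j * rng.standard_normal(nx))

# Hankel embedding of the spatial samples.
mrow = nx // 2 + 1
H = hankel(noisy[:mrow], noisy[mrow - 1:])

# Rank reduction: keep as many singular values as plane-wave components.
U, s, Vh = svd(H, full_matrices=False)
rank = 2
H_low = (U[:, :rank] * s[:rank]) @ Vh[:rank]

# Antidiagonal averaging maps the low-rank matrix back to a spatial slice.
filtered = np.zeros(nx, dtype=complex)
counts = np.zeros(nx)
for i in range(H_low.shape[0]):
    for j in range(H_low.shape[1]):
        filtered[i + j] += H_low[i, j]
        counts[i + j] += 1
filtered /= counts

print("error before:", np.linalg.norm(noisy - clean))
print("error after: ", np.linalg.norm(filtered - clean))
```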


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB189-WB202 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio D. Sacchi

We propose a robust interpolation scheme for aliased regularly sampled seismic data that uses the curvelet transform. In a first pass, the curvelet transform is used to compute the curvelet coefficients of the aliased seismic data. The aforementioned coefficients are divided into two groups of scales: alias-free and alias-contaminated scales. The alias-free curvelet coefficients are upscaled to estimate a mask function that is used to constrain the inversion of the alias-contaminated scale coefficients. The mask function is incorporated into the inversion via a minimum-norm least-squares algorithm that determines the curvelet coefficients of the desired alias-free data. Once the alias-free coefficients are determined, the curvelet synthesis operator is used to reconstruct seismograms at new spatial positions. The proposed method can be used to reconstruct regularly and irregularly sampled seismic data. We believe that our exposition leads to a clear unifying thread between f-x and f-k beyond-alias interpolation methods and curvelet reconstruction. As in f-x and f-k interpolation, we stress the necessity of examining seismic data at different scales (frequency bands) to come up with viable and robust interpolation schemes. Synthetic and real data examples are used to illustrate the performance of the proposed curvelet interpolation method.
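
The sketch below illustrates the mask-upscaling idea in a plain f-k setting rather than the curvelet domain: a pass band picked at an alias-free low frequency is stretched along the wavenumber axis to predict where the same dip sits at a higher, alias-prone frequency. The frequencies, dip, and mask widths are assumptions chosen only for illustration.

```python
# Minimal sketch: stretch a low-frequency dip mask along the wavenumber axis
# to predict the pass band at a higher frequency.
import numpy as np

nk, dx = 128, 10.0
k = np.fft.fftshift(np.fft.fftfreq(nk, d=dx))       # wavenumber axis, 1/m

def dip_mask(freq, slowness, width):
    """Binary mask centred on the dip line k = -slowness * freq."""
    return (np.abs(k + slowness * freq) < width).astype(float)

f_low, f_high = 10.0, 40.0                           # Hz
mask_low = dip_mask(f_low, slowness=4e-4, width=2e-3)

# Upscaling: at f_high the same dip occupies wavenumbers scaled by
# f_high / f_low, so the low-frequency mask is stretched along k.
mask_high = (np.interp(k * f_low / f_high, k, mask_low) > 0.5).astype(float)

print("pass band at f_low (1/m):",
      k[mask_low > 0].min(), "to", k[mask_low > 0].max())
print("predicted band at f_high:",
      k[mask_high > 0].min(), "to", k[mask_high > 0].max())
```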


Geophysics ◽  
2005 ◽  
Vol 70 (4) ◽  
pp. V87-V95 ◽  
Author(s):  
Sheng Xu ◽  
Yu Zhang ◽  
Don Pham ◽  
Gilles Lambaré

Seismic data regularization, which spatially transforms irregularly sampled acquired data to regularly sampled data, is a long-standing problem in seismic data processing. Data regularization can be implemented using Fourier theory, with a method that estimates the spatial frequency content on an irregularly sampled grid. The data can then be reconstructed on any desired grid. Difficulties arise from the nonorthogonality of the global Fourier basis functions on an irregular grid, which results in the problem of “spectral leakage”: energy from one Fourier coefficient leaks onto others. We investigate the nonorthogonality of the Fourier basis on an irregularly sampled grid and propose a technique called the “antileakage Fourier transform” to overcome the spectral leakage. In the antileakage Fourier transform, we first solve for the most energetic Fourier coefficient, assuming that it causes the most severe leakage. To attenuate all aliases and the leakage of this component onto other Fourier coefficients, the data component corresponding to this most energetic Fourier coefficient is subtracted from the original input on the irregular grid. We then use this new input to solve for the next Fourier coefficient, repeating the procedure until all Fourier coefficients are estimated. This procedure is equivalent to “reorthogonalizing” the global Fourier basis on an irregularly sampled grid. We demonstrate the robustness and effectiveness of this technique with successful applications to both synthetic and real data examples.
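
A one-dimensional sketch of the described procedure follows; the sample weighting, grid sizes, and fixed iteration count are assumptions.

```python
# Minimal sketch: repeatedly estimate Fourier coefficients on an irregular
# grid, pick the most energetic one, subtract its contribution from the
# residual, and continue until the spectrum is built up.
import numpy as np

rng = np.random.default_rng(3)
n = 80
x = np.sort(rng.uniform(0.0, 800.0, n))             # irregular trace positions, m
dk = 1.0 / 800.0
k = np.arange(-40, 41) * dk                         # target wavenumber grid, 1/m

# Input data: two spatial harmonics sampled at the irregular positions.
data = (np.exp(2j * np.pi * 5 * dk * x) +
        0.6 * np.exp(2j * np.pi * (-12 * dk) * x))

# Local-spacing weights, a 1D stand-in for proper sample weighting.
w = np.gradient(x)
w /= w.sum()

residual = data.astype(complex)
coeff = np.zeros(k.size, dtype=complex)
for _ in range(20):
    # Weighted Fourier coefficients of the residual on the irregular grid.
    est = np.array([np.sum(w * residual * np.exp(-2j * np.pi * kk * x))
                    for kk in k])
    j = int(np.argmax(np.abs(est)))
    coeff[j] += est[j]
    # Remove this component's leakage source from the input.
    residual -= est[j] * np.exp(2j * np.pi * k[j] * x)

# Reconstruct on a regular 10 m grid from the estimated coefficients.
x_reg = np.arange(0.0, 800.0, 10.0)
recon = sum(c * np.exp(2j * np.pi * kk * x_reg) for c, kk in zip(coeff, k))
```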


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. V185-V195 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio Sacchi

We have developed a ground-roll attenuation strategy for seismic records that adopts the curvelet transform. The curvelet transform decomposes the seismic events based on their dip and frequency content. The curvelet panels that contain only reflection energy or only ground-roll energy are used to alter the panels in which reflection and ground-roll energies are mixed. We build a curvelet-domain mask function from the ground-roll-free curvelet coefficients (high frequencies) and downscale it to the ground-roll-contaminated curvelet coefficients (low frequencies). The mask function is used inside a least-squares optimization scheme to preserve the seismic reflections and attenuate the ground roll. Synthetic and real seismic data examples demonstrate the proposed ground-roll attenuation method.
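
As a much simpler stand-in for the curvelet-domain mask, the sketch below applies a conventional f-k fan filter that rejects low apparent velocities where ground roll typically lives; the cut-off velocity and untapered mask are assumptions, and the paper's least-squares, curvelet-domain formulation is not reproduced here.

```python
# Simpler stand-in: f-k fan reject of low apparent velocities.
import numpy as np

nt, nx, dt, dx = 512, 96, 0.004, 10.0
rng = np.random.default_rng(4)
gather = rng.standard_normal((nt, nx))       # stand-in for a shot record

f = np.abs(np.fft.fftfreq(nt, dt))[:, None]  # temporal frequency, Hz
k = np.abs(np.fft.fftfreq(nx, dx))[None, :]  # wavenumber, 1/m
v_cut = 1000.0                               # reject apparent velocity below 1000 m/s

# Keep f-k samples whose apparent velocity f / k exceeds the cut-off.
mask = (f >= v_cut * k).astype(float)

spectrum = np.fft.fft2(gather)
filtered = np.real(np.fft.ifft2(mask * spectrum))
```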


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB203-WB210 ◽  
Author(s):  
Gilles Hennenfent ◽  
Lloyd Fenelon ◽  
Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one by the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an ℓ1-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct ℓ1-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction the second-generation NFDCT is lossless, unlike the first-generation NFDCT. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Second, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts smoothly varying with limited acquisition imprint.


Geophysics ◽  
2014 ◽  
Vol 79 (3) ◽  
pp. V75-V80 ◽  
Author(s):  
Muhammad Sajid ◽  
Deva Ghosh

The ability to resolve seismic thin beds is a function of the bed thickness and the frequency content of the seismic data. To achieve high resolution, the seismic data must have broad frequency bandwidth. We developed an algorithm that improved the bandwidth of the seismic data without greatly boosting high-frequency noise. The algorithm employed a set of three cascaded difference operators to boost high frequencies, combined with a simple smoothing operator to boost low frequencies. The output of these operators was balanced and added to the original signal to produce whitened data. The four convolutional operators were quite short, so the algorithm was highly efficient. Synthetic and real data examples demonstrated the effectiveness of this algorithm. Comparison with a conventional whitening algorithm showed the algorithm to be competitive.
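
A minimal sketch of the cascaded-operator idea follows; the operator lengths and the RMS balancing rule are assumptions, not the published parameters.

```python
# Minimal sketch: cascaded differences boost high frequencies, a short
# smoother boosts low frequencies, and the balanced outputs are added back.
import numpy as np

def whiten(trace):
    rms = lambda a: np.sqrt(np.mean(np.abs(a) ** 2) + 1e-12)

    # Three cascaded first-difference operators (high-frequency boost).
    hi = trace.copy()
    for _ in range(3):
        hi = np.convolve(hi, [1.0, -1.0], mode="same")

    # Simple three-point smoothing operator (low-frequency boost).
    lo = np.convolve(trace, [0.25, 0.5, 0.25], mode="same")

    # Balance each component against the input and add to the original.
    return trace + hi * (rms(trace) / rms(hi)) + lo * (rms(trace) / rms(lo))

# Example on a narrow-band synthetic trace.
t = np.arange(0, 1.0, 0.002)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)
whitened = whiten(trace)
```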


Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. W15-W30 ◽  
Author(s):  
Gary F. Margrave ◽  
Michael P. Lamoureux ◽  
David C. Henley

We have extended the method of stationary spiking deconvolution of seismic data to the context of nonstationary signals in which the nonstationarity is due to attenuation processes. As in the stationary case, we have assumed a statistically white reflectivity and a minimum-phase source and attenuation process. This extension is based on a nonstationary convolutional model, which we have developed and related to the stationary convolutional model. To facilitate our method, we have devised a simple numerical approach to calculate the discrete Gabor transform, or complex-valued time-frequency decomposition, of any signal. Although the Fourier transform renders stationary convolution into exact, multiplicative factors, the Gabor transform, or windowed Fourier transform, induces only an approximate factorization of the nonstationary convolutional model. This factorization serves as a guide to develop a smoothing process that, when applied to the Gabor transform of the nonstationary seismic trace, estimates the magnitude of the time-frequency attenuation function and the source wavelet. By assuming that both are minimum-phase processes, their phases can be determined. Gabor deconvolution is accomplished by spectral division in the time-frequency domain. The complex-valued Gabor transform of the seismic trace is divided by the complex-valued estimates of attenuation and source wavelet to estimate the Gabor transform of the reflectivity. An inverse Gabor transform recovers the time-domain reflectivity. We illustrate the technique with applications to synthetic and real data.
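
A minimal sketch of Gabor-domain spectral division using a short-time Fourier transform is shown below; the minimum-phase estimation described above is omitted (magnitude-only division), and the synthetic trace, window, and smoother sizes are assumptions.

```python
# Minimal sketch: estimate |wavelet * attenuation| by smoothing the Gabor
# (STFT) magnitude, then perform stabilized spectral division.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter

dt = 0.002
nt = 1500
rng = np.random.default_rng(5)
reflectivity = rng.laplace(scale=0.1, size=nt)

def local_wavelet(i):
    # Wavelet that broadens with time, a crude stand-in for attenuation.
    w = np.hanning(5 + i // 100)
    return w / w.sum()

trace = np.array([np.convolve(reflectivity, local_wavelet(i), mode="same")[i]
                  for i in range(nt)])

freqs, times, G = stft(trace, fs=1.0 / dt, nperseg=128, noverlap=96)

# Smoothed Gabor magnitude approximates |wavelet * attenuation|(t, f).
mag_est = uniform_filter(np.abs(G), size=(5, 9))

# Stabilized spectral division in the time-frequency domain.
eps = 0.01 * mag_est.max()
G_refl = G / (mag_est + eps)

_, refl_est = istft(G_refl, fs=1.0 / dt, nperseg=128, noverlap=96)
```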


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB113-WB120 ◽  
Author(s):  
Sheng Xu ◽  
Yu Zhang ◽  
Gilles Lambaré

Wide-azimuth seismic data sets are generally acquired more sparsely than narrow-azimuth seismic data sets. This brings new challenges to seismic data regularization algorithms, which aim to reconstruct seismic data for regularly sampled acquisition geometries from seismic data recorded with irregularly sampled acquisition geometries. The Fourier-based seismic data regularization algorithm first estimates the spatial frequency content on an irregularly sampled input grid. Then, it reconstructs the seismic data on any desired grid. Three main difficulties arise in this process: the “spectral leakage” problem, the accurate estimation of Fourier components, and the effective antialiasing scheme used inside the algorithm. The antileakage Fourier transform algorithm can overcome the spectral leakage problem and handle aliased data. To generalize it to higher dimensions, we propose an area weighting scheme to accurately estimate the Fourier components. However, the computational cost dramatically increases with the sampling dimensions. A windowed Fourier transform reduces the computational cost in high-dimension applications but causes undersampling in the wavenumber domain and introduces artifacts known as the Gibbs phenomenon. As a solution, we propose a wavenumber-domain oversampling inversion scheme. The robustness and effectiveness of the proposed algorithm are demonstrated with applications to both synthetic and real data examples.
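
A rough sketch of the area-weighting idea in two spatial dimensions follows, where each irregular sample is weighted by an estimate of the area it represents; the histogram-based weights are a crude stand-in for exact cell (for example Voronoi) areas, and the grid and test components are assumptions.

```python
# Rough sketch: approximate area weights from a coarse 2D histogram, then a
# weighted estimate of one Fourier component on the irregular grid.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
xy = rng.uniform(0.0, 1000.0, size=(n, 2))     # irregular midpoints, m

nbins = 20
counts, xe, ye = np.histogram2d(xy[:, 0], xy[:, 1], bins=nbins,
                                range=[[0, 1000], [0, 1000]])
bin_area = (1000.0 / nbins) ** 2

# Weight of each sample: bin area divided by the local sample count.
ix = np.clip(np.searchsorted(xe, xy[:, 0]) - 1, 0, nbins - 1)
iy = np.clip(np.searchsorted(ye, xy[:, 1]) - 1, 0, nbins - 1)
weights = bin_area / counts[ix, iy]

# Two-component test data and a weighted estimate of the first component.
k1 = (0.002, -0.001)                            # 1/m
k2 = (-0.0015, 0.0025)
def harmonic(kvec):
    return np.exp(2j * np.pi * (kvec[0] * xy[:, 0] + kvec[1] * xy[:, 1]))

data = harmonic(k1) + 0.5 * harmonic(k2)
coeff = np.sum(weights * data * np.conj(harmonic(k1))) / np.sum(weights)
print("estimated k1 amplitude (true 1.0):", abs(coeff))
```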

