Seismic data interpolation and denoising in the frequency-wavenumber domain

Geophysics ◽  
2012 ◽  
Vol 77 (2) ◽  
pp. V71-V80 ◽  
Author(s):  
Mostafa Naghizadeh

I introduce a unified approach for denoising and interpolation of seismic data in the frequency-wavenumber (f-k) domain. First, an angular search in the f-k domain is carried out to identify a sparse set of dominant dips, not only using low frequencies but over the whole frequency range. Then, an angular mask function is designed based on the identified dominant dips. The mask function is utilized with the least-squares fitting principle for optimal denoising or interpolation of data. The least-squares fit is applied directly in the time-space domain. The proposed method can be used to interpolate regularly sampled data as well as randomly sampled data on a regular grid. Synthetic and real data examples are provided to examine the performance of the proposed method.
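
A minimal NumPy sketch of the angular-search-and-mask idea, assuming a 2D array `data` (time by space) with sampling intervals `dt` and `dx`. The function name, trial-dip range, and fan tolerance `tol` are illustrative choices, and the paper's time-space least-squares fit is simplified here to a direct application of the mask in the f-k domain:

```python
import numpy as np

def fk_angular_mask(data, dt, dx, n_dips=5, tol=2e-5):
    """Find dominant dips by summing |f-k| energy along lines k = p*f
    over the whole frequency band, then keep narrow fans around them."""
    nt, nx = data.shape
    D = np.fft.fftshift(np.fft.fft2(data))
    f = np.fft.fftshift(np.fft.fftfreq(nt, dt))
    k = np.fft.fftshift(np.fft.fftfreq(nx, dx))
    F, K = np.meshgrid(f, k, indexing="ij")
    P = K / np.where(np.abs(F) > 1e-9, F, 1e-9)   # dip coordinate p = k/f

    # Angular search: energy as a function of dip, over all frequencies.
    p_axis = np.linspace(-2e-3, 2e-3, 401)        # trial dips in s/m
    energy = np.array([np.abs(D)[np.abs(P - p) < tol].sum() for p in p_axis])
    dominant = p_axis[np.argsort(energy)[-n_dips:]]

    # Angular mask: union of narrow fans around the dominant dips.
    mask = np.zeros_like(D, dtype=float)
    for p in dominant:
        mask[np.abs(P - p) < tol] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(D * mask)))
```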

Geophysics ◽  
2011 ◽  
Vol 76 (1) ◽  
pp. V1-V10 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Kristopher A. Innanen

We have found a fast and efficient method for the interpolation of nonstationary seismic data. The method uses the fast generalized Fourier transform (FGFT) to identify the space-wavenumber evolution of nonstationary spatial signals at each temporal frequency. The nonredundant nature of the FGFT gives this interpolation method a significant computational advantage. A least-squares fitting scheme is then used to retrieve the optimal FGFT coefficients representative of the ideal interpolated data. For randomly sampled data on a regular grid, we seek a sparse representation of the FGFT coefficients to retrieve the missing samples. In addition, to interpolate regularly sampled seismic data at a given frequency, we use a mask function derived from the FGFT coefficients of the low frequencies. Synthetic and real data examples are provided to examine the performance of the method.
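
The FGFT itself is not available in standard libraries, so the sketch below substitutes an ordinary FFT for it and recovers missing samples with a soft-thresholded (sparsity-promoting) least-squares iteration, which mirrors the sparse-coefficient step described above; all names and parameters are illustrative assumptions:

```python
import numpy as np

def sparse_fourier_interp(d, known, n_iter=100, lam=0.1):
    """Fill missing samples of d (zeros where `known` is False) by
    promoting sparsity of the transform coefficients (ISTA).  A plain
    FFT stands in for the paper's FGFT."""
    n = d.size
    x = np.zeros(n, dtype=complex)                 # transform coefficients
    for _ in range(n_iter):
        synth = np.fft.ifft(x) * np.sqrt(n)        # unitary synthesis
        r = known * (d - synth)                    # residual on known samples
        x = x + np.fft.fft(r) / np.sqrt(n)         # adjoint (gradient) step
        x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0.0)
    return np.real(np.fft.ifft(x) * np.sqrt(n))
```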


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
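
A rough single-frequency sketch of damped least-squares datuming with a diagonal-limited Hessian, under stated assumptions: a scalar velocity `c`, an explicitly built plane-wave extrapolation operator, and a hypothetical `bandwidth` parameter for how many off-diagonals of the Hessian to keep. None of these names come from the paper:

```python
import numpy as np

def damped_ls_datum(d, x_irr, dz, w, c, n_k=128, eps=1e-2, bandwidth=8):
    """Datum one temporal frequency w of irregularly recorded data
    d(x_irr) through a depth step dz by weighted, damped least squares,
    keeping only a limited number of Hessian diagonals."""
    dx_mean = np.mean(np.diff(np.sort(x_irr)))
    k = np.linspace(-np.pi, np.pi, n_k) / dx_mean       # wavenumber grid
    kz = np.emath.sqrt((w / c) ** 2 - k ** 2)           # vertical wavenumber
    # Extrapolation phase; evanescent components decay rather than grow.
    phase = np.exp(-1j * kz.real * dz) * np.exp(-np.abs(kz.imag) * dz)
    G = np.exp(1j * np.outer(x_irr, k)) * phase         # forward operator
    H = G.conj().T @ G + eps * np.eye(n_k)              # damped Hessian
    # Banded approximation: keep only entries near the main diagonal.
    i, j = np.indices(H.shape)
    H_band = np.where(np.abs(i - j) <= bandwidth, H, 0.0)
    return np.linalg.solve(H_band, G.conj().T @ d)      # datumed spectrum
```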


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. V185-V195 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio Sacchi

We have developed a ground-roll attenuation strategy for seismic records that adopts the curvelet transform. The curvelet transform decomposes seismic events based on their dip and frequency content. Curvelet panels that contain only reflection or only ground-roll energy can be used to correct the curvelet panels in which reflection and ground-roll energies are mixed. We build a curvelet-domain mask function from the ground-roll-free curvelet coefficients (high frequencies) and downscale it to the ground-roll-contaminated curvelet coefficients (low frequencies). The mask function is then used inside a least-squares optimization scheme to preserve the seismic reflections and attenuate the ground roll. Synthetic and real seismic data examples demonstrate the proposed ground-roll attenuation method.
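
The curvelet transform requires external packages (e.g., CurveLab), so the sketch below mimics only the mask-building idea, in the f-k domain rather than the curvelet domain: dips carrying reflection energy at the ground-roll-free high frequencies are also passed at the low frequencies, and everything else in the low band is muted. All thresholds and names are assumptions, and the paper's least-squares step is reduced to direct masking:

```python
import numpy as np

def groundroll_mask_sketch(data, dt, dx, f_split=15.0, thresh=0.3):
    """Estimate a dip mask from high frequencies (> f_split Hz, assumed
    ground-roll free) and copy it down to the contaminated low band."""
    nt, nx = data.shape
    D = np.fft.fftshift(np.fft.fft2(data))
    f = np.fft.fftshift(np.fft.fftfreq(nt, dt))
    k = np.fft.fftshift(np.fft.fftfreq(nx, dx))
    F, K = np.meshgrid(f, k, indexing="ij")
    P = K / np.where(np.abs(F) > 1e-9, F, 1e-9)         # dip p = k/f
    hi = np.abs(F) > f_split                            # clean band

    # Dips with significant reflection energy at high frequencies.
    p_bins = np.linspace(-2e-3, 2e-3, 201)
    e = np.array([np.abs(D)[hi & (np.abs(P - p) < 2e-5)].sum()
                  for p in p_bins])
    keep = p_bins[e > thresh * e.max()]

    mask = hi.astype(float)                             # pass clean band
    for p in keep:                                      # same dips, low band
        mask[(~hi) & (np.abs(P - p) < 2e-5)] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(D * mask)))
```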


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. A1-A5 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio Sacchi

We tested a strategy for beyond-alias interpolation of seismic data using Cadzow reconstruction. The strategy enables Cadzow reconstruction to be used for interpolation of regularly sampled seismic records. First, in the frequency-space (f-x) domain, we generated a Hankel matrix from the spatial samples of the low frequencies. To perform interpolation at a given frequency, the spatial samples were interlaced with zero samples and another Hankel matrix was generated from the zero-interlaced data. Next, the rank-reduced eigendecomposition of the Hankel matrix at low frequencies was used for beyond-alias preconditioning of the Hankel matrix at the given frequency. Finally, antidiagonal averaging of the conditioned Hankel matrix produced the final interpolated data. In addition, the multidimensional extension of the proposed algorithm was explained. The proposed method provides a unifying thread between reduced-rank Cadzow reconstruction and beyond-alias f-x prediction-error interpolation. Synthetic and real data examples were provided to examine the performance of the proposed interpolation method.
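
The core Cadzow step is compact enough to show directly in NumPy: the sketch below rank-reduces one frequency-slice Hankel matrix and recovers the slice by antidiagonal averaging. The beyond-alias preconditioning with the low-frequency eigendecomposition is omitted for brevity, and the function name and rank choice are illustrative:

```python
import numpy as np

def cadzow_fx_slice(s, rank):
    """One f-x frequency slice s(x): Hankel embedding, SVD rank
    reduction, and antidiagonal averaging back to a 1-D slice."""
    n = s.size
    l = n // 2 + 1
    H = np.array([s[i:i + n - l + 1] for i in range(l)])  # Hankel matrix
    U, sv, Vh = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * sv[:rank]) @ Vh[:rank]             # rank reduction
    out = np.zeros(n, dtype=complex)                       # antidiagonal
    cnt = np.zeros(n)                                      # averaging
    for i in range(Hr.shape[0]):
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt
```

For beyond-alias interpolation, the same machinery would be applied to the zero-interlaced slice, with the low-frequency eigenvectors conditioning the rank reduction as the abstract describes.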


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB203-WB210 ◽  
Author(s):  
Gilles Hennenfent ◽  
Lloyd Fenelon ◽  
Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. The new generation differs from the previous one in the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an ℓ2-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct ℓ1-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction, the second-generation NFDCT is lossless, unlike the first generation. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Furthermore, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate; the signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts varying smoothly, with limited acquisition imprint.
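
A toy version of the sparsity-promoting inversion behind NCRSI, with a nonequispaced Fourier synthesis matrix standing in for the curvelet synthesis operator and ISTA as the ℓ1 solver; `n_coef`, `lam`, and the function names are assumptions, not the paper's code:

```python
import numpy as np

def ncrsi_sketch(d, x_irr, x_out, n_coef, lam=0.05, n_iter=200):
    """Reconstruct data recorded at irregular x_irr onto the regular
    grid x_out via l1-regularized inversion (ISTA) of an explicit
    nonequispaced Fourier synthesis operator."""
    L = x_out[-1] - x_out[0] + (x_out[1] - x_out[0])    # assumed period
    m = np.arange(-(n_coef // 2), n_coef - n_coef // 2)
    A = np.exp(2j * np.pi * np.outer(x_irr, m) / L) / np.sqrt(n_coef)
    step = 1.0 / np.linalg.norm(A, 2) ** 2              # safe step size
    c = np.zeros(n_coef, dtype=complex)
    for _ in range(n_iter):
        c = c + step * (A.conj().T @ (d - A @ c))       # gradient step
        c = np.exp(1j * np.angle(c)) * np.maximum(np.abs(c) - lam * step, 0)
    S = np.exp(2j * np.pi * np.outer(x_out, m) / L) / np.sqrt(n_coef)
    return np.real(S @ c)
```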


Geophysics ◽  
2014 ◽  
Vol 79 (3) ◽  
pp. V75-V80 ◽  
Author(s):  
Muhammad Sajid ◽  
Deva Ghosh

The ability to resolve seismic thin beds is a function of the bed thickness and the frequency content of the seismic data. To achieve high resolution, the seismic data must have a broad frequency bandwidth. We developed an algorithm that improved the bandwidth of the seismic data without greatly boosting high-frequency noise. The algorithm employed a set of three cascaded difference operators to boost high frequencies, combined with a simple smoothing operator to boost low frequencies. The outputs of these operators were balanced and added to the original signal to produce whitened data. The four convolutional operators are quite short, so the algorithm is highly efficient. Synthetic and real data examples demonstrated the effectiveness of this algorithm, and comparison with a conventional whitening algorithm showed it to be competitive.
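
A direct reading of the recipe in NumPy; the operator lengths, weights, and balancing rule are illustrative guesses rather than the paper's exact choices:

```python
import numpy as np

def whiten_trace(s, w_sig=1.0, w_hi=1.0, w_lo=1.0):
    """Boost high frequencies with three cascaded difference operators
    and low frequencies with a short smoother; balance both outputs to
    the signal's RMS and add them back to the original trace."""
    hi = s.copy()
    for _ in range(3):                                  # cascaded differences
        hi = np.convolve(hi, [1.0, -1.0], mode="same")
    lo = np.convolve(s, np.ones(5) / 5.0, mode="same")  # simple smoother

    def rms(x):
        return np.sqrt(np.mean(x ** 2)) + 1e-12

    # Balance each band to the original signal's RMS, then combine.
    return (w_sig * s
            + w_hi * hi * rms(s) / rms(hi)
            + w_lo * lo * rms(s) / rms(lo))
```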


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set into small reflection pieces, each with a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by event. We also extend our approach to subtract noises that must be approximated by several templates. By itself, the method can correct only small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
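
Since the complex curvelet transform is not in standard libraries, the sketch below keeps only the key insight, namely that a complex coefficient's amplitude and phase can scale and shift an event, using analytic (Hilbert-transformed) traces in overlapping windows as a crude stand-in for CCT coefficients; window length and overlap weighting are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def adaptive_subtract(d, t, win=64):
    """Match the noise template t to the data d window by window with a
    single complex scalar (amplitude scaling plus phase shift), then
    subtract the adapted template."""
    da, ta = hilbert(d), hilbert(t)                 # analytic traces
    out = d.astype(float).copy()
    for i0 in range(0, d.size - win + 1, win // 2):
        sl = slice(i0, i0 + win)
        # Complex least-squares scalar for this window.
        a = np.vdot(ta[sl], da[sl]) / (np.vdot(ta[sl], ta[sl]) + 1e-12)
        out[sl] -= 0.5 * np.real(a * ta[sl])        # 50%-overlap weight
    return out
```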


Geophysics ◽  
2003 ◽  
Vol 68 (5) ◽  
pp. 1633-1638 ◽  
Author(s):  
Yanghua Wang

The spectrum of a discrete Fourier transform (DFT) is estimated by linear inversion and used to produce desired seismic traces with regular spatial sampling from an irregularly sampled data set. The essence of such a wavefield reconstruction method is to solve the DFT inverse problem with a particular constraint that imposes a sparseness criterion on the least-squares solution. A working definition of the sparseness constraint is presented to improve stability and efficiency. A sparseness measurement is then used to measure the relative sparseness of the two DFT spectra obtained from inversion with and without the sparseness constraint; it is a pragmatic indicator of the magnitude of sparseness needed for wavefield reconstruction. For seismic trace regularization, an antialiasing condition must be fulfilled for the regularized trace interval, whereas optimal trace coordinates in the output can be obtained by minimizing the distances between the newly generated traces and the original traces in the input. Application to real seismic data reveals the effectiveness of the technique and the significance of the sparseness constraint in the least-squares solution.
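
A compact illustration of sparseness-constrained DFT inversion using iteratively reweighted least squares, a common way to realize such a constraint (the paper's exact working definition may differ); grid names and damping values are assumptions:

```python
import numpy as np

def sparse_dft_regularize(d, x_irr, x_out, n_coef, lam=0.1, n_irls=5):
    """Estimate DFT coefficients from irregular traces d(x_irr) with a
    sparseness-promoting reweighted least-squares inversion, then
    evaluate the spectrum on the regular output grid x_out."""
    L = x_out[-1] - x_out[0] + (x_out[1] - x_out[0])
    m = np.arange(-(n_coef // 2), n_coef - n_coef // 2)
    A = np.exp(2j * np.pi * np.outer(x_irr, m) / L)
    AhA, Ahd = A.conj().T @ A, A.conj().T @ d
    x = np.linalg.solve(AhA + lam * np.eye(n_coef), Ahd)   # damped start
    for _ in range(n_irls):
        # Reweighting: small coefficients are penalized more strongly.
        W = np.diag(1.0 / (np.abs(x) + 1e-6))
        x = np.linalg.solve(AhA + lam * W, Ahd)
    S = np.exp(2j * np.pi * np.outer(x_out, m) / L)
    return np.real(S @ x)
```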


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB189-WB202 ◽  
Author(s):  
Mostafa Naghizadeh ◽  
Mauricio D. Sacchi

We propose a robust interpolation scheme for aliased, regularly sampled seismic data that uses the curvelet transform. In a first pass, the curvelet transform is used to compute the curvelet coefficients of the aliased seismic data. These coefficients are divided into two groups of scales: alias-free and alias-contaminated. The alias-free curvelet coefficients are upscaled to estimate a mask function that is used to constrain the inversion of the alias-contaminated scale coefficients. The mask function is incorporated into the inversion via a minimum-norm least-squares algorithm that determines the curvelet coefficients of the desired alias-free data. Once the alias-free coefficients are determined, the curvelet synthesis operator is used to reconstruct seismograms at new spatial positions. The proposed method can be used to reconstruct regularly and irregularly sampled seismic data. We believe that our exposition leads to a clear unifying thread between f-x and f-k beyond-alias interpolation methods and curvelet reconstruction. As in f-x and f-k interpolation, we stress the necessity of examining seismic data at different scales (frequency bands) to come up with viable and robust interpolation schemes. Synthetic and real data examples illustrate the performance of the proposed curvelet interpolation method.
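
A one-frequency f-x stand-in for the scale-masking idea (the paper itself works with curvelet scales): the wavenumber spectrum of an alias-free low-frequency slice is stretched to the target frequency and used to weight a damped least-squares estimate of the aliased slice. The stretch factor, threshold, and damping are illustrative:

```python
import numpy as np

def beyond_alias_slice(s_low, s_f, scale, lam=0.01, thresh=0.2):
    """Mask the aliased slice s_f using the spectrum of the alias-free
    slice s_low, stretched in wavenumber by scale = f / f_low."""
    n = s_low.size
    k = np.fft.fftfreq(n)
    order = np.argsort(k)
    amp_low = np.abs(np.fft.fft(s_low))
    # Along a fixed dip p, k scales with f: k_low = k_high / scale.
    mask = np.interp(k / scale, k[order], amp_low[order])
    mask = (mask > thresh * mask.max()).astype(float)
    # Mask-weighted, damped least-squares estimate of the spectrum.
    S_est = mask * np.fft.fft(s_f) / (mask ** 2 + lam)
    return np.fft.ifft(S_est)
```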


Geophysics ◽  
2005 ◽  
Vol 70 (4) ◽  
pp. V87-V95 ◽  
Author(s):  
Sheng Xu ◽  
Yu Zhang ◽  
Don Pham ◽  
Gilles Lambaré

Seismic data regularization, which spatially transforms irregularly sampled acquired data to regularly sampled data, is a long-standing problem in seismic data processing. Data regularization can be implemented with Fourier theory, using a method that estimates the spatial frequency content on an irregularly sampled grid; the data can then be reconstructed on any desired grid. Difficulties arise from the nonorthogonality of the global Fourier basis functions on an irregular grid, which results in the problem of “spectral leakage”: energy from one Fourier coefficient leaks onto others. We investigate the nonorthogonality of the Fourier basis on an irregularly sampled grid and propose a technique called the “antileakage Fourier transform” to overcome spectral leakage. In the antileakage Fourier transform, we first solve for the most energetic Fourier coefficient, assuming that it causes the most severe leakage. To attenuate all aliases and the leakage of this component onto other Fourier coefficients, the data component corresponding to this most energetic Fourier coefficient is subtracted from the original input on the irregular grid. We then use this new input to solve for the next Fourier coefficient, repeating the procedure until all Fourier coefficients are estimated. This procedure is equivalent to “reorthogonalizing” the global Fourier basis on an irregularly sampled grid. We demonstrate the robustness and effectiveness of this technique with successful applications to both synthetic and real data examples.
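
The greedy loop at the heart of the antileakage Fourier transform is short enough to sketch directly; `n_pick` and the normalization are illustrative choices:

```python
import numpy as np

def alft(d, x_irr, k_grid, n_pick=50):
    """Antileakage Fourier transform: repeatedly estimate the most
    energetic Fourier coefficient on the irregular grid, subtract its
    contribution from the residual, and continue."""
    r = d.astype(complex).copy()
    coef = np.zeros(len(k_grid), dtype=complex)
    for _ in range(n_pick):
        # Discrete Fourier sums over the irregular locations, all k at once.
        c = np.exp(-2j * np.pi * np.outer(k_grid, x_irr)) @ r / len(x_irr)
        j = np.argmax(np.abs(c))                  # most energetic component
        coef[j] += c[j]
        r -= c[j] * np.exp(2j * np.pi * k_grid[j] * x_irr)
    return coef   # synthesize on any regular grid with the same exponentials
```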

