Antileakage Fourier transform for seismic data regularization

Geophysics, 2005, Vol 70 (4), pp. V87-V95
Author(s): Sheng Xu, Yu Zhang, Don Pham, Gilles Lambaré

Seismic data regularization, which spatially transforms irregularly sampled acquired data to regularly sampled data, is a long-standing problem in seismic data processing. Regularization can be implemented with Fourier theory, using a method that estimates the spatial frequency content on an irregularly sampled grid; the data can then be reconstructed on any desired grid. Difficulties arise from the nonorthogonality of the global Fourier basis functions on an irregular grid, which results in the problem of “spectral leakage”: energy from one Fourier coefficient leaks onto others. We investigate this nonorthogonality and propose a technique called the “antileakage Fourier transform” to overcome spectral leakage. We first solve for the most energetic Fourier coefficient, on the assumption that it causes the most severe leakage. To attenuate all aliases and the leakage of this component onto other Fourier coefficients, the data component corresponding to this most energetic Fourier coefficient is subtracted from the original input on the irregular grid. We then use this new input to solve for the next Fourier coefficient, repeating the procedure until all Fourier coefficients are estimated. This procedure is equivalent to “reorthogonalizing” the global Fourier basis on an irregularly sampled grid. We demonstrate the robustness and effectiveness of this technique with successful applications to both synthetic and real data examples.
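The iterative estimate-pick-subtract loop described above can be sketched in a few lines of Python. This is a minimal 1D illustration under assumed grid, normalization, and stopping choices, not the authors' implementation:

```python
import numpy as np

def alft(x, d, freqs, n_iter=50):
    """Antileakage Fourier transform, greedy 1D sketch.

    x     : irregular sample positions
    d     : data values at x
    freqs : candidate spatial frequencies (cycles per unit distance)
    """
    res = d.astype(complex).copy()
    coef = np.zeros(len(freqs), dtype=complex)
    # Fourier basis functions evaluated on the irregular grid
    B = np.exp(2j * np.pi * np.outer(x, freqs))
    for _ in range(n_iter):
        est = B.conj().T @ res / len(x)   # coefficient estimates from the residual
        k = int(np.argmax(np.abs(est)))   # most energetic component first
        coef[k] += est[k]
        res -= est[k] * B[:, k]           # subtract it, removing its leakage

    return coef

# Usage: recover the two coefficients of a cosine sampled on an irregular grid
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 80))
d = np.cos(2 * np.pi * 0.7 * x)
freqs = np.arange(-2.0, 2.05, 0.1)
c = alft(x, d, freqs)
```

Once `coef` is known, the data can be evaluated on any desired regular grid by summing the same basis functions there.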

Geophysics, 2010, Vol 75 (6), pp. WB113-WB120
Author(s): Sheng Xu, Yu Zhang, Gilles Lambaré

Wide-azimuth seismic data sets are generally acquired more sparsely than narrow-azimuth seismic data sets. This brings new challenges to seismic data regularization algorithms, which aim to reconstruct seismic data for regularly sampled acquisition geometries from seismic data recorded on irregularly sampled acquisition geometries. The Fourier-based seismic data regularization algorithm first estimates the spatial frequency content on an irregularly sampled input grid; it then reconstructs the seismic data on any desired grid. Three main difficulties arise in this process: the “spectral leakage” problem, the accurate estimation of Fourier components, and the effective antialiasing scheme used inside the algorithm. The antileakage Fourier transform algorithm can overcome the spectral leakage problem and handles aliased data. To generalize it to higher dimensions, we propose an area weighting scheme to accurately estimate the Fourier components. However, the computational cost increases dramatically with the sampling dimensions. A windowed Fourier transform reduces the computational cost in high-dimensional applications but causes undersampling in the wavenumber domain and introduces artifacts known as the Gibbs phenomenon. As a solution, we propose a wavenumber-domain oversampling inversion scheme. The robustness and effectiveness of the proposed algorithm are demonstrated with applications to both synthetic and real data examples.
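A one-dimensional analogue of the area weighting idea can be sketched as follows: each irregular sample is weighted by the size of the cell it occupies before the Fourier components are estimated. This is a hedged illustration only; the paper's scheme operates on higher-dimensional acquisition geometries, and the function names are mine:

```python
import numpy as np

def area_weights(x):
    """Cell sizes for sorted 1D positions: half the distance to each neighbor."""
    mid = 0.5 * (x[1:] + x[:-1])
    edges = np.concatenate(([x[0]], mid, [x[-1]]))
    return np.diff(edges)

def weighted_fourier_estimate(x, d, freqs):
    """Area-weighted Fourier components of d sampled at sorted positions x."""
    w = area_weights(x)
    B = np.exp(-2j * np.pi * np.outer(freqs, x))
    return (B * w) @ d / w.sum()

# Usage: the weighting compensates for clustered samples, so a 0.5 cycles/unit
# cosine yields a coefficient near 0.5 at f = 0.5 and near 0 at f = 0
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 200))
est = weighted_fourier_estimate(x, np.cos(2 * np.pi * 0.5 * x),
                                np.array([0.0, 0.5]))
```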


Geophysics, 2011, Vol 76 (1), pp. V1-V10
Author(s): Mostafa Naghizadeh, Kristopher A. Innanen

We have found a fast and efficient method for the interpolation of nonstationary seismic data. The method uses the fast generalized Fourier transform (FGFT) to identify the space-wavenumber evolution of nonstationary spatial signals at each temporal frequency. The nonredundant nature of the FGFT gives this interpolation method a significant computational advantage. A least-squares fitting scheme is then used to retrieve the optimal FGFT coefficients representative of the ideal interpolated data. For randomly sampled data on a regular grid, we seek a sparse representation of the FGFT coefficients to retrieve the missing samples. In addition, to interpolate regularly sampled seismic data at a given frequency, we use a mask function derived from the FGFT coefficients of the low frequencies. Synthetic and real data examples are used to examine the performance of the method.
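The role of the coefficient mask can be illustrated with ordinary FFTs standing in for the FGFT. In this hedged sketch, `mask` plays the part of the support derived from the low frequencies, and a Landweber iteration is one simple way to carry out the least-squares fit; none of this is the authors' implementation:

```python
import numpy as np

def masked_ls_interp(d, known, mask, n_iter=200, step=1.0):
    """Least-squares recovery of masked Fourier coefficients.

    d     : trace values, with zeros at missing positions
    known : boolean array, True where samples were recorded
    mask  : 0/1 weights selecting the wavenumbers allowed to be nonzero
    """
    c = np.zeros(len(d), dtype=complex)
    for _ in range(n_iter):
        recon = np.fft.ifft(c, norm="ortho")
        r = np.where(known, d - recon, 0.0)        # misfit on recorded samples only
        c = mask * (c + step * np.fft.fft(r, norm="ortho"))
    return np.fft.ifft(c, norm="ortho").real

# Usage: reconstruct a decimated harmonic whose wavenumber support is known
rng = np.random.default_rng(2)
n = 64
sig = np.cos(2 * np.pi * 5 * np.arange(n) / n)
known = rng.random(n) > 0.3                        # ~70% of samples survive
mask = np.zeros(n)
mask[5] = mask[n - 5] = 1.0                        # support of the cosine
out = masked_ls_interp(np.where(known, sig, 0.0), known, mask)
```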


Geophysics, 2010, Vol 75 (6), pp. WB203-WB210
Author(s): Gilles Hennenfent, Lloyd Fenelon, Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one in the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an [Formula: see text]-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct [Formula: see text]-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction the second-generation NFDCT is lossless, unlike the first generation. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Second, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts smoothly varying with limited acquisition imprint.


Geophysics, 2012, Vol 77 (2), pp. V71-V80
Author(s): Mostafa Naghizadeh

I introduce a unified approach for denoising and interpolation of seismic data in the frequency-wavenumber ([Formula: see text]) domain. First, an angular search in the [Formula: see text] domain is carried out to identify a sparse set of dominant dips, not only using low frequencies but over the whole frequency range. Then, an angular mask function is designed based on the identified dominant dips. The mask function is utilized with the least-squares fitting principle for optimal denoising or interpolation of data. The least-squares fit is applied directly in the time-space domain. The proposed method can be used to interpolate regularly sampled data as well as randomly sampled data on a regular grid. Synthetic and real data examples are provided to examine the performance of the proposed method.
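The search for dominant dips can be imitated in the time domain with a simple slant-stack energy scan. This is a simplified stand-in for the paper's angular search in the f-k domain, with a dip grid and test pulse of my choosing:

```python
import numpy as np

def slant_stack_energy(data, t, x, dips):
    """Energy of the slant stack for each candidate dip p:
    stack(tau, p) = sum over x of data(tau + p*x, x)."""
    energies = []
    for p in dips:
        stack = np.zeros_like(t)
        for ix, xi in enumerate(x):
            # shift each trace so an event with dip p becomes flat, then stack
            stack += np.interp(t + p * xi, t, data[:, ix], left=0.0, right=0.0)
        energies.append(np.sum(stack ** 2))
    return np.array(energies)

# Usage: a single linear event with dip 0.001 s/m produces maximum
# slant-stack energy at that dip
t = np.arange(0.0, 1.0, 0.004)
x = np.arange(12) * 10.0
p0 = 0.001
data = np.exp(-((t[:, None] - (0.3 + p0 * x[None, :])) / 0.02) ** 2)
dips = np.linspace(0.0, 0.002, 21)
E = slant_stack_energy(data, t, x, dips)
```

In the actual method the identified dips define an angular mask in f-k, and the least-squares fit is carried out with that mask rather than by stacking.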


Geophysics, 2007, Vol 72 (1), pp. V21-V32
Author(s): P. M. Zwartjes, M. D. Sacchi

There are numerous methods for interpolating uniformly sampled, aliased seismic data, but few can handle the combination of nonuniform sampling and aliasing. We combine the principles of Fourier reconstruction of nonaliased, nonuniformly sampled data with the ideas of frequency-wavenumber [Formula: see text] interpolation of aliased, uniformly sampled data in a new two-step algorithm. In the first step, we estimate the Fourier coefficients at the lower nonaliased temporal frequencies from the nonuniformly sampled data. The coefficients are then used in the second step as an a priori model to distinguish between aliased and nonaliased energy at the higher, aliased temporal frequencies. By using a nonquadratic model penalty in the inversion, both the artifacts in the Fourier domain from nonuniform sampling and the aliased energy are suppressed. The underlying assumption is that events are planar; therefore, the algorithm is applied to seismic data in overlapping spatiotemporal windows.
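The second step can be sketched at a single aliased temporal frequency: a diagonal model weight built from the nonaliased low-frequency spectrum steers the inversion toward the unaliased wavenumber. This is a hedged illustration; the weighting form, damping, and names are my assumptions, and the paper uses a nonquadratic penalty rather than this quadratic one:

```python
import numpy as np

def prior_weighted_fit(x, d, k, prior, damp=0.05):
    """Fit Fourier coefficients m (wavenumbers k, cycles/unit) to data d at
    positions x, penalizing wavenumbers the low-frequency prior does not
    support: min |A m - d|^2 + damp^2 |m / W|^2, solved via m = W u."""
    A = np.exp(2j * np.pi * np.outer(x, k))
    W = prior / (prior.max() + 1e-12) + 1e-3   # floor keeps weights nonzero
    Aw = A * W
    u = np.linalg.solve(Aw.conj().T @ Aw + damp ** 2 * np.eye(len(k)),
                        Aw.conj().T @ d)
    return W * u

# Usage: on a unit-spaced grid, 0.3 cycles/unit aliases with -0.7; a prior
# peaked near 0.3 (as estimated from lower frequencies) resolves the ambiguity
x = np.arange(10, dtype=float)
k = np.arange(-1.0, 1.05, 0.1)
d = np.exp(2j * np.pi * 0.3 * x)
prior = np.exp(-((k - 0.3) / 0.15) ** 2)
m = prior_weighted_fit(x, d, k, prior)
```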


Geophysics, 2011, Vol 76 (3), pp. W15-W30
Author(s): Gary F. Margrave, Michael P. Lamoureux, David C. Henley

We have extended the method of stationary spiking deconvolution of seismic data to the context of nonstationary signals in which the nonstationarity is due to attenuation processes. As in the stationary case, we have assumed a statistically white reflectivity and a minimum-phase source and attenuation process. This extension is based on a nonstationary convolutional model, which we have developed and related to the stationary convolutional model. To facilitate our method, we have devised a simple numerical approach to calculate the discrete Gabor transform, or complex-valued time-frequency decomposition, of any signal. Although the Fourier transform renders stationary convolution into exact, multiplicative factors, the Gabor transform, or windowed Fourier transform, induces only an approximate factorization of the nonstationary convolutional model. This factorization serves as a guide to develop a smoothing process that, when applied to the Gabor transform of the nonstationary seismic trace, estimates the magnitude of the time-frequency attenuation function and the source wavelet. By assuming that both are minimum-phase processes, their phases can be determined. Gabor deconvolution is accomplished by spectral division in the time-frequency domain. The complex-valued Gabor transform of the seismic trace is divided by the complex-valued estimates of attenuation and source wavelet to estimate the Gabor transform of the reflectivity. An inverse Gabor transform recovers the time-domain reflectivity. We demonstrate the technique on both synthetic and real data.
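A bare-bones sketch of the spectral-division step, using an STFT as the Gabor transform: it divides by a smoothed magnitude estimate only, skipping the paper's minimum-phase construction, and the window length, smoother size, and stabilization constant are assumed parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import stft, istft

def gabor_decon(trace, fs, nperseg=64, stab=0.01, smooth=(5, 5)):
    """Divide the Gabor (STFT) spectrum of the trace by a smoothed estimate of
    its own magnitude, a stand-in for the wavelet-times-attenuation surface."""
    _, _, Z = stft(trace, fs=fs, nperseg=nperseg)
    mag = uniform_filter(np.abs(Z), size=smooth)   # smoothed magnitude estimate
    Zr = Z / (mag + stab * mag.max())              # stabilized spectral division
    _, refl = istft(Zr, fs=fs, nperseg=nperseg)    # back to the time domain
    return refl

# Usage: whiten a sparse reflectivity convolved with a decaying wavelet
rng = np.random.default_rng(3)
refl_true = rng.standard_normal(512) * (rng.random(512) < 0.1)
wavelet = np.exp(-np.arange(32) / 6.0) * np.cos(np.arange(32) * 0.8)
trace = np.convolve(refl_true, wavelet, mode="same")
est = gabor_decon(trace, fs=500.0)
```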


Geophysics, 2018, Vol 83 (3), pp. V157-V170
Author(s): Ebrahim Ghaderpour, Wenyuan Liao, Michael P. Lamoureux

Spatial transformation of an irregularly sampled data series to a regularly sampled data series is a challenging problem in many areas such as seismology. Discrete Fourier analysis is limited to regularly sampled data series; the least-squares spectral analysis (LSSA), on the other hand, can analyze an irregularly sampled data series. Although the LSSA method takes into account the correlation among the sinusoidal basis functions of irregularly spaced series, it still suffers from the problem of spectral leakage: energy leaks from one spectral peak into another. We have developed an iterative method called antileakage LSSA to attenuate the spectral leakage and consequently regularize irregular data series. In this method, we first search for the spectral peak with the highest energy and remove (suppress) it from the original data series. In the next step, we search for a new peak with the highest energy in the residual data series and remove the new and the old components simultaneously from the original data series using a least-squares method. We repeat this procedure until all significant spectral peaks are estimated and removed simultaneously from the original data series. In addition, we address the problem of random noise attenuation in the data series by applying a certain confidence level for significant peaks in the spectrum. We demonstrate the robustness of our method on irregularly sampled synthetic and real data sets, and we compare the results with the antileakage Fourier transform and the arbitrarily sampled Fourier transform.
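The pick-then-refit-simultaneously loop can be sketched as follows. This is a minimal illustration with a cosine/sine basis and a fixed number of peaks; the paper's confidence-level test for significance is omitted:

```python
import numpy as np

def antileakage_lssa(t, d, freqs, n_peaks=2):
    """Antileakage LSSA sketch: locate the most energetic spectral peak of the
    residual, then re-fit ALL picked components to the ORIGINAL series
    simultaneously by least squares before forming the next residual."""
    picked, res, amps = [], d.copy(), None
    for _ in range(n_peaks):
        # energy scan of the residual over the candidate frequencies
        energy = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ res)
        k = int(np.argmax(energy))
        if k not in picked:
            picked.append(k)
        # simultaneous least-squares refit of every picked sinusoid
        cols = []
        for j in picked:
            cols += [np.cos(2 * np.pi * freqs[j] * t),
                     np.sin(2 * np.pi * freqs[j] * t)]
        B = np.column_stack(cols)
        amps, *_ = np.linalg.lstsq(B, d, rcond=None)
        res = d - B @ amps
    return [freqs[j] for j in picked], amps

# Usage: two tones sampled irregularly in time
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 40.0, 300))
d = 2 * np.cos(2 * np.pi * 0.3 * t) + np.cos(2 * np.pi * 1.1 * t)
fr, amps = antileakage_lssa(t, d, np.arange(0.05, 2.0, 0.05))
```

The simultaneous refit is what distinguishes this scheme from a purely greedy subtraction: each new pick lets all previously picked components readjust against the original series.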


First Break, 2009, Vol 27 (9)
Author(s): M. Schonewille, A. Klaedtke, A. Vigner, J. Brittan, T. Martin

Mathematics, 2021, Vol 9 (11), pp. 1254
Author(s): Xue Han, Xiaofei Yan, Deyu Zhang

Let $P_c(x)=\{p\le x \mid p,\ [p^c]\ \text{are primes}\}$, $c\in\mathbb{R}^+\setminus\mathbb{N}$, and let $\lambda_{\mathrm{sym}^2 f}(n)$ be the $n$-th Fourier coefficient associated with the symmetric square $L$-function $L(s,\mathrm{sym}^2 f)$. For any $A>0$, we prove that the mean value of $\lambda_{\mathrm{sym}^2 f}(n)$ over $P_c(x)$ is $\ll x\log^{-A-2}x$ for almost all $c\in\bigl(\varepsilon,(5+\sqrt{3})/8-\varepsilon\bigr)$ in the sense of Lebesgue measure; under the Riemann Hypothesis, it holds for all $c\in(0,1)$. Furthermore, we obtain an asymptotic formula for $\lambda_f^2(n)$ over $P_c(x)$:
$$\sum_{\substack{p,\,q\ \text{prime}\\ p\le x,\ q=[p^c]}}\lambda_f^2(p)=\frac{x}{c\log^2 x}\bigl(1+o(1)\bigr),$$
for almost all $c\in\bigl(\varepsilon,(5+\sqrt{3})/8-\varepsilon\bigr)$, where $\lambda_f(n)$ is the normalized $n$-th Fourier coefficient associated with a holomorphic cusp form $f$ for the full modular group.


Geophysics, 2006, Vol 71 (5), pp. U67-U76
Author(s): Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield for approximately two orders of magnitude less in cost; but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
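Posing regularization as weighted, damped least squares can be sketched for a single temporal frequency. This is a hedged illustration: the full Hessian is formed explicitly here, whereas the paper's efficiency comes precisely from approximating it with a limited number of diagonals, and the basis and damping choices are mine:

```python
import numpy as np

def damped_ls_regularize(d, x_irreg, x_reg, k, damp=0.1):
    """Damped least-squares regularization at one temporal frequency.

    m are Fourier coefficients on wavenumbers k (cycles/unit); solves
    min |A m - d|^2 + damp^2 |m|^2, then evaluates m on the regular grid."""
    A = np.exp(2j * np.pi * np.outer(x_irreg, k))
    H = A.conj().T @ A + damp ** 2 * np.eye(len(k))   # full damped Hessian
    m = np.linalg.solve(H, A.conj().T @ d)
    return np.exp(2j * np.pi * np.outer(x_reg, k)) @ m

# Usage: move a harmonic recorded on an irregular array onto a regular one
rng = np.random.default_rng(5)
x_ir = np.sort(rng.uniform(0.0, 10.0, 150))
x_rg = np.linspace(0.0, 10.0, 50)
k = np.arange(-1.0, 1.05, 0.1)
rec = damped_ls_regularize(np.exp(2j * np.pi * 0.5 * x_ir), x_ir, x_rg, k)
```

Keeping only a few diagonals of `H` (as the paper proposes) replaces the dense solve with a much cheaper banded one, at the cost of a controllable dip limitation.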

