Sparseness‐constrained least‐squares inversion: Application to seismic wave reconstruction

Geophysics, 2003, Vol. 68 (5), pp. 1633-1638
Author(s): Yanghua Wang

The spectrum of a discrete Fourier transform (DFT) is estimated by linear inversion and used to produce desired seismic traces with regular spatial sampling from an irregularly sampled data set. The essence of this wavefield reconstruction method is to solve the DFT inverse problem with a constraint that imposes a sparseness criterion on the least-squares solution. A working definition of the sparseness constraint is presented to improve stability and efficiency. A sparseness measure is then used to compare the relative sparseness of the two DFT spectra obtained from inversion with and without the sparseness constraint; it is a pragmatic indicator of the degree of sparseness needed for wavefield reconstruction. For seismic trace regularization, an antialiasing condition must be fulfilled by the regularized trace interval, whereas optimal trace coordinates in the output can be obtained by minimizing the distances between the newly generated traces and the original traces in the input. Application to real seismic data reveals the effectiveness of the technique and the significance of the sparseness constraint in the least-squares solution.
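As a concrete illustration, the following is a minimal sketch of sparseness-constrained spectral inversion, assuming an iteratively reweighted (IRLS-style) diagonal damping as the sparseness constraint; the paper's working definition is not reproduced, and all names and parameter values are illustrative.

```python
# Minimal sketch: estimate DFT coefficients from irregular samples with a
# sparseness-promoting reweighted damping, then evaluate on a regular grid.
import numpy as np

def sparse_dft_reconstruct(x_irreg, d, k, n_iter=10, eps=1e-3):
    """Estimate spectrum m from irregular samples d at coordinates x_irreg."""
    # Forward operator: d = A m, with A[j, i] = exp(i k_i x_j)
    A = np.exp(1j * np.outer(x_irreg, k))
    m = np.linalg.lstsq(A, d, rcond=None)[0]       # unconstrained start
    for _ in range(n_iter):
        # Sparseness constraint: small coefficients are damped more heavily
        W = np.diag(1.0 / (np.abs(m) + eps))
        m = np.linalg.solve(A.conj().T @ A + eps * W, A.conj().T @ d)
    return m

# Usage: reconstruct a two-wavenumber signal on a regular output grid
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 40))             # irregular coordinates
k = 2 * np.pi * np.arange(-8, 9)                   # wavenumber grid
d = np.exp(2j * np.pi * 3 * x) + 0.5 * np.exp(-2j * np.pi * 5 * x)
m = sparse_dft_reconstruct(x, d, k)
x_reg = np.linspace(0.0, 1.0, 64, endpoint=False)  # regular trace positions
d_reg = np.exp(1j * np.outer(x_reg, k)) @ m        # regularized traces
```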

Geophysics, 2006, Vol. 71 (5), pp. U67-U76
Author(s): Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared with a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
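The following is a minimal sketch of damped least-squares regularization/datuming with a banded Hessian approximation, under a toy one-frequency phase-shift extrapolator; the operator, sizes, and damping values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: regularization/datuming as damped least squares, with an
# optional band-limited (cheap, dip-limited) approximation to the Hessian.
import numpy as np

def extrapolation_operator(x_out, x_in, kz=30.0):
    """Toy one-frequency extrapolator mapping a regular datum to the
    irregular recording array (illustrative phase-shift kernel)."""
    return np.exp(1j * kz * np.abs(np.subtract.outer(x_in, x_out)))

def datum_regularize(A, d, damping=1e-2, n_diag=None):
    """Solve (A^H A + eps I) m = A^H d. If n_diag is given, keep only the
    central 2*n_diag+1 diagonals of the Hessian (cheap approximation)."""
    H = A.conj().T @ A
    if n_diag is not None:
        i, j = np.indices(H.shape)
        H = np.where(np.abs(i - j) <= n_diag, H, 0.0)  # band-limit Hessian
    H = H + damping * np.eye(H.shape[0])
    return np.linalg.solve(H, A.conj().T @ d)

# Usage: highly irregular receivers, regular output datum
rng = np.random.default_rng(1)
x_in = np.sort(rng.uniform(0, 1, 50))       # irregular recording positions
x_out = np.linspace(0, 1, 40)               # regular output grid
A = extrapolation_operator(x_out, x_in)
d = A @ np.sin(8 * np.pi * x_out)           # synthetic recorded wavefield
m_full = datum_regularize(A, d)             # full Hessian
m_fast = datum_regularize(A, d, n_diag=5)   # banded (dip-limited) version
```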


Geophysics, 2017, Vol. 82 (3), pp. R199-R217
Author(s): Xintao Chai, Shangxu Wang, Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth's Q-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse Q filtering and generates superior Q-compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and will treat them as interference convolved with incorrect Q values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI to nonstationary seismic data with surface-related multiples using the feedback model and NSRI theory. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse Q filtering) extended, but multiples are also taken into account. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input Q values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider Q-filtering effects explicitly. However, there are benefits when NSRI considers multiples: the periodicity and amplitude of the multiples imply the positions of the reflectivities and the amplitude of the wavelet, so multiples assist in overcoming the scaling and shifting ambiguities of conventional formulations in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.
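A minimal sketch of the nonstationary sparse-inversion idea appears below, assuming a crude Q-attenuated Ricker kernel and an off-the-shelf ℓ1 solver (ISTA); the feedback-model terms for multiples and the authors' actual algorithm are not reproduced.

```python
# Minimal sketch: columns of the forward operator are progressively
# Q-attenuated wavelets; reflectivity is recovered by an l1 solver (ISTA).
import numpy as np

def q_wavelet(t, t0, f0=30.0, Q=80.0):
    """Ricker-like wavelet at t0 whose bandwidth shrinks with traveltime,
    a crude stand-in for the earth's Q-filtering effects."""
    f_eff = f0 * np.exp(-np.pi * f0 * t0 / Q)   # attenuated peak frequency
    a = (np.pi * f_eff * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

def nsri_ista(d, G, lam=0.05, n_iter=200):
    """ISTA iteration: r <- soft(r + step * G^T (d - G r))."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2
    r = np.zeros(G.shape[1])
    for _ in range(n_iter):
        r = r + step * (G.T @ (d - G @ r))
        r = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)
    return r

# Usage: two reflectors, the deeper one seen through stronger attenuation
t = np.arange(0, 1.0, 0.002)
G = np.column_stack([q_wavelet(t, t0) for t0 in t])  # nonstationary kernel
r_true = np.zeros_like(t)
r_true[100], r_true[350] = 1.0, -0.7
d = G @ r_true
r_est = nsri_ista(d, G)
```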


Geophysics, 2009, Vol. 74 (4), pp. V59-V67
Author(s): Shoudong Huo, Yanghua Wang

In seismic multiple attenuation, once the multiple models have been built, the effectiveness of the processing depends on the subtraction step. The primary energy is usually partially attenuated during adaptive subtraction if an L2-norm matching filter is used to solve the least-squares problem. The expanded multichannel matching (EMCM) filter is generally effective, but the conservative parameters adopted to preserve primaries can leave some residual multiples. We improve the multiple attenuation result through iterative application of the EMCM filter, which accumulates the effect of subtraction. A Butterworth-type masking filter based on the multiple model can be used to protect most of the primary energy prior to subtraction, so that subtraction is performed on the remaining part to better suppress the multiples without affecting the primaries. Subtraction can also be performed separately for different orders of multiples, as a single subtraction window usually covers different-order multiples with different amplitudes. Theoretical analyses and demonstrations on synthetic and real seismic data sets show that combining these three strategies effectively improves adaptive subtraction in seismic multiple attenuation.
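The following is a minimal single-channel sketch of iterative adaptive subtraction with a least-squares matching filter; the EMCM multichannel structure and the Butterworth-type masking filter are omitted, and the filter length and test signals are illustrative.

```python
# Minimal sketch: fit a short least-squares matching filter that shapes the
# multiple model to the data, subtract, and repeat so the effect accumulates.
import numpy as np
from scipy.linalg import toeplitz

def matching_filter(data, model, nf=21):
    """Least-squares filter f minimizing ||data - model * f||; returns the
    fitted multiple estimate model * f."""
    row = np.zeros(nf)
    row[0] = model[0]
    M = toeplitz(model, row)                    # (n, nf) convolution matrix
    f, *_ = np.linalg.lstsq(M, data, rcond=None)
    return M @ f

def iterative_subtraction(data, model, n_iter=3, nf=21):
    residual = data.copy()
    for _ in range(n_iter):
        residual = residual - matching_filter(residual, model, nf)
    return residual

# Usage: synthetic primary plus a mis-scaled, mis-shifted multiple
rng = np.random.default_rng(2)
n = 500
primary = rng.standard_normal(n) * (np.arange(n) % 97 == 0)  # sparse spikes
data = primary + 0.6 * np.roll(primary, 150)     # true multiple at lag 150
model = 0.5 * np.roll(primary, 148)              # imperfect multiple model
cleaned = iterative_subtraction(data, model)
```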


Geophysics, 2018, Vol. 83 (4), pp. V243-V252
Author(s): Wail A. Mousa

A stable explicit depth wavefield extrapolation is obtained using ℓ1 iterative reweighted least-squares (IRLS) frequency-space (f-x) finite-impulse-response digital filters. The problem of designing such filters to obtain stable images of challenging seismic data is formulated as an ℓ1 IRLS minimization. Prestack depth imaging of the challenging Marmousi model data set was then performed using explicit depth wavefield extrapolation with the proposed ℓ1 IRLS-based algorithm. In terms of extrapolation-filter design accuracy, the ℓ1 IRLS minimization method resulted in an image of higher quality than that of the weighted least-squares method. The method can, therefore, be used to design high-accuracy extrapolation filters.
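Below is a minimal sketch of ℓ1 filter design by IRLS: the weighted least-squares subproblem is re-solved with weights derived from the current residual, so the solution approaches the ℓ1 fit. The desired response, design grid, and tap count are illustrative, not the paper's design.

```python
# Minimal sketch: design FIR taps h so that sum_k |H(w_k) - desired_k| is
# small, via iteratively reweighted least squares (weights ~ 1/sqrt|error|).
import numpy as np

def irls_l1_fir(omega, desired, n_taps=25, n_iter=30, eps=1e-6):
    taps = np.arange(n_taps) - n_taps // 2       # symmetric tap indices
    F = np.exp(-1j * np.outer(omega, taps))      # frequency-response matrix
    h = np.linalg.lstsq(F, desired, rcond=None)[0]   # l2 starting point
    for _ in range(n_iter):
        # w^2 = 1/|e| turns the weighted l2 objective into the l1 objective
        w = 1.0 / np.sqrt(np.abs(desired - F @ h) + eps)
        h = np.linalg.lstsq(F * w[:, None], desired * w, rcond=None)[0]
    return h

# Usage: approximate a toy dispersive extrapolation response
omega = np.linspace(-np.pi, np.pi, 201)
desired = np.exp(1j * 6.0 * np.sin(omega / 2))
h = irls_l1_fir(omega, desired)
```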


Geophysics, 2010, Vol. 75 (6), pp. WB203-WB210
Author(s): Gilles Hennenfent, Lloyd Fenelon, Felix J. Herrmann

We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. The new generation differs from the previous one in how accurate curvelet coefficients are computed from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an ℓ1-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct ℓ1-regularized inversion of the operator that links curvelet coefficients to irregular data. In addition, the second-generation NFDCT is by construction lossless, unlike the first generation. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Furthermore, we combine the second-generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate; the signal-to-reconstruction-error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real-data example shows amplitudes along the main wavefronts varying smoothly, with limited acquisition imprint.
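A minimal sketch of sparsity-promoting reconstruction in this spirit follows, with the FFT standing in for the curvelet transform (curvelets require a dedicated library) and ISTA as the ℓ1 solver; the mask, threshold, and signals are illustrative.

```python
# Minimal sketch: recover a full signal from irregularly kept samples by
# promoting sparsity of its (unitary) Fourier coefficients with ISTA.
import numpy as np

def ista_reconstruct(d_obs, mask, lam=0.02, n_iter=200):
    """mask: boolean array marking observed samples; d_obs: their values."""
    n = mask.size
    c = np.zeros(n, dtype=complex)               # transform coefficients
    for _ in range(n_iter):
        x = np.fft.ifft(c) * np.sqrt(n)          # unitary synthesis
        r = np.zeros(n)
        r[mask] = d_obs - x.real[mask]           # residual on observed traces
        c = c + np.fft.fft(r) / np.sqrt(n)       # adjoint of masked synthesis
        # complex soft threshold: shrink magnitude, keep phase
        c = np.exp(1j * np.angle(c)) * np.maximum(np.abs(c) - lam, 0.0)
    return np.fft.ifft(c).real * np.sqrt(n)

# Usage: jittered decimation of a two-event signal (1D slice)
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
mask = rng.uniform(size=t.size) < 0.5            # keep roughly half
recon = ista_reconstruct(signal[mask], mask)
```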


Geophysics, 2006, Vol. 71 (3), pp. S99-S110
Author(s): Daniel A. Rosales, Biondo Biondi

A new partial-prestack migration operator for manipulating multicomponent data, called converted-wave azimuth moveout (PS-AMO), transforms converted-wave prestack data with an arbitrary offset and azimuth into equivalent data with a new offset and azimuth. This operator is a sequential application of converted-wave dip moveout and its inverse. As expected, PS-AMO reduces to the known expression for AMO in the limiting case where the P velocity equals the S velocity. Moreover, PS-AMO preserves the resolution of dipping events and internally applies a correction for the lateral shift between the common midpoint and the common-reflection/conversion point. An implementation of PS-AMO in the log-stretch frequency-wavenumber domain is computationally efficient. The main applications of the PS-AMO operator are geometry regularization, data reduction through partial stacking, and interpolation of unevenly sampled data. We test our PS-AMO operator by solving 3D acquisition geometry-regularization problems for multicomponent ocean-bottom seismic data. The geometry-regularization problem is posed as a regularized least-squares objective function whose regularization term uses the PS-AMO operator to preserve the resolution of dipping events. Application of this methodology to a portion of the Alba 3D multicomponent ocean-bottom seismic data set shows that we can obtain a satisfactory interpolated data set that honors the physics of converted waves.
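The following is a minimal sketch of geometry regularization as a regularized least-squares problem, with a simple neighbor-averaging matrix standing in for the PS-AMO operator in the regularization term; the actual log-stretch frequency-wavenumber PS-AMO kernel is not reproduced.

```python
# Minimal sketch: L restricts the model (data on the desired regular
# geometry) to the acquired traces; the regularization term penalizes what
# a stand-in prediction operator A cannot explain.
import numpy as np

def regularized_geometry(L, d, A, eps=0.3):
    """Minimize ||L m - d||^2 + eps^2 ||(I - A) m||^2 in closed form."""
    n = L.shape[1]
    R = np.eye(n) - A
    lhs = L.T @ L + eps**2 * (R.T @ R)
    return np.linalg.solve(lhs, L.T @ d)

# Usage: 1D toy -- recover a regular grid from irregular picks
rng = np.random.default_rng(4)
n = 80
keep = np.sort(rng.choice(n, size=40, replace=False))
L = np.eye(n)[keep]                               # selects observed traces
A = 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)  # toy prediction operator
m_true = np.sin(2 * np.pi * np.arange(n) / 16)
m_est = regularized_geometry(L, m_true[keep], A)
```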


Geophysics, 2010, Vol. 75 (4), pp. V51-V60
Author(s): Ramesh (Neelsh) Neelamani, Anatoly Baumstein, Warren S. Ross

We propose a complex-valued curvelet transform (CCT)-based algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set into small reflection pieces, each with its own characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by event. We also extend our approach to subtract noises that require several templates to be approximated. By itself, the method can correct only small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
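Below is a minimal sketch of coefficient-domain template adaptation, with the complex FFT standing in for the CCT: each template coefficient's amplitude is rescaled and its phase rotated toward the data's, within fixed caps that limit how far the template may move. The caps, signals, and transform are illustrative; the paper's event-by-event curvelet logic is far richer.

```python
# Minimal sketch: adapt a noise template by adjusting the amplitude and
# phase of its complex transform coefficients, then subtract it.
import numpy as np

def adapt_template(data, template, max_gain=2.0, max_rot=np.pi / 4):
    D, T = np.fft.fft(data), np.fft.fft(template)
    # amplitude correction, capped to avoid matching the signal itself
    gain = np.clip(np.abs(D) / (np.abs(T) + 1e-12), 1 / max_gain, max_gain)
    rot = np.angle(D * np.conj(T))                # phase error in (-pi, pi]
    rot = np.clip(rot, -max_rot, max_rot)         # cap the allowed shift
    return np.fft.ifft(T * gain * np.exp(1j * rot)).real

# Usage: template is a weaker, slightly shifted copy of the noise
n = 256
t = np.arange(n)
noise = np.exp(-((t - 100.0) / 6) ** 2)
data = np.sin(2 * np.pi * t / 40) + noise
template = 0.7 * np.exp(-((t - 103.0) / 6) ** 2)  # misaligned template
signal_est = data - adapt_template(data, template)
```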


Geophysics, 2012, Vol. 77 (2), pp. V71-V80
Author(s): Mostafa Naghizadeh

I introduce a unified approach for denoising and interpolation of seismic data in the frequency-wavenumber (f-k) domain. First, an angular search in the f-k domain is carried out to identify a sparse number of dominant dips, using not only the low frequencies but the whole frequency range. Then, an angular mask function is designed based on the identified dominant dips. The mask function is used with the least-squares fitting principle for optimal denoising or interpolation of the data. The least-squares fit is applied directly in the time-space domain. The proposed method can be used to interpolate regularly sampled data as well as randomly sampled data on a regular grid. Synthetic and real data examples are provided to examine the performance of the proposed method.
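A minimal sketch of the mask-then-fit idea follows, using an energy threshold on the f-k spectrum as a stand-in for the angular dip search, and a projection-style iteration as the least-squares fit; the threshold and test data are illustrative.

```python
# Minimal sketch: keep only dominant f-k components (the mask), then fit
# the masked synthesis to the live traces by alternating projections.
import numpy as np

def fk_mask_interpolate(d_obs, live, keep_frac=0.02, n_iter=100):
    """d_obs: (nt, nx) with dead traces zeroed; live: boolean (nx,)."""
    spec = np.fft.fft2(d_obs)
    thresh = np.quantile(np.abs(spec), 1 - keep_frac)
    mask = np.abs(spec) >= thresh                 # dominant-dip mask
    d = d_obs.copy()
    for _ in range(n_iter):
        d = np.fft.ifft2(np.fft.fft2(d) * mask).real  # project onto mask
        d[:, live] = d_obs[:, live]               # honor recorded traces
    return d

# Usage: two linear events with ~30% of traces dead
rng = np.random.default_rng(5)
nt, nx = 128, 64
t, x = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
data = np.sin(2 * np.pi * (t + 2 * x) / 32) + np.sin(2 * np.pi * (t - x) / 24)
live = rng.uniform(size=nx) > 0.3
d_obs = data * live                               # zero the dead traces
d_int = fk_mask_interpolate(d_obs, live)
```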


Geophysics, 2020, Vol. 85 (2), pp. V119-V130
Author(s): Yingying Wang, Benfeng Wang, Ning Tu, Jianhua Geng

Seismic trace interpolation is an important technique because irregularly or insufficiently sampled data along the spatial direction lead to inevitable errors in multiple suppression, imaging, and inversion. Many interpolation methods have been studied for irregularly sampled data. Inspired by the ideas behind the autoencoder and the convolutional neural network, we perform seismic trace interpolation using a convolutional autoencoder (CAE). The irregularly sampled data are treated as corrupted data. Using a training data set of pairs of corrupted and complete data, the CAE automatically learns to extract features from the corrupted data and to reconstruct the complete data from those features. This avoids assumptions made by traditional trace-interpolation methods, such as linearity of events, low rank, or sparsity. In addition, once CAE network training is completed, corrupted seismic data can be interpolated immediately at very low computational cost. A CAE network composed of three convolutional layers and three deconvolutional layers is designed to explore the capabilities of CAE-based seismic trace interpolation on an irregularly sampled data set. To address the scarcity of complete shot gathers in field-data applications, the network trained on synthetic data is used to initialize the network training on field data, a transfer-learning strategy. Experiments on synthetic and field data sets indicate the validity and flexibility of the trained CAE. Compared with a curvelet-transform-based method, CAE achieves comparable or better interpolation performance efficiently. The transfer-learning strategy enhances training efficiency on field data and improves the interpolation performance of CAE with limited training data.
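A minimal PyTorch sketch of a three-convolutional-layer, three-deconvolutional-layer CAE of the kind described follows; channel counts, kernel sizes, and the training loop are illustrative choices, not the authors' configuration.

```python
# Minimal sketch: a convolutional autoencoder mapping a corrupted
# (zero-filled) gather to a complete one.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, 2, 1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, 2, 1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, 2, 1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Usage: train on (corrupted, complete) pairs; transfer learning would
# simply load these weights before continuing training on field data.
model = CAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
complete = torch.randn(8, 1, 64, 64)                  # stand-in gathers
corrupt = complete * (torch.rand(8, 1, 1, 64) > 0.3)  # kill random traces
for _ in range(5):
    loss = nn.functional.mse_loss(model(corrupt), complete)
    opt.zero_grad()
    loss.backward()
    opt.step()
```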


Geophysics, 1970, Vol. 35 (5), pp. 785-811
Author(s): Sven Treitel

The transition from single‐channel to multichannel data processing systems requires substantial modifications of the simpler single‐channel model. While the response function of a single‐channel digital filter can be specified in terms of scalar‐valued weighting coefficients, the corresponding response function of a multichannel filter is more conveniently described by matrix‐valued weighting coefficients. Correlation coefficients, which are scalars in the single‐channel case, now become matrices. Multichannel sampled data are manipulated with greater ease by recourse to multichannel z‐transform theory. Exact inverse filters are calculable by a matrix inversion technique which is the counterpart to the computation of exact single‐channel inverse operators by polynomial division. The delay properties of the original filter govern the stability of its inverse. This inverse is expressible in the form of a two‐stage cascaded system, whose first stage is a single‐channel recursive filter. Optimum multichannel filtering systems result from a generalization of the single‐channel least squares error criterion. The corresponding correlation matrices are now functions of coefficients which are themselves matrices. The system of normal matrix‐valued equations that is obtained in this manner can be solved by means of Robinson’s generalization of the Wiener‐Levinson algorithm. Inverse multichannel filters are designed by specifying the desired output to be an identity matrix rather than a unit spike; if this matrix occurs at zero lag, the least squares filter is minimum‐delay. Simple numerical examples serve to illustrate the design principles involved and to indicate the types of problems that can be attacked with multichannel least squares processors.
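As an illustration, the following is a minimal sketch of multichannel least-squares filter design with matrix-valued correlation coefficients: the block-Toeplitz normal equations are assembled explicitly and solved by a direct dense solve rather than by the Wiener-Levinson recursion described above; channel counts and lags are illustrative.

```python
# Minimal sketch: design matrix-valued filter coefficients F[l] so that
# y(t) = sum_l F[l] @ x(t - l) approximates d(t) in least squares.
import numpy as np

def multichannel_wiener(x, d, n_lag):
    """x, d: (nc, n) arrays of input and desired channels."""
    nc, n = x.shape

    def corr(a, b, l):
        # matrix-valued correlation estimate of E[a(t) b(t-l)^T]
        return a[:, l:] @ b[:, : n - l].T / n

    R = {l: corr(x, x, l) for l in range(n_lag)}
    R.update({-l: R[l].T for l in range(1, n_lag)})  # R(-l) = R(l)^T
    G = [corr(d, x, l) for l in range(n_lag)]
    # Normal equations sum_j F_j R(i-j) = G_i, transposed and stacked into
    # one block-Toeplitz system: block (i, j) is R(j - i).
    B = np.block([[R[j - i] for j in range(n_lag)] for i in range(n_lag)])
    rhs = np.vstack([g.T for g in G])
    sol = np.linalg.solve(B, rhs)                    # (n_lag*nc, nc)
    return np.stack([sol[l * nc:(l + 1) * nc].T for l in range(n_lag)])

# Usage: approximately invert a small two-channel mixing/delay system
rng = np.random.default_rng(6)
src = rng.standard_normal((2, 2000))
mixed = np.vstack([src[0] + 0.5 * np.roll(src[1], 1),
                   src[1] - 0.3 * np.roll(src[0], 2)])
F = multichannel_wiener(mixed, src, n_lag=5)         # approximate inverse
```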

