Regularization and datuming of seismic data by weighted, damped least squares

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced: only a limited number of diagonals in the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with fewer operator artifacts than a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude less cost, but, compared with the full method, it is dip limited in a controllable way. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
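The core computation, generic weighted, damped least squares via the normal equations, can be sketched as follows. The dense operator `A`, the weights, and the damping value are illustrative stand-ins, not the extrapolation operators of the paper; the Hessian product below is the costly part that the paper approximates with a limited number of diagonals.

```python
import numpy as np

def weighted_damped_lstsq(A, d, w, eps):
    """Solve min ||W^(1/2)(A m - d)||^2 + eps ||m||^2 via normal equations.

    A   : forward operator (n_data x n_model); complex entries allowed
    d   : observed data (n_data,)
    w   : data weights (n_data,), e.g. 0 for dead or missing traces
    eps : damping stabilizing the Hessian A^H W A
    """
    W = np.diag(w)
    H = A.conj().T @ W @ A              # Hessian: the expensive object
    rhs = A.conj().T @ (w * d)
    return np.linalg.solve(H + eps * np.eye(A.shape[1]), rhs)

# toy example: recover a model despite irregular (zero-weighted) observations
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
m_true = np.arange(1.0, 6.0)
d = A @ m_true
w = np.ones(20)
w[::4] = 0.0                            # mimic an irregular recording array
m_est = weighted_damped_lstsq(A, d, w, eps=1e-6)
```

Because the system is overdetermined even after down-weighting, the damped solution recovers the model to within the damping-induced bias.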

Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. V243-V252
Author(s):  
Wail A. Mousa

A stable explicit depth wavefield extrapolation is obtained using [Formula: see text] iterative reweighted least-squares (IRLS) frequency-space ([Formula: see text]-[Formula: see text]) finite-impulse-response digital filters. The problem of designing such filters to obtain stable images of challenging seismic data is formulated as an [Formula: see text] IRLS minimization. Prestack depth imaging of the challenging Marmousi model data set was then performed using explicit depth wavefield extrapolation with the proposed [Formula: see text] IRLS-based algorithm. In terms of extrapolation-filter design accuracy, the [Formula: see text] IRLS minimization method produced a higher quality image than the weighted least-squares method. The method can, therefore, be used to design high-accuracy extrapolation filters.
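A generic IRLS fit of a short FIR filter to a desired frequency response, in the spirit of the design problem above, can be sketched as follows. The cosine basis, band edge, and tap count are invented for illustration; this is not Mousa's algorithm, only the standard l1-via-reweighting idea it builds on.

```python
import numpy as np

def irls_l1_fit(X, b, n_iter=50, delta=1e-6):
    """Approximate min ||X h - b||_1 by iteratively reweighted least squares."""
    h = np.linalg.lstsq(X, b, rcond=None)[0]     # l2 starting point
    for _ in range(n_iter):
        r = X @ h - b
        w = 1.0 / np.maximum(np.abs(r), delta)   # reweight from current residual
        Xw = X * w[:, None]
        h = np.linalg.solve(X.T @ Xw, Xw.T @ b)  # weighted normal equations
    return h

# fit a 7-tap zero-phase FIR filter to an ideal low-pass response,
# sampled on a dense frequency grid (a stand-in for the design problem)
taps = 7
freqs = np.linspace(0, np.pi, 256)
X = np.cos(np.outer(freqs, np.arange(taps)))     # symmetric-filter basis
b = (freqs <= np.pi / 3).astype(float)           # ideal desired response
h = irls_l1_fit(X, b)
resp = X @ h
```

Relative to the plain least-squares fit, the reweighting concentrates large residuals near the band edge and keeps the passband and stopband flatter, which is the behavior sought for stable extrapolators.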


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event-by-event. We also extend our approach to subtract noises that require several templates to be approximated. By itself, the method can only correct small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
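The coefficient-by-coefficient adaptation idea can be sketched with the FFT standing in for the complex curvelet transform: each complex coefficient carries an amplitude and a phase, so small amplitude errors and time shifts in the template become per-coefficient corrections. The gain cap and the single-template setup are our simplifications, not the CCT algorithm itself.

```python
import numpy as np

def adapt_and_subtract(data, template, max_gain=2.0):
    """Adapt a noise template to data coefficient-by-coefficient, then subtract."""
    D = np.fft.fft(data)
    T = np.fft.fft(template)
    ratio = D / np.where(np.abs(T) > 1e-12, T, 1e-12)
    # limit the amplitude correction so signal is not subtracted with the noise
    gain = np.clip(np.abs(ratio), 0.0, max_gain)
    adapted = np.fft.ifft(T * gain * np.exp(1j * np.angle(ratio))).real
    return data - adapted

# toy check: the "noise" is a shifted, scaled copy of the template, so the
# per-coefficient amplitude/phase correction should remove it almost exactly
t = np.linspace(0, 1, 256, endpoint=False)
template = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
noise = 0.8 * np.roll(template, 3)      # amplitude error plus a small shift
residual = adapt_and_subtract(noise, template)
```

In practice the data also contain signal, which is why the curvelet domain (local in frequency, location, and dip) is preferred over a global transform: the correction is then applied reflection piece by reflection piece.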


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. U45-U57 ◽  
Author(s):  
Lianlian Hu ◽  
Xiaodong Zheng ◽  
Yanting Duan ◽  
Xinfei Yan ◽  
Ying Hu ◽  
...  

In exploration geophysics, the first arrivals on data acquired under complicated near-surface conditions are often characterized by significant static corrections, weak energy, low signal-to-noise ratio, and dramatic phase change, and they are difficult to pick accurately with traditional automatic procedures. We approach this problem by applying a U-shaped fully convolutional network (U-net) to first-arrival picking, which we formulate as a binary segmentation problem. U-net can recognize inherent patterns of the first arrivals by combining attributes of arrivals in space and time on data of varying quality. An effective workflow based on U-net is presented for fast and accurate picking. A set of seismic waveform data and their corresponding first-arrival times are used to train the network in a supervised learning approach, and the trained model is then used to detect the first arrivals for other seismic data. Our method is applied to one synthetic data set and three field data sets of low quality to identify the first arrivals. Results indicate that U-net needs only a few annotated samples for learning and can efficiently detect first-arrival times with high precision on complicated seismic data from a large survey. As training data accumulate for a greater variety of first arrivals, a trained U-net has the potential to identify the first arrivals on new seismic data directly.
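The post-processing step of such a workflow, turning a binary segmentation map into pick times, can be sketched as follows. The threshold rule and the map layout are our assumptions for illustration, not the published workflow's exact rule.

```python
import numpy as np

def picks_from_mask(prob, threshold=0.5, dt=0.004):
    """Convert a segmentation map to first-arrival times.

    prob : (n_samples, n_traces) map of P(sample lies after the first break),
           i.e. the kind of output a binary-segmentation network produces.
    """
    mask = prob >= threshold
    n_samples, n_traces = mask.shape
    picks = np.full(n_traces, np.nan)
    for i in range(n_traces):
        idx = np.argmax(mask[:, i])      # first True going down the trace
        if mask[idx, i]:                 # guard against all-False traces
            picks[i] = idx * dt
    return picks

# synthetic map: the arrival moves out linearly across eight traces
n_samples, n_traces = 100, 8
true_idx = 20 + 3 * np.arange(n_traces)
prob = np.zeros((n_samples, n_traces))
for i, k in enumerate(true_idx):
    prob[k:, i] = 1.0
picks = picks_from_mask(prob)
```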


Geophysics ◽  
2010 ◽  
Vol 75 (2) ◽  
pp. S73-S79
Author(s):  
Ørjan Pedersen ◽  
Sverre Brandsberg-Dahl ◽  
Bjørn Ursin

One-way wavefield extrapolation methods are used routinely in 3D depth migration algorithms for seismic data. Because of their efficient computer implementations, such one-way methods have become increasingly popular, and a wide variety of them has been introduced. In salt provinces, the migration algorithms must be able to handle large velocity contrasts because the velocities in salt are generally much higher than in the surrounding sediments. This can be a challenge for one-way wavefield extrapolation methods. We present a depth migration method using one-way propagators within lateral windows for handling the large velocity contrasts associated with salt-sediment interfaces. Using adaptive windowing, we can handle large perturbations locally in a manner similar to the beamlet propagator, thus limiting the impact of the errors on the global wavefield. We demonstrate the performance of our method by applying it to synthetic data from the 2D SEG/EAGE [Formula: see text] salt model and an offshore real data example.
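A one-way phase-shift step applied window by window can be sketched as follows. The non-overlapping windows and per-window constant velocity are a crude stand-in for the paper's adaptive windowing; a practical implementation would taper and overlap the windows to suppress edge artifacts.

```python
import numpy as np

def phase_shift_window(wavefield, v, dz, dx, freq):
    """One-way phase-shift extrapolation of one frequency slice through dz,
    assuming the velocity v is constant within the lateral window."""
    kx = 2 * np.pi * np.fft.fftfreq(wavefield.size, dx)
    kz = np.sqrt(((2 * np.pi * freq / v) ** 2 - kx ** 2).astype(complex))
    # the complex sqrt puts evanescent kz on the positive imaginary axis,
    # so those components decay rather than grow
    return np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))

def windowed_extrapolate(wavefield, windows, velocities, dz, dx, freq):
    """Extrapolate each lateral window with its own velocity and reassemble."""
    out = np.zeros(wavefield.size, dtype=complex)
    for (i0, i1), v in zip(windows, velocities):
        out[i0:i1] = phase_shift_window(wavefield[i0:i1], v, dz, dx, freq)
    return out

# sanity check: a propagating plane wave keeps unit amplitude through one step
nx, dx, dz, freq = 64, 10.0, 10.0, 10.0
x = np.arange(nx) * dx
wave = np.exp(2j * np.pi * (2 / (nx * dx)) * x)   # kx well below omega/v
out = windowed_extrapolate(wave, [(0, nx)], [2000.0], dz, dx, freq)
```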


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. M29-M41 ◽  
Author(s):  
Mahdi H. Almutlaq ◽  
Gary F. Margrave

We evaluated the concept of surface-consistent matching filters for processing time-lapse seismic data, in which matching filters are convolutional filters that minimize the sum-squared error between two signals. Because in the Fourier domain a matching filter is the spectral ratio of the two signals, we extended the well-known surface-consistent hypothesis such that the data term is a trace-by-trace spectral ratio of two data sets instead of only one (i.e., surface-consistent deconvolution). To avoid unstable division of spectra, we computed the spectral ratios in the time domain by first designing trace-sequential, least-squares matching filters, then Fourier transforming them. A subsequent least-squares solution then factored the trace-sequential matching filters into four operators: two surface-consistent (source and receiver) and two subsurface-consistent (offset and midpoint). We evaluated a time-lapse synthetic data set with nonrepeatable acquisition parameters, complex near-surface geology, and a variable subsurface reservoir layer. We computed the four-operator surface-consistent matching filters from two surveys, baseline and monitor, then applied these matching filters to the monitor survey to match it to the baseline survey over a temporal window where changes were not expected. This algorithm significantly reduced the effect of most of the nonrepeatable parameters, such as differences in source strength, receiver coupling, wavelet bandwidth and phase, and static shifts. We computed the normalized root-mean-square difference on raw stacked data (baseline and monitor) and obtained a mean value of 70%. After applying the four-operator surface-consistent matching filters, this value was reduced to about 13.6%, computed from the final stacks.
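The first step, a trace-sequential least-squares matching filter, can be sketched as follows; the filter length and the synthetic distortion below are invented for illustration, and the subsequent four-operator factorization is not shown.

```python
import numpy as np

def matching_filter(monitor, base, nf):
    """Least-squares filter f minimizing ||f * monitor - base||^2
    (the trace-sequential step before any surface-consistent factorization)."""
    n = len(base)
    # causal convolution matrix of the monitor trace, nf filter taps
    C = np.zeros((n, nf))
    for j in range(nf):
        C[j:, j] = monitor[: n - j]
    f, *_ = np.linalg.lstsq(C, base, rcond=None)
    return f

# toy check: the monitor is the baseline convolved with a known short,
# minimum-phase distortion; the matching filter should undo it
rng = np.random.default_rng(1)
base = rng.standard_normal(200)
g = np.array([1.0, 0.5, 0.2])                  # nonrepeatable wavelet change
monitor = np.convolve(base, g)[:200]
f = matching_filter(monitor, base, nf=40)
matched = np.convolve(monitor, f)[:200]
```

In the Fourier domain this filter is the spectral ratio of the two traces, which is why designing it in the time domain and then transforming avoids unstable division of spectra.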


Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. R173-R187 ◽  
Author(s):  
Huaizhen Chen ◽  
Kristopher A. Innanen ◽  
Tiansheng Chen

P- and S-wave inverse quality factors quantify seismic wave attenuation, which is related to several key reservoir parameters (porosity, saturation, and viscosity). Estimating the inverse quality factors from observed seismic data provides additional and useful information during gas-bearing reservoir prediction. First, we developed an approximate reflection coefficient and attenuative elastic impedance (QEI) in terms of the inverse quality factors, and we then established an approach to estimate elastic properties (P- and S-wave impedances, and density) and attenuation (P- and S-wave inverse quality factors) from seismic data at different incidence angles and frequencies. The approach is implemented as a two-step inversion: a model-based and damped least-squares inversion for QEI, and a Bayesian Markov chain Monte Carlo inversion for the inverse quality factors. Synthetic data tests confirm that P- and S-wave impedances and inverse quality factors are reasonably estimated in the case of moderate data error or noise. Application of the established approach to a real data set suggests that it is robust and that physically meaningful inverse quality factors can be estimated from seismic data acquired over a gas-bearing reservoir.
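The second step, a Markov chain Monte Carlo inversion, can be sketched with a random-walk Metropolis sampler on a single inverse quality factor. The exponential amplitude-decay forward model, the fixed 30 Hz frequency, and all parameter values below are invented stand-ins for illustration, not the paper's QEI formulation.

```python
import numpy as np

def metropolis(loglike, x0, n_samples=4000, step=0.005, seed=0):
    """Minimal random-walk Metropolis sampler for a 1D parameter."""
    rng = np.random.default_rng(seed)
    x, lx = x0, loglike(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        xp = x + step * rng.standard_normal()
        lp = loglike(xp)
        if np.log(rng.random()) < lp - lx:   # accept/reject on the posterior
            x, lx = xp, lp
        out[i] = x
    return out

# toy posterior: infer an inverse quality factor q from noisy amplitude decays
rng = np.random.default_rng(2)
q_true, sigma, f = 0.02, 0.01, 30.0
t = np.linspace(0.1, 1.0, 50)
obs = np.exp(-np.pi * f * q_true * t) + sigma * rng.standard_normal(50)

def loglike(q):
    pred = np.exp(-np.pi * f * q * t)        # amplitude decay exp(-pi f q t)
    return -0.5 * np.sum((obs - pred) ** 2) / sigma ** 2

samples = metropolis(loglike, x0=0.05)
q_est = samples[1000:].mean()                # discard burn-in
```

Beyond the point estimate, the spread of the retained samples gives the uncertainty on the inverse quality factor, which is the practical payoff of the Bayesian step.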


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulation on the basis of mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others; when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, results for a real data set are analyzed.
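The simulation design, repeated sampling and an MSE comparison of competing estimators, can be sketched on a simpler stand-in: the rate of an ordinary exponential distribution, where the ML and UMVU estimators have closed forms. This illustrates the comparison methodology only, not the generalized inverted exponential results.

```python
import numpy as np

def mse_of_estimators(n=30, n_rep=5000, lam=2.0, seed=0):
    """Monte Carlo MSE of two estimators of an exponential rate lam."""
    rng = np.random.default_rng(seed)
    ml = np.empty(n_rep)
    umvu = np.empty(n_rep)
    for r in range(n_rep):
        x = rng.exponential(1 / lam, n)
        ml[r] = n / x.sum()          # maximum-likelihood estimator (biased)
        umvu[r] = (n - 1) / x.sum()  # unbiased (UMVU) estimator of lam
    return np.mean((ml - lam) ** 2), np.mean((umvu - lam) ** 2)

mse_ml, mse_umvu = mse_of_estimators()
```

For this stand-in the ordering matches the paper's qualitative finding: the UMVU estimator attains the smaller MSE, and the two estimators converge as the sample size grows.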


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. Quantifying the uncertainty in such estimations is necessary if information about pressure- and saturation-related changes is to be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework. Here, the solution of the problem is represented by a probability density function (PDF), providing estimations of uncertainties as well as direct estimations of the properties. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
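The prior-times-likelihood construction of a posterior PDF can be sketched on a grid for two coupled changes. The linear "reflection-difference" sensitivities, noise level, and Gaussian prior below are invented stand-ins, not the rock-physics relations or likelihood model of the paper.

```python
import numpy as np

# grid over normalized pressure and saturation changes
dP = np.linspace(-1, 1, 101)
dS = np.linspace(-1, 1, 101)
P, S = np.meshgrid(dP, dS, indexing="ij")

prior = np.exp(-0.5 * (P ** 2 + S ** 2) / 0.5 ** 2)   # Gaussian prior

# linearized forward model: two angle stacks of PP reflection differences
obs = np.array([0.12, -0.05])
G = np.array([[0.3, 0.4],
              [0.5, -0.2]])                            # assumed sensitivities
sigma = 0.02
pred0 = G[0, 0] * P + G[0, 1] * S
pred1 = G[1, 0] * P + G[1, 1] * S
like = np.exp(-0.5 * ((obs[0] - pred0) ** 2 +
                      (obs[1] - pred1) ** 2) / sigma ** 2)

post = prior * like
post /= post.sum()               # posterior PDF on the grid
pdf_P = post.sum(axis=1)         # marginal for the pressure change
dP_map = dP[np.argmax(pdf_P)]    # point estimate; the spread gives uncertainty
```

The marginal PDFs deliver both the estimates and their uncertainties in one object, which is the attraction of the Bayesian formulation described above.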


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. R199-R217 ◽  
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth’s [Formula: see text]-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse [Formula: see text] filtering and generates superior [Formula: see text] compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and will treat them as interference convolved with incorrect [Formula: see text] values. However, multiples contain information about subsurface properties. To use the information carried by multiples, with the feedback model and NSRI theory, we adapt NSRI to the context of nonstationary seismic data with surface-related multiples. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse [Formula: see text] filtering) extended, but multiples are also considered. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that given a wavelet, the input [Formula: see text] values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider [Formula: see text] filtering effects explicitly. However, there are benefits to NSRI considering multiples. The periodicity and amplitude of the multiples imply the position of the reflectivities and the amplitude of the wavelet. Multiples assist in overcoming the scaling and shifting ambiguities of conventional problems in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support the aforementioned findings and reveal the stability, capabilities, and limitations of the proposed method.
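The nonstationarity at the heart of this problem, each reflector returning a wavelet attenuated according to its own traveltime, can be sketched as a 1D forward model. The Ricker source and the dispersion-free amplitude decay are our simplifications for illustration, not the NSRI formulation.

```python
import numpy as np

def q_filtered_trace(refl, dt, q, f0=30.0):
    """Nonstationary forward model of the earth's Q-filtering (1D sketch):
    each reflectivity spike is convolved with a wavelet whose spectrum is
    attenuated in proportion to that spike's traveltime."""
    n = len(refl)
    freqs = np.fft.rfftfreq(n, dt)
    t = np.arange(n) * dt
    # Ricker source wavelet centered at 0.05 s
    arg = np.pi * f0 * (t - 0.05)
    src = (1 - 2 * arg ** 2) * np.exp(-arg ** 2)
    SRC = np.fft.rfft(src)
    trace = np.zeros(n)
    for k in np.nonzero(refl)[0]:
        atten = np.exp(-np.pi * freqs * (k * dt) / q)  # decay grows with time
        trace += refl[k] * np.roll(np.fft.irfft(SRC * atten, n), k)
    return trace

# two equal reflectors: the deeper event returns weaker and lower frequency,
# which is exactly the resolution loss that inversion aims to compensate
refl = np.zeros(512)
refl[50] = 1.0
refl[300] = 1.0
trace = q_filtered_trace(refl, dt=0.002, q=50.0)
```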

