Prestack structurally constrained impedance inversion

Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. R89-R103 ◽  
Author(s):  
Haitham Hamid ◽  
Adam Pidlisecky ◽  
Larry Lines

Classical prestack impedance inversion methods perform a common-depth-point (CDP) by CDP inversion using Tikhonov-type regularization; we refer to this as laterally unconstrained inversion (1D-LUI). Prestack seismic data usually have a low signal-to-noise ratio, and the 1D-LUI approach is sensitive to noise: the inversion results can be noisy, with unfocused transitions between vertical formation boundaries. Laterally constrained inversion (1D-LCI) can suppress the noise and provide sharp boundaries between inverted 1D models in regions where layer dips are less than 20°. In complex geology, however, the disadvantage of the 1D-LCI approach is lateral smearing of steeply dipping layers. We have developed a structurally constrained inversion (1D-SCI) approach to mitigate the smearing associated with 1D-LCI. SCI involves simultaneous inversion of all seismic CDPs using a regularization operator that forces the solution to honor the local structure. The 1D-SCI results were superior to those of the 1D-LUI and 1D-LCI approaches; the steeply dipping layers are clearly visible on the SCI inverted results.
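
The regularization idea lends itself to a compact numerical illustration. The sketch below contrasts a difference operator taken along a local dip field with the purely lateral one; the function names and the dense-matrix formulation are ours for illustration and are not the authors' implementation.

```python
import numpy as np

def structural_regularizer(nz, nx, dip):
    """First-difference operator taken along a local dip field.

    dip[i, j] is the vertical shift (in samples) of the structure between
    trace j and trace j + 1; a zero dip field reduces this to the purely
    lateral (1D-LCI) constraint. Dense matrices keep the toy readable.
    """
    rows = []
    for j in range(nx - 1):
        for i in range(nz):
            k = int(np.clip(i + dip[i, j], 0, nz - 1))
            r = np.zeros(nz * nx)
            r[i + j * nz] = 1.0          # sample on trace j
            r[k + (j + 1) * nz] = -1.0   # structurally equivalent sample on trace j + 1
            rows.append(r)
    return np.array(rows)

def tikhonov_inversion(G, d, D, lam):
    """Solve min ||G m - d||^2 + lam^2 ||D m||^2 via the normal equations."""
    A = G.T @ G + lam**2 * (D.T @ D)
    return np.linalg.solve(A, G.T @ d)
```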

Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. S273-S283 ◽  
Author(s):  
Jan Thorbecke ◽  
A. J. Berkhout

The common-focus-point (CFP) technology describes prestack migration by focusing in two steps: emission and detection. The output of the first focusing step is a CFP gather, which defines a shot record representing the subsurface response to a focused source wavefield. We propose applying the recursive shot-record depth-migration algorithm to the CFP gathers of a seismic data volume and refer to this process as CFP-gather migration. In situations of complex geology and/or low signal-to-noise ratio, CFP-based image gathers are easier to interpret for nonalignment than conventional image gathers, which makes them better suited for velocity analysis. This important property is illustrated by examples on the Marmousi model.
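
Because CFP-gather migration reuses the standard recursive shot-record scheme, a single depth-extrapolation step already conveys the core mechanics. Below is a constant-velocity phase-shift step in the frequency-wavenumber domain; this is a toy sketch under our own sign conventions, and the paper's proposal is to feed CFP gathers, rather than ordinary shot records, through such a recursion.

```python
import numpy as np

def phase_shift_step(P, v, dz, dt, dx):
    """One recursive depth step of shot-record migration in the f-kx domain.

    P is an (nt, nx) wavefield at the current depth level; the return value
    is the wavefield continued to depth z + dz for a constant velocity v.
    """
    nt, nx = P.shape
    Pf = np.fft.fft2(P)                               # to (f, kx)
    f = np.fft.fftfreq(nt, dt)[:, None]               # temporal frequency (Hz)
    kx = np.fft.fftfreq(nx, dx)[None, :]              # horizontal wavenumber (1/m)
    kz = np.sqrt(np.maximum((f / v) ** 2 - kx ** 2, 0.0))  # evanescent part zeroed
    Pf *= np.exp(-2j * np.pi * kz * np.sign(f) * dz)  # downward continuation
    return np.real(np.fft.ifft2(Pf))
```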


Geophysics ◽  
1982 ◽  
Vol 47 (11) ◽  
pp. 1527-1539 ◽  
Author(s):  
J. T. O’Brien ◽  
W. P. Kamp ◽  
G. M. Hoover

Sign-bit digital recording means that only the sign of the analog signal is recorded, with one bit. In conventional seismic recording, 16 to 20 binary bits are acquired per sample point, so the economic advantages of sign-bit acquisition are immediately obvious. Complete amplitude recovery, comparable to full-gain recording, can be achieved by correct application of sign-bit techniques. We describe the amplitude recovery process in a semi-intuitive manner to promote the understanding necessary for proper application of the technique, and we discuss the dynamic-range requirements in seismic applications. Sign-bit digitization is a completely viable technique for recording seismic data, provided that two conditions are fulfilled. First, in real time, the coherent-signal-to-random-noise ratio must be ≤1.0. Second, the data must be recorded with sufficient redundancy. Redundancy is achieved by source repetition, sweep correlation, and high-fold common-depth-point stacking, usually in combination. Failure to abide by these two restrictions results in (1) incomplete amplitude recovery, i.e., clipped data, and (2) insufficient dynamic range in the recovered signal. We derive the requirement that the signal-to-noise ratio be less than one; we also discuss the consequences of violating that requirement, namely clipping, at various points in the processing sequence. The amount of information lost is proportional to the degree of clipping; a small amount can be tolerated. Calculated expectation values show that, subject to the requirement that the signal-to-noise ratio be less than 1.0, an unbiased estimator can be chosen. The variance of these estimators is approximately the same as that for full-gain seismic techniques. With sufficient redundancy, the variance can be made as small as necessary to achieve the required dynamic range. With proper attention to these findings, sign-bit digitized data are a totally viable tool.
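
The amplitude-recovery argument can be checked numerically in a few lines. The sketch below simulates sign-bit recording of a weak signal buried in Gaussian noise and shows that stacking many one-bit records restores the waveform, since E[sign(s + n)] ≈ sqrt(2/π)·s/σ when |s| ≪ σ; the signal shape and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal whose relative amplitudes we want to recover.
t = np.linspace(0.0, 1.0, 500)
signal = 0.3 * np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)

sigma = 1.0        # noise std; keeps the signal-to-noise ratio below 1
n_records = 5000   # redundancy: repeated shots / high-fold stacking

# Each "record" keeps only the sign of signal + random noise (one bit).
stack = np.zeros_like(signal)
for _ in range(n_records):
    noisy = signal + rng.normal(0.0, sigma, size=signal.shape)
    stack += np.sign(noisy)
stack /= n_records

# For |s| << sigma, E[sign(s + n)] ~ sqrt(2/pi) * s / sigma, so rescaling
# the stack recovers the waveform; the correlation should be close to 1.
recovered = stack * sigma * np.sqrt(np.pi / 2)
print("correlation:", np.corrcoef(recovered, signal)[0, 1])
```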


2021 ◽  
Vol 11 (1) ◽  
pp. 78
Author(s):  
Jianbo He ◽  
Zhenyu Wang ◽  
Mingdong Zhang

When the signal-to-noise ratio of seismic data is very low, velocity spectrum focusing is poor, and the velocity model obtained by conventional velocity analysis methods is not accurate enough, which results in inaccurate migration. For low signal-to-noise ratio (SNR) data, this paper proposes using a partial common-reflection-surface (CRS) stack to build CRS gathers, making full use of all the reflection information in the first Fresnel zone and improving the signal-to-noise ratio of prestack gathers by increasing the number of folds. Because the CRS parameters, namely the emission angle of the zero-offset ray and the radius of curvature of the normal wavefront, are searched on the zero-offset profile, we use ellipse-evolving stacking to improve the quality of the zero-offset section and thereby the reliability of the CRS parameters. After the CRS gathers are obtained, we apply a principal component analysis (PCA) approach to velocity analysis, which improves its noise immunity. Results on models and field data demonstrate the effectiveness of this method.
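
As a rough illustration of why PCA-based velocity analysis tolerates noise, the sketch below scores one (t0, v) pair by the energy fraction of the first principal component of a moveout-corrected window: events flattened at the correct velocity concentrate energy in one component, while random noise spreads it across many. This is our toy stand-in, not the authors' algorithm.

```python
import numpy as np

def pca_coherence(gather, offsets, t0, v, dt, win=11):
    """PCA-based coherence for one (t0, v) pair on a CMP/CRS gather.

    gather: (nt, nx) traces. Returns the fraction of energy captured by
    the first principal component of the moveout-corrected window.
    """
    nt, nx = gather.shape
    half = win // 2
    corrected = np.zeros((win, nx))
    for j, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / v) ** 2)     # hyperbolic moveout
        i0 = int(round(tx / dt))
        if half <= i0 < nt - half:
            corrected[:, j] = gather[i0 - half:i0 + half + 1, j]
    # The first singular value dominates when events flatten at the right velocity.
    s = np.linalg.svd(corrected, compute_uv=False)
    energy = np.sum(s**2)
    return s[0] ** 2 / energy if energy > 0 else 0.0
```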


1977 ◽  
Vol 67 (2) ◽  
pp. 369-382
Author(s):  
John L. Sexton ◽  
A. J. Rudman ◽  
Judson Mead

Measurements of ellipticity of Rayleigh waves recorded in the U.S. Midwest have been examined for azimuth dependence, effects of interference, and repeatability, as well as the hypothesis that a single station may be used to determine local structure. Time- and frequency-domain analyses were performed for each event, with more consistent results from the time-domain method. Results indicate that for the period range of 10 to 50 sec, ellipticity depends primarily upon local structure and does not exhibit significant azimuthal dependence. Most ellipticity values for a given period are repeatable within 5 per cent of other measured values from all source regions, with the greatest deviation being about 10 per cent. The deviations are attributed to interfering waves and/or poor signal-to-noise ratios. Interference effects result in scatter in ellipticity values. An ellipticity peak in the period range of 18 to 22 sec has variable magnitude for different events, depending upon the amount of interference present and the signal-to-noise ratio. Interference effects also manifest themselves as sharp decreases in group-velocity observations even after filtering. Model studies show that ellipticity peaks can exist that are due to the layered structure and not necessarily to interference effects. Ellipticity measurements (10- to 50-sec period range) from a single station are useful for determining a crustal model for the vicinity of the recording station, but should be used in conjunction with other available geophysical and geological data. Ellipticity measurements are shown to be of special value for model determination in areas with sedimentary layering, a result in agreement with the Boore and Toksöz (1969) study.
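
A single-station ellipticity measurement reduces to an amplitude ratio between the radial and vertical components in a narrow period band. The following is one plausible time-domain recipe using envelope ratios; the filter design, bandwidth, and threshold are our assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ellipticity(radial, vertical, period, dt, band=0.1):
    """Time-domain H/V ellipticity of a Rayleigh wave at one period.

    Narrow band-pass both components around 1/period, then take the median
    ratio of the envelope amplitudes. `band` is the fractional bandwidth.
    """
    fc = 1.0 / period
    fny = 0.5 / dt
    b, a = butter(4, [fc * (1 - band) / fny, fc * (1 + band) / fny], "bandpass")
    r = filtfilt(b, a, radial)
    z = filtfilt(b, a, vertical)
    env_r = np.abs(hilbert(r))
    env_z = np.abs(hilbert(z))
    mask = env_z > 0.2 * env_z.max()   # use only well-excited samples
    return np.median(env_r[mask] / env_z[mask])
```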


Geophysics ◽  
2021 ◽  
pp. 1-51
Author(s):  
Chao Wang ◽  
Yun Wang

Reduced-rank filtering is a common method for attenuating noise in seismic data. Because conventional reduced-rank filtering distinguishes signal from noise only according to singular values, it performs poorly when the signal-to-noise ratio is very low or when the data contain high levels of isolated or coherent noise. We therefore developed a novel and robust reduced-rank filtering method based on singular value decomposition in the time-space domain, in which noise is recognized and attenuated according to the characteristics of both the singular values and the singular vectors. The left and right singular vectors corresponding to large singular values are selected first. The right singular vectors are then classified into different categories according to their curve characteristics, such as jump, pulse, and smooth. Each kind of right singular vector is related to a type of noise or seismic event and is corrected using a different filtering technology, such as mean filtering, edge-preserving smoothing, or edge-preserving median filtering. The left singular vectors are also corrected using filtering methods based on frequency attributes such as main frequency and frequency bandwidth. To process seismic data containing a variety of events, local data are extracted along the local dip of each event; the optimal local dip is identified from the singular values and singular vectors of the data matrices extracted along different trial directions. This new filtering method has been applied to synthetic and field seismic data, and its performance is compared with that of several conventional filtering methods. The results indicate that the new method is more robust for data with a low signal-to-noise ratio, strong isolated noise, or coherent noise, and that it overcomes the difficulties associated with selecting an optimal rank.
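
For context, the conventional baseline the authors extend is plain truncated-SVD filtering, sketched below together with a toy classifier for right singular vectors in the spirit of the paper's jump/pulse/smooth categories; the classification rule here is illustrative only.

```python
import numpy as np

def reduced_rank_filter(D, rank):
    """Conventional truncated-SVD (reduced-rank) denoising of a data matrix.

    Keeps only the `rank` largest singular values; the paper goes further
    and also edits the singular *vectors*, which this baseline omits.
    """
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

def classify_right_vector(v):
    """Toy classification of a right singular vector by curve character."""
    dv = np.diff(v)
    if np.max(np.abs(dv)) > 5 * np.median(np.abs(dv) + 1e-12):
        return "pulse_or_jump"   # a few extreme steps: isolated-noise-like
    return "smooth"              # gradual variation: signal-like
```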


Geophysics ◽  
2013 ◽  
Vol 78 (6) ◽  
pp. V229-V237 ◽  
Author(s):  
Hongbo Lin ◽  
Yue Li ◽  
Baojun Yang ◽  
Haitao Ma

Time-frequency peak filtering (TFPF) can efficiently suppress random noise and hence improve the signal-to-noise ratio. However, the results are not always satisfactory when the TFPF is applied to fast-varying seismic signals. We begin with an error analysis of the TFPF using the spread factor of the phase and the cumulants of the noise. This analysis shows that the nonlinear signal component and non-Gaussian random noise cause the peaks of the pseudo-Wigner-Ville distribution (PWVD) to deviate from the instantaneous frequency. This deviation introduces signal distortion and random oscillations into the result of the TFPF. We propose a weighted reassigned smoothed PWVD with less deviation than the PWVD. The proposed method adopts a frequency window to smooth away the residual oscillations in the PWVD and incorporates a weight function into the reassignment, which sharpens the time-frequency distribution and reduces the deviation. Because the weight function is determined by the lateral coherence of the seismic data, the smoothed PWVD is assigned to the accurate instantaneous frequency of the desired signal components by weighted frequency reassignment. As a result, the TFPF based on the weighted reassigned PWVD (TFPF_WR) is more effective in suppressing random noise and preserving signal than the TFPF using the PWVD. We test the proposed method on synthetic and field seismic data and compare it with a wavelet-transform method and an f-x prediction filter. The results show that the proposed method outperforms the other methods in preserving signal at low signal-to-noise ratios.
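
The core TFPF encode-peak-decode loop is easy to demonstrate even without the paper's weighted reassignment. In the sketch below, the noisy trace is encoded as a unit-amplitude frequency-modulated analytic signal and the instantaneous frequency is read off the discrete WVD peak at each sample; the window and FFT lengths are arbitrary choices of ours, and the trace is assumed pre-scaled to a small positive range so the encoded frequency stays below the usable band.

```python
import numpy as np

def tfpf(noisy, mu=0.5):
    """Minimal time-frequency peak filtering via a discrete pseudo-WVD.

    Assumes `noisy` has been pre-scaled into roughly [0, 1/(4*mu)) so that
    the encoded instantaneous frequency avoids aliasing.
    """
    n = len(noisy)
    z = np.exp(1j * 2 * np.pi * mu * np.cumsum(noisy))  # FM encoding
    half, nfft = 32, 256                                # window / FFT lengths
    out = np.zeros(n)
    for t in range(n):
        m = min(half, t, n - 1 - t)
        taus = np.arange(-m, m + 1)
        kernel = z[t + taus] * np.conj(z[t - taus])     # WVD kernel
        spec = np.abs(np.fft.fft(kernel, nfft))
        k = np.argmax(spec[:nfft // 2])                 # peak frequency bin
        out[t] = k / nfft / (2 * mu)                    # decode to amplitude
    return out
```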


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. U53-U63 ◽  
Author(s):  
Andrea Tognarelli ◽  
Eusebio Stucchi ◽  
Alessia Ravasio ◽  
Alfredo Mazzotti

We tested the properties of three different coherency functionals for the velocity analysis of seismic data in subbasalt exploration. We evaluated the performance of the standard semblance algorithm and of two high-resolution coherency functionals based on the use of analytic signals and on covariance estimation along hyperbolic traveltime trajectories. Approximate knowledge of the wavelet was exploited to design filters that matched the primary reflections, further improving the ability of the functionals to highlight the events of interest. The tests were carried out on two synthetic seismograms computed on models reproducing the geologic setting of basaltic intrusions and on common-midpoint gathers from a 3D survey. Synthetic and field data had a very low signal-to-noise ratio, strong multiple contamination, and weak primary subbasalt signals. The results revealed that the high-resolution coherency functionals are more suitable than the semblance algorithm for detecting primary signals and distinguishing them from multiples and other interfering events. This early discrimination between primaries and multiples can help to target specific signal-enhancement and demultiple operations.
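
Of the three functionals tested, standard semblance is the baseline and is simple to state in code. The sketch below evaluates it along a hyperbolic trajectory for a single (t0, v) pair; the high-resolution functionals replace this energy ratio with statistics of analytic signals or of the data covariance.

```python
import numpy as np

def semblance(gather, offsets, t0, v, dt, win=11):
    """Standard semblance along a hyperbolic traveltime trajectory.

    gather: (nt, nx) traces. Returns a coherence value in [0, 1] for the
    trial zero-offset time t0 and stacking velocity v.
    """
    nt, nx = gather.shape
    half = win // 2
    num = den = 0.0
    for i in range(-half, half + 1):
        samples = []
        for j, x in enumerate(offsets):
            tx = np.sqrt(t0**2 + (x / v) ** 2)   # hyperbolic moveout
            k = int(round(tx / dt)) + i
            if 0 <= k < nt:
                samples.append(gather[k, j])
        samples = np.asarray(samples)
        num += samples.sum() ** 2                # stacked (coherent) energy
        den += (samples**2).sum()                # total energy
    return num / (len(offsets) * den) if den > 0 else 0.0
```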

