Sideswipe removal via null steering

Geophysics, 1992, Vol. 57 (12), pp. 1623-1632. Author(s): Richard E. Duren, Stan V. Morris

Null steering refers to the removal (or zeroing) of interferences at specified dips by creating receiving patterns with nulls aligned on the interferences. This type of beamforming is more effective than forming a simple crossline array and can be applied to both multistreamer and swath data to reduce out-of-plane interferences (sideswipe, boat interference, etc.) that corrupt two-dimensional (2-D) data (the desired signal). Many beamforming techniques lead to signal cancellation when the interferences are correlated with the desired signal. However, a beamforming technique has been developed that remains effective in the presence of signal-correlated interferences, and it extends naturally to prestack and poststack seismic data. The number of interferences and their dips are identified by visual examination of the plotted data. This information is used to design filters that are applied to the total data set. The resulting 2-D data set is free of the crossline interferences, with the inline 2-D data remaining unaltered. Model and real data comparisons between null steering and simple crossline array summation show that: (1) null steering significantly attenuates crossline interference, and (2) 2-D inline data, masked by sideswipe, can be revealed once the sideswipe is attenuated by null steering. The real data examples show the identification and effective attenuation of interferences that could easily be misinterpreted as inline 2-D data: (1) an apparent steeply dipping event, and (2) an apparent flat “bright spot.”
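
As a rough illustration of the idea (not the authors' implementation), the sketch below builds linearly constrained beamforming weights for a uniform crossline array: unit gain on the inline 2-D signal dip and exact nulls on the interference dips. Array geometry, velocity, and dip values are illustrative assumptions.

```python
import numpy as np

def steering_vector(n_streamers, spacing, slowness, freq):
    """Crossline steering vector for a uniform array at one frequency.
    slowness = sin(dip)/velocity, in s/m across the streamers."""
    y = np.arange(n_streamers) * spacing              # crossline positions (m)
    return np.exp(-2j * np.pi * freq * slowness * y)

def null_steering_weights(n_streamers, spacing, freq,
                          signal_slowness, interference_slownesses):
    """Minimum-norm weights with unit gain at the 2-D signal dip and
    exact nulls at each interference dip (needs n_streamers >= constraints)."""
    cols = [steering_vector(n_streamers, spacing, s, freq)
            for s in [signal_slowness] + list(interference_slownesses)]
    C = np.stack(cols, axis=1)                        # one constraint per column
    f = np.zeros(C.shape[1]); f[0] = 1.0              # gain 1 on signal, 0 on nulls
    # minimum-norm solution of C^H w = f
    return C @ np.linalg.solve(C.conj().T @ C, f)

# Example: 4 streamers 100 m apart; null a 30-degree sideswipe in 1500 m/s water
w = null_steering_weights(4, 100.0, 30.0, 0.0,
                          [np.sin(np.radians(30)) / 1500.0])
# Beamformed output at this frequency: y(f) = w^H d(f), d = per-streamer spectra
```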

Geophysics, 2006, Vol. 71 (5), pp. U67-U76. Author(s): Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Because computing the Hessian makes regularization/datuming extremely costly, an efficient approximation is introduced: only a limited number of diagonals in the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with fewer operator artifacts than a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data are sampled highly irregularly along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
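
A minimal NumPy sketch of the inversion idea, assuming a constant-velocity medium and a single frequency: a phase-shift operator maps a regular grid at the datum to irregular receivers, and weighted, damped least squares recovers the datum wavefield. The full Hessian is formed here; the paper's efficiency comes from keeping only a limited number of its diagonals.

```python
import numpy as np

def extrapolation_operator(x_rec, x_grid, dz, freq, vel):
    """Monochromatic phase-shift operator from a regular datum grid to
    irregular surface receiver coordinates (constant velocity assumed)."""
    dx = x_grid[1] - x_grid[0]
    kx = 2 * np.pi * np.fft.fftfreq(len(x_grid), d=dx)   # wavenumbers
    kz2 = (2 * np.pi * freq / vel) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    phase = np.where(kz2 > 0, np.exp(1j * kz * dz), 0.0) # mute evanescent part
    # nonuniform inverse Fourier synthesis onto the receiver coordinates
    return np.exp(1j * np.outer(x_rec, kx)) * phase / len(x_grid)

def regularize_datum(d, x_rec, x_grid, dz, freq, vel, eps=1e-3):
    """Weighted, damped least-squares estimate of the datum wavefield."""
    A = extrapolation_operator(x_rec, x_grid, dz, freq, vel)
    W = np.eye(len(x_rec))                               # data weights (identity here)
    H = A.conj().T @ W @ A                               # the costly Hessian
    rhs = A.conj().T @ W @ d
    return np.linalg.solve(H + eps * np.eye(H.shape[0]), rhs)
```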


Geophysics, 2010, Vol. 75 (4), pp. V51-V60. Author(s): Ramesh (Neelsh) Neelamani, Anatoly Baumstein, Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set into small reflection pieces, each with a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by event. We also extend our approach to subtract noises that must be approximated by several templates. By itself, the method can correct only small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves on the LS approach and on a curvelet-based approach described by Herrmann and Verschuur.
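
The per-coefficient adaptation rule can be sketched independently of the transform itself. Assuming the complex curvelet coefficients of the data and of the noise template are already available from an external curvelet library, a conservative version of the amplitude-and-phase correction might look like this (the clipping bounds are illustrative assumptions, not the paper's values):

```python
import numpy as np

def adapt_coefficients(c_data, c_tmpl, max_gain=2.0, max_phase=np.pi / 2,
                       eps=1e-12):
    """Per-coefficient adaptation of a complex transform-domain noise template:
    scale each template coefficient toward the data coefficient's amplitude
    and rotate its phase, with both corrections clipped to stay conservative."""
    gain = np.abs(c_data) / (np.abs(c_tmpl) + eps)
    gain = np.clip(gain, 1.0 / max_gain, max_gain)   # bounded amplitude fix
    dphi = np.angle(c_data * np.conj(c_tmpl))        # phase (position) mismatch
    dphi = np.clip(dphi, -max_phase, max_phase)      # bounded shift fix
    return c_tmpl * gain * np.exp(1j * dphi)

# Usage sketch: inverse-transform the adapted coefficients and subtract the
# resulting template from the data, event by event.
```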


Geophysics, 2021, pp. 1-67. Author(s): Hossein Jodeiri Akbari Fam, Mostafa Naghizadeh, Oz Yilmaz

Two-dimensional seismic surveys are often conducted along crooked-line traverses due to the inaccessibility of rugged terrain, logistical and environmental restrictions, and budget limitations. The crookedness of the line traverse, irregular topography, and complex subsurface geology with steeply dipping and curved interfaces can adversely affect the signal-to-noise ratio of the data. The crooked-line geometry violates the straight-line survey assumption that underlies the 2D multifocusing (MF) method and leads to a crossline spread of midpoints. It can also give rise to pitfalls and artifacts, thus leading to difficulties in imaging and velocity-depth model estimation. We develop a novel multifocusing algorithm for crooked-line seismic data and revise the traveltime equation accordingly to achieve better signal alignment before stacking. Specifically, we present a 2.5D multifocusing reflection traveltime equation that explicitly takes the midpoint dispersion and cross-dip effects into account. The new formulation corrects for normal, inline, and crossline dip moveouts simultaneously, which is significantly more accurate than removing these effects sequentially; applying NMO, DMO, and CDMO separately tends to produce significant errors, especially at large offsets. The 2.5D multifocusing method can be run automatically with a coherence-based global optimization search on the data. We investigated the accuracy of the new formulation by testing it on different synthetic models and a real seismic data set. Applying the proposed approach to the real data yielded a high-resolution seismic image with a significant quality improvement over the conventional method. Numerical tests show that the new formula can accurately focus the primary reflections at their correct locations, remove anomalous dip-dependent velocities, and extract true dips from seismic data for structural interpretation. The proposed method efficiently projects and extracts valuable 3D structural information when applied to crooked-line seismic surveys.
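
The paper's 2.5D traveltime equation is not reproduced here, but the coherence-based search it drives can be sketched. The toy version below scans stacking velocity and a first-order crossline slowness (standing in for the cross-dip term) for the pair that maximizes semblance over a gather whose midpoints have known crossline offsets; the moveout form is a simplified assumption, not the authors' equation.

```python
import numpy as np

def semblance_along(gather, times, dt):
    """Semblance of amplitudes picked along a trial traveltime curve.
    gather: (n_traces, n_samples), times: trial time per trace (s)."""
    idx = np.round(times / dt).astype(int)
    ok = (idx >= 0) & (idx < gather.shape[1])
    a = gather[np.where(ok)[0], idx[ok]]
    return (a.sum() ** 2) / (len(a) * (a ** 2).sum() + 1e-12)

def scan_velocity_crossdip(gather, offsets, y_mid, t0, dt, vels, py_vals):
    """Grid search for the stacking velocity and crossline slowness that
    maximize semblance; a stand-in for the coherence-based global
    optimization of the 2.5D multifocusing parameters."""
    best = (-1.0, None, None)
    for v in vels:
        for py in py_vals:
            # hyperbolic moveout plus a first-order crossline-dip term
            t = np.sqrt(t0 ** 2 + (offsets / v) ** 2) + py * y_mid
            s = semblance_along(gather, t, dt)
            if s > best[0]:
                best = (s, v, py)
    return best  # (semblance, velocity, crossline slowness)
```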


Geophysics, 2012, Vol. 77 (6), pp. N17-N24. Author(s): Zhaoyun Zong, Xingyao Yin, Guochen Wu

The fluid term in the Biot-Gassmann equation plays an important role in reservoir fluid discrimination. The density term embedded in the fluid term, however, is difficult to estimate because it is less sensitive to seismic amplitude variations. We combined poroelasticity theory, amplitude variation with offset (AVO) inversion, and identification of P- and S-wave moduli to present a stable and physically meaningful method for estimating the fluid term from prestack seismic data with no need for density information. We used poroelasticity theory to express the fluid term as a function of the P- and S-wave moduli, which makes the derivation physically meaningful and natural. We then derived an AVO approximation in terms of these moduli, which can be inverted directly from seismic data. Furthermore, this practical and robust AVO-inversion technique was developed in a Bayesian framework. The objective was to obtain the maximum a posteriori solution for the P-wave modulus, S-wave modulus, and density. Gaussian and Cauchy distributions were used for the likelihood and the a priori probability distributions, respectively. The introduction of a low-frequency constraint and statistical probability information to the objective function renders the inversion more stable and less sensitive to the initial model. Tests on synthetic data showed that all the parameters can be estimated well when no noise is present, and the estimated P- and S-wave moduli remained reasonable with moderate noise and rather smooth initial model parameters. A test on a real data set showed that the estimated fluid term was in good agreement with drilling results.
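
A minimal sketch of the maximum a posteriori objective described in the abstract, assuming a generic linearized AVO operator G (the modulus-based approximation is not reproduced here); the noise and prior scales are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def map_objective(m, G, d, m_low, sigma_n=0.01, sigma_c=0.1, lam=1.0):
    """Negative log-posterior: Gaussian likelihood + Cauchy prior +
    low-frequency (initial-model) constraint."""
    r = G @ m - d
    likelihood = 0.5 * (r @ r) / sigma_n ** 2            # Gaussian data misfit
    prior = np.sum(np.log1p((m / sigma_c) ** 2))         # sparsity-promoting Cauchy
    low_freq = lam * np.sum((m - m_low) ** 2)            # stay near smooth trend
    return likelihood + prior + low_freq

# m collects P-wave modulus, S-wave modulus, and density reflectivities;
# G is the linearized modulus-based AVO operator built from incidence angles.
# result = minimize(map_objective, m0, args=(G, d, m_low), method="L-BFGS-B")
```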


2018, Vol. 9 (3), pp. 472. Author(s): Abdul Haris, Befriko Murdianto, Rochmad Susattyo, Agus Riyanto

Geophysics, 2018, Vol. 83 (6), pp. U79-U88. Author(s): Mostafa Abbasi, Ali Gholami

Seismic velocity analysis is one of the most crucial and, at the same time, most laborious tasks during seismic data processing. It becomes even more difficult and time-consuming when nonhyperbolicity has to be considered. Nonhyperbolic velocity analysis provides very useful information for the processing and interpretation of seismic data. The most common way to account for anisotropy during velocity analysis is to describe the moveout with a nonhyperbolic equation. The nonhyperbolic moveout equation in vertically transverse isotropic (VTI) media is defined by two parameters: the normal moveout (NMO) velocity [Formula: see text] and the anellipticity [Formula: see text] (or horizontal velocity [Formula: see text]). We have developed a new approach based on polynomial chaos (PC) expansion for automating nonhyperbolic velocity analysis of common-midpoint (CMP) data in VTI media. For this purpose, we use the PC expansion to approximate the nonhyperbolic semblance function with a very fast-to-evaluate surrogate in terms of [Formula: see text] and [Formula: see text]. Then, using particle swarm optimization, we stochastically search for the optimal NMO and horizontal velocities that maximize the semblance. In contrast to common approaches for nonhyperbolic velocity analysis, in which the two parameters are estimated iteratively in an alternating fashion, we find [Formula: see text] and [Formula: see text] simultaneously. The approach is tested on various data, including a simple convolutional model, an anisotropic benchmark model, and a real data set. In all cases, the new method provides acceptable results: reflections in the CMP gathers corrected with the optimal velocities are properly flattened, and almost no residual moveout is observed.
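
A hedged sketch of the two-stage idea: fit a cheap polynomial surrogate to sparse semblance evaluations (a 2-D Legendre expansion stands in for the paper's PC expansion), then run a minimal particle swarm over the surrogate. Degrees, swarm size, and PSO constants are illustrative assumptions; inputs are rescaled to [-1, 1].

```python
import numpy as np
from numpy.polynomial import legendre

def fit_surrogate(v_samples, eta_samples, semb_samples, deg=4):
    """Least-squares fit of a 2-D Legendre expansion (a stand-in for the
    polynomial chaos surrogate) to sparse semblance evaluations."""
    V = legendre.legvander2d(v_samples, eta_samples, [deg, deg])
    coef, *_ = np.linalg.lstsq(V, semb_samples, rcond=None)
    return lambda v, e: legendre.legvander2d(np.atleast_1d(v),
                                             np.atleast_1d(e),
                                             [deg, deg]) @ coef

def pso_maximize(f, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm search for the (v, eta) pair maximizing f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n, 2)); vel = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(*p) for p in x]).ravel()
    g = pbest[pval.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + vel, -1, 1)
        val = np.array([f(*p) for p in x]).ravel()
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmax()]
    return g  # rescale back to physical (v_nmo, eta) afterward
```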


Geophysics, 2021, pp. 1-60. Author(s): Mohammad Mahdi Abedi, David Pardo

Normal moveout (NMO) correction is a fundamental step in seismic data processing. It consists of mapping seismic data from recorded traveltimes to the corresponding zero-offset times. This process produces wavelet stretching as an undesired byproduct. We address the NMO stretching problem with two methods: 1) an exact stretch-free NMO correction that prevents the stretching of primary reflections, and 2) an approximate post-NMO stretch correction. Our stretch-free NMO produces parallel moveout trajectories for primary reflections. Our post-NMO stretch correction calculates the moveout of stretched wavelets as a function of offset. Both methods are based on the generalized moveout approximation and are suitable for application in complex anisotropic or heterogeneous environments. We use new moveout equations and modify the original parameter functions so they are constant over the primary reflections, and we then interpolate the seismogram amplitudes at the calculated traveltimes. For fast and automatic modification of the parameter functions, we use deep learning. We design a deep neural network (DNN) using convolutional layers and residual blocks. To train the DNN, we generate a set of 40,000 synthetic NMO-corrected common-midpoint gathers and the corresponding desired outputs. The data set is generated with different velocity profiles, wavelets, and offset vectors, and it includes multiples, ground roll, and band-limited random noise. The simplicity of the DNN task (a 1D identification of primary reflections) improves generalization in practice. Using the trained DNN, we show successful applications of our stretch-correction method on synthetic and several real data sets.
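
A minimal PyTorch sketch of a network of the kind described (convolutional layers plus residual blocks, mapping an NMO-corrected trace to a per-sample probability of being a primary reflection). Channel counts, kernel sizes, and block depth are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """1D convolutional residual block (the repeated building unit)."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, kernel_size=7, padding=3)
        self.conv2 = nn.Conv1d(ch, ch, kernel_size=7, padding=3)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class PrimaryPicker(nn.Module):
    """Maps a trace to a per-sample probability of being a primary
    reflection (the '1D identification' task in the abstract)."""
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        layers = [nn.Conv1d(1, ch, 7, padding=3), nn.ReLU()]
        layers += [ResBlock(ch) for _ in range(n_blocks)]
        layers += [nn.Conv1d(ch, 1, 1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):            # x: (batch, 1, n_samples)
        return self.net(x)

# model = PrimaryPicker(); probs = model(torch.randn(8, 1, 1024))
```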


Geophysics, 2006, Vol. 71 (3), pp. V61-V66. Author(s): Yandong Li, Wenkai Lu, Huanqin Xiao, Shanwen Zhang, Yanda Li

Eigenstructure-based coherence algorithms are robust to noise and produce enhanced coherence images. However, the original eigenstructure coherence algorithm does not implement dip scanning, so it produces less satisfactory results in areas with strong structural dips. The supertrace technique also improves the robustness of coherence algorithms by concatenating multiple seismic traces to form a supertrace. In addition, the supertrace data cube preserves the structural-dip information contained in the original seismic data cube, so dip scanning can be performed effectively using a number of adjacent supertraces. We combine eigenstructure analysis with the dip-scanning supertrace technique to obtain a new coherence-estimation algorithm. Application to a real data set shows that the new algorithm provides good coherence estimates in areas with strong structural dips. Furthermore, the algorithm is computationally efficient because of the small covariance matrix [Formula: see text] used for the eigenstructure analysis.
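
A NumPy sketch of the combined idea, under simplifying assumptions (2-D section instead of a cube, one dip parameter, group size and window length illustrative): concatenate adjacent traces into supertraces, then scan trial dips and keep the best eigenstructure coherence, which needs only a small covariance matrix.

```python
import numpy as np

def make_supertraces(section, group=3):
    """Concatenate each group of adjacent traces into one supertrace.
    section: (n_traces, n_samples) -> (n_traces // group, group * n_samples)."""
    n = (section.shape[0] // group) * group
    return section[:n].reshape(-1, group * section.shape[1])

def eigen_coherence(windows):
    """Eigenstructure coherence: dominant eigenvalue of the small covariance
    matrix of the windowed supertraces, normalized by its trace."""
    C = windows @ windows.T
    lam = np.linalg.eigvalsh(C)
    return lam[-1] / (lam.sum() + 1e-12)

def dip_scan_coherence(supertraces, t0, win, dips):
    """Scan trial dips (in samples per supertrace) and keep the maximum
    eigenstructure coherence."""
    best = 0.0
    for p in dips:
        idx = [int(t0 + p * j) for j in range(supertraces.shape[0])]
        if min(idx) < 0 or max(idx) + win > supertraces.shape[1]:
            continue
        w = np.array([s[i:i + win] for s, i in zip(supertraces, idx)])
        best = max(best, eigen_coherence(w))
    return best
```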


Geophysics, 2018, Vol. 83 (2), pp. R173-R187. Author(s): Huaizhen Chen, Kristopher A. Innanen, Tiansheng Chen

P- and S-wave inverse quality factors quantify seismic wave attenuation, which is related to several key reservoir parameters (porosity, saturation, and viscosity). Estimating the inverse quality factors from observed seismic data provides additional, useful information for predicting gas-bearing reservoirs. We first developed an approximate reflection coefficient and an attenuative elastic impedance (QEI) in terms of the inverse quality factors, and we then established an approach to estimate elastic properties (P- and S-wave impedances and density) and attenuation (P- and S-wave inverse quality factors) from seismic data at different incidence angles and frequencies. The approach is implemented as a two-step inversion: a model-based, damped least-squares inversion for QEI, followed by a Bayesian Markov chain Monte Carlo inversion for the inverse quality factors. Synthetic data tests confirm that the P- and S-wave impedances and inverse quality factors are reasonably estimated in the presence of moderate data error or noise. Applying the established approach to a real data set suggests that the approach is robust and, furthermore, that physically meaningful inverse quality factors can be estimated from seismic data acquired over a gas-bearing reservoir.
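
The second inversion step can be sketched as a minimal Metropolis-Hastings sampler. The QEI forward operator is assumed available as a user-supplied callable (the paper's approximation is not reproduced here); step size, noise level, and chain length are illustrative assumptions.

```python
import numpy as np

def metropolis_q_inversion(d_obs, forward, q0, n_iter=5000, step=0.002,
                           sigma=0.01, seed=0):
    """Minimal Metropolis-Hastings sampler for the inverse quality factors
    (1/Qp, 1/Qs), given a forward operator mapping them to QEI-domain data;
    Gaussian likelihood assumed."""
    rng = np.random.default_rng(seed)
    q = np.asarray(q0, float)

    def loglike(q):
        r = forward(q) - d_obs
        return -0.5 * (r @ r) / sigma ** 2

    ll = loglike(q)
    chain = []
    for _ in range(n_iter):
        q_new = q + step * rng.standard_normal(q.shape)
        if np.all(q_new >= 0):                 # inverse Q must be nonnegative
            ll_new = loglike(q_new)
            if np.log(rng.random()) < ll_new - ll:
                q, ll = q_new, ll_new
        chain.append(q.copy())
    return np.array(chain)                     # posterior samples of (1/Qp, 1/Qs)
```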


Geophysics, 2019, Vol. 84 (6), pp. P87-P96. Author(s): Wenbin Jiang, Jie Zhang, Lee Bell

Seismic geometry quality control (QC) and corrections are crucial but labor-intensive steps in seismic data preprocessing. Current methods for estimating the correct positions of sources and receivers are usually based on first-break traveltimes, which may contain large errors that affect the accuracy of the results. We applied a deep convolutional neural network to identify shots and receivers with position errors and then searched for the correct positions. Once an error in position is identified by scanning the data, a grid search for the correct location is conducted, and the result is evaluated by the system until an optimal position is found. The network is trained on 3200 training sets from real data that had been corrected by the traditional method. Through cross validation on 800 sets, the classifier achieves a precision of 99.5% and a recall of 1. The final errors between the true and corrected positions are less than 10% of the shot spacing. An experiment on uncorrected real data shows that the proposed machine-learning method for geometry QC and correction provides results similar to those of the conventional manual correction approach, but without human intervention. Because the wavefield pattern of the training data is global, there is no need to retrain the system when applying the method to correct receiver positions or to process another data set. This claim is verified on different real data.
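
The grid-search stage is straightforward to sketch. Here data_score is assumed to be a callable that evaluates how geometry-consistent the data look at a trial position (for example, the trained classifier's confidence); the search radius and step are illustrative assumptions.

```python
import numpy as np

def correct_position(data_score, x0, y0, radius, step):
    """Grid search around a flagged source/receiver position, keeping the
    trial coordinates that maximize the consistency score."""
    best = (-np.inf, x0, y0)
    for dx in np.arange(-radius, radius + step, step):
        for dy in np.arange(-radius, radius + step, step):
            s = data_score(x0 + dx, y0 + dy)
            if s > best[0]:
                best = (s, x0 + dx, y0 + dy)
    return best[1], best[2]   # corrected coordinates
```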

