NMO correction
Recently Published Documents


TOTAL DOCUMENTS: 57 (five years: 14)

H-INDEX: 12 (five years: 2)

Geophysics ◽  
2021 ◽  
pp. 1-60
Author(s):  
Mohammad Mahdi Abedi ◽  
David Pardo

Normal moveout (NMO) correction is a fundamental step in seismic data processing. It consists of mapping seismic data from recorded traveltimes to the corresponding zero-offset times. This process produces wavelet stretching as an undesired byproduct. We address the NMO stretching problem with two methods: (1) an exact stretch-free NMO correction that prevents the stretching of primary reflections, and (2) an approximate post-NMO stretch correction. Our stretch-free NMO produces parallel moveout trajectories for primary reflections. Our post-NMO stretch correction calculates the moveout of stretched wavelets as a function of offset. Both methods are based on the generalized moveout approximation and are suitable for complex anisotropic or heterogeneous environments. We use new moveout equations, modify the original parameter functions to be constant over the primary reflections, and then interpolate the seismogram amplitudes at the calculated traveltimes. For fast and automatic modification of the parameter functions, we use deep learning. We design a deep neural network (DNN) using convolutional layers and residual blocks. To train the DNN, we generate a set of 40,000 synthetic NMO-corrected common-midpoint gathers and the corresponding desired outputs of the DNN. The data set is generated using different velocity profiles, wavelets, and offset vectors, and includes multiples, ground roll, and band-limited random noise. The simplicity of the DNN task (a 1D identification of primary reflections) improves generalization in practice. Using the trained DNN, we show successful applications of our stretch-correction method on synthetic and several real data sets.
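The core mapping described in the abstract, computing a hyperbolic traveltime for each offset and interpolating trace amplitudes back onto the zero-offset time axis, can be sketched in a few lines. This is a minimal conventional NMO correction in NumPy, not the authors' stretch-free variant; all names are illustrative:

```python
import numpy as np

def nmo_correct(gather, offsets, dt, v):
    """Conventional hyperbolic NMO correction of a CMP gather.

    gather : array of shape (n_samples, n_traces)
    offsets: offset of each trace (same distance units as v * dt)
    v      : NMO velocity (scalar, or per-sample array for a v(t0) profile)

    For each zero-offset time t0, the recorded time is
    t(x) = sqrt(t0^2 + (x / v)^2); amplitudes are interpolated at those
    traveltimes, which is what flattens primary reflections.
    """
    n_samples, _ = gather.shape
    t0 = np.arange(n_samples) * dt            # zero-offset time axis
    corrected = np.zeros_like(gather, dtype=float)
    for j, x in enumerate(offsets):
        t = np.sqrt(t0 ** 2 + (x / v) ** 2)   # hyperbolic moveout
        corrected[:, j] = np.interp(t, t0, gather[:, j], left=0.0, right=0.0)
    return corrected
```

A spike placed on a hyperbola with the correct velocity lines up at its zero-offset time after correction; because the mapping compresses more time at far offsets, a wavelet of finite length is stretched there, which is the artifact the paper addresses.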


Geophysics ◽  
2021 ◽  
pp. 1-63
Author(s):  
Lasse Amundsen ◽  
Bjørn Ursin

An amplitude-versus-angle (AVA) inversion method is presented for estimating the density and velocities of a stratified elastic medium from reflection seismograms in the intercept-time, horizontal-slowness domain. The elastic medium parameters are assumed to vary continuously with depth. The seismograms are pre-critical-incidence primary P-wave Green's function reflections of time length T, assumed to obey the differential equations of a model for elastic primary P-wave backscattering, similar to seismograms representing the first term in the well-known Bremmer series/WKBJ iterative solution model. A relation is found between the plane-wave Green's function seismograms at each horizontal slowness and the medium properties in time. The Green's function seismograms after NMO correction are directly inverted for the medium parameters as a function of zero-offset traveltime. It is documented theoretically and verified numerically that the signal at the fundamental frequency f = 1/T must be present in the seismograms for the AVA method to recover the parameter trends of the elastic medium, implying that ultra-low frequencies below 1 Hz must be generated and recorded when T > 1 s. Noise in the seismograms at ultra-low frequencies is not considered, since the theoretical AVA model does not handle the microseisms that would be measured in real data. The main mathematical findings are illustrated using simple model seismograms.
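For background, the amplitude-versus-angle behavior that such inversions exploit is commonly introduced through the standard linearized (Aki-Richards) P-P reflection coefficient, which relates reflection amplitude to relative contrasts in P-velocity, S-velocity, and density. The sketch below implements that classic approximation, not the authors' Green's-function inversion; all names are illustrative:

```python
import numpy as np

def aki_richards(theta, vp, vs, rho, dvp, dvs, drho):
    """Linearized P-P reflection coefficient (Aki-Richards form).

    theta is the incidence angle in radians; vp, vs, rho are background
    P-velocity, S-velocity, and density; dvp, dvs, drho are the small
    contrasts across the interface. Valid for small contrasts and
    pre-critical angles, matching the AVA regime of the abstract.
    """
    k = (vs / vp) ** 2
    s2 = np.sin(theta) ** 2
    return (0.5 * (1.0 - 4.0 * k * s2) * drho / rho
            + dvp / (2.0 * vp * np.cos(theta) ** 2)
            - 4.0 * k * s2 * dvs / vs)
```

At normal incidence this reduces to half the relative impedance contrast, 0.5 * (dvp/vp + drho/rho); the angle-dependent terms are what allow the three parameters to be separated.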


Geophysics ◽  
2021 ◽  
pp. 1-41
Author(s):  
Ali Raeisdana ◽  
M. Javad Khoshnavaz ◽  
Hamid Reza Siahkoohi

An accurate seismic velocity model plays an important role in many seismic imaging techniques. Velocity model building is often time-consuming, especially in anisotropic areas, where more than one parameter is involved. In the past few years, more time-efficient approaches have been proposed that estimate seismic velocity, together with the anellipticity parameter or heterogeneity factor, from local event slopes. Nevertheless, some of these techniques are impractical because they depend on local curvature or on near-offset data that may be unavailable. To address such limitations, we use a curvature-independent approach for normal-moveout correction and parameter estimation in vertical transverse isotropic media, based on local estimation of the vertical traveltime with a shifted-hyperbola approximation in the absence of near-offset data. The performance of the proposed approach is tested on synthetic and field common-midpoint gathers, and assessed at different signal-to-noise ratios and in different missing-near-offset situations. Our findings are consistent with the results of previous methods that were not developed for sparse data.
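The shifted-hyperbola moveout underlying such approaches can be written, in Castle's form, as a one-line function. This is a generic sketch of the traveltime approximation itself, not of the authors' slope-based estimation scheme:

```python
import numpy as np

def shifted_hyperbola(t0, x, v, s):
    """Castle's shifted-hyperbola traveltime approximation.

    t(x) = t0 (1 - 1/s) + sqrt((t0 / s)^2 + x^2 / (s v^2))

    s is the heterogeneity factor: s = 1 recovers the standard
    hyperbola sqrt(t0^2 + x^2 / v^2), while s > 1 accounts for the
    effect of vertical velocity heterogeneity at larger offsets.
    """
    return t0 * (1.0 - 1.0 / s) + np.sqrt((t0 / s) ** 2 + x ** 2 / (s * v ** 2))
```

At zero offset the expression returns t0 for any s, so the extra parameter only reshapes the far-offset moveout.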


2021 ◽  
Author(s):  
Koki Oikawa ◽  
Hirotaka Saito ◽  
Seiichiro Kuroda ◽  
Kazunori Takahashi

Because an array-antenna ground penetrating radar (GPR) system electronically switches among antenna combinations within milliseconds, multi-offset gather data, such as common-midpoint (CMP) data, can be acquired almost seamlessly. However, because the antenna offsets cannot be changed flexibly, only a limited number of scans can be obtained. The array GPR system has been used to collect time-lapse GPR data, including CMP data, during a field infiltration experiment (Iwasaki et al., 2016). CMP data obtained by the array GPR are, however, too sparse to yield reliable velocities with a standard velocity analysis, such as semblance analysis. We attempted to interpolate the sparse CMP data with the projection onto convex sets (POCS) algorithm (Yi et al., 2016), coupled with NMO correction, to automatically determine the optimum EM wave velocity. Our previous numerical study showed that the proposed method allows us to determine the EM wave velocity during an infiltration experiment.

The main objective of this study was to evaluate how well the proposed method interpolates sparse array-antenna GPR CMP data collected during an in-situ infiltration experiment at the Tottori sand dunes. The interpolated CMP data were then used in semblance analysis to determine the EM wave velocity, which in turn was used to compute the depth of the infiltration front. The estimated infiltration depths agreed well with independently obtained depths. This study demonstrates the feasibility of an automatic velocity analysis based on POCS interpolation coupled with NMO correction for sparse CMP data collected with an array-antenna GPR.
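A minimal POCS trace interpolation of the kind referenced above alternates two projections: thresholding in the 2D Fourier domain (a sparsity constraint) and reinsertion of the observed traces (data consistency). This is a generic illustration; the names and the linear threshold schedule are choices of this sketch, not of the cited papers:

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=50, p_max=0.99, p_min=0.0):
    """POCS interpolation of a gather with missing (zeroed) traces.

    data: (n_t, n_x) gather, missing traces filled with zeros
    mask: same shape, 1.0 on observed samples, 0.0 on missing ones

    Each iteration keeps only the strongest Fourier coefficients (the
    threshold is lowered linearly from p_max to p_min times the spectral
    maximum), then restores the observed traces.
    """
    d = data.astype(float).copy()
    for k in range(n_iter):
        spec = np.fft.fft2(d)
        thresh = np.abs(spec).max() * (p_max - (p_max - p_min) * k / (n_iter - 1))
        spec[np.abs(spec) < thresh] = 0.0          # sparsity projection
        d_rec = np.real(np.fft.ifft2(spec))
        d = data * mask + d_rec * (1.0 - mask)     # data-consistency projection
    return d
```

For events that are sparse in the f-k domain, such as locally linear or hyperbolic moveout, the missing traces are rebuilt from the dominant Fourier components, after which a standard semblance scan becomes usable again.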


Geophysics ◽  
2020 ◽  
pp. 1-48
Author(s):  
Danilo Velis

We propose an automated velocity-picking method that estimates appropriate velocity functions for the normal moveout (NMO) correction of common depth point (CDP) gathers, valid for either hyperbolic or nonhyperbolic trajectories. In the hyperbolic velocity analysis case, the process involves the simultaneous search (picking) of a number of time-velocity pairs at which the semblance, or any other coherence measure, is high. In the nonhyperbolic case, a third parameter, usually associated with layering and/or anisotropy, is added to the search. The proposed technique relies on a simple but effective search for a piecewise linear curve, defined by a number of nodes in a 2D or 3D space, that follows the semblance maxima. The search is carried out efficiently using a constrained very fast simulated annealing algorithm. The constraints consist of static and dynamic bounding restrictions, which serve to incorporate prior information into the picking process and to avoid maxima that correspond to multiples and other spurious or meaningless events. Results on synthetic and field data show that the proposed technique automatically yields accurate and consistent velocity picks that flatten the events, in agreement with manual picks. The method is flexible enough to accommodate additional constraints (e.g., preselected events) and depends on a limited number of parameters, which are easily tuned to the data requirements, the available prior information, and the user's needs. The computational cost is relatively low, ranging from a fraction of a second to, at most, 1-2 seconds per CDP gather on a standard single-processor PC.
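The coherence measure driving such a search is typically semblance evaluated along candidate hyperbolic trajectories. A single-sample version might look like the sketch below (production codes average over a short time window; names are illustrative, and this is not the authors' annealing picker itself):

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v):
    """Single-sample semblance along t(x) = sqrt(t0^2 + (x / v)^2).

    Returns (sum a)^2 / (n * sum a^2) over the amplitudes a picked at
    the trajectory: 1 for perfectly coherent equal amplitudes, near 0
    for incoherent ones.
    """
    offsets = np.asarray(offsets, dtype=float)
    t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
    idx = np.minimum(np.round(t / dt).astype(int), gather.shape[0] - 1)
    a = gather[idx, np.arange(len(offsets))]
    denom = len(a) * np.sum(a ** 2)
    return float(np.sum(a) ** 2 / denom) if denom > 0.0 else 0.0
```

An automated picker of the kind described would move the (time, velocity) nodes of a piecewise linear curve so as to maximize the summed coherence, subject to the bounding constraints.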


2020 ◽  
Author(s):  
Hanlin Sheng ◽  
Xinming Wu ◽  
Bo Zhang

2020 ◽  
Vol 8 (4) ◽  
pp. T687-T699
Author(s):  
Swetal Patel ◽  
Francis Oyebanji ◽  
Kurt J. Marfurt

Because of their improved leverage against ground roll and multiples, as well as their ability to estimate azimuthal anisotropy, wide-azimuth 3D seismic surveys are now routinely acquired over most resource plays. For a relatively shallow target, most of these surveys can also be considered long offset, containing incident angles up to 45°. Unfortunately, effective use of the far-offset data is often compromised by noise and by normal moveout (NMO) (or, more accurately, prestack migration) stretch. The conventional NMO correction is well known to decrease the frequency content and distort the seismic wavelet at far offsets, sometimes giving rise to tuning effects. Most quantitative interpreters work with prestack-migrated gathers rather than unmigrated NMO-corrected gathers; however, prestack migration of flat reflectors suffers from the same limitation, called migration stretch. Migration stretch leads to lower resolution of the S-impedance and density estimated from inversion, misclassification of amplitude variation with offset (AVO) types, and infidelity in amplitude variation with azimuth (AVAZ) inversion results. We have developed a matching pursuit algorithm, of the kind commonly used in spectral decomposition, to correct migration stretch by scaling the stretched wavelets with a wavelet compensation factor. The method is based on the hyperbolic moveout approximation. The corrected gathers show increased resolution and higher-fidelity amplitudes at the far offsets, leading to improved AVO classification. Correcting for migration stretch rather than applying the conventional stretch mute provides three advantages: (1) preservation of the far angles required for accurate density inversion, (2) improvement in the vertical resolution of the S-impedance and density volumes, and (3) preservation of far angles that provide greater leverage against multiples.
We apply our workflow to data acquired in the Fort Worth Basin and retain incident angles up to 42° at the Barnett Shale target. Comparing the P-impedance, S-impedance, and density estimates from the original and the migration-stretch-compensated gathers, we find an insignificant improvement in P-impedance but a moderate to significant improvement in the resolution of S-impedance and density. The method is valid for reservoirs that dip no more than 2°. Consistent improvement is observed in resolving thick beds, but the method might introduce amplitude anomalies at far offsets for tuning beds.
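For context, the conventional stretch mute that the stretch-compensation approach replaces can be quantified in the constant-velocity case: the NMO stretch factor is t(x)/t0 = sqrt(1 + (x/(v t0))^2), and the mute discards all offsets where it exceeds a chosen threshold. A hedged sketch with illustrative names:

```python
import numpy as np

def stretch_mute_offset(t0, v, max_stretch=1.5):
    """Largest offset whose NMO stretch factor stays below max_stretch.

    For constant velocity v, a wavelet at offset x is stretched by
    t(x)/t0 = sqrt(1 + (x / (v t0))^2), so the conventional mute
    boundary sits at x = v t0 sqrt(max_stretch^2 - 1). Everything
    beyond it is discarded by a stretch mute, which is exactly the
    far-angle data that stretch compensation tries to preserve.
    """
    return v * t0 * np.sqrt(max_stretch ** 2 - 1.0)
```

The boundary shrinks with decreasing t0, so shallow targets lose the most far-offset data to a mute, which is why preserving those angles matters for density and S-impedance inversion.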


2020 ◽  
Vol 17 (1) ◽  
pp. 162-166 ◽  
Author(s):  
Sanyi Yuan ◽  
Wanwan Wei ◽  
Di Wang ◽  
Peidong Shi ◽  
Shangxu Wang
