Automatic nonhyperbolic velocity analysis by polynomial chaos expansion

Geophysics ◽  
2018 ◽  
Vol 83 (6) ◽  
pp. U79-U88 ◽  
Author(s):  
Mostafa Abbasi ◽  
Ali Gholami

Seismic velocity analysis is one of the most crucial and, at the same time, most laborious tasks in seismic data processing. It becomes even more difficult and time-consuming when nonhyperbolicity has to be taken into account. Nonhyperbolic velocity analysis provides very useful information for the processing and interpretation of seismic data. The most common way to account for anisotropy during velocity analysis is to describe the moveout with a nonhyperbolic equation. The nonhyperbolic moveout equation in vertical transverse isotropic (VTI) media is defined by two parameters: the normal moveout (NMO) velocity V_NMO and the anellipticity η (or, equivalently, the horizontal velocity V_hor). We have developed a new approach based on polynomial chaos (PC) expansion for automating nonhyperbolic velocity analysis of common-midpoint (CMP) data in VTI media. For this purpose, we use the PC expansion to approximate the nonhyperbolic semblance function with a very fast-to-evaluate surrogate in terms of V_NMO and V_hor. Then, using particle swarm optimization, we stochastically search for the optimum NMO and horizontal velocities that maximize the semblance. In contrast to common approaches to nonhyperbolic velocity analysis, in which the two parameters are estimated iteratively in an alternating fashion, we find V_NMO and V_hor simultaneously. The approach is tested on various data, including a simple convolutional model, an anisotropic benchmark model, and a real data set. In all cases the new method provides acceptable results: reflections in the CMP gathers corrected with the optimum velocities are properly flattened, and almost no residual moveout is observed.
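
The workflow lends itself to a compact numerical illustration. The sketch below is my own minimal illustration, not the authors' code: it evaluates the Alkhalifah-Tsvankin nonhyperbolic moveout and the corresponding semblance directly, and lets a plain particle swarm search for the (V_NMO, V_hor) pair that maximizes it; the polynomial chaos surrogate that makes the real method fast is omitted for brevity. The gather `cmp` (time samples by offsets), sampling `dt`, offsets `x`, and a zero-offset time `t0` are assumed inputs.

```python
import numpy as np

def nonhyperbolic_time(t0, x, v_nmo, v_hor):
    """Alkhalifah-Tsvankin moveout; eta is derived from V_nmo and V_hor."""
    eta = 0.5 * ((v_hor / v_nmo) ** 2 - 1.0)
    return np.sqrt(t0**2 + x**2 / v_nmo**2
                   - 2.0 * eta * x**4
                   / (v_nmo**2 * (t0**2 * v_nmo**2 + (1.0 + 2.0 * eta) * x**2)))

def semblance(cmp, dt, x, t0, v_nmo, v_hor, win=5):
    """Semblance of the gather along the predicted moveout curve."""
    nt, nx = cmp.shape
    idx = np.rint(nonhyperbolic_time(t0, x, v_nmo, v_hor) / dt).astype(int)
    num, den = 0.0, 0.0
    for k in range(-win, win + 1):
        i = np.clip(idx + k, 0, nt - 1)
        a = cmp[i, np.arange(nx)]
        num += a.sum() ** 2
        den += (a**2).sum()
    return num / (nx * den + 1e-12)

def pso_max_semblance(cmp, dt, x, t0, bounds, n_particles=30, n_iter=100):
    """Plain particle swarm search for the (V_nmo, V_hor) pair maximizing semblance."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds)[:, 0], np.array(bounds)[:, 1]
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([semblance(cmp, dt, x, t0, *p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([semblance(cmp, dt, x, t0, *p) for p in pos])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest  # optimum (V_nmo, V_hor)
```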

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
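
A minimal sketch of the linear-algebra core, under my own assumptions rather than the paper's operators: A is a generic extrapolation operator from a regular output grid to the irregular recording positions, W is the data weighting, and the regularized/datumed wavefield is the damped least-squares solution. Keeping only a few diagonals of the Hessian mimics the cost-saving approximation described in the abstract; in practice only those diagonals would be computed, whereas here they are zeroed after the fact for clarity.

```python
import numpy as np

def damped_ls_datuming(A, d, W, eps=1e-2, n_diags=None):
    """Solve (A^H W^H W A + eps^2 I) m = A^H W^H W d.

    If n_diags is given, keep only the central 2*n_diags+1 diagonals of the
    Hessian, the approximation that trades accuracy (dip range) for cost."""
    H = A.conj().T @ (W.conj().T @ W) @ A            # full Hessian (the costly part)
    rhs = A.conj().T @ (W.conj().T @ (W @ d))
    if n_diags is not None:
        i, j = np.indices(H.shape)
        H = np.where(np.abs(i - j) <= n_diags, H, 0.0)  # banded approximation
    H = H + (eps**2) * np.eye(H.shape[0])
    return np.linalg.solve(H, rhs)
```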


Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. V223-V232 ◽  
Author(s):  
Zhicheng Geng ◽  
Xinming Wu ◽  
Sergey Fomel ◽  
Yangkang Chen

The seislet transform uses the wavelet-lifting scheme and local slopes to analyze seismic data. In its definition, the design of prediction operators tailored to seismic images and data is a key issue. We have developed a new formulation of the seislet transform based on the relative time (RT) attribute. This method uses the RT volume to construct multiscale prediction operators. With the new prediction operators, the seislet transform is accelerated because distant traces are predicted directly. We apply our method to synthetic and real data to demonstrate that the new approach reduces computational cost and obtains an excellent sparse representation on the test data sets.
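
A minimal sketch, my own construction from the abstract rather than the published code, of one lifting level in which the prediction operator is defined by the RT attribute: a trace is predicted from its neighbor by finding, for each sample, the time in the neighbor that carries the same RT value. `data` and `rt` are (time x trace) arrays, `t` is the time axis, and the RT values are assumed monotonic along each trace.

```python
import numpy as np

def predict_trace(ref, rt_ref, rt_tgt, t):
    """Predict a target trace from a reference trace by matching RT values."""
    # time in the reference trace that has the same RT value as each target sample
    t_ref = np.interp(rt_tgt, rt_ref, t)
    return np.interp(t_ref, t, ref)

def seislet_level(data, rt, t):
    """One lifting level: split traces into even/odd, predict odd from even,
    store the residual (detail), then update the even traces (coarse)."""
    even, odd = data[:, 0::2], data[:, 1::2]
    rt_e, rt_o = rt[:, 0::2], rt[:, 1::2]
    detail = np.empty_like(odd)
    coarse = even.copy()
    for j in range(odd.shape[1]):
        pred = predict_trace(even[:, j], rt_e[:, j], rt_o[:, j], t)
        detail[:, j] = odd[:, j] - pred                  # prediction (P) step
        upd = predict_trace(detail[:, j], rt_o[:, j], rt_e[:, j], t)
        coarse[:, j] = even[:, j] + 0.5 * upd            # update (U) step
    return coarse, detail
```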


Geophysics ◽  
1996 ◽  
Vol 61 (6) ◽  
pp. 1846-1858 ◽  
Author(s):  
Claudio Bagaini ◽  
Umberto Spagnolini

Continuation to zero offset [better known as dip moveout (DMO)] is a standard tool for seismic data processing. In this paper, the concept of DMO is extended by introducing a set of operators: the continuation operators. These operators, which are implemented in integral form with a defined amplitude distribution, perform the mapping between common-shot or common-offset gathers for a given velocity model. Applying the shot continuation operator to dip-independent velocity analysis allows a direct implementation in the acquisition domain, by comparing real data with data continued in the shot domain. Shot and offset continuation also allow the restoration of missing shots or missing offsets by using a velocity model provided by common-shot velocity analysis or another dip-independent velocity analysis method.


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
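
The final back-projection step invites a small sketch (an assumption-laden illustration, not the authors' implementation): once the residual moveouts have been converted to traveltime residuals along the traced rays, the slowness update follows from a damped least-squares tomographic solve, where L[i, j] is the length of ray i inside velocity cell j.

```python
import numpy as np

def backproject_residuals(L, dt, damping=1e-2):
    """Solve (L^T L + damping^2 I) ds = L^T dt for the slowness update ds."""
    n = L.shape[1]
    lhs = L.T @ L + damping**2 * np.eye(n)
    return np.linalg.solve(lhs, L.T @ dt)
```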


Geophysics ◽  
2007 ◽  
Vol 72 (2) ◽  
pp. S93-S103 ◽  
Author(s):  
Biondo Biondi

I develop the fundamental concepts for quantitatively relating perturbations in anisotropic parameters to the corresponding reflector movements in angle-domain common-image gathers (ADCIGs) after anisotropic wavefield-continuation migration. The proposed theory potentially enables the application of residual moveout (RMO) analysis of ADCIGs to velocity estimation in realistic anisotropic conditions. I demonstrate that linearization of the relationship between anisotropic velocity parameters and reflector movements can be derived by assuming stationary raypaths. This assumption leads to a fairly simple analytical derivation. I then apply the general method to the particular case of RMO analysis of reflections from flat reflectors in a vertical transverse isotropic (VTI) medium. This analysis yields expressions to predict RMO curves in migrated ADCIGs. These RMO expressions are functions of both the phase aperture angle and the group aperture angle. Several numerical examples demonstrate the accuracy of the RMO curves predicted by my kinematic analysis. The synthetic examples also show that approximating the group angles with the phase angles in the application of the RMO expressions may lead to substantial errors for events reflected at wide aperture angles. The results obtained by migrating a 2D line extracted from a Gulf of Mexico 3D data set confirm the accuracy of the proposed method. The RMO curves predicted by the theory match the RMO function observed in the ADCIGs computed from the real data.


Geophysics ◽  
2003 ◽  
Vol 68 (1) ◽  
pp. 225-231 ◽  
Author(s):  
Rongfeng Zhang ◽  
Tadeusz J. Ulrych

This paper deals with the design and implementation of a new wavelet frame for noise suppression based on the character of seismic data. In general, wavelet denoising methods widely used in image and acoustic processing rely on well-known conventional wavelets which, although versatile, are often not optimal for seismic data. The new approach, physical wavelet frame denoising, uses a wavelet frame that takes into account the characteristics of seismic data in both time and space. Synthetic and real data tests show that the approach is effective even for seismic signals contaminated by strong noise, whether random or coherent, such as ground roll or air waves.
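
The generic frame-threshold-synthesize workflow can be sketched with simple ingredients (my own assumptions; the paper's frame is purpose-built to match seismic events in time and space, which this sketch does not attempt): a small bank of Gaussian band-pass filters, normalized so their squared responses sum to one, acts as a 1D tight frame along the time axis of each trace, and its coefficients are soft-thresholded before synthesis with the adjoint.

```python
import numpy as np

def gaussian_bank(nt, dt, centers, width):
    """Band-pass filters whose squared magnitudes sum to ~1 (a 1D tight frame)."""
    f = np.fft.rfftfreq(nt, dt)
    G = np.array([np.exp(-0.5 * ((f - fc) / width) ** 2) for fc in centers])
    G /= np.sqrt((G**2).sum(axis=0, keepdims=True) + 1e-12)   # normalize to a tight frame
    return G

def frame_denoise(data, dt, centers=(10.0, 20.0, 30.0, 40.0), width=8.0, thresh=0.1):
    """Analyze, soft-threshold, and synthesize; with thresh=0 the data are recovered exactly."""
    nt, nx = data.shape
    G = gaussian_bank(nt, dt, centers, width)
    D = np.fft.rfft(data, axis=0)
    out = np.zeros_like(data)
    for g in G:
        band = np.fft.irfft(g[:, None] * D, n=nt, axis=0)                  # analysis
        band = np.sign(band) * np.maximum(np.abs(band) - thresh, 0.0)      # soft threshold
        out += np.fft.irfft(g[:, None] * np.fft.rfft(band, axis=0), n=nt, axis=0)  # synthesis
    return out
```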


Geophysics ◽  
2005 ◽  
Vol 70 (6) ◽  
pp. O39-O50 ◽  
Author(s):  
Øyvind Kvam ◽  
Martin Landrø

In an exploration context, pore-pressure prediction from seismic data relies on the fact that seismic velocities depend on pore pressure. Conventional velocity analysis is a tool that may form the basis for obtaining interval velocities for this purpose. However, velocity analysis is inaccurate, and in this paper we focus on the possibilities and limitations of using velocity analysis for pore-pressure prediction. A time-lapse seismic data set from a segment that has undergone a pore-pressure increase of 5 to 7 MPa between the two surveys is analyzed for velocity changes using detailed velocity analysis. A synthetic time-lapse survey is used to test the sensitivity of the velocity analysis with respect to noise. The analysis shows that the pore-pressure increase cannot be detected by conventional velocity analysis because the uncertainty is much greater than the expected velocity change for a reservoir of the given thickness and burial depth. Finally, by applying amplitude-variation-with-offset (AVO) analysis to the same data, we demonstrate that seismic amplitude analysis may yield more precise information about velocity changes than velocity analysis.
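
A small worked example (mine, not from the paper) of why the uncertainty dominates: the Dix equation converts picked stacking velocities to an interval velocity, and for a thin interval even a small picking error on one pick swamps the modest velocity change expected from a 5 to 7 MPa pressure increase.

```python
import numpy as np

def dix_interval_velocity(t1, v1, t2, v2):
    """Interval velocity between two reflectors from RMS/stacking velocity picks."""
    return np.sqrt((t2 * v2**2 - t1 * v1**2) / (t2 - t1))

# A 40 ms thick interval; perturb the lower pick by only 10 m/s.
v_base = dix_interval_velocity(2.00, 2500.0, 2.04, 2510.0)
v_pert = dix_interval_velocity(2.00, 2500.0, 2.04, 2520.0)
print(v_base, v_pert)  # the interval velocity shifts by roughly 400 m/s
```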


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by event. We also extend our approach to subtract noises whose approximation requires several templates. By itself, the method can only correct small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
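
The key property the adaptation exploits, that the amplitude and phase of a complex coefficient control the size and position of a band-limited reflection piece, can be illustrated with the analytic signal instead of the CCT itself. This is a loose stand-in of my own, and the equivalence between phase rotation and time shift is only approximate and only holds for small rotations.

```python
import numpy as np
from scipy.signal import hilbert

dt, f0 = 0.002, 30.0
t = np.arange(256) * dt
a = (np.pi * f0 * (t - 0.25)) ** 2
piece = (1.0 - 2.0 * a) * np.exp(-a)        # a Ricker "reflection piece" at 0.25 s

phase, gain = 0.4, 1.2                      # small phase rotation and amplitude change
adapted = gain * np.real(hilbert(piece) * np.exp(1j * phase))
# `adapted` is approximately the same event, scaled by `gain` and shifted by
# roughly phase / (2*pi*f0) seconds, i.e. about 2 ms here.
```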


2021 ◽  
Vol 3 (1) ◽  
pp. 1-7
Author(s):  
Yadgar Sirwan Abdulrahman

Clustering is one of the essential strategies in data analysis. Classical solutions assume that all features contribute equally to the clustering, but in real data sets some features are more important than others, so the essential features should have a larger influence on identifying the optimal clusters. In this article, a fuzzy clustering algorithm with local automatic feature weighting is presented. The proposed algorithm has several advantages: (1) the feature weights are local, meaning that each cluster carries its own set of weights; (2) the distance between samples is computed with a non-Euclidean similarity criterion to reduce the effect of noise; and (3) the feature weights are learned adaptively during the training process. Mathematical analyses are carried out to obtain the cluster centers and the feature weights. Experiments on a range of data sets demonstrate the efficiency of the proposed algorithm compared with other algorithms that use global and local feature weighting.
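
A minimal sketch of the idea as I read it from the abstract (the paper derives its update rules analytically; the weight update below is a heuristic stand-in): fuzzy c-means in which each cluster keeps its own feature weights, with a bounded exponential per-feature distance standing in for the non-Euclidean criterion that damps the effect of noisy samples.

```python
import numpy as np

def local_weighted_fcm(X, c, m=2.0, iters=100, eps=1e-8, seed=0):
    """Fuzzy c-means with per-cluster (local) feature weights.

    X: (n_samples, n_features) data; c: number of clusters; m: fuzzifier.
    Returns memberships U (n, c), centers V (c, d), feature weights W (c, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.dirichlet(np.ones(c), size=n)                 # fuzzy memberships
    W = np.full((c, d), 1.0 / d)                          # per-cluster feature weights
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)  # cluster centers
        # robust per-feature distances: 1 - exp(-(x - v)^2) instead of (x - v)^2
        D = 1.0 - np.exp(-((X[:, None, :] - V[None, :, :]) ** 2))    # (n, c, d)
        # local weight update: features with small within-cluster dispersion get large weight
        disp = (Um.T[:, :, None] * D.transpose(1, 0, 2)).sum(axis=1) + eps   # (c, d)
        W = (1.0 / disp) / (1.0 / disp).sum(axis=1, keepdims=True)
        # membership update using the weighted distances
        dist = (D * W[None, :, :]).sum(axis=2) + eps      # (n, c)
        U = 1.0 / (dist ** (1.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return U, V, W
```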

