Oriented time-domain dip moveout correction for planar reflectors in common-source domain

Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. U87-U97 ◽  
Author(s):  
Mohammad Javad Khoshnavaz

Oriented time-domain imaging can be orders of magnitude faster than routine techniques that rely on velocity analysis. The term “oriented” refers to techniques that use the information carried by local slopes. Time-domain dip moveout (DMO) correction, long neglected by the seismic imaging community, has regained attention in the last few years. I have developed an oriented time-domain DMO correction workflow that avoids the problematic loop between dip-dependent and dip-independent velocities that exists in classic DMO correction algorithms. The proposed approach is also advantageous over previous oriented techniques: it is independent of wavefront curvature, and the input seismic data do not need to be sorted into two different domains. The application of the technique is limited to reflectors with small curvature. The theory of the proposed technique is investigated on a simple synthetic data example and then applied to a 2D marine data set.
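
Oriented methods take local event slopes as their input. As a minimal sketch of that ingredient only (not the author's DMO workflow), the snippet below estimates local slopes p = dt/dx on a 2D gather from smoothed image gradients; the toy gather, sampling intervals, and smoothing length are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_slopes(d, dt, dx, smooth=9):
    """Estimate local event slopes p = dt/dx (s per m) on a 2D gather d[t, x].

    Events satisfy d_x + p * d_t ~ 0, so p ~ -<d_x d_t> / <d_t d_t>,
    with <.> a local smoothing average (a simple structure-tensor estimate).
    """
    d_t, d_x = np.gradient(d, dt, dx)          # time and space derivatives
    num = uniform_filter(d_x * d_t, smooth)    # smoothed cross term
    den = uniform_filter(d_t * d_t, smooth)    # smoothed energy term
    return -num / (den + 1e-12 * den.max())    # stabilized slope field

# Toy usage: a dipping plane wave with slope 2e-4 s/m
dt, dx = 0.002, 10.0
t = np.arange(0, 1.0, dt)[:, None]
x = np.arange(0, 1000.0, dx)[None, :]
gather = np.sin(2 * np.pi * 20 * (t - 2e-4 * x))
p = local_slopes(gather, dt, dx)
print(p[250, 50])   # ~2e-4 s/m away from the edges
```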

2012 ◽  
Vol 30 (4) ◽  
pp. 473 ◽  
Author(s):  
Felipe A. Terra ◽  
Jessé C. Costa ◽  
Amin Bassrei

Seismic imaging in depth is a challenge in geologically complex areas where the seismic velocity varies laterally. A reliable velocity-model estimate is necessary to succeed in seismic depth imaging, and stereotomography is an effective tool for this purpose. Also called slope tomography, it uses the slownesses and traveltimes of reflection events picked in common-source and common-receiver gathers. We evaluate an alternative implementation of stereotomography for velocity model building. The algorithm was validated on the Marmousoft synthetic data set and also used for velocity model estimation in a continental slope region, using real data from the Jequitinhonha Basin, Brazil. This structurally complex data set demanded high-quality control of event selection for picking, a judicious choice of regularization parameters, and free-surface multiple attenuation. The results for both the synthetic and real data show the computational feasibility and accuracy of the method.
Keywords: stereotomography, regularization, Jequitinhonha Basin
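
The slope data that stereotomography inverts are simply the derivatives of picked traveltimes with respect to source and receiver position. A minimal sketch, under toy assumptions about geometry and picks (not the authors' implementation), of computing one such slope by finite differences:

```python
import numpy as np

def picked_slopes(traveltimes, coords):
    """Local slope dT/dx of one picked event from traveltime picks T(x).

    `traveltimes` and `coords` are 1D arrays of picks along a common-source
    (receiver coordinate) or common-receiver (source coordinate) gather.
    Central differences give the slope at each pick.
    """
    return np.gradient(np.asarray(traveltimes, float),
                       np.asarray(coords, float))

# Toy usage: picks from a flat reflector at 1000 m depth, v = 2000 m/s
v, z, xs = 2000.0, 1000.0, 0.0                     # velocity, depth, source at x = 0
xr = np.linspace(100.0, 2000.0, 20)                # receiver positions
T = np.sqrt((xr - xs) ** 2 + (2.0 * z) ** 2) / v   # reflection traveltime (image source)
p_r = picked_slopes(T, xr)                         # receiver-side slope, s/m
print(p_r[:3])
```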


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
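
As an illustration of the underlying machinery only (a toy operator, not Ferguson's wavefield-extrapolation operator), the sketch below solves the weighted, damped least-squares problem m = (G^H W G + εI)^(-1) G^H W d and compares it with a cheap approximation that keeps only a few central diagonals of the Hessian G^H W G; sizes, weights, and damping are assumptions. A real implementation would exploit the banded structure with a banded solver to realize the cost saving.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex "extrapolation" operator G, diagonal data weights w, observed data d
n_out, n_in = 80, 60
G = rng.standard_normal((n_out, n_in)) + 1j * rng.standard_normal((n_out, n_in))
w = rng.uniform(0.5, 1.0, n_out)
d = G @ rng.standard_normal(n_in) + 0.01 * rng.standard_normal(n_out)

eps = 1e-2                                      # damping
H = G.conj().T @ (w[:, None] * G)               # full Hessian G^H W G
rhs = G.conj().T @ (w * d)

# Full weighted, damped least squares
m_full = np.linalg.solve(H + eps * np.eye(n_in), rhs)

# Cheap approximation: keep only the 2*k + 1 central diagonals of the Hessian
k = 3
mask = np.abs(np.subtract.outer(np.arange(n_in), np.arange(n_in))) <= k
m_band = np.linalg.solve(H * mask + eps * np.eye(n_in), rhs)

# Relative difference between the full and approximate solutions
print(np.linalg.norm(m_full - m_band) / np.linalg.norm(m_full))
```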


Geophysics ◽  
2016 ◽  
Vol 81 (6) ◽  
pp. A17-A21 ◽  
Author(s):  
Juan I. Sabbione ◽  
Mauricio D. Sacchi

The coefficients that synthesize seismic data via the hyperbolic Radon transform (HRT) are estimated by solving a linear inverse problem. In the classical HRT, the computational cost of the inverse problem is proportional to the size of the data and the number of Radon coefficients. We have developed a strategy that significantly speeds up the implementation of time-domain HRTs. For this purpose, we have defined a restricted model space of coefficients by applying hard thresholding to an initial low-resolution Radon gather. Then, an iterative solver that operated on the restricted model space was used to estimate the group of coefficients that synthesized the data. The method is illustrated with synthetic data and tested on a marine data example.
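
A minimal sketch of the thresholding idea on a toy gather (grids, threshold level, and solver choice are assumptions, not the authors' implementation): the adjoint of a small dense hyperbolic Radon operator gives a low-resolution gather, hard thresholding restricts the model space, and a least-squares solver then estimates only the retained coefficients.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Small time/offset/velocity grids (toy assumptions)
dt, nt, nh, nv = 0.004, 100, 20, 20
t0 = np.arange(nt) * dt
h = np.linspace(0.0, 950.0, nh)
v = np.linspace(1500.0, 3500.0, nv)

def hrt_matrix(t0, h, v, dt, nt):
    """Dense hyperbolic Radon operator mapping (t0, v) coefficients to d(t, h)."""
    L = np.zeros((nt * len(h), len(t0) * len(v)))
    col = 0
    for tz in t0:
        for vi in v:
            th = np.sqrt(tz ** 2 + (h / vi) ** 2)     # hyperbolic moveout
            it = np.rint(th / dt).astype(int)
            ok = it < nt
            L[it[ok] * len(h) + np.nonzero(ok)[0], col] = 1.0
            col += 1
    return L

L = hrt_matrix(t0, h, v, dt, nt)

# Synthesize data from two hyperbolic events and add noise
m_true = np.zeros(L.shape[1])
m_true[50 * nv + 5] = 1.0
m_true[80 * nv + 15] = -0.7
d = L @ m_true + 0.05 * np.random.default_rng(1).standard_normal(L.shape[0])

# 1) Low-resolution Radon gather from the adjoint, 2) hard threshold to restrict
m_adj = L.T @ d
keep = np.abs(m_adj) > 0.5 * np.abs(m_adj).max()

# 3) Iterative solver operating on the restricted model space only
m_rest = np.zeros_like(m_true)
m_rest[keep] = lsqr(L[:, keep], d, damp=1e-2)[0]
print(keep.sum(), "of", m_true.size, "coefficients solved")
```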


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. To use information about pressure- and saturation-related changes in reservoir modeling and simulation, the uncertainty in the estimates must be quantified. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework. Here, the solution of the problem is represented by a probability density function (PDF), providing estimates of the uncertainties as well as direct estimates of the properties. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
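
In the linearized Gaussian case, the Bayesian solution reduces to a posterior PDF with closed-form mean and covariance. The sketch below shows that generic machinery for a placeholder linear operator relating pressure/saturation changes to time-lapse AVO differences; the operator and the covariances are assumptions, not the rock-physics and likelihood models of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linearized forward operator: maps [dP, dSw] at each of n cells
# to time-lapse PP reflection-coefficient differences at m angles
n_cells, n_angles = 4, 6
G = rng.standard_normal((n_angles, 2 * n_cells)) * 0.1

# Gaussian prior on the changes and Gaussian noise on the data
C_m = 0.25 * np.eye(2 * n_cells)               # prior covariance
C_e = 1e-4 * np.eye(n_angles)                  # data-error covariance
m_prior = np.zeros(2 * n_cells)

# Synthetic "observed" AVO differences from some true change
m_true = 0.5 * rng.standard_normal(2 * n_cells)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(n_angles), C_e)

# Posterior PDF is Gaussian: closed-form mean and covariance
C_post = np.linalg.inv(G.T @ np.linalg.inv(C_e) @ G + np.linalg.inv(C_m))
m_post = C_post @ (G.T @ np.linalg.inv(C_e) @ d_obs + np.linalg.inv(C_m) @ m_prior)

# Posterior standard deviations quantify the uncertainty of each estimate;
# large values flag the "bottlenecks" a sensitivity analysis would highlight
print(np.round(m_post, 3))
print(np.round(np.sqrt(np.diag(C_post)), 3))
```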


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. R199-R217 ◽  
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth’s Q-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse Q filtering and generates superior Q compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and will treat them as interference convolved with incorrect Q values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI to nonstationary seismic data with surface-related multiples, drawing on the feedback model and NSRI theory. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse Q filtering) preserved, but multiples are also taken into account. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input Q values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider Q-filtering effects explicitly. However, there are benefits to NSRI considering multiples: the periodicity and amplitude of the multiples constrain the positions of the reflectivities and the amplitude of the wavelet, and multiples assist in overcoming the scaling and shifting ambiguities of conventional problems in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.
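
The forward model behind such nonstationary inversions convolves each reflectivity sample with a wavelet attenuated by the earth's Q filter for that sample's traveltime. Below is a minimal sketch under the constant-Q amplitude model A(f, t) = exp(-π f t / Q), with dispersion omitted for brevity; the wavelet, Q value, and grid are assumptions rather than the authors' parameterization.

```python
import numpy as np

def ricker(f0, dt, n):
    """Zero-phase Ricker wavelet of peak frequency f0 on an n-sample grid."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def nonstationary_matrix(nt, dt, f0, Q):
    """Columns are the source wavelet Q-filtered for each reflector time.

    Amplitude term of the constant-Q model: A(f, t) = exp(-pi * f * t / Q).
    Dispersion (phase) is omitted to keep the sketch short.
    """
    f = np.fft.rfftfreq(nt, dt)
    W = np.fft.rfft(np.roll(ricker(f0, dt, nt), nt // 2))   # wavelet spectrum, peak at t = 0
    G = np.zeros((nt, nt))
    for j in range(nt):                          # reflector at traveltime j * dt
        atten = np.exp(-np.pi * f * (j * dt) / Q)
        w_j = np.fft.irfft(W * atten, nt)        # attenuated wavelet at time 0
        G[:, j] = np.roll(w_j, j)                # delay to the reflector time (circular, fine for a toy)
    return G

# Toy usage: two reflectors, nonstationary seismogram d = G r
nt, dt, f0, Q = 400, 0.002, 30.0, 60.0
r = np.zeros(nt); r[100] = 1.0; r[300] = -0.8
d = nonstationary_matrix(nt, dt, f0, Q) @ r
print(abs(d[95:115]).max(), abs(d[295:315]).max())   # deeper event is weaker and broader
```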


2015 ◽  
Vol 2015 (1) ◽  
pp. 1-4
Author(s):  
Mohammad Javad Khoshnavaz ◽  
Milovan Urosevic ◽  
Andrej Bona

2017 ◽  
Vol 5 (3) ◽  
pp. SJ81-SJ90 ◽  
Author(s):  
Kainan Wang ◽  
Jesse Lomask ◽  
Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We have described a modification of DTW to create a blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints and consequently yields updated velocities that are more realistic than those of other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval velocity model that retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting the interval velocity changes to coincide with the initial input blocking. We demonstrate the application of the BDW technique on a synthetic data example and a field data set.
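
For reference, a bare-bones dynamic time warping alignment of a synthetic trace to a seismic trace is sketched below; the blocked velocity constraints that distinguish BDW are not reproduced, and the traces and warp are toy assumptions.

```python
import numpy as np

def dtw_path(a, b):
    """Classic DTW: cumulative cost and optimal warp path between traces a, b."""
    na, nb = len(a), len(b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the alignment path
    i, j, path = na, nb, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[na, nb], path[::-1]

# Toy usage: the "seismic" trace is a mildly stretched and shifted synthetic
t = np.linspace(0, 1, 200)
synthetic = np.sin(2 * np.pi * 8 * t) * np.exp(-3 * t)
seismic = np.interp(t, t * 0.9 + 0.03, synthetic)
cost, path = dtw_path(synthetic, seismic)
shifts = np.array([j - i for i, j in path])     # sample lag along the warp path
print(cost, shifts.min(), shifts.max())
```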


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
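
A generic residual moveout measurement, the quantity that is back projected into velocity updates, can be sketched as a parabolic fit of picked event depth versus offset in a migrated common-image gather; the heterogeneous-media ray tracing and layer-stripping migration of the method are not reproduced, and the picks below are toy assumptions.

```python
import numpy as np

def residual_moveout(z_picks, offsets):
    """Least-squares fit z(h) = z0 + r * h^2 to picked event depths in a CIG.

    A nonzero curvature r indicates that the migration velocity is wrong for
    this event; r is the residual that a tomographic update would back project.
    """
    h2 = np.asarray(offsets, float) ** 2
    A = np.column_stack([np.ones_like(h2), h2])
    (z0, r), *_ = np.linalg.lstsq(A, np.asarray(z_picks, float), rcond=None)
    return z0, r

# Toy usage: event migrated too shallow at far offsets (velocity too low)
offsets = np.linspace(0.0, 2000.0, 11)
depths = 1500.0 - 4e-5 * offsets ** 2 + np.random.default_rng(3).normal(0, 2.0, 11)
z0, r = residual_moveout(depths, offsets)
print(round(z0, 1), r)    # ~1500 m and a negative curvature
```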


Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. V7-V16 ◽  
Author(s):  
Kenji Nose-Filho ◽  
André K. Takahata ◽  
Renato Lopes ◽  
João M. T. Romano

We have addressed blind deconvolution in a multichannel framework. Recently, a robust solution to this problem, based on a Bayesian approach called sparse multichannel blind deconvolution (SMBD), was proposed in the literature with interesting results. However, its computational complexity can be high. We have proposed a fast algorithm based on minimum entropy deconvolution, which is considerably less expensive. We designed the deconvolution filter to minimize a normalized version of the hybrid [Formula: see text]-norm loss function. This is in contrast to SMBD, in which the hybrid [Formula: see text]-norm function is used as a regularization term to directly determine the deconvolved signal. Results with synthetic data showed that the performance of the obtained deconvolution filter was similar to that obtained in a supervised framework. Similar results were also obtained on a real marine data set for both techniques.
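
The paper's filter minimizes a normalized hybrid-norm objective; a closely related, classic alternative, sketched below for illustration, is Wiggins-style minimum entropy deconvolution, which iteratively re-solves a least-squares system so that the filter output becomes spiky. The trace, filter length, and iteration count are assumptions, and this is not the authors' exact objective.

```python
import numpy as np
from scipy.linalg import toeplitz

def med_filter(x, nf=21, n_iter=10):
    """Wiggins-style minimum entropy deconvolution on a single trace x.

    Iteratively re-solves the normal equations so the filter output y = f * x
    maximizes a kurtosis-like (varimax) spikiness measure.
    """
    x = np.asarray(x, float)
    # Convolution matrix X such that X @ f is the full convolution of x with f
    X = toeplitz(np.r_[x, np.zeros(nf - 1)], np.r_[x[0], np.zeros(nf - 1)])
    f = np.zeros(nf); f[nf // 2] = 1.0           # start from a spike filter
    for _ in range(n_iter):
        y = X @ f
        f, *_ = np.linalg.lstsq(X, y ** 3, rcond=None)   # desired output y**3
        f /= np.linalg.norm(f)                   # fix the scale ambiguity
    return f, X @ f

# Toy usage: spiky reflectivity blurred by a mixed-phase wavelet
rng = np.random.default_rng(4)
refl = np.zeros(300); refl[rng.choice(300, 8, replace=False)] = rng.normal(0, 1, 8)
wav = np.convolve([1.0, -1.4, 0.5], np.exp(-0.2 * np.arange(12)))
trace = np.convolve(refl, wav)[:300]
f, y = med_filter(trace)
# Spikiness measure after vs. before deconvolution (larger = spikier)
print(np.sum(y ** 4) / np.sum(y ** 2) ** 2, np.sum(trace ** 4) / np.sum(trace ** 2) ** 2)
```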


2020 ◽  
Vol 8 (1) ◽  
pp. T141-T149
Author(s):  
Ritesh Kumar Sharma ◽  
Satinder Chopra ◽  
Larry R. Lines

Multicomponent seismic data offer several advantages for characterizing reservoirs through the use of the vertical-component (PP) and mode-converted (PS) data. Joint impedance inversion inverts both of these data sets simultaneously; hence, it is considered superior to simultaneous impedance inversion. However, the success of joint impedance inversion depends on how accurately the PS data are mapped onto the PP time domain. Normally, this is attempted by performing well-to-seismic ties for the PP and PS data sets and matching different horizons picked on the PP and PS data. Although it seems to be a straightforward approach, there are a few issues associated with it. One of them is the lower resolution of the PS data compared with the PP data, which makes it difficult to correlate equivalent reflection events on the two data sets. Even after a few consistent horizons are tracked, the horizon-matching process introduces artifacts on the PS data when they are mapped into PP time. We have evaluated such challenges using a data set from the Western Canadian Sedimentary Basin and then developed a novel workflow for addressing them. The importance of our workflow is demonstrated by comparing data examples generated with and without its adoption.
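
PS-to-PP registration is usually driven by an interval Vp/Vs ratio (γ): for vertical travel paths, t_pp = 2 t_ps / (1 + γ). Below is a minimal sketch that resamples a PS trace onto the PP time axis under a constant γ; the full workflow, with well ties and horizon matching, is of course far more involved, and the trace and γ value are assumptions.

```python
import numpy as np

def ps_to_pp_time(ps_trace, dt, gamma):
    """Map a PS trace onto PP two-way time assuming a constant Vp/Vs = gamma.

    For vertical paths, t_pp = 2 * t_ps / (1 + gamma), so the sample feeding
    PP time t_pp lives at PS time t_ps = t_pp * (1 + gamma) / 2; the PS trace
    is resampled onto the PP time axis by linear interpolation.
    """
    n = len(ps_trace)
    t_pp = np.arange(n) * dt                       # output PP time axis
    t_ps_needed = t_pp * (1.0 + gamma) / 2.0       # PS time feeding each PP sample
    return np.interp(t_ps_needed, np.arange(n) * dt, ps_trace,
                     left=0.0, right=0.0)

# Toy usage: a PS event at 1.5 s maps to PP time 2 * 1.5 / (1 + 2.0) = 1.0 s
dt, gamma = 0.004, 2.0
ps = np.zeros(600); ps[int(1.5 / dt)] = 1.0
pp = ps_to_pp_time(ps, dt, gamma)
print(np.argmax(pp) * dt)    # ~1.0 s
```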

