Simultaneous velocity filtering of hyperbolic reflections and balancing of offset‐dependent wavelets

Geophysics ◽  
1989 ◽  
Vol 54 (11) ◽  
pp. 1455-1465 ◽  
Author(s):  
William S. Harlan

Hyperbolic reflections and convolutional wavelets are fundamental models for seismic data processing. Each sample of a “stacked” zero‐offset section can parameterize an impulsive hyperbolic reflection in a midpoint gather. Convolutional wavelets can model source waveforms and near‐surface filtering at the shot and geophone positions. An optimized inversion of the combined modeling equations for hyperbolic traveltimes and convolutional wavelets makes explicit any interdependence and nonuniqueness in these two sets of parameters. I first estimate stacked traces that best model the recorded data and then find nonimpulsive wavelets to improve the fit with the data. These wavelets are used for a new estimate of the stacked traces, and so on. Estimated stacked traces model short average wavelets with a superposition of approximately parallel hyperbolas; estimated wavelets adjust the phases and amplitudes of inconsistent traces, including static shifts. Deconvolution of land data with estimated wavelets makes wavelets consistent over offset; remaining static shifts are midpoint‐consistent. This phase balancing improves the resolution of stacked data and of velocity analyses. If precise velocity functions are not known, then many stacked traces can be inverted simultaneously, each with a different velocity function. However, the increased number of overlain hyperbolas can more easily model the effects of inconsistent wavelets. As a compromise, I limit velocity functions to reasonable regions selected from a stacking velocity analysis—a few functions cover velocities of primary and multiple reflections. Multiple reflections are modeled separately and then subtracted from marine data. The model can be extended to include more complicated amplitude changes in reflectivity. Migrated reflectivity functions would add an extra constraint on the continuity of reflections over midpoint. Including the effect of dip moveout in the model would make stacking and migration velocities equivalent.
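
As a hedged sketch of the combined modeling equations the abstract describes (the notation here is illustrative, not Harlan's), each stacked sample s(τ₀) spawns an impulsive hyperbola that is then filtered by an offset-dependent wavelet w_x:

```latex
% Illustrative notation: d_x(t) recorded trace at offset x,
% s(\tau_0) stacked (zero-offset) trace, w_x(t) wavelet for offset x,
% v(\tau_0) velocity function.
d_x(t) \;=\; w_x(t) \ast \sum_{\tau_0} s(\tau_0)\,
\delta\!\Bigl(t - \sqrt{\tau_0^{2} + x^{2}/v(\tau_0)^{2}}\Bigr)
```

The alternating inversion then holds one factor fixed while least-squares fitting the other: impulsive wavelets give a first estimate of the stacked traces, the residual fit yields nonimpulsive wavelets, and so on.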

Geophysics ◽  
1996 ◽  
Vol 61 (6) ◽  
pp. 1846-1858 ◽  
Author(s):  
Claudio Bagaini ◽  
Umberto Spagnolini

Continuation to zero offset [better known as dip moveout (DMO)] is a standard tool for seismic data processing. In this paper, the concept of DMO is extended by introducing a set of operators: the continuation operators. These operators, which are implemented in integral form with a defined amplitude distribution, perform the mapping between common shot or common offset gathers for a given velocity model. The application of the shot continuation operator for dip‐independent velocity analysis allows a direct implementation in the acquisition domain by exploiting the comparison between real data and data continued in the shot domain. Shot and offset continuation allow the restoration of missing shot or missing offset by using a velocity model provided by common shot velocity analysis or another dip‐independent velocity analysis method.
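
For context on why continuation to zero offset makes velocity analysis dip-independent, recall the classical relation (standard DMO theory, not derived in this abstract) between the moveout velocity measured over a reflector dipping at angle θ and the medium velocity v:

```latex
v_{\mathrm{NMO}} \;=\; \frac{v}{\cos\theta}
```

The continuation operators absorb this cos θ dependence, so a single dip-independent velocity function can be estimated by comparing real and continued gathers.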


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. U1-U12
Author(s):  
Michelângelo G. Silva ◽  
Milton J. Porsani ◽  
Bjorn Ursin

Velocity-independent seismic data processing requires information about the local slope in the data. From estimates of local time and space derivatives of the data, a total least-squares algorithm gives an estimate of the local slope at each data point. Total least squares minimizes the orthogonal distance from the data points (the local time and space derivatives) to the fitted straight line defining the local slope. This gives a more consistent estimate of the local slope than standard least squares because it takes into account uncertainty in the temporal and spatial derivatives. The total least-squares slope estimate is the same as the one obtained from using the structure tensor with a rectangular window function. The estimate of the local slope field is used to extrapolate all traces in a seismic gather to the smallest recorded offset without using velocity information. Extrapolation to zero offset is done using a hyperbolic traveltime function in which slope information replaces the knowledge of the normal moveout (NMO) velocity. The new data processing method requires no velocity analysis and there is little stretch effect. All major reflections and diffractions that are present at zero offset will be reproduced in the output zero-offset section. Therefore, if multiple reflections are undesired in the output, they should be removed before data extrapolation to zero offset. The automatic method is sensitive to noise, so for poor signal-to-noise ratios, standard NMO velocities for primary reflections can be used to compute the slope field. Synthetic and field data examples indicate that compared with standard seismic data processing (velocity analysis, mute, NMO correction, and stack), our method provides an improved zero-offset section in complex data areas.
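
The slope-estimation step lends itself to a compact numerical sketch, assuming a gather d sampled on a regular (t, x) grid; the window half-width and function name are illustrative, not from the paper. The TLS slope at each point is read off the dominant eigenvector of the 2x2 structure tensor of the local derivatives, matching the abstract's equivalence between total least squares and the structure tensor with a rectangular window:

```python
import numpy as np

def local_slope_tls(d, dt, dx, half=5):
    """Estimate the local slope p(t, x) [s/m] of gather d (nt x nx) by
    total least squares: p comes from the dominant eigenvector of the
    2x2 structure tensor of the time/space derivatives, accumulated
    over a rectangular window."""
    d_t = np.gradient(d, dt, axis=0)      # local time derivative
    d_x = np.gradient(d, dx, axis=1)      # local space derivative
    nt, nx = d.shape
    p = np.zeros_like(d)
    for i in range(nt):                   # plain loops for clarity, not speed
        for j in range(nx):
            st = slice(max(i - half, 0), min(i + half + 1, nt))
            sx = slice(max(j - half, 0), min(j + half + 1, nx))
            gt = d_t[st, sx].ravel()
            gx = d_x[st, sx].ravel()
            # structure tensor over the rectangular window
            T = np.array([[gt @ gt, gt @ gx],
                          [gt @ gx, gx @ gx]])
            w, v = np.linalg.eigh(T)      # eigenvalues ascending
            a, b = v[:, 1]                # dominant eigenvector (t, x components)
            # local plane wave obeys d_x + p d_t = 0, so (d_t, d_x) ~ (1, -p)
            p[i, j] = -b / a if abs(a) > 1e-12 else 0.0
    return p
```

Each estimated slope then replaces the NMO velocity in the hyperbolic traveltime used to shift the trace toward zero offset.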


2021 ◽  
Vol 11 (1) ◽  
pp. 78
Author(s):  
Jianbo He ◽  
Zhenyu Wang ◽  
Mingdong Zhang

When the signal-to-noise ratio of seismic data is very low, velocity spectrum focusing is poor and the velocity model obtained by conventional velocity analysis methods is not accurate enough, which results in inaccurate migration. For low signal-to-noise ratio (SNR) data, this paper proposes to use a partial Common Reflection Surface (CRS) stack to build CRS gathers, making full use of all the reflection information in the first Fresnel zone and improving the signal-to-noise ratio of prestack gathers by increasing the number of folds. Because the CRS parameters, the emergence angle of the zero-offset ray and the radius of curvature of the normal wavefront, are searched for on the zero-offset profile, we use ellipse-evolving stacking to improve the quality of the zero-offset section and thereby the reliability of the CRS parameters. After the CRS gathers are obtained, we use a principal component analysis (PCA) approach for velocity analysis, which improves its noise immunity. Results on both models and field data demonstrate the effectiveness of the method.
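
The abstract does not detail the PCA velocity analysis itself, so the following is only a generic eigenstructure-style coherence sketch (a common PCA flavor of velocity scanning; the function name is mine): after NMO-correcting a window of the gather with a trial velocity, score the alignment by the energy fraction of the first principal component.

```python
import numpy as np

def pca_coherence(window):
    """Coherence of an NMO-corrected window (n_time x n_traces): the
    fraction of total energy captured by the first principal component.
    Approaches 1 when the trial velocity aligns the traces."""
    C = window.T @ window            # trace-by-trace (uncentered) covariance
    lam = np.linalg.eigvalsh(C)      # eigenvalues in ascending order
    total = lam.sum()
    return lam[-1] / total if total > 0 else 0.0
```

Because uncorrelated noise spreads its energy over all eigenvalues while an aligned signal concentrates in the first, this ratio degrades more gracefully with noise than raw stack amplitude, consistent with the noise immunity claimed above.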


Geophysics ◽  
1998 ◽  
Vol 63 (4) ◽  
pp. 1332-1338 ◽  
Author(s):  
Gregory S. Baker ◽  
Don W. Steeples ◽  
Matt Drake

A 300-m near‐surface seismic reflection profile was collected in southeastern Kansas to locate a fault(s) associated with a recognized stratigraphic offset on either side of a region of unexposed bedrock. A substantial increase in the S/N ratio of the final stacked section was achieved by muting all data arriving in time after the airwave. Methods of applying traditional seismic data processing techniques to near‐surface data (200 ms of data or less) often differ notably from hydrocarbon exploration‐scale processing (3–4 s of data or more). The example of noise cone muting used is contrary to normal exploration‐scale seismic data processing philosophy, which is to include all data containing signal. The noise cone mute applied to the data removed more than one‐third of the total data volume, some of which contains signal. In this case, however, the severe muting resulted in a higher S/N ratio in the final stacked section, even though some signal could be identified within the muted data. This example supports the suggestion that nontraditional techniques sometimes need to be considered when processing near‐surface seismic data.
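
A minimal sketch of the kind of post-airwave mute described (the 330 m/s airwave velocity is a typical value and the helper name is mine, neither taken from the paper): zero every sample arriving later than the airwave at each offset.

```python
import numpy as np

def mute_after_airwave(data, offsets, dt, v_air=330.0, taper=10):
    """Zero every sample arriving after the airwave, t > |offset| / v_air.
    data: (n_traces, n_samples); offsets in meters; dt in seconds."""
    out = data.copy()
    nt = out.shape[1]
    for i, x in enumerate(offsets):
        k = int(abs(x) / v_air / dt)      # first airwave sample on this trace
        if k < nt:
            out[i, k:] = 0.0              # hard mute of the noise cone
            j0 = max(k - taper, 0)        # short ramp to soften the mute edge
            out[i, j0:k] *= np.linspace(1.0, 0.0, k - j0)
    return out
```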


2002 ◽  
Vol 21 (8) ◽  
pp. 730-735 ◽  
Author(s):  
Panos G. Kelamis ◽  
Kevin E. Erickson ◽  
Dirk J. Verschuur ◽  
A. J. Berkhout

2013 ◽  
Vol 373-375 ◽  
pp. 694-697 ◽  
Author(s):  
Guang Xun Chen ◽  
Yan Hui Du ◽  
Lei Zhang ◽  
Pan Ke Qin

The commonly used method for high-resolution velocity analysis in seismic data processing and interpretation is based on a signal estimation algorithm. However, its numerical realization is complicated and time-consuming because the signal-noise separation requires an enormous number of loop calculations before the energy function can be constructed. In this paper, we improve the method on the basis of multi-trace signal estimation. The improved method makes full use of amplitude information, which enhances its noise resistance and greatly improves its resolution. Meanwhile, the method is computationally cheaper than other methods because it does not require multiple loop calculations.


Geophysics ◽  
1967 ◽  
Vol 32 (2) ◽  
pp. 207-224 ◽  
Author(s):  
John D. Marr ◽  
Edward F. Zagst

The more recent developments in common‐depth‐point techniques to attenuate multiple reflections have resulted in an exploration capability comparable to the development of the seismic reflection method. The combination of new concepts in digital seismic data processing with CDP techniques is creating unforeseen exploration horizons with vastly improved seismic data. Major improvements in multiple reflection and reverberation attenuation are now attainable with appropriate CDP geometry and special CDP stacking procedures. Further major improvements are clearly evident in the very near future with the use of multichannel digital filtering‐stacking techniques and the application of deconvolution as the first step in seismic data processing. CDP techniques are briefly reviewed and evaluated with real and experimental data. Synthetic data are used to illustrate that all seismic reflection data should be deconvolved as the first processing step.


2021 ◽  
pp. 555-564
Author(s):  
Kamal K. Ali ◽  
Ahmed Wanas ◽  
Mohanad E. Mahdi

In the current study, 2D seismic data from west An-Najaf (line WN-36) were received after many processing steps from the Oil Exploration Company in 2018. Surface-Consistent Amplitude Compensation (SCAC) was applied to the seismic data. The processing sequence in our study started by sorting the data into common-midpoint (CMP) gathers in order to apply velocity analysis using the Interactive Velocity Analysis application (INVA) within the Omega system. Velocity semblance was prepared to perform the normal-moveout (NMO) correction versus time. An accurate root-mean-square velocity (VRMS) was selected, controlled by the flatness of the primary events. The resultant seismic velocity section for the study area shows that the velocity analysis became smoother and provided an accurate seismic section.
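
For reference, the two standard textbook relations behind this workflow (not specific to this paper) are the NMO traveltime hyperbola used to flatten the CMP gathers and the semblance scanned over trial velocities:

```latex
t(x) \;=\; \sqrt{t_0^{2} + \frac{x^{2}}{V_{\mathrm{RMS}}^{2}}},
\qquad
S(t_0, V) \;=\;
\frac{\displaystyle\sum_{t}\Bigl(\sum_{i=1}^{N} a_i(t)\Bigr)^{2}}
     {\displaystyle N \sum_{t}\sum_{i=1}^{N} a_i^{2}(t)}
```

Here a_i(t) is the NMO-corrected amplitude of trace i in an N-trace CMP gather; picking semblance maxima over (t_0, V) gives the VRMS function whose quality is judged by the flatness of the primaries.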


2014 ◽  
Vol 32 (3) ◽  
pp. 395 ◽  
Author(s):  
Silmara L.R. Oliveira ◽  
Rosângela Corrêa Maciel ◽  
Michelângelo G. da Silva ◽  
Milton José Porsani

ABSTRACT. Short-period multiple attenuation is a difficult problem in shallow-water marine seismic data processing. In the past few decades many filtering methods have been developed to solve this problem and to improve the quality of seismic imaging. The Wiener-Levinson predictive deconvolution method is one of the most useful and well-known filtering methods in the seismic data processing flow. It is a statistical approach that reduces redundancy along the time axis of a seismic trace, allowing us both to improve time resolution and to attenuate multiple reflections. One of the assumptions of the Wiener-Levinson method is that the seismic wavelet is stationary along the entire trace. However, this is not true for real seismic data, and to bypass this limitation the method is normally applied in fixed time windows distributed along the seismic trace. The present study tested a new adaptive predictive deconvolution approach for the attenuation of short-period multiples. The new approach is based on a sliding window of fixed length that is shifted sample by sample along the entire seismic trace. At each position, a new filter is computed and applied. The implied systems of equations are solved using a recursive Levinson-type algorithm. The main difference with respect to the conventional Wiener-Levinson approach is that the filter is updated for each data sample along the trace and no assumption is imposed on the data outside the considered window. The new adaptive predictive deconvolution approach was tested on a seismic line from the Jequitinhonha Basin acquired by Petrobras. The results demonstrate that the new approach is very precise in attenuating short-period multiples, producing better results than those obtained with the conventional Wiener-Levinson predictive deconvolution approach. The results were obtained with filters of 25 coefficients, a prediction distance of 5 samples, and a window length of 55 samples.
Keywords: seismic processing, Jequitinhonha Basin, adaptive predictive deconvolution, multiple attenuation, Wiener-Levinson deconvolution.
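
A minimal sketch of the sliding-window scheme described, under the abstract's parameters (25 coefficients, prediction distance 5, window of 55 samples); the function name and the prewhitening factor are my additions, and scipy's Toeplitz solver stands in for the paper's recursive Levinson-type algorithm:

```python
import numpy as np
from scipy.linalg import solve_toeplitz  # Toeplitz solver (Levinson recursion)

def adaptive_pred_decon(trace, nfilt=25, gap=5, nwin=55, eps=1e-3):
    """Adaptive predictive deconvolution: a fixed-length window slides
    sample by sample along the trace; at each position a new prediction
    filter is estimated and the predictable part (short-period multiples)
    is subtracted from the current sample."""
    trace = np.asarray(trace, dtype=float)
    out = trace.copy()
    for k in range(nwin, len(trace)):
        w = trace[k - nwin:k]                       # local window behind sample k
        full = np.correlate(w, w, mode="full")      # all autocorrelation lags
        r = full[nwin - 1:nwin - 1 + nfilt + gap]   # lags 0 .. nfilt+gap-1
        if r[0] <= 0.0:                             # dead window: nothing to predict
            continue
        c = r[:nfilt].copy()
        c[0] *= 1.0 + eps                           # prewhitening for stability
        f = solve_toeplitz(c, r[gap:gap + nfilt])   # prediction filter (normal equations)
        past = trace[k - gap - nfilt + 1:k - gap + 1][::-1]  # x[k-gap] .. x[k-gap-nfilt+1]
        out[k] = trace[k] - f @ past                # subtract predicted (multiple) part
    return out
```

Refreshing the filter at every sample, with no assumption about the data outside the current window, is the stated difference from fixed-window Wiener-Levinson deconvolution.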

