Pitfalls in seismic processing: An application of seismic modeling to investigate acquisition footprint

2016 ◽  
Vol 4 (2) ◽  
pp. SG1-SG9 ◽  
Author(s):  
Marcus P. Cahoj ◽  
Sumit Verma ◽  
Bryce Hutchinson ◽  
Kurt J. Marfurt

The term acquisition footprint is commonly used to describe patterns in seismic time and horizon slices that are closely correlated with the acquisition geometry. Seismic attributes often exacerbate footprint artifacts and may pose pitfalls to the less experienced interpreter. Although removal of the acquisition footprint is the focus of considerable research, the sources of such footprint artifacts are less commonly discussed or illustrated. Based on real data examples, we hypothesized possible causes of footprint and reproduced them through synthetic prestack modeling. We then processed these models using the same workflows applied to the real data. Geometric attributes computed from the migrated synthetics exhibited the same footprint artifacts as the real data. The models showed that acquisition footprint can be caused by residual ground roll, inaccurate velocities, and far-offset migration stretch. With this understanding, we examined the real seismic data volume and found that the key cause of its acquisition footprint was inaccurate velocity analysis.
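
As background on the stretch mechanism cited above (an editorial addition, not a statement from the paper), far-offset migration stretch closely resembles NMO stretch, which is commonly quantified by the classical first-order relation:

```latex
% First-order NMO-stretch relation (classical result from standard processing
% texts); \Delta t_{\mathrm{NMO}} is the moveout correction applied at
% zero-offset time t_0, and f is the dominant frequency of the wavelet.
\frac{\Delta f}{f} \;\approx\; \frac{\Delta t_{\mathrm{NMO}}}{t_0}
```

Because the moveout correction grows with offset, the loss of dominant frequency is strongest on the far traces, consistent with the far-offset stretch mechanism described above.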

Geophysics ◽  
2000 ◽  
Vol 65 (2) ◽  
pp. 368-376 ◽  
Author(s):  
Bruce S. Hart ◽  
Robert S. Balch

Much industry interest is centered on how to integrate well data and attributes derived from 3-D seismic data sets in the hope of defining reservoir properties in interwell areas. Unfortunately, the statistical underpinnings of these methods become less robust where only a few wells are available, as might be the case in a new or small field. Especially in areas of limited well control, we suggest that the physical basis of the attributes selected during the correlation procedure be validated by generating synthetic seismic sections from geologic models and then deriving attributes from those sections. We demonstrate this approach with a case study from the Appleton field of southwestern Alabama. In this small field, dolomites of the Jurassic Smackover Formation produce from an anticlinal feature about 3800 m deep. We used the available geologic information to generate synthetic seismic sections showing the expected seismic response of the target formation and then picked the relevant horizons in a 3-D seismic data volume spanning the study area. Using multiple regression, we derived an empirical relationship between three seismic attributes of this 3-D volume and a log-derived porosity indicator. Our choice of attributes was validated by deriving complex-trace attributes from the seismic modeling results and confirming that the relationships between well properties and real-data attributes were physically valid. Additionally, the porosity distribution predicted from the 3-D seismic data was reasonable within the context of the depositional model used for the area. Results from a new well drilled after our study validated our porosity prediction, although our structural prediction for the top of the porosity zone was in error. These results remind us that seismic interpretations should be viewed as works in progress that need to be updated as new data become available.
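
As a minimal sketch of the attribute-calibration step described above (the attribute choices, attribute values, and porosities below are invented for illustration, not taken from the Appleton field study), multiple linear regression between a few seismic attributes and a log-derived porosity indicator can be set up as follows:

```python
import numpy as np

# Hypothetical illustration: relate three seismic attributes extracted at well
# locations to a log-derived porosity indicator by multiple linear regression.
attributes = np.array([
    # amplitude, instantaneous frequency (Hz), reflection strength
    [0.82, 31.0, 0.61],
    [0.74, 28.5, 0.55],
    [0.91, 34.2, 0.70],
    [0.63, 26.1, 0.47],
    [0.88, 33.0, 0.66],
])
porosity = np.array([0.11, 0.08, 0.14, 0.05, 0.13])   # log-derived porosity

# Solve porosity ~ b0 + b1*a1 + b2*a2 + b3*a3 in the least-squares sense.
G = np.column_stack([np.ones(len(porosity)), attributes])
coeffs, *_ = np.linalg.lstsq(G, porosity, rcond=None)

# Applying the same coefficients to attributes extracted over the whole 3-D
# volume then yields a porosity prediction in the interwell areas.
predicted_at_wells = G @ coeffs
print("regression coefficients:", coeffs)
print("porosity predicted at the wells:", predicted_at_wells)
```

With so few wells, the fit itself says little; the physical validation against synthetic-model attributes described in the abstract is what guards against spurious correlations.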


2013 ◽  
Vol 31 (4) ◽  
pp. 619 ◽  
Author(s):  
Luiz Eduardo Soares Ferreira ◽  
Milton José Porsani ◽  
Michelângelo G. Da Silva ◽  
Giovani Lopes Vasconcelos

ABSTRACT. Seismic processing aims to provide an adequate image of the subsurface geology. During seismic processing, the filtering of signals regarded as noise is of utmost importance. Among these is surface-wave noise, better known as ground roll. Ground roll occurs mainly in land seismic data, masking reflections, and its main characteristics are high amplitude, low frequency, and low velocity. This noise is generally attenuated with so-called conventional methods that use 1-D frequency filters or 2-D filters in the f-k domain. This study uses the empirical mode decomposition (EMD) method for ground-roll attenuation. The EMD method was implemented in the FORTRAN 90 programming language and applied in the time and frequency domains. Its application to the processing of land seismic line 204-RL-247 from the Tacutu Basin produced stacked seismic sections of similar, and sometimes better, quality compared with those obtained using the f-k and high-pass filtering methods.

Keywords: seismic processing, empirical mode decomposition, seismic data filtering, ground-roll.
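
A minimal sketch of EMD-based ground-roll attenuation is shown below, assuming a simplified sifting loop with a fixed number of iterations and cubic-spline envelopes; the paper's FORTRAN 90 implementation and its frequency-domain variant are more elaborate, and the single synthetic trace is invented purely for illustration.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_imf(x, n_sift=10):
    """Extract one intrinsic mode function (IMF) with a simplified sifting loop."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_sift):
        peaks, _ = find_peaks(h)
        troughs, _ = find_peaks(-h)
        if len(peaks) < 4 or len(troughs) < 4:
            break
        upper = CubicSpline(peaks, h[peaks])(t)        # upper envelope
        lower = CubicSpline(troughs, h[troughs])(t)    # lower envelope
        h = h - 0.5 * (upper + lower)                  # subtract envelope mean
    return h

def emd(x, n_imfs=5):
    """Decompose a trace into IMFs plus a residual (no convergence test)."""
    imfs, residual = [], x.copy()
    for _ in range(n_imfs):
        imf = sift_imf(residual)
        imfs.append(imf)
        residual = residual - imf
    return np.array(imfs), residual

# Invented single-trace illustration: a 40 Hz "reflection" burst plus a
# low-frequency, high-amplitude oscillation standing in for ground roll.
dt = 0.002
t = np.arange(0, 2.0, dt)
trace = (np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 1.0) / 0.05) ** 2)
         + 3.0 * np.sin(2 * np.pi * 6 * t))

imfs, residual = emd(trace)
# Ground-roll energy gravitates to the low-frequency (later) IMFs, so a simple
# attenuation strategy keeps only the first, high-frequency IMFs; in practice
# the IMFs to keep are chosen by inspecting their spectra.
filtered = imfs[:2].sum(axis=0)
```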


Geophysics ◽  
1996 ◽  
Vol 61 (6) ◽  
pp. 1846-1858 ◽  
Author(s):  
Claudio Bagaini ◽  
Umberto Spagnolini

Continuation to zero offset [better known as dip moveout (DMO)] is a standard tool for seismic data processing. In this paper, the concept of DMO is extended by introducing a set of operators: the continuation operators. These operators, implemented in integral form with a defined amplitude distribution, perform the mapping between common-shot or common-offset gathers for a given velocity model. Application of the shot continuation operator to dip-independent velocity analysis allows a direct implementation in the acquisition domain by comparing real data with data continued in the shot domain. Shot and offset continuation allow the restoration of missing shots or missing offsets by using a velocity model provided by common-shot velocity analysis or another dip-independent velocity analysis method.
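
For context (this is the classical constant-velocity special case, not the paper's general continuation operator), the impulse response of DMO maps a single sample on an NMO-corrected common-offset section onto an ellipse in midpoint and zero-offset time:

```latex
% Classical constant-velocity DMO impulse response (the Deregowski-Rocca
% "smile"): a sample at NMO-corrected time t_n, midpoint y_0, and half-offset h
% maps to zero-offset times
t_0(y) \;=\; t_n \,\sqrt{\,1 - \frac{(y - y_0)^2}{h^2}\,},
\qquad |y - y_0| \le h .
```

The continuation operators introduced in the paper generalize this kind of mapping to transformations between gathers acquired with different shot positions or offsets.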


Geophysics ◽  
2003 ◽  
Vol 68 (1) ◽  
pp. 225-231 ◽  
Author(s):  
Rongfeng Zhang ◽  
Tadeusz J. Ulrych

This paper deals with the design and implementation of a new wavelet frame for noise suppression based on the character of seismic data. In general, wavelet denoising methods widely used in image and acoustic processing rely on well-known conventional wavelets which, although versatile, are often not optimal for seismic data. The new approach, physical wavelet frame denoising, uses a wavelet frame that takes into account the characteristics of seismic data in both time and space. Synthetic and real data tests show that the approach is effective even for seismic signals contaminated by strong noise, whether random or coherent, such as ground roll or air waves.
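
The sketch below only illustrates the generic threshold-and-reconstruct mechanics of wavelet-domain denoising, using an off-the-shelf separable Daubechies wavelet via PyWavelets; it does not construct the physical wavelet frame designed in the paper, whose basis functions are tailored to seismic event shapes in time and space, and the input section is synthetic.

```python
import numpy as np
import pywt

# Generic wavelet-domain denoising of a seismic section (traces x samples).
rng = np.random.default_rng(0)
section = rng.standard_normal((128, 256)) * 0.1    # stand-in random noise
section[:, 100:110] += 1.0                          # crude flat "event"

coeffs = pywt.wavedec2(section, wavelet="db4", level=3)

# Soft-threshold the detail coefficients; leave the coarse approximation alone.
thr = 0.3
coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(coeffs, wavelet="db4")
```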


Geophysics ◽  
1993 ◽  
Vol 58 (3) ◽  
pp. 383-392 ◽  
Author(s):  
Peter W. Cary ◽  
Gary A. Lorentz

When performing four-component surface-consistent deconvolution, it is assumed that the decomposition of amplitude spectra into source, receiver, offset, and common-depth-point (CDP) components enables accurate deconvolution filters to be derived. However, relatively little effort has been put into verifying this assumption. Some verification is available by analyzing the results of the surface-consistent decomposition of real seismic data. The surface-consistent log-amplitude spectra of land seismic data provide convincing evidence that the source component collects the source signature and near-source structural effects, and that the receiver component collects receiver characteristics and near-receiver structural effects. In addition, the offset component collects effects due to ground roll and average reflectivity, and the CDP component collects mostly random noise unless it is constrained to be smooth. Based on this analysis, deconvolution filters should be constructed from the source and receiver components, while the offset and CDP components are discarded. The four-component surface-consistent decomposition can be performed efficiently by a simple rearrangement of the Gauss-Seidel matrix inversion equations. The algorithm requires just two passes through the prestack data volume, regardless of the sorted order of the data, so it is useful for both two-dimensional and three-dimensional (2-D and 3-D) data volumes.
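
A minimal sketch of the four-component decomposition is shown below, assuming a single scalar log-amplitude per trace rather than a full spectrum (the paper solves the same kind of system at every frequency of the log-amplitude spectrum) and using simple Gauss-Seidel-style sweeps; the index arrays and amplitudes are synthetic, and the two-pass organization of the paper's algorithm is not reproduced.

```python
import numpy as np

# Toy-scale decomposition of trace log-amplitudes into source, receiver,
# offset, and CDP terms by Gauss-Seidel-style sweeps.
rng = np.random.default_rng(1)
n_traces, n_src, n_rec, n_off, n_cdp = 2000, 40, 60, 20, 80
src = rng.integers(0, n_src, n_traces)     # source index of each trace
rec = rng.integers(0, n_rec, n_traces)     # receiver index
off = rng.integers(0, n_off, n_traces)     # offset-class index
cdp = rng.integers(0, n_cdp, n_traces)     # CDP index

# Synthesize "observed" log amplitudes from known components plus noise.
s_true, r_true = rng.normal(0, 1, n_src), rng.normal(0, 1, n_rec)
o_true, c_true = rng.normal(0, 0.5, n_off), rng.normal(0, 0.2, n_cdp)
log_amp = (s_true[src] + r_true[rec] + o_true[off] + c_true[cdp]
           + rng.normal(0, 0.1, n_traces))

s, r, o, c = (np.zeros(n) for n in (n_src, n_rec, n_off, n_cdp))
for _ in range(20):                                    # Gauss-Seidel sweeps
    for comp, idx, rest in (
        (s, src, lambda: r[rec] + o[off] + c[cdp]),
        (r, rec, lambda: s[src] + o[off] + c[cdp]),
        (o, off, lambda: s[src] + r[rec] + c[cdp]),
        (c, cdp, lambda: s[src] + r[rec] + o[off]),
    ):
        resid = log_amp - rest()                       # strip other components
        sums = np.zeros_like(comp)
        counts = np.zeros_like(comp)
        np.add.at(sums, idx, resid)
        np.add.at(counts, idx, 1.0)
        comp[:] = sums / np.maximum(counts, 1.0)       # mean residual per index
# Note: the four components are only determined up to additive constants;
# production implementations add constraints (e.g., smooth or zero-mean terms).
```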


Geophysics ◽  
2018 ◽  
Vol 83 (1) ◽  
pp. V39-V48 ◽  
Author(s):  
Ali Gholami ◽  
Toktam Zand

The focusing power of the conventional hyperbolic Radon transform decreases for long-offset seismic data because of the nonhyperbolic behavior of moveout curves at far offsets. Furthermore, conventional Radon transforms are ineffective for processing data sets containing events of different shapes. The shifted hyperbola is a flexible three-parameter (zero-offset traveltime, slowness, and focusing depth) function that can generate both linear and hyperbolic shapes and improves the accuracy of the seismic traveltime approximation at far offsets. A Radon transform based on shifted hyperbolas therefore improves the focusing of seismic events in the transform domain. We have developed a new method for effective decomposition of seismic data using such a three-parameter Radon transform. A very fast algorithm is constructed for high-resolution computation of the new Radon transform using the recently proposed generalized Fourier slice theorem (GFST). The GFST establishes an analytic expression between the Fourier coefficients of the data and the Fourier coefficients of its Radon transform, so that very fast switching between the model and data spaces is possible by means of interpolation procedures and fast Fourier transforms. The high performance of the new algorithm is demonstrated on synthetic and real data sets for trace interpolation and linear (ground-roll) noise attenuation.
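
For reference, a commonly quoted form of the shifted-hyperbola moveout is shown below; the paper's parameterization in zero-offset traveltime, slowness, and focusing depth is taken here to describe the same family of curves, an assumption rather than a quotation from the paper.

```latex
% Shifted-hyperbola moveout (Castle-type form); S is the shift parameter,
% v_nmo the NMO velocity, and t_0 the zero-offset traveltime.
t(x) \;=\; t_0\!\left(1 - \frac{1}{S}\right)
      \;+\; \sqrt{\frac{t_0^{\,2}}{S^{2}} \;+\; \frac{x^{2}}{S\,v_{\mathrm{nmo}}^{2}}},
\qquad S \ge 1 .
```

For S = 1 the expression reduces to the conventional hyperbola; other values of S bend the trajectory so that both hyperbolic reflections and nearly linear events such as ground roll can be focused by a single transform, consistent with the abstract.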


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. U63-U72 ◽  
Author(s):  
Raanan Dafni ◽  
Moshe Reshef

We developed a nonconventional approach to interval velocity analysis. The motivation is that, when the subsurface structure is complex, velocity error cannot be related to a single parameter. The suggested analysis uses multiparameter common image gathers (MPCIGs) generated by standard prestack depth migration. The parameterization of these gathers is directly related to the structural characteristics of the subsurface image points, and the undesirable summation usually involved in generating conventional common image gathers is avoided. During the velocity analysis procedure, depth slices taken from the calculated MPCIGs are examined. Each depth slice contains all the seismic data that were migrated to the single image point associated with that slice. When the MPCIGs are generated with the correct velocity function, each depth slice holds all the structural information associated with the corresponding image point. Detailed analysis of 2D synthetic and real data examples demonstrates the influence of migration velocity errors on the accuracy of the migrated multiparameter gathers. A Kirchhoff-based algorithm is used for the migration, along with a layer-stripping method relying on velocity scans for the analysis. A velocity correctness criterion is also verified, along with some suggestions on the practical use of the method.


2019 ◽  
Vol 37 (4) ◽  
Author(s):  
Marcelo Souza ◽  
Milton Porsani

ABSTRACT. Conventional velocity analysis does not account for AVO effects in reflection seismic data, which leads to inadequate velocity fields and hampers subsequent steps of seismic processing. To overcome this problem, researchers developed the weighted AB semblance method, a coherence measure that handles AVO effects in velocity spectra. It is based on the application of two sigmoid weighting functions to AB semblance, and these functions depend on four coefficients whose values directly influence the resolution of the resulting velocity spectrum. In this work, we apply the inversion algorithm very fast simulated annealing (VFSA) to obtain these values. Numerical experiments show that VFSA is a very effective method, yielding correct coefficient values and allowing the generation of velocity spectra with excellent resolution for both synthetic and real data. The results also show that weighted AB semblance is a well-suited coherence measure for velocity spectra because it is insensitive to AVO effects and polarity reversals and presents considerably better resolution than conventional semblance.

Keywords: velocity analysis, AVO, high-resolution velocity spectra.
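
The sketch below illustrates only the underlying AB-semblance idea, in which the constant stack implicit in conventional semblance is replaced by a least-squares offset trend A + B·x along the trial moveout; the sigmoid weighting functions and the VFSA search for their four coefficients, which are the subject of the paper, are omitted, and the function signature is hypothetical.

```python
import numpy as np

def ab_semblance(gather, t, x, t0, v, win=11):
    """AB-semblance-style coherence for a single (t0, v) trial.

    gather : 2-D array (n_samples, n_offsets), a CMP gather
    t, x   : time axis in seconds and offset axis in meters (numpy arrays)
    t0, v  : trial zero-offset time (s) and stacking velocity (m/s)
    A linear offset trend A + B*x is least-squares fitted along the hyperbolic
    trajectory, so coherence survives AVO and polarity reversals.
    """
    dt = t[1] - t[0]
    num = den = 0.0
    for k in range(-(win // 2), win // 2 + 1):
        tk = np.sqrt((t0 + k * dt) ** 2 + (x / v) ** 2)   # hyperbolic moveout
        idx = np.round(tk / dt).astype(int)
        ok = idx < gather.shape[0]
        if ok.sum() < 3:
            continue
        d = gather[idx[ok], np.nonzero(ok)[0]]             # picked amplitudes
        G = np.column_stack([np.ones(ok.sum()), x[ok]])
        A, B = np.linalg.lstsq(G, d, rcond=None)[0]        # fit A + B*x
        trend = A + B * x[ok]
        num += np.dot(d, trend) ** 2
        den += np.dot(d, d) * np.dot(trend, trend)
    return num / den if den > 0.0 else 0.0

# Scanning a range of trial velocities for each t0 with this function builds a
# velocity spectrum panel, one coherence value per (t0, v) cell.
```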


Geophysics ◽  
2018 ◽  
Vol 83 (6) ◽  
pp. U79-U88 ◽  
Author(s):  
Mostafa Abbasi ◽  
Ali Gholami

Seismic velocity analysis is one of the most crucial and, at the same time, most laborious tasks in seismic data processing. It becomes even more difficult and time-consuming when nonhyperbolicity has to be considered. Nonhyperbolic velocity analysis provides very useful information for the processing and interpretation of seismic data. The most common approach for accounting for anisotropy during velocity analysis is to describe the moveout with a nonhyperbolic equation. The nonhyperbolic moveout equation in vertically transversely isotropic (VTI) media is defined by two parameters: the normal-moveout (NMO) velocity Vnmo and the anellipticity η (or, equivalently, the horizontal velocity Vhor). We have developed a new approach based on polynomial chaos (PC) expansion for automating nonhyperbolic velocity analysis of common-midpoint (CMP) data in VTI media. For this purpose, we use the PC expansion to approximate the nonhyperbolic semblance function with an inexpensive-to-evaluate function of the two moveout parameters. Then, using particle swarm optimization, we stochastically search for the optimum NMO and horizontal velocities that maximize the semblance. In contrast to common approaches to nonhyperbolic velocity analysis, in which the two parameters are estimated iteratively in an alternating fashion, we find both parameters simultaneously. The approach is tested on various data, including a simple convolutional model, an anisotropic benchmark model, and a real data set. In all cases, the new method provided acceptable results: reflections in the CMP gathers corrected with the optimum velocities are properly flattened, and almost no residual moveout is observed.
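
For reference, the standard nonhyperbolic moveout equation for VTI media in terms of these two parameters is the Alkhalifah-Tsvankin expression; the abstract does not spell out the equation, so its exact form here is an assumption.

```latex
% Nonhyperbolic (long-offset) moveout in VTI media (Alkhalifah-Tsvankin form),
% with the horizontal velocity related to V_nmo and the anellipticity eta:
t^{2}(x) \;=\; t_0^{2} \;+\; \frac{x^{2}}{V_{\mathrm{nmo}}^{2}}
  \;-\; \frac{2\,\eta\, x^{4}}
             {V_{\mathrm{nmo}}^{2}\left[\,t_0^{2} V_{\mathrm{nmo}}^{2}
               + (1 + 2\eta)\,x^{2}\right]},
\qquad V_{\mathrm{hor}}^{2} = (1 + 2\eta)\,V_{\mathrm{nmo}}^{2}.
```

Flattening a CMP gather with this equation requires both parameters, which is why the semblance surface is two-dimensional and why a stochastic search such as particle swarm optimization over an inexpensive surrogate is attractive.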

