CONTINUOUS VELOCITY ESTIMATION AND SEISMIC WAVELET PROCESSING

Geophysics, 1972, Vol 37 (5), pp. 769-787
Author(s): J. W. C. Sherwood, P. H. Poe

An economical computer program can stack the data from several adjoining common depth points over a wide range of both dip and normal moveout. From this we can extract a set of seismic wavelets, each possessing a determined dip and normal moveout, which represent the original seismic data in an approximate and compressed form. The seismic wavelets resulting from the processing of a complete seismic line are stored for a variety of subsequent uses, such as the following: 1) Superimpose the wavelets, or a subset of them, to form a record section analogous to a conventional common-depth-point stacked section. This facilitates the construction of record sections consisting predominantly of either multiple or primary reflections. Other benefits can arise from improved signal-to-random-noise ratio, the concurrent display of overlapping primary wavelets with widely different normal moveouts, and the elimination of the waveform stretching that occurs on the long-offset traces with conventional normal-moveout removal. 2) By displaying each picked wavelet as a short dip-bar located at the correct time and spatial position and annotated with the estimated rms velocity, we can exhibit essentially continuous rms-velocity data along each reflection. This information can be utilized for the estimation of interval and average velocities. For comparative purposes this velocity-annotated dip-bar display is normally formed on the same scale as the conventional common-depth-point stack section.
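For readers who want to experiment with the kind of scan described above, the hyperbolic moveout relation that underlies stacking over trial normal moveouts can be sketched in a few lines of Python. The function name and test values below are illustrative, not taken from the paper:

```python
import numpy as np

def nmo_time(t0, offset, v_rms):
    """Two-way traveltime on an offset trace under the hyperbolic
    NMO model: t(x) = sqrt(t0**2 + (x / v_rms)**2)."""
    return np.sqrt(t0**2 + (offset / v_rms)**2)

# Illustrative values: a 1.0 s zero-offset event scanned over trial velocities.
t0 = 1.0                                        # zero-offset time, s
offsets = np.array([0., 500., 1000., 1500.])    # m
for v in (1800., 2200., 2600.):                 # trial rms velocities, m/s
    print(v, nmo_time(t0, offsets, v))
```

A dip/NMO scan of the kind the paper describes would stack the data along each such trial trajectory and keep the (dip, moveout) pairs that maximize coherence.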

1975, Vol 15 (1), pp. 81
Author(s): W. Pailthorpe, J. Wardell

During the past two years, much publicity has been given to the direct indication of hydrocarbon accumulations by "Bright Spot" reflections: the very high amplitude reflections from a shale to gas-sand or gas-sand to water-sand interface. It was soon generally realised, however, that this phenomenon was of limited occurrence, being mostly restricted to young, shallow, sand and shale sequences such as the United States Gulf Coast. A more widely detectable indication of hydrocarbons was found to be the reflection from a fluid interface, such as the gas to water interface, within the reservoir. Being the reflection from a fluid interface, it is characterised by its flatness and is often called the "Flat Spot". Model studies show that the flat spots have a wide range of amplitudes, from very high for shallow gas to water contacts, to very low for deep oil to water contacts. However, many of the weaker flat spots on good recent marine seismic data have an adequate signal-to-random-noise ratio for detection, and the problem is to separate and distinguish them from the other, stronger reflections close by. In this respect the unique flatness of the fluid-contact reflection can be exploited by dip-discriminant processes, such as velocity filtering, to separate it from the generally dipping reflectors at its boundaries. A limiting factor in the detection of the deeper flat spots is the frequency bandwidth of the seismic data. Since the separation between the flat spot reflection and the upper and lower boundary reflections of the reservoir is often small, relatively high frequency data are needed to resolve these separate reflections. Correct display of the seismic data can be critical to flat spot detection, and some degree of vertical exaggeration of the seismic section is often required to increase apparent dips and thus make the flat spots more noticeable. The flat spot is generally a smaller target than the structural features that conventional seismic surveys are designed to find and map, and so a denser-than-normal grid of seismic lines is required to map most flat spots adequately.
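As a sketch of the dip-discriminant idea mentioned above, a simple f-k fan filter can pass only near-flat energy so that fluid-contact reflections stand out against dipping boundary reflections. This is a minimal illustration under assumed parameter names, not the processing actually used in the paper:

```python
import numpy as np

def passthrough_flat_events(section, dx, dt, max_dip_ms_per_trace=1.0):
    """Crude dip-discriminant (f-k fan) filter: keep energy whose apparent
    dip is below a small threshold, attenuating dipping reflections so
    near-flat events such as fluid-contact reflections stand out.
    `section` is (n_t, n_x); the dip threshold is in ms per trace."""
    nt, nx = section.shape
    F = np.fft.fft2(section)
    f = np.fft.fftfreq(nt, d=dt)[:, None]    # temporal frequency, Hz
    k = np.fft.fftfreq(nx, d=dx)[None, :]    # spatial frequency, 1/m
    # A plane wave with dip p (s/m) maps to the line k = p * f,
    # so its apparent dip is k / f.
    with np.errstate(divide='ignore', invalid='ignore'):
        dip = np.abs(np.where(f != 0, k / f, 0.0))
    max_dip = (max_dip_ms_per_trace * 1e-3) / dx   # threshold in s per m
    F *= (dip <= max_dip)                          # zero steeper components
    return np.fft.ifft2(F).real
```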


Geophysics, 1981, Vol 46 (2), pp. 106-120
Author(s): Frank J. Feagin

Relatively little attention has been paid to the final output of today’s sophisticated seismic data processing procedures—the seismic section display. We first examine significant factors relating to those displays and then describe a series of experiments that, by varying those factors, let us specify displays that maximize interpreters’ abilities to detect reflections buried in random noise. The study.—From the psychology-of-perception and image-enhancement literature and from our own research, these conclusions were reached: (1) Seismic reflection perceptibility is best for time scales in the neighborhood of 1.875 inches/sec because, for common seismic frequencies, the eye-brain spatial frequency response is a maximum near that value. (2) An optimized gray scale for variable density sections is nonlinearly related to digital data values on a plot tape. The nonlinearity is composed of two parts: (a) that which compensates for the nonlinearity inherent in human perception, and (b) the nonlinearity required to produce histogram equalization, a modern image-enhancement technique. The experiments.—The experiments involved 37 synthetic seismic sections composed of simple reflections embedded in filtered random noise. Reflection signal-to-noise (S/N) ratio was varied over a wide range, as were other display parameters, such as scale, plot mode, photographic density contrast, gray scale, and reflection dip angle. Twenty-nine interpreters took part in the experiments. The sections were presented, one at a time, to each interpreter; the interpreter then proceeded to mark all recognizable events. Marked events were checked against known data and errors recorded. Detectability thresholds in terms of S/N ratios were measured as a function of the various display parameters. Some of the more important conclusions are: (1) With our usual types of displays, interpreters can pick reflections about 6 or 7 dB below noise with a 50 percent probability. (2) Perceptibility varies from one person to another by 2.5 to 3.0 dB. (3) For displays with a 3.75 inches/sec scale and low contrast photographic paper (a common situation), variable density (VD) and variable area-wiggly trace (VA-WT) sections are about equally effective from a perceptibility standpoint. (4) However, for displays with small scales and for displays with higher contrast, variable density is significantly superior. A VD section with all parameters optimized shows about 8 dB perceptibility advantage over an optimized VA-WT section. (5) Detectability drops as dip angle increases. VD is slightly superior to VA-WT, even at large scales, for steep dip angles. (6) An interpreter gains typically about 2 dB by foreshortening, although there is a wide variation from one individual to another.
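The histogram-equalization component of that nonlinear gray scale is easy to illustrate. The sketch below, with illustrative names and a 256-level display assumed, maps plot-tape amplitudes so that all gray levels are used about equally often; the perception-compensating part of the mapping is not shown:

```python
import numpy as np

def equalized_gray_map(amplitudes, n_levels=256):
    """Map plot-tape amplitude values to display gray levels via
    histogram equalization, so that each gray level appears with
    roughly equal frequency across the section."""
    hist, bin_edges = np.histogram(amplitudes, bins=n_levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                 # normalize to 0..1
    # Look up each sample's bin, then its equalized gray level.
    idx = np.clip(np.digitize(amplitudes, bin_edges[1:-1]), 0, n_levels - 1)
    return (cdf[idx] * (n_levels - 1)).astype(np.uint8)
```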


2015, Vol 3 (3), pp. SS1-SS13
Author(s): Huailai Zhou, Yuanjun Wang, Tengfei Lin, Fangyu Li, Kurt J. Marfurt

Seismic data with enhanced resolution allow interpreters to effectively delineate and interpret architectural components of stratigraphically thin geologic features. We used a recently developed time-frequency domain deconvolution method to spectrally balance nonstationary seismic data. The method was based on polynomial fitting of seismic wavelet magnitude spectra. The deconvolution increased the spectral bandwidth but did not amplify random noise. We compared our new spectral modeling algorithm with existing time-variant spectral-whitening and inverse Q-filtering algorithms using a 3D offshore survey acquired over the Bohai Gulf, China. We mapped these improvements spatially using a suite of 3D volumetric coherence, energy, curvature, and frequency attributes. The resulting images displayed improved lateral resolution of channel edges and fault edges with few, if any, artifacts associated with amplification of random noise.
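A minimal single-trace sketch of the spectral-modeling idea, assuming an STFT implementation of the time-frequency transform: in each window a low-order polynomial fit to the log-magnitude spectrum models the smooth wavelet spectrum, which is then divided out. The window length, polynomial order, and function names are assumptions, and the noise-protection details of the published algorithm are omitted:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_balance(trace, fs, order=4, eps=1e-8):
    """Sketch of time-frequency spectral balancing: within each STFT
    window, model the smooth wavelet magnitude spectrum with a low-order
    polynomial and divide it out, whitening the wavelet while leaving
    the rougher reflectivity/noise spectrum largely untouched."""
    f, t, Z = stft(trace, fs=fs, nperseg=128)
    mag = np.abs(Z)
    for j in range(Z.shape[1]):                    # each time window
        logm = np.log(mag[:, j] + eps)
        coef = np.polyfit(f, logm, order)          # smooth wavelet model
        model = np.exp(np.polyval(coef, f))
        Z[:, j] /= (model + eps)                   # flatten modeled spectrum
    _, out = istft(Z, fs=fs, nperseg=128)
    return out[:len(trace)]
```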


Geophysics, 1993, Vol 58 (3), pp. 383-392
Author(s): Peter W. Cary, Gary A. Lorentz

When performing four-component surface-consistent deconvolution, it is assumed that the decomposition of amplitude spectra into source, receiver, offset, and common-depth-point components enables accurate deconvolution filters to be derived. However, relatively little effort has been put into verifying this assumption. Some verification is available by analyzing the results of the surface-consistent decomposition of real seismic data. The surface-consistent log-amplitude spectra of land seismic data provide convincing evidence that the source component collects the source signature and near-source structural effects, and that the receiver component collects receiver characteristics and near-receiver structural effects. In addition, the offset component collects effects due to ground roll and average reflectivity, and the CDP component collects mostly random noise unless it is constrained to be smooth. Based on the results of this analysis, deconvolution filters should be constructed from the source and receiver components, while the offset and CDP components are discarded. The four-component surface-consistent decomposition can be performed efficiently by making use of a simple rearrangement of the Gauss-Seidel matrix inversion equations. The algorithm requires just two passes through the prestack data volume, regardless of the sorted order of the data, so it is useful for both two-dimensional and three-dimensional (2-D and 3-D) data volumes.
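The decomposition itself is straightforward to sketch. Below is a generic Gauss-Seidel iteration for one frequency of the log-amplitude spectra, with made-up argument names; the paper's specific two-pass rearrangement of the equations is not reproduced here:

```python
import numpy as np

def surface_consistent_decomp(logamp, src, rec, off, cdp, n_iter=10):
    """Gauss-Seidel sketch of the four-component surface-consistent
    decomposition: each trace's log-amplitude (one frequency shown) is
    modeled as source + receiver + offset + CDP terms, and each term is
    updated in turn as the mean residual over the traces that share it.
    `src`, `rec`, `off`, `cdp` are integer index arrays, one per trace."""
    S = np.zeros(src.max() + 1); R = np.zeros(rec.max() + 1)
    O = np.zeros(off.max() + 1); C = np.zeros(cdp.max() + 1)
    for _ in range(n_iter):
        for comp, idx in ((S, src), (R, rec), (O, off), (C, cdp)):
            # residual with this component's own contribution added back
            resid = logamp - (S[src] + R[rec] + O[off] + C[cdp]) + comp[idx]
            for k in range(comp.size):
                sel = idx == k
                if sel.any():
                    comp[k] = resid[sel].mean()
    return S, R, O, C
```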


1974, Vol 14 (1), pp. 107
Author(s): John Wardell

Since the introduction of the common depth point method of seismic reflection shooting, we have seen a continued increase in the multiplicity of subsurface coverage, to the point where nowadays a large proportion of offshore shooting uses a 48-fold, 48-trace configuration. Of the many benefits obtained from this multiplicity of coverage, the attenuation of multiple reflections during the common depth point stacking process is one of the most important. Examination of theoretical response curves for multiple attenuation in common depth point stacking shows that although increased multiplicity does give improved multiple attenuation, this improvement occurs at higher and higher frequencies and residual moveouts (of the multiples) as the multiplicity continues to increase. For multiplicities greater than 12, the improvement is at relatively high frequencies and residual moveouts, while there is no significant improvement for the lower frequencies of multiples with smaller residual moveouts, which unfortunately are those most likely to remain visible after the stacking process. The simple process of zeroing, or muting, certain selected traces (mostly the shorter offset traces) before stacking can give an average 6 to 9 decibels improvement over a wide range of the low-frequency and residual-moveout part of the stack response, with 9 to 15 decibels improvement over parts of this range. The cost of this improvement is an increase in random noise level of 1 to 2 decibels. With digital processing methods, it is easy to zero the necessary traces over selected portions of the seismic section if so desired. The process does not require a detailed knowledge of the multiple residual moveouts, but can be used on a routine basis in areas where strong multiples are a problem and a high stacking multiplicity is being used.
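The effect of muting on the stack response can be reproduced with a few lines. The sketch below computes the standard amplitude response of a stack to a multiple with a given residual moveout on each trace; the linear moveout ramp and the choice of 16 muted inner traces are illustrative values only:

```python
import numpy as np

def stack_response(freqs, residual_moveouts, live=None):
    """Amplitude response of a CDP stack to a multiple with the given
    residual moveouts (s) on each input trace. `live` is a boolean mask;
    muted (zeroed) traces are simply excluded, which is how dropping the
    short-offset traces reshapes the low-frequency part of the response."""
    dt = np.asarray(residual_moveouts)
    if live is None:
        live = np.ones(dt.size, dtype=bool)
    phases = np.exp(2j * np.pi * np.outer(freqs, dt[live]))
    return np.abs(phases.sum(axis=1)) / live.sum()

# Illustrative 48-fold gather: compare the full stack with the 16
# shortest-offset (smallest residual moveout) traces muted.
moveouts = np.linspace(0.0, 0.060, 48)      # residual moveout ramp, s
freqs = np.array([10., 20., 30.])           # Hz
full  = stack_response(freqs, moveouts)
muted = stack_response(freqs, moveouts, live=np.arange(48) >= 16)
print(full, muted)
```

Because the short-offset traces carry the multiple almost coherently, excluding them lowers the stack response at the low frequencies and small residual moveouts discussed above.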


2013, Vol 31 (4), pp. 619
Author(s): Luiz Eduardo Soares Ferreira, Milton José Porsani, Michelângelo G. Da Silva, Giovani Lopes Vasconcelos

ABSTRACT. Seismic processing aims to provide an adequate image of the subsurface geology. During seismic processing, the filtering of signals considered noise is of utmost importance. Among these signals is the surface rolling noise, better known as ground-roll. Ground-roll occurs mainly in land seismic data, masking reflections; its main features are high amplitude, low frequency, and low velocity. The attenuation of this noise is generally performed through so-called conventional methods using 1-D or 2-D frequency filters in the fk domain. This study uses the empirical mode decomposition (EMD) method for ground-roll attenuation. The EMD method was implemented in the programming language FORTRAN 90 and applied in the time and frequency domains. The application of this method to the processing of land seismic line 204-RL-247 in the Tacutu Basin resulted in stacked seismic sections of similar, and sometimes better, quality compared with those obtained using the fk and high-pass filtering methods.
Keywords: seismic processing, empirical mode decomposition, seismic data filtering, ground-roll.
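A trace-by-trace sketch of the time-domain variant is below, using the third-party PyEMD package rather than the authors' FORTRAN 90 implementation; the number of retained modes is an illustrative choice, not the paper's parameter:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def attenuate_ground_roll(trace, n_keep=3):
    """Trace-by-trace EMD sketch: decompose into intrinsic mode functions
    (IMFs), ordered from highest to lowest frequency, and rebuild the
    trace from the first few IMFs so that the low-frequency,
    high-amplitude ground-roll energy in the later IMFs is left out."""
    imfs = EMD()(trace)              # shape: (n_imfs, n_samples)
    return imfs[:n_keep].sum(axis=0)

# Applied trace-by-trace down the columns of a shot gather:
# filtered = np.apply_along_axis(attenuate_ground_roll, 0, gather)
```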



2021, Vol 11 (11), pp. 4874
Author(s): Milan Brankovic, Eduardo Gildin, Richard L. Gibson, Mark E. Everett

Seismic data provides integral information in geophysical exploration, for locating hydrocarbon-rich areas as well as for fracture monitoring during well stimulation. Because of its high-frequency acquisition rate and dense spatial sampling, distributed acoustic sensing (DAS) has seen increasing application in microseismic monitoring. Given the large volumes of data to be analyzed in real time and the impractical memory and storage requirements, fast compression and accurate interpretation methods are necessary for real-time monitoring campaigns using DAS. In response to these developments in data acquisition, we have created shifted-matrix decomposition (SMD) to compress seismic data by storing it as pairs of singular vectors coupled with shift vectors. This is achieved by shifting the columns of a matrix of seismic data before applying singular value decomposition (SVD) to extract a pair of singular vectors. The purpose of SMD is data denoising as well as compression, as reconstructing seismic data from its compressed form creates a denoised version of the original data. By analyzing the data in its compressed form, we can also run signal detection and velocity estimation analysis. Therefore, the developed algorithm can simultaneously compress and denoise seismic data while also analyzing the compressed data to estimate signal presence and wave velocities. To show its efficiency, we compare SMD to local SVD and structure-oriented SVD, which are similar SVD-based methods used only for denoising seismic data. While the development of SMD is motivated by the increasing use of DAS, SMD can be applied to any seismic data obtained from a large number of receivers. For example, here we present initial applications of SMD to readily available marine seismic data.
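The shift-then-SVD core of the method can be sketched compactly. The version below assumes integer per-column shifts are already known and extracts a single singular pair; the shift-estimation scan and the extension to multiple pairs in the published algorithm are omitted:

```python
import numpy as np

def rank1_shifted_decomposition(data, shifts):
    """Sketch of the shift-then-SVD idea: align each column (receiver) of
    the data matrix by its shift, take the leading singular pair of the
    aligned matrix as the compressed representation, and reconstruct by
    undoing the shifts. `shifts` (samples per column) would come from a
    shift-estimation step not shown here."""
    aligned = np.column_stack(
        [np.roll(data[:, j], -shifts[j]) for j in range(data.shape[1])])
    U, s, Vt = np.linalg.svd(aligned, full_matrices=False)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])   # compressed, denoised core
    return np.column_stack(
        [np.roll(rank1[:, j], shifts[j]) for j in range(data.shape[1])])
```

Storing only the two singular vectors and the shift vector, rather than the full matrix, is what gives the compression; the rank-1 reconstruction is what gives the denoising.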

