IMPROVED MULTIPLE ATTENUATION IN COMMON DEPTH POINT STACKING

1974 ◽  
Vol 14 (1) ◽  
pp. 107
Author(s):  
John Wardell

Since the introduction of the common depth point method of seismic reflection shooting, we have seen a continued increase in the multiplicity of subsurface coverage, to the point where nowadays a large proportion of offshore shooting uses a 48-fold, 48-trace configuration. Of the many benefits obtained from this multiplicity of coverage, the attenuation of multiple reflections during the common depth point stacking process is one of the most important.

Examination of theoretical response curves for multiple attenuation in common depth point stacking shows that although increased multiplicity does give improved multiple attenuation, the improvement occurs at higher and higher frequencies and residual moveouts (of the multiples) as the multiplicity continues to increase. For multiplicities greater than 12, the improvement is confined to relatively high frequencies and residual moveouts, while there is no significant improvement for the lower frequencies of multiples with smaller residual moveouts, which unfortunately are those most likely to remain visible after the stacking process.

The simple process of zeroing, or muting, certain selected traces (mostly the shorter-offset traces) before stacking can give an average improvement of 6 to 9 decibels over a wide range of the low-frequency, low-residual-moveout part of the stack response, with 9 to 15 decibels of improvement over parts of this range. The cost of this improvement is an increase in random noise level of 1 to 2 decibels. With digital processing methods, it is easy to zero the necessary traces over selected portions of the seismic section if so desired.

The process does not require a detailed knowledge of the multiple residual moveouts, and can be used on a routine basis in areas where strong multiples are a problem and a high stacking multiplicity is being used.
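The trade-off described above follows from the standard response of an N-trace stack to an event with residual moveout tau_n on trace n, |sum_n exp(2*pi*i*f*tau_n)| / N. The Python sketch below compares a full 48-fold stack with one whose short-offset traces are muted; the offsets, moveouts, and mute threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stack_response(freqs, residual_moveouts, live=None):
    """|sum exp(2*pi*i*f*tau)| / N for the traces kept in the stack."""
    tau = np.asarray(residual_moveouts)
    if live is None:
        live = np.ones(tau.size, dtype=bool)
    tau = tau[live]
    f = np.asarray(freqs)[:, None]
    return np.abs(np.exp(2j * np.pi * f * tau).sum(axis=1)) / tau.size

# 48-fold gather; a multiple's residual moveout grows roughly with offset^2
offsets = np.linspace(100.0, 4800.0, 48)          # m
tau = 0.080 * (offsets / offsets.max()) ** 2      # s, 80 ms at the far trace
freqs = np.linspace(1.0, 60.0, 120)               # Hz

full = stack_response(freqs, tau)
muted = stack_response(freqs, tau, live=offsets > 1500.0)  # zero short offsets

band = (freqs >= 5.0) & (freqs <= 25.0)
gain_db = 20.0 * np.log10(full[band] / muted[band])
print("mean 5-25 Hz improvement from muting: %.1f dB" % gain_db.mean())
```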

1975 ◽  
Vol 15 (1) ◽  
pp. 81
Author(s):  
W. Pailthorpe ◽  
J. Wardell

During the past two years, much publicity has been given to the direct indication of hydrocarbon accumulations by "Bright Spot" reflections: the very high amplitude reflections from a shale to gas-sand or gas-sand to water-sand interface. It was soon generally realised, however, that this phenomenon was of limited occurrence, being mostly restricted to young, shallow sand and shale sequences such as those of the United States Gulf Coast. A more widely detectable indication of hydrocarbons was found to be the reflection from a fluid interface, such as the gas to water interface, within the reservoir. Being a fluid interface, this reflection is characterised by its flatness, and is often called the "Flat Spot".

Model studies show that flat spots have a wide range of amplitudes, from very high for shallow gas to water contacts to very low for deep oil to water contacts. However, many of the weaker flat spots on good recent marine seismic data have an adequate signal to random noise ratio for detection, and the problem is to separate and distinguish them from the other, stronger reflections close by. In this respect the unique flatness of the fluid contact reflection can be exploited by dip-discriminant processes, such as velocity filtering, to separate it from the generally dipping reflectors at its boundaries. A limiting factor in the detection of the deeper flat spots is the frequency bandwidth of the seismic data. Since the separation between the flat spot reflection and the upper and lower boundary reflections of the reservoir is often small, relatively high frequency data are needed to resolve these separate reflections. Correct display of the seismic data can be critical to flat spot detection, and some degree of vertical exaggeration of the seismic section is often required to increase apparent dips and thus make the flat spots more noticeable.

The flat spot is generally a smaller target than the structural features that conventional seismic surveys are designed to find and map, so a denser-than-normal grid of seismic lines is required to map most flat spots adequately.
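Because a flat spot has essentially zero dip, it maps close to k = 0 in the f-k plane, which is what a dip-discriminant (velocity) filter exploits. The numpy sketch below passes only near-flat events; the pass-band slowness p_max and all implementation details are assumptions for illustration, not the processing used in the paper.

```python
import numpy as np

def pass_flat_events(section, dt, dx, p_max):
    """Keep events whose apparent slowness |k/f| is below p_max (s/m).

    section : 2D array, shape (n_time_samples, n_traces)
    dt, dx  : time (s) and trace (m) sampling intervals
    p_max   : largest apparent slowness passed; flat events have p = 0
    """
    fk = np.fft.fft2(section)
    nt, nx = section.shape
    f = np.fft.fftfreq(nt, dt)[:, None]   # temporal frequency, Hz
    k = np.fft.fftfreq(nx, dx)[None, :]   # spatial frequency, 1/m
    keep = np.abs(k) <= p_max * np.abs(f) + 1e-9
    return np.real(np.fft.ifft2(fk * keep))
```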


Geophysics ◽  
1977 ◽  
Vol 42 (4) ◽  
pp. 868-871 ◽  
Author(s):  
Jerry A. Ware

Confirmation that a bright spot zone in question is low velocity can sometimes be made by looking at constant velocity stacks or the common‐depth‐point gathers. When this confirmation does exist, it is usually possible to apply simple ray theory to get a reasonable estimate of the pay thickness, especially if the water‐sand velocity and the gas‐sand velocity are either known or can be predicted for the area. The confirmation referred to can take the form of under‐removal of the primary events or be exhibited by multiple reflections from the bright spot zone. Such under‐removals or multiple reflections will not be seen on the stacked sections but are sometimes obvious on the raw data, such as the common‐depth‐point gathers, or can be implied by looking at constant velocity stacks of the zone in question at different stacking velocities.
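At vertical incidence the ray-theory estimate reduces to half the two-way traveltime across the gas zone times the gas-sand velocity. The numbers below are purely illustrative assumptions, not values from the paper:

```python
# Illustrative values only
v_gas_sand = 1800.0   # m/s, assumed gas-sand interval velocity
dt_two_way = 0.030    # s, two-way time from top of gas sand to fluid contact

pay_thickness = v_gas_sand * dt_two_way / 2.0
print(f"estimated pay thickness: {pay_thickness:.0f} m")   # -> 27 m
```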


Geophysics ◽  
1981 ◽  
Vol 46 (2) ◽  
pp. 106-120 ◽  
Author(s):  
Frank J. Feagin

Relatively little attention has been paid to the final output of today’s sophisticated seismic data processing procedures—the seismic section display. We first examine significant factors relating to those displays and then describe a series of experiments that, by varying those factors, let us specify displays that maximize interpreters’ abilities to detect reflections buried in random noise.

The study.—From the psychology of perception and image enhancement literature and from our own research, these conclusions were reached: (1) Seismic reflection perceptibility is best for time scales in the neighborhood of 1.875 inches/sec because, for common seismic frequencies, the eye‐brain spatial frequency response is a maximum near that value. (2) An optimized gray scale for variable density sections is nonlinearly related to digital data values on a plot tape. The nonlinearity is composed of two parts: (a) a part that compensates for the nonlinearity inherent in human perception, and (b) the nonlinearity required to produce histogram equalization, a modern image enhancement technique.

The experiments.—The experiments involved 37 synthetic seismic sections composed of simple reflections embedded in filtered random noise. Reflection signal‐to‐noise (S/N) ratio was varied over a wide range, as were other display parameters, such as scale, plot mode, photographic density contrast, gray scale, and reflection dip angle. Twenty‐nine interpreters took part in the experiments. The sections were presented, one at a time, to each interpreter; the interpreter then proceeded to mark all recognizable events. Marked events were checked against known data and errors recorded. Detectability thresholds in terms of S/N ratios were measured as a function of the various display parameters. Some of the more important conclusions are: (1) With our usual types of displays, interpreters can pick reflections about 6 or 7 dB below noise with a 50 percent probability. (2) Perceptibility varies from one person to another by 2.5 to 3.0 dB. (3) For displays with a 3.75 inch/sec scale and low contrast photographic paper (a common situation), variable density (VD) and variable area‐wiggly trace (VA‐WT) sections are about equally effective from a perceptibility standpoint. (4) However, for displays with small scales and for displays with higher contrast, variable density is significantly superior. A VD section with all parameters optimized shows about an 8 dB perceptibility advantage over an optimized VA‐WT section. (5) Detectability drops as dip angle increases. VD is slightly superior to VA‐WT, even at large scales, for steep dip angles. (6) An interpreter typically gains about 2 dB by foreshortening, although there is wide variation from one individual to another.
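Histogram equalization, the image-enhancement half of the optimized gray scale, maps each amplitude through the cumulative distribution of all amplitudes so that every gray level is used about equally often. A minimal numpy sketch (the level count and binning scheme are assumptions):

```python
import numpy as np

def equalized_gray_scale(amplitudes, n_levels=256):
    """Map plot-tape amplitudes to gray levels with roughly equal usage."""
    hist, edges = np.histogram(amplitudes, bins=n_levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                # normalize to (0, 1]
    idx = np.digitize(amplitudes, edges[1:-1])    # bin index of each sample
    return (cdf[idx] * (n_levels - 1)).astype(np.uint8)
```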


2021 ◽  
Vol 20 (9) ◽  
pp. 34-43
Author(s):  
Elizaveta S. Onufrieva ◽  
Irina V. Tresorukova

This paper discusses the problems of the lexicographical representation of Modern Greek constructional phrasemes, productive phraseological patterns with one or more variable components (slots). An analysis of Modern Greek general and phraseological dictionaries has shown that Modern Greek lexicography has no unified approach to the description of this type of phraseme. One significant problem in the lexicographical treatment of Modern Greek constructional phrasemes is that some of them are registered in dictionaries as fully fixed expressions, with their slot(s) filled by a specific lexeme or proposition and with no indication that the expression has a variable component. Such a representation of productive phraseological patterns does not reflect real linguistic usage and does not allow the dictionary user to see that expressions described as fully fixed in fact show considerable variation and possess one or two slots that can be filled with a wide range of words or word combinations. A corpus analysis of the constructional phraseme Ούτε να Ρ (literally, ‘neither if’), which is registered in Modern Greek dictionaries in five different, fully lexically specified forms, has shown that the specific realizations of this productive phraseological pattern included in the dictionaries either occur with relatively low frequency in the corpus or are not encountered in the corpus at all. Other realizations of the pattern account for over 92% of all cases of its use in the corpus, but the common pattern behind them can hardly be identified with the help of the existing lexicographical descriptions, as it is registered in the dictionaries under the lemmas of five different lexemes that do not form part of its fixed component. Based on these findings, the paper raises the issue of developing a new approach to the description of productive phraseological patterns, which currently pose a significant challenge for adequate lexicographical representation.


1977 ◽  
Vol 99 (4) ◽  
pp. 284-292 ◽  
Author(s):  
A. J. Healey ◽  
E. Nathman ◽  
C. C. Smith

This paper presents the results of an analytical and experimental study of ride vibrations in an automobile over roads of various degrees of roughness. Roadway roughness inputs were measured. Three different linear mathematical models, with two, four, and seven degrees of freedom, were employed to predict the acceleration response of the vehicle body, primarily for vertical motion. The results show that the prime source of error in predicting responses of this type lies in the common assumptions made for roadway roughness spectra. With an adequate description of the roadway inputs, the seven-degree-of-freedom model accurately predicted the low frequency response (up to 10 Hz). Using the seven-degree-of-freedom model, predicted accelerations compare well with measured data for a wide range of roadways in the low frequency range. Higher frequency components in the measured acceleration response are significant and are illustrated here.
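The smallest member of such a model family is the two-degree-of-freedom "quarter-car", whose body-acceleration response to road displacement can be found by solving a 2x2 complex system at each frequency. The sketch below is a generic illustration; the masses, stiffnesses, and damping are assumed values, not the parameters of the vehicle studied in the paper.

```python
import numpy as np

# assumed quarter-car parameters (illustrative only)
m_s, m_u = 300.0, 40.0      # sprung / unsprung mass, kg
k_s, k_t = 20e3, 180e3      # suspension / tire stiffness, N/m
c_s = 1.2e3                 # suspension damping, N*s/m

freqs = np.linspace(0.2, 20.0, 400)   # Hz
H = np.empty_like(freqs)              # |body accel| per unit road displacement

for i, f in enumerate(freqs):
    w = 2.0 * np.pi * f
    A = np.array([[-m_s*w**2 + 1j*w*c_s + k_s, -(1j*w*c_s + k_s)],
                  [-(1j*w*c_s + k_s), -m_u*w**2 + 1j*w*c_s + k_s + k_t]])
    b = np.array([0.0, k_t])          # road displacement enters via the tire
    x_s, x_u = np.linalg.solve(A, b)
    H[i] = abs(-w**2 * x_s)

low = freqs < 5.0
print("body-bounce resonance near %.1f Hz" % freqs[low][np.argmax(H[low])])
```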


Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. V387-V401 ◽  
Author(s):  
Tiago A. Coimbra ◽  
Amélia Novais ◽  
Jörg Schleicher

The offset-continuation operation (OCO) is a seismic configuration transform designed to simulate a seismic section as if obtained with a certain source-receiver offset, using data measured with another offset. Based on this operation, we have introduced the OCO stack, a multiparameter stacking technique that transforms 2D/2.5D prestack multicoverage data into a stacked common-offset (CO) section. Like the common-midpoint and common-reflection-surface stacks, the OCO stack does not rely on an a priori velocity model but provides velocity information itself. Because OCO depends on the velocity model used in the process, the method can be combined with trial-stacking techniques for a set of models, thus allowing for the extraction of velocity information. The algorithm consists of data stacking along so-called OCO trajectories, which approximate the common-reflection-point trajectory, i.e., the position of a reflection event in the multicoverage data as a function of source-receiver offset, as determined by the medium velocity and the local event slope. These trajectories are the ray-theoretical solutions to the OCO image-wave equation, which describes the continuous transformation of a CO reflection event from one offset to another. Stacking along trial OCO trajectories for different values of average velocity and local event slope allows us to determine horizon-based optimal parameter pairs and a final stacked section at arbitrary offset. Synthetic examples demonstrate that the OCO stack works as predicted, almost completely removing random noise added to the data and successfully recovering the reflection events.
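Trial stacking of this kind can be sketched generically: for each candidate (velocity, slope) pair, sum amplitudes along the predicted trajectory and score the result with a coherence measure such as semblance. In the Python sketch below the `trajectory` callable is a placeholder standing in for the ray-theoretical OCO trajectory derived from the image-wave equation; everything here is a schematic assumption, not the authors' implementation.

```python
import numpy as np

def semblance(samples):
    """Coherence of the samples picked along one trial trajectory."""
    den = samples.size * (samples ** 2).sum()
    return (samples.sum() ** 2) / den if den > 0.0 else 0.0

def trial_stack(data, dt, offsets, t0, velocities, slopes, trajectory):
    """Scan (velocity, slope) pairs and keep the most coherent trial.

    data       : (n_time, n_offset) multicoverage data
    trajectory : callable t(offsets, t0, v, slope) -> event times (s);
                 a stand-in for the ray-theoretical OCO trajectory
    """
    best = (-1.0, None, None, 0.0)   # (semblance, v, slope, stacked amplitude)
    for v in velocities:
        for p in slopes:
            t = trajectory(offsets, t0, v, p)
            idx = np.round(t / dt).astype(int)
            ok = (idx >= 0) & (idx < data.shape[0])
            samples = data[idx[ok], np.nonzero(ok)[0]]
            s = semblance(samples)
            if samples.size and s > best[0]:
                best = (s, v, p, samples.mean())
    return best
```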


Geophysics ◽  
1972 ◽  
Vol 37 (5) ◽  
pp. 769-787 ◽  
Author(s):  
J. W. C. Sherwood ◽  
P. H. Poe

An economical computer program can stack the data from several adjoining common depth points over a wide range of both dip and normal moveout. From this we can extract a set of seismic wavelets, each possessing a determined dip and normal moveout, which represent the original seismic data in an approximate and compressed form. The seismic wavelets resulting from the processing of a complete seismic line are stored for a variety of subsequent uses, such as the following: 1) Superimpose the wavelets, or a subset of them, to form a record section analogous to a conventional common‐depth‐point stacked section. This facilitates the construction of record sections consisting dominantly of either multiple or primary reflections. Other benefits can arise from the improved signal‐to‐random‐noise ratio, the concurrent display of overlapping primary wavelets with widely different normal moveouts, and the elimination of the waveform stretching that occurs on the long offset traces with conventional normal moveout removal. 2) By displaying each picked wavelet as a short dip‐bar located at the correct time and spatial position and annotated with the estimated rms velocity, we can exhibit essentially continuous rms‐velocity data along each reflection. This information can be utilized for the estimation of interval and average velocities. For comparative purposes this velocity‐annotated dip‐bar display is normally formed on the same scale as the conventional common‐depth‐point stack section.
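The rms-velocity annotation follows directly from hyperbolic moveout: an event at zero-offset time t0 with picked normal moveout dt at offset x satisfies t(x)^2 = t0^2 + (x / v_rms)^2. A short illustration with assumed numbers:

```python
import numpy as np

def rms_velocity(t0, offset, picked_moveout):
    """v_rms implied by a picked normal moveout at one offset (hyperbolic)."""
    tx = t0 + picked_moveout
    return offset / np.sqrt(tx**2 - t0**2)

# assumed pick: 200 ms of moveout at 2000 m offset on a 1.5 s event
print("%.0f m/s" % rms_velocity(t0=1.5, offset=2000.0, picked_moveout=0.200))  # 2500
```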


Geophysics ◽  
1965 ◽  
Vol 30 (3) ◽  
pp. 348-362 ◽  
Author(s):  
William A. Schneider ◽  
E. R. Prince ◽  
Ben F. Giles

A new data‐processing technique is presented which utilizes optimum multichannel digital filtering in conjunction with common subsurface horizontal stacking for the efficient rejection of multiple reflections. The method exploits the differential normal moveout between primary and multiple reflections that results from an increase in average velocity with depth. Triple subsurface coverage is obtained in the field; the common subsurface traces are individually prefiltered with different filters and stacked. The digital filters are designed on the least‐mean‐square‐error criterion to preserve primaries (signal) in the presence of multiples (noise) of predictable normal moveout, and random noise. The method achieves wide‐band separation of primary and multiple energy with only a three‐point stack; it can work effectively with small normal moveout differences, eliminating the need for long offsets and the attendant signal degradation due to wide‐angle reflections; it does not require equal multiple moveout on the triplet of traces stacked; and, finally, the method is not sensitive to small errors in statics or predicted normal moveout. The technique is illustrated with synthetic examples selected to encompass realistic field situations, together with the parameter specification necessary for the multichannel filter design.
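A least-mean-square-error filter design reduces to Toeplitz normal equations built from the input autocorrelation and the input-desired crosscorrelation. The single-channel sketch below shows the principle only; the paper's filters are multichannel, and the prewhitening level is an assumption.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lms_filter(x, d, n_coef, prewhiten=0.01):
    """Least-squares filter h minimizing ||h * x - d||^2 (single channel).

    x, d   : input trace and desired-output trace, equal lengths
    n_coef : filter length in samples (must not exceed len(x))
    """
    zero_lag = len(x) - 1
    r = np.correlate(x, x, mode="full")[zero_lag : zero_lag + n_coef].copy()
    g = np.correlate(d, x, mode="full")[zero_lag : zero_lag + n_coef]
    r[0] *= 1.0 + prewhiten        # stabilize the Toeplitz normal equations
    return solve_toeplitz(r, g)    # solves sum_k h[k] r[l-k] = g[l]
```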


2021 ◽  
Vol 40 (11) ◽  
pp. 831-836
Author(s):  
Aina Juell Bugge ◽  
Andreas K. Evensen ◽  
Jan Erik Lie ◽  
Espen H. Nilsen

Some of the key tasks in seismic processing involve suppressing multiples and noise that interfere with primary events. Conventional multiple attenuation on prestack seismic data is time-consuming and subjective. As an alternative, we propose model-driven processing using a convolutional neural network trained on synthetically modeled training data. The crucial part of our approach is to generate appropriate training data. Here, we compute a generic data set with pairs of synthetic gathers with and without multiples. Because we generate the primaries first and then add multiples, we ensure that we have perfect target data without any multiple energy. To make the training data generic and realistic, we include elements of wave propagation physics and randomize settings such as the wavelet, frequency content, degree of random noise, and amplitude variation with offset effects for each gather pair. A fully convolutional neural network is trained on the synthetic data to learn to suppress the noise and multiples. Evaluations of the approach on benchmark data indicate that our trained network is faster than conventional multiple attenuation because it can be run efficiently on a modern GPU, and it has the potential to better preserve primary amplitudes. Multiple removal with model-driven processing is demonstrated on seismic field data, and the results are compared to conventional multiple attenuation using a commercial Radon algorithm. The model-driven approach performs well when applied to real common-depth-point gathers, and it successfully removes multiples, even where the multiples interfere with the primary signals on the near offsets.
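A minimal version of such a network and its training loop might look as follows in PyTorch. The architecture, layer sizes, and the `synthetic_pairs` iterator (assumed to yield tensor pairs of shape (batch, 1, n_time, n_offset)) are illustrative assumptions; the paper does not publish its exact configuration.

```python
import torch
import torch.nn as nn

class DemultipleNet(nn.Module):
    """Fully convolutional gather-to-gather mapping (a sketch)."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),      # predict primaries only
        )

    def forward(self, x):
        return self.net(x)

model = DemultipleNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# synthetic_pairs: assumed iterator over (gather_with_multiples, primaries_only)
for gather_with_multiples, primaries_only in synthetic_pairs:
    prediction = model(gather_with_multiples)
    loss = loss_fn(prediction, primaries_only)   # perfect targets by construction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```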


2016 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Subarsyah Subarsyah ◽  
Sahudin Sahudin

The presence of water-bottom multiples is unavoidable in marine seismic acquisition, and it reduces the signal-to-noise ratio of the data. Several attenuation methods have been developed to suppress this noise. Multiple attenuation methods fall into three groups: deconvolution methods, which identify multiples by their periodicity; filtering methods, which separate primary and multiple reflections in a particular domain (F-K, Tau-P, or Radon); and wavefield prediction methods. The F-K demultiple method, which belongs to the second category, was applied to 2010 PPPGL seismic data from the Gulf of Tomini. Attenuation of the water-bottom multiples was largely successful, although in some parts of the section the multiples remain visible with relatively smaller amplitudes. F-K demultiple is not effective in reducing multiples at short offsets, and the multiples in this zone account for the residual multiple energy in the final section. Key words: F-K demultiple, multiple, attenuation
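The near-offset limitation has a simple kinematic explanation: after NMO correction with the primary velocity, the residual moveout of a water-bottom multiple grows roughly quadratically with offset, so at short offsets the multiple is nearly flat and occupies the same f-k region as the primaries. The illustrative calculation below uses assumed velocities and times, not values from this survey.

```python
import numpy as np

t0 = 2.0          # s, zero-offset time of the water-bottom multiple
v_mult = 1500.0   # m/s, the multiple moves out with water velocity
v_nmo = 2200.0    # m/s, primary stacking velocity used for NMO

offsets = np.array([150.0, 600.0, 1200.0, 2400.0])   # m
# residual moveout left after NMO with the primary velocity
residual = (np.sqrt(t0**2 + (offsets / v_mult) ** 2)
            - np.sqrt(t0**2 + (offsets / v_nmo) ** 2))
for x, r in zip(offsets, residual):
    print(f"offset {x:6.0f} m : residual moveout {r * 1000:6.1f} ms")
```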

