DIRECT SEISMIC DETECTION OF HYDROCARBONS

1975 ◽  
Vol 15 (1) ◽  
pp. 81
Author(s):  
W. Pailthorpe ◽  
J. Wardell

During the past two years, much publicity has been given to the direct indication of hydrocarbon accumulations by "Bright Spot" reflections: the very high amplitude reflections from a shale to gas-sand or gas-sand to water-sand interface. It was soon generally realised, however, that this phenomenon was of limited occurrence, being mostly restricted to young, shallow, sand and shale sequences such as the United States Gulf Coast. A more widely detectable indication of hydrocarbons was found to be the reflection from a fluid interface, such as the gas to water interface, within the reservoir. Because it arises from a fluid interface, this reflection is characteristically flat and is often called the "Flat Spot".

Model studies show that flat spots have a wide range of amplitudes, from very high for shallow gas to water contacts to very low for deep oil to water contacts. However, many of the weaker flat spots on good recent marine seismic data have an adequate signal to random noise ratio for detection; the problem is to separate and distinguish them from other, stronger reflections close by. In this respect the unique flatness of the fluid contact reflection can be exploited by dip discriminant processes, such as velocity filtering, to separate it from the generally dipping reflectors at its boundaries. A limiting factor in the detection of the deeper flat spots is the frequency bandwidth of the seismic data. Since the separation between the flat spot reflection and the upper and lower boundary reflections of the reservoir is often small, relatively high frequency data are needed to resolve these separate reflections. Correct display of the seismic data can be critical to flat spot detection, and some degree of vertical exaggeration of the seismic section is often required to increase apparent dips and thus make the flat spots more noticeable.

The flat spot is generally a smaller target than the structural features that conventional seismic surveys are designed to find and map, so a denser than normal grid of seismic lines is required to map most flat spots adequately.
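As a rough illustration of the dip discrimination described above, the sketch below applies a simple f-k (frequency-wavenumber) fan filter to a 2D section: flat events concentrate near zero wavenumber, so rejecting energy with larger apparent dip suppresses dipping reflectors while passing flat-spot energy. The array shape, sampling intervals, and dip cutoff are assumed values for illustration, not parameters from the paper.

```python
import numpy as np

def fk_dip_filter(section, dt, dx, max_dip_ms_per_trace=1.0):
    """Pass near-flat events in a (time x trace) section using an f-k fan filter.

    Flat reflections map close to the frequency axis (k ~ 0) in the f-k domain,
    so we keep only energy whose apparent dip |k/f| is below a small cutoff.
    """
    nt, nx = section.shape
    spec = np.fft.fft2(section)                      # 2D spectrum over (f, k)
    f = np.fft.fftfreq(nt, d=dt)                     # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, d=dx)                     # spatial wavenumbers (1/m)

    # Apparent dip (slowness magnitude) for each (f, k) pair; keep small dips only.
    ff, kk = np.meshgrid(f, k, indexing="ij")
    dip = np.abs(kk) / np.maximum(np.abs(ff), 1e-6)  # |k/f| in s/m
    cutoff = (max_dip_ms_per_trace * 1e-3) / dx      # convert ms/trace to s/m
    mask = dip <= cutoff

    return np.real(np.fft.ifft2(spec * mask))

# Hypothetical usage: 1000 samples at 4 ms, 120 traces at 25 m spacing.
rng = np.random.default_rng(0)
section = rng.normal(size=(1000, 120))
flat_enhanced = fk_dip_filter(section, dt=0.004, dx=25.0)
```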

Geophysics ◽  
1981 ◽  
Vol 46 (2) ◽  
pp. 106-120 ◽  
Author(s):  
Frank J. Feagin

Relatively little attention has been paid to the final output of today’s sophisticated seismic data processing procedures—the seismic section display. We first examine significant factors relating to those displays and then describe a series of experiments that, by varying those factors, let us specify displays that maximize interpreters’ abilities to detect reflections buried in random noise.

The study.—From psychology of perception and image enhancement literature and from our own research, these conclusions were reached: (1) Seismic reflection perceptibility is best for time scales in the neighborhood of 1.875 inches/sec because, for common seismic frequencies, the eye‐brain spatial frequency response is a maximum near that value. (2) An optimized gray scale for variable density sections is nonlinearly related to digital data values on a plot tape. The nonlinearity is composed of two parts: (a) that which compensates for nonlinearity inherent in human perception, and (b) the nonlinearity required to produce histogram equalization, a modern image enhancement technique.

The experiments.—The experiments involved 37 synthetic seismic sections composed of simple reflections embedded in filtered random noise. Reflection signal‐to‐noise (S/N) ratio was varied over a wide range, as were other display parameters, such as scale, plot mode, photographic density contrast, gray scale, and reflection dip angle. Twenty‐nine interpreters took part in the experiments. The sections were presented, one at a time, to each interpreter; the interpreter then proceeded to mark all recognizable events. Marked events were checked against known data and errors recorded. Detectability thresholds in terms of S/N ratios were measured as a function of the various display parameters. Some of the more important conclusions are: (1) With our usual types of displays, interpreters can pick reflections about 6 or 7 dB below noise with a 50 percent probability. (2) Perceptibility varies from one person to another by 2.5 to 3.0 dB. (3) For displays with a 3.75 inch/sec scale and low contrast photographic paper (a common situation), variable density (VD) and variable area‐wiggly trace (VA‐WT) sections are about equally effective from a perceptibility standpoint. (4) However, for displays with small scales and for displays with higher contrast, variable density is significantly superior. A VD section with all parameters optimized shows about 8 dB perceptibility advantage over an optimized VA‐WT section. (5) Detectability drops as dip angle increases. VD is slightly superior to VA‐WT, even at large scales, for steep dip angles. (6) An interpreter gains typically about 2 dB by foreshortening, although there is a wide variation from one individual to another.
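A minimal sketch of the kind of nonlinear gray-scale mapping the abstract describes: histogram equalization of plot amplitudes followed by a simple gamma-like perceptual correction. The gamma value and number of gray levels are placeholder assumptions, not the values derived in the paper.

```python
import numpy as np

def optimized_gray_scale(amplitudes, n_levels=256, perceptual_gamma=0.45):
    """Map seismic plot amplitudes to gray levels via histogram equalization,
    then apply an assumed perceptual (gamma-like) correction."""
    flat = amplitudes.ravel()

    # Empirical CDF of the data values -> roughly uniform gray-level usage.
    hist, edges = np.histogram(flat, bins=n_levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    equalized = np.interp(flat, edges[:-1], cdf)

    # Crude compensation for the nonlinearity of human brightness perception.
    gray = (equalized ** perceptual_gamma) * (n_levels - 1)
    return gray.reshape(amplitudes.shape).astype(np.uint8)
```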


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces using their envelopes and instantaneous phases obtained by the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope, while preserving the phase information. Several tests are performed in order to investigate the behavior of the present method for resolution improvement and noise suppression. Applications on both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets; hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. The bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance, allowing easier interpretation. We recommend applying this simple signal processing for signal enhancement prior to interpretation, especially for single-channel and low-fold seismic data.
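A minimal sketch in the spirit of this envelope/phase processing: compute the analytic signal with the Hilbert transform, attenuate the low-frequency part of the envelope, and rebuild the trace from the modified envelope and the unchanged instantaneous phase. The high-pass cutoff, filter order, and recombination step are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_phase_enhance(trace, dt, cutoff_hz=10.0):
    """Attenuate low-frequency envelope components while preserving phase."""
    analytic = hilbert(trace)                 # analytic signal via Hilbert transform
    envelope = np.abs(analytic)               # instantaneous amplitude
    phase = np.angle(analytic)                # instantaneous phase

    # Remove the slowly varying (low-frequency) part of the envelope.
    b, a = butter(4, cutoff_hz * 2 * dt, btype="high")
    sharpened = filtfilt(b, a, envelope)
    sharpened = np.clip(sharpened, 0.0, None) # envelope must stay non-negative

    return sharpened * np.cos(phase)          # recombine with preserved phase

# Hypothetical usage on a noisy synthetic trace sampled at 4 ms.
rng = np.random.default_rng(1)
trace = rng.normal(size=2000)
enhanced = envelope_phase_enhance(trace, dt=0.004)
```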


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. V137-V148 ◽  
Author(s):  
Pierre Turquais ◽  
Endrias G. Asgedom ◽  
Walter Söllner

We have addressed the seismic data denoising problem, in which the noise is random and has an unknown spatiotemporally varying variance. In seismic data processing, random noise is often attenuated using transform-based methods. The success of these methods in denoising depends on the ability of the transform to efficiently describe the signal features in the data. Fixed transforms (e.g., wavelets, curvelets) do not adapt to the data and might fail to efficiently describe complex morphologies in the seismic data. Alternatively, dictionary learning methods adapt to the local morphology of the data and provide state-of-the-art denoising results. However, conventional denoising by dictionary learning requires a priori information on the noise variance, and it encounters difficulties when applied to seismic data in which the noise variance varies in space or time. We have developed a coherence-constrained dictionary learning (CDL) method for denoising that does not require any a priori information related to the signal or noise. To denoise a given window of a seismic section using CDL, overlapping small 2D patches are extracted and a dictionary of patch-sized signals is trained to learn the elementary features embedded in the seismic signal. For each patch, using the learned dictionary, a sparse optimization problem is solved, and a sparse approximation of the patch is computed to attenuate the random noise. Unlike conventional dictionary learning, the sparsity of the approximation is constrained based on coherence such that it does not need a priori noise variance or signal sparsity information and is still optimal for filtering out Gaussian random noise. The denoising performance of the CDL method is validated using synthetic and field data examples, and it is compared with K-SVD and FX-Decon denoising. We found that CDL gives better denoising results than K-SVD and FX-Decon when the noise variance varies in space or time.
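A simplified sketch of patch-based dictionary-learning denoising in the spirit of CDL: learn patch-sized atoms from the data, then sparsely approximate each patch with a matching-pursuit loop that stops when the residual is no longer coherent with any atom. It uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for the paper's training stage, and the coherence threshold, patch size, and dictionary size are illustrative assumptions rather than the published criterion.

```python
import numpy as np
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)
from sklearn.decomposition import MiniBatchDictionaryLearning

def cdl_style_denoise(window, patch_size=(8, 8), n_atoms=64, coherence_stop=0.1):
    """Denoise a 2D seismic window with a learned patch dictionary and a
    coherence-based stopping rule for the sparse approximation."""
    patches = extract_patches_2d(window, patch_size)
    X = patches.reshape(patches.shape[0], -1)
    means = X.mean(axis=1, keepdims=True)
    X0 = X - means

    # Learn patch-sized elementary features (atoms) from the data itself.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256,
                                       random_state=0).fit(X0)
    D = dico.components_
    D = D / np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms

    denoised = np.empty_like(X0)
    for i, x in enumerate(X0):
        residual, approx, support = x.copy(), np.zeros_like(x), []
        while True:
            corr = D @ residual
            j = int(np.argmax(np.abs(corr)))
            # Stop when no atom is coherent with what remains (treated as noise).
            if (np.abs(corr[j]) < coherence_stop * np.linalg.norm(residual)
                    or len(support) >= n_atoms):
                break
            support.append(j)
            Ds = D[support].T                           # atoms in the support
            coef, *_ = np.linalg.lstsq(Ds, x, rcond=None)
            approx = Ds @ coef
            residual = x - approx
        denoised[i] = approx

    return reconstruct_from_patches_2d(
        (denoised + means).reshape(patches.shape), window.shape)
```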


Geophysics ◽  
1972 ◽  
Vol 37 (5) ◽  
pp. 769-787 ◽  
Author(s):  
J. W. C. Sherwood ◽  
P. H. Poe

An economic computer program can stack the data from several adjoining common depth points over a wide range of both dip and normal moveout. We can extract from this a set of seismic wavelets, each possessing a determined dip and normal moveout, which represent the original seismic data in an approximate and compressed form. The seismic wavelets resulting from the processing of a complete seismic line are stored for a variety of subsequent uses, such as the following:

1) Superimpose the wavelets, or a subset of them, to form a record section analogous to a conventional common-depth-point stacked section. This facilitates the construction of record sections consisting dominantly of either multiple or primary reflections. Other benefits can arise from improved signal-to-random-noise ratio, the concurrent display of overlapping primary wavelets with widely different normal moveouts, and the elimination of the waveform stretching that occurs on the long offset traces with conventional normal moveout removal.

2) By displaying each picked wavelet as a short dip-bar located at the correct time and spatial position and annotated with the estimated rms velocity, we can exhibit essentially continuous rms-velocity data along each reflection. This information can be utilized for the estimation of interval and average velocities. For comparative purposes this velocity-annotated dip-bar display is normally formed on the same scale as the conventional common-depth-point stack section.
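A toy sketch of the kind of dip-and-moveout scan the abstract describes: for one zero-offset time, trial stacking velocities and dips are applied to traces from adjoining common depth points, and the pair giving the largest stacked amplitude is kept as the wavelet's parameters. The moveout model, energy measure, and function names are illustrative assumptions, not the 1972 program.

```python
import numpy as np

def scan_dip_and_moveout(traces, offsets, cdp_positions, t0, dt, velocities, dips):
    """Return (velocity, dip, stacked amplitude) maximizing the stack at time t0.

    traces: (n_traces, n_samples) from several adjoining CDPs;
    offsets and cdp_positions are per-trace; dips are in seconds per metre.
    """
    best = (None, None, -np.inf)
    for v in velocities:
        for p in dips:
            stack = 0.0
            for trc, x, y in zip(traces, offsets, cdp_positions):
                # Hyperbolic normal moveout plus a linear dip term across CDPs.
                t = np.sqrt(t0**2 + (x / v)**2) + p * y
                i = int(round(t / dt))
                if 0 <= i < trc.size:
                    stack += trc[i]
            if abs(stack) > best[2]:
                best = (v, p, abs(stack))
    return best
```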


1974 ◽  
Vol 14 (1) ◽  
pp. 107
Author(s):  
John Wardell

Since the introduction of the common depth point method of seismic reflection shooting, we have seen a continued increase in the multiplicity of subsurface coverage, to the point where nowadays a large proportion of offshore shooting uses a 48-fold, 48-trace configuration. Of the many benefits obtained from this multiplicity of coverage, the attenuation of multiple reflections during the common depth point stacking process is one of the most important.

Examination of theoretical response curves for multiple attenuation in common depth point stacking shows that although increased multiplicity does give improved multiple attenuation, this improvement occurs at higher and higher frequencies and residual moveouts (of the multiples) as the multiplicity continues to increase. For multiplicities greater than 12, the improvement is at relatively high frequencies and residual moveouts, while there is no significant improvement for the lower frequencies of multiples with smaller residual moveouts, which unfortunately are those most likely to remain visible after the stacking process.

The simple process of zeroing, or muting, certain selected traces (mostly the shorter offset traces) before stacking can give an average 6 to 9 decibels improvement over a wide range of the low frequency and residual moveout part of the stack response, with 9-15 decibels improvement over parts of this range. The cost of this improvement is an increase in random noise level of 1-2 decibels. With digital processing methods, it is easy to zero the necessary traces over selected portions of the seismic section if so desired.

The process does not require a detailed knowledge of the multiple residual moveouts, but can be used on a routine basis in areas where strong multiples are a problem and a high stacking multiplicity is being used.
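A small sketch of the stack-response calculation underlying these claims: the response of an N-fold stack to an event with residual moveout is the magnitude of the average of unit phasors delayed by each trace's residual moveout, and muting the short-offset traces changes that response at low frequencies. The fold, moveout model, and mute offset below are assumed values for illustration.

```python
import numpy as np

def stack_response_db(residual_moveouts, freqs, live=None):
    """Amplitude response (dB) of a CDP stack to an event with the given
    residual moveouts (seconds) on each trace; `live` masks traces kept after muting."""
    dt = np.asarray(residual_moveouts, dtype=float)
    if live is not None:
        dt = dt[np.asarray(live)]
    n = dt.size
    # Average of unit phasors delayed by each trace's residual moveout.
    resp = np.abs(np.exp(2j * np.pi * np.outer(freqs, dt)).sum(axis=1)) / n
    return 20 * np.log10(np.maximum(resp, 1e-12))

# Hypothetical 24-fold gather: residual moveout of a multiple grows with offset.
offsets = np.linspace(100, 2400, 24)
dt_mult = 0.080 * (offsets / offsets.max())**2        # up to 80 ms at far offset
freqs = np.linspace(5, 60, 200)

full_stack = stack_response_db(dt_mult, freqs)
inner_muted = stack_response_db(dt_mult, freqs, live=offsets > 800)
```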


Geophysics ◽  
1995 ◽  
Vol 60 (5) ◽  
pp. 1398-1408 ◽  
Author(s):  
Christopher P. Ross ◽  
Daniel L. Kinman

Amplitude variation with offset (AVO) attribute sections, such as the product of the normal-incidence trace (A) and the gradient trace (B), have been used extensively in bright-spot AVO analysis and interpretation. However, while these sections have often worked well with low acoustic impedance bright-spot responses, they are not reliable indicators of nonbright-spot seismic anomalies. Analyzing nonbright-spot seismic data with common AVO attribute sections will: (1) not detect the gas-charged reservoir, because of near-zero acoustic impedance contrast between the sands and encasing shales, or (2) yield an incorrect (negative) AVO product if the normal incidence and gradient values are opposite in sign. We divide nonbright-spot AVO offset responses into two subcategories: those with phase reversals and those without. An AVO analysis procedure for these anomalies is presented through two examples. The procedure exploits the nature of the prestack response, yielding a more definitive AVO attribute section, and the technique adapts to both subcategories of nonbright-spot AVO responses. This technique identifies the presence of gas-charged pore fluids within the reservoir when compared to a conventionally processed, relative amplitude seismic section with characteristically low amplitude responses for near-zero acoustic impedance contrast sands.
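For context on the A and B attributes mentioned above, the sketch below fits the common two-term AVO relation R(theta) = A + B sin^2(theta) to an angle gather by least squares and forms the A*B product section. The gather layout and this simple product attribute are illustrative; they are not the authors' adaptive procedure for nonbright-spot responses.

```python
import numpy as np

def avo_intercept_gradient(amplitudes, angles_deg):
    """Per-sample least-squares fit of R(theta) ~ A + B*sin^2(theta).

    amplitudes: (n_samples, n_angles) angle gather; returns A, B, and A*B.
    """
    s2 = np.sin(np.radians(angles_deg)) ** 2        # (n_angles,)
    G = np.column_stack([np.ones_like(s2), s2])     # design matrix [1, sin^2]
    coeffs, *_ = np.linalg.lstsq(G, amplitudes.T, rcond=None)
    A, B = coeffs                                   # each of shape (n_samples,)
    return A, B, A * B

# Hypothetical usage: 500 time samples recorded at 5-35 degree incidence angles.
angles = np.arange(5, 36, 5)
gather = np.random.default_rng(2).normal(size=(500, angles.size))
A, B, product = avo_intercept_gradient(gather, angles)
```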


Author(s):  
Gerald B. Feldewerth

In recent years an increasing emphasis has been placed on the study of high temperature intermetallic compounds for possible aerospace applications. One group of interest is the B2 aluminides. This group of intermetallics has a very high melting temperature, good high-temperature properties, and excellent specific strength. These qualities make them candidates for applications such as turbine engines. The B2 aluminides exist over a wide range of compositions and also have a large solubility for third-element substitutional additions, which may allow alloying additions to overcome their major drawback, their brittle nature.

One B2 aluminide currently being studied is cobalt aluminide. Optical microscopy of CoAl alloys produced at the University of Missouri-Rolla showed a dramatic decrease in the grain size, which affects the yield strength and flow stress of long-range-ordered alloys, and a change in the grain shape with the addition of 0.5% boron.


2004 ◽  
pp. 21-29
Author(s):  
G.V. Pyrog

In domestic scientific and public opinion, interest in religion as a new worldview paradigm is very high. Today's attention to the Christian religion in our society is connected, in our opinion, with the specificity of its value system, which distinguishes it from other forms of consciousness: the idea of God, the absolute, the eternity of moral norms. As a result, its historical forms are not accurately characterized and carry little weight in the mass consciousness. Modern religious beliefs do not always arise as a result of the direct influence of church preaching. The emerging religious values are absorbed into a wide range of philosophical, artistic, and ethical ideas, acting as compensation for what is generally defined as spirituality. At the same time, the appeal to Christian values has become very popular.


Author(s):  
Tim Rutherford-Johnson

By the start of the 21st century many of the foundations of postwar culture had disappeared: Europe had been rebuilt and, as the EU, had become one of the world’s largest economies; the United States’ claim to global dominance was threatened; and the postwar social democratic consensus was being replaced by market-led neoliberalism. Most importantly of all, the Cold War was over, and the World Wide Web had been born. Music After The Fall considers contemporary musical composition against this changed backdrop, placing it in the context of globalization, digitization, and new media. Drawing on theories from the other arts, in particular art and architecture, it expands the definition of Western art music to include forms of composition, experimental music, sound art, and crossover work from across the spectrum, inside and beyond the concert hall. Each chapter critically considers a wide range of composers, performers, works, and institutions to build up a broad and rich picture of the new music ecosystem, from North American string quartets to Lebanese improvisers, from South American electroacoustic studios to pianos in the Australian outback. A new approach to the study of contemporary music is developed that relies less on taxonomies of style and technique, and more on the comparison of different responses to common themes, among them permission, fluidity, excess, and loss.


Author(s):  
Kathryn A. Sloan

Popular culture has long conflated Mexico with the macabre. Some persuasive intellectuals argue that Mexicans have a special relationship with death, formed in the crucible of their hybrid Aztec-European heritage. Death is their intimate friend; death is mocked and accepted with irony and fatalistic abandon. The commonplace nature of death desensitizes Mexicans to suffering. Death, simply put, defines Mexico. There must have been historical actors who looked away from human misery, but to essentialize a diverse group of people as possessing a unique death cult delights those who want to see the exotic in Mexico or distinguish that society from its peers. Examining tragic and untimely death—namely self-annihilation—reveals a counter narrative. What could be more chilling than suicide, especially the violent death of the young? What desperation or madness pushed the victim to raise the gun to the temple or slip the noose around the neck? A close examination of a wide range of twentieth-century historical documents proves that Mexicans did not accept death with a cavalier chuckle nor develop a unique death cult, for that matter. Quite the reverse, Mexicans behaved just as their contemporaries did in Austria, France, England, and the United States. They devoted scientific inquiry to the malady and mourned the loss of each life to suicide.

