Seismic data display and reflection perceptibility

Geophysics ◽  
1981 ◽  
Vol 46 (2) ◽  
pp. 106-120 ◽  
Author(s):  
Frank J. Feagin

Relatively little attention has been paid to the final output of today's sophisticated seismic data processing procedures—the seismic section display. We first examine significant factors relating to those displays and then describe a series of experiments that, by varying those factors, let us specify displays that maximize interpreters' abilities to detect reflections buried in random noise.

The study.—From the psychology of perception and image enhancement literature and from our own research, these conclusions were reached: (1) Seismic reflection perceptibility is best for time scales in the neighborhood of 1.875 inches/sec because, for common seismic frequencies, the eye-brain spatial frequency response is a maximum near that value. (2) An optimized gray scale for variable density sections is nonlinearly related to digital data values on a plot tape. The nonlinearity is composed of two parts: (a) the part that compensates for the nonlinearity inherent in human perception, and (b) the nonlinearity required to produce histogram equalization, a modern image enhancement technique.

The experiments.—The experiments involved 37 synthetic seismic sections composed of simple reflections embedded in filtered random noise. Reflection signal-to-noise (S/N) ratio was varied over a wide range, as were other display parameters, such as scale, plot mode, photographic density contrast, gray scale, and reflection dip angle. Twenty-nine interpreters took part in the experiments. The sections were presented, one at a time, to each interpreter; the interpreter then proceeded to mark all recognizable events. Marked events were checked against known data and errors recorded. Detectability thresholds in terms of S/N ratios were measured as a function of the various display parameters. Some of the more important conclusions are: (1) With our usual types of displays, interpreters can pick reflections about 6 or 7 dB below noise with a 50 percent probability. (2) Perceptibility varies from one person to another by 2.5 to 3.0 dB. (3) For displays with a 3.75 inches/sec scale and low-contrast photographic paper (a common situation), variable density (VD) and variable area-wiggly trace (VA-WT) sections are about equally effective from a perceptibility standpoint. (4) However, for displays with small scales and for displays with higher contrast, variable density is significantly superior. A VD section with all parameters optimized shows about an 8 dB perceptibility advantage over an optimized VA-WT section. (5) Detectability drops as dip angle increases. VD is slightly superior to VA-WT, even at large scales, for steep dip angles. (6) An interpreter typically gains about 2 dB by foreshortening, although there is a wide variation from one individual to another.
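
As a rough illustration of the histogram-equalization idea in conclusion (2), the sketch below (Python/NumPy; the function name and the gamma stand-in for perceptual compensation are our own assumptions, not the paper's procedure) builds a gray-scale lookup table so that all gray levels are used about equally often:

    import numpy as np

    def equalized_gray_scale(samples, levels=256, gamma=2.2):
        # Equalize the amplitude histogram, then apply a crude perceptual (gamma) correction.
        hist, edges = np.histogram(samples, bins=levels)
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]
        lut = ((cdf ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)
        idx = np.clip(np.digitize(samples, edges[:-1]) - 1, 0, levels - 1)
        return lut[idx]

    # Example: a synthetic "section" of random noise mapped to gray levels.
    rng = np.random.default_rng(0)
    gray = equalized_gray_scale(rng.normal(size=(500, 120)))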

1975 ◽  
Vol 15 (1) ◽  
pp. 81
Author(s):  
W. Pailthorpe ◽  
J. Wardell

During the past two years, much publicity has been given to the direct indication of hydrocarbon accumulations by "Bright Spot" reflections: the very high amplitude reflections from a shale to gas-sand or gas-sand to water-sand interface. It was soon generally realised, however, that this phenomenon was of limited occurrence, being mostly restricted to young, shallow, sand and shale sequences such as the United States Gulf Coast. A more widely detectable indication of hydrocarbons was found to be the reflection from a fluid interface, such as the gas to water interface, within the reservoir. Because it arises from a fluid interface, this reflection is characterised by its flatness and is often called the "Flat Spot".

Model studies show that the flat spots have a wide range of amplitudes, from very high for shallow gas to water contacts, to very low for deep oil to water contacts. However, many of the weaker flat spots on good recent marine seismic data have an adequate signal to random noise ratio for detection, and the problem is to separate and distinguish them from the other, stronger reflections close by. In this respect the unique flatness of the fluid contact reflection can be exploited by dip-discriminant processes, such as velocity filtering, to separate it from the generally dipping reflectors at its boundaries. A limiting factor in the detection of the deeper flat spots is the frequency bandwidth of the seismic data. Since the separation between the flat spot reflection and the upper and lower boundary reflections of the reservoir is often small, relatively high frequency data are needed to resolve these separate reflections. Correct display of the seismic data can be critical to flat spot detection, and some degree of vertical exaggeration of the seismic section is often required to increase apparent dips and thus make the flat spots more noticeable.

The flat spot is generally a smaller target than the structural features that conventional seismic surveys are designed to find and map, and so a denser than normal grid of seismic lines is required to map most flat spots adequately.
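
The dip-discriminant idea can be sketched with a simple f-k mask that passes only near-flat energy; this is our own minimal illustration (function name and threshold are assumptions), not the processing used in the paper:

    import numpy as np

    def pass_flat_events(section, dt, dx, max_dip=1.0e-4):
        # section: (time samples, traces); max_dip is the largest apparent dip kept, in s/m.
        nt, nx = section.shape
        F = np.fft.fft2(section)
        f = np.fft.fftfreq(nt, dt)[:, None]   # temporal frequency, Hz
        k = np.fft.fftfreq(nx, dx)[None, :]   # spatial frequency, 1/m
        dip = np.abs(k) / np.maximum(np.abs(f), 1.0e-9)
        return np.real(np.fft.ifft2(F * (dip <= max_dip)))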


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. V137-V148 ◽  
Author(s):  
Pierre Turquais ◽  
Endrias G. Asgedom ◽  
Walter Söllner

We have addressed the seismic data denoising problem, in which the noise is random and has an unknown spatiotemporally varying variance. In seismic data processing, random noise is often attenuated using transform-based methods. The success of these methods in denoising depends on the ability of the transform to efficiently describe the signal features in the data. Fixed transforms (e.g., wavelets, curvelets) do not adapt to the data and might fail to efficiently describe complex morphologies in the seismic data. Alternatively, dictionary learning methods adapt to the local morphology of the data and provide state-of-the-art denoising results. However, conventional denoising by dictionary learning requires a priori information on the noise variance, and it encounters difficulties when applied to seismic data in which the noise variance varies in space or time. We have developed a coherence-constrained dictionary learning (CDL) method for denoising that does not require any a priori information related to the signal or noise. To denoise a given window of a seismic section using CDL, overlapping small 2D patches are extracted and a dictionary of patch-sized signals is trained to learn the elementary features embedded in the seismic signal. For each patch, using the learned dictionary, a sparse optimization problem is solved, and a sparse approximation of the patch is computed to attenuate the random noise. Unlike conventional dictionary learning, the sparsity of the approximation is constrained based on coherence, so that no a priori noise variance or signal sparsity information is needed while the result remains optimal for filtering out Gaussian random noise. The denoising performance of the CDL method is validated using synthetic and field data examples and compared with K-SVD and FX-Decon denoising. We found that CDL gives better denoising results than K-SVD and FX-Decon when the noise variance varies in space or time.
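
For orientation, a generic patch-based dictionary-learning denoiser can be sketched with scikit-learn as below; note that this uses a fixed number of nonzero coefficients per patch rather than the coherence-constrained stopping rule that is the paper's contribution, and all names and parameters are our own:

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

    def dictionary_denoise(window, patch_size=(8, 8), n_atoms=64, n_nonzero=3):
        patches = extract_patches_2d(window, patch_size)
        X = patches.reshape(len(patches), -1)
        means = X.mean(axis=1, keepdims=True)
        X = X - means                                   # remove each patch's mean before learning
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=n_nonzero)
        code = dico.fit(X).transform(X)                 # learn atoms, then sparse-code each patch
        denoised = (code @ dico.components_) + means
        return reconstruct_from_patches_2d(denoised.reshape(patches.shape), window.shape)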


Geophysics ◽  
1972 ◽  
Vol 37 (5) ◽  
pp. 769-787 ◽  
Author(s):  
J. W. C. Sherwood ◽  
P. H. Poe

An economic computer program can stack the data from several adjoining common depth points over a wide range of both dip and normal moveout. We can extract from this a set of seismic wavelets, each possessing a determined dip and normal moveout, which represent the original seismic data in an approximate and compressed form. The seismic wavelets resulting from the processing of a complete seismic line are stored for a variety of subsequent uses, such as the following: 1) Superimpose the wavelets, or a subset of them, to form a record section analogous to a conventional common‐depth‐point stacked section. This facilitates the construction of record sections consisting dominantly of either multiple or primary reflections. Other benefits can arise from improved signal‐to‐random‐noise ratio, the concurrent display of overlapping primary wavelets with widely different normal moveouts, and the elimination of the waveform stretching that occurs on the long offset traces with conventional normal moveout removal. 2) By displaying each picked wavelet as a short dip‐bar located at the correct time and spatial position and annotated with the estimated rms velocity, we can exhibit essentially continuous rms‐velocity data along each reflection. This information can be utilized for the estimation of interval and average velocities. For comparative purposes this velocity‐annotated dip‐bar display is normally formed on the same scale as the conventional common‐depth‐point stack section.
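
As a toy sketch of the scanning idea for a single CDP gather (the program described above additionally scans dip across adjoining depth points and stores the picked wavelets, which is not reproduced here; names and parameters are ours):

    import numpy as np

    def nmo_scan(gather, offsets, dt, t0, velocities):
        # gather: (time samples, offsets); returns one stacked amplitude per trial velocity.
        nt, nx = gather.shape
        out = []
        for v in velocities:
            t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)             # trial hyperbolic moveout
            idx = np.clip(np.round(t / dt).astype(int), 0, nt - 1)
            out.append(gather[idx, np.arange(nx)].mean())         # stack along the trial curve
        return np.array(out)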


1974 ◽  
Vol 14 (1) ◽  
pp. 107
Author(s):  
John Wardell

Since the introduction of the common depth point method of seismic reflection shooting, we have seen a continued increase in the multiplicity of subsurface coverage, to the point where nowadays a large proportion of offshore shooting uses a 48-fold, 48-trace configuration. Of the many benefits obtained from this multiplicity of coverage, the attenuation of multiple reflections during the common depth point stacking process is one of the most important.

Examination of theoretical response curves for multiple attenuation in common depth point stacking shows that although increased multiplicity does give improved multiple attenuation, this improvement occurs at higher and higher frequencies and residual moveouts (of the multiples) as the multiplicity continues to increase. For multiplicities greater than 12, the improvement is at relatively high frequencies and residual moveouts, while there is no significant improvement for the lower frequencies of multiples with smaller residual moveouts, which unfortunately are those most likely to remain visible after the stacking process.

The simple process of zeroing, or muting, certain selected traces (mostly the shorter offset traces) before stacking can give an average 6 to 9 decibels of improvement over a wide range of the low-frequency, low residual-moveout part of the stack response, with 9-15 decibels of improvement over parts of this range. The cost of this improvement is an increase in random noise level of 1-2 decibels. With digital processing methods, it is easy to zero the necessary traces over selected portions of the seismic section if so desired.

The process does not require a detailed knowledge of the multiple residual moveouts, but can be used on a routine basis in areas where strong multiples are a problem and a high stacking multiplicity is being used.
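
The effect of muting can be checked numerically from the ideal stack-array response R(f) = |(1/N) Σ exp(2πif Δt_n)|, where Δt_n is the residual moveout of the multiple on trace n. The sketch below is our own simplified illustration with made-up moveouts, not the paper's response curves:

    import numpy as np

    def stack_response(freq, residual_moveouts, live=None):
        rmo = np.asarray(residual_moveouts, dtype=float)
        if live is not None:
            rmo = rmo[live]                               # keep only the un-muted traces
        return np.abs(np.exp(2j * np.pi * freq * rmo).mean())

    # 48-fold gather: residual moveout grows roughly with offset squared; mute the 16 nearest traces.
    moveouts = 0.080 * (np.arange(48) / 47.0) ** 2        # 0 to 80 ms of residual moveout
    full = stack_response(20.0, moveouts)
    muted = stack_response(20.0, moveouts, live=np.arange(48) >= 16)
    print(20 * np.log10(full / muted))                    # extra multiple rejection from the mute, in dB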


2000 ◽  
Vol 179 ◽  
pp. 403-406
Author(s):  
M. Karovska ◽  
B. Wood ◽  
J. Chen ◽  
J. Cook ◽  
R. Howard

We applied advanced image enhancement techniques to explore in detail the characteristics of the small-scale structures and/or the low contrast structures in several Coronal Mass Ejections (CMEs) observed by SOHO. We highlight here the results from our studies of the morphology and dynamical evolution of CME structures in the solar corona using two instruments on board SOHO: LASCO and EIT.


2021 ◽  
Vol 9 (2) ◽  
pp. 225
Author(s):  
Farong Gao ◽  
Kai Wang ◽  
Zhangyi Yang ◽  
Yejian Wang ◽  
Qizhong Zhang

In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve the low contrast and color distortion of underwater images. First, the red channel of the original image is compensated, and the compensated image is white balanced. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the local contrast-corrected image is fused with the sharpened image by the multi-scale fusion method. The results show that the proposed method can be applied to degraded underwater images from different environments without resorting to an image formation model. It can effectively correct the color distortion, low contrast, and indistinct detail of underwater images.
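
A minimal sketch of the first two steps (red-channel compensation followed by a gray-world white balance); the compensation formula follows a common form from the underwater-enhancement literature and is our assumption rather than the authors' exact expression. Here `img` is a float RGB image in [0, 1]:

    import numpy as np

    def compensate_and_white_balance(img, alpha=1.0):
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g   # boost the attenuated red channel
        out = np.stack([r_comp, g, b], axis=-1)
        gains = out.mean() / out.reshape(-1, 3).mean(axis=0)         # gray-world per-channel gains
        return np.clip(out * gains, 0.0, 1.0)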


2021 ◽  
Vol 11 (11) ◽  
pp. 5055
Author(s):  
Hong Liang ◽  
Ankang Yu ◽  
Mingwen Shao ◽  
Yuru Tian

Because of their low signal-to-noise ratio and low contrast, low-light images suffer from color distortion, low visibility, and noise, which degrade target detection accuracy or cause targets to be missed altogether. However, recalibrating datasets for this type of image raises costs or reduces model robustness. To address this problem, we propose a low-light image enhancement model based on deep learning. Feature extraction is guided by an illumination map and a noise map, and the neural network is trained to predict the local affine model coefficients in bilateral space. With these methods, our network can effectively denoise and enhance images. We have conducted extensive experiments on the LOL dataset, and the results show that the model surpasses traditional image enhancement algorithms in both image quality and speed.
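
To make the last step concrete, the toy snippet below applies locally predicted affine color transforms to the input image; in the actual model the per-pixel 3x4 coefficients would come from slicing the learned bilateral grid, whereas here they are random placeholders used only to show the shapes involved:

    import numpy as np

    h, w = 64, 64
    img = np.random.rand(h, w, 3)                  # low-light input, RGB in [0, 1]
    coeffs = np.random.randn(h, w, 3, 4) * 0.1     # per-pixel 3x4 affine models (stand-in for the network output)
    augmented = np.concatenate([img, np.ones((h, w, 1))], axis=-1)   # homogeneous RGB
    enhanced = np.einsum("hwij,hwj->hwi", coeffs, augmented)         # apply each local affine model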


2013 ◽  
Vol 56 (7) ◽  
pp. 1200-1208 ◽  
Author(s):  
Yue Li ◽  
BaoJun Yang ◽  
HongBo Lin ◽  
HaiTao Ma ◽  
PengFei Nie


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces using their envelopes and instantaneous phases obtained by the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope, while preserving the phase information. Several tests are performed in order to investigate the behavior of the present method for resolution improvement and noise suppression. Applications on both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets, and hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. The bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance allowing an easier data interpretation. We recommend applying this simple signal processing for signal enhancement prior to interpretation, especially for single channel and low-fold seismic data.
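
A rough sketch of this kind of complex-trace manipulation, under our own simplifying assumptions (a Butterworth high-pass stands in for the paper's envelope shaping): compute the envelope and instantaneous phase with the Hilbert transform, attenuate the slow envelope components, and rebuild the trace with the phase unchanged:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def sharpen_trace(trace, dt, cutoff_hz=10.0):
        analytic = hilbert(trace)
        envelope = np.abs(analytic)
        phase = np.angle(analytic)
        b, a = butter(4, cutoff_hz, btype="highpass", fs=1.0 / dt)   # suppress slow envelope variations
        envelope_hp = np.maximum(filtfilt(b, a, envelope), 0.0)
        return envelope_hp * np.cos(phase)                           # recombine with the preserved phase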


2014 ◽  
Vol 672-674 ◽  
pp. 1964-1967
Author(s):  
Jun Qiu Wang ◽  
Jun Lin ◽  
Xiang Bo Gong

Vibroseis obtains seismic records by cross-correlation detection. Compared with a dynamite source, cross-correlation detection can suppress random noise, but it produces more correlation noise. This paper studies the Radon transform for removing the correlation noise produced by electromagnetic-drive vibroseis and impact rammer sources. The results of processing field seismic records show that the Radon transform can remove vibroseis correlation noise, and the SNR of the vibroseis seismic data is effectively improved.
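
A bare-bones slant-stack (linear tau-p) sketch, shown only as one way to set up Radon-domain filtering; the transform, sampling, and muting actually used in the paper are not reproduced, and the function and parameter names are ours:

    import numpy as np

    def slant_stack(section, dt, dx, slownesses):
        # section: (time samples, traces); slownesses in s/m.
        nt, nx = section.shape
        x = np.arange(nx) * dx
        taup = np.zeros((nt, len(slownesses)))
        for ip, p in enumerate(slownesses):
            shifts = np.round(p * x / dt).astype(int)                # per-trace time shift for this slope
            for ix in range(nx):
                taup[:, ip] += np.roll(section[:, ix], -shifts[ix])  # circular shift; acceptable for a sketch
        return taup / nx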

