Stacking seismic data using local correlation

Geophysics ◽  
2009 ◽  
Vol 74 (3) ◽  
pp. V43-V48 ◽  
Author(s):  
Guochang Liu ◽  
Sergey Fomel ◽  
Long Jin ◽  
Xiaohong Chen

Stacking plays an important role in improving signal-to-noise ratio and imaging quality of seismic data. However, for low-fold-coverage seismic profiles, the result of conventional stacking is not always satisfactory. To address this problem, we have developed a method of stacking in which we use local correlation as a weight for stacking common-midpoint gathers after NMO processing or common-image-point gathers after prestack migration. Application of the method to synthetic and field data showed that stacking using local correlation can be more effective in suppressing random noise and artifacts than other stacking methods.
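
As a rough illustration of the idea, the sketch below weights each trace of an NMO-corrected gather, sample by sample, by its local correlation with a reference trace (here the conventional equal-weight stack). The window length, the boxcar smoothing, the non-negativity clip, and the choice of reference are assumptions made for this sketch, not the authors' exact algorithm.

```python
import numpy as np

def local_correlation_stack(gather, win=25, eps=1e-10):
    """Local-correlation-weighted stack of an NMO-corrected CMP gather (sketch).

    gather : 2-D array (nsamples, ntraces) after NMO or prestack migration
    win    : length of the boxcar smoothing window in samples
    """
    def smooth(x):
        k = np.ones(win) / win
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, x)

    ref = gather.mean(axis=1, keepdims=True)             # conventional stack as reference
    num = smooth(gather * ref)                            # smoothed cross-correlation
    den = np.sqrt(smooth(gather ** 2) * smooth(ref ** 2)) + eps
    w = np.clip(num / den, 0.0, None)                     # local correlation, clipped at 0
    return (w * gather).sum(axis=1) / (w.sum(axis=1) + eps)
```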

Geophysics ◽  
2013 ◽  
Vol 78 (6) ◽  
pp. V229-V237 ◽  
Author(s):  
Hongbo Lin ◽  
Yue Li ◽  
Baojun Yang ◽  
Haitao Ma

Time-frequency peak filtering (TFPF) can efficiently suppress random noise and hence improve the signal-to-noise ratio. However, the results are not always satisfactory when the TFPF is applied to fast-varying seismic signals. We begin with an error analysis of the TFPF using the spread factor of the phase and the cumulants of the noise. This analysis shows that the nonlinear signal component and non-Gaussian random noise cause the peaks of the pseudo-Wigner-Ville distribution (PWVD) to deviate from the instantaneous frequency. The deviation introduces signal distortion and random oscillations into the result of the TFPF. We propose a weighted reassigned smoothed PWVD with less deviation than the PWVD. The proposed method adopts a frequency window to smooth away the residual oscillations in the PWVD and incorporates a weight function into the reassignment, which sharpens the time-frequency distribution and reduces the deviation. Because the weight function is determined by the lateral coherence of the seismic data, the weighted frequency reassignment maps the smoothed PWVD to an accurate instantaneous frequency for the desired signal components. As a result, the TFPF based on the weighted reassigned PWVD (TFPF_WR) is more effective in suppressing random noise and preserving signal than the TFPF using the PWVD. We test the proposed method on synthetic and field seismic data and compare it with a wavelet-transform method and a [Formula: see text] prediction filter. The results show that the proposed method preserves the signal better than the other methods at low signal-to-noise ratios.
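
For background, the basic (unweighted, unreassigned) TFPF on which the paper builds can be sketched as follows: the noisy trace is encoded as the instantaneous frequency of a unit-amplitude analytic signal, a windowed pseudo-WVD slice is computed at each sample, and the peak frequency is mapped back to an amplitude estimate. The modulation index, window length, scaling, and boundary handling below are illustrative assumptions, and the paper's weighted reassignment step is not included.

```python
import numpy as np

def tfpf(noisy, win=31, mu=0.5, nfft_mult=4):
    """Basic time-frequency peak filtering (sketch of the unweighted method).

    noisy : 1-D noisy seismic trace
    win   : odd length of the pseudo-WVD lag window (samples)
    mu    : frequency-modulation index (assumed value)
    """
    n = len(noisy)
    # Scale the trace into (0, 0.45) so the encoded instantaneous frequency
    # stays inside the positive half of the Nyquist band.
    lo, hi = noisy.min(), noisy.max()
    s = 0.45 * (noisy - lo) / (hi - lo + 1e-12)

    # 1) Encode the scaled trace as the instantaneous frequency of a
    #    unit-amplitude analytic (FM) signal.
    z = np.exp(2j * np.pi * mu * np.cumsum(s))

    half = win // 2
    h = np.hanning(win)                       # lag window of the pseudo-WVD
    nfft = nfft_mult * win                    # zero padding for finer peak picking
    m = np.arange(-half, half + 1)
    est = np.zeros(n)

    for k in range(n):
        # 2) Local instantaneous autocorrelation z[k+m] * conj(z[k-m]).
        ip = np.clip(k + m, 0, n - 1)
        im = np.clip(k - m, 0, n - 1)
        r = h * z[ip] * np.conj(z[im])
        # 3) Pseudo-WVD slice at time k; its peak estimates the instantaneous frequency.
        spec = np.abs(np.fft.fft(r, nfft))[: nfft // 2]
        est[k] = np.argmax(spec) / nfft       # peak frequency in cycles/sample

    # 4) Map the instantaneous frequency back to amplitude and undo the scaling.
    return (est / (2.0 * mu)) * (hi - lo) / 0.45 + lo
```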


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. U53-U63 ◽  
Author(s):  
Andrea Tognarelli ◽  
Eusebio Stucchi ◽  
Alessia Ravasio ◽  
Alfredo Mazzotti

We tested the properties of three different coherency functionals for the velocity analysis of seismic data relative to subbasalt exploration. We evaluated the performance of the standard semblance algorithm and two high-resolution coherency functionals based on the use of analytic signals and of the covariance estimation along hyperbolic traveltime trajectories. Approximate knowledge of the wavelet was exploited to design appropriate filters that matched the primary reflections, thereby further improving the ability of the functionals to highlight the events of interest. The tests were carried out on two synthetic seismograms computed on models reproducing the geologic setting of basaltic intrusions and on common midpoint gathers from a 3D survey. Synthetic and field data had a very low signal-to-noise ratio, strong multiple contamination, and weak primary subbasalt signals. The results revealed that high-resolution coherency functionals were more suitable than semblance algorithms to detect primary signals and to distinguish them from multiples and other interfering events. This early discrimination between primaries and multiples could help to target specific signal enhancement and demultiple operations.
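
As a point of reference for the functionals compared above, the conventional semblance along hyperbolic traveltime trajectories can be computed with the short sketch below; the gather layout, window length, and scan ranges are placeholders, and the two high-resolution functionals discussed in the paper would replace the numerator and denominator used here.

```python
import numpy as np

def semblance(gather, offsets, dt, t0s, velocities, win=11):
    """Conventional semblance over hyperbolic trajectories (illustrative sketch).

    gather     : 2-D array (nsamples, ntraces), a CMP gather
    offsets    : trace offsets [m]
    dt         : sample interval [s]
    t0s        : zero-offset times to scan [s]
    velocities : stacking velocities to scan [m/s]
    win        : number of samples in the time window
    """
    ns, ntr = gather.shape
    panel = np.zeros((len(t0s), len(velocities)))
    half = win // 2
    for i, t0 in enumerate(t0s):
        for j, v in enumerate(velocities):
            num, den = 0.0, 0.0
            for w in range(-half, half + 1):
                # hyperbolic traveltime for each offset at this (t0, v)
                t = np.sqrt((t0 + w * dt) ** 2 + (offsets / v) ** 2)
                idx = np.round(t / dt).astype(int)
                valid = idx < ns
                if not np.any(valid):
                    continue
                a = gather[idx[valid], np.nonzero(valid)[0]]
                num += a.sum() ** 2            # energy of the stacked amplitudes
                den += (a ** 2).sum()          # total energy along the trajectory
            panel[i, j] = num / (ntr * den) if den > 0 else 0.0
    return panel
```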


2022 ◽  
Vol 14 (2) ◽  
pp. 263
Author(s):  
Haixia Zhao ◽  
Tingting Bai ◽  
Zhiqiang Wang

Seismic field data are usually contaminated by random or complex noise, which seriously degrades the quality of the data and, in turn, seismic imaging and interpretation. Improving the signal-to-noise ratio (SNR) of seismic data has always been a key step in seismic data processing. Deep learning approaches have been successfully applied to suppress seismic random noise. Training examples are essential in deep learning methods, especially for geophysical problems, where complete training data are difficult to obtain because of the high cost of acquisition. In this work, we propose a deep learning method pre-trained on natural images to suppress seismic random noise, drawing on the idea of transfer learning. Our network consists of a pre-trained stage and a post-trained stage: the former is trained on natural images to obtain preliminary denoising results, while the latter is trained on a small number of seismic images by semi-supervised learning to fine-tune the denoising and enhance the continuity of geological structures. Results on four types of synthetic seismic data and six field datasets demonstrate that our network suppresses seismic random noise well, in terms of both quantitative metrics and visual quality.
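
The pre-train/fine-tune split can be sketched along the following lines in PyTorch. The small residual CNN, layer counts, loss, and optimizer settings are illustrative assumptions rather than the authors' architecture, and the semi-supervised structural-continuity term is omitted.

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """A small DnCNN-style residual denoiser (illustrative, not the authors' network)."""

    def __init__(self, depth=6, ch=32):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(ch, 1, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # the network predicts the noise; subtracting it yields the denoised image
        return x - self.net(x)

def train_stage(model, loader, epochs=10, lr=1e-4):
    """One training stage: pre-train on natural-image patches, then call again
    with a small seismic-patch loader to fine-tune (post-train) the same model."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for noisy, clean in loader:           # (noisy, clean) patch pairs
            opt.zero_grad()
            loss_fn(model(noisy), clean).backward()
            opt.step()
    return model
```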


2019 ◽  
pp. 1297-1303
Author(s):  
Kamal K. Ali ◽  
Reem K. Ibrahim ◽  
Hassan A. Thabit

The frequency-dependent noise attenuation (FDNAT) filter was applied to 2D seismic line DE21 in east Diwaniya, southeastern Iraq, to improve the signal-to-noise ratio. Applying FDNAT to the seismic data gave good results and removed much of the random noise. This processing helps in picking the reflector signals and therefore makes the subsequent interpretation of the data easier. Quality control using spectrum analysis was employed to verify the effectiveness of the FDNAT filter in removing the random noise.
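
A simple form of the spectrum-based quality control mentioned above is to compare the average amplitude spectrum of the line before and after filtering. The sketch below assumes a 2-D array of traces and a known sample interval; it is a generic QC, not the FDNAT filter itself.

```python
import numpy as np

def average_amplitude_spectrum(section, dt):
    """Average amplitude spectrum of a 2-D seismic section (nsamples, ntraces).

    Used as a simple QC: compare the spectrum before and after filtering to
    see which frequency bands the noise attenuation has affected.
    """
    ns = section.shape[0]
    freqs = np.fft.rfftfreq(ns, d=dt)
    spec = np.abs(np.fft.rfft(section, axis=0)).mean(axis=1)
    return freqs, spec

# Example QC comparison (dt assumed to be 2 ms):
# freqs, before = average_amplitude_spectrum(raw_section, dt=0.002)
# _,     after  = average_amplitude_spectrum(filtered_section, dt=0.002)
```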


Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. S273-S283 ◽  
Author(s):  
Jan Thorbecke ◽  
A. J. Berkhout

The common-focus-point technology (CFP) describes prestack migration by focusing in two steps: emission and detection. The output of the first focusing step represents a CFP gather. This gather defines a shot record that represents the subsurface response resulting from a focused source wavefield. We propose applying the recursive shot-record depth-migration algorithm to the CFP gathers of a seismic data volume and refer to this process as CFP-gather migration. In the situation of complex geology and/or low signal-to-noise ratio, CFP-based image gathers are easier to interpret for nonalignment than conventional image gathers. This makes the CFP-based image gathers better suited for velocity analysis. This important property is illustrated by examples on the Marmousi model.


Geophysics ◽  
2009 ◽  
Vol 74 (3) ◽  
pp. V49-V58 ◽  
Author(s):  
Mikhail Baykulov ◽  
Dirk Gajewski

We developed a new partial common-reflection-surface (CRS) stacking method to enhance the quality of sparse low-fold seismic data. For this purpose, we use kinematic wavefield attributes computed during the automatic CRS stack. We apply a multiparameter CRS traveltime formula to compute partial stacked CRS supergathers. Our algorithm allows us to generate NMO-uncorrected gathers without the application of inverse NMO/DMO. Gathers obtained by this approach are regularized and have better signal-to-noise ratio compared with original common-midpoint gathers. Instead of the original data, these improved prestack data can be used in many conventional processing steps, e.g., velocity analysis or prestack depth migration, providing enhanced images and better quality control. We verified the method on 2D synthetic data and applied it to low-fold land data from northern Germany. The synthetic examples show the robustness of the partial CRS stack in the presence of noise. Sparse land data became regularized, and the signal-to-noise ratio of the seismograms increased as a result of the partial CRS stack. Prestack depth migration of the generated partially stacked CRS supergathers produced significantly improved common-image gathers as well as depth-migrated sections.
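
For reference, the multiparameter CRS traveltime referred to above is commonly written, in its 2D hyperbolic form, in terms of the emergence angle and two wavefront curvatures; the expression below is the standard textbook form and not necessarily the exact parameterization used by the authors.

```latex
t^2(x_m, h) = \left[ t_0 + \frac{2\sin\alpha}{v_0}\,(x_m - x_0) \right]^2
            + \frac{2\, t_0 \cos^2\alpha}{v_0}
              \left[ K_N\,(x_m - x_0)^2 + K_{\mathrm{NIP}}\, h^2 \right]
```

Here x_m is the midpoint, x_0 the central midpoint, h the half-offset, t_0 the zero-offset traveltime, v_0 the near-surface velocity, alpha the emergence angle, and K_N and K_NIP the curvatures of the normal and normal-incidence-point wavefronts; a partial CRS stack sums traces within a limited aperture around this traveltime surface.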


2015 ◽  
Vol 2015 ◽  
pp. 1-7 ◽  
Author(s):  
Guxi Wang ◽  
Ling Chen ◽  
Si Guo ◽  
Yu Peng ◽  
Ke Guo

Improving the signal-to-noise ratio is an important aspect of seismic data processing. The main work of this paper is to combine the characteristics of seismic data with a wavelet-transform method to eliminate and control random noise, improving the signal-to-noise ratio with techniques that scale to large data systems and can therefore be promoted and applied more widely. In recent years, prestack denoising of all-digital three-dimensional seismic data has become a key part of data processing. Considering the characteristics of all-digital three-dimensional seismic data, and building on previous studies, a new threshold function is proposed. Compared with the conventional hard and soft thresholds, this function is not only easy to compute but also has good mathematical properties and a clear physical meaning. Simulation results show that the method removes random noise well. When the threshold function is used in the processing of field data from an unconventional lithologic gas reservoir with low porosity, low permeability, low abundance, and strong heterogeneity, the results show that the denoising method effectively improves the processing results and enhances the signal-to-noise ratio (SNR).
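
A generic single-trace version of wavelet-threshold denoising is sketched below using PyWavelets; the soft threshold and the universal threshold value are standard placeholders, and the paper's new threshold function would replace the call to pywt.threshold.

```python
import numpy as np
import pywt

def wavelet_denoise(trace, wavelet="db4", level=4):
    """Wavelet-threshold denoising of a single seismic trace (generic sketch)."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    # noise level estimated from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(trace)))     # universal threshold
    # threshold every detail band; the paper's custom function would go here
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(trace)]
```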


2015 ◽  
Vol 37 (3) ◽  
pp. 41-48
Author(s):  
Ewa Kawalec-Latała

Acoustic inversion is useful for extracting information from seismic data. Inhomogeneities of salt deposits should be predicted before a decision on the location of underground storage is made. This work concerns the possibility of detecting anhydrite intercalations in rock salt from a seismic dataset. The resolution strongly depends on the signal-to-noise ratio. Synthetic pseudoacoustic impedance sections are generated to test the efficiency of predictive and minimum-entropy deconvolution when random noise distorts the seismic traces.
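
One of the two processes tested above, predictive (gap) deconvolution, can be sketched in a few lines; the filter length, gap, and prewhitening value are illustrative assumptions, and the minimum-entropy deconvolution step is not shown.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, gap=8, nfilt=40, eps=1e-3):
    """Predictive (gap) deconvolution of one trace (illustrative sketch).

    gap   : prediction distance in samples (gap=1 approaches spiking decon)
    nfilt : prediction-filter length in samples
    eps   : prewhitening factor stabilising the normal equations
    """
    n = len(trace)
    # one-sided autocorrelation of the trace
    ac = np.correlate(trace, trace, mode="full")[n - 1:]
    r = ac[:nfilt].copy()
    r[0] *= 1.0 + eps                        # prewhitening
    rhs = ac[gap:gap + nfilt]                # right-hand side shifted by the gap
    f = solve_toeplitz(r, rhs)               # Wiener prediction filter

    # prediction error = input minus its predictable (e.g., reverberatory) part
    pred = np.convolve(trace, f)[:n]
    out = trace.copy()
    out[gap:] -= pred[:n - gap]
    return out
```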


2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three image-enhancement methods, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. Comparative calculations are carried out on the above techniques to examine objective and subjective image-quality parameters, e.g., peak signal-to-noise ratio (PSNR), entropy (H), and mean squared error (MSE), to measure the quality of the enhanced grayscale images. For gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their suitability for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method appears to overcome this disadvantage because it tends to preserve brightness while still enhancing contrast. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
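
A compact sketch of the DSIHE decomposition-and-equalization step (the first level described above) is given below for an 8-bit grayscale image; the spike-noise handling of the second level is not included.

```python
import numpy as np

def dsihe(img):
    """Dualistic Sub-Image Histogram Equalization (illustrative sketch).

    img : 2-D uint8 grayscale image. The image is split at its median gray
    level and the two sub-histograms are equalized independently, which
    tends to preserve the mean brightness.
    """
    med = int(np.median(img))
    out = np.empty_like(img)

    def equalize(lo, hi, mask):
        vals = img[mask]
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / max(vals.size, 1)
        # map each gray level of the sub-image back into [lo, hi]
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = lut[vals - lo]

    equalize(0, med, img <= med)        # lower sub-image
    equalize(med + 1, 255, img > med)   # upper sub-image
    return out
```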


Author(s):  
Mourad Talbi ◽  
Med Salim Bouhlel

Background: In this paper, we propose a secure image watermarking technique that is applied to grayscale and color images. It consists of applying the SVD (singular value decomposition) in the lifting wavelet transform domain to embed a speech signal, represented as an image (the watermark), into the host image. Methods: The technique also uses a signature in the embedding and extraction steps. Its performance is assessed by computing the PSNR (peak signal-to-noise ratio), SSIM (structural similarity), SNR (signal-to-noise ratio), SegSNR (segmental SNR), and PESQ (perceptual evaluation of speech quality). Results: The PSNR and SSIM are used to evaluate the perceptual quality of the watermarked image compared with the original image. The SNR, SegSNR, and PESQ are used to evaluate the perceptual quality of the reconstructed or extracted speech signal compared with the original speech signal. Conclusion: The results obtained from the computation of PSNR, SSIM, SNR, SegSNR, and PESQ demonstrate the performance of the proposed technique.
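
A generic sketch of SVD-based embedding in a wavelet subband is shown below. It uses an ordinary DWT from PyWavelets rather than the lifting wavelet transform, and the embedding strength alpha, the choice of the approximation subband, and the omission of the signature step are all simplifications of the method described above.

```python
import numpy as np
import pywt

def embed_watermark(host, watermark, alpha=0.05, wavelet="haar"):
    """SVD watermark embedding in a wavelet subband (generic sketch).

    host      : 2-D grayscale host image
    watermark : array carrying the watermark data (e.g., a speech signal)
    alpha     : assumed embedding strength
    """
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), wavelet)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    # add the (resized) watermark values to the host's singular values
    wm = np.resize(watermark.astype(float), S.shape)
    S_marked = S + alpha * wm
    LL_marked = U @ np.diag(S_marked) @ Vt
    # rebuild the watermarked image from the modified approximation subband
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)
```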

