A denoising framework for microseismic and reflection seismic data based on block matching

Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. V283-V292 ◽  
Author(s):  
Chao Zhang ◽  
Mirko van der Baan

Microseismic and seismic data with a low signal-to-noise ratio affect the accuracy and reliability of processing results and their subsequent interpretation. Thus, denoising is of great importance. We have developed an effective denoising framework for surface (micro)-seismic data using block matching. The novel idea of the proposed framework is to enhance coherent features by grouping similar 2D data blocks into 3D data arrays. The high similarities in the 3D data arrays benefit any filtering strategy suitable for multidimensional noise suppression. We test the performance of this framework on synthetic and field data with different noise levels. The results demonstrate that the block-matching-based framework achieves state-of-the-art denoising performance in terms of incoherent-noise attenuation and signal preservation.
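The grouping step the abstract describes can be sketched in a few lines: collect the 2D blocks most similar to a reference block into a 3D array, which any multidimensional filter can then process. This is a minimal illustration of the block-matching idea only, not the authors' implementation; all parameter names (`block`, `search`, `n_similar`) are illustrative.

```python
import numpy as np

def block_matching(data, ref_pos, block=8, search=24, n_similar=16):
    """Group the 2D blocks most similar to a reference block into a 3D array.

    data: 2D seismic section (time samples x traces).
    ref_pos: (row, col) of the reference block's top-left corner.
    Returns an (n_similar, block, block) stack, most similar first.
    """
    r0, c0 = ref_pos
    ref = data[r0:r0 + block, c0:c0 + block]
    candidates = []
    # Restrict the search to a window around the reference block.
    rows = range(max(0, r0 - search), min(data.shape[0] - block, r0 + search) + 1)
    cols = range(max(0, c0 - search), min(data.shape[1] - block, c0 + search) + 1)
    for r in rows:
        for c in cols:
            blk = data[r:r + block, c:c + block]
            dist = np.sum((blk - ref) ** 2)  # squared Euclidean distance
            candidates.append((dist, blk))
    candidates.sort(key=lambda t: t[0])     # most similar blocks first
    return np.stack([blk for _, blk in candidates[:n_similar]], axis=0)
```

Because coherent events repeat across the section, the stacked blocks are highly similar along the third axis, which is what makes the subsequent 3D filtering effective.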

Geophysics ◽  
2006 ◽  
Vol 71 (4) ◽  
pp. SI177-SI187 ◽  
Author(s):  
Brad Artman

Imaging passive seismic data is the process of synthesizing the wealth of subsurface information available from reflection seismic experiments by recording ambient sound using an array of geophones distributed at the surface. Crosscorrelating the traces of such a passive experiment can synthesize data that are identical to actively collected reflection seismic data. With a correlation-based imaging condition, wave-equation shot-profile depth migration can use raw transmission wavefields as input for producing a subsurface image. Migration is even more important for passively acquired data than for active data because with passive data, the source wavefields are likely to be weak compared with background and instrument noise — a condition that leads to a low signal-to-noise ratio. Fourier analysis of correlating long field records shows that aliasing of the wavefields from distinct shots is unavoidable. Although this reduces the order of computations for correlation by the length of the original trace, the aliasing produces an output volume that may not be substantially more useful than the raw data because of the introduction of crosstalk between multiple sources. Direct migration of raw field data still can produce an accurate image, even when the transmission wavefields from individual sources are not separated. To illustrate direct migration, I use images from a shallow passive seismic investigation targeting a buried hollow pipe and the water-table reflection. These images show a strong anomaly at the 1-m depth of the pipe and faint events that could be the water table at a depth of around [Formula: see text]. The images are not clear enough to be irrefutable. I identify deficiencies in survey design and execution to aid future efforts.
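The correlation step that turns passive transmission records into virtual reflection data can be sketched as follows: crosscorrelate every trace with one chosen trace, so that receiver acts as a virtual source. This is a generic sketch of trace crosscorrelation, not the paper's migration code; function and variable names are illustrative.

```python
import numpy as np

def correlate_traces(passive, virtual_src_trace):
    """Crosscorrelate each trace of a passive record with one chosen trace,
    turning that receiver into a virtual source.

    passive: array (..., n) of recorded traces.
    virtual_src_trace: 1D array (n,), the trace at the virtual source.
    """
    n = passive.shape[-1]
    # Zero-pad to 2n to avoid wrap-around (circular correlation) artifacts.
    F = np.fft.rfft(passive, 2 * n, axis=-1)
    S = np.fft.rfft(virtual_src_trace, 2 * n)
    corr = np.fft.irfft(F * np.conj(S), 2 * n, axis=-1)
    # Keep causal lags 0..n-1, the part that mimics an active-source record.
    return corr[..., :n]
```

For impulsive arrivals, the correlation peaks at the traveltime *difference* between the two receivers, which is why the output resembles a shot gather fired at the virtual source position.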


Geophysics ◽  
1997 ◽  
Vol 62 (4) ◽  
pp. 1310-1314 ◽  
Author(s):  
Qing Li ◽  
Kris Vasudevan ◽  
Frederick A. Cook

Coherency filtering is a tool used commonly in 2-D seismic processing to isolate desired events from noisy data. It assumes that phase‐coherent signal can be separated from background incoherent noise on the basis of coherency estimates, and coherent noise from coherent signal on the basis of different dips. It is achieved by searching for the maximum-coherence direction at each data point of a seismic event and enhancing the event along this direction through stacking; it suppresses the incoherent events along other directions. Foundations for a 2-D coherency filtering algorithm were laid out by several researchers (Neidell and Taner, 1971; McMechan, 1983; Leven and Roy‐Chowdhury, 1984; Kong et al., 1985; Milkereit and Spencer, 1989). Milkereit and Spencer (1989) applied 2-D coherency filtering successfully to 2-D deep crustal seismic data to improve visualization and interpretation. Work on random noise attenuation using frequency‐space or time‐space prediction filters, in either two or three dimensions, to increase the signal‐to‐noise ratio of the data can be found in the geophysical literature (Canales, 1984; Hornbostel, 1991; Abma and Claerbout, 1995).
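The dip-scan-and-stack procedure described above can be sketched directly: for each sample, trial dips are scanned over a small trace aperture, and the sample is replaced by the stack along the dip with maximum stack power. A minimal illustration of the scheme, not any of the cited implementations; `dips` is in samples per trace and all names are illustrative.

```python
import numpy as np

def coherency_filter(d, dips, half_width=2):
    """For each sample, find the trial dip with maximum stack power over a
    small aperture of neighboring traces, and output the stack along it."""
    nt, nx = d.shape
    out = np.zeros_like(d)
    for ix in range(nx):
        lo = max(0, ix - half_width)
        hi = min(nx, ix + half_width + 1)
        for it in range(nt):
            best_stack, best_pow = 0.0, -1.0
            for p in dips:
                vals = []
                for jx in range(lo, hi):
                    t = int(round(it + p * (jx - ix)))  # sample on trial dip
                    if 0 <= t < nt:
                        vals.append(d[t, jx])
                if not vals:
                    continue
                stack = sum(vals) / len(vals)
                if stack * stack > best_pow:        # stack power as coherence
                    best_pow, best_stack = stack * stack, stack
            out[it, ix] = best_stack
    return out
```

Stacking along the best dip reinforces phase-coherent events while averaging down incoherent noise, which is exactly the separation principle the abstract states.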


2019 ◽  
pp. 2664-2671
Author(s):  
Ahmed Hussein Ali ◽  
Ali M. Al-Rahim

A tau-p linear noise attenuation (TPLNA) filter was applied to the 3D seismic data of the Al-Samawah area, southwest Iraq, with the aim of attenuating linear noise. TPLNA transforms the data from the time domain to the tau-p domain in order to increase the signal-to-noise ratio. Applying TPLNA produced very good results, considering that 3D data usually contain a large amount of linear noise from different sources and in different azimuths and directions. This processing is very important for later interpretation because the signal was covered by different kinds of noise, of which linear noise forms a large part.
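The tau-p (slant-stack) transform that underlies such a filter maps a linear event to a point, so linear noise can be muted in the transform domain. Below is a minimal time-domain sketch of the forward transform only, not the processing system used in the study; names and the simple nearest-sample interpolation are illustrative.

```python
import numpy as np

def taup_forward(d, dt, offsets, slownesses):
    """Discrete linear tau-p (slant-stack) transform:
    m(tau, p) = sum over offsets x of d(tau + p*x, x)."""
    nt, nx = d.shape
    m = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))          # moveout in samples
            lo, hi = max(0, -shift), min(nt, nt - shift)
            m[lo:hi, ip] += d[lo + shift:hi + shift, ix]
    return m
```

A linear event with slowness p0 stacks constructively only at p = p0, producing a focused point at (tau0, p0); muting around unwanted slownesses and inverse-transforming removes the linear noise.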


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA115-WA136 ◽  
Author(s):  
Hao Zhang ◽  
Xiuyan Yang ◽  
Jianwei Ma

We have developed an interpolation method based on the denoising convolutional neural network (CNN) for seismic data. It provides a simple and efficient way to break through the problem of the scarcity of geophysical training labels that are often required by deep learning methods. This new method consists of two steps: (1) training a set of CNN denoisers to learn denoising from natural-image noisy-clean pairs and (2) integrating the trained CNN denoisers into the projection onto convex sets (POCS) framework to perform seismic data interpolation. We call it the CNN-POCS method. This method alleviates the demand that seismic data share similar features in applications of end-to-end deep learning for seismic data interpolation. Additionally, the adopted method is flexible and applicable to different types of missing traces because the missing or down-sampling locations are not involved in the training step; thus, it is of a plug-and-play nature. These properties indicate the high generalizability of the proposed method and a reduced need for problem-specific training. The primary results on synthetic and field data show promising interpolation performance of the adopted CNN-POCS method in terms of the signal-to-noise ratio, dealiasing, and weak-feature reconstruction, in comparison with the traditional [Formula: see text]-[Formula: see text] prediction filtering, curvelet transform, and block-matching 3D filtering methods.
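The plug-and-play structure of POCS interpolation can be shown as a loop that alternates a denoising step with re-insertion of the observed traces. The denoiser in the paper is a trained CNN; here any callable can be plugged in, and a simple FFT hard-thresholding denoiser stands in for it (an assumption for illustration, not the paper's network). All names are illustrative.

```python
import numpy as np

def pocs_interpolate(observed, mask, denoise, n_iter=30):
    """POCS interpolation skeleton: alternate a denoising/sparsifying
    projection with a data-consistency projection. `denoise` is any
    callable, which is what makes the scheme plug-and-play."""
    x = observed.copy()
    for _ in range(n_iter):
        x = denoise(x)                         # signal-model projection
        x = mask * observed + (1 - mask) * x   # restore observed traces
    return x

def fft_threshold(d, keep=0.02):
    """Stand-in denoiser (not the paper's CNN): keep only the strongest
    fraction `keep` of 2D Fourier coefficients."""
    D = np.fft.fft2(d)
    thresh = np.quantile(np.abs(D), 1 - keep)
    return np.real(np.fft.ifft2(D * (np.abs(D) >= thresh)))
```

Because the missing-trace mask appears only in the data-consistency step, the same trained denoiser handles arbitrary decimation patterns without retraining.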


2015 ◽  
Vol 3 (1) ◽  
pp. T1-T4 ◽  
Author(s):  
Saleh Al-Dossary

Random seismic noise, present in all 3D seismic data sets, hampers manual interpretation by geoscientists and automatic analysis by a computer program. As a result, many noise-suppression techniques have been developed to enhance image quality. Accurately suppressing seismic noise without damaging image details is crucial in preserving small-scale geologic features for channel detection. The automatic detection of channel patterns theoretically should be easy because of their unique spatial signatures and scales, which differentiate them from other common 3D geobodies. For example, one notable channel characteristic is high local linearity: Spatial coherency is much greater in one direction than in other directions. A variety of techniques, such as spatial filters, can be used to enhance this “slender” channel feature in areas of high signal-to-noise ratio (S/N). Unfortunately, these spatial filters may also reduce the edge detectability in areas of low S/N. In this paper, I compared the effectiveness of three noise reduction filters: (1) running average, (2) redundant wavelet transform (RWT), and (3) polynomial fitting. I demonstrated the usefulness of these filters prior to edge detection to enhance channel patterns in seismic data collected from Saudi Arabia. The data examples demonstrated that RWT and polynomial fitting can successfully preserve, enhance, and delineate channel edges that were not visible in conventional seismic amplitude displays, whereas the running average filter further smeared the detectability of channel edges.
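The running-average versus polynomial-fitting contrast has a simple 1D illustration: a boxcar smooths an edge over its whole window, while a local polynomial (Savitzky-Golay style) fit preserves the step much better. This is a generic stand-in for the filters compared in the paper, not the author's implementation; window sizes and names are illustrative.

```python
import numpy as np

def running_average(x, half=2):
    """Boxcar smoother: equal weights over a (2*half + 1)-sample window."""
    k = 2 * half + 1
    pad = np.pad(x, half, mode='edge')
    return np.convolve(pad, np.ones(k) / k, mode='valid')

def poly_smooth(x, half=2, degree=2):
    """Local polynomial smoother: least-squares fit a low-order polynomial
    in each window and evaluate it at the window centre."""
    t = np.arange(-half, half + 1)
    A = np.vander(t, degree + 1, increasing=True)  # columns 1, t, t^2, ...
    coeffs = np.linalg.pinv(A)[0]   # weights giving the fitted value at t=0
    pad = np.pad(x, half, mode='edge')
    return np.convolve(pad, coeffs[::-1], mode='valid')
```

On a unit step, the 5-point boxcar leaves a jump of only 0.2 between adjacent output samples, while the quadratic fit keeps a jump of 17/35 ≈ 0.49, which is why polynomial fitting preserves channel edges that the running average smears.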


2022 ◽  
Vol 14 (2) ◽  
pp. 263
Author(s):  
Haixia Zhao ◽  
Tingting Bai ◽  
Zhiqiang Wang

Seismic field data are usually contaminated by random or complex noise, which seriously degrades data quality and corrupts seismic imaging and seismic interpretation. Improving the signal-to-noise ratio (SNR) of seismic data has always been a key step in seismic data processing. Deep learning approaches have been successfully applied to suppress seismic random noise. Training examples are essential in deep learning methods, especially for geophysical problems, where complete training data are not easy to acquire because of the high cost of acquisition. In this work, we propose a deep learning method pre-trained on natural images to suppress seismic random noise through transfer learning. Our network contains pre-trained and post-trained networks: the former is trained on natural images to obtain preliminary denoising results, while the latter is trained on a small amount of seismic images by semi-supervised learning to fine-tune the denoising and enhance the continuity of geological structures. Results on four types of synthetic seismic data and six field data sets demonstrate that our network performs strongly in seismic random noise suppression in terms of both quantitative metrics and visual effects.
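The pre-train-then-fine-tune protocol can be illustrated with a deliberately tiny model: a single linear layer mapping noisy patches to clean ones, trained first on one data set and then fine-tuned on a small second set by passing the previous weights as the starting point. This is purely a sketch of the transfer-learning workflow; the paper uses a CNN, and every name here is illustrative.

```python
import numpy as np

def train_linear_denoiser(noisy, clean, W=None, lr=0.1, epochs=200):
    """Train a linear patch denoiser by gradient descent on MSE.

    Call once on (plentiful) natural-image pairs for pre-training, then
    again on a small seismic set with the returned W for fine-tuning.
    """
    n_feat = noisy.shape[1]
    W = np.zeros((n_feat, n_feat)) if W is None else W.copy()
    for _ in range(epochs):
        pred = noisy @ W
        grad = noisy.T @ (pred - clean) / len(noisy)  # MSE gradient
        W -= lr * grad
    return W
```

The fine-tuning call starts from the pre-trained weights rather than from scratch, so only a few epochs on the small seismic set are needed, which is the point of the two-stage scheme.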


Geophysics ◽  
2020 ◽  
Vol 85 (3) ◽  
pp. V283-V296 ◽  
Author(s):  
Andrey Bakulin ◽  
Ilya Silvestrov ◽  
Maxim Dmitriev ◽  
Dmitry Neklyudov ◽  
Maxim Protasov ◽  
...  

We have developed nonlinear beamforming (NLBF), a method for enhancing modern 3D prestack seismic data acquired onshore with small field arrays or single sensors in which weak reflected signals are buried beneath the strong scattered noise induced by a complex near surface. The method is based on the ideas of multidimensional stacking techniques, such as the common-reflection-surface stack and multifocusing, but it is designed specifically to improve the prestack signal-to-noise ratio of modern 3D land seismic data. Essentially, NLBF searches for coherent local events in the prestack data and then performs beamforming along the estimated surfaces. Comparing different gathers that can be extracted from modern 3D data acquired with orthogonal acquisition geometries, we determine that the cross-spread domain (CSD) is typically the most convenient and efficient. Conventional noise removal applied to modern data from small arrays or single sensors does not adequately reveal the underlying reflection signal. Instead, NLBF supplements these conventional tools and performs final aggregation of weak and still broken reflection signals, where the strength is controlled by the summation aperture. We have developed the details of the NLBF algorithm in CSD and determined the capabilities of the method on real 3D land data with the focus on enhancing reflections and early arrivals. We expect NLBF to help streamline seismic processing of modern high-channel-count and single-sensor data, leading to improved images as well as better prestack data for estimation of reservoir properties.
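The search-then-beamform idea can be sketched in 2D: for each output sample, scan local parabolic moveouts across a summation aperture, keep the one with maximum semblance, and output the stack along it. This is a much-simplified sketch; the actual NLBF method estimates full 3D local event surfaces in the cross-spread domain, and the parabolic parameterization and all names here are assumptions for illustration.

```python
import numpy as np

def nlbf_trace(d, ix, a_scan, b_scan, aperture=4):
    """Beamform one output trace: at each time sample, scan local moveouts
    t(h) = t + a*h + b*h**2 over the aperture, pick the most coherent one
    by semblance, and output the stack along it."""
    nt, nx = d.shape
    hs = [h for h in range(-aperture, aperture + 1) if 0 <= ix + h < nx]
    out = np.zeros(nt)
    for it in range(nt):
        best_stack, best_sem = 0.0, -1.0
        for a in a_scan:
            for b in b_scan:
                vals = []
                for h in hs:
                    t = int(round(it + a * h + b * h * h))
                    if 0 <= t < nt:
                        vals.append(d[t, ix + h])
                energy = sum(v * v for v in vals)
                if energy == 0.0:
                    continue
                stack = sum(vals) / len(vals)
                sem = sum(vals) ** 2 / (len(vals) * energy)  # semblance
                if sem > best_sem:
                    best_sem, best_stack = sem, stack
        out[it] = best_stack
    return out
```

Enlarging `aperture` sums more traces along the estimated surface, which is the "strength controlled by the summation aperture" trade-off: stronger noise suppression at the cost of lateral resolution.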


2020 ◽  
Vol 58 (12) ◽  
pp. 8874-8887
Author(s):  
Mi Zhang ◽  
Yang Liu ◽  
Haoran Zhang ◽  
Yangkang Chen

Geophysics ◽  
2011 ◽  
Vol 76 (4) ◽  
pp. S151-S155 ◽  
Author(s):  
Mikhail Baykulov ◽  
Stefan Dümmong ◽  
Dirk Gajewski

A processing workflow was introduced for reflection seismic data that is based entirely on common-reflection-surface (CRS) stacking attributes. This workflow comprises the CRS stack, multiple attenuation, velocity model building, prestack data enhancement, trace interpolation, and data regularization. Like other methods, its limitation is the underlying hyperbolic assumption. The CRS workflow provides an alternative processing path in case conventional common midpoint (CMP) processing is unsatisfactory. Particularly for data with poor signal-to-noise ratio and low-fold acquisition, the CRS workflow is advantageous. The data regularization feature and the ability of prestack data enhancement provide quality control in velocity model building and improve prestack depth-migrated images.
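For reference, the hyperbolic assumption mentioned above refers to the 2D zero-offset CRS stacking operator, which in the standard CRS literature (not stated in the abstract itself) is written in midpoint $x_m$ and half-offset $h$, with emergence angle $\alpha$, near-surface velocity $v_0$, and wavefront curvature radii $R_\mathrm{N}$ and $R_\mathrm{NIP}$, as:

```latex
t^2(x_m, h) = \left[ t_0 + \frac{2\sin\alpha}{v_0}\,(x_m - x_0) \right]^2
            + \frac{2\,t_0 \cos^2\alpha}{v_0}
              \left[ \frac{(x_m - x_0)^2}{R_\mathrm{N}} + \frac{h^2}{R_\mathrm{NIP}} \right]
```

The three attributes $(\alpha, R_\mathrm{N}, R_\mathrm{NIP})$ estimated per zero-offset sample are what drive every step of the workflow, from multiple attenuation to trace interpolation; where real traveltimes depart from this second-order hyperbolic form, the operator is the method's limitation.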


2002 ◽  
Vol 42 (1) ◽  
pp. 607
Author(s):  
C.R.T Ramsden ◽  
A.S Long

3D seismic technologies advanced rapidly during the 1990s. The new generation of seismic vessels, such as the Ramform design with its massive towing capacity, has changed the way in which modern seismic data are acquired. This has resulted in a large worldwide increase in the use of 3D seismic data during the exploration phase because of the reduction in the cost of 3D data. A statistical database has emerged showing that drilling on 3D data doubles the commercial success rate compared to drilling on 2D data.

Historically, dual-source acquisition has dominated exploration (by comparison to single-source acquisition) because of cost savings: single-source acquisition implies a geophysical requirement to tow the streamers at half the separation of dual-source acquisition. Data quality from single-source acquisition, however, is typically far superior to dual-source data. The ability now to tow 12–16 streamers has reduced costs so that single-source acquisition is now cost effective. Surveys using single-source acquisition allow 3D data to be acquired with significantly higher trace densities and crew efficiencies than the industry standard, and are called High Density 3D or HD3D. These surveys offer increased fold, improved spatial resolution, and improved imaging quality, and can now be conducted routinely, especially in difficult data areas.

The North West Shelf of Australia is a difficult data area because of the presence of strong multiple noise trains that often mask or interfere with the primary reflections (Ramsden et al., 1988). Standard multiple attenuation techniques have had only limited success. HD3D, with its higher trace density and 40% improvement in signal-to-noise ratio, has resulted in improved data quality in difficult data areas, and should result in data improvements on the North West Shelf as well.

Furthermore, the Continuous Long Offset (CLO) recording technique using Ramform technology is a dual-vessel operation that has demonstrated significant operational efficiency improvements in long-offset (typically deep-water targets) 3D seismic acquisition. Survey turnaround times can be reduced by as much as half of those using conventional techniques. The CLO technique is particularly well suited for deepwater recording.

