Edge-preserving frequency-offset denoising of seismic data

Geophysics ◽  
2018 ◽  
Vol 83 (5) ◽  
pp. V293-V303 ◽  
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis

We have developed new algorithms for denoising 2D and 3D poststack seismic-amplitude data that use simple edge-preserving smoothing operators in the frequency-offset domain. The algorithms are designed to attenuate random and coherent noise, to enhance signal energy and lateral continuity, and to preserve structural discontinuities such as faults. The methods fit the frequency slices of the data in the spatial dimension by means of low-order polynomials. To provide edge preservation, we use an overlapping-window operator that selects, for each point of the slice, the fitting parameters from the neighborhood with the minimum fitting error. Various synthetic examples and a field data set demonstrate the strengths and limitations of the algorithms. The denoised outputs show enhanced edge preservation of seismic features, which is reflected in clearer details of semblance attributes.
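
The core of the method, fitting each frequency slice with low-order polynomials inside overlapping spatial windows and keeping, for each point, the fit from the window with the smallest error, can be sketched as follows. This is a minimal Python/NumPy illustration; the window length, polynomial order, and the rfft-based transform are assumptions, not the paper's exact operator.

    import numpy as np

    def eppf_1d(s, order=2, win=9):
        # Edge-preserving fit: for each sample, try every length-`win` window
        # containing it and keep the polynomial fit with the smallest error.
        n = len(s)
        out = np.empty_like(s)
        for i in range(n):
            best_err, best_val = np.inf, s[i]
            for start in range(max(0, i - win + 1), min(i, n - win) + 1):
                x = np.arange(start, start + win)
                seg = s[start:start + win]
                coef = np.polyfit(x, seg, order)
                err = np.sum(np.abs(seg - np.polyval(coef, x)) ** 2)
                if err < best_err:
                    best_err, best_val = err, np.polyval(coef, i)
            out[i] = best_val
        return out

    def fx_denoise(section, order=2, win=9):
        # section: 2D array, time samples x traces
        F = np.fft.rfft(section, axis=0)            # to frequency-offset domain
        for k in range(F.shape[0]):
            F[k, :] = eppf_1d(F[k, :], order, win)  # smooth each frequency slice
        return np.fft.irfft(F, n=section.shape[0], axis=0)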

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. V355-V365 ◽
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis

Dictionary learning (DL) is a machine learning technique that finds a sparse representation of a given data set by means of a relatively small set of atoms learned from the input data. DL can remove random noise from seismic data very effectively. However, when seismic data are contaminated with footprint noise, the atoms of the learned dictionary are often a mixture of data and coherent noise patterns. In this scenario, DL requires a morphological attribute classification of the atoms to separate the noisy atoms from the dictionary. Instead, we have developed a novel DL strategy for the removal of footprint patterns in 3D seismic data that is based on an augmented dictionary built by appropriately filtering the learned atoms. The resulting augmented dictionary, which contains the filtered atoms and their residuals, has a high discriminative power in separating signal and footprint atoms, thus eliminating the need for any statistical classification strategy to segregate the atoms of the learned dictionary. We filter the atoms using domain-transform filtering, a very efficient edge-preserving smoothing algorithm. As in the so-called coherence-constrained DL method, the proposed strategy does not require the user to know or adjust the noise level or the sparsity of the solution for each data set. Furthermore, it requires only one pass of DL and is shown to produce successful transfer learning, which speeds up the denoising because the augmented dictionary does not need to be recalculated for each time slice of the input data volume. Results on synthetic and 3D public-domain poststack field data demonstrate effective footprint removal with accurate edge preservation.
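
A minimal sketch of the augmented-dictionary idea, assuming scikit-learn's dictionary learner and using a median filter as a stand-in for the paper's domain-transform filter (patch size, atom count, and sparsity level are illustrative choices):

    import numpy as np
    from scipy.ndimage import median_filter
    from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def footprint_denoise(slice2d, patch=(8, 8), n_atoms=64):
        # Learn atoms once from the (de-meaned) patches of a time slice.
        P = extract_patches_2d(slice2d, patch).reshape(-1, patch[0] * patch[1])
        mean = P.mean(axis=1, keepdims=True)
        D = MiniBatchDictionaryLearning(n_components=n_atoms,
                                        alpha=1.0).fit(P - mean).components_
        # Split every atom into a filtered (signal-like) part and a residual.
        Ds = np.stack([median_filter(a.reshape(patch), size=3).ravel() for a in D])
        D_aug = np.vstack([Ds, D - Ds])                   # augmented dictionary
        D_aug /= np.linalg.norm(D_aug, axis=1, keepdims=True) + 1e-12
        # Sparse-code on the augmented dictionary and reconstruct with the
        # filtered-atom half only; the residual half absorbs the footprint.
        codes = sparse_encode(P - mean, D_aug, algorithm='omp', n_nonzero_coefs=4)
        clean = codes[:, :n_atoms] @ D_aug[:n_atoms] + mean
        return reconstruct_from_patches_2d(clean.reshape(-1, *patch), slice2d.shape)

Because the dictionary is learned and filtered only once, the same D_aug can be reused across time slices, which is the transfer-learning speedup the abstract describes.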


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. V1-V10 ◽
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis ◽  
Juan I. Sabbione

We have developed an empirical-mode decomposition (EMD) algorithm for effective suppression of random and coherent noise in 2D and 3D seismic-amplitude data. Unlike other EMD-based methods for seismic data processing, our approach does not involve the time direction in the computation of the signal envelopes needed for the iterative sifting process. Instead, we apply the sifting algorithm spatially in the inline-crossline plane. At each time slice, we calculate the upper and lower signal envelopes by means of a filter whose length is adapted dynamically at each sifting iteration according to the spatial distribution of the extrema. The denoising of a 3D volume is achieved by removing the most oscillating modes of each time slice from the noisy data. We assess the performance of the algorithm using three public-domain poststack field data sets: one 2D line of the well-known Alaska 2D data set, available from the US Geological Survey; a subset of the Penobscot 3D volume acquired offshore by the Nova Scotia Department of Energy, Canada; and a subset of the Stratton 3D land data from South Texas, available from the Bureau of Economic Geology at the University of Texas at Austin. The results indicate that random and coherent noise, such as footprint signatures, can be mitigated satisfactorily, enhancing the reflectors with negligible signal leakage in most cases. Our method, called empirical-mode filtering (EMF), yields improved results compared with other 2D and 3D techniques, such as f-x EMD filtering, f-x deconvolution, and f-x-y adaptive prediction filtering. EMF exploits the flexibility of EMD on seismic data and is presented as an efficient and easy-to-apply alternative for denoising seismic data with mild to moderate structural complexity.
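
A rough sketch of the spatial sifting on one time slice, assuming max/min filters plus smoothing as envelope estimators and a simple extrema-density rule for the adaptive filter length (the paper's adaptive scheme may differ):

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

    def first_mode(slice2d, n_sift=8):
        # Extract the most oscillatory spatial mode of a time slice by sifting.
        h = slice2d.astype(float)
        for _ in range(n_sift):
            # Adapt the envelope filter length to the density of local maxima.
            n_peaks = int((h == maximum_filter(h, size=3)).sum())
            size = max(3, int(np.sqrt(h.size / max(n_peaks, 1))))
            upper = uniform_filter(maximum_filter(h, size=size), size)
            lower = uniform_filter(minimum_filter(h, size=size), size)
            h = h - 0.5 * (upper + lower)   # subtract the local mean envelope
        return h

    def emf_denoise_slice(slice2d):
        # Denoised slice = input minus its most oscillating mode.
        return slice2d - first_mode(slice2d)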


Geophysics ◽  
1985 ◽  
Vol 50 (12) ◽  
pp. 2697-2708 ◽  
Author(s):  
Gary Yu

The partition of plane seismic waves at plane interfaces introduces changes in seismic amplitude that vary with the angle of incidence. These amplitude variations are a function of the elastic parameters of the rocks on either side of the interface. Controlled-amplitude processing is designed to recover the true amplitude information, which is geologic in origin. The offset-amplitude information can be used successfully to predict the fluid type in reservoir sands. Various tests were carried out on a seismic profile from the Gulf Coast. The processing comparison emphasized the effects and pitfalls of trace equalization, coherent noise, offset, and surface-related problems. Two wells drilled at amplitude-anomaly locations confirmed the prediction of hydrocarbons from offset-amplitude analysis. Furthermore, controlled-amplitude processing provided clues for evaluating reservoir quality that were not evident in the conventional relative-amplitude data.
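
As an illustration of how reflection amplitude varies with incidence angle for a given elastic contrast, the sketch below uses the two-term Shuey approximation to the Zoeppritz equations; the approximation and the rock properties are illustrative choices, not taken from the paper:

    import numpy as np

    def shuey_rc(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
        # Two-term Shuey: R(theta) ~ R0 + G * sin^2(theta)
        th = np.radians(theta_deg)
        vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
        dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
        R0 = 0.5 * (dvp / vp + drho / rho)          # normal-incidence reflectivity
        G = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)
        return R0 + G * np.sin(th) ** 2             # AVO gradient term

    # Illustrative shale-over-gas-sand contrast (made-up properties):
    angles = np.arange(0, 31, 5)
    print(shuey_rc(3000, 1500, 2.40, 2700, 1700, 2.10, angles))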


Geophysics ◽  
2009 ◽  
Vol 74 (4) ◽  
pp. V69-V73 ◽  
Author(s):  
Yan-hong Lu ◽  
Wen-kai Lu

This paper focuses on suppressing random seismic noise while preserving signals and edges. We propose an edge-preserving polynomial-fitting (EPPF) method that achieves good signal and edge preservation. The EPPF method assumes that a 1D signal can be modeled by a polynomial. A series of shifted windows is used to estimate each sample in a 1D signal; the window with the minimum fitting error is then selected, and its output is assigned as the final estimate for that sample. For a point in 2D seismic data, several 1D signals are first extracted along different directions and then processed by the EPPF method. The direction with the minimum fitting error is selected, and its output is assigned as the final estimate for that point. Applications to synthetic and real data sets show that the EPPF method suppresses random seismic noise effectively while preserving signals and edges. Comparisons of results obtained by the EPPF method, the edge-preserving smoothing (EPS) method, and the polynomial-fitting (PF) method show that EPPF outperforms EPS and PF in these tests.
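
A compact sketch of the directional EPPF estimate for one point of a 2D section, searching over both shifted windows and extraction directions; the direction set, window length, and polynomial order are assumptions:

    import numpy as np

    DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, diagonals

    def eppf_point(data, i, j, order=2, win=9, half=8):
        # Best windowed polynomial fit over all directions through (i, j).
        best_err, best_val = np.inf, data[i, j]
        for di, dj in DIRECTIONS:
            idx = [(i + k * di, j + k * dj) for k in range(-half, half + 1)
                   if 0 <= i + k * di < data.shape[0]
                   and 0 <= j + k * dj < data.shape[1]]
            prof = np.array([data[p] for p in idx])
            c = idx.index((i, j))
            # Every length-`win` window along this direction containing the point:
            for s in range(max(0, c - win + 1), min(c, len(prof) - win) + 1):
                x = np.arange(s, s + win)
                coef = np.polyfit(x, prof[s:s + win], order)
                err = np.sum((prof[s:s + win] - np.polyval(coef, x)) ** 2)
                if err < best_err:
                    best_err, best_val = err, np.polyval(coef, c)
        return best_val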


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽
Author(s):  
Cem Bozkus ◽  
Basilio B. Fraguela

In recent years, vast amounts of data of many kinds are being generated, from the pictures and videos taken by our cameras to the software logs produced day and night by sensor networks and Internet routers. This has led to new big data problems that require new algorithms capable of handling such large volumes of data, and that are therefore very computationally demanding. In this paper, we parallelize one of these algorithms, HyperLogLog, which estimates the number of distinct items in a large data set with minimal memory usage, lowering the typical memory cost of this type of calculation from O(n) to O(1). We have implemented parallelizations based on OpenMP and OpenCL and evaluated them on a standard multicore system, an Intel Xeon Phi, and two GPUs from different vendors. The results obtained in our experiments, in which we reach a speedup of 88.6 with respect to an optimized sequential implementation, are very positive, particularly given the need to run this kind of algorithm on large amounts of data.
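
The property that makes HyperLogLog easy to parallelize is that register arrays built over independent chunks of the data merge with an element-wise maximum. A minimal Python sketch with illustrative parameters (2^14 registers, a 64-bit hash) and without the small-range bias corrections:

    import hashlib
    import numpy as np

    P = 14                       # 2**14 registers (~16 KB): the O(1) memory
    M = 1 << P

    def hll_registers(items):
        reg = np.zeros(M, dtype=np.uint8)
        for it in items:
            h = int.from_bytes(hashlib.blake2b(str(it).encode(),
                                               digest_size=8).digest(), 'big')
            j = h >> (64 - P)                     # top P bits pick a register
            w = h & ((1 << (64 - P)) - 1)         # remaining bits give the rank
            rank = (64 - P) - w.bit_length() + 1  # leading zeros + 1
            if rank > reg[j]:
                reg[j] = rank
        return reg

    def hll_estimate(reg):
        alpha = 0.7213 / (1 + 1.079 / M)          # bias constant for large M
        return alpha * M * M / np.sum(2.0 ** -reg.astype(float))

    # Registers from independent chunks merge with an element-wise max:
    parts = [hll_registers(range(0, 50000)), hll_registers(range(25000, 75000))]
    merged = np.maximum.reduce(parts)
    print(round(hll_estimate(merged)))            # ~75000 distinct items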


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4057-4067

We design a median filter that can process 36 pixels at a time with edge preservation similar to that of a filter of size 9. Median sorting is performed with a modified minimum-exchange sorting method that accepts twice the number of inputs in order to reduce the number of comparators used for median filtering. Because doubling the number of inputs raises the switching loss in the circuit, data-driven clock gating (DDCG) is applied to the SRAM to form a data-driven FIFO. Because space radiation can flip memory states, double modular redundancy (DMR) is added in the FPIC to rectify the soft errors that may occur due to radiation in space. The proposed method is therefore capable of producing a sharp image, controlling switching loss, minimizing area, and reducing soft errors.
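
The comparator-network idea behind hardware median filters can be modeled in software; the sketch below uses the classic 19-comparator network for a median of 9 (the paper's 36-pixel minimum-exchange sorter is not reproduced here):

    def cx(v, a, b):
        # One comparator: exchange so that v[a] <= v[b].
        if v[a] > v[b]:
            v[a], v[b] = v[b], v[a]

    def median9(pixels):
        v = list(pixels)
        for a, b in [(1, 2), (4, 5), (7, 8), (0, 1), (3, 4), (6, 7),
                     (1, 2), (4, 5), (7, 8), (0, 3), (5, 8), (4, 7),
                     (3, 6), (1, 4), (2, 5), (4, 7), (4, 2), (6, 4), (4, 2)]:
            cx(v, a, b)
        return v[4]                    # the median ends up at position 4

    print(median9([7, 3, 9, 1, 5, 8, 2, 6, 4]))   # -> 5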


Geophysics ◽  
2021 ◽  
pp. 1-51 ◽
Author(s):  
Chao Wang ◽  
Yun Wang

Reduced-rank filtering is a common method for attenuating noise in seismic data. Because conventional reduced-rank filtering distinguishes signals from noise only according to singular values, it performs poorly when the signal-to-noise ratio is very low or when the data contain high levels of isolated or coherent noise. Therefore, we developed a novel and robust reduced-rank filtering method based on singular value decomposition in the time-space domain, in which noise is recognized and attenuated according to the characteristics of both the singular values and the singular vectors. The left and right singular vectors corresponding to large singular values are selected first. The right singular vectors are then classified into different categories according to their curve characteristics, such as jump, pulse, and smooth. Each kind of right singular vector is related to a type of noise or seismic event and is corrected with a different filtering technique, such as mean filtering, edge-preserving smoothing, or edge-preserving median filtering. The left singular vectors are also corrected, using filtering methods based on frequency attributes such as the dominant frequency and the frequency bandwidth. To process seismic data containing a variety of events, local data are extracted along the local dip of each event; the optimal local dip is identified according to the singular values and singular vectors of the data matrices extracted along different trial directions. This new filtering method has been applied to synthetic and field seismic data, and its performance is compared with that of several conventional filtering methods. The results indicate that the new method is more robust for data with a low signal-to-noise ratio, strong isolated noise, or coherent noise, and that it overcomes the difficulties associated with selecting an optimal rank.
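
A simplified sketch of singular-vector-aware reduced-rank filtering: keep the leading singular triplets and correct each right singular vector with a filter chosen by a crude jump/pulse-versus-smooth test; the classification rule and filter sizes are stand-ins for the paper's criteria:

    import numpy as np
    from scipy.signal import medfilt
    from scipy.ndimage import uniform_filter1d

    def rr_filter(D, rank=5):
        # Keep the leading singular triplets, then correct the right singular
        # vectors with a filter chosen from a simple curve-shape test.
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        for k in range(rank):
            dv = np.abs(np.diff(Vt[k]))
            if dv.max() > 5 * (dv.mean() + 1e-12):       # jump/pulse-like vector
                Vt[k] = medfilt(Vt[k], kernel_size=5)    # edge-preserving median
            else:                                        # smooth event
                Vt[k] = uniform_filter1d(Vt[k], size=3)  # light mean filtering
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]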


2020 ◽  
Vol 8 (4) ◽  
pp. SQ1-SQ13 ◽
Author(s):  
Christoph G. Eichkitz ◽  
Sarah Schneider ◽  
Andreas B. Hölker ◽  
Philip Birkhäuser ◽  
Herfried Madritsch

The identification and characterization of tectonic faults in the subsurface are key aspects of geologic exploration activities across the world. We have evaluated the impact of alternative seismic time-imaging methods on initial subsurface fault mapping in three dimensions in the form of a case study situated in the most external foreland of the European Central Alps (the northernmost Molasse Basin). Four different seismic amplitude volumes of one and the same 3D seismic data set, differing in the imaging technologies and parameterizations applied, were considered for the interpretation of a fault zone dissecting a Mesozoic sedimentary sequence that is characterized by a pronounced mechanical stratigraphy and has witnessed a multiphase tectonic evolution. For this purpose, we interpreted each seismic amplitude volume separately. In addition, we computed a series of seismic attributes individually for each volume. Comparison of the different data interpretations revealed consistent results concerning the mapping of the seismic marker horizons and main fault segments. Deviations concern the apparent degree of vertical and lateral fault-zone segmentation and the occurrence of small-scale fault strands that may be regarded as important fault-kinematic indicators. Compiling all fault interpretations in map form allows a critical assessment of the robustness of the initial seismic fault mapping, distinguishing well-constrained from poorly defined fault-zone elements. We conclude that considering multiple seismic processing products for subsurface fault mapping is advisable to evaluate general imaging uncertainties and potentially to guide the development of fault-zone model variants that address previously discussed aspects of conceptual interpretation uncertainty.


2016 ◽  
Vol 2016 (4) ◽  
pp. 21-36 ◽  
Author(s):  
Tao Wang ◽  
Ian Goldberg

Website fingerprinting allows a local, passive observer monitoring a web-browsing client's encrypted channel to determine her web activity. Previous attacks have shown that website fingerprinting could be a threat to anonymity networks such as Tor under laboratory conditions. However, there are significant differences between laboratory and realistic conditions. First, in laboratory tests the training data set is collected together with the testing data set, so the training data set is fresh; an attacker, however, may not be able to maintain a fresh data set. Second, laboratory packet sequences each correspond to a single page, but for realistic packet sequences the split between pages is not obvious. Third, packet sequences may include background noise from other types of web traffic. These differences adversely affect website fingerprinting under realistic conditions. In this paper, we tackle these three problems to bridge the gap between laboratory and realistic conditions for website fingerprinting. We show that we can maintain a fresh training set with minimal resources. We demonstrate several classification-based techniques that effectively split full packet sequences into sequences corresponding to a single page each. We describe several new algorithms for tackling background noise. With our techniques, we are able to build the first website fingerprinting system that can operate directly on packet sequences collected in the wild.
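
As a toy illustration of the classification step, the sketch below turns a packet trace (signed packet sizes, positive for outgoing) into coarse features and classifies it with a nearest-neighbor model; the features and classifier are generic stand-ins, not the paper's system:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def features(trace):
        # Coarse trace summary: packet counts and byte totals per direction.
        sizes = np.asarray(trace, dtype=float)
        out, inc = sizes[sizes > 0], sizes[sizes < 0]
        return [sizes.size, out.size, inc.size,
                out.sum() if out.size else 0.0,
                -inc.sum() if inc.size else 0.0]

    # Toy training traces for two "sites" with different traffic shapes:
    train = [[+600] * 5 + [-1500] * 40, [+600] * 20 + [-1500] * 10]
    labels = ['siteA', 'siteB']
    clf = KNeighborsClassifier(n_neighbors=1).fit(
        [features(t) for t in train], labels)
    print(clf.predict([features([+600] * 6 + [-1500] * 35)]))  # -> ['siteA']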

