Learning the blending spikes using sparse dictionaries

2020 ◽  
Vol 222 (3) ◽  
pp. 1846-1863
Author(s):  
Yangkang Chen ◽  
Shaohuan Zu ◽  
Wei Chen ◽  
Mi Zhang ◽  
Zhe Guan

Deblending plays an important role in preparing high-quality seismic data from modern blended simultaneous-source seismic acquisition. State-of-the-art deblending is based on sparsity-constrained iterative inversion. Inversion-based deblending assumes that the ambient noise level is low and that the data misfit during iterative inversion accounts for the random ambient noise. The traditional method becomes problematic when the random ambient noise is extremely strong, because the inversion then iteratively fits the random noise instead of the signal and the blending interference. We propose a constrained inversion model that takes the strong random noise into consideration and can achieve satisfactory results even when strong random noise exists. The principle of this new method is that we use sparse dictionaries to learn the blending spikes, so that the learned dictionary atoms are able to distinguish between blending spikes and random noise. The separated signal and blending spikes can then be better fitted by the iterative inversion framework. Synthetic and field data examples are used to demonstrate the performance of the new approach.
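As background for the sparsity-constrained iterative inversion the summary describes, the generic shrinkage loop underlying such methods can be sketched as follows. This is a minimal illustrative version only: the Fourier transform stands in for the authors' learned sparse dictionary, and the `blend`/`adjoint` callables and threshold `lam` are assumptions, not the paper's formulation.

```python
import numpy as np

def soft_threshold(x, lam):
    """Complex soft-thresholding (shrinkage) operator."""
    mag = np.abs(x)
    return np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * x, 0.0)

def deblend_ista(d, blend, adjoint, lam=0.5, n_iter=50):
    """Sparsity-constrained deblending by iterative shrinkage (ISTA-style).

    d       : blended record (1D array here, for simplicity)
    blend   : forward blending operator (callable)
    adjoint : its adjoint (callable)
    The signal is assumed sparse in the Fourier domain.
    """
    s = np.zeros_like(d)
    for _ in range(n_iter):
        r = d - blend(s)                      # data residual
        u = np.fft.fft(s + adjoint(r))        # move to the sparse domain
        s = np.real(np.fft.ifft(soft_threshold(u, lam)))
    return s
```

With identity operators the loop reduces to Fourier-domain denoising, which already illustrates how the misfit-driven update can mistake strong random noise for signal when the threshold is set too low.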

Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. V137-V148 ◽  
Author(s):  
Pierre Turquais ◽  
Endrias G. Asgedom ◽  
Walter Söllner

We have addressed the seismic data denoising problem, in which the noise is random and has an unknown spatiotemporally varying variance. In seismic data processing, random noise is often attenuated using transform-based methods. The success of these methods in denoising depends on the ability of the transform to efficiently describe the signal features in the data. Fixed transforms (e.g., wavelets, curvelets) do not adapt to the data and might fail to efficiently describe complex morphologies in the seismic data. Alternatively, dictionary learning methods adapt to the local morphology of the data and provide state-of-the-art denoising results. However, conventional denoising by dictionary learning requires a priori information on the noise variance, and it encounters difficulties when applied to seismic data in which the noise variance varies in space or time. We have developed a coherence-constrained dictionary learning (CDL) method for denoising that does not require any a priori information related to the signal or noise. To denoise a given window of a seismic section using CDL, overlapping small 2D patches are extracted and a dictionary of patch-sized signals is trained to learn the elementary features embedded in the seismic signal. For each patch, using the learned dictionary, a sparse optimization problem is solved, and a sparse approximation of the patch is computed to attenuate the random noise. Unlike conventional dictionary learning, the sparsity of the approximation is constrained based on coherence, so that it needs neither a priori noise-variance nor signal-sparsity information and is still optimal for filtering out Gaussian random noise. The denoising performance of the CDL method is validated using synthetic and field data examples, and it is compared with K-SVD and FX-Decon denoising. We found that CDL gives better denoising results than K-SVD and FX-Decon for removing noise when the variance varies in space or time.
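The key idea of the coherence-constrained stopping rule can be sketched with a greedy sparse-coding routine that halts when the residual no longer correlates strongly with any dictionary atom. This is an illustrative stand-in under stated assumptions (unit-norm atoms, a user-chosen coherence bound `mu`), not the authors' exact algorithm:

```python
import numpy as np

def omp_coherence(D, y, mu):
    """Greedy sparse coding of patch y on dictionary D that stops when the
    residual's maximum normalized correlation (coherence) with the atoms
    drops below mu -- no noise-variance input is needed.
    D : (n, K) dictionary with unit-norm columns
    y : (n,) signal patch
    """
    r = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(D.shape[0]):
        c = D.T @ r
        coh = np.abs(c) / (np.linalg.norm(r) + 1e-12)
        k = int(np.argmax(coh))
        if coh[k] < mu:                       # residual looks like noise: stop
            break
        support.append(k)
        Ds = D[:, support]
        x_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)  # refit on the support
        r = y - Ds @ x_s
    if support:
        x[support] = x_s
    return x
```

Because the stopping test is relative to the residual's own norm, the same `mu` works whether the noise is weak or strong, which is the property that lets CDL cope with spatially varying variance.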


2019 ◽  
Vol 38 (8) ◽  
pp. 625-629 ◽  
Author(s):  
Jiawen Song ◽  
Peiming Li ◽  
Zhongping Qian ◽  
Mugang Zhang ◽  
Pengyuan Sun ◽  
...  

Compared with conventional seismic acquisition methods, simultaneous-source acquisition utilizes independent shooting that allows for source interference, which reduces the time and cost of acquisition. However, additional processing is required to separate the interfering sources. Here, we present an inversion-based deblending method that distinguishes signal from blending noise based on coherency differences in 3D receiver gathers. We first transform the seismic data into the frequency-wavenumber-wavenumber domain and impose a sparse constraint to estimate the coherent signal. We then subtract the estimated signal from the original input to predict the interference noise. Driven by data residuals, the signal is updated iteratively with shrinking thresholds until the signal and noise fully separate. We test the presented method on two 3D field data sets and demonstrate that it proficiently separates interfering vibroseis sources with high fidelity.
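The residual-driven, shrinking-threshold iteration described above can be sketched in two dimensions (an f-k transform standing in for the paper's 3D frequency-wavenumber-wavenumber domain). The linear threshold schedule `t0`..`t1` is an illustrative assumption:

```python
import numpy as np

def fk_deblend(d, n_iter=20, t0=0.9, t1=0.1):
    """Iterative f-k thresholding with a shrinking threshold.
    Coherent events map to strong, concentrated f-k coefficients and are
    captured first; weak, dispersed blending noise stays below threshold.
    d : 2D gather (time x trace)
    """
    s = np.zeros_like(d)
    lam_max = np.abs(np.fft.fft2(d)).max()    # reference level from the data
    for i in range(n_iter):
        frac = t0 + (t1 - t0) * i / max(n_iter - 1, 1)
        u = np.fft.fft2(d - s)                # residual: energy not yet explained
        keep = np.abs(u) >= frac * lam_max    # threshold shrinks each pass
        s = s + np.real(np.fft.ifft2(np.where(keep, u, 0)))
    return s
```

Each pass admits newly coherent energy from the residual into the signal estimate, mimicking the data-residual-driven update of the inversion.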


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. V355-V365
Author(s):  
Julián L. Gómez ◽  
Danilo R. Velis

Dictionary learning (DL) is a machine learning technique that can be used to find a sparse representation of a given data set by means of a relatively small set of atoms, which are learned from the input data. DL allows for very effective removal of random noise from seismic data. However, when seismic data are contaminated with footprint noise, the atoms of the learned dictionary are often a mixture of data and coherent noise patterns. In this scenario, DL requires a morphological attribute classification of the atoms to separate the noisy atoms from the dictionary. Instead, we have developed a novel DL strategy for the removal of footprint patterns in 3D seismic data that is based on an augmented dictionary built by appropriately filtering the learned atoms. The resulting augmented dictionary, which contains the filtered atoms and their residuals, has a high discriminative power in separating signal and footprint atoms, thus obviating the need for any statistical classification strategy to segregate the atoms of the learned dictionary. We filter the atoms using a domain transform filtering approach, a very efficient edge-preserving smoothing algorithm. As in the so-called coherence-constrained DL method, the proposed DL strategy does not require the user to know or adjust the noise level or the sparsity of the solution for each data set. Furthermore, it only requires one pass of DL and is shown to produce successful transfer learning. This increases the speed of the denoising processing because the augmented dictionary does not need to be calculated for each time slice of the input data volume. Results on synthetic and 3D public-domain poststack field data demonstrate effective footprint removal with accurate edge preservation.
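The augmented-dictionary construction can be sketched as follows: each learned atom is split into a smoothed (signal) part and its residual (footprint) part, and a patch is rebuilt from the signal half only. A simple edge-padded moving average stands in for the domain transform filter here, purely for illustration:

```python
import numpy as np

def moving_average(v, w=5):
    """Edge-padded moving average (illustrative smoother only; the paper
    uses the domain transform edge-preserving filter instead)."""
    return np.convolve(np.pad(v, w // 2, mode="edge"), np.ones(w) / w, mode="valid")

def augment_dictionary(D, smooth):
    """Stack each atom's smoothed part and its residual into one
    augmented dictionary: columns [smooth(D) | D - smooth(D)]."""
    Ds = np.apply_along_axis(smooth, 0, D)
    return np.hstack([Ds, D - Ds])

def reconstruct_signal(D_aug, y):
    """Code patch y on the augmented dictionary, then rebuild it from the
    signal (first) half of the atoms only, discarding footprint energy."""
    x, *_ = np.linalg.lstsq(D_aug, y, rcond=None)
    n_sig = D_aug.shape[1] // 2
    return D_aug[:, :n_sig] @ x[:n_sig]
```

Because the filtered atoms and their residuals are explicitly separated, no statistical classification of atoms is needed: the coding step itself assigns footprint energy to the residual atoms.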


2020 ◽  
pp. 1-16
Author(s):  
Meriem Khelifa ◽  
Dalila Boughaci ◽  
Esma Aïmeur

The Traveling Tournament Problem (TTP) is concerned with finding a double round-robin tournament schedule that minimizes the total distance traveled by the teams. It has attracted significant interest recently, since a favorable TTP schedule can result in significant savings for the league. This paper proposes an original evolutionary algorithm for TTP. We first propose a quick and effective constructive algorithm to build a Double Round Robin Tournament (DRRT) schedule with low travel cost. We then describe an enhanced genetic algorithm with a new crossover operator to improve the travel cost of the generated schedules. A new heuristic for efficiently ordering the scheduled rounds is also proposed, which leads to a significant enhancement in the quality of the schedules. The overall method is evaluated on publicly available standard benchmarks and compared with other techniques for TTP and UTTP (Unconstrained Traveling Tournament Problem). Computational experiments show that the proposed approach builds solutions comparable to those of other state-of-the-art approaches, or better than the current best solutions on UTTP. Further, our method provides new valuable solutions to some unsolved UTTP instances and outperforms prior methods on all US National League (NL) instances.
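A DRRT schedule of the kind the constructive phase produces can be sketched with the classic circle method: build n-1 single-round-robin rounds, then append the mirrored rounds with home and away swapped. This is a generic construction in the spirit of the paper's first phase; the authors' actual constructive algorithm additionally targets low travel cost:

```python
def double_round_robin(n):
    """Double round-robin schedule for an even number n of teams via the
    circle method. Returns 2*(n-1) rounds of (home, away) pairs in which
    every ordered pair of distinct teams appears exactly once."""
    assert n % 2 == 0, "circle method needs an even number of teams"
    teams = list(range(n))
    first_half = []
    for r in range(n - 1):
        pairs = []
        for i in range(n // 2):
            a, b = teams[i], teams[n - 1 - i]
            # alternate home/away by round parity for a rough balance
            pairs.append((a, b) if r % 2 == 0 else (b, a))
        first_half.append(pairs)
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]  # rotate around the pivot
    mirrored = [[(away, home) for (home, away) in rnd] for rnd in first_half]
    return first_half + mirrored
```

The genetic algorithm can then operate on such schedules, with crossover and round-reordering improving the travel cost.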


Cybersecurity ◽  
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Shushan Arakelyan ◽  
Sima Arasteh ◽  
Christophe Hauser ◽  
Erik Kline ◽  
Aram Galstyan

Tackling binary program analysis problems has traditionally implied manually defining rules and heuristics, a tedious and time-consuming task for human analysts. In order to improve automation and scalability, we propose an alternative direction based on distributed representations of binary programs with applicability to a number of downstream tasks. We introduce Bin2vec, a new approach leveraging Graph Convolutional Networks (GCN) along with computational program graphs in order to learn a high-dimensional representation of binary executable programs. We demonstrate the versatility of this approach by using our representations to solve two semantically different binary analysis tasks: functional algorithm classification and vulnerability discovery. We compare the proposed approach to our own strong baseline as well as to published results, and demonstrate improvement over state-of-the-art methods for both tasks. We evaluated Bin2vec on 49,191 binaries for the functional algorithm classification task, and on 30 different CWE-IDs including at least 100 CVE entries each for the vulnerability discovery task. We set a new state-of-the-art result by reducing the classification error by 40% compared to the source-code-based inst2vec approach, while working on binary code. For almost every vulnerability class in our dataset, our prediction accuracy is over 80% (and over 90% in multiple classes).
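The GCN building block that such approaches apply to program graphs is the standard Kipf-Welling propagation rule, H' = ReLU(D̂^(-1/2)(A+I)D̂^(-1/2) H W). The numpy sketch below shows that generic rule only; it is not the Bin2vec model itself:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One Graph Convolutional Network layer (Kipf & Welling rule).
    A : (n, n) adjacency matrix of the (program) graph
    H : (n, d) node feature matrix
    W : (d, d_out) trainable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric norm.
    return np.maximum(A_norm @ H @ W, 0.0)      # propagate, then ReLU
```

Stacking a few such layers over a computational program graph and pooling the node states yields a fixed-size program embedding usable for downstream classification.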


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 230 ◽  
Author(s):  
Slavisa Tomic ◽  
Marko Beko

This work addresses the problem of target localization in adverse non-line-of-sight (NLOS) environments by using received signal strength (RSS) and time of arrival (TOA) measurements. It is inspired by a recently published work in which the authors discuss a critical distance below and above which employing combined RSS-TOA measurements is inferior to employing RSS-only and TOA-only measurements, respectively. Here, we revisit state-of-the-art estimators for the considered target localization problem and study their performance against their counterparts that employ each individual measurement exclusively. It is shown that the hybrid approach is not the best one by default. Thus, we propose a simple heuristic approach to choose the best measurement for each link, and we show that it can enhance the performance of an estimator. The new approach implicitly relies on the concept of the critical distance, but does not assume certain link parameters as given. Our simulations corroborate findings available in the literature for line-of-sight (LOS) to a certain extent, but they indicate that more work is required for NLOS environments. Moreover, they show that the heuristic approach works well, matching or even improving the performance of the best fixed choice in all considered scenarios.
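The per-link selection idea can be sketched as follows: derive a range from each measurement type, then keep the RSS-based range for short links and the TOA-based range for long links. The path-loss parameters and the explicit critical distance `d_crit` below are illustrative assumptions; the paper's heuristic avoids assuming such link parameters as given:

```python
import numpy as np

def rss_to_range(P, P0=-40.0, gamma=3.0, d0=1.0):
    """Invert the log-distance path-loss model
    P = P0 - 10*gamma*log10(d/d0) to get a range from an RSS reading.
    P0, gamma, d0 are illustrative model parameters."""
    return d0 * 10 ** ((P0 - np.asarray(P, float)) / (10 * gamma))

def choose_measurement(d_rss, d_toa, d_crit):
    """Per-link heuristic: RSS-derived range for short links, TOA-derived
    range for long links, branching on a rough probe distance."""
    d_rss = np.asarray(d_rss, float)
    d_toa = np.asarray(d_toa, float)
    probe = (d_rss + d_toa) / 2        # crude range used only to branch
    return np.where(probe < d_crit, d_rss, d_toa)
```

The selected per-link ranges would then feed any standard localization estimator in place of the fixed RSS-only, TOA-only, or hybrid choice.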


2018 ◽  
Vol 15 (1) ◽  
pp. 58-62 ◽  
Author(s):  
Weilin Huang ◽  
Runqiu Wang ◽  
Xiangbo Gong ◽  
Yangkang Chen

Geophysics ◽  
2012 ◽  
Vol 77 (3) ◽  
pp. A9-A12 ◽  
Author(s):  
Kees Wapenaar ◽  
Joost van der Neut ◽  
Jan Thorbecke

Deblending of simultaneous-source data is usually considered to be an underdetermined inverse problem, which can be solved by an iterative procedure that assumes additional constraints like sparsity and coherency. By exploiting the fact that seismic data are spatially band-limited, deblending of densely sampled sources can instead be carried out as a direct inversion process without imposing these constraints. We applied the method to numerically modeled data, and it suppressed the crosstalk well when the blended data consisted of responses to adjacent, densely sampled sources.
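The contrast with iterative, constraint-based deblending can be sketched in one line of linear algebra: when spatial band-limitation makes the blending operator effectively well-conditioned, the blended record d = G s can be inverted directly by least squares, with no sparsity or coherency constraint. The explicit matrix `G` below is an illustrative stand-in for the paper's blending operator:

```python
import numpy as np

def direct_deblend(d, G):
    """Direct (non-iterative) least-squares deblending of d = G @ s.
    Valid when G has full column rank, as it effectively does for
    densely sampled, spatially band-limited sources."""
    s_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
    return s_hat
```

In the noiseless, well-posed case the recovery is exact in a single step, which is the essence of the direct-inversion argument.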


2013 ◽  
Vol 56 (7) ◽  
pp. 1200-1208 ◽  
Author(s):  
Yue Li ◽  
BaoJun Yang ◽  
HongBo Lin ◽  
HaiTao Ma ◽  
PengFei Nie

Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V79-V86 ◽  
Author(s):  
Hakan Karsli ◽  
Derman Dondurur ◽  
Günay Çifçi

Time-dependent amplitude and phase information of stacked seismic data are processed independently using complex trace analysis in order to facilitate interpretation by improving resolution and decreasing random noise. We represent seismic traces using their envelopes and instantaneous phases obtained by the Hilbert transform. The proposed method reduces the amplitudes of the low-frequency components of the envelope while preserving the phase information. Several tests are performed to investigate the behavior of the proposed method for resolution improvement and noise suppression. Applications to both 1D and 2D synthetic data show that the method is capable of reducing the amplitudes and temporal widths of the side lobes of the input wavelets; hence, the spectral bandwidth of the input seismic data is enhanced, resulting in an improvement in the signal-to-noise ratio. The bright-spot anomalies observed on the stacked sections become clearer because the output seismic traces have a simplified appearance, allowing easier data interpretation. We recommend applying this simple signal-processing step for signal enhancement prior to interpretation, especially for single-channel and low-fold seismic data.
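The envelope/phase decomposition and envelope filtering described above can be sketched with `scipy.signal.hilbert`. The attenuation factor and the `cutoff_bins` parameter below are illustrative choices, not the paper's tuned values:

```python
import numpy as np
from scipy.signal import hilbert

def sharpen_trace(trace, cutoff_bins=3):
    """Split a trace into Hilbert envelope and instantaneous phase,
    attenuate the lowest-frequency envelope components, and rebuild
    the trace with the phase left untouched."""
    analytic = hilbert(trace)
    env = np.abs(analytic)                 # instantaneous amplitude (envelope)
    phase = np.angle(analytic)             # instantaneous phase
    E = np.fft.rfft(env)
    E[:cutoff_bins] *= 0.5                 # damp low-frequency envelope energy
    env_filtered = np.fft.irfft(E, n=len(trace))
    return env_filtered * np.cos(phase)    # phase-preserving reconstruction
```

Because only the envelope spectrum is modified, event timing (carried by the phase) is unchanged, which is what keeps the sharpened section interpretable.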

