Retrieving the leaked signals from noise using a fast dictionary learning method

Geophysics ◽  
2021 ◽  
pp. 1-86
Author(s):  
Wei Chen ◽  
Omar M. Saad ◽  
Yapo Abolé Serge Innocent Oboué ◽  
Liuqing Yang ◽  
Yangkang Chen

Most traditional seismic denoising algorithms damage the useful signals; the damage is visible in the removed-noise profiles and is known as signal leakage. The local signal-and-noise orthogonalization method is effective for retrieving the leaked signals from the removed noise. In that method, however, the trade-off between retrieving leaked signals and rejecting noise is governed by a smoothing-radius parameter, which is inconvenient to tune because it is a global parameter while seismic data are highly variable locally. To retrieve the leaked signals adaptively, we propose a new dictionary learning method. Because dictionary learning is patch based, it adapts to the local features of seismic data. We train a dictionary of atoms that represent the features of the useful signals from the initially denoised data. Based on the learned features, we retrieve the weak leaked signals from the noise via a sparse coding step. Considering the large computational cost of training a dictionary from high-dimensional seismic data, we leverage a fast dictionary updating algorithm, in which the singular value decomposition (SVD) is replaced by an algebraic mean to update each dictionary atom. We test the performance of the proposed method on several synthetic and field data examples and compare it with the state-of-the-art local orthogonalization method.
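The sparse coding step described in this abstract can be illustrated with orthogonal matching pursuit (OMP): given a learned dictionary, a weak signal hiding in a noise patch is retrieved by coding the patch with very few atoms. This is a minimal NumPy sketch, not the authors' implementation; the dictionary size, patch size, and sparsity level are invented for the toy example.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: sparse-code y over dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of y on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# toy example: a "noise" patch that secretly contains one dictionary atom
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
leaked = 0.8 * D[:, 3]                  # weak leaked signal hiding in the noise
noise_patch = leaked + 0.05 * rng.standard_normal(16)
x = omp(D, noise_patch, n_nonzero=1)
retrieved = D @ x                       # estimate of the leaked signal
```

Restricting the code to one or two atoms is what rejects the incoherent noise: random noise correlates weakly with every learned atom, while the leaked event matches one atom well.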

Geophysics ◽  
2021 ◽  
pp. 1-97
Author(s):  
Dawei Liu ◽  
Lei Gao ◽  
Xiaokai Wang ◽  
Wenchao Chen

Acquisition footprint causes serious interference with seismic attribute analysis, which severely hinders accurate reservoir characterization. Therefore, acquisition footprint suppression has become increasingly important in industry and academia. In this work, we assume that the time slice of 3D post-stack migration seismic data mainly comprises two components, i.e., useful signals and acquisition footprint. Useful signals describe the spatial distributions of geological structures with local piecewise smooth morphological features. In contrast, the acquisition footprint behaves as periodic artifacts in the time-slice domain; in marine seismic acquisition, its local morphological features appear as stripes. Because useful signals and the acquisition footprint have different morphological features, we can train an adaptive dictionary and divide its atoms into two sub-dictionaries to reconstruct the two components. We propose an adaptive dictionary learning method for acquisition footprint suppression in the time slice of 3D post-stack migration seismic data. To obtain an adaptive dictionary, we use the K-singular value decomposition (K-SVD) algorithm to sparsely represent the patches in the time slice. Each atom of the trained dictionary represents certain local morphological features of the time slice. According to the difference in the variation level between the horizontal and vertical directions, the atoms of the trained dictionary are divided into two types: one significantly represents the local morphological features of the acquisition footprint, whereas the other represents the local morphological features of useful signals. The two components are then reconstructed via morphological component analysis, each from its own type of atoms.
Synthetic and field data examples indicate that the proposed method can effectively suppress the acquisition footprint with fidelity to the original data.
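The atom-splitting rule described above — classify an atom by how its variation differs between the horizontal and vertical directions — can be sketched as follows. This is a simple illustration of the idea, not the authors' criterion; the ratio threshold and patch shapes are assumptions for the toy example.

```python
import numpy as np

def directional_variation(atom_patch):
    """Total absolute first difference of a 2-D atom along each axis."""
    v_var = np.abs(np.diff(atom_patch, axis=0)).sum()  # vertical changes
    h_var = np.abs(np.diff(atom_patch, axis=1)).sum()  # horizontal changes
    return v_var, h_var

def is_footprint_atom(atom_patch, ratio=5.0):
    """Flag atoms whose variation is strongly one-directional (stripe-like)."""
    v, h = directional_variation(atom_patch)
    eps = 1e-12                    # guard against division by zero
    return max(v, h) / (min(v, h) + eps) > ratio

# footprint-like atom: vertical stripes, so all variation is horizontal
stripes = np.tile([1.0, -1.0, 1.0, -1.0], (4, 1))
# signal-like atom: smooth ramp with comparable variation in both directions
ramp = np.add.outer(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
```

Atoms flagged by such a test would go to the footprint sub-dictionary, and the two components would each be reconstructed from their own atoms, as in morphological component analysis.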


Author(s):  
Yuki Takashima ◽  
Toru Nakashika ◽  
Tetsuya Takiguchi ◽  
Yasuo Ariki

Voice conversion (VC) is a technique for converting the speaker-specific information in source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-based VC has been widely researched because it achieves a more natural-sounding voice than conventional Gaussian mixture model-based VC. In conventional NMF-VC, models are trained on parallel data, so the speech data require elaborate pre-processing to construct the parallel exemplars. NMF-VC also tends to be a large model, because the dictionary matrix holds several parallel exemplars, leading to a high computational cost. In this study, an innovative parallel dictionary-learning method using non-negative Tucker decomposition (NTD) is proposed. The method decomposes an input observation into a set of mode matrices and one core tensor. The proposed NTD-based dictionary-learning method estimates the dictionary matrix for NMF-VC without using parallel data. Experimental results show that the proposed method outperforms other methods in both parallel and non-parallel settings.
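The factorization underlying NMF-VC can be sketched with the classic Lee–Seung multiplicative updates, which factor a nonnegative matrix V into nonnegative factors W (dictionary) and H (activations). This toy NumPy version only illustrates V ≈ WH; it is not the proposed NTD method, and the matrix sizes and iteration count are assumptions.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-10                     # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update dictionary
    return W, H

# toy spectrogram-like matrix with exact nonnegative rank-2 structure
rng = np.random.default_rng(1)
V = rng.random((10, 2)) @ rng.random((2, 30))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps both factors nonnegative at every step, which is why NMF dictionaries of speech spectra tend to be interpretable as additive parts.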


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. V133-V142 ◽  
Author(s):  
Hojjat Haghshenas Lari ◽  
Mostafa Naghizadeh ◽  
Mauricio D. Sacchi ◽  
Ali Gholami

We have developed an adaptive singular spectrum analysis (ASSA) method for seismic data denoising and interpolation purposes. Our algorithm iteratively updates the singular-value decomposition (SVD) of current spatial patches using the most recently added spatial sample. The method reduces the computational cost of classic singular spectrum analysis (SSA) by requiring QR decompositions on smaller matrices rather than the factorization of the entire Hankel matrix of the data. A comparison between results obtained by the ASSA and SSA methods, in which the SVD applies to all of the traces at once, proves that the ASSA method is a valid way to cope with spatially varying dips. In addition, a comparison of the ASSA method with the windowed SSA method indicates gains in efficiency and accuracy. Synthetic and real data examples illustrate the effectiveness of our method.
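For reference, the classic SSA baseline that the ASSA method accelerates proceeds in three steps: embed the trace in a Hankel matrix, truncate its SVD to the assumed number of dips, and average the anti-diagonals back into a signal. This is a minimal single-trace sketch of plain SSA, not the authors' adaptive QR-based algorithm; the window length and rank are toy assumptions.

```python
import numpy as np

def ssa_denoise(trace, rank, L=None):
    """Classic SSA: Hankel embedding, SVD truncation, anti-diagonal averaging."""
    N = len(trace)
    L = L or N // 2
    K = N - L + 1
    H = np.array([trace[i:i + K] for i in range(L)])   # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]   # rank-reduced Hankel
    # Hankelization: average each anti-diagonal (entries with equal i + j)
    out = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        out[i:i + K] += Hr[i]
        counts[i:i + K] += 1
    return out / counts

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 5 * t)        # one sinusoid: Hankel rank 2
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = ssa_denoise(noisy, rank=2)
```

The full SVD of the Hankel matrix is the expensive step here; ASSA avoids recomputing it from scratch by updating the decomposition with QR steps as each new spatial sample arrives.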


2020 ◽  
Vol 222 (3) ◽  
pp. 1717-1727 ◽  
Author(s):  
Yangkang Chen

SUMMARY The K-SVD algorithm has been successfully utilized for adaptively learning a sparse dictionary in 2-D seismic denoising. Because of the high computational cost of the many singular value decompositions (SVDs) in the K-SVD algorithm, it is not practical, especially for 3-D or 5-D problems. In this paper, I extend the dictionary-learning-based denoising approach from 2-D to 3-D. To address the computational efficiency problem of K-SVD, I propose a fast dictionary learning approach based on the sequential generalized K-means (SGK) algorithm for denoising multidimensional seismic data. The SGK algorithm updates each dictionary atom by taking an arithmetic average of several training signals instead of calculating an SVD, as in the K-SVD algorithm. I summarize the sparse dictionary learning algorithm using K-SVD and introduce the SGK algorithm together with its detailed mathematical implications. 3-D synthetic, 2-D field, and 3-D field data examples demonstrate the performance of both the K-SVD and SGK algorithms. The SGK algorithm significantly increases computational efficiency while only slightly degrading the denoising performance.
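The SGK-style atom update described above can be sketched directly: each atom is replaced by the normalized arithmetic mean of the training patches whose sparse codes use that atom, in place of K-SVD's rank-1 SVD update. A minimal NumPy illustration, with the dictionary and codes invented for the toy example:

```python
import numpy as np

def sgk_update(D, X, patches):
    """SGK dictionary update: each atom becomes the normalized mean of the
    training patches that use it (no SVD, unlike K-SVD)."""
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]          # patches whose code uses atom k
        if users.size == 0:
            continue                         # unused atom: leave unchanged
        atom = patches[:, users].mean(axis=1)
        norm = np.linalg.norm(atom)
        if norm > 0:
            D[:, k] = atom / norm
    return D

rng = np.random.default_rng(3)
D = rng.standard_normal((8, 4))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
# sparse codes: patches 0 and 1 use atom 2; patch 2 uses atom 0
X = np.zeros((4, 3))
X[2, 0] = X[2, 1] = 1.0
X[0, 2] = 1.0
patches = D @ X                              # noise-free training patches
D_new = sgk_update(D.copy(), X, patches)
```

Averaging costs O(n) per atom versus the O(n^2) to O(n^3) of an SVD on the residual matrix, which is where the efficiency gain for 3-D and 5-D data comes from.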


Geophysics ◽  
2021 ◽  
pp. 1-83
Author(s):  
Mohammed Outhmane Faouzi Zizi ◽  
Pierre Turquais

For a marine seismic survey, the recorded and processed data can reach several terabytes. Storing seismic data sets is costly, and transferring them between storage devices can be challenging. Dictionary learning has been shown to provide representations with a high level of sparsity: the shape of a redundant event is stored once, and each occurrence of that event is represented by a single sparse coefficient. Therefore, an efficient dictionary-learning-based compression workflow, designed specifically for seismic data, is developed here. The method differs from conventional compression methods in three respects: 1) the transform domain is not predefined but data-driven; 2) the redundancy in seismic data is fully exploited by learning small dictionaries from local windows of the seismic shot gathers; and 3) two modes are proposed, depending on the geophysical application. On a test seismic data set, the proposed workflow achieves superior compression ratios over a wide range of signal-to-residual ratios compared with standard compression methods used for seismic data, such as the zfp software or algorithms from the Seismic Unix package. Using a more realistic marine seismic acquisition data set, we evaluate how well the proposed workflow preserves the seismic signal for different applications. For applications such as near-real-time transmission and long-term data storage, we observe insignificant signal leakage on a 2D line stack when the dictionary learning method reaches a compression ratio of 24.85. For other applications such as visual QC of shot gathers, the method preserves the visual aspect of the data even at a compression ratio of 95.
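The storage argument behind such a workflow can be made concrete with rough accounting: dense patches cost one float per sample, while the compressed form costs the dictionary once plus an (index, value) pair per retained coefficient. This is only a back-of-the-envelope sketch under invented sizes, not the paper's actual bit-level encoding, which would also entail quantization and entropy coding.

```python
def compression_ratio(n_patches, patch_size, dict_atoms, nonzeros_per_patch):
    """Rough storage accounting (in stored numbers, ignoring quantization):
    dense patches vs. dictionary + sparse (index, value) pairs."""
    dense = n_patches * patch_size                       # raw samples
    compressed = (dict_atoms * patch_size                # dictionary, stored once
                  + n_patches * nonzeros_per_patch * 2)  # index + value per coeff
    return dense / compressed

# toy numbers: 100k windows of 64 samples, a small 128-atom dictionary,
# and one sparse coefficient per redundant event occurrence
r = compression_ratio(n_patches=100_000, patch_size=64,
                      dict_atoms=128, nonzeros_per_patch=1)
```

The dictionary overhead is amortized over all patches, so the ratio is driven almost entirely by how few coefficients per patch the learned dictionary allows.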


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Zhongrong Shi

Discriminative dictionary learning, which plays a critical role in sparse-representation-based classification, has led to state-of-the-art classification results. Among existing discriminative dictionary learning methods, two approaches have been studied: the shared dictionary, which associates each atom with all classes, and the class-specific dictionary, which associates each atom with a single class. The shared dictionary is compact but lacks discriminative information; the class-specific dictionary carries discriminative information but contains redundant atoms across the different class dictionaries. To combine the advantages of both, we propose a new weighted block dictionary learning method. The method introduces a proto dictionary and class dictionaries. The proto dictionary is a base dictionary without label information; each class dictionary is a class-specific, weighted version of the proto dictionary. The weight value indicates the contribution of each proto-dictionary block when constructing a class dictionary. These weights can be computed conveniently because they are designed to adapt to the sparse coefficients. Different class dictionaries have different weight vectors but share the same proto dictionary, which yields higher discriminative power and lower redundancy. Experimental results demonstrate that the proposed algorithm achieves better classification results than several dictionary learning algorithms.
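The proto-dictionary/class-dictionary relationship described above can be sketched as block-wise scaling: every class shares the same proto blocks, and a per-class weight vector scales each block's contribution. A minimal NumPy illustration; the block sizes and weight values are invented, and the paper's method additionally learns the weights from the sparse coefficients.

```python
import numpy as np

def class_dictionary(proto_blocks, weights):
    """Build a class-specific dictionary by scaling each proto-dictionary
    block with that class's weight and concatenating the blocks."""
    return np.hstack([w * B for w, B in zip(weights, proto_blocks)])

rng = np.random.default_rng(4)
# proto dictionary: 3 blocks of 4 atoms each, atom dimension 8
proto = [rng.standard_normal((8, 4)) for _ in range(3)]
w_classA = np.array([1.0, 0.2, 0.0])   # class A draws mostly on block 0
w_classB = np.array([0.0, 0.3, 1.0])   # class B draws mostly on block 2
D_A = class_dictionary(proto, w_classA)
D_B = class_dictionary(proto, w_classB)
```

Because only the weight vectors differ per class, storage stays close to that of a single shared dictionary while the effective dictionaries remain class-discriminative.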


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. V215-V231 ◽  
Author(s):  
Lina Liu ◽  
Jianwei Ma ◽  
Gerlind Plonka

We have developed a new regularization method for the sparse representation and denoising of seismic data. Our approach is based on two components: a sparse data representation in a learned dictionary and a similarity measure for image patches that is evaluated using the Laplacian matrix of a graph. Dictionary-learning (DL) methods aim to find a data-dependent basis or a frame that admits a sparse data representation while capturing the characteristics of the given data. We have developed two algorithms for DL based on clustering and singular-value decomposition, called the first and second dictionary constructions. Besides using an adapted dictionary, we also consider a similarity measure for the local geometric structures of the seismic data using the Laplacian matrix of a graph. Our method achieves better denoising performance than existing denoising methods, in terms of peak signal-to-noise ratio values and visual estimation of weak-event preservation. Comparisons of experimental results on field data using traditional f-x deconvolution (FX-Decon) and curvelet thresholding methods are also provided.
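The graph-based ingredient above can be sketched as follows: patches become graph nodes, a Gaussian kernel on patch distances gives edge weights W, and the Laplacian is L = Deg - W. A minimal NumPy illustration of the construction only, not the authors' full regularization; the kernel width and patch count are toy assumptions.

```python
import numpy as np

def patch_laplacian(patches, sigma=1.0):
    """Graph Laplacian L = Deg - W from Gaussian similarities between patches
    (each column of `patches` is one vectorized patch)."""
    n = patches.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d2 = np.sum((patches[:, i] - patches[:, j]) ** 2)
            W[i, j] = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)              # no self-loops
    Deg = np.diag(W.sum(axis=1))          # degree matrix
    return Deg - W

rng = np.random.default_rng(5)
P = rng.standard_normal((16, 6))          # 6 toy patches of 16 samples
L = patch_laplacian(P)
```

The Laplacian is symmetric positive semi-definite with zero row sums, so a penalty of the form x.T @ L @ x is small exactly when similar patches receive similar values, which is what couples the patch geometry to the denoising.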


Genes ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 1117
Author(s):  
Waleed Alam ◽  
Hilal Tayara ◽  
Kil To Chong

DNA is subject to epigenetic modification by the molecule N4-methylcytosine (4mC). N4-methylcytosine plays a crucial role in DNA repair and replication, protects host DNA from degradation, and regulates DNA expression. Current experimental techniques can identify 4mC sites, but they are expensive and laborious. Computational tools that predict 4mC sites would therefore be very useful for understanding the biological mechanism of this vital type of DNA modification. Conventional machine-learning-based methods rely on hand-crafted features, whereas the proposed method saves time and computational cost by using learned features instead. In this study, we propose i4mC-Deep, an intelligent predictor based on a convolutional neural network (CNN) that predicts 4mC modification sites in DNA samples. The CNN automatically extracts important features from input samples during training. Nucleotide chemical properties and nucleotide density, which together represent a DNA sequence, act as the CNN input data. The proposed method outperforms several state-of-the-art predictors: when i4mC-Deep was used to analyze G. subterraneus DNA, accuracy improved by 3.9% and MCC increased by 10.5% compared with a conventional predictor.
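The input encoding named above can be sketched with a commonly used convention: three nucleotide chemical property (NCP) bits per base (ring structure, hydrogen-bond strength, functional group) plus the cumulative density of that base up to the current position. This is an illustration of the general NCP + density scheme, under a commonly cited property assignment; the exact encoding used by i4mC-Deep may differ in detail.

```python
import numpy as np

# common NCP convention: (ring structure, hydrogen bond, functional group)
NCP = {"A": (1, 1, 1), "C": (0, 0, 1), "G": (1, 0, 0), "T": (0, 1, 0)}

def encode(seq):
    """Encode a DNA sequence as 4 channels per position: 3 chemical-property
    bits plus the cumulative nucleotide density up to that position."""
    feats = []
    for i, nt in enumerate(seq):
        density = seq[: i + 1].count(nt) / (i + 1)
        feats.append([*NCP[nt], density])
    return np.array(feats)        # shape (len(seq), 4), ready for a 1-D CNN

X = encode("ACGTA")
```

Each row of X is one sequence position, so the matrix can feed a 1-D convolution over the sequence axis with four input channels.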

