Dictionary Learning With Convolutional Structure for Seismic Data Denoising and Interpolation

Geophysics ◽  
2021 ◽  
pp. 1-102
Author(s):  
Murad Almadani ◽  
Umair bin Waheed ◽  
Mudassir Masood ◽  
Yangkang Chen

Seismic data inevitably suffer from random noise and missing traces in field acquisition, which limits the utilization of the data for subsequent imaging or inversion applications. Recently, dictionary learning has achieved remarkable success in seismic data denoising and interpolation. Variants of the patch-based learning technique, such as the K-SVD algorithm, have been shown to improve denoising and interpolation performance compared to analytic transform-based methods. However, patch-based learning algorithms work on overlapping patches of data and do not take the full data into account during reconstruction. By contrast, the Convolutional Sparse Coding (CSC) model treats signals globally and has therefore shown superior performance over patch-based methods in several image processing applications. Consequently, we test the CSC model for seismic data denoising and interpolation. In particular, we use the Local Block Coordinate Descent (LoBCoD) algorithm to reconstruct missing traces and recover clean seismic data from noisy input. The denoising and interpolation performance of the LoBCoD algorithm is compared with that of the K-SVD and Orthogonal Matching Pursuit (OMP) algorithms using synthetic and field data examples. We use three quality measures to assess denoising accuracy: the peak signal-to-noise ratio (PSNR), the relative L2-norm of the error (RLNE), and the structural similarity index (SSIM). We find that LoBCoD outperforms K-SVD and OMP in all test cases, improving PSNR and SSIM and reducing RLNE. These observations suggest the enormous potential of the CSC model in seismic data denoising and interpolation applications.
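The first two quality measures are simple enough to sketch directly; below is a minimal numpy version of PSNR and RLNE (SSIM involves local statistics and is omitted here). The function names and the choice of the data's own peak as the PSNR reference are illustrative assumptions, not the paper's code.

```python
import numpy as np

def psnr(clean, denoised):
    """Peak signal-to-noise ratio in dB, taking the clean data's peak
    amplitude as the reference (one common convention for seismic data)."""
    mse = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(np.max(np.abs(clean)) ** 2 / mse)

def rlne(clean, denoised):
    """Relative L2-norm of the error: ||clean - denoised||_2 / ||clean||_2."""
    return np.linalg.norm(clean - denoised) / np.linalg.norm(clean)
```

Higher PSNR and lower RLNE both indicate a reconstruction closer to the clean data.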

Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. V385-V396 ◽  
Author(s):  
Mohammad Amir Nazari Siahsar ◽  
Saman Gholtashi ◽  
Amin Roshandel Kahoo ◽  
Wei Chen ◽  
Yangkang Chen

Representation of a signal in a sparse way is a useful and popular methodology in signal-processing applications. Among the widely used sparse transforms, dictionary learning (DL) algorithms attract the most attention due to their ability to produce data-driven, nonanalytic (nonfixed) atoms. Various DL methods are well-established in seismic data processing due to the inherent low-rank property of this kind of data. We have introduced a novel data-driven 3D DL algorithm that extends the 2D nonnegative DL scheme via a multitasking strategy for random noise attenuation of seismic data. In addition to providing parts-based learning, we exploit a nonnegativity constraint to induce sparsity in the data transformation and to reduce the solution space and, consequently, the computational cost. In 3D data, we consider each slice as a task. Because 3D seismic data exhibit high correlation between slices, a multitask learning approach is used to enhance the performance of the method by sharing a common sparse coefficient matrix across all related tasks of the data. Essentially, in the learning process each task helps the other tasks learn better, and thus a sparser representation is obtained. Furthermore, unlike other DL methods that use a limited random number of patches to learn a dictionary, the proposed algorithm can take the entire data volume into account with a reasonable time cost and thus obtain efficient and effective denoising performance. We have applied the method to synthetic and real 3D data, where it demonstrated superior performance in random noise attenuation compared with state-of-the-art denoising methods such as MSSA, BM4D, and FXY predictive filtering, especially in amplitude and continuity preservation in low signal-to-noise ratio cases and fault zones.
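The shared-coefficient idea can be sketched with classic multiplicative (Lee-Seung) nonnegative factorization updates, where every slice (task) keeps its own dictionary but all tasks update one common coefficient matrix. This is a minimal illustrative stand-in for the paper's algorithm; shapes, names, and iteration counts are assumptions.

```python
import numpy as np

def multitask_nmf(slices, n_atoms, n_iter=300, eps=1e-9):
    """slices: list of nonnegative (m, n) arrays, one per task.
    Returns per-task dictionaries D[t] of shape (m, n_atoms) and one
    shared nonnegative coefficient matrix X of shape (n_atoms, n)."""
    rng = np.random.default_rng(0)
    m, n = slices[0].shape
    D = [rng.random((m, n_atoms)) for _ in slices]
    X = rng.random((n_atoms, n))
    for _ in range(n_iter):
        # The shared X is updated with factors summed over all tasks,
        # so every slice contributes to the common sparse code.
        num = sum(D[t].T @ slices[t] for t in range(len(slices)))
        den = sum(D[t].T @ D[t] @ X for t in range(len(slices)))
        X *= num / (den + eps)
        # Each task's dictionary is updated independently.
        for t in range(len(slices)):
            D[t] *= (slices[t] @ X.T) / (D[t] @ X @ X.T + eps)
    return D, X
```

Multiplicative updates keep every entry nonnegative by construction, which is what induces the parts-based behaviour.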


Author(s):  
Mohamed Attia ◽  
Mohammed Hossny ◽  
Hailing Zhou ◽  
Anosha Yazdabadi ◽  
Hamed Asadi ◽  
...  

Automated skin lesion analysis is a trending field that has gained attention among dermatologists and healthcare practitioners. Skin lesion restoration is an essential preprocessing step toward accurate automated analysis and diagnosis. Digital hair removal is a non-invasive method of image enhancement that resolves the hair-occlusion artefact in previously captured images. Several methods have been proposed for hair delineation and removal. However, manual annotation is one of the main challenges that hinder the validation of these methods on a large number of images or their comparison on benchmark datasets. In the presented work, we propose a realistic hair simulator based on context-aware image synthesis, using image-to-image translation via conditional generative adversarial networks, to generate different hair occlusions in skin images along with the ground-truth mask of hair locations. In addition, we explored three loss functions, the L1 norm, the L2 norm, and the structural similarity index (SSIM), to maximise the synthesis quality. To quantitatively evaluate the realism of the image synthesis, t-SNE feature mapping and the Bland-Altman test are employed as objective metrics. Experimental results show the superior performance of our proposed method compared to previous hair-synthesis methods, producing plausible colours while preserving the integrity of the lesion texture.
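The three synthesis losses compared above are easy to state; below is a minimal numpy sketch of the L1 and L2 losses and a simplified single-window SSIM (the full SSIM averages over local windows). The constants follow the standard SSIM formula for data in [0, 1]; function names are illustrative.

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error between two images."""
    return np.mean(np.abs(a - b))

def l2_loss(a, b):
    """Mean squared error between two images."""
    return np.mean((a - b) ** 2)

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over the whole image, for data scaled to [0, 1]."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

An SSIM-based training loss is typically `1 - ssim`, so that perfect structural agreement gives zero loss.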


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1269
Author(s):  
Jiabin Luo ◽  
Wentai Lei ◽  
Feifei Hou ◽  
Chenghao Wang ◽  
Qiang Ren ◽  
...  

Ground-penetrating radar (GPR), as a non-invasive instrument, has been widely used in civil engineering. GPR B-scan images may contain random noise due to the influence of the environment and equipment hardware, which complicates the interpretation of the useful information. Many methods have been proposed to eliminate or suppress this random noise. However, the existing methods have an unsatisfactory denoising effect when the image is severely contaminated. This paper proposes a multi-scale convolutional autoencoder (MCAE) to denoise GPR data. To address the insufficiency of the training dataset, we also designed a data augmentation strategy based on a Wasserstein generative adversarial network (WGAN) to enlarge the training dataset of the MCAE. Experiments conducted on simulated, generated, and field datasets demonstrated that the proposed scheme has promising denoising performance. In terms of three metrics, the peak signal-to-noise ratio (PSNR), the time cost, and the structural similarity index (SSIM), the proposed scheme achieves better random noise suppression than state-of-the-art competing methods (e.g., CAE, BM3D, WNNM).
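As a toy illustration of the multi-scale input idea, the sketch below builds a pyramid of 2x average-pooled copies of a B-scan, so that filters could see noise statistics at several scales; the actual MCAE would learn convolutional filters over such scales. The pooling size and pyramid depth are illustrative assumptions.

```python
import numpy as np

def avg_pool2(img):
    """Downsample a 2D array by 2x2 average pooling (trailing odd row/col dropped)."""
    h, w = img.shape
    trimmed = img[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_pyramid(img, levels=3):
    """Return [img, img/2, img/4, ...] -- one array per scale level."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(avg_pool2(pyramid[-1]))
    return pyramid
```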


Author(s):  
Liqiong Zhang ◽  
Min Li ◽  
Xiaohua Qiu

To overcome the “staircase effect” while quickly and effectively preserving structural information such as image edges and textures, we propose a compensating total variation image denoising model combining the L1 and L2 norms. A new compensating regularization term is designed that can perform both anisotropic and isotropic diffusion during denoising, thus making up for the insufficient diffusion of the total variation model. The algorithm first uses the local standard deviation to distinguish neighborhood types. Then, anisotropic diffusion based on the L1 norm protects edges in strong-edge regions, while in smooth regions anisotropic and isotropic diffusion act simultaneously, so that weak textures are protected while the “staircase effect” is effectively overcome. Simulation experiments show that this method effectively improves the peak signal-to-noise ratio while achieving a higher structural similarity index and a shorter running time.
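One explicit-diffusion step of such a compensating scheme might look like the sketch below: a TV (anisotropic) term and a Laplacian (isotropic) term, blended by a local-standard-deviation edge mask, plus a fidelity term pulling toward the noisy input. The step size, threshold, and blend weights are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def local_std(u, k=1):
    """Standard deviation over a (2k+1)x(2k+1) neighborhood, reflecting edges."""
    p = np.pad(u, k, mode="reflect")
    h, w = u.shape
    shifts = [p[k + dy:k + dy + h, k + dx:k + dx + w]
              for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
    return np.std(np.stack(shifts), axis=0)

def compensating_tv_step(u, f, dt=0.1, lam=0.5, eps=1e-6):
    """One gradient-descent step: TV diffusion on strong edges, a TV/Laplacian
    blend in smooth regions, plus an L2 fidelity term toward the noisy image f."""
    gy, gx = np.gradient(u)
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    # Anisotropic (TV / L1-norm) diffusion: div(grad u / |grad u|).
    tv = np.gradient(gy / mag, axis=0) + np.gradient(gx / mag, axis=1)
    # Isotropic (L2-norm) diffusion: Laplacian of u.
    lap = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    s = local_std(u)
    edge = (s > s.mean()).astype(float)    # 1 on strong edges, 0 in smooth areas
    diffusion = edge * tv + (1.0 - edge) * (0.5 * tv + 0.5 * lap)
    return u + dt * (diffusion - lam * (u - f))
```

Iterating this step smooths noise while the pure-TV branch limits edge blurring.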


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. V137-V148 ◽  
Author(s):  
Pierre Turquais ◽  
Endrias G. Asgedom ◽  
Walter Söllner

We have addressed the seismic data denoising problem, in which the noise is random and has an unknown spatiotemporally varying variance. In seismic data processing, random noise is often attenuated using transform-based methods. The success of these methods in denoising depends on the ability of the transform to efficiently describe the signal features in the data. Fixed transforms (e.g., wavelets, curvelets) do not adapt to the data and might fail to efficiently describe complex morphologies in the seismic data. Alternatively, dictionary learning methods adapt to the local morphology of the data and provide state-of-the-art denoising results. However, conventional denoising by dictionary learning requires a priori information on the noise variance, and it encounters difficulties when denoising seismic data in which the noise variance varies in space or time. We have developed a coherence-constrained dictionary learning (CDL) method for denoising that does not require any a priori information related to the signal or noise. To denoise a given window of a seismic section using CDL, overlapping small 2D patches are extracted and a dictionary of patch-sized signals is trained to learn the elementary features embedded in the seismic signal. For each patch, a sparse optimization problem is solved using the learned dictionary, and a sparse approximation of the patch is computed to attenuate the random noise. Unlike conventional dictionary learning, the sparsity of the approximation is constrained based on coherence, so that no a priori noise variance or signal sparsity information is needed, while the result remains optimal for filtering out Gaussian random noise. The denoising performance of the CDL method is validated using synthetic and field data examples and compared with K-SVD and FX-Decon denoising. We found that CDL gives better denoising results than K-SVD and FX-Decon when the noise variance varies in space or time.
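The coherence-based stopping idea can be sketched on top of plain OMP: stop adding atoms once no remaining atom is sufficiently coherent with the residual, treating what is left as incoherent random noise. The threshold form below is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

def coherence_omp(D, y, mu_thresh, max_atoms=None, tol=1e-10):
    """Sparse-code a patch y with dictionary D (unit-norm columns).
    Atoms are added greedily; the loop stops when the best atom's normalized
    correlation with the residual drops below mu_thresh (coherence stop)."""
    m, K = D.shape
    max_atoms = max_atoms or m
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(K)
    while len(support) < max_atoms and np.linalg.norm(residual) > tol:
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        # Coherence stop: what remains is treated as random noise.
        if np.abs(corr[j]) < mu_thresh * np.linalg.norm(residual):
            break
        support.append(j)
        # Re-fit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(K)
        x[support] = coef
        residual = y - D @ x
    return x
```

No noise-variance estimate appears anywhere: the stop depends only on how well the dictionary can still explain the residual.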


2021 ◽  
Vol 11 (11) ◽  
pp. 4803
Author(s):  
Shiming Chen ◽  
Shaoping Xu ◽  
Xiaoguo Chen ◽  
Fen Li

Image denoising, a classic ill-posed problem, aims to recover a latent image from a noisy measurement. Over the past few decades, a considerable number of denoising methods have been studied extensively. Among these methods, supervised deep convolutional networks have garnered increasing attention, and their superior performance is attributed to their capability to learn realistic image priors from a large amount of paired noisy and clean images. However, if the image to be denoised is significantly different from the training images, it could lead to inferior results, and the networks may even produce hallucinations by using inappropriate image priors to handle an unseen noisy image. Recently, deep image prior (DIP) was proposed, and it overcame this drawback to some extent. The structure of the DIP generator network is capable of capturing the low-level statistics of a natural image using an unsupervised method with no training images other than the image itself. Compared with a supervised denoising model, the unsupervised DIP is more flexible when processing image content that must be denoised. Nevertheless, the denoising performance of DIP is usually inferior to the current supervised learning-based methods using deep convolutional networks, and it is susceptible to the over-fitting problem. To solve these problems, we propose a novel deep generative network with multiple target images and an adaptive termination condition. Specifically, we utilized mainstream denoising methods to generate two clear target images to be used with the original noisy image, enabling better guidance during the convergence process and improving the convergence speed. Moreover, we adopted the noise level estimation (NLE) technique to set a more reasonable adaptive termination condition, which can effectively solve the problem of over-fitting. 
Extensive experiments demonstrated that, according to the denoising results, the proposed approach significantly outperforms the original DIP method in tests on different databases. Specifically, the average peak signal-to-noise ratio (PSNR) performance of our proposed method on four databases at different noise levels is increased by 1.90 to 4.86 dB compared to the original DIP method. Moreover, our method achieves superior performance against state-of-the-art methods in terms of popular metrics, which include the structural similarity index (SSIM) and feature similarity index measurement (FSIM). Thus, the proposed method lays a good foundation for subsequent image processing tasks, such as target detection and super-resolution.
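A noise level estimate like the one behind the adaptive termination condition can be obtained blindly from the image itself; the sketch below uses Immerkaer's fast Laplacian-based method as one common choice (the paper does not specify which NLE technique it uses, so this is an illustrative stand-in).

```python
import numpy as np

def estimate_noise_sigma(img):
    """Blind estimate of additive Gaussian noise std-dev in a 2D image
    (Immerkaer's method: a Laplacian-difference mask cancels smooth signal,
    leaving a response dominated by noise)."""
    h, w = img.shape
    k = np.array([[1.0, -2.0, 1.0],
                  [-2.0, 4.0, -2.0],
                  [1.0, -2.0, 1.0]])
    acc = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            acc += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    # E|acc| = 6*sigma*sqrt(2/pi) for pure Gaussian noise, hence the constant.
    return np.sqrt(np.pi / 2.0) * np.sum(np.abs(acc)) / (6.0 * (h - 2) * (w - 2))
```

A DIP-style loop could then stop once the noise estimated in its residual image approaches this value, instead of running a fixed iteration budget.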


Geophysics ◽  
2021 ◽  
pp. 1-83
Author(s):  
Mohammed Outhmane Faouzi Zizi ◽  
Pierre Turquais

For a marine seismic survey, the recorded and processed data size can reach several terabytes. Storing seismic data sets is costly and transferring them between storage devices can be challenging. Dictionary learning has been shown to provide representations with a high level of sparsity: it stores the shape of redundant events once and represents each occurrence of these events with a single sparse coefficient. Therefore, an efficient dictionary-learning-based compression workflow, specifically designed for seismic data, is developed here. This compression method differs from conventional compression methods in three respects: 1) the transform domain is not predefined but data-driven; 2) the redundancy in seismic data is fully exploited by learning small-sized dictionaries from local windows of the seismic shot gathers; 3) two modes are proposed depending on the geophysical application. Based on a test seismic data set, we demonstrate superior performance of the proposed workflow in terms of compression ratio for a wide range of signal-to-residual ratios, compared to standard seismic data compression methods, such as the zfp software or algorithms from the Seismic Unix package. Using a more realistic data set from a marine seismic acquisition, we evaluate the capability of the proposed workflow to preserve the seismic signal for different applications. For applications such as near-real-time transmission and long-term data storage, we observe insignificant signal leakage on a 2D line stack when the dictionary learning method reaches a compression ratio of 24.85. For other applications, such as visual QC of shot gathers, our method preserves the visual aspect of the data even when a compression ratio of 95 is reached.
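The storage accounting behind such compression ratios can be sketched in a few lines: the small dictionary is stored once, plus one (index, value) pair per nonzero sparse coefficient. The bit widths below are illustrative assumptions, not the paper's encoding.

```python
import numpy as np

def compression_ratio(data_shape, dict_shape, n_nonzeros,
                      data_bits=32, coef_bits=16, index_bits=16):
    """Ratio of raw storage to dictionary-plus-sparse-code storage.
    data_shape: shape of the raw seismic data; dict_shape: shape of the
    learned dictionary; n_nonzeros: total nonzero sparse coefficients."""
    raw = np.prod(data_shape) * data_bits
    compressed = (np.prod(dict_shape) * data_bits          # dictionary, stored once
                  + n_nonzeros * (coef_bits + index_bits)) # (value, index) pairs
    return raw / compressed
```

The sparser the code (fewer nonzeros per patch), the higher the ratio, which is why highly redundant shot gathers compress well.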


Geophysics ◽  
2021 ◽  
pp. 1-43
Author(s):  
Chao Zhang ◽  
Mirko van der Baan

Neural networks hold substantial promise to automate various processing and interpretation tasks. Yet their performance is often sub-optimal compared with standard but more closely guided approaches. Lack of performance is often attributed to poor generalization, in particular if fewer training examples are provided than free parameters exist in the machine learning algorithm. In this case the training data are typically memorized instead of the algorithm learning the underlying general trends. Network generalization is improved if the provided samples are representative, in that they describe all features of interest well. We argue that a more subtle condition preventing poor performance is that the provided examples must also be complete; the examples must span the full solution space. Ensuring completeness during training is challenging unless the target application is well understood. We illustrate that one possible solution is to make the problem more general if this greatly increases the number of available training data. For instance, if seismic images are treated as a subclass of natural images, then a deep-learning-based denoiser for seismic data can be trained using exclusively natural images, which are widely available. The resulting denoising algorithm has never seen any seismic data during the training stage, yet it displays performance comparable to standard and advanced random-noise reduction methods. We exclude any seismic data during training to demonstrate that natural images are both complete and representative for this specific task. Furthermore, we apply a novel approach, known as double noise injection, to increase the amount of training data by providing both noisy input and noisy output images during the training process. Given the importance of network generalization, we hope that the insights gained in this study may help improve the performance of a range of machine learning applications in geophysics.
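Double noise injection itself is simple to sketch: from each clean natural image, draw two independent noise realizations and use the pair (noisy input, noisy target) as a training example, multiplying the number of usable pairs. The noise model and levels below are illustrative assumptions.

```python
import numpy as np

def double_noise_pairs(clean_images, sigmas, rng):
    """Build (noisy input, noisy target) training pairs: for each clean image
    and each noise level, inject two independent Gaussian noise realizations."""
    pairs = []
    for img in clean_images:
        for s in sigmas:
            noisy_in = img + s * rng.standard_normal(img.shape)
            noisy_out = img + s * rng.standard_normal(img.shape)  # independent draw
            pairs.append((noisy_in, noisy_out))
    return pairs
```

Because the two realizations are independent and zero-mean, a network trained to map one to the other is driven toward predicting the clean image, while each clean image yields many distinct pairs.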


Geophysics ◽  
2021 ◽  
pp. 1-52
Author(s):  
Nanying Lan ◽  
Zhang Fanchang ◽  
Chuanhui Li

Due to the limitations imposed by acquisition cost, obstacles, and inaccessible regions, the originally acquired seismic data are often sparsely or irregularly sampled in space, which seriously affects the ability of seismic data to image underground structures. Fortunately, compressed sensing provides theoretical support for interpolating and recovering irregularly or under-sampled data. Under the framework of compressed sensing, we propose a robust interpolation method for high-dimensional seismic data based on elastic half-norm regularization and tensor dictionary learning. Inspired by the Elastic-Net, we first develop the elastic half norm regularization as a sparsity constraint and establish a robust high-dimensional interpolation model with this technique. Then, considering the multi-dimensional structure and spatial correlation of seismic data, we introduce a tensor dictionary learning algorithm to train a high-dimensional adaptive tensor dictionary from the original data. This tensor dictionary is used as the sparse transform for seismic data interpolation because it can capture more detailed seismic features and thereby achieve an optimal and fast sparse representation of high-dimensional seismic data. Finally, we solve the robust interpolation model by an efficient iterative thresholding algorithm in the transform space and perform the space conversion by a modified imputation algorithm to recover the wavefields at the unobserved spatial positions. We conduct high-dimensional interpolation experiments on model and field seismic data on a regular data grid. Experimental results demonstrate that this method has superior performance and higher computational efficiency in both noise-free and noisy seismic data interpolation, compared to extensively utilized dictionary-learning-based interpolation methods.
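The iterate-threshold-impute structure of the final step can be sketched with stand-ins: a 2D FFT in place of the learned tensor dictionary, and a hard threshold with a decreasing schedule in place of the elastic half-norm proximal step (both are assumptions for illustration). The imputation step re-inserts the observed traces on every pass.

```python
import numpy as np

def interpolate_pocs(observed, mask, n_iter=100):
    """Interpolate missing traces by iterative thresholding with imputation.
    observed: data with zeros at missing traces; mask: 1 where observed."""
    recon = observed.copy()
    tmax = np.abs(np.fft.fft2(observed)).max()
    for i in range(n_iter):
        tau = 0.5 * tmax * 0.97 ** i            # decreasing threshold schedule
        coeffs = np.fft.fft2(recon)
        coeffs[np.abs(coeffs) < tau] = 0.0      # thresholding in transform space
        recon = np.real(np.fft.ifft2(coeffs))
        # Imputation: keep observed samples, fill only the unobserved positions.
        recon = mask * observed + (1.0 - mask) * recon
    return recon
```

With a learned dictionary in place of the FFT, the same loop recovers events that are sparse in the learned domain rather than only in frequency.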

