When AWGN-Based Denoiser Meets Real Noises

2020, Vol. 34 (07), pp. 13074-13081
Author(s): Yuqian Zhou, Jianbo Jiao, Haibin Huang, Yang Wang, Jue Wang, ...

Discriminative learning based image denoisers have achieved promising performance on synthetic noises such as Additive White Gaussian Noise (AWGN). The synthetic noises adopted in most previous work are pixel-independent, but real noises are mostly spatially/channel-correlated and spatially/channel-variant. This domain gap yields unsatisfactory performance on images with real noises if the model is trained only with AWGN. In this paper, we propose a novel approach to boost the performance of a real-image denoiser that is trained only with synthetic pixel-independent noise data dominated by AWGN. First, we train a deep model that consists of a noise estimator and a denoiser with mixed AWGN and Random Value Impulse Noise (RVIN). We then investigate a Pixel-shuffle Down-sampling (PD) strategy to adapt the trained model to real noises. Extensive experiments demonstrate the effectiveness and generalization of the proposed approach. Notably, our method achieves state-of-the-art performance on real sRGB images in the DND benchmark among models trained with synthetic noises. Code is available at https://github.com/yzhouas/PD-Denoising-pytorch.
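
The core of the adaptation step is the Pixel-shuffle Down-sampling (PD) strategy: spatially correlated real noise is broken up by sampling the image on interleaved grids, each sub-image is denoised with the AWGN-trained model, and the results are re-interleaved. Below is a minimal sketch of that idea, assuming a generic denoise(image) callable; it is illustrative only and not the authors' released implementation (see the linked repository for that).

```python
import numpy as np

def pd_denoise(noisy, denoise, stride=2):
    """Minimal Pixel-shuffle Down-sampling (PD) sketch.

    Sampling the image on `stride`-spaced grids makes spatially
    correlated real noise look more pixel-independent (AWGN-like);
    each sub-image is denoised separately and the results are
    re-interleaved. `denoise` is any callable mapping an image to
    its denoised estimate (e.g. an AWGN-trained CNN).
    """
    h, w = noisy.shape[:2]
    h, w = h - h % stride, w - w % stride        # crop to a multiple of stride
    noisy = noisy[:h, :w]
    out = np.empty_like(noisy)
    for i in range(stride):
        for j in range(stride):
            sub = noisy[i::stride, j::stride]    # pixel-shuffle down-sampling
            out[i::stride, j::stride] = denoise(sub)
    return out

# Usage: denoised = pd_denoise(noisy_img, awgn_trained_denoiser, stride=2)
```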

2020, Vol. 12 (16), pp. 2636
Author(s): Emanuele Dalsasso, Xiangli Yang, Loïc Denis, Florence Tupin, Wen Yang

Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Many different schemes have been proposed for the restoration of intensity SAR images. Among the different possible approaches, methods based on convolutional neural networks (CNNs) have recently been shown to reach state-of-the-art performance for SAR image restoration. CNN training requires good training data: many pairs of speckle-free/speckle-corrupted images. This is an issue in SAR applications, given the inherent scarcity of speckle-free images. To handle this problem, this paper analyzes different strategies one can adopt, depending on the speckle removal task one wishes to perform and the availability of multitemporal stacks of SAR data. The first strategy applies a CNN model, trained to remove additive white Gaussian noise from natural images, within a recently proposed SAR speckle removal framework: MuLoG (MUlti-channel LOgarithm with Gaussian denoising). No training on SAR images is performed; the network is readily applied to speckle reduction tasks. The second strategy considers a novel approach to construct a reliable dataset of speckle-free SAR images necessary to train a CNN model. Finally, a hybrid approach is also analyzed: the CNN used to remove additive white Gaussian noise is trained on speckle-free SAR images. The proposed methods are compared to other state-of-the-art speckle removal filters to evaluate the quality of denoising and to discuss the pros and cons of the different strategies. Along with the paper, we make available the weights of the trained network to allow its use by other researchers.
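
The first strategy rests on the fact that, after a logarithmic transform, multiplicative speckle behaves approximately like additive noise, so an AWGN denoiser can be reused. Below is a toy homomorphic sketch of that idea, assuming a generic gaussian_denoiser(image) callable; MuLoG itself wraps the same principle in an iterative (ADMM-based) plug-and-play scheme, so this is only an illustration, not the paper's method.

```python
import numpy as np
from scipy.special import digamma

def homomorphic_despeckle(intensity, gaussian_denoiser, looks=1):
    """Toy homomorphic despeckling: log-transform, Gaussian denoising,
    bias correction, exponentiation."""
    log_img = np.log(np.maximum(intensity, 1e-10))
    denoised_log = gaussian_denoiser(log_img)
    # Fully developed speckle has a non-zero mean in the log domain:
    # E[log I] = log R + digamma(L) - log(L); compensate for it.
    bias = digamma(looks) - np.log(looks)
    return np.exp(denoised_log - bias)
```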


2015, Vol. 2015, pp. 1-9
Author(s): Xiaoni Gao, Mei Yu, Jianrong Wang, Jianguo Wei

We propose an ℓ0-sparsity-based approach to remove additive white Gaussian noise from a given image. To achieve this goal, we combine a local prior and a global prior to recover the noise-free values of pixels. The local prior depends on the neighborhood relationships within a search window to help maintain edges and smoothness. The global prior is generated from a hierarchical ℓ0 sparse representation to help eliminate redundant information and preserve global consistency. In addition, to make the correlations between pixels more meaningful, we adopt Principal Component Analysis to measure the similarities, which both reduces the computational complexity and improves accuracy. Experiments on the benchmark image set show that the proposed approach achieves performance superior to state-of-the-art approaches, in both accuracy and perceptual quality, in removing zero-mean additive white Gaussian noise.
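
The PCA step in this pipeline serves to compare patches in a low-dimensional subspace rather than in raw pixel space. A minimal sketch of that similarity measurement is given below; the number of components and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pca_patch_distances(patches, n_components=8):
    """Project flattened image patches onto their leading principal
    components and compute pairwise Euclidean distances there.
    Keeping only the dominant components preserves patch structure,
    suppresses noisy directions, and lowers the cost of comparison.
    patches: (N, d) array, one flattened patch per row.
    """
    centered = patches - patches.mean(axis=0)
    # Principal directions from the SVD of the centered patch matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:n_components].T           # (N, n_components)
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))          # (N, N) distance matrix
```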


2018, Vol. 10 (10), pp. 1600
Author(s): Chang Li, Yu Liu, Juan Cheng, Rencheng Song, Hu Peng, ...

The generalized bilinear model (GBM) has received extensive attention in the field of hyperspectral nonlinear unmixing. Traditional GBM unmixing methods usually assume that the data are degraded only by additive white Gaussian noise (AWGN) and that the intensity of the AWGN is the same in every band of the hyperspectral image (HSI). However, real HSIs are usually degraded by a mixture of various kinds of noise, including Gaussian noise, impulse noise, dead pixels or lines, stripes, and so on. Moreover, the intensity of the AWGN usually differs from band to band. To address these issues, we propose a novel nonlinear unmixing method based on the bandwise generalized bilinear model (NU-BGBM), which can adapt to the presence of complex mixed noise in real HSIs. The alternating direction method of multipliers (ADMM) is adopted to solve the proposed NU-BGBM. Finally, extensive experiments are conducted to demonstrate the effectiveness of the proposed NU-BGBM compared with some other state-of-the-art unmixing methods.
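
For reference, the standard GBM observation model, extended here with a bandwise Gaussian term and a sparse term in the spirit of the mixed-noise setting described above, can be written as follows (the notation is illustrative; the paper's exact formulation may differ):

```latex
\mathbf{y} = \mathbf{M}\mathbf{a}
  + \sum_{i=1}^{R-1}\sum_{j=i+1}^{R}
      \gamma_{ij}\, a_i a_j \,(\mathbf{m}_i \odot \mathbf{m}_j)
  + \mathbf{s} + \mathbf{n},
\qquad
\mathbf{n} \sim \mathcal{N}\!\left(\mathbf{0},
  \operatorname{diag}(\sigma_1^2,\dots,\sigma_B^2)\right)
```

where M = [m_1, ..., m_R] collects the endmembers, a the abundances, γ_ij the nonlinear interaction coefficients, s a sparse component accounting for impulse noise, stripes, and dead pixels or lines, and n Gaussian noise whose variance σ_b² differs from band to band.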


Author(s): Yuqian Zhou, Jianbo Jiao, Haibin Huang, Jue Wang, Thomas Huang

Discriminative-learning-based denoising models trained with Additive White Gaussian Noise (AWGN) perform well on synthesized noise. However, realistic noise can be spatially variant, signal-dependent, and a mixture of complicated noises. In this paper, we explore multiple strategies for applying an AWGN-based denoiser to realistic noise. Specifically, we trained a deep network that integrates a noise estimator and a denoiser using mixed Gaussian (AWGN) and Random Value Impulse Noise (RVIN). To adapt the model to realistic noises, we investigated multi-channel, multi-scale, and super-resolution approaches. Our preliminary results demonstrate the effectiveness of the newly proposed noise model and adaptation strategies.
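
The mixed noise used for training in this setting is straightforward to synthesize. A minimal sketch is given below, assuming images in [0, 1]; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def add_awgn_rvin(img, sigma=25 / 255.0, rvin_ratio=0.05, rng=None):
    """Corrupt a clean image with AWGN plus Random Value Impulse
    Noise (RVIN): Gaussian noise everywhere, plus a random fraction
    of pixels replaced by uniform random values.
    img: float array in [0, 1], shape (H, W) or (H, W, C)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img + rng.normal(0.0, sigma, img.shape)      # AWGN
    mask = rng.random(img.shape[:2]) < rvin_ratio        # impulse locations
    if img.ndim == 3:
        mask = mask[..., None]                           # broadcast over channels
    impulse = rng.random(img.shape)                      # random replacement values
    noisy = np.where(mask, impulse, noisy)
    return np.clip(noisy, 0.0, 1.0)
```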


2021, Vol. 2021, pp. 1-6
Author(s): Quan Yuan, Zhenyun Peng, Zhencheng Chen, Yanke Guo, Bin Yang, ...

Medical image information may be corrupted by noise during acquisition and transmission, which seriously hinders subsequent image processing and medical diagnosis. A typical mixed noise in medical images is composed of additive white Gaussian noise (AWGN) and impulse noise. In conventional denoising methods, impulse noise is removed first, followed by the elimination of white Gaussian noise (WGN). However, it is difficult to separate the two kinds of noise completely in practical applications. The existing weighted-coding algorithm based on sparse nonlocal regularization, which can remove AWGN and impulse noise simultaneously, is plagued by incomplete noise removal and serious loss of detail. A denoising algorithm based on sparse representation and a low-rank constraint can preserve image details better. Thus, a medical image denoising algorithm based on sparse nonlocal regularization weighted coding and a low-rank constraint is proposed. The denoising effects of the proposed method and the original algorithm are compared on computed tomography (CT) and magnetic resonance (MR) images. Under different σ and ρ values, the PSNR and FSIM values obtained on the CT and MR images are clearly superior to those of the traditional algorithms, suggesting that the algorithm proposed in this work denoises medical images better than traditional denoising algorithms.
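
PSNR is one of the two quality metrics reported above. For completeness, a minimal sketch of its computation is given below (FSIM involves phase-congruency and gradient features and is omitted); this is the generic definition, not code from the paper.

```python
import numpy as np

def psnr(reference, estimate, data_range=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between a reference image
    and a denoised estimate. Higher values indicate better fidelity."""
    mse = np.mean((reference.astype(np.float64) -
                   estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```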

