Noise2Grad: Extract Image Noise to Denoise

Author(s):  
Huangxing Lin ◽  
Yihong Zhuang ◽  
Yue Huang ◽  
Xinghao Ding ◽  
Xiaoqing Liu ◽  
...  

In many image denoising tasks, the difficulty of collecting noisy/clean image pairs limits the application of supervised CNNs. We consider the case in which paired data and noise statistics are not accessible, but unpaired noisy and clean images are easy to collect. To form the necessary supervision, our strategy is to extract the noise from the noisy image to synthesize new data. To reduce interference from the image background, we use a noise removal module to aid noise extraction. The noise removal module first roughly removes noise from the noisy image, which is equivalent to excluding much of the background information. A noise approximation module can therefore easily extract a new noise map from the removed noise to match the gradient of the noisy input. This noise map is added to a random clean image to synthesize a new data pair, which is then fed back to the noise removal module to correct the noise removal process. The two modules cooperate to extract the noise accurately. After convergence, the noise removal module can remove noise without damaging other background details, so we use it as our final denoising network. Experiments show that the denoising performance of the proposed method is competitive with that of supervised CNNs.
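
The data-synthesis step can be sketched in a few lines of NumPy. Here both learned modules are replaced by stand-ins (a toy box filter and random arrays), so every name below is illustrative rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the noise removal module: in the paper this
# is a CNN; here a crude 3x3 box filter plays its role.
def rough_denoise(img):
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

noisy = rng.normal(0.5, 0.1, (32, 32))   # stand-in noisy image
clean = rng.uniform(0.0, 1.0, (32, 32))  # unpaired clean image

# Step 1: roughly remove noise; the residual is mostly noise, with much
# of the background excluded.
residual = noisy - rough_denoise(noisy)

# Step 2: treat the residual as the extracted noise map and add it to a
# random clean image, forming a synthetic (noisy, clean) training pair.
synthetic_noisy = clean + residual
pair = (synthetic_noisy, clean)
```

In the actual method the noise approximation module refines this residual to match the gradient of the noisy input before it is reused as supervision.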

2021 ◽  
Vol 17 (4) ◽  
pp. 1-16
Author(s):  
Xiaowei Xu ◽  
Jiawei Zhang ◽  
Jinglan Liu ◽  
Yukun Ding ◽  
Tianchen Wang ◽  
...  

As one of the most commonly ordered imaging tests, the computed tomography (CT) scan comes with inevitable radiation exposure that increases the cancer risk to patients. However, CT image quality is directly related to radiation dose, so it is desirable to obtain high-quality CT images with as little dose as possible. CT image denoising tries to obtain high-dose-like high-quality CT images (domain Y) from low-dose low-quality CT images (domain X), which can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistency loss without the need for paired training data, since paired data is hard to collect due to patient interests and cardiac motion. However, out of concern for patient privacy and data security, protocols typically require clinics to perform medical image processing tasks, including CT image denoising, locally (i.e., edge denoising). Therefore, the network models need to achieve high performance under various computation resource constraints, including memory and compute. Our detailed analysis of CCADN raises a number of interesting questions that point to potential ways to further improve its performance using the same or even fewer computation resources. For example, if the noise is large, leading to a significant difference between domain X and domain Y, can we bridge X and Y with an intermediate domain Z such that both the denoising process between X and Z and that between Z and Y are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency?
Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency for edge denoising of CT images. The global cycle-consistency couples all generators together to model the whole denoising process, whereas the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms CCADN in terms of denoising quality with slightly less computation resource consumption.
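
A rough sketch of how local and global cycle-consistency terms combine, with the paper's CNN generators replaced by illustrative scalar maps (all names here are hypothetical, not MCCAN's code):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 8))  # stand-in low-dose image from domain X

# Toy "generators" between adjacent domains X <-> Z <-> Y.
g_xz = lambda a: 0.5 * a   # X -> intermediate domain Z
g_zy = lambda a: 0.5 * a   # Z -> Y (clean)
g_yz = lambda a: 2.0 * a   # Y -> Z
g_zx = lambda a: 2.0 * a   # Z -> X

l1 = lambda a, b: np.abs(a - b).mean()

# Local cycle-consistency: each pair of adjacent domains must
# reconstruct its own input.
z = g_xz(x)
local_loss = l1(g_zx(z), x) + l1(g_yz(g_zy(z)), z)

# Global cycle-consistency: the full chain X->Z->Y->Z->X must close,
# coupling all generators together.
global_loss = l1(g_zx(g_yz(g_zy(g_xz(x)))), x)

total = local_loss + global_loss
```

Because these toy maps are exact inverses, both terms vanish; with real generators the two losses supervise the adjacent-domain steps and the whole denoising chain, respectively.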


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Quan Yuan ◽  
Zhenyun Peng ◽  
Zhencheng Chen ◽  
Yanke Guo ◽  
Bin Yang ◽  
...  

Medical image information may be polluted by noise during generation and transmission, which seriously hinders subsequent image processing and medical diagnosis. Medical images often contain a typical mixed noise composed of additive white Gaussian noise (AWGN) and impulse noise. In conventional denoising methods, impulse noise is removed first, followed by the elimination of the white Gaussian noise. However, it is difficult to separate the two kinds of noise completely in practical applications. The existing weighted-encoding algorithm based on sparse nonlocal regularization, which can remove AWGN and impulse noise simultaneously, suffers from incomplete noise removal and serious loss of details. A denoising algorithm based on sparse representation and a low-rank constraint can preserve image details better. Thus, a medical image denoising algorithm based on sparse nonlocal regularization weighted encoding and a low-rank constraint is proposed. The denoising effects of the proposed method and the original algorithm on computed tomography (CT) and magnetic resonance (MR) images are compared. Under different σ and ρ values, the PSNR and FSIM values of the CT and MR images are clearly superior to those of traditional algorithms, suggesting that the proposed algorithm has better denoising effects on medical images than traditional denoising algorithms.
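
As a hedged illustration of the weighted-encoding idea, the sketch below simulates the AWGN-plus-impulse mixture and downweights impulse-like pixels in the data-fidelity term; the constant prior stands in for the sparse nonlocal reconstruction and is not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
clean = np.full((16, 16), 0.5)

# Mixed noise: AWGN plus salt-and-pepper impulse noise.
sigma, rho = 0.05, 0.1                      # AWGN std, impulse ratio
noisy = clean + rng.normal(0, sigma, clean.shape)
mask = rng.random(clean.shape) < rho
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())

# Weighted-encoding idea (simplified): pixels far from a robust estimate
# are likely impulses and get a small data-fidelity weight, so they
# barely constrain the reconstruction.
med = np.median(noisy)
w = np.exp(-np.abs(noisy - med) / (2 * sigma))  # illustrative weights

# A weighted blend of the observation and a smooth prior estimate stands
# in for the regularized sparse-coding solve of the actual algorithm.
prior = np.full_like(noisy, med)
denoised = w * noisy + (1 - w) * prior
```

Even this crude weighting handles both noise types in one pass, which is the motivation for treating the mixture jointly rather than in two separate stages.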


2012 ◽  
Vol 285 (7) ◽  
pp. 1777-1786 ◽  
Author(s):  
Ser-Hoon Lee ◽  
Hyung-Min Park ◽  
Sun-Young Hwang

Author(s):  
Karthikeyan P. ◽  
Vasuki S. ◽  
Karthik K.

Noise removal in medical images remains a challenge for researchers because noise removal introduces artifacts and blurring. Developing a medical image denoising algorithm is difficult because a tradeoff between noise reduction and the preservation of actual image features has to be made in a way that enhances and preserves the diagnostically relevant image content. A notable member of the emerging family of multiscale geometric transforms is the contourlet transform, which effectively captures image edges and contours, overcoming the limitations of existing wavelet- and curvelet-based denoising methods. However, due to downsampling and upsampling, the contourlet transform is shift-variant, whereas shift-invariance is desirable in image analysis applications such as edge detection, contour characterization, and image enhancement. In this chapter, denoising based on the nonsubsampled contourlet transform (a shift-invariant transform) is presented, which represents edges more effectively than the contourlet transform.
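
The shift-variance problem can be seen in one dimension: a decimated filter bank does not commute with a shift, while its undecimated counterpart does, which is the property the nonsubsampled transform retains. A minimal sketch (a single circular lowpass filter, not an actual contourlet implementation):

```python
import numpy as np

x = np.arange(16, dtype=float)      # toy 1-D signal
shift = lambda sig: np.roll(sig, 1)  # circular shift by one sample

def filt(sig):
    # Circular lowpass filtering with kernel [0.25, 0.5, 0.25].
    return 0.25 * np.roll(sig, 1) + 0.5 * sig + 0.25 * np.roll(sig, -1)

# Undecimated (nonsubsampled) filtering commutes with the shift ...
und_a = filt(shift(x))
und_b = shift(filt(x))

# ... but once we decimate by 2, shifting the input selects an entirely
# different set of surviving coefficients.
dec_a = filt(shift(x))[::2]
dec_b = shift(filt(x)[::2])
```

`und_a` and `und_b` are identical, while `dec_a` and `dec_b` differ: after decimation, a one-sample input shift lands on the discarded polyphase branch, which is exactly the shift-variance the nonsubsampled contourlet transform avoids by dropping the down/upsampling.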


2020 ◽  
Vol 6 (3) ◽  
pp. 319-331
Author(s):  
Xiaoce Wu ◽  
Bingyin Zhou ◽  
Qingyun Ren ◽  
Wei Guo

Multispectral image denoising is a basic problem whose results affect subsequent processes such as target detection and classification. Numerous approaches have been proposed, but many challenges remain, particularly in using prior knowledge of multispectral images, which is crucial for solving the ill-posed problem of noise removal. This paper considers both non-local self-similarity in space and global correlation in spectrum. We propose a novel low-rank Tucker decomposition model for removing the noise, in which sparse and graph Laplacian regularization terms are employed to encode this prior knowledge. It can jointly learn a sparse and low-rank representation while preserving the local geometrical structure between spectral bands, so as to better capture the correlation in the spatial and spectral directions simultaneously. We adopt the alternating direction method of multipliers to solve the resulting problem. Experiments demonstrate that the proposed method outperforms state-of-the-art cube-based and tensor-based methods, both quantitatively and qualitatively.
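
A minimal sketch of the low-rank Tucker idea via truncated HOSVD; the sparse and graph-Laplacian regularizers and the ADMM solver from the paper are omitted, and all names are illustrative:

```python
import numpy as np

def unfold(t, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd_denoise(t, ranks):
    """Truncated HOSVD: keep only the leading Tucker factors per mode,
    a simple stand-in for the paper's regularized low-rank model."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    # Project onto the retained subspaces to get the core tensor ...
    core = t
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    # ... then map back to obtain the low-rank reconstruction.
    rec = core
    for mode, u in enumerate(factors):
        rec = np.moveaxis(
            np.tensordot(u, np.moveaxis(rec, mode, 0), axes=1), 0, mode)
    return rec

# Demo: a rank-(2,2,2) "multispectral cube" (height x width x bands)
# corrupted by Gaussian noise.
rng = np.random.default_rng(5)
g = rng.normal(size=(2, 2, 2))
A = rng.normal(size=(10, 2))
B = rng.normal(size=(12, 2))
C = rng.normal(size=(8, 2))
low_rank = np.einsum('ir,js,kt,rst->ijk', A, B, C, g)
noisy = low_rank + 0.05 * rng.normal(size=low_rank.shape)
denoised = hosvd_denoise(noisy, (2, 2, 2))
```

Because the noise is spread across all Tucker components while the signal lives in a small subspace per mode, truncation alone already suppresses most of it; the paper's sparse and graph terms further shape which components survive.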


2020 ◽  
Vol 20 (03) ◽  
pp. 2050025
Author(s):  
S. Shajun Nisha ◽  
S. P. Raja ◽  
A. Kasthuri

Image denoising, a significant research area in the field of medical image processing, aims to recover the original image from its noise-corrupted version. The Pulse Coupled Neural Network (PCNN) performs well at denoising noisy images. Generally, image denoising techniques are applied directly to the pixels. The literature reports that denoising after a frequency-domain transformation performs better, since noise removal is applied to the coefficients. Motivated by this, in this paper, a new technique called the Static Thresholded Pulse Coupled Neural Network (ST-PCNN) is proposed by combining PCNN with traditional filtering or threshold shrinkage techniques in the Contourlet Transform domain. Four different existing PCNN architectures, namely the Neuromime Structure, Intersecting Cortical Model, Unit-Linking Model and Multichannel Model, are considered for comparative analysis. The filters used are Wiener, Median, Average and Gaussian, and the threshold shrinkage techniques are SureShrink, HeurShrink, NeighShrink and BayesShrink. For noise removal, a mixture of Speckle and Gaussian noise is considered for a CT skull image, a mixture of Rician and Gaussian noise for an MRI brain image, and a mixture of Speckle and Salt-and-Pepper noise for a Mammogram image. The performance metrics Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Image Quality Index (IQI), Universal Image Quality Index (UQI), Image Enhancement Filter (IEF), Structural Content (SC), Correlation Coefficient (CC), Weighted Signal-to-Noise Ratio (WSNR) and Visual Signal-to-Noise Ratio (VSNR) are used to evaluate the denoising performance.
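
The threshold shrinkage step used by methods such as BayesShrink can be sketched as follows; the threshold estimate here is an illustrative simplification applied to a hand-made coefficient vector, not the exact rule of any one method:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft shrinkage of transform coefficients: shrink magnitudes by t
    and zero anything below t (the rule behind BayesShrink-style denoising)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Toy subband coefficients and a BayesShrink-style threshold
# t = sigma^2 / sigma_x (sigma: noise std, sigma_x: signal std estimate).
coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
sigma = 0.4
sigma_x = max(np.sqrt(max(coeffs.var() - sigma**2, 0.0)), 1e-8)
t = sigma**2 / sigma_x
shrunk = soft_threshold(coeffs, t)
```

In ST-PCNN this shrinkage is applied to Contourlet coefficients selected by the PCNN firing maps rather than directly to pixel values.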


2020 ◽  
Vol 17 (4) ◽  
pp. 1770-1780
Author(s):  
B. Chinna Rao ◽  
M. Madhavilatha

This paper develops a new image denoising framework based on the Dual Tree Complex Wavelet Transform and edge-based patch grouping. The proposed patch grouping mechanism considers photometric features along with gradient features to cluster the image patches into groups with similar properties. Furthermore, the K-means algorithm is employed for patch grouping instead of a plain Euclidean distance metric. An adaptive thresholding mechanism is also developed to remove the noise with less information loss at edge features. Extensive simulation is carried out in MATLAB over different grayscale images at different noise levels and noise types, and the performance is measured with metrics such as PSNR and SSIM. The simulation results reveal the outstanding performance of the proposed approach, both in preserving edge features and in improving quality through efficient noise removal.
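
A toy version of gradient-feature patch grouping with K-means might look like this; the patch size, the single gradient-energy feature, and the two-cluster setup are illustrative choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic image: flat regions with one strong vertical edge.
img = np.zeros((16, 16))
img[:, 6:] = 1.0
img += rng.normal(0, 0.01, img.shape)

# Extract non-overlapping 4x4 patches and a gradient-energy feature each.
feats = []
for i in range(0, 13, 4):
    for j in range(0, 13, 4):
        p = img[i:i + 4, j:j + 4]
        gy, gx = np.gradient(p)
        feats.append(np.hypot(gx, gy).mean())
feats = np.array(feats)

# Two-cluster K-means on the feature: edge patches vs smooth patches.
centers = np.array([feats.min(), feats.max()])
for _ in range(10):
    labels = np.argmin(np.abs(feats[:, None] - centers[None, :]), axis=1)
    centers = np.array([feats[labels == k].mean() for k in (0, 1)])
```

Grouping patches this way lets the subsequent thresholding be gentler on edge clusters, which is the intent of the adaptive mechanism described above.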


2018 ◽  
Vol 2018 ◽  
pp. 1-19 ◽  
Author(s):  
Min Wang ◽  
Wei Yan ◽  
Shudao Zhou

Singular value (SV) difference is the difference in the singular values between a noisy image and the original image; it varies regularly with noise intensity. This paper proposes an image denoising method using the singular value difference in the wavelet domain. First, the SV difference model is generated for different noise variances in the three directions of the wavelet transform, and the noise variance of a new image is estimated from the diagonal part. Next, the single-level discrete 2-D wavelet transform is used to decompose each noisy image into its low-frequency and high-frequency parts. Then, singular value decomposition (SVD) is used to obtain the SVs of the three high-frequency parts. Finally, the three denoised high-frequency parts are reconstructed by SVD from the SV difference, and the final denoised image is obtained using the inverse wavelet transform. Experiments show the effectiveness of this method compared with relevant existing methods.
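
A hedged sketch of the core operation, shrinking the singular values of a high-frequency part; the paper's learned SV-difference model is replaced here by a simple noise-floor estimate from the tail singular values, so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)

# Low-rank "detail subband" plus noise, standing in for one
# high-frequency wavelet part of an image.
u = rng.normal(size=(32, 2))
v = rng.normal(size=(2, 32))
band = u @ v
noisy = band + 0.1 * rng.normal(size=band.shape)

# SVD of the noisy subband.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)

# Subtract an estimated SV difference (here read off the tail singular
# values, which are dominated by noise) and reconstruct.
sv_diff = s[4:].mean()
s_denoised = np.maximum(s - sv_diff, 0.0)
denoised = (U * s_denoised) @ Vt
```

The method in the paper predicts this per-noise-variance SV difference from a model fitted in advance rather than estimating it from the tail, but the reconstruction step is the same shape: shrink the SVs, multiply back, then invert the wavelet transform.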

