Multi-Spectral Fusion and Denoising of Color and Near-Infrared Images Using Multi-Scale Wavelet Analysis

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3610
Author(s):  
Haonan Su ◽  
Cheolkon Jung ◽  
Long Yu

We formulate multi-spectral fusion and denoising of the luminance channel as a maximum a posteriori estimation problem in the wavelet domain. To handle the discrepancy between RGB and near-infrared (NIR) data during fusion, we build a discrepancy model and introduce a wavelet scale map. The scale map adjusts the wavelet coefficients of the NIR data so that they follow the same distribution as the RGB data. We use priors on the wavelet scale map and its gradient as the contrast preservation term and the gradient denoising term, respectively. Specifically, we use local contrast and visibility measurements in the contrast preservation term to transfer the selected NIR data to the fusion result, and we use the gradient of the NIR wavelet coefficients as the weight of the gradient denoising term in the wavelet scale map. Based on the wavelet scale map, we fuse the RGB and NIR wavelet coefficients in the base and detail layers. To remove noise, we model the prior of the fused wavelet coefficients with NIR-guided Laplacian distributions. In the chrominance channels, noise is removed under the guidance of the fused luminance channel. Based on the luminance variation after fusion, we further enhance the color of the fused image. Experimental results demonstrate that the proposed method successfully fuses RGB and NIR images with noise reduction, detail preservation, and color enhancement.
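As a rough illustration of the wavelet-domain fusion step, the sketch below decomposes the RGB luminance and a registered NIR image with PyWavelets, rescales the NIR detail coefficients toward the RGB range with a simple ratio-based scale map, and keeps the coefficient with the larger magnitude. The single decomposition level, the clipped ratio scale map, and the magnitude-based selection are illustrative simplifications, not the MAP formulation described above.

```python
# Minimal sketch of wavelet-domain RGB/NIR luminance fusion with a scale map.
# The ratio scale map, its clipping range, and the magnitude-based selection
# are illustrative assumptions, not the authors' exact formulation.
import numpy as np
import pywt

def fuse_luminance(y_rgb, y_nir, wavelet="db2", eps=1e-6):
    """Fuse the luminance channel of an RGB image with a registered NIR image."""
    # Base (approximation) and detail coefficients for both inputs.
    cA_rgb, (cH_rgb, cV_rgb, cD_rgb) = pywt.dwt2(y_rgb, wavelet)
    cA_nir, (cH_nir, cV_nir, cD_nir) = pywt.dwt2(y_nir, wavelet)

    fused_details = []
    for c_rgb, c_nir in zip((cH_rgb, cV_rgb, cD_rgb), (cH_nir, cV_nir, cD_nir)):
        # Scale map: pull the NIR coefficients toward the RGB coefficient range.
        scale = np.abs(c_rgb) / (np.abs(c_nir) + eps)
        c_nir_adj = np.clip(scale, 0.0, 2.0) * c_nir
        # Keep whichever coefficient has the higher magnitude (local contrast).
        fused_details.append(np.where(np.abs(c_nir_adj) > np.abs(c_rgb),
                                      c_nir_adj, c_rgb))

    # Base layer: simple average of the approximation coefficients.
    cA_fused = 0.5 * (cA_rgb + cA_nir)
    return pywt.idwt2((cA_fused, tuple(fused_details)), wavelet)
```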

Author(s):  
Snehal S. Rajole ◽  
J. V. Shinde

In this paper, we propose a technique for eye-gaze detection that is adaptive to noisy images, since processing noisy sclera images captured at a distance and on the move has not been extensively investigated. Sclera blood vessels have recently been investigated as an efficient biometric trait, and capturing part of the eye with a normal camera using visible-wavelength images, rather than near-infrared images, has attracted research interest. The technique combines sclera template rotation alignment with a distance scaling method to minimize error rates when noisy eye images are captured at a distance and on the move. The proposed system is tested, and results are generated by extensive simulation in Java.
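The sketch below illustrates the general idea of rotation alignment combined with distance scaling on binarized sclera vessel templates: the probe is rescaled and rotated over a small search range and the minimum normalized Hamming distance is kept. The angle range, scale factors, and distance measure are assumptions for demonstration, not the exact matcher described above.

```python
# Illustrative sketch of sclera template matching with rotation alignment and
# distance scaling; all search parameters are hypothetical.
import numpy as np
from scipy.ndimage import rotate, zoom

def _fit(img, shape):
    """Crop/pad an array to a target shape (top-left anchored, for simplicity)."""
    out = np.zeros(shape)
    h, w = min(shape[0], img.shape[0]), min(shape[1], img.shape[1])
    out[:h, :w] = img[:h, :w]
    return out

def match_score(probe, gallery, angles=range(-10, 11, 2), scales=(0.9, 1.0, 1.1)):
    """Minimum normalized Hamming distance over small rotations and distance scales."""
    best = 1.0
    for s in scales:                                   # compensate capture distance
        scaled = _fit(zoom(probe.astype(float), s, order=0), gallery.shape)
        for a in angles:                               # rotation alignment
            aligned = rotate(scaled, a, reshape=False, order=0) > 0.5
            best = min(best, np.mean(aligned != gallery))
    return best
```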


1999 ◽  
Vol 117 (1) ◽  
pp. 439-445 ◽  
Author(s):  
P. Persi ◽  
A. R. Marenzi ◽  
A. A. Kaas ◽  
G. Olofsson ◽  
L. Nordh ◽  
...  

2021 ◽  
Vol 9 (2) ◽  
pp. 225
Author(s):  
Farong Gao ◽  
Kai Wang ◽  
Zhangyi Yang ◽  
Yejian Wang ◽  
Qizhong Zhang

In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve the low contrast and color distortion of underwater images. First, the red channel of the original image is compensated, and the compensated image is white-balanced. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the local-contrast-corrected image is fused with the sharpened image by a multi-scale fusion method. The results show that the proposed method can be applied to degraded underwater images from different environments without resorting to an image formation model, and that it effectively corrects color distortion, low contrast, and indistinct details in underwater images.
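The following condensed sketch mirrors the three stages described above using OpenCV and NumPy: red-channel compensation from the green channel followed by a gray-world white balance, generation of a local-contrast-corrected version and a sharpened version, and equal-weight Laplacian-pyramid fusion. The compensation strength, the simple local gamma used for contrast correction, and the equal pyramid weights are simplifying assumptions rather than the paper's exact settings.

```python
# Rough sketch of the red-compensation / white-balance / LCC + sharpening /
# multi-scale fusion pipeline; all constants are illustrative.
import cv2
import numpy as np

def gaussian_pyr(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyr(img, levels):
    g = gaussian_pyr(img, levels)
    lp = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1]) for i in range(levels - 1)]
    return lp + [g[-1]]

def enhance_underwater(bgr, levels=4):
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    # 1) Red-channel compensation from the green channel, then gray-world white balance.
    r = r + (g.mean() - r.mean()) * (1.0 - r) * g
    img = cv2.merge([b, g, r])
    img = np.clip(img * img.mean() / (img.mean(axis=(0, 1)) + 1e-6), 0, 1)

    # 2) Two versions: local-contrast-corrected (gamma on the local mean) and sharpened.
    blur = cv2.GaussianBlur(img, (0, 0), 10)
    lcc = np.clip(img * np.power(blur + 1e-6, -0.3), 0, 1)          # lift dark regions
    sharp = np.clip(img + 0.6 * (img - cv2.GaussianBlur(img, (0, 0), 2)), 0, 1)

    # 3) Equal-weight Laplacian-pyramid fusion of the two versions.
    fused_pyr = [0.5 * (a + c) for a, c in zip(laplacian_pyr(lcc, levels),
                                               laplacian_pyr(sharp, levels))]
    out = fused_pyr[-1]
    for lap in reversed(fused_pyr[:-1]):
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)
```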


2013 ◽  
Vol 281 ◽  
pp. 47-50
Author(s):  
Zhi Hong Chen

In this paper, we propose a new steganographic method based on wet paper codes and wavelet transformation. The method embeds secret messages in the wavelet coefficients of images and adapts to the local texture characteristics of each image. Receivers can extract the secret bits from carrier images using only a few matrix multiplications, without knowing the embedding rules used by the senders, which further improves steganographic security and minimizes the impact of embedding changes. The experimental results show that the proposed method has good robustness and visual concealment, and that it is a practical steganographic algorithm.
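On the receiver side, wet-paper-code extraction reduces to a matrix product over GF(2) between a shared binary matrix and the least significant bits of the carrier. The sketch below assumes the bits are carried in quantized diagonal wavelet coefficients; the quantization step, the wavelet, and the choice of the shared matrix D are illustrative assumptions.

```python
# Receiver-side sketch: recover the secret bits as m = D · x (mod 2), where x
# are the LSBs of quantized diagonal wavelet coefficients. The quantization
# step, the wavelet, and D are hypothetical choices for demonstration.
import numpy as np
import pywt

def extract_bits(stego_gray, D, q=8.0, wavelet="haar"):
    """Recover the embedded bits from the carrier image by matrix multiplication only."""
    _, (_, _, cD) = pywt.dwt2(stego_gray.astype(float), wavelet)
    x = np.round(cD / q).astype(np.int64).ravel() & 1    # LSBs of quantized coefficients
    return (D @ x[:D.shape[1]]) % 2                      # the embedded message bits

# Example shared matrix: k = 32 secret bits carried by n = 1024 coefficients.
rng = np.random.default_rng(0)
D = rng.integers(0, 2, size=(32, 1024))
```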


Author(s):  
Han Xu ◽  
Pengwei Liang ◽  
Wei Yu ◽  
Junjun Jiang ◽  
Jiayi Ma

In this paper, we propose a new end-to-end model, the dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike pixel-level methods and existing deep-learning-based methods, the fusion task is accomplished through an adversarial process between a generator and two discriminators, in addition to a specially designed content loss. The generator is trained to produce realistic fused images that fool both discriminators. The two discriminators are trained to estimate the JS divergence between the distributions of downsampled fused images and infrared images, and between the distributions of gradients of fused images and gradients of visible images, respectively. The fused images can therefore capture features that are not constrained by the content loss alone, so the prominence of thermal targets in the infrared image and the texture details in the visible image can be preserved, or even enhanced, simultaneously. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN is well suited to fusing images of different resolutions. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state of the art.
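The schematic below sketches the two-discriminator objective in PyTorch: one discriminator compares the downsampled fused image with the low-resolution infrared input, the other compares gradients of the fused image with gradients of the visible input, and the generator combines both adversarial terms with a content loss. The network definitions, the average-pooling downsampler, the Laplacian used as the gradient operator, and the L1 content loss are placeholders, not the released DDcGAN implementation; standard cross-entropy GAN losses stand in for the JS-divergence objective.

```python
# Schematic of a dual-discriminator fusion objective; G, D_ir, and D_vis are
# assumed single-channel networks supplied by the caller.
import torch
import torch.nn.functional as F

def grad(x):
    # Laplacian filter as a simple gradient/texture operator (1-channel input assumed).
    k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                     device=x.device).view(1, 1, 3, 3)
    return F.conv2d(x, k, padding=1)

def bce(logits, real):
    # Standard GAN cross-entropy loss (targets the JS divergence up to a constant).
    target = torch.ones_like(logits) if real else torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

def ddcgan_step(G, D_ir, D_vis, ir_low, vis, lam=0.5):
    fused = G(ir_low, vis)                      # full-resolution fused image
    fused_low = F.avg_pool2d(fused, 4)          # downsample to the IR resolution

    # Discriminator losses: real inputs vs. their generated counterparts.
    d_loss = (bce(D_ir(ir_low), True) + bce(D_ir(fused_low.detach()), False)
              + bce(D_vis(grad(vis)), True) + bce(D_vis(grad(fused).detach()), False))

    # Generator loss: fool both discriminators, plus a simple content loss.
    content = F.l1_loss(fused_low, ir_low) + lam * F.l1_loss(grad(fused), grad(vis))
    g_loss = bce(D_ir(fused_low), True) + bce(D_vis(grad(fused)), True) + content
    return d_loss, g_loss
```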

