A Generative Adversarial Network with Dual Discriminators for Infrared and Visible Image Fusion Based on Saliency Detection

2021, Vol. 2021, pp. 1-9
Author(s): Dazhi Zhang, Jilei Hou, Wei Wu, Tao Lu, Huabing Zhou

Infrared and visible image fusion needs to preserve both the salient targets of the infrared image and the texture details of the visible image. Therefore, an infrared and visible image fusion method based on saliency detection is proposed. First, the saliency map of the infrared image is obtained by saliency detection. Then, a dedicated loss function and network architecture are designed around the saliency map to improve the performance of the fusion algorithm. Specifically, the saliency map is normalized to [0, 1] and used as a weight map to constrain the loss function. At the same time, the saliency map is binarized to separate salient regions from nonsalient regions, yielding a generative adversarial network with dual discriminators: the two discriminators distinguish the salient and nonsalient regions, respectively, pushing the generator toward better fusion results. Experimental results show that the fusion results of our method surpass those of existing methods in both subjective and objective evaluations.
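As a minimal PyTorch-style sketch of the two uses of the saliency map described above, assuming a squared-error content loss and a binarization threshold `tau` (the function names and loss form are illustrative, not the paper's exact formulation):

```python
import torch

def saliency_weighted_loss(fused, ir, vis, sal):
    """Content loss weighted by the IR saliency map `sal` in [0, 1]."""
    # Salient pixels are pulled toward the infrared image...
    loss_ir = (sal * (fused - ir) ** 2).mean()
    # ...while the remaining pixels follow visible-image texture.
    loss_vis = ((1.0 - sal) * (fused - vis) ** 2).mean()
    return loss_ir + loss_vis

def split_regions(img, sal, tau=0.5):
    """Binarize the saliency map and split `img` into the salient and
    nonsalient regions that feed the two discriminators."""
    mask = (sal > tau).float()
    return img * mask, img * (1.0 - mask)
```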

Author(s): Zhuo Chen, Ming Fang, Xu Chai, Feiran Fu, Lihong Yuan

Infrared and visible image fusion is an effective way to overcome the limitations of single-sensor imaging. The goal is to produce fused images that are suitable for human observation and conducive to subsequent application and processing. To address incomplete feature extraction, loss of detail, and the small sample sizes of common datasets, which hinder training, an end-to-end network architecture for image fusion is proposed. A U-net is introduced into the fusion pipeline, and the final fusion result is obtained with a generative adversarial network. Its convolutional structure extracts the important feature information to the greatest extent, and because samples do not need to be cropped, fusion accuracy is preserved and training speed improves. The U-net features are then set against a discriminator fed with the infrared image, yielding the generator model. Experimental results show that the algorithm produces fused images with clear outlines, prominent texture, and distinct targets, with clear improvements in SD, SF, SSIM, AG, and other metrics.
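The architectural point, a U-net whose skip connections carry full-resolution detail past the bottleneck so that images need not be cropped, can be pictured with a toy two-level generator. This is a hedged sketch in PyTorch; the channel widths and the assumption of even input sizes are illustrative:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Two-level U-net-style generator for IR/visible fusion."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU())
        # The skip concatenation reinjects full-resolution detail.
        self.out = nn.Conv2d(16 + 2, 1, 3, padding=1)

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)   # stack the sources as channels
        skip = x
        x = self.up(self.down(x))
        return torch.tanh(self.out(torch.cat([x, skip], dim=1)))
```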


2020, Vol. 104, pp. 103144
Author(s): Jiangtao Xu, Xingping Shi, Shuzhen Qin, Kaige Lu, Han Wang, et al.

2021, Vol. 11 (19), pp. 9255
Author(s): Syeda Minahil, Jun-Hyung Kim, Youngbae Hwang

In infrared (IR) and visible image fusion, the significant information is extracted from each source image and integrated into a single image with comprehensive content. We observe that the salient regions in the infrared image contain the targets of interest. Therefore, we enforce spatially adaptive weights derived from the infrared images. In this paper, a Generative Adversarial Network (GAN)-based fusion method is proposed for infrared and visible image fusion. Building on an end-to-end network structure with dual discriminators, patch-wise discrimination is applied to reduce the blurry artifacts of previous image-level approaches. A new loss function is also proposed that uses constructed weight maps to direct the adversarial training of the GAN so that the informative regions of the infrared images are preserved. Experiments are performed on two datasets, and ablation studies are also conducted. Qualitative and quantitative analysis shows that we achieve competitive results compared with existing fusion methods.
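Patch-wise discrimination means the discriminator emits a grid of real/fake scores, one per receptive-field patch, rather than a single image-level score. A hedged sketch, with hypothetical layer depths and a weight-map-modulated generator loss standing in for the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Outputs a score map; each score judges one local patch."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1))  # a grid, not a scalar

    def forward(self, x):
        return self.net(x)

def weighted_adv_loss(scores, weights):
    """Generator-side adversarial loss modulated by an IR-derived weight
    map, so informative IR regions dominate the training signal."""
    w = F.interpolate(weights, size=scores.shape[-2:])
    bce = F.binary_cross_entropy_with_logits(
        scores, torch.ones_like(scores), reduction="none")
    return (w * bce).mean()
```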


Author(s): Han Xu, Pengwei Liang, Wei Yu, Junjun Jiang, Jiayi Ma

In this paper, we propose a new end-to-end model, called the dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through the adversarial process between a generator and two discriminators, in addition to a specially designed content loss. The generator is trained to produce realistic-looking fused images that fool the discriminators. The two discriminators are trained to estimate the JS divergence between the probability distributions of downsampled fused images and infrared images, and between the probability distributions of the gradients of fused images and the gradients of visible images, respectively. Thus, the fused images can compensate for features that the single content loss alone cannot constrain. Consequently, the prominence of thermal targets in the infrared image and the texture details in the visible image can be preserved, or even enhanced, in the fused image simultaneously. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN is particularly well suited to fusing images of different resolutions. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state of the art.
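The inputs to the two discriminators can be pictured with a short sketch (assumptions: average pooling as the downsampler, a Laplacian kernel as the gradient operator, and a 4x resolution gap; the paper's exact operators may differ):

```python
import torch
import torch.nn.functional as F

# Fixed Laplacian kernel used here as the gradient operator.
LAP = torch.tensor([[0., 1., 0.],
                    [1., -4., 1.],
                    [0., 1., 0.]]).view(1, 1, 3, 3)

def d1_input(fused, scale=4):
    """Downsample the fused image so the first discriminator can
    compare it against the low-resolution infrared image."""
    return F.avg_pool2d(fused, kernel_size=scale)

def d2_input(img):
    """Gradient map fed to the second discriminator, which contrasts
    fused-image gradients with visible-image gradients."""
    return F.conv2d(img, LAP, padding=1)
```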


2020, Vol. 10 (2), pp. 554
Author(s): Dongdong Xu, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, et al.

Infrared and visible image fusion can produce combined images that simultaneously contain salient hidden targets and abundant visible details. In this paper, we propose a novel method for infrared and visible image fusion within a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game directed by purpose-built loss functions. The generator, with residual blocks and skip connections, extracts deep features from the source image pairs and generates an elementary fused image carrying infrared thermal-radiation information and visible texture information; additional detail from the visible images is added to the final images through the discriminator. Activity-level measurements and fusion rules need not be designed manually; they are learned automatically. Also, the method involves no complicated multi-scale transforms, so computational cost and complexity are reduced. Experimental results demonstrate that the proposed method produces desirable images, achieving better performance in objective assessment and visual quality than nine representative infrared and visible image fusion methods.
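A minimal sketch of the kind of residual block such a generator stacks (the channel width is an illustrative assumption): the identity skip lets deep feature extraction proceed without losing the source signal.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-ReLU-Conv body with an identity skip connection."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # the skip preserves the input features
```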

