Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network

2020, Vol. 10(2), pp. 554
Author(s): Dongdong Xu, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, ...

Infrared and visible image fusion can produce combined images that simultaneously reveal salient targets hidden to the visible camera and preserve abundant visible details. In this paper, we propose a novel method for infrared and visible image fusion within a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game directed by specially designed loss functions. The generator, built from residual blocks with skip connections, extracts deep features from the source image pairs and produces an elementary fused image carrying infrared thermal radiation information and visible texture information; the discriminator then drives additional visible detail into the final images. Activity-level measurements and fusion rules need not be designed manually, as they are implemented automatically, and the method involves no complicated multi-scale transforms, which reduces computational cost and complexity. Experimental results demonstrate that the proposed method yields desirable fused images, achieving better objective assessment scores and visual quality than nine representative infrared and visible image fusion methods.
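As a concrete illustration of the architecture this abstract describes, below is a minimal PyTorch sketch of a residual-block generator that maps a concatenated infrared/visible pair to a fused image. Channel widths, block count, and all names here are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch only: a residual-block generator with skip connections of
# the kind the abstract describes. Widths, depths, and names are assumptions,
# not the authors' architecture.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual skip connection

class FusionGenerator(nn.Module):
    """Maps a concatenated (infrared, visible) pair to one fused image."""
    def __init__(self, blocks=4, width=64):
        super().__init__()
        self.head = nn.Conv2d(2, width, 3, padding=1)   # 2 stacked source channels
        self.body = nn.Sequential(*[ResidualBlock(width) for _ in range(blocks)])
        self.tail = nn.Conv2d(width, 1, 3, padding=1)   # 1-channel fused output

    def forward(self, ir, vis):
        x = self.head(torch.cat([ir, vis], dim=1))
        return torch.tanh(self.tail(self.body(x)))

fused = FusionGenerator()(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
```

In the adversarial setup the abstract outlines, such a generator would be trained against a discriminator that compares fused images with visible images, pushing additional texture detail into the output.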


2021, Vol. 11(19), pp. 9255
Author(s): Syeda Minahil, Jun-Hyung Kim, Youngbae Hwang

In infrared (IR) and visible image fusion, significant information is extracted from each source image and integrated into a single, comprehensive image. We observe that the salient regions in the infrared image contain the targets of interest, so we enforce spatially adaptive weights derived from the infrared images. In this paper, a Generative Adversarial Network (GAN)-based fusion method is proposed for infrared and visible image fusion. Building on an end-to-end network structure with dual discriminators, patch-wise discrimination is applied to reduce the blurry artifacts of previous image-level approaches. A new loss function is also proposed that uses constructed weight maps to direct the adversarial training of the GAN so that the informative regions of the infrared images are preserved. Experiments are performed on two datasets, and ablation studies are also conducted. The qualitative and quantitative analysis shows that we achieve competitive results compared with existing fusion methods.
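To make the patch-wise discrimination and weight-map ideas concrete, here is an illustrative PyTorch sketch. The weight map below is simply a normalized infrared intensity map, an assumed stand-in, since the abstract does not specify how the paper constructs its weight maps.

```python
# Illustrative sketch only: a patch-wise (PatchGAN-style) discriminator and a
# spatially weighted content loss. The normalized-IR weight map is an assumed
# stand-in for the paper's constructed weight maps.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Emits a grid of real/fake scores, one per receptive-field patch."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width * 2, 1, 4, padding=1),  # patch-level score map
        )

    def forward(self, x):
        return self.net(x)

def weighted_content_loss(fused, ir, vis):
    # Weight pixels by IR brightness so informative IR regions dominate.
    w = (ir - ir.amin()) / (ir.amax() - ir.amin() + 1e-8)
    return (w * (fused - ir) ** 2 + (1 - w) * (fused - vis) ** 2).mean()
```

Scoring patches rather than whole images gives the generator a dense, localized training signal, which is what counteracts the blurring seen with image-level discriminators.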



Sensors, 2019, Vol. 19(20), pp. 4556
Author(s): Yaochen Liu, Lili Dong, Yuanyuan Ji, Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across their sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich, fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. The base parts are fused by a weighting method, and the final fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background of the fused image. Compared with state-of-the-art fusion methods, it offers (i) better visual quality in subjective evaluation of the fused images and (ii) better objective assessment scores.
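The pipeline the abstract outlines (decompose, fuse details, fuse bases, recombine) can be sketched as follows. This is a loose illustration under stated assumptions: a uniform filter stands in for the paper's guided-filter decomposition, and a coefficient-magnitude rule stands in for its CNN-plus-DCT multi-layer feature fusion.

```python
# Loose sketch only: base/detail decomposition plus a block-DCT detail-fusion
# rule. The uniform filter and max-magnitude rule are assumed stand-ins for
# the paper's guided filter and CNN/DCT multi-layer strategy.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import uniform_filter

def base_detail(img, size=31):
    base = uniform_filter(img, size)   # stand-in for guided-filter smoothing
    return base, img - base

def fuse_detail_dct(d1, d2, block=8):
    out = np.zeros_like(d1)
    for i in range(0, d1.shape[0], block):
        for j in range(0, d1.shape[1], block):
            s = (slice(i, i + block), slice(j, j + block))
            c1 = dctn(d1[s], norm="ortho")
            c2 = dctn(d2[s], norm="ortho")
            c = np.where(np.abs(c1) >= np.abs(c2), c1, c2)  # keep stronger coefficient
            out[s] = idctn(c, norm="ortho")
    return out

def fuse(ir, vis):
    # Average the bases (a simple weighting), fuse details in the DCT domain.
    b_ir, d_ir = base_detail(ir)
    b_vis, d_vis = base_detail(vis)
    return 0.5 * (b_ir + b_vis) + fuse_detail_dct(d_ir, d_vis)
```

Both inputs are assumed to be float arrays of the same shape; keeping the larger-magnitude DCT coefficient per frequency is one common way to favor the stronger detail response at each scale.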





Author(s): Yong Yang, Jiaxiang Liu, Shuying Huang, Weiguo Wan, Wenying Wen, ...


2020, Vol. 104, pp. 103144
Author(s): Jiangtao Xu, Xingping Shi, Shuzhen Qin, Kaige Lu, Han Wang, ...




Entropy, 2021, Vol. 23(3), pp. 376
Author(s): Jilei Hou, Dazhi Zhang, Wei Wu, Jiayi Ma, Huabing Zhou

This paper proposes a new generative adversarial network for infrared and visible image fusion based on semantic segmentation (SSGAN), which considers not only the low-level features of infrared and visible images but also high-level semantic information. Source images are divided into foregrounds and backgrounds by semantic masks. A generator with a dual-encoder, single-decoder framework extracts foreground and background features along separate encoder paths. Moreover, the discriminator's input image is constructed from the semantic segmentation by combining the foregrounds of the infrared images with the backgrounds of the visible images. Consequently, the prominence of thermal targets in the infrared images and the texture details in the visible images are preserved simultaneously in the fused images. Qualitative and quantitative experiments on publicly available datasets demonstrate that the proposed approach significantly outperforms state-of-the-art methods.
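The semantically composed discriminator input is easy to illustrate. Below is a minimal sketch assuming a precomputed binary mask; the segmentation network that would actually produce the mask is outside the snippet.

```python
# Minimal sketch only: compositing the discriminator's reference image from
# IR foregrounds and visible backgrounds under a semantic mask, as the
# abstract describes. The mask here is a random stand-in, not a real
# segmentation output.
import torch

def compose_reference(ir, vis, mask):
    """mask == 1 marks foreground (thermal targets); 0 marks background."""
    return mask * ir + (1 - mask) * vis

ir = torch.rand(1, 1, 128, 128)
vis = torch.rand(1, 1, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.8).float()  # stand-in semantic mask
reference = compose_reference(ir, vis, mask)
```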


