Infrared and Visible Image Fusion Method based on VGGNet and Visual Saliency Map

Author(s):  
Changqing Wang ◽  
Quancheng Du ◽  
Xiangyu Yang

2011 ◽
Vol 403-408 ◽  
pp. 1927-1932
Author(s):  
Hai Peng ◽  
Hua Jun Feng ◽  
Ju Feng Zhao ◽  
Zhi Hai Xu ◽  
Qi Li ◽  
...  

We propose a new image fusion method to fuse the frames of infrared and visual image sequences more effectively. In our method, we introduce an improved salient-feature detection algorithm to obtain the saliency map of the original frames. Using inter-frame dynamic information, the improved method detects features that are salient not only spatially but also temporally. Images are then segmented into target regions and background regions according to the saliency distribution. We formulate fusion rules for the different regions using a double-threshold method and finally fuse the image frames in the NSCT (nonsubsampled contourlet transform) multi-scale domain. Comparisons with other methods show that our result more effectively emphasizes the salient features of the target regions while preserving the details of the background regions from the original image sequences.
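The pipeline described in this abstract (saliency detection, double-threshold region segmentation, region-dependent fusion in a multi-scale domain) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: a simple intensity-contrast map stands in for the spatio-temporal saliency detector, a Laplacian pyramid stands in for the NSCT, and the thresholds and fusion rules are illustrative placeholders.

```python
# Minimal sketch of the saliency-guided region fusion described above.
# Assumptions (not from the paper): a simple intensity-contrast map stands in
# for the spatio-temporal saliency detector, a Laplacian pyramid stands in for
# the NSCT decomposition, and the thresholds/fusion rules are placeholders.
import cv2
import numpy as np

def saliency_map(frame):
    """Crude spatial saliency: absolute deviation from the mean intensity."""
    frame = frame.astype(np.float32)
    return np.abs(frame - frame.mean())

def segment_regions(sal, t_low=0.3, t_high=0.6):
    """Double-threshold split of normalized saliency into target/background masks."""
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal >= t_high, sal <= t_low

def laplacian_pyramid(img, levels=3):
    """Multi-scale decomposition (stand-in for the NSCT)."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    return lp + [gp[-1]]  # detail bands plus the coarsest approximation

def fuse_frames(ir, vis):
    """Fuse one grayscale IR/visible frame pair with region-dependent rules."""
    target, _ = segment_regions(saliency_map(ir))
    fused = []
    for a, b in zip(laplacian_pyramid(ir), laplacian_pyramid(vis)):
        mask = cv2.resize(target.astype(np.float32), (a.shape[1], a.shape[0]))
        # Favour the IR band inside target regions, max-abs selection elsewhere.
        fused.append(np.where((mask > 0.5) | (np.abs(a) > np.abs(b)), a, b))
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```

The temporal part of the saliency (inter-frame dynamic information) is omitted here; it could be approximated by adding a frame-difference energy term inside saliency_map.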


PLoS ONE ◽  
2020 ◽  
Vol 15 (9) ◽  
pp. e0239535
Author(s):  
Chunming Wu ◽  
Long Chen

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Qingqing Li ◽  
Guangliang Han ◽  
Peixun Liu ◽  
Hang Yang ◽  
Jiajia Wu ◽  
...  

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 40
Author(s):  
Chaowei Duan ◽  
Changda Xing ◽  
Yiliu Liu ◽  
Zhisheng Wang

As a powerful technique for merging the complementary information of source images, infrared (IR) and visible image fusion is widely used in surveillance, target detection, tracking, biometric recognition, and related applications. In this paper, an efficient IR and visible image fusion method is proposed that simultaneously enhances the significant targets/regions in all source images and preserves the rich background details of the visible images. A multi-scale representation based on the fast global smoother is first used to decompose the source images into base and detail layers, with the aim of extracting salient structure information and suppressing halos around edges. Then, a target-enhanced parallel Gaussian fuzzy logic-based fusion rule is proposed to merge the base layers, which avoids brightness loss and highlights significant targets/regions. In addition, a visual saliency map-based fusion rule is designed to merge the detail layers in order to retain rich details. Finally, the fused image is reconstructed. Extensive experiments are conducted on 21 image pairs and a Nato-camp sequence (32 image pairs) to verify the effectiveness and superiority of the proposed method. Compared with several state-of-the-art methods, the experimental results demonstrate that the proposed method achieves competitive or superior performance in terms of both visual results and objective evaluation.
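A rough shape of this base/detail pipeline is sketched below under loud assumptions: cv2.bilateralFilter is used as a generic edge-preserving smoother in place of the fast global smoother (with opencv-contrib installed, cv2.ximgproc.fastGlobalSmootherFilter could be substituted to match the abstract more closely), a Gaussian brightness membership replaces the target-enhanced parallel Gaussian fuzzy logic rule, and the saliency map is a simple Laplacian-energy measure. All parameter values are illustrative, not the paper's.

```python
# Minimal sketch of the base/detail fusion scheme described above.
# Assumptions (not from the paper): bilateralFilter stands in for the fast
# global smoother, a Gaussian brightness membership stands in for the
# target-enhanced parallel Gaussian fuzzy logic rule, and Laplacian energy
# stands in for the visual saliency map.
import cv2
import numpy as np

def decompose(img):
    """Edge-preserving split into a base layer and a residual detail layer."""
    img = img.astype(np.float32)
    base = cv2.bilateralFilter(img, 9, 25, 7)
    return base, img - base

def visual_saliency(img):
    """Illustrative saliency: smoothed magnitude of the Laplacian response."""
    lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F, ksize=3)
    return cv2.GaussianBlur(np.abs(lap), (11, 11), 0)

def fuse_base(base_ir, base_vis, sigma=35.0):
    """Bright IR pixels (assumed targets) get a high weight via a Gaussian membership."""
    w = np.exp(-((255.0 - base_ir) ** 2) / (2.0 * sigma ** 2))
    return w * base_ir + (1.0 - w) * base_vis

def fuse_detail(det_ir, det_vis, sal_ir, sal_vis):
    """Saliency-weighted averaging of the detail layers."""
    w = sal_ir / (sal_ir + sal_vis + 1e-8)
    return w * det_ir + (1.0 - w) * det_vis

def fuse(ir, vis):
    """Fuse a grayscale IR/visible pair: fused base plus fused detail."""
    b_ir, d_ir = decompose(ir)
    b_vis, d_vis = decompose(vis)
    out = fuse_base(b_ir, b_vis) + fuse_detail(d_ir, d_vis,
                                               visual_saliency(ir),
                                               visual_saliency(vis))
    return np.clip(out, 0, 255).astype(np.uint8)
```

Reconstruction is simply the sum of the fused base and detail layers, mirroring the two-layer decomposition named in the abstract.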

