Two-scale image fusion of visible and infrared images using saliency detection

2016 ◽  
Vol 76 ◽  
pp. 52-64 ◽  
Author(s):  
Durga Prasad Bavirisetti ◽  
Ravindra Dhuli

2016 ◽  
Author(s):  
Chao Liu ◽  
Xiao-hui Zhang ◽  
Qing-ping Hu ◽  
Yong-kang Chen

2016 ◽  
Vol 31 (10) ◽  
pp. 1006-1015
Author(s):  
GUO Shao-jun ◽  
LOU Shu-li ◽  
LIU Feng

2020 ◽  
Vol 23 (4) ◽  
pp. 815-824
Author(s):  
A. Rajesh Naidu ◽  
D. Bhavana ◽  
P. Revanth ◽  
G. Gopi ◽  
M. Prabhu Kishore ◽  
...  

2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Yifeng Niu ◽  
Shengtao Xu ◽  
Lizhen Wu ◽  
Weidong Hu

Infrared and visible image fusion is an important precondition for target perception by unmanned aerial vehicles (UAVs), enabling them to carry out a variety of assigned missions. Visible images are rich in texture and color information, while infrared images make targets more conspicuous. Conventional fusion methods are mostly based on region segmentation and therefore often fail to produce a fused image actually suited to target recognition. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet transform (DWT) is proposed, which captures more target information while preserving more background information. Fusion experiments are conducted under three conditions: the target is stationary and observable in both the visible and infrared images; the targets are moving and observable in both images; and the target is observable only in the infrared image. Experimental results show that the proposed method generates better fused images for airborne target perception.
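As a rough illustration of the wavelet-fusion step only (not the paper's target-region-segmentation pipeline, which is not specified here), a single-level Haar DWT fusion can average the low-frequency approximation band and apply a max-absolute rule to the detail bands. The function names and the fusion rules below are illustrative assumptions:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (low-frequency) band
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2: exact reconstruction."""
    lh, hl, hh = bands
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_dwt(vis, ir):
    """Fuse two registered grayscale images in the Haar DWT domain."""
    ll_v, hb_v = haar_dwt2(vis)
    ll_i, hb_i = haar_dwt2(ir)
    ll_f = 0.5 * (ll_v + ll_i)                      # average low frequencies
    hb_f = tuple(np.where(np.abs(v) >= np.abs(i), v, i)  # max-abs detail rule
                 for v, i in zip(hb_v, hb_i))
    return haar_idwt2(ll_f, hb_f)
```

Averaging the approximation band retains background context from both sensors, while the max-abs rule keeps the sharper edge response, wherever it comes from.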


2020 ◽  
Vol 39 (3) ◽  
pp. 4617-4629
Author(s):  
Chengrui Gao ◽  
Feiqiang Liu ◽  
Hua Yan

Infrared and visible image fusion refers to technology that merges the visual detail of visible images with the thermal feature information of infrared images; it has been widely adopted in numerous image-processing fields. In this study, an image fusion method based on the dual-tree complex wavelet transform (DTCWT) and convolutional sparse representation (CSR) was proposed. In the proposed method, the infrared and visible images were first decomposed by the DTCWT to obtain their high-frequency bands and low-frequency band. Subsequently, the high-frequency bands were enhanced by guided filtering (GF), while the low-frequency bands were merged through convolutional sparse representation with a choose-max strategy. Lastly, the fused image was reconstructed by the inverse DTCWT. In the experiments, objective and subjective comparisons with other typical methods demonstrated the advantage of the proposed method: its results were more consistent with the human visual system and contained more texture detail.
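Two of the building blocks named above can be sketched without the full DTCWT/CSR machinery: a guided filter (used here in a standard box-filter formulation) and a choose-max merge rule. This is a simplified illustration under those assumptions, not the authors' implementation; the CSR step in particular is replaced by nothing and only the choose-max rule is shown:

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image, edge-padded."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-4):
    """Edge-preserving smoothing of p guided by I (local linear model)."""
    mI, mp = box_filter(I, r), box_filter(p, r)
    cov = box_filter(I * p, r) - mI * mp
    var = box_filter(I * I, r) - mI * mI
    a = cov / (var + eps)                    # per-window linear coefficient
    b = mp - a * mI
    return box_filter(a, r) * I + box_filter(b, r)

def choose_max(b1, b2):
    """Choose-max fusion: keep the coefficient with larger magnitude."""
    return np.where(np.abs(b1) >= np.abs(b2), b1, b2)
```

In a pipeline of this shape, `guided_filter` would smooth or enhance each high-frequency subband while respecting edges in the guidance image, and `choose_max` would select the more active coefficient at each position when merging subbands.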
