Infrared and visible image fusion based on convolutional sparse representation and guided filtering

2021 ◽ Vol 30 (04) ◽ Author(s): Yansong Zhu, Yixiang Lu, Qingwei Gao, Dong Sun

2020 ◽ Vol 39 (3) ◽ pp. 4617-4629 ◽ Author(s): Chengrui Gao, Feiqiang Liu, Hua Yan

Infrared and visible image fusion is the technique of merging the visual detail of visible images with the thermal feature information of infrared images; it is widely used in many image processing applications. In this study, an image fusion method based on the dual-tree complex wavelet transform (DTCWT) and convolutional sparse representation (CSR) is proposed. The infrared and visible images are first decomposed by the DTCWT into high-frequency bands and a low-frequency band. The high-frequency bands are then enhanced by guided filtering (GF), while the low-frequency band is merged through convolutional sparse representation with a choose-max strategy. Finally, the fused image is reconstructed by the inverse DTCWT. In the experiments, objective and subjective comparisons with other representative methods demonstrate the advantage of the proposed method: its results are more consistent with the human visual system and preserve more texture detail.
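
A minimal sketch of this pipeline is given below. It is not the authors' implementation: the dual-tree complex wavelet transform comes from the `dtcwt` Python package, the guided filter is the classic box-filter formulation of He et al., and the CSR fusion of the low-frequency band is replaced by a simple absolute-max selection for brevity. All function names and parameters are illustrative assumptions.

```python
# Hedged sketch of a DTCWT + guided-filtering fusion pipeline (not the authors' code).
# Assumptions: the `dtcwt` package is available; inputs are float grayscale images in
# [0, 1] with identical shapes; the CSR step on the low-frequency band is replaced by
# a plain choose-max stand-in.
import numpy as np
import dtcwt
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius=4, eps=1e-3):
    """Classic box-filter guided filter (He et al.)."""
    win = 2 * radius + 1
    mean_i = uniform_filter(guide, win)
    mean_p = uniform_filter(src, win)
    var_i = uniform_filter(guide * guide, win) - mean_i ** 2
    cov_ip = uniform_filter(guide * src, win) - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, win) * guide + uniform_filter(b, win)


def fuse_ir_vis(ir, vis, nlevels=3):
    """Fuse an infrared and a visible image of the same shape."""
    t = dtcwt.Transform2d()
    pyr_ir = t.forward(ir, nlevels=nlevels)
    pyr_vis = t.forward(vis, nlevels=nlevels)

    # High-frequency bands: build activity maps from coefficient magnitudes,
    # smooth them with the guided filter, then apply a choose-max rule.
    fused_high = []
    for h_ir, h_vis in zip(pyr_ir.highpasses, pyr_vis.highpasses):
        fused = np.empty_like(h_ir)
        for k in range(h_ir.shape[2]):          # six orientations per level
            act_ir = guided_filter(np.abs(h_ir[:, :, k]), np.abs(h_ir[:, :, k]))
            act_vis = guided_filter(np.abs(h_vis[:, :, k]), np.abs(h_vis[:, :, k]))
            mask = act_ir >= act_vis
            fused[:, :, k] = np.where(mask, h_ir[:, :, k], h_vis[:, :, k])
        fused_high.append(fused)

    # Low-frequency band: the paper fuses CSR coefficients with a choose-max rule;
    # here a plain absolute-max selection stands in for the CSR stage.
    low = np.where(np.abs(pyr_ir.lowpass) >= np.abs(pyr_vis.lowpass),
                   pyr_ir.lowpass, pyr_vis.lowpass)

    return t.inverse(dtcwt.Pyramid(low, tuple(fused_high)))
```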


2018 ◽ Vol 26 (5) ◽ pp. 1242-1253 ◽ Author(s): LIU Xian-hong (刘先红), CHEN Zhi-bin (陈志斌), QIN Meng-ze (秦梦泽)

2014 ◽ Vol 67 ◽ pp. 397-407 ◽ Author(s): Xiaoqi Lu, Baohua Zhang, Ying Zhao, He Liu, Haiquan Pei

2019 ◽ Vol 1302 ◽ pp. 022045 ◽ Author(s): Sa Huang, Guangyu Chu, Yifan Fei, Xiaoli Zhang, Hailiang Wang

Electronics ◽ 2019 ◽ Vol 8 (3) ◽ pp. 303 ◽ Author(s): Xiaole Ma, Shaohai Hu, Shuaiqi Liu, Jing Fang, Shuwen Xu

Sparse representation (SR) has been widely used in image processing, especially for image fusion; in this paper, an SR-based remote sensing image fusion method is presented. First, an adaptive dictionary is learned from the source images, and sparse coefficients are obtained by sparsely coding the source images over this dictionary. These sparse coefficients are then fused with the help of an improved hyperbolic tangent (tanh) function and the ℓ0-max rule, yielding an initial SR-based fused image. To take full advantage of the spatial information of the source images, a fused image in the spatial domain (SF) is obtained at the same time. Finally, the fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms several state-of-the-art methods in both visual and quantitative evaluations.
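
The coefficient-fusion stage can be illustrated with a short sketch. This is not the authors' implementation: the adaptive dictionary is learned here with scikit-learn's MiniBatchDictionaryLearning, the sparse codes come from orthogonal matching pursuit, and the paper's improved-tanh / ℓ0-max rule is approximated by a choose-max rule on a tanh-squashed coefficient activity. The spatial-domain (SF) fusion and the final guided-filter combination are omitted. All names and parameters are illustrative assumptions.

```python
# Hedged sketch of patch-wise SR fusion with a tanh-squashed activity rule
# (an approximation of the paper's "improved tanh + l0-max" rule, not the authors' code).
# Assumptions: scikit-learn is available; inputs are float grayscale images in [0, 1].
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from sklearn.linear_model import orthogonal_mp


def sr_fuse(img_a, img_b, patch=(8, 8), n_atoms=128, n_nonzero=8):
    # 1) Vectorise all overlapping patches of both source images (mean-removed).
    pa = extract_patches_2d(img_a, patch).reshape(-1, patch[0] * patch[1])
    pb = extract_patches_2d(img_b, patch).reshape(-1, patch[0] * patch[1])
    ma = pa.mean(axis=1, keepdims=True)
    mb = pb.mean(axis=1, keepdims=True)

    # 2) Learn an adaptive dictionary from a subsample of the source patches.
    train = np.vstack([pa - ma, pb - mb])
    rng = np.random.default_rng(0)
    sel = rng.choice(train.shape[0], size=min(20000, train.shape[0]), replace=False)
    D = MiniBatchDictionaryLearning(n_components=n_atoms,
                                    random_state=0).fit(train[sel]).components_

    # 3) Sparse-code each source's patches over the dictionary via OMP.
    ca = orthogonal_mp(D.T, (pa - ma).T, n_nonzero_coefs=n_nonzero)  # (n_atoms, n_patches)
    cb = orthogonal_mp(D.T, (pb - mb).T, n_nonzero_coefs=n_nonzero)

    # 4) Fuse coefficients with a choose-max rule on a tanh-squashed activity
    #    (the l1 norm of each patch's code stands in for the paper's measure).
    act_a = np.tanh(np.abs(ca).sum(axis=0))
    act_b = np.tanh(np.abs(cb).sum(axis=0))
    take_a = act_a >= act_b                      # one decision per patch
    fused_codes = np.where(take_a, ca, cb)
    fused_means = np.where(take_a, ma.T, mb.T)

    # 5) Reconstruct the fused patches and reassemble the image by patch averaging.
    patches = (D.T @ fused_codes + fused_means).T.reshape(-1, *patch)
    return reconstruct_from_patches_2d(patches, img_a.shape)
```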

