Infrared Polarization and Intensity Image Fusion Based on Dual-Tree Complex Wavelet Transform and Sparse Representation

2017, Vol 46 (12), pp. 1210002
Author(s): ZHU Pan, LIU Ze-yang, HUANG Zhan-hua

2020, Vol 39 (3), pp. 4617-4629
Author(s): Chengrui Gao, Feiqiang Liu, Hua Yan

Infrared and visible image fusion refers to the technology that merges the visual detail of visible images with the thermal feature information of infrared images; it has been widely adopted across many image processing fields. In this study, a fusion method based on the dual-tree complex wavelet transform (DTCWT) and convolutional sparse representation (CSR) was proposed. In the proposed method, the infrared and visible images were first decomposed by the DTCWT into their high-frequency bands and low-frequency band. Subsequently, the high-frequency bands were enhanced by guided filtering (GF), while the low-frequency bands were merged through convolutional sparse representation with a choose-max strategy. Lastly, the fused image was reconstructed by the inverse DTCWT. In the experiments, objective and subjective comparisons with other typical methods demonstrated the advantage of the proposed method: its results were more consistent with the human visual system and contained more texture detail.
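The choose-max merging step can be illustrated with a minimal sketch in plain NumPy. This is not the authors' DTCWT/CSR pipeline; the activity measure here is simply the absolute coefficient value, an assumption made for illustration:

```python
import numpy as np

def choose_max_fuse(band_a, band_b):
    """Choose-max fusion rule: at each position, keep the coefficient
    with the larger activity level (here, absolute value)."""
    keep_a = np.abs(band_a) >= np.abs(band_b)
    return np.where(keep_a, band_a, band_b)

# Toy coefficient bands from two source images.
ir_band = np.array([[3.0, -1.0], [0.5, 4.0]])
vis_band = np.array([[-2.0, 2.0], [1.0, -5.0]])
fused = choose_max_fuse(ir_band, vis_band)
# At each position the larger-magnitude coefficient survives.
```

In the CSR variant of this rule, the activity level would instead be computed from the sparse coefficient maps of each image, but the element-wise selection logic is the same.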


Oncology, 2017, pp. 519-541
Author(s): Satishkumar S. Chavan, Sanjay N. Talbar

Multimodality medical image fusion is the process of combining the important details of images from several modalities into a single image. It aids physicians through better visualization, more accurate diagnosis, and a more appropriate treatment plan for the cancer patient. The fused image merges anatomical and physiological variations, allowing accurate localization of cancerous tissue and helping to estimate the target volume for radiation. Details from both modalities (CT and MRI) are extracted in the frequency domain by applying various transforms and are combined using a variety of fusion rules to achieve the best image quality. The performance and effectiveness of each transform on the fusion results are evaluated both subjectively and objectively. By both subjective and objective analysis, the algorithms in which feature extraction is performed by the M-Band Wavelet Transform and the Daubechies Complex Wavelet Transform are superior to the other frequency-domain algorithms.
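As a concrete, much simplified instance of frequency-domain fusion, the sketch below uses a one-level 2-D Haar transform in plain NumPy, not the M-Band or Daubechies complex transforms evaluated in the chapter. It applies a common rule: average the approximation coefficients and take the maximum-magnitude detail coefficients:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (image sides must be even)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4          # approximation band
    lh = (a + b - c - d) / 4          # horizontal detail
    hl = (a - b + c - d) / 4          # vertical detail
    hh = (a - b - c + d) / 4          # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

def fuse(img1, img2):
    """Average the approximation bands; choose-max (by magnitude) on details."""
    bands1, bands2 = haar2d(img1), haar2d(img2)
    fused = [(bands1[0] + bands2[0]) / 2]
    for d1, d2 in zip(bands1[1:], bands2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2d(*fused)
```

A real CT/MRI pipeline would use multiple decomposition levels and a more discriminative transform, but the decompose-combine-reconstruct structure is the same.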


2016, Vol 16 (04), pp. 1650022
Author(s): Deepak Gambhir, Meenu Manchanda

Medical image fusion is widely used by clinical professionals for improved diagnosis and treatment of diseases. The main aim of the image fusion process is to combine the complete information of all input images into a single fused image. To this end, a novel fusion rule is proposed for fusing medical images based on the Daubechies complex wavelet transform (DCxWT). Input images are first decomposed using the DCxWT. The resulting complex coefficients are then fused using a normalized-correlation-based fusion rule. Finally, the fused image is obtained by applying the inverse DCxWT to the combined complex coefficients. The performance of the proposed method has been evaluated and compared, both visually and objectively, with DCxWT-based fusion methods using state-of-the-art fusion rules as well as with existing fusion techniques. Experimental results and the comparative study demonstrate that the proposed technique generates better results than both the existing fusion rules and the other fusion techniques.
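The idea behind a correlation-driven fusion rule can be sketched as follows. This is an illustrative NumPy version operating on real-valued coefficient blocks; the paper's actual rule operates on DCxWT complex coefficients, and its exact thresholding is an assumption here:

```python
import numpy as np

def normalized_correlation(a, b, eps=1e-12):
    """Normalized correlation between two coefficient blocks, in [-1, 1]."""
    return np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + eps)

def fuse_blocks(block_a, block_b, threshold=0.7):
    """If the blocks are strongly correlated (redundant information),
    average them; otherwise keep the block with the higher energy
    (complementary information)."""
    if normalized_correlation(block_a, block_b) >= threshold:
        return (block_a + block_b) / 2
    return block_a if np.sum(block_a**2) >= np.sum(block_b**2) else block_b
```

The design intuition: a high correlation means both modalities saw the same structure, so averaging suppresses noise, while a low correlation means one modality captured a feature the other missed, so selection preserves it.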

