Multi-focus image fusion algorithm based on multilevel morphological component analysis and support vector machine

2017, Vol. 11 (10), pp. 919-926
Author(s):  
Xiongfei Li ◽  
Lingling Wang ◽  
Jing Wang ◽  
Xiaoli Zhang

Author(s):
Peng Guo ◽  
Guoqi Xie ◽  
Renfa Li ◽  
Hui Hu

In feature-level image fusion, deep learning technology, particularly convolutional sparse representation (SR) theory, has emerged as a new topic over the past three years. This paper proposes an effective image fusion method based on convolutional SR, namely convolutional sparsity-based morphological component analysis with a guided filter (CS-MCA-GF). The guided filter operator and the choose-max coefficient fusion scheme introduced in this method effectively eliminate the artifacts that the morphological components generate in linear fusion, while maintaining the pixel saliency of the source images. Experiments show that the proposed method achieves excellent performance in multi-modal image fusion, including medical image fusion.
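As a rough illustration of the choose-max coefficient rule combined with guided-filter smoothing mentioned above, the sketch below fuses two component (or coefficient) maps in Python. The function names guided_filter and choose_max_fusion, and the radius and eps parameters, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of `src` guided by `guide` (He et al. style)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def choose_max_fusion(coeff_a, coeff_b, src_a, src_b):
    """Choose-max rule on coefficient activity, with a guided-filter-refined
    weight map to suppress isolated fusion artifacts."""
    # Binary decision: 1 where image A's coefficients are more active.
    decision = (np.abs(coeff_a) >= np.abs(coeff_b)).astype(np.float64)
    # Use the average of the source images as the guide so the weight map
    # follows image structure instead of hard block boundaries.
    guide = 0.5 * (src_a.astype(np.float64) + src_b.astype(np.float64))
    weight = np.clip(guided_filter(guide, decision), 0.0, 1.0)
    return weight * coeff_a + (1.0 - weight) * coeff_b
```

In this sketch the guided filter replaces a hard choose-max mask with a soft, edge-aligned weight map, which is one simple way to realize the artifact suppression the abstract describes.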


2017, Vol. 14 (8), pp. 795-807
Author(s):  
Georges Laussane Loum ◽  
Atiampo Kodjo Armand ◽  
Pandry Koffi Ghislain ◽  
Souleymane Oumtanaga

Electronics, 2020, Vol. 9 (9), pp. 1531
Author(s):  
Shanshan Huang ◽  
Yikun Yang ◽  
Xin Jin ◽  
Ya Zhang ◽  
Qian Jiang ◽  
...  

Multi-sensor image fusion combines the complementary information of source images captured by multiple sensors. Conventional fusion schemes based on signal processing techniques have been studied extensively, and machine learning-based techniques have recently been introduced into image fusion because of their prominent advantages. In this work, a new multi-sensor image fusion method based on the support vector machine (SVM) and principal component analysis (PCA) is proposed. First, the key features of the source images are extracted by combining a sliding window with five effective evaluation indicators. Second, a trained SVM model separates the focused and unfocused regions of each source image according to the extracted features, yielding a fusion decision map for each source image. Then, a consistency verification operation removes isolated misclassified points in the classifier's decisions. Finally, a method based on PCA and a multi-scale sliding window handles the disputed areas in the pair of decision maps. Experiments are performed to verify the performance of the combined method.
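A minimal sketch of the sliding-window feature extraction, SVM decision, and consistency-verification steps is given below. The abstract does not name the five evaluation indicators, so the features used here (local mean, variance, gradient energy, Laplacian energy) are stand-ins; the function names window_features and fuse_with_svm are hypothetical, and the PCA handling of disputed regions is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace, median_filter
from sklearn.svm import SVC

def window_features(img, size=8):
    """Per-pixel focus features over a sliding window (illustrative choices,
    not the paper's five indicators)."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2       # local variance
    gy, gx = np.gradient(img)
    grad_energy = uniform_filter(gx ** 2 + gy ** 2, size)  # gradient energy
    lap_energy = uniform_filter(laplace(img) ** 2, size)   # Laplacian energy
    return np.stack([mean, var, grad_energy, lap_energy], axis=-1)

def fuse_with_svm(img_a, img_b, clf, size=8):
    """Pixel-wise focus decision from a trained SVM (labels assumed 0/1,
    1 meaning image A is in focus), followed by a median filter acting as
    a simple consistency-verification step."""
    feats = window_features(img_a, size) - window_features(img_b, size)
    decision = clf.predict(feats.reshape(-1, feats.shape[-1]))
    decision = decision.reshape(img_a.shape).astype(np.float64)
    decision = median_filter(decision, size=5)  # absorb isolated singular points
    return decision * img_a + (1.0 - decision) * img_b

# Training sketch (hypothetical data): X_train holds per-sample feature
# differences, y_train is 1 where image A is sharper than image B.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
```

The median filter here is only one plausible realization of the consistency verification described in the abstract; a majority vote over a local window would serve the same purpose.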


2021, Vol. 2021, pp. 1-9
Author(s):  
Chuangeng Tian ◽  
Lu Tang ◽  
Xiao Li ◽  
Kaili Liu ◽  
Jian Wang

This paper proposes a perceptual medical image fusion framework based on morphological component analysis combined with convolutional sparsity and a pulse-coupled neural network (PCNN), called MCA-CS-PCNN for short. The source images are first decomposed into cartoon and texture components by morphological component analysis, and a convolutional sparse representation of the cartoon and texture layers is computed with pre-learned dictionaries. The convolutional sparse responses then serve as the stimulus that drives the PCNN when fusing the cartoon and texture layers. Finally, the fused medical image is obtained by combining the fused cartoon and texture layers. Experimental results verify that the MCA-CS-PCNN model is superior to state-of-the-art fusion strategies.
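The PCNN-based selection step can be sketched with a simplified PCNN whose firing count acts as an activity measure. This is a generic simplified PCNN with assumed parameters (beta, alpha_theta, v_theta) and a generic linking kernel; in the paper the stimulus is the convolutional sparse response of each layer, which is approximated here by whatever activity map the caller supplies.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_count(stimulus, iterations=100, beta=0.2,
                      alpha_theta=0.2, v_theta=20.0):
    """Simplified pulse-coupled neural network: returns how often each
    neuron fires, used as a per-pixel activity measure."""
    stimulus = stimulus.astype(np.float64)
    stimulus = stimulus / (stimulus.max() + 1e-12)         # normalize stimulus
    link_kernel = np.array([[0.5, 1.0, 0.5],
                            [1.0, 0.0, 1.0],
                            [0.5, 1.0, 0.5]])
    Y = np.zeros_like(stimulus)                            # pulse output
    theta = np.ones_like(stimulus)                         # dynamic threshold
    fire_count = np.zeros_like(stimulus)
    for _ in range(iterations):
        L = convolve(Y, link_kernel, mode="constant")      # linking input
        U = stimulus * (1.0 + beta * L)                    # internal activity
        Y = (U > theta).astype(np.float64)                 # fire where U exceeds theta
        theta = np.exp(-alpha_theta) * theta + v_theta * Y # threshold decay and reset
        fire_count += Y
    return fire_count

def fuse_layers(layer_a, layer_b, activity_a, activity_b):
    """Pick, per pixel, the layer whose PCNN (driven by the supplied activity
    map, e.g. |convolutional sparse coefficients|) fires more often."""
    mask = pcnn_firing_count(activity_a) >= pcnn_firing_count(activity_b)
    return np.where(mask, layer_a, layer_b)
```

In the MCA-CS-PCNN pipeline this rule would be applied once to the cartoon layers and once to the texture layers before the two fused layers are summed to form the final image.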

