An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework

2017 ◽ Vol 9 (4) ◽ pp. 61
Author(s): Guanqiu Qi, Jinchuan Wang, Qiong Zhang, Fancheng Zeng, Zhiqin Zhu
Entropy ◽ 2020 ◽ Vol 22 (12) ◽ pp. 1423
Author(s): Kai Guo, Xiongfei Li, Hongrui Zang, Tiehu Fan

To extract the physiological information and key features of the source images as fully as possible, improve the visual quality and clarity of the fused image, and reduce computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture-image-details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, a new encoder is proposed to capture all the medical information features in the source images. In the fusion layer, the weight of each feature map in the fusion coefficients is calculated from the trajectory of that feature map. Finally, the filtered medical information is concatenated and decoded to reconstruct the fused image. In the encoding and image-reconstruction networks, a mixed loss function combining cross entropy and structural similarity is adopted to greatly reduce information loss during fusion. Performance was assessed in three sets of experiments on grayscale and color medical images. The results show that, compared with other algorithms, the proposed method has advantages not only in detail and structure recognition but also in visual quality and time complexity.
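The mixed cross-entropy/structural-similarity loss mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the single-window (global) SSIM, the stability constants `c1`/`c2`, the clipping constant `eps`, and the weighting parameter `lam` are all assumptions made for illustration, and intensities are assumed to lie in [0, 1].

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Simplified single-window structural similarity between two
    images with intensities in [0, 1] (no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mixed_loss(pred, target, lam=0.5, eps=1e-7):
    """Weighted sum of pixel-wise binary cross entropy and an SSIM
    penalty, in the spirit of the abstract's mixed loss."""
    p = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()
    return lam * bce + (1 - lam) * (1 - global_ssim(pred, target))
```

Combining a pixel-wise term (cross entropy) with a structural term (SSIM) is a common way to penalize both intensity errors and loss of local structure during reconstruction.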


2016 ◽ Vol 214 ◽ pp. 471-482
Author(s): Zhiqin Zhu, Yi Chai, Hongpeng Yin, Yanxia Li, Zhaodong Liu

2021 ◽ Vol 2021 ◽ pp. 1-9
Author(s): Chuangeng Tian, Lu Tang, Xiao Li, Kaili Liu, Jian Wang

This paper proposes a perceptual medical image fusion framework based on morphological component analysis combined with convolutional sparsity and a pulse-coupled neural network (PCNN), abbreviated MCA-CS-PCNN. Source images are first decomposed into cartoon and texture components by morphological component analysis, and convolutional sparse representations of the cartoon and texture layers are produced with pre-learned dictionaries. The convolutional sparsity then serves as the stimulus that drives the PCNN's processing of the cartoon and texture layers. Finally, the fused medical image is obtained by combining the fused cartoon and texture layers. Experimental results verify that the MCA-CS-PCNN model is superior to state-of-the-art fusion strategies.
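The cartoon/texture decomposition and layer-wise fusion described above can be sketched as below. This is a deliberately simplified stand-in, not the MCA-CS-PCNN method: a mean filter replaces morphological component analysis for the cartoon/texture split, and per-pixel max-absolute selection replaces the convolutional-sparsity-driven PCNN, so the function names, the kernel size `k`, and the fusion rules are illustrative assumptions only.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude cartoon component via a k-by-k mean filter (a stand-in
    for morphological component analysis)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def fuse_two_scale(a, b, k=5):
    """Two-scale fusion: average the smooth cartoon layers, then pick
    the texture layer with the larger absolute response per pixel."""
    ca, cb = box_blur(a, k), box_blur(b, k)
    ta, tb = a - ca, b - cb           # texture = image minus cartoon
    cartoon = 0.5 * (ca + cb)         # base layers: average
    texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)  # details: max-abs
    return cartoon + texture
```

Averaging the base layers while max-selecting the detail layers is a standard two-scale fusion heuristic; the paper's contribution is replacing these fixed rules with dictionary-based sparse coding and PCNN firing activity.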


2015 ◽ Vol 157 ◽ pp. 143-152
Author(s): Gaurav Bhatnagar, Q.M. Jonathan Wu, Zheng Liu
