Medical image fusion: A survey of the state of the art

2014 ◽ Vol 19 ◽ pp. 4-19 ◽ Author(s): Alex Pappachen James, Belur V. Dasarathy

2021 ◽ Vol 15 ◽ Author(s): Yi Li, Junli Zhao, Zhihan Lv, Zhenkuan Pan

This article proposes a multimodal medical image fusion method based on CNNs and supervised learning, aimed at practical medical diagnosis. The method handles different types of multimodal medical image fusion problems in batch-processing mode and effectively overcomes the limitation of traditional approaches, which can fuse only a single pair of images at a time. It thereby substantially improves fusion quality, image detail clarity, and time efficiency. The experimental results indicate that the proposed method exhibits state-of-the-art fusion performance in terms of visual quality and a variety of quantitative evaluation criteria, and that it applies to a wide range of medical diagnostic settings.
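The abstract describes activity-driven, batch-mode fusion of co-registered sources. As a minimal sketch of that idea (not the paper's actual CNN: local gradient energy stands in for learned feature activity, and the weighted average stands in for the learned fusion rule), per-pixel weights can be derived from each source's activity and applied over a batch of image pairs:

```python
import numpy as np

def activity(img, eps=1e-8):
    """Local activity measure: squared gradient magnitude, a common
    hand-crafted stand-in for learned CNN feature activity."""
    gy, gx = np.gradient(img.astype(np.float64))
    return gx**2 + gy**2 + eps

def fuse_pair(a, b):
    """Weighted average of two co-registered source images, with
    per-pixel weights proportional to each image's local activity."""
    wa, wb = activity(a), activity(b)
    return (wa * a + wb * b) / (wa + wb)

def fuse_batch(pairs):
    """Batch-processing mode: fuse a list of (a, b) image pairs."""
    return [fuse_pair(a, b) for a, b in pairs]
```

For two constant sources the weights are equal and the result is their mean; in regions where one modality carries more structure, its higher activity dominates the weighting.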


2021 ◽ Vol 2021 ◽ pp. 1-9 ◽ Author(s): Chuangeng Tian, Lu Tang, Xiao Li, Kaili Liu, Jian Wang

This paper proposes a perceptual medical image fusion framework based on morphological component analysis (MCA) combining convolutional sparsity and a pulse-coupled neural network (PCNN), called MCA-CS-PCNN for short. Source images are first decomposed into cartoon and texture components by morphological component analysis, and convolutional sparse representations of the cartoon and texture layers are produced with pre-learned dictionaries. Convolutional sparsity is then used as the stimulus that drives the PCNN's processing of the cartoon and texture layers. Finally, the fused medical image is obtained by combining the fused cartoon and texture layers. Experimental results verify that the MCA-CS-PCNN model is superior to state-of-the-art fusion strategies.
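The pipeline shape described above (decompose each source into cartoon and texture layers, fuse each layer by an activity-driven rule, recombine) can be sketched with simple stand-ins: a box-filter low-pass replaces MCA, and per-pixel magnitude comparison replaces the PCNN firing maps driven by convolutional sparsity. This illustrates only the structure, not the paper's actual components:

```python
import numpy as np

def box_blur(img, k=5):
    """k-by-k box filter (low-pass) via shifted-sum accumulation."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_mca_style(a, b, k=5):
    """Structural sketch of MCA-CS-PCNN: split each source into cartoon
    (low-pass) and texture (residual) layers, fuse each layer by picking
    the pixel with larger magnitude (a crude stand-in for PCNN activity),
    then recombine the fused layers."""
    ca, cb = box_blur(a, k), box_blur(b, k)
    ta, tb = a - ca, b - cb                       # texture = source - cartoon
    cartoon = np.where(np.abs(ca) >= np.abs(cb), ca, cb)
    texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)
    return cartoon + texture
```

The layer-wise design matters: cartoon layers carry anatomy-scale structure while texture layers carry fine detail, so each can use a fusion rule tuned to its content.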


2020 ◽ Vol 34 (07) ◽ pp. 12797-12804 ◽ Author(s): Hao Zhang, Han Xu, Yang Xiao, Xiaojie Guo, Jiayi Ma

In this paper, we propose a fast unified image fusion network based on proportional maintenance of gradient and intensity (PMGI), which realizes a variety of image fusion tasks end-to-end, including infrared and visible image fusion, multi-exposure image fusion, medical image fusion, multi-focus image fusion, and pan-sharpening. We unify the image fusion problem as proportional maintenance of the texture and intensity of the source images. On the one hand, the network is divided into a gradient path and an intensity path for information extraction. We perform feature reuse within each path to avoid information loss due to convolution. At the same time, we introduce a pathwise transfer block to exchange information between the paths, which not only pre-fuses the gradient and intensity information but also enhances the information to be processed later. On the other hand, we define a uniform loss function based on these two kinds of information, which adapts to different fusion tasks. Experiments on publicly available datasets demonstrate the superiority of PMGI over the state-of-the-art in terms of both visual effect and quantitative metrics across a variety of fusion tasks. In addition, our method is faster than the state-of-the-art.
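The "uniform loss based on these two kinds of information" can be sketched as a weighted sum of an intensity term and a gradient term, where the proportion weights select the fusion task. The weights, the trade-off parameter, and the exact gradient operator below are illustrative assumptions, not the values or formulation from the paper:

```python
import numpy as np

def grad_mag(img):
    """Finite-difference gradient magnitude (texture information)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def pmgi_style_loss(fused, src1, src2, w=(0.5, 0.5), g=(0.5, 0.5), lam=1.0):
    """Loss in the spirit of PMGI's uniform formulation: the fused image
    should keep a chosen proportion of each source's intensity and each
    source's gradient. w, g, and lam are task-dependent hyperparameters
    (the values here are placeholders, not from the paper)."""
    l_int = np.mean((fused - (w[0] * src1 + w[1] * src2)) ** 2)
    l_grad = np.mean(
        (grad_mag(fused) - (g[0] * grad_mag(src1) + g[1] * grad_mag(src2))) ** 2
    )
    return l_int + lam * l_grad
```

Changing only (w, g) re-targets the same objective to a different fusion task, which is what lets one network and one loss form cover infrared-visible, multi-exposure, medical, multi-focus, and pan-sharpening settings.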


Author(s): Raja Krishnamoorthi, Annapurna Bai, A. Srinivas

2017 ◽ Vol 9 (4) ◽ pp. 61 ◽ Author(s): Guanqiu Qi, Jinchuan Wang, Qiong Zhang, Fancheng Zeng, Zhiqin Zhu
