Multimodal Biomedical Image Fusion Method via Rolling Guidance Filter and Deep Convolutional Neural Networks

Optik ◽  
2021 ◽  
pp. 166726
Author(s):  
Jun Fu ◽  
Weisheng Li ◽  
Aijia Ouyang ◽  
Baiqing He

2013 ◽  
Vol 444-445 ◽  
pp. 1620-1624
Author(s):  
Xi Cai ◽  
Guang Han ◽  
Jin Kuan Wang

To simulate the biological activities of the human visual system (HVS), we propose a curvelet-based image fusion method using a unit-linking pulse coupled neural network (ULPCNN) model. The contrasts of the detail coefficients are input to the ULPCNNs to imitate the sensitivity of the HVS to detailed information, and the same contrasts also serve as the linking strengths of the corresponding neurons. When motivated by the external stimuli from the images, the ULPCNNs produce series of binary pulses that carry rich information about global features. We then use the average firing times of the output pulses in a neighborhood as the salience measure that determines our fusion rules. Experimental results demonstrate that the proposed method achieves satisfying fusion results in terms of both visual effects and objective evaluations.
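As a rough illustration of the fusion rule sketched in this abstract, the following minimal Python snippet runs a simplified unit-linking PCNN on the local contrast of two sets of detail coefficients and picks coefficients by neighborhood-averaged firing times. The contrast definition, the PCNN parameters (iterations, decay, amplitude), and the function names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ulpcnn_firing_times(stimulus, beta, iterations=20, decay=0.7, amp=20.0):
    """Simplified unit-linking PCNN; returns per-pixel firing counts.

    stimulus : external input (here, local contrast of detail coefficients)
    beta     : linking strength (the abstract reuses the contrast map)
    """
    Y = np.zeros_like(stimulus)           # output pulses
    theta = np.ones_like(stimulus)        # dynamic threshold
    fire_count = np.zeros_like(stimulus)
    for _ in range(iterations):
        # unit-linking: L = 1 if any neuron in the 3x3 neighborhood fired last step
        L = (uniform_filter(Y, size=3) > 0).astype(float)
        U = stimulus * (1.0 + beta * L)   # internal activity
        Y = (U > theta).astype(float)     # pulse where activity exceeds threshold
        theta = decay * theta + amp * Y   # raise threshold where a neuron fired
        fire_count += Y
    return fire_count

def fuse_detail_coefficients(cA, cB, window=5, **pcnn_kwargs):
    """Select detail coefficients with the larger neighborhood-averaged firing times."""
    # local contrast: coefficient magnitude relative to its neighborhood mean (an assumption)
    contrast_a = np.abs(cA) / (uniform_filter(np.abs(cA), size=window) + 1e-8)
    contrast_b = np.abs(cB) / (uniform_filter(np.abs(cB), size=window) + 1e-8)
    fire_a = ulpcnn_firing_times(contrast_a, beta=contrast_a, **pcnn_kwargs)
    fire_b = ulpcnn_firing_times(contrast_b, beta=contrast_b, **pcnn_kwargs)
    sal_a = uniform_filter(fire_a, size=window)   # salience = average firing times
    sal_b = uniform_filter(fire_b, size=window)
    return np.where(sal_a >= sal_b, cA, cB)
```

In the full method, the fused detail bands would be combined with fused approximation coefficients and passed through the inverse curvelet transform to obtain the fused image.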


Diagnostics ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 2379
Author(s):  
Yin Dai ◽  
Yumeng Song ◽  
Weibin Liu ◽  
Wenhe Bai ◽  
Yifan Gao ◽  
...  

Parkinson’s disease (PD) is a common neurodegenerative disease that has a significant impact on people’s lives. Early diagnosis is imperative, since proper treatment stops the disease’s progression. With the rapid development of computer-aided diagnosis (CAD) techniques, there have been numerous applications of CAD in the diagnosis of PD. In recent years, image fusion has been applied in various fields and is valuable in medical diagnosis. This paper adopts a multi-focus image fusion method based on deep convolutional neural networks to fuse magnetic resonance imaging (MRI) and positron emission tomography (PET) neuroimages into multi-modal images. Additionally, the study selected the AlexNet, DenseNet, ResNeSt, and EfficientNet neural networks to classify the single-modal MRI dataset and the multi-modal fused dataset. The test accuracy rates on the single-modal MRI dataset are 83.31%, 87.76%, 86.37%, and 86.44% for AlexNet, DenseNet, ResNeSt, and EfficientNet, respectively, whereas the test accuracy rates on the multi-modal fused dataset are 90.52%, 97.19%, 94.15%, and 93.39%. For all four networks, the test results on the multi-modal dataset are better than those on the single-modal MRI dataset. The experimental results show that the deep-learning-based multi-focus image fusion method can enhance the accuracy of PD image classification.
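The classification comparison described in the abstract amounts to training the same backbones on the fused and single-modal datasets and comparing test accuracy. Below is a minimal PyTorch sketch of that evaluation, assuming the fused (MRI+PET) and MRI-only images are stored as class-labelled image folders; the folder paths, preprocessing, and DenseNet-121 head replacement are assumptions for illustration, and the paper's actual fusion network is not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: data/<dataset>/test/<class>/*.png for the MRI-only and fused datasets.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def make_classifier(num_classes=2):
    """DenseNet-121 (one of the four compared backbones) with a replaced classification head."""
    net = models.densenet121(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
    net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    return net

def evaluate(model, loader, device="cpu"):
    """Top-1 test accuracy, the metric reported in the abstract."""
    model.eval().to(device)
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1)
            correct += (preds.cpu() == labels).sum().item()
            total += labels.numel()
    return correct / total

# Compare the single-modal and fused test sets (paths are placeholders).
for name in ["mri_only", "mri_pet_fused"]:
    test_set = datasets.ImageFolder(f"data/{name}/test", transform=preprocess)
    loader = DataLoader(test_set, batch_size=32, shuffle=False)
    model = make_classifier()   # in practice, fine-tune on the matching train split first
    print(name, evaluate(model, loader))
```

The same loop applies to the other backbones by swapping the constructor in make_classifier (e.g. models.alexnet or models.efficientnet_b0 from torchvision; ResNeSt would come from a third-party implementation).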

