Multimodal medical image fusion based on nonsubsampled shearlet transform and convolutional sparse representation

Author(s): Lifang Wang, Jieliang Dou, Pinle Qin, Suzhen Lin, Yuan Gao, et al.
2019, Vol. 9 (9), pp. 1815-1826

Author(s): Liangliang Li, Linli Wang, Zuoxu Wang, Zhenhong Jia, Yujuan Si, et al.
2018, Vol. 153, pp. 379-395

Author(s): Xin Jin, Gao Chen, Jingyu Hou, Qian Jiang, Dongming Zhou, et al.

Author(s): Tannaz Akbarpour, Mousa Shamsi, Sabalan Daneshvar, Masoud Pooreisa

Medical image fusion plays a crucial role in many areas of modern medicine, such as diagnosis and therapy planning. Methods based on principal component analysis (PCA) have been used extensively in medical image fusion because of their computational simplicity, while methods based on multiresolution analysis are now attractive because of their ability to extract image details. A new method is proposed in this paper to combine both advantages. To this end, the source images are first transformed into a multiscale space using the nonsubsampled shearlet transform (NSST). Second, the principal components and corresponding weights of each subband are calculated; averaging them yields the weights used in the fusion step. Finally, the fused image is obtained by merging the source images according to these weights. Quantitative and qualitative analyses show that the proposed method outperforms well-known fusion methods and improves on the next-best method in terms of standard deviation, entropy, structural similarity, signal-to-noise ratio, and a fusion performance metric.
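
As a rough illustration of the fusion step outlined in the abstract, the sketch below computes classic PCA-based weights for each pair of corresponding subbands and averages them into a single pair of merging weights. It is a minimal sketch, not the authors' implementation: the NSST decomposition and reconstruction are assumed to be available elsewhere (there is no standard Python NSST library), the helper names pca_weights and fuse_subbands are hypothetical, and NumPy is the only dependency.

import numpy as np

# Minimal sketch of the PCA-based weighting described in the abstract.
# The NSST forward/inverse transforms are assumed to be provided elsewhere;
# only the weight computation and the weighted merge are shown here.

def pca_weights(band_a, band_b):
    # Flattened subbands form the rows of a 2 x N data matrix.
    data = np.stack([band_a.ravel(), band_b.ravel()])
    cov = np.cov(data)                      # 2 x 2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    principal = np.abs(eigvecs[:, -1])      # dominant principal component
    return principal / principal.sum()      # normalize so the weights sum to 1

def fuse_subbands(subbands_a, subbands_b):
    # Average the per-subband PCA weights into one pair of merging weights,
    # then merge each pair of corresponding subbands with those weights.
    per_band = [pca_weights(a, b) for a, b in zip(subbands_a, subbands_b)]
    w = np.mean(per_band, axis=0)
    return [w[0] * a + w[1] * b for a, b in zip(subbands_a, subbands_b)]

In the full pipeline, the merged subbands would then be passed through the inverse NSST to produce the fused image.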

