Multi-scale decomposition-based medical image fusion using convolutional neural network and sparse representation

2021 ◽  
Vol 69 ◽  
pp. 102789
Author(s):  
D. Sunderlin Shibu ◽  
S. Suja Priyadharsini
2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Jingming Xia ◽  
Yiming Chen ◽  
Aiyue Chen ◽  
Yicai Chen

Clinical computer-aided diagnosis places high demands on the visual quality of medical images. However, the low-frequency subband coefficients obtained by NSCT decomposition are not sparse, which makes it difficult to preserve the details of the source images. To solve these problems, a medical image fusion algorithm combining sparse representation and a pulse-coupled neural network is proposed. First, each source image is decomposed into low- and high-frequency subband coefficients by the NSCT transform. Second, the K-singular value decomposition (K-SVD) method is used to train an overcomplete dictionary D on the low-frequency subband coefficients, and the orthogonal matching pursuit (OMP) algorithm sparsely codes these coefficients to complete the fusion of the low-frequency sparse coefficients. Then, a pulse-coupled neural network (PCNN) is excited by the spatial frequency of the high-frequency subband coefficients, and the fused high-frequency coefficients are selected according to the firing counts. Finally, the fused medical image is reconstructed by the inverse NSCT. Experimental results and analysis show that, on the edge-information transfer factor (QAB/F) index, the algorithm outperforms the comparison algorithms by about 34% for gray-scale image fusion and about 10% for color image fusion, and the fusion results are better than those of existing algorithms.
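As a rough illustration of the low-frequency fusion step described above, the sketch below pairs a toy orthogonal matching pursuit with a max-L1 coefficient-selection rule (a common choice for sparse-coefficient fusion; the abstract does not state the exact rule). In practice the dictionary `D` would be learned by K-SVD from low-frequency patches; here `omp` and `fuse_lowpass` are illustrative names, and any dictionary with unit-norm atoms can stand in for a trained one.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most k atoms of D."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected atoms, then update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

def fuse_lowpass(patch_a, patch_b, D, k=4):
    """Sparse-code each low-frequency patch, keep the coefficient vector
    with the larger L1 activity (max-L1 rule), and reconstruct."""
    ca = omp(D, patch_a.ravel(), k)
    cb = omp(D, patch_b.ravel(), k)
    c = ca if np.abs(ca).sum() >= np.abs(cb).sum() else cb
    return (D @ c).reshape(patch_a.shape)
```

With an identity dictionary the coding is trivial to follow: the patch with more sparse "energy" wins and is reconstructed exactly.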


2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Jingming Xia ◽  
Yi Lu ◽  
Ling Tan

The visual quality of medical images has a great impact on clinical computer-aided diagnosis, and medical image fusion has become a powerful tool in clinical applications. Traditional medical image fusion methods often produce poor results because detailed feature information is lost during fusion. To address this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, which sets its parameters adaptively and optimizes the linking strength β to improve performance. The low-frequency coefficients are merged by a convolutional sparse representation (CSR) model. Experimental results show that the proposed method overcomes the difficult parameter setting of traditional PCNN algorithms and the poor detail preservation of sparse representation during image fusion, and it has significant advantages in visual quality and objective indices over existing mainstream fusion algorithms.
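The firing-count selection at the heart of PCNN-style high-frequency rules can be sketched in a strongly simplified form. This is not the exact PAPCNN of the paper: the linking is a plain 8-neighbour sum, and `beta`, `alpha_theta`, and `V_theta` are illustrative fixed parameters rather than adaptively derived ones.

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of the 8-neighbourhood of each pixel (zero padding)."""
    P = np.pad(Y, 1)
    s = np.zeros_like(Y)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                s += P[1 + di:1 + di + Y.shape[0], 1 + dj:1 + dj + Y.shape[1]]
    return s

def pcnn_fire_counts(S, beta=0.2, alpha_theta=0.2, V_theta=20.0, n_iter=50):
    """Simplified PCNN: the stimulus S feeds each neuron, the linking input
    comes from neighbours' previous firing, and the dynamic threshold decays
    exponentially and jumps after each firing. Returns per-pixel fire counts."""
    S = np.abs(S)
    Y = np.zeros_like(S)
    theta = np.ones_like(S) * S.max()
    counts = np.zeros_like(S)
    for _ in range(n_iter):
        L = neighbor_sum(Y)
        U = S * (1.0 + beta * L)          # internal activity
        Y = (U > theta).astype(S.dtype)   # fire where activity beats threshold
        counts += Y
        theta = np.exp(-alpha_theta) * theta + V_theta * Y
    return counts

def fuse_highpass(Ha, Hb, **kw):
    """Pick, per coefficient, the source whose neuron fired more often."""
    ca, cb = pcnn_fire_counts(Ha, **kw), pcnn_fire_counts(Hb, **kw)
    return np.where(ca >= cb, Ha, Hb)
```

Because larger stimuli cross the decaying threshold earlier and more often, the fire count acts as an activity measure, so the rule favours the source with the stronger high-frequency response at each location.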


2019 ◽  
Vol 16 (Special Issue) ◽  
Author(s):  
Abolfazl Sedighi ◽  
Alireza Nikravanshalmani ◽  
Madjid Khalilian

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Lei Wang ◽  
Chunhong Chang ◽  
Zhouqi Liu ◽  
Jin Huang ◽  
Cong Liu ◽  
...  

Traditional medical image fusion methods, such as the well-known multi-scale decomposition-based methods, usually suffer from poor sparse representation of salient features and fusion rules with limited ability to transfer the captured feature information. To deal with this problem, a medical image fusion method is proposed that combines the scale-invariant feature transform (SIFT) descriptor and a deep convolutional neural network (CNN) in the shift-invariant shearlet transform (SIST) domain. First, the images to be fused are decomposed into high-pass and low-pass coefficients. Then, the high-pass components are fused under a rule based on a pre-trained CNN model, which consists of four steps: feature detection, initial segmentation, consistency verification, and the final fusion; the low-pass subbands are fused according to a matching degree computed from SIFT descriptors, which captures the features of the low-frequency components. Finally, the fusion result is obtained by the inverse SIST. Taking standard deviation, QAB/F, entropy, and mutual information as objective measures, the experimental results demonstrate that the proposed method preserves detailed information well, without artifacts or distortions, and also achieves better quantitative performance.
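Of the four steps in the CNN-based high-pass rule, consistency verification can be sketched independently of the network: an activity measure produces an initial per-pixel decision map, and a majority filter flips isolated decisions so the map stays spatially coherent before coefficients are selected. In this sketch a simple local-energy score stands in for the CNN feature map (an assumption; the paper uses network activations), and all function names are illustrative.

```python
import numpy as np

def local_energy(H, r=1):
    """Sum of squared coefficients in a (2r+1)x(2r+1) window (zero padding)."""
    P = np.pad(H * H, r)
    E = np.zeros_like(H)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            E += P[r + di:r + di + H.shape[0], r + dj:r + dj + H.shape[1]]
    return E

def consistency_verify(mask, r=1):
    """Majority filter: each decision is replaced by the majority vote of
    its (2r+1)x(2r+1) window, flipping isolated outliers."""
    P = np.pad(mask.astype(float), r, mode='edge')
    votes = np.zeros_like(mask, dtype=float)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            votes += P[r + di:r + di + mask.shape[0], r + dj:r + dj + mask.shape[1]]
    return votes > ((2 * r + 1) ** 2) / 2

def fuse_with_verification(Ha, Hb):
    """Initial decision by local energy, consistency verification,
    then per-coefficient selection."""
    mask = local_energy(Ha) >= local_energy(Hb)
    mask = consistency_verify(mask)
    return np.where(mask, Ha, Hb)
```

The same verification step applies unchanged whatever activity measure drives the initial segmentation, which is why it is often described separately from the feature-detection front end.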

