Medical Image Fusion Based on Sparse Representation and PCNN in NSCT Domain

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Jingming Xia ◽  
Yiming Chen ◽  
Aiyue Chen ◽  
Yicai Chen

Clinical computer-aided diagnosis places high demands on the visual quality of medical images. However, the low-frequency subband coefficients produced by NSCT decomposition are not sparse, which makes it difficult to preserve the details of the source images. To address this problem, a medical image fusion algorithm combining sparse representation and a pulse-coupled neural network is proposed. First, the source image is decomposed into low- and high-frequency subband coefficients by the NSCT transform. Second, the K-singular value decomposition (K-SVD) method is used to train an overcomplete dictionary D on the low-frequency subband coefficients, and the orthogonal matching pursuit (OMP) algorithm sparsely codes these coefficients to complete the fusion of the low-frequency sparse coefficients. Then, a pulse-coupled neural network (PCNN) is excited by the spatial frequency of the high-frequency subband coefficients, and the fused high-frequency coefficients are selected according to the number of firing times. Finally, the fused medical image is reconstructed by the inverse NSCT. The experimental results and analysis show that, on the edge information transfer factor QAB/F, the algorithm improves on the comparison algorithms by about 34% for grayscale and 10% for color image fusion, and its fusion results outperform the existing algorithms.
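As an illustrative sketch of the low-frequency fusion step described above (not the authors' implementation; the identity dictionary in the test, the sparsity level `k`, and the max-L1 selection rule are assumptions here), OMP coding and sparse-coefficient fusion can be written in plain NumPy:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most k atoms of D."""
    residual, idx = x.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)  # refit on chosen atoms
        residual = x - D[:, idx] @ coef
    s = np.zeros(D.shape[1])
    s[idx] = coef
    return s

def fuse_low_frequency(D, x1, x2, k=3):
    """Fuse two low-frequency patches: keep the sparse code with the larger L1 norm."""
    s1, s2 = omp(D, x1, k), omp(D, x2, k)
    s = s1 if np.abs(s1).sum() >= np.abs(s2).sum() else s2
    return D @ s  # reconstruct the fused patch from the winning code
```

In the paper the dictionary D would come from K-SVD training on low-frequency patches; here any column-normalized matrix can stand in for it.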

2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Jingming Xia ◽  
Yi Lu ◽  
Ling Tan

Visual effects of medical images have a great impact on clinical computer-aided diagnosis, and medical image fusion has become a powerful means of clinical application. Traditional medical image fusion methods often produce poor results because detailed feature information is lost during fusion. To address this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, which sets its parameters adaptively and optimizes the linking strength β to improve performance. The low-frequency coefficients are merged by a convolutional sparse representation (CSR) model. The experimental results show that the proposed method overcomes both the difficult parameter setting of traditional PCNN algorithms and the poor detail preservation of sparse representation during image fusion, and that it has significant advantages in visual effect and objective indices over the existing mainstream fusion algorithms.
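A minimal sketch of the kind of PCNN dynamics used for the high-frequency coefficients might look as follows. This is a generic simplified PCNN, not the paper's PAPCNN; all coefficient values (`af`, `beta`, `vt`, the linking kernel, etc.) are illustrative assumptions rather than the adaptively derived parameters the method computes:

```python
import numpy as np

def _conv_same(img, kernel):
    """'Same'-size 2-D convolution via shifted, zero-padded slices."""
    p = kernel.shape[0] // 2
    padded = np.pad(img, p)
    out = np.zeros_like(img, dtype=float)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def pcnn_fire_counts(S, iters=30, af=0.1, al=1.0, at=0.3,
                     beta=0.2, vf=0.5, vl=0.2, vt=20.0):
    """Total firing count per pixel of a simplified PCNN driven by stimulus S."""
    F = np.zeros_like(S, dtype=float); L = F.copy()
    Y = F.copy(); T = np.ones_like(S, dtype=float)
    fires = np.zeros_like(S, dtype=float)
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(iters):
        link = _conv_same(Y, W)               # feedback from neighbours that fired
        F = np.exp(-af) * F + vf * link + S   # feeding input
        L = np.exp(-al) * L + vl * link       # linking input
        U = F * (1.0 + beta * L)              # internal activity
        Y = (U > T).astype(float)             # fire where activity beats threshold
        T = np.exp(-at) * T + vt * Y          # threshold jumps after firing, then decays
        fires += Y
    return fires
```

A stronger stimulus (e.g. a larger high-frequency coefficient magnitude) fires earlier and more often, which is what makes the firing count usable as a fusion activity measure.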


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Ling Tan ◽  
Xin Yu

Clinical diagnosis places high demands on the visual quality of medical images. To obtain fused medical images with rich detail features and clear edges, an image fusion algorithm, FFST-SR-PCNN, based on the fast finite shearlet transform (FFST) and sparse representation is proposed, targeting the poor edge-detail clarity of current algorithms. Firstly, the source image is decomposed into low-frequency and high-frequency coefficients by FFST. Secondly, the K-SVD method is used to train an overcomplete dictionary D on the low-frequency coefficients, and the OMP algorithm then sparsely encodes them to complete the fusion of the low-frequency coefficients. Next, the high-frequency coefficients excite a pulse-coupled neural network, and the fused high-frequency coefficients are selected according to the number of firings. Finally, the fused low-frequency and high-frequency coefficients are reconstructed into the fused medical image by the inverse FFST. The experimental results show that the proposed algorithm scores about 35% higher than the comparison algorithms on the edge information transfer factor QAB/F and achieves good results in both subjective visual effect and objective evaluation indicators.
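Once firing counts are available for each source image's high-frequency band, the selection rule amounts to a per-position comparison. The following sketch assumes ties go to the first image, which is an illustrative choice, not something the abstract specifies:

```python
import numpy as np

def select_by_firing(c1, c2, fires1, fires2):
    """Pick, per position, the high-frequency coefficient whose PCNN fired more often."""
    return np.where(fires1 >= fires2, c1, c2)
```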


2021 ◽  
Vol 12 (4) ◽  
pp. 78-97
Author(s):  
Hassiba Talbi ◽  
Mohamed-Khireddine Kholladi

In this paper, the authors propose a hybrid particle swarm optimization algorithm with a differential evolution (DE) operator, termed DEPSO, combined with a multi-resolution transform, the dual-tree complex wavelet transform (DTCWT), to solve the problem of multimodal medical image fusion. The hybridization aims to combine the algorithms judiciously, so that the resulting algorithm retains the positive features of both. The new algorithm decomposes the source images into high-frequency and low-frequency coefficients by the DTCWT, then adopts the absolute-maximum method to fuse the high-frequency coefficients; the low-frequency coefficients are fused by a weighted average whose weights are estimated and refined by the optimizer to obtain optimal results. The authors demonstrate experimentally that this algorithm, besides its simplicity, provides a robust and efficient way to fuse multimodal medical images compared with existing wavelet-transform-based image fusion algorithms.
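The two fusion rules named here are simple to state in code. A minimal sketch (the scalar weight `w` stands in for the per-image weights that DEPSO would actually optimize):

```python
import numpy as np

def fuse_high(h1, h2):
    """Absolute-maximum rule: keep the high-frequency coefficient with larger magnitude."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)

def fuse_low(l1, l2, w):
    """Weighted average of low-frequency coefficients; w would be tuned by the optimizer."""
    return w * l1 + (1.0 - w) * l2
```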


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing often requires either the full image or a user-specified region of interest, such as the radius of an object. The main purpose of fusion is to reduce the error between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image is a worthwhile direction for image fusion research; an image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the discrete contourlet transformation, which captures the details of curved edges and improves the edge information of the fused image by reducing distortion. The transformation decomposes the multimodal image into finer and coarser details, and the finest details are further decomposed at different resolutions in different orientations. The input multimodal images, CT and MRI, are first transformed by the non-subsampled contourlet transform (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our system, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, and the high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-match-based coefficients. Peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and correlation coefficient are used in this work to evaluate fusion accuracy.
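The three evaluation metrics named at the end are standard and easy to reproduce. A minimal NumPy sketch (the 255 peak value assumes 8-bit images):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between fused image a and reference b."""
    return np.sqrt(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return np.inf if e == 0 else 20.0 * np.log10(peak / e)

def corr_coeff(a, b):
    """Pearson correlation coefficient between the flattened images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]
```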


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Zhaisheng Ding ◽  
Dongming Zhou ◽  
Rencan Nie ◽  
Ruichao Hou ◽  
Yanyu Liu

Computed tomography (CT) images show structural features, while magnetic resonance imaging (MRI) images represent brain tissue anatomy but do not contain any functional information. How to effectively combine images of the two modes has become a research challenge. In this paper, a new framework for medical image fusion is proposed that combines convolutional neural networks (CNNs) and the non-subsampled shearlet transform (NSST) to exploit the advantages of both. This method effectively retains the functional information of the CT image and reduces the loss of brain structure information and spatial distortion of the MRI image. In our fusion framework, initial weights integrating the pixel activity information of the two source images are generated by a dual-branch convolutional network and decomposed by NSST. Firstly, NSST is performed on the source images and the initial weights to obtain their low-frequency and high-frequency coefficients. Then, the first component of the low-frequency coefficients is fused by a novel strategy that simultaneously copes with two key issues in fusion processing, namely energy conservation and detail extraction. The second component of the low-frequency coefficients is fused by a strategy designed around the spatial frequency of the weight map. Moreover, the high-frequency coefficients are fused using the high-frequency components of the initial weights. Finally, the final image is reconstructed by the inverse NSST. The effectiveness of the proposed method is verified on pairs of multimodality images, and extensive experiments indicate that our method performs well, especially for medical image fusion.
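The general idea of a pixel-activity weight map driving a per-pixel weighted fusion can be sketched simply. This is a hand-crafted gradient-based stand-in for the dual-branch CNN the paper uses; the gradient-magnitude activity measure and the soft normalization are assumptions for illustration only:

```python
import numpy as np

def activity_weight(a, b, eps=1e-8):
    """Soft weight map for image a from relative gradient-magnitude activity."""
    def act(img):
        gy, gx = np.gradient(img.astype(float))
        return np.abs(gx) + np.abs(gy)
    wa, wb = act(a), act(b)
    return wa / (wa + wb + eps)

def fuse_with_weight(a, b, w):
    """Per-pixel weighted combination of the two source images."""
    return w * a + (1.0 - w) * b
```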


Author(s):  
Mummadi Gowthami Reddy ◽  
Palagiri Veera Narayana Reddy ◽  
Patil Ramana Reddy

In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion can be a powerful tool for combining multi-modal images using image processing techniques. However, conventional approaches fail to deliver effective image quality and robustness in the fused image. To overcome these drawbacks, a three-stage multiscale decomposition (TSMSD) using pulse-coupled neural networks with adaptive arguments (PCNN-AA) is proposed for multi-modal medical image fusion. Initially, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. Then, the low-frequency bands of both source images are fused using nonlinear anisotropic filtering with the discrete Karhunen-Loeve transform (NLAF-DKLT), and the high-frequency bands obtained from NSST are fused using the PCNN-AA approach. The fused low-frequency and high-frequency bands are then reconstructed using NSST reconstruction. Finally, a band fusion rule algorithm with pyramid reconstruction is applied to obtain the final fused medical image. Extensive simulation results demonstrate the superiority of the proposed TSMSD using PCNN-AA over state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC), and computational complexity.
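Two of the quality metrics listed here, entropy and mutual information, are histogram-based and can be sketched directly (the bin counts and the 8-bit range are assumptions; published implementations vary):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram, assuming 8-bit range."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_info(a, b, bins=32):
    """Mutual information between two images via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

In fusion papers the reported MI is typically the sum of the mutual information between the fused image and each source image.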


2020 ◽  
Vol 14 ◽  
pp. 174830262093129
Author(s):  
Zhang Zhancheng ◽  
Luo Xiaoqing ◽  
Xiong Mengyu ◽  
Wang Zhiwen ◽  
Li Kai

Medical image fusion can combine multi-modal images into an integrated, higher-quality image that provides more comprehensive and accurate pathological information than any individual image. Traditional transform-domain image fusion methods usually ignore the dependencies between coefficients, which can lead to inaccurate representation of the source images. To improve the quality of the fused image, a medical image fusion method based on the dependencies of quaternion wavelet transform coefficients is proposed. First, the source images are decomposed into low-frequency and high-frequency components by the quaternion wavelet transform. Then, a clarity evaluation index based on quaternion wavelet transform amplitude and phase is constructed and a contextual activity measure is designed; these measures are used to fuse the high-frequency coefficients, while the choose-max fusion rule is applied to the low-frequency components. Finally, the fused image is obtained by the inverse quaternion wavelet transform. Experimental results on brain multi-modal medical images demonstrate that the proposed method achieves advanced fusion results.
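A contextual activity measure of the kind mentioned here typically aggregates coefficient magnitudes over a local window so that a coefficient's neighbours influence its selection. The window size and the unweighted sum below are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def contextual_activity(coeff, win=3):
    """Sum of |coefficients| over a sliding window: a simple contextual activity measure."""
    p = win // 2
    padded = np.pad(np.abs(coeff), p)
    out = np.zeros_like(coeff, dtype=float)
    for i in range(win):
        for j in range(win):
            out += padded[i:i + coeff.shape[0], j:j + coeff.shape[1]]
    return out

def fuse_by_activity(c1, c2):
    """Per-position choose-max on contextual activity rather than raw magnitude."""
    return np.where(contextual_activity(c1) >= contextual_activity(c2), c1, c2)
```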


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 591
Author(s):  
Liangliang Li ◽  
Hongbing Ma

Multimodal medical image fusion aims to fuse images with complementary multisource information. In this paper, we propose a novel multimodal medical image fusion method using a pulse-coupled neural network (PCNN) and a weighted sum of eight-neighborhood-based modified Laplacian (WSEML) integrating guided image filtering (GIF) in the non-subsampled contourlet transform (NSCT) domain. Firstly, the source images are decomposed by NSCT into several low- and high-frequency sub-bands. Secondly, the PCNN-based fusion rule is used to process the low-frequency components, and the GIF-WSEML fusion model is used to process the high-frequency components. Finally, the fused image is obtained by integrating the fused low- and high-frequency sub-bands. The experimental results demonstrate that the proposed method achieves better performance in multimodal medical image fusion, with clear advantages in the objective evaluation indices VIFF, QW, API, SD, and EN, as well as in time consumption.
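The WSEML focus measure can be sketched in NumPy: an eight-neighborhood modified Laplacian (sums of |2c − left neighbor − right neighbor| over the horizontal, vertical, and both diagonal directions) followed by a weighted window sum. The Gaussian-like window weights below are an assumption; the paper's exact weights may differ:

```python
import numpy as np

def wseml(img):
    """Weighted window sum of an eight-neighbourhood modified Laplacian."""
    H, W = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    c = p[1:H + 1, 1:W + 1]
    nb = lambda dy, dx: p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    ml = np.zeros((H, W))
    # opposite-neighbour pairs: horizontal, vertical, and both diagonals
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        ml += np.abs(2.0 * c - nb(dy, dx) - nb(-dy, -dx))
    # weighted 3x3 window sum (Gaussian-like weights are an assumption)
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    pm = np.pad(ml, 1)
    out = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * pm[i:i + H, j:j + W]
    return out
```

Larger WSEML values mark positions with more local detail, so a choose-max rule over this measure favors the sharper source at each position.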


Author(s):  
Guofen Wang ◽  
Yongdong Huang

The medical image fusion process integrates the information of multiple source images into a single image. This fused image can provide more comprehensive information and is helpful in clinical diagnosis and treatment. In this paper, a new medical image fusion algorithm is proposed. Firstly, the original image is decomposed into a low-frequency sub-band and a series of high-frequency sub-bands using the nonsubsampled shearlet transform (NSST). For the low-frequency sub-band, the Kirsch operator is used to extract directional feature maps from eight directions, and the novel sum-modified-Laplacian (NSML) method is used to calculate the salient information of each directional feature map; the fusion weight coefficients of the directional feature maps are then computed by combining a sigmoid function with the salient information updated by gradient domain guided image filtering (GDGF). Each fused feature map is obtained by summing the convolutions of the weight coefficients and the directional feature maps, and the final fused low-frequency sub-band is the linear combination of the eight fused directional feature maps. The modified pulse-coupled neural network (MPCNN) model is used to calculate the firing times of each high-frequency sub-band coefficient, and the fused high-frequency sub-bands are selected according to the firing times. Finally, the inverse NSST is applied to the fused low-frequency and high-frequency sub-bands to obtain the fused image. The experimental results show that the proposed algorithm offers advantages over classical medical image fusion algorithms in both objective and subjective evaluation.
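The eight directional feature maps come from the Kirsch compass operator, whose kernels are rotations of a single mask's outer ring. A self-contained sketch (edge padding and the plain sliding-window correlation are implementation assumptions):

```python
import numpy as np

def kirsch_kernels():
    """Eight Kirsch compass kernels: rotations of the outer ring of the north kernel."""
    ring = [5, 5, 5, -3, -3, -3, -3, -3]             # clockwise from top-left
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    kernels = []
    for r in range(8):
        k = np.zeros((3, 3))
        for v, (i, j) in zip(ring[r:] + ring[:r], pos):
            k[i, j] = v
        kernels.append(k)
    return kernels

def directional_maps(img):
    """Correlate the image with each Kirsch kernel to get eight directional maps."""
    H, W = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    maps = []
    for k in kirsch_kernels():
        out = np.zeros((H, W))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + H, j:j + W]
        maps.append(out)
    return maps
```

Each kernel sums to zero, so flat regions produce zero response and only directional edges survive into the feature maps that NSML then scores.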

