Multi-Modal Medical Image Fusion Algorithm Based On Spatial Frequency Motivated PA-PCNN In NSST Domain

Author(s):  
Vanitha Kamarthi ◽  
D. Satyanarayana ◽  
M.N. Giri Prasad

Background: Image fusion has grown into an effective tool in disease-diagnosis schemes. Methods: In this paper, a new method for combining multimodal medical images using a spatial frequency motivated parameter-adaptive PCNN (SF-PAPCNN) is suggested. The multimodal images are decomposed into frequency bands by the non-subsampled shearlet transform (NSST). The low-frequency band coefficients are selected using the maximum rule, and the high-frequency band coefficients are combined by the SF-PAPCNN. Results: The fused medical image is obtained by applying the inverse NSST (INSST) to the fused coefficients. Conclusion: Quality metrics such as entropy (ENT), fusion symmetry (FS), standard deviation (STD), mutual information (QMI) and edge strength (QAB/F) are used to validate the efficacy of the suggested scheme.
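As an illustration of the spatial-frequency measure that drives the PA-PCNN fusion of the high-frequency bands, the short NumPy sketch below computes block spatial frequency from row and column gradients (a generic formulation, not the authors' code):

```python
import numpy as np

def spatial_frequency(band):
    """Block activity measure: SF = sqrt(RF^2 + CF^2) from row/column differences.

    A common way to feed high-frequency coefficients into a PCNN-style
    fusion rule (a sketch, not the paper's exact implementation).
    """
    band = band.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(band, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(band, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

A larger SF value marks a more detailed region and therefore a stronger stimulus for the PCNN firing process.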

2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Ling Tan ◽  
Xin Yu

Clinical diagnosis places high requirements on the visual quality of medical images. To obtain rich detail features and clear edges in fused medical images, an image fusion algorithm, FFST-SR-PCNN, based on the fast finite shearlet transform (FFST) and sparse representation is proposed, addressing the poor clarity of edge details in current algorithms. Firstly, the source image is decomposed into low-frequency and high-frequency coefficients by the FFST. Secondly, the K-SVD method is used to train an overcomplete dictionary D on the low-frequency coefficients, and the OMP algorithm then sparsely encodes the low-frequency coefficients to complete their fusion. Then, the high-frequency coefficients are used to excite a pulse-coupled neural network, and the fused high-frequency coefficients are selected according to the number of firings. Finally, the fused low-frequency and high-frequency coefficients are reconstructed into the fused medical image by the inverse FFST. The experimental results show that the proposed algorithm scores about 35% higher than the comparison algorithms on the edge information transfer factor QAB/F and achieves good results in both subjective visual effects and objective evaluation indicators.
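The dictionary-learning step of the low-frequency fusion can be sketched with scikit-learn, using MiniBatchDictionaryLearning as a stand-in for K-SVD and orthogonal_mp for the sparse coding; the patch size, the max-L1 fusion rule and the omitted patch reassembly are assumptions of this sketch, not details from the paper:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.linear_model import orthogonal_mp

def fuse_lowfreq_sparse(low_a, low_b, patch=8, n_atoms=128, k=4, n_train=2000):
    # Flatten overlapping patches of both low-frequency bands into rows.
    pa = extract_patches_2d(low_a.astype(np.float64), (patch, patch)).reshape(-1, patch * patch)
    pb = extract_patches_2d(low_b.astype(np.float64), (patch, patch)).reshape(-1, patch * patch)
    # Learn an overcomplete dictionary on a subset of patches from both sources
    # (MiniBatchDictionaryLearning stands in for the paper's K-SVD training).
    train = np.vstack([pa[:n_train], pb[:n_train]])
    D = MiniBatchDictionaryLearning(n_components=n_atoms).fit(train).components_.T
    # Sparse-code every patch with OMP against the shared dictionary.
    ca = orthogonal_mp(D, pa.T, n_nonzero_coefs=k)
    cb = orthogonal_mp(D, pb.T, n_nonzero_coefs=k)
    # Fusion rule: per patch, keep the sparse code with the larger L1 norm
    # (a common choice; the exact rule is an assumption here).
    pick_a = np.abs(ca).sum(axis=0) >= np.abs(cb).sum(axis=0)
    fused_patches = (D @ np.where(pick_a, ca, cb)).T
    return fused_patches  # reassembling patches into the fused band is omitted
```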


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing applications often require either the full image or a part of it (for example, the radius of an object of interest) from the user's point of view. The main purpose of fusion is to reduce the discrepancy between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest matter more than the remaining information, so preserving the edge features of the image is worth investigating in image fusion; an image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the discrete contourlet transformation, which is well suited to capturing curved edges. It improves the edge information of the fused image by reducing distortion. The transformation decomposes the multimodal image into finer and coarser details, and the finest details are further decomposed at different resolutions along different orientations. The input multimodal images, namely CT and MRI images, are first transformed by the Non-Subsampled Contourlet Transformation (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our scheme, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, while the processed high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-matched coefficients. To evaluate the fusion accuracy, the Peak Signal to Noise Ratio (PSNR), Root Mean Square Error (RMSE) and Correlation Coefficient are used in this work.
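The three evaluation measures named above are standard reference-based metrics; a minimal NumPy sketch (not taken from the paper) is given below, assuming the fused image is compared against a reference image on the same intensity scale:

```python
import numpy as np

def rmse(reference, fused):
    # Root mean square error between reference and fused images.
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(reference, fused, peak=255.0):
    # Peak signal-to-noise ratio in dB, assuming 8-bit intensities by default.
    return 20.0 * np.log10(peak / rmse(reference, fused))

def correlation_coefficient(reference, fused):
    # Pearson correlation between the two images, flattened to vectors.
    return np.corrcoef(reference.ravel(), fused.ravel())[0, 1]
```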


Author(s):  
GAURAV BHATNAGAR ◽  
Q. M. JONATHAN WU

In this paper, a novel image fusion algorithm based on the framelet transform is presented. The core idea is to decompose all the images to be fused into low- and high-frequency bands using the framelet transform. Two different selection strategies are developed, one for the low-frequency bands and one for the high-frequency bands. The first strategy is an adaptive weighted average based on local energy and is applied to fuse the low-frequency bands. To fuse the high-frequency bands, a new strategy is developed based on texture while exploiting characteristics of the human visual system, which preserves more details of the source images and further improves the quality of the fused image. Experimental results demonstrate the efficiency of the method and its better performance than existing image fusion methods in both visual inspection and objective evaluation criteria.
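The local-energy-driven adaptive weighted average for the low-frequency bands can be sketched as follows (NumPy/SciPy only; the window size and weighting form are assumptions rather than the authors' exact rule):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(band, win=3):
    # Local energy: windowed mean of squared coefficients.
    return uniform_filter(band.astype(np.float64) ** 2, size=win)

def fuse_lowfreq_adaptive(low_a, low_b, win=3, eps=1e-12):
    # Adaptive weighted average: each source is weighted by its local energy.
    ea, eb = local_energy(low_a, win), local_energy(low_b, win)
    wa = ea / (ea + eb + eps)
    return wa * low_a + (1.0 - wa) * low_b
```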


Author(s):  
Mummadi Gowthami Reddy ◽  
Palagiri Veera Narayana Reddy ◽  
Patil Ramana Reddy

In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion can be a powerful tool to combine multi-modal images using image processing techniques. However, conventional approaches fail to provide effective image quality assessment and robustness of the fused image. To overcome these drawbacks, a three-stage multiscale decomposition (TSMSD) approach using pulse-coupled neural networks with adaptive arguments (PCNN-AA) is proposed in this work for multi-modal medical image fusion. Initially, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. Then, the low-frequency bands of both source images are fused using nonlinear anisotropic filtering with the discrete Karhunen–Loeve transform (NLAF-DKLT) methodology. Next, the high-frequency bands obtained from NSST are fused using the PCNN-AA approach. The fused low-frequency and high-frequency bands are then reconstructed using the inverse NSST. Finally, a band fusion rule with pyramid reconstruction is applied to obtain the final fused medical image. Extensive simulation results demonstrate the superiority of the proposed TSMSD with PCNN-AA approach over state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC) and computational complexity.
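The PCNN-based high-frequency fusion relies on firing counts; the sketch below is a heavily simplified NumPy PCNN (fixed link strength and decay constants chosen for illustration, not the paper's adaptive arguments) that keeps, per pixel, the coefficient whose stimulus fires more often:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_counts(stimulus, iterations=60, alpha_theta=0.2, v_theta=20.0, beta=0.1):
    """Simplified PCNN: returns how many times each neuron fires.

    A sketch with fixed parameters, not the adaptive-argument PCNN of the paper.
    """
    S = np.abs(stimulus).astype(np.float64)
    S = S / (S.max() + 1e-12)                 # normalised external stimulus
    K = np.array([[0.5, 1.0, 0.5],            # linking kernel
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    Y = np.zeros_like(S)                      # firing map
    theta = np.ones_like(S)                   # dynamic threshold
    counts = np.zeros_like(S)
    for _ in range(iterations):
        L = convolve(Y, K, mode="constant")   # linking input from neighbours
        U = S * (1.0 + beta * L)              # internal activity
        Y = (U > theta).astype(np.float64)    # fire when activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        counts += Y
    return counts

def fuse_highfreq_pcnn(high_a, high_b):
    # Keep the coefficient whose PCNN fires more often (ties go to source A).
    ca, cb = pcnn_firing_counts(high_a), pcnn_firing_counts(high_b)
    return np.where(ca >= cb, high_a, high_b)
```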


Author(s):  
Guofen Wang ◽  
Yongdong Huang

The medical image fusion process integrates the information of multiple source images into a single image. This fused image can provide more comprehensive information and is helpful in clinical diagnosis and treatment. In this paper, a new medical image fusion algorithm is proposed. Firstly, the original image is decomposed into a low-frequency sub-band and a series of high-frequency sub-bands by the nonsubsampled shearlet transform (NSST). For the low-frequency sub-band, the Kirsch operator is used to extract directional feature maps in eight directions, and the novel sum-modified-Laplacian (NSML) method is used to calculate the significant information of each directional feature map; the fusion weight coefficients of the directional feature maps are then calculated by combining a sigmoid function with the significant information refined by gradient domain guided image filtering (GDGF). Each fused feature map is obtained by summing the pointwise products of the weight coefficients and the directional feature maps, and the final fused low-frequency sub-band is obtained by the linear combination of the eight fused directional feature maps. The modified pulse-coupled neural network (MPCNN) model is used to calculate the firing times of each high-frequency sub-band coefficient, and the fused high-frequency sub-bands are selected according to the firing times. Finally, the inverse NSST is applied to the fused low-frequency sub-band and the fused high-frequency sub-bands to obtain the fused image. The experimental results show that the proposed medical image fusion algorithm shows advantages over classical medical image fusion algorithms in both objective and subjective evaluation.
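The eight-direction Kirsch operator used on the low-frequency sub-band is a fixed set of 3x3 compass kernels; the sketch below (NumPy/SciPy, not the authors' code) generates the eight kernels by rotating the border of the base mask and returns the corresponding directional feature maps:

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_kernels():
    # The 8 compass kernels are rotations of the border of the base mask.
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=np.float64)
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(border, np.roll(base, shift)):
            k[r, c] = v
        kernels.append(k)
    return kernels

def kirsch_feature_maps(low_band):
    # One directional feature map per compass direction.
    low_band = low_band.astype(np.float64)
    return [convolve(low_band, k, mode="nearest") for k in kirsch_kernels()]
```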


2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the common malignant tumors. Successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: The fusion of CT and PET combines the complementary and redundant information of both images and can increase the ease of perception. Since existing fusion methods are not perfect enough and the fusion effect remains to be improved, this paper proposes a novel method called adaptive PET/CT fusion for lung cancer within the Piella framework. Methods: The algorithm first adopts the DTCWT to decompose the PET and CT images into different components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of PET and CT images, five membership functions are used as a combination method to determine the fusion weights for the low-frequency components. To fuse the different high-frequency components, the energy difference of the decomposition coefficients is selected as the match measure and the local energy as the activity measure; in addition, a decision factor is determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that the proposed algorithm is feasible and effective. Conclusion: The proposed algorithm better retains and highlights the edge information and texture information of lesions in the fused image.
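In the Piella framework, high-frequency fusion is governed by an activity measure, a match measure, and a decision factor; the sketch below (NumPy/SciPy, with the window size and threshold as illustrative assumptions) uses local energy as the activity measure and, in place of the paper's energy-difference match measure, a normalised local similarity to decide whether to blend or select coefficients:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highfreq_piella(high_a, high_b, win=3, threshold=0.75, eps=1e-12):
    """Select-or-average rule in the spirit of the Piella framework (a sketch)."""
    a, b = high_a.astype(np.float64), high_b.astype(np.float64)
    act_a = uniform_filter(a ** 2, size=win)          # activity measure (local energy)
    act_b = uniform_filter(b ** 2, size=win)
    match = 2.0 * uniform_filter(a * b, size=win) / (act_a + act_b + eps)
    # Decision factor: where the bands agree, blend by activity; otherwise select.
    select = np.where(act_a >= act_b, 1.0, 0.0)       # hard selection weights
    blend = act_a / (act_a + act_b + eps)             # soft weights for similar regions
    weights = np.where(match > threshold, blend, select)
    return weights * a + (1.0 - weights) * b
```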


2021 ◽  
Vol 12 (4) ◽  
pp. 78-97
Author(s):  
Hassiba Talbi ◽  
Mohamed-Khireddine Kholladi

In this paper, the authors propose a hybrid particle swarm algorithm with a differential evolution (DE) operator, termed DEPSO, combined with a multi-resolution transform, the dual-tree complex wavelet transform (DTCWT), to solve the problem of multimodal medical image fusion. The hybridization aims to combine the algorithms in a judicious manner so that the resulting algorithm retains the positive features of both. The new algorithm decomposes the source images into high-frequency and low-frequency coefficients by the DTCWT, then adopts the absolute maximum method to fuse the high-frequency coefficients; the low-frequency coefficients are fused by a weighted average method whose weights are estimated and refined by the optimization method to obtain optimal results. The authors demonstrate experimentally that this algorithm, besides its simplicity, provides a robust and efficient way to fuse multimodal medical images compared with existing wavelet-transform-based image fusion algorithms.
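A minimal sketch of the two fusion rules described here follows: the absolute maximum rule for the high-frequency coefficients, and a weighted average for the low-frequency coefficients whose scalar weight is chosen by a coarse grid search maximising the entropy of the fused band (this grid search and the entropy fitness are stand-ins for the paper's DEPSO optimizer):

```python
import numpy as np

def fuse_highfreq_absmax(high_a, high_b):
    # Absolute maximum rule: keep the coefficient with larger magnitude.
    return np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)

def band_entropy(band, bins=256):
    # Shannon entropy of the band's intensity histogram (used as fitness).
    hist, _ = np.histogram(band, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fuse_lowfreq_optimized(low_a, low_b, candidates=None):
    # Stand-in for DEPSO: pick the weight w in [0, 1] that maximises the
    # entropy of w*A + (1-w)*B over a coarse candidate grid.
    if candidates is None:
        candidates = np.linspace(0.0, 1.0, 21)
    best_w = max(candidates, key=lambda w: band_entropy(w * low_a + (1 - w) * low_b))
    return best_w * low_a + (1.0 - best_w) * low_b
```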


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Zhaisheng Ding ◽  
Dongming Zhou ◽  
Rencan Nie ◽  
Ruichao Hou ◽  
Yanyu Liu

Computed tomography (CT) images show structural features, while magnetic resonance imaging (MRI) images represent brain tissue anatomy but do not contain any functional information. How to effectively combine the images of the two modalities has become a research challenge. In this paper, a new framework for medical image fusion is proposed which combines convolutional neural networks (CNNs) and the non-subsampled shearlet transform (NSST) to exploit the advantages of both. The method effectively retains the functional information of the CT image and reduces the loss of brain structure information and spatial distortion of the MRI image. In our fusion framework, the initial weight maps, generated by a dual-branch convolutional network, integrate the pixel activity information of the two source images and are decomposed by NSST. Firstly, NSST is performed on the source images and the initial weights to obtain their low-frequency and high-frequency coefficients. Then, the first component of the low-frequency coefficients is fused by a novel strategy that simultaneously addresses two key issues in fusion processing, namely energy conservation and detail extraction. The second component of the low-frequency coefficients is fused by a strategy designed according to the spatial frequency of the weight map. Moreover, the high-frequency coefficients are fused using the high-frequency components of the initial weights. Finally, the fused image is reconstructed by the inverse NSST. The effectiveness of the proposed method is verified on pairs of multimodality images, and extensive experiments indicate that the method performs well, especially for medical image fusion.
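The weight-map-guided part of the pipeline can be illustrated with a short sketch; the dual-branch CNN that produces the initial weights is not reproduced, and the weight components are simply assumed to lie in [0, 1] with values near 1 favouring the first source:

```python
import numpy as np

def fuse_with_weight_map(low_a, low_b, high_a, high_b, w_low, w_high):
    """Weight-map-guided fusion sketch (the CNN producing the weights is omitted).

    w_low, w_high: decomposed components of an initial weight map, assumed to
    express how strongly each location favours source A on a [0, 1] scale.
    """
    # Low-frequency: soft blend so the overall energy of both sources is kept.
    fused_low = w_low * low_a + (1.0 - w_low) * low_b
    # High-frequency: hard selection driven by the weight's high-frequency part.
    fused_high = np.where(w_high >= 0.5, high_a, high_b)
    return fused_low, fused_high
```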


2013 ◽  
Vol 457-458 ◽  
pp. 736-740 ◽  
Author(s):  
Nian Yi Wang ◽  
Wei Lan Wang ◽  
Xiao Ran Guo

In this paper, a new image fusion algorithm based on the discrete wavelet transform (DWT) and the spiking cortical model (SCM) is proposed. The multiscale decomposition and multi-resolution representation characteristics of the DWT are combined with the global coupling and pulse synchronization features of the SCM. Two different fusion rules are used to fuse the low- and high-frequency sub-bands, respectively. The maximum selection rule (MSR) is used to fuse the low-frequency coefficients. For the high-frequency sub-band coefficients, the spatial frequency (SF) is calculated and then input into the SCM to stimulate the neural network. Experimental results demonstrate the effectiveness of the proposed fusion method.
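A compact sketch of this DWT pipeline using PyWavelets follows; the SCM itself is replaced by a plain spatial-frequency comparison, so the sketch mirrors only the decomposition and the two fusion rules, not the coupled neural model:

```python
import numpy as np
import pywt

def spatial_frequency(band):
    # SF = sqrt(RF^2 + CF^2) from row and column first differences.
    rf = np.sqrt(np.mean(np.diff(band, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(band, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_dwt(img_a, img_b, wavelet="db2"):
    # One-level 2-D DWT of both sources.
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(np.float64), wavelet)
    # Low-frequency: maximum selection rule (per coefficient, by magnitude).
    ca_f = np.where(np.abs(ca_a) >= np.abs(ca_b), ca_a, ca_b)
    # High-frequency: keep the sub-band with larger spatial frequency
    # (a stand-in for the SF-motivated SCM of the paper).
    fused_high = tuple(
        a if spatial_frequency(a) >= spatial_frequency(b) else b
        for a, b in [(ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)]
    )
    return pywt.idwt2((ca_f, fused_high), wavelet)
```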


2010 ◽  
Vol 07 (02) ◽  
pp. 99-107 ◽  
Author(s):  
NEMIR AL-AZZAWI ◽  
WAN AHMED K. WAN ABDULLAH

Medical image fusion has been used to derive useful information from multimodality medical image data. This paper presents a dual-tree complex contourlet transform (DT-CCT) based approach for the fusion of magnetic resonance imaging (MRI) and computed tomography (CT) images. The objective of fusing an MRI and a CT image of the same organ is to obtain a single image containing as much information as possible about that organ for diagnosis. The limited directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the DT-CCT by incorporating directional filter banks (DFB) into the DT-CWT. To improve the fused image quality, we propose a new fusion rule based on principal component analysis (PCA) which depends on the frequency components of the DT-CCT coefficients (contourlet domain). For the low-frequency coefficients, the PCA method is adopted, and for the high-frequency coefficients, the salient features are picked up based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency coefficients. The DT-CCT produces images with improved contours and textures, while the property of shift invariance is retained. The experimental results show that the proposed method produces a fused image with extensive features from the multimodal sources.
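The PCA fusion rule for the low-frequency coefficients and the local-energy selection for the high-frequency coefficients can be sketched as below (NumPy/SciPy, an illustration rather than the paper's exact formulation); the weights come from the dominant eigenvector of the 2x2 covariance matrix of the two low-frequency bands:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowfreq_pca(low_a, low_b):
    """PCA fusion rule sketch: weights from the dominant eigenvector of the
    2x2 covariance matrix of the two low-frequency bands."""
    data = np.vstack([low_a.ravel(), low_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                         # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: ascending eigenvalues
    principal = np.abs(eigvecs[:, -1])         # dominant eigenvector
    w = principal / principal.sum()            # normalise to get fusion weights
    return w[0] * low_a + w[1] * low_b

def fuse_highfreq_local_energy(high_a, high_b, win=3):
    # Salient-feature selection by local energy (window size is an assumption).
    ea = uniform_filter(high_a.astype(np.float64) ** 2, size=win)
    eb = uniform_filter(high_b.astype(np.float64) ** 2, size=win)
    return np.where(ea >= eb, high_a, high_b)
```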

