Medical Image Fusion
Recently Published Documents


TOTAL DOCUMENTS: 619 (FIVE YEARS: 239)

H-INDEX: 33 (FIVE YEARS: 8)

Author(s):  
Nukapeyyi Tanuja

Abstract: A sparse representation (SR) model named convolutional sparsity based morphological component analysis (CS-MCA) is introduced for pixel-level medical image fusion. By integrating morphological component analysis (MCA) and convolutional sparse representation (CSR) into a unified optimization framework, the CS-MCA model can achieve multi-component and global SRs of the source images. In the existing method, the CSRs of each source image's gradient and texture components are obtained by the CS-MCA model using pre-learned dictionaries. Then, for each image component, the sparse coefficients of all the source images are merged, and the fused component is reconstructed using the corresponding dictionary. In the extension mechanism, we use deep-learning-based pyramid decomposition. Deep learning is a widely demanded technology nowadays, used for image classification, object detection, image segmentation, and image restoration. Keywords: CNN, CT, MRI, MCA, CS-MCA.
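As a rough illustration of the merge-and-reconstruct step described above, the Python sketch below fuses the convolutional sparse coefficient maps of one image component from two sources with a choose-max rule and rebuilds the fused component with the corresponding dictionary filters. The coefficient maps and pre-learned filters are assumed to come from a convolutional sparse coding solver that is not shown, and the helper name `fuse_csr_component` is ours, not the paper's.

```python
import numpy as np
from scipy.signal import fftconvolve

def fuse_csr_component(coeffs_a, coeffs_b, filters):
    """Fuse one image component from its convolutional sparse coefficient maps.

    coeffs_a, coeffs_b : arrays of shape (K, H, W), coefficient maps of the
        same component (e.g. texture) for the two source images (assumed given).
    filters : array of shape (K, h, w), the pre-learned dictionary filters.
    """
    # Activity-level measure: absolute value of each coefficient map.
    activity_a = np.abs(coeffs_a)
    activity_b = np.abs(coeffs_b)

    # Choose-max fusion rule applied per coefficient map and per pixel.
    fused_coeffs = np.where(activity_a >= activity_b, coeffs_a, coeffs_b)

    # Reconstruct the fused component: sum of coefficient maps convolved
    # with their corresponding dictionary filters.
    height, width = coeffs_a.shape[1:]
    fused = np.zeros((height, width))
    for k in range(filters.shape[0]):
        fused += fftconvolve(fused_coeffs[k], filters[k], mode="same")
    return fused
```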


Author(s):  
Mummadi Gowthami Reddy ◽  
Palagiri Veera Narayana Reddy ◽  
Patil Ramana Reddy

In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion can be a powerful tool for combining multi-modal images using image processing techniques. However, conventional approaches fail to provide effective image quality assessment and robustness of the fused image. To overcome these drawbacks, a three-stage multiscale decomposition (TSMSD) approach using pulse-coupled neural networks with adaptive arguments (PCNN-AA) is proposed in this work for multi-modal medical image fusion. First, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. The low-frequency bands of both source images are then fused using nonlinear anisotropic filtering with the discrete Karhunen–Loeve transform (NLAF-DKLT). Next, the high-frequency bands obtained from the NSST are fused using the PCNN-AA approach. The fused low- and high-frequency bands are reconstructed using the inverse NSST. Finally, a band fusion rule with pyramid reconstruction is applied to obtain the final fused medical image. Extensive simulation results demonstrate the superiority of the proposed TSMSD with PCNN-AA approach over state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC), and computational complexity.
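For illustration, a minimal sketch of a PCNN-style fusion rule for the high-frequency bands is given below. It uses a simplified PCNN with fixed parameters (iterations, beta, alpha_theta, v_theta are our assumptions, not the paper's adaptive arguments) and hypothetical helper names, so it only approximates the PCNN-AA step described in the abstract; the NSST decomposition and low-frequency fusion are not shown.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(band, iterations=200, beta=0.2,
                    alpha_theta=0.2, v_theta=20.0):
    """Simplified pulse-coupled neural network; returns accumulated firing counts.

    `band` is one high-frequency sub-band; it is normalised internally so the
    stimulus lies in [0, 1]. Parameters are fixed here purely for illustration.
    """
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    feed = band / (band.max() + 1e-12)  # feeding input = normalised stimulus
    theta = np.ones_like(feed)          # dynamic threshold
    fired = np.zeros_like(feed)         # binary pulse output
    counts = np.zeros_like(feed)        # accumulated firing times

    for _ in range(iterations):
        link = convolve(fired, kernel, mode="constant")      # linking input
        activity = feed * (1.0 + beta * link)                 # internal activity
        fired = (activity > theta).astype(float)
        theta = np.exp(-alpha_theta) * theta + v_theta * fired
        counts += fired
    return counts

def fuse_high_frequency(band_a, band_b):
    """Keep, per pixel, the coefficient whose PCNN neuron fires more often."""
    counts_a = pcnn_firing_map(np.abs(band_a))
    counts_b = pcnn_firing_map(np.abs(band_b))
    return np.where(counts_a >= counts_b, band_a, band_b)
```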


2021 ◽  
Author(s):  
Ngo Xuan Tra ◽  
Dinh Phu Hung ◽  
Nguyen Huy Duc ◽  
Nguyen Long Giang

Author(s):  
M Munawwar Iqbal Ch ◽  
Abdul Ghafoor ◽  
Asim Dilawar Bakhshi ◽  
Nuwayrah Jawaid Saghir

2021 ◽  
Vol 27 (4) ◽  
pp. 261-269
Author(s):  
Amir Khorasani ◽  
Mohamad Bagher Tavakoli ◽  
Masih Saboori

Abstract: Introduction: Based on the tumor's growth potential and aggressiveness, glioma is most often classified into low-grade or high-grade groups. Traditionally, tissue sampling is used to determine the glioma grade. The aim of this study is to evaluate the efficiency of the Laplacian Re-decomposition (LRD) medical image fusion algorithm for glioma grading with advanced magnetic resonance imaging (MRI) and to identify the best image combination for glioma grading. Material and methods: Sixty-one patients (17 low-grade and 44 high-grade) underwent susceptibility-weighted imaging (SWI), apparent diffusion coefficient (ADC) mapping, and fluid-attenuated inversion recovery (FLAIR) MRI. The LRD medical image fusion algorithm was used to fuse the different MRI images. To evaluate the effectiveness of LRD in the classification of glioma grade, we compared the parameters of the receiver operating characteristic (ROC) curve. Results: The average relative signal contrast (RSC) of the SWI and ADC maps in high-grade glioma is significantly lower than in low-grade glioma. No significant difference was detected between low- and high-grade glioma on FLAIR images. In our study, the area under the curve (AUC) for differentiating low- and high-grade glioma on SWI and ADC maps was 0.871 and 0.833, respectively. Conclusions: By fusing the SWI and ADC maps with the LRD medical image fusion algorithm, the AUC for separating low- and high-grade glioma increases to 0.978. Our work leads us to conclude that fusing the SWI and ADC maps with the LRD algorithm achieves the highest diagnostic accuracy for low- and high-grade glioma differentiation, and that the LRD medical image fusion algorithm can be used for glioma grading.
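A minimal sketch of the ROC analysis described above is shown below, assuming per-patient RSC values and binary grade labels as inputs. It uses scikit-learn's roc_auc_score and roc_curve; the helper name `grading_auc` and the Youden-index operating point are our illustrative choices, not details reported by the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def grading_auc(rsc_values, grades):
    """AUC for separating low- and high-grade glioma from an RSC measurement.

    rsc_values : 1-D array of relative signal contrast per patient (e.g. from
                 the fused SWI + ADC image).
    grades     : 1-D array, 0 for low-grade, 1 for high-grade.
    Lower RSC indicates high grade here, so the score is negated so that
    larger scores mean 'more likely high-grade'.
    """
    scores = -np.asarray(rsc_values, dtype=float)
    auc = roc_auc_score(grades, scores)

    # Youden index picks one operating point for sensitivity/specificity.
    fpr, tpr, thresholds = roc_curve(grades, scores)
    best = np.argmax(tpr - fpr)
    return auc, thresholds[best]
```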


2021 ◽  
Vol 11 (22) ◽  
pp. 10975
Author(s):  
Srinivasu Polinati ◽  
Durga Prasad Bavirisetti ◽  
Kandala N V P S Rajesh ◽  
Ganesh R Naik ◽  
Ravindra Dhuli

In medical image processing, magnetic resonance imaging (MRI) and computed tomography (CT) modalities are widely used to extract soft and hard tissue information, respectively. With a single modality, however, it is very challenging to extract the pathological features required to identify suspicious tissue details. Over the past few decades, several medical image fusion methods have attempted to combine complementary information from MRI and CT to address this issue, but existing methods have their own advantages and drawbacks. In this work, we propose a new multimodal medical image fusion approach based on variational mode decomposition (VMD) and local energy maxima (LEM). With the help of VMD, we decompose the source images into several intrinsic mode functions (IMFs) to effectively extract edge details while avoiding boundary distortions. LEM is employed to carefully combine the IMFs based on local information, which plays a crucial role in fused image quality by preserving the appropriate spatial information. The proposed method's performance is evaluated using various subjective and objective measures. The experimental analysis shows that the proposed method gives promising results compared to other existing and well-received fusion methods.
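As a rough sketch of an LEM-style fusion rule, the Python code below combines corresponding IMFs from two source images by keeping, per pixel, the IMF with the larger windowed local energy, then sums the fused IMFs. The VMD decomposition itself is assumed to be available elsewhere and is not shown; the window size and helper names are our assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(img, window=3):
    """Local energy: windowed mean of squared intensities."""
    return uniform_filter(np.square(img), size=window)

def fuse_imfs_lem(imfs_a, imfs_b, window=3):
    """Fuse corresponding IMFs of two source images with a local-energy-maxima rule.

    imfs_a, imfs_b : lists of 2-D arrays, the IMFs of the MRI and CT images
        produced by a 2-D VMD (the decomposition is assumed, not shown).
    """
    fused = np.zeros_like(imfs_a[0])
    for imf_a, imf_b in zip(imfs_a, imfs_b):
        energy_a = local_energy(imf_a, window)
        energy_b = local_energy(imf_b, window)
        # Keep, pixel-wise, the IMF coefficient with the larger local energy,
        # and accumulate the fused IMFs into the final image.
        fused += np.where(energy_a >= energy_b, imf_a, imf_b)
    return fused
```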

