Morphological Component Analysis-Based Perceptual Medical Image Fusion Using Convolutional Sparsity-Motivated PCNN

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Chuangeng Tian ◽  
Lu Tang ◽  
Xiao Li ◽  
Kaili Liu ◽  
Jian Wang

This paper proposes a perceptual medical image fusion framework based on morphological component analysis combining convolutional sparsity and a pulse-coupled neural network, called MCA-CS-PCNN for short. Source images are first decomposed into cartoon components and texture components by morphological component analysis, and a convolutional sparse representation of the cartoon layers and texture layers is produced with pre-learned dictionaries. Then, convolutional sparsity is used as a stimulus to motivate the PCNN for dealing with the cartoon layers and texture layers. Finally, the fused medical image is computed by combining the fused cartoon layers and texture layers. Experimental results verify that the MCA-CS-PCNN model is superior to state-of-the-art fusion strategies.
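
By way of illustration, the sketch below fuses two decomposed layers (cartoon or texture) with a simplified pulse-coupled neural network, using each layer's normalised absolute value as the stimulus in place of the convolutional-sparsity responses the paper feeds to the PCNN; the network parameters and the SciPy-based implementation are assumptions made for the sketch, not the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_map(stimulus, iterations=100, alpha_theta=0.2,
                  v_theta=20.0, beta=0.2):
    """Simplified PCNN: counts how often each neuron fires for a stimulus."""
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # linking weights to neighbours
    y = np.zeros_like(stimulus)              # pulse output at the current step
    theta = np.ones_like(stimulus)           # dynamic firing threshold
    fire_count = np.zeros_like(stimulus)
    for _ in range(iterations):
        link = convolve(y, w, mode="constant")
        u = stimulus * (1.0 + beta * link)   # internal activity
        y = (u > theta).astype(float)        # neurons above threshold fire
        theta = np.exp(-alpha_theta) * theta + v_theta * y
        fire_count += y
    return fire_count

def fuse_layers(layer_a, layer_b):
    """Per pixel, keep the layer whose stimulus drives stronger firing."""
    scale = max(np.abs(layer_a).max(), np.abs(layer_b).max()) + 1e-12
    fa = pcnn_fire_map(np.abs(layer_a) / scale)
    fb = pcnn_fire_map(np.abs(layer_b) / scale)
    return np.where(fa >= fb, layer_a, layer_b)
```

The final fused image would then be the sum of the fused cartoon layer and the fused texture layer.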

Author(s):  
Peng Guo ◽  
Guoqi Xie ◽  
Renfa Li ◽  
Hui Hu

In feature-level image fusion, deep learning technology, particularly convolutional sparse representation (SR) theory, has emerged as a new topic over the past three years. This paper proposes an effective image fusion method based on convolutional SR, namely, convolutional sparsity-based morphological component analysis and guided filter (CS-MCA-GF). The guided filter operator and the choose-max coefficient fusion scheme introduced in this method can effectively eliminate the artifacts generated by the morphological components in linear fusion and maintain the pixel saliency of the source images. Experiments show that the proposed method achieves excellent performance in multi-modal image fusion, including medical image fusion.
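
As a hedged sketch of the two ingredients named here, the code below implements a small box-window guided filter (in the style of He et al.) and uses it to soften a choose-max decision map between two morphological components; the radius, eps, and the absolute-value activity measure are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=0.01):
    """Edge-preserving smoothing of `src`, steered by `guide`."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    corr_ii = uniform_filter(guide * guide, size)
    var_i = corr_ii - mean_i * mean_i
    cov_ip = corr_ip - mean_i * mean_p
    a = cov_ip / (var_i + eps)               # local linear model: q = a*I + b
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def choose_max_fusion(comp_a, comp_b, img_a):
    """Choose-max on activity maps, then guided-filter the binary weights."""
    w = (np.abs(comp_a) >= np.abs(comp_b)).astype(float)  # hard decision map
    w = np.clip(guided_filter(img_a, w), 0.0, 1.0)        # soften along edges
    return w * comp_a + (1.0 - w) * comp_b
```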


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1423
Author(s):  
Kai Guo ◽  
Xiongfei Li ◽  
Hongrui Zang ◽  
Tiehu Fan

In order to obtain the physiological information and key features of the source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce the computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture image details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, we propose a new encoder to capture all the medical information features in the source image. In the fusion layer, the weight of each feature map in the required fusion coefficients is calculated according to the trajectory of the feature map. Finally, the filtered medical information is concatenated and decoded to reproduce the required fused image. In the encoding and image reconstruction networks, a mixed loss function of cross entropy and structural similarity is adopted to greatly reduce the information loss in image fusion. To assess performance, we conducted three sets of experiments on grayscale and color medical images. Experimental results show that the proposed algorithm has advantages not only in detail and structure recognition but also in visual features and time complexity compared with other algorithms.
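
The mixed loss can be sketched as follows in PyTorch, assuming reconstructions and targets scaled to [0, 1]; the uniform-window SSIM approximation and the balancing weight `lam` are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM with a uniform window; inputs in [0, 1], shape NCHW."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def mixed_loss(reconstruction, target, lam=0.5):
    """Cross entropy plus (1 - SSIM), balanced by an assumed weight lam."""
    bce = F.binary_cross_entropy(reconstruction, target)
    return lam * bce + (1.0 - lam) * (1.0 - ssim(reconstruction, target))
```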


Author(s):  
Nukapeyyi Tanuja

Abstract: A sparse representation (SR) model named convolutional sparsity-based morphological component analysis (CS-MCA) is introduced for pixel-level medical image fusion. The CS-MCA model can achieve multi-component and global SRs of the source images by integrating MCA and convolutional sparse representation (CSR) into a unified optimization framework. In the existing method, the CSRs of each image's cartoon and texture components are obtained by the CS-MCA model using pre-learned dictionaries. Then, for each image component, the sparse coefficients of all the source images are merged, and the fused component is reconstructed using the corresponding dictionary. In the extension proposed here, we use deep-learning-based pyramid decomposition. Deep learning is currently in high demand and is used for image classification, object detection, image segmentation, and image restoration. Keywords: CNN, CT, MRI, MCA, CS-MCA.
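
The extension is only named here, so as a hedged stand-in the sketch below shows the classical Laplacian pyramid decomposition and fusion that such a scheme builds on, with a max-absolute rule on the detail bands; in a deep-learning variant, the decomposition or the fusion rule would be replaced by a learned network.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid; the last entry is the coarsest level."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)                  # band-pass detail at this level
        cur = down
    pyr.append(cur)
    return pyr

def fuse_pyramids(pa, pb):
    """Max-absolute rule on detail bands, averaging on the coarsest band."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    """Collapse the pyramid back into a single image."""
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(band.shape[1], band.shape[0])) + band
    return cur
```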


2017 ◽  
Vol 9 (4) ◽  
pp. 61
Author(s):  
Guanqiu Qi ◽  
Jinchuan Wang ◽  
Qiong Zhang ◽  
Fancheng Zeng ◽  
Zhiqin Zhu

2018 ◽  
Vol 11 (4) ◽  
pp. 1937-1946
Author(s):  
Nancy Mehta ◽  
Sumit Budhiraja

Multimodal medical image fusion aims at minimizing the redundancy and collecting the relevant information from input images acquired by different medical sensors. The main goal is to produce a single fused image that carries more information and has higher efficiency for medical applications. In this paper, a modified fusion method is proposed in which NSCT decomposition is applied to the wavelet coefficients obtained after wavelet decomposition. NSCT, being a multidirectional, shift-invariant transform, provides better results. A guided filter is used for the fusion of the high-frequency coefficients on account of its edge-preserving property. Phase congruency is used for the fusion of the low-frequency coefficients due to its insensitivity to illumination contrast, making it suitable for medical images. The simulation results show that the proposed technique performs better in terms of entropy, structural similarity index, and the Piella metric. The fusion response of the proposed technique is also compared with other fusion approaches, demonstrating the effectiveness of the obtained fusion results.
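
NSCT has no standard Python implementation, so the hedged sketch below substitutes a PyWavelets decomposition to show the structure of the fusion rules: low-frequency coefficients fused by comparing local-energy maps (a crude stand-in for the phase-congruency rule) and high-frequency coefficients by a max-absolute rule (standing in for the guided-filter rule); the wavelet choice and window size are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_energy(band, size=5):
    """Local variance as a simple per-pixel activity measure."""
    mean = uniform_filter(band, size)
    return uniform_filter(band * band, size) - mean * mean

def fuse_wavelet(img_a, img_b, wavelet="db4", levels=3):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    # Low-pass band: keep the coefficient with higher local energy.
    la, lb = ca[0], cb[0]
    fused = [np.where(local_energy(la) >= local_energy(lb), la, lb)]
    # High-pass subbands: plain max-absolute selection.
    for subs_a, subs_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(subs_a, subs_b)))
    return pywt.waverec2(fused, wavelet)
```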


2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution, and multi-scale methods. Each approach to fusion captures only a particular feature (i.e., the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet as a multi-resolution approach and the ridgelet as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm is employed in both the ridgelet and wavelet domains to minimise redundancy. Simulations have been performed on different sets of MR and CT-scan images taken from 'The Whole Brain Atlas'. The performance evaluation has been carried out using different image quality parameters: Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM), and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off between the retrieval of information content and of morphological details in the finally fused image in the wavelet and ridgelet domains.
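
A minimal sketch of PCA-based fusion weights is given below, assuming the rule is applied directly to two registered source arrays; in the paper the same rule operates on wavelet or ridgelet coefficients rather than raw pixels.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Weights from the leading eigenvector of the two images' covariance."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)                        # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigh: ascending eigenvalues
    principal = np.abs(vecs[:, -1])           # direction of largest variance
    return principal / principal.sum()        # normalise to fusion weights

def pca_fuse(img_a, img_b):
    wa, wb = pca_fusion_weights(img_a, img_b)
    return wa * img_a + wb * img_b
```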


Oncology ◽  
2017 ◽  
pp. 519-541
Author(s):  
Satishkumar S. Chavan ◽  
Sanjay N. Talbar

The process of enriching the important details from medical images of various modalities by combining them into a single image is called multimodal medical image fusion. It aids physicians through better visualization, more accurate diagnosis, and an appropriate treatment plan for the cancer patient. The fused image is the result of merging anatomical and physiological variations; it allows accurate localization of cancer tissues and is helpful for estimating the target volume for radiation. The details from both modalities (CT and MRI) are extracted in the frequency domain by applying various transforms and are combined using a variety of fusion rules to achieve the best image quality. The performance and effectiveness of each transform on the fusion results are evaluated subjectively as well as objectively. By both subjective and objective analysis, the fused images produced by the algorithms in which feature extraction is achieved by the M-Band Wavelet Transform and the Daubechies Complex Wavelet Transform are superior to those of the other frequency-domain algorithms.
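
The abstract does not name its objective measures, so the sketch below computes two measures used elsewhere in this collection, histogram entropy of the fused image and SSIM against each source, purely as plausible examples of the objective side of such an evaluation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def image_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(img.min(), img.max()))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def evaluate(fused, src_ct, src_mri):
    rng = fused.max() - fused.min()
    return {
        "entropy": image_entropy(fused),
        "ssim_ct": structural_similarity(fused, src_ct, data_range=rng),
        "ssim_mri": structural_similarity(fused, src_mri, data_range=rng),
    }
```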


2020 ◽  
Vol 10 (11) ◽  
pp. 3857
Author(s):  
Fangjia Yang ◽  
Shaoping Xu ◽  
Chongxi Li

Image denoising, a fundamental step in image processing, has been widely studied for several decades. Denoising methods can be classified as internal or external, depending on whether they exploit the internal prior or external noisy-clean image priors to reconstruct a latent image. Typically, these two kinds of methods have their respective merits and demerits, and using a single model to improve existing methods remains a challenge. In this paper, we propose a method for boosting the denoising effect via an image fusion strategy. This study aims to boost the performance of two typical denoising methods, nonlocally centralized sparse representation (NCSR) and residual learning of deep CNN (DnCNN). These two methods have complementary strengths and can be chosen to represent internal and external denoising methods, respectively. The boosting process is formulated as an adaptive weight-based image fusion problem that preserves the details of the initial denoised images output by the NCSR and the DnCNN. Specifically, we design two kinds of weights to adaptively reflect the influence of pixel intensity changes and the global gradient of the initial denoised images. A linear combination of these two kinds of weights determines the final weight. The initial denoised images are integrated into the fusion framework to achieve our denoising results. Extensive experiments show that the proposed method significantly outperforms the NCSR and the DnCNN when they are considered as individual methods, both quantitatively and visually; similarly, it outperforms several other state-of-the-art denoising methods.
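
A hedged sketch of the weighting scheme described: one cue tracks local intensity variation, the other gradient magnitude, and a linear combination of the normalised cues gives the final per-pixel weight; the exact cue definitions, window size, and the mixing parameter alpha are assumptions rather than the paper's formulas.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def weight_cues(img, size=7):
    """Two cues per pixel: local intensity variation and gradient magnitude."""
    mean = uniform_filter(img, size)
    local_var = uniform_filter(img * img, size) - mean * mean
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    return local_var, grad

def fuse_denoised(den_a, den_b, alpha=0.5, eps=1e-12):
    """Weighted fusion of two initial denoised images (e.g. NCSR and DnCNN)."""
    va, ga = weight_cues(den_a)
    vb, gb = weight_cues(den_b)
    # Linear combination of the normalised cues gives the final weight map.
    wa = alpha * va / (va + vb + eps) + (1 - alpha) * ga / (ga + gb + eps)
    return wa * den_a + (1.0 - wa) * den_b
```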


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing technology requires either the full image or the part of the image that is to be processed from the user's point of view, such as the radius of an object. The main purpose of fusion is to diminish the dissimilarity error between the fused image and the input images. With respect to medical diagnosis, the edges and outlines of the objects of concern are more important than other information, so preserving the edge features of the image is worthwhile when investigating image fusion. An image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, namely Local Energy Match NSCT, based on the discrete contourlet transform, which is well suited to conveying the details of curved edges. It is used to improve the edge information of the fused image by reducing distortion. The transform decomposes the multimodal image into finer and coarser details, and the finest details are further decomposed at different resolutions along different orientations. The input multimodal images, namely CT and MRI images, are first transformed by the Non-Subsampled Contourlet Transform (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our system, the low-frequency coefficients of the image are fused by image averaging and a Gabor filter bank algorithm. The processed high-frequency coefficients of the image are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-match-based coefficients. To evaluate the image fusion accuracy, the Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and Correlation Coefficient are used in this work.
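
For reference, the three quality measures named here have standard definitions, sketched below; what serves as the reference image (a source image or a ground-truth composite) is an assumption that depends on the evaluation protocol.

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a fused image."""
    diff = ref.astype(float) - img.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit peak by default."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else float(20.0 * np.log10(peak / e))

def correlation_coefficient(ref, img):
    """Pearson correlation between the two images' pixel values."""
    return float(np.corrcoef(ref.ravel(), img.ravel())[0, 1])
```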

