Multimodal Medical Image Fusion using Guided Filter in NSCT Domain

2018 ◽  
Vol 11 (4) ◽  
pp. 1937-1946
Author(s):  
Nancy Mehta ◽  
Sumit Budhiraja

Multimodal medical image fusion aims at minimizing redundancy and collecting the relevant information from input images acquired by different medical sensors. The main goal is to produce a single fused image that contains more information and has higher efficiency for medical applications. In this paper, a modified fusion method is proposed in which NSCT decomposition is applied to the coefficients obtained after wavelet decomposition. NSCT, being a multidirectional, shift-invariant transform, provides better results. A guided filter is used for the fusion of the high-frequency coefficients on account of its edge-preserving property. Phase congruency is used for the fusion of the low-frequency coefficients because its insensitivity to illumination contrast makes it suitable for medical images. The simulation results show that the proposed technique performs better in terms of entropy, structural similarity index, and the Piella metric. The fusion response of the proposed technique is also compared with other fusion approaches, proving the effectiveness of the obtained fusion results.
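The guided-filter fusion rule for high-frequency coefficients can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact pipeline: it assumes the standard guided-filter formulation (box means, linear coefficients `a`, `b`) and a simple pick-the-larger initial weight map; the function names and the radius/epsilon defaults are our own choices.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via an integral image."""
    k = 2 * r + 1
    xp = np.pad(x.astype(float), r, mode="edge")
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(guide, src, r=4, eps=1e-2):
    """Edge-preserving smoothing of src, steered by the structure of guide."""
    mI, mp = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - mI * mp
    var = box_mean(guide * guide, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box_mean(a, r) * guide + box_mean(b, r)

def fuse_high(hA, hB, r=4, eps=1e-2):
    """Pick-the-larger weight map, refined by guided filtering on the source band."""
    wA = (np.abs(hA) >= np.abs(hB)).astype(float)
    wA = np.clip(guided_filter(hA, wA, r, eps), 0.0, 1.0)
    return wA * hA + (1.0 - wA) * hB
```

The guided-filter refinement aligns the binary weight map with edges in the coefficient band, which is the edge-preserving property the abstract relies on.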

2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed diversely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution, and multi-scale methods. Each approach captures only a particular feature (i.e., the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet as a multi-resolution approach and the ridgelet as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm has been employed in both the ridgelet and wavelet domains to minimize redundancies. Simulations have been performed on different sets of MR and CT-scan images taken from ‘The Whole Brain Atlas'. The performance evaluation has been carried out using different image quality parameters: Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM) and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off, in the wavelet and ridgelet domains, between the retrieval of information content and the morphological details in the finally fused image.
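The PCA-based fusion rule used in both transform domains can be sketched in a few lines. This is a hedged sketch of the generic PCA weighting scheme (weights from the principal eigenvector of the two images' joint covariance), not the authors' full wavelet/ridgelet pipeline; the function name is ours.

```python
import numpy as np

def pca_fuse(a, b):
    """Fuse two registered images (or subbands) with weights taken from the
    principal eigenvector of their 2x2 joint covariance matrix."""
    X = np.stack([a.ravel(), b.ravel()])
    C = np.cov(X)                    # 2x2 covariance of the two inputs
    vals, vecs = np.linalg.eigh(C)   # eigh returns ascending eigenvalues
    w = np.abs(vecs[:, -1])          # principal component, sign-normalized
    w = w / w.sum()                  # weights sum to 1
    return w[0] * a + w[1] * b, w
```

In the paper's setting this rule would be applied to wavelet or ridgelet coefficients rather than raw pixels; the mechanics are identical.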


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing technology requires either the full image or a part of the image to be processed, according to the user's needs (such as the radius of an object). The main purpose of fusion is to reduce the difference between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image is worth investigating in image fusion. An image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the discrete contourlet transform, which is effective at capturing the details of curved edges. It improves the edge information of the fused image by reducing distortion. The transform decomposes the multimodal image into finer and coarser details, and the finest details are further decomposed into different resolutions in different orientations. The input multimodal images, namely CT and MRI images, are first transformed by the Nonsubsampled Contourlet Transform (NSCT), which decomposes each image into low-frequency and high-frequency elements. In our system, the low-frequency coefficients of the image are fused by image averaging and a Gabor filter bank algorithm. The processed high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local energy match based coefficients. To evaluate the fusion accuracy, the Peak Signal to Noise Ratio (PSNR), Root Mean Square Error (RMSE) and Correlation Coefficient parameters are used in this work.
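The two band-level fusion rules described above (averaging for the approximation band, gradient-based selection for the detail bands) can be sketched directly. This is an illustrative sketch only: it omits the NSCT itself, the Gabor filter bank, and the local energy match step, and the gradient activity measure below is a generic stand-in.

```python
import numpy as np

def fuse_low(lA, lB):
    """Low-frequency rule: plain averaging of the approximation bands."""
    return 0.5 * (lA + lB)

def grad_energy(x):
    """Gradient energy as an activity measure for the detail bands."""
    gy, gx = np.gradient(x.astype(float))
    return gx ** 2 + gy ** 2

def fuse_high(hA, hB):
    """High-frequency rule: per-pixel selection of the band with stronger gradients."""
    return np.where(grad_energy(hA) >= grad_energy(hB), hA, hB)
```

In the full scheme these fused bands would be passed through the inverse NSCT to reconstruct the fused image.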


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1423
Author(s):  
Kai Guo ◽  
Xiongfei Li ◽  
Hongrui Zang ◽  
Tiehu Fan

In order to obtain the physiological information and key features of the source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce the computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture image details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, we propose a new encoder to capture all the medical information features in the source image. In the fusion layer, we calculate the weight of each feature map in the required fusion coefficient according to the trajectory of the feature map. Finally, the filtered medical information is spliced and decoded to reproduce the required fused image. In the encoding and image reconstruction networks, a mixed loss function of cross entropy and structural similarity is adopted to greatly reduce the information loss in image fusion. To assess performance, we conducted three sets of experiments on medical images of different grayscales and colors. The experimental results show that, compared with other algorithms, the proposed algorithm has advantages not only in detail and structure recognition but also in visual features and time complexity.
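The mixed loss of cross entropy and structural similarity can be sketched numerically. This is a minimal numpy sketch under our own assumptions (a single-window SSIM rather than the usual sliding-window version, intensities in [0, 1], and an equal-weight mix `lam=0.5`); the paper's exact loss weighting is not specified here.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between two images with intensities in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross entropy for intensities in [0, 1]."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)).mean()

def mixed_loss(pred, target, lam=0.5):
    """Weighted sum of cross entropy and the SSIM dissimilarity (1 - SSIM)."""
    return lam * bce(pred, target) + (1.0 - lam) * (1.0 - ssim_global(pred, target))
```

Combining a pixel-wise term (cross entropy) with a structural term (SSIM) is what lets the reconstruction penalize both intensity errors and structural distortions.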


2020 ◽  
Vol 14 ◽  
pp. 174830262093129
Author(s):  
Zhang Zhancheng ◽  
Luo Xiaoqing ◽  
Xiong Mengyu ◽  
Wang Zhiwen ◽  
Li Kai

Medical image fusion can combine multi-modal images into an integrated higher-quality image, which can provide more comprehensive and accurate pathological information than any individual image does. Traditional transform domain-based image fusion methods usually ignore the dependencies between coefficients and may lead to an inaccurate representation of the source image. To improve the quality of the fused image, a medical image fusion method based on the dependencies of quaternion wavelet transform coefficients is proposed. First, the source images are decomposed into low-frequency and high-frequency components by the quaternion wavelet transform. Then, a clarity evaluation index based on the quaternion wavelet transform amplitude and phase is constructed, and a contextual activity measure is designed. These measures are utilized to fuse the high-frequency coefficients, and the choose-max fusion rule is applied to the low-frequency components. Finally, the fused image is obtained by the inverse quaternion wavelet transform. The experimental results on several brain multi-modal medical images demonstrate that the proposed method achieves advanced fusion results.
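The choose-max rule mentioned for the low-frequency components is the simplest of the standard fusion rules and can be stated in one line. A minimal sketch, assuming magnitude-based selection (the usual convention):

```python
import numpy as np

def choose_max(cA, cB):
    """Per-coefficient choose-max rule: keep the coefficient of larger magnitude."""
    cA, cB = np.asarray(cA, float), np.asarray(cB, float)
    return np.where(np.abs(cA) >= np.abs(cB), cA, cB)
```

The paper's contribution lies in the clarity and contextual-activity measures for the high-frequency coefficients, which exploit the amplitude/phase structure of the quaternion wavelet transform and are not reproduced here.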


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Peng Geng ◽  
Shuaiqi Liu ◽  
Shanna Zhuang

Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. A modified local contrast measure is proposed to fuse multimodal medical images. First, the adaptive manifold filter is introduced to filter the source images, producing the low-frequency part of the modified local contrast. Second, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, over the six pairs of source images, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and its edge-based similarity measure values are on average 13%, 33%, and 14% higher.
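The spatial-frequency measure that serves as the high-frequency part of the local contrast can be sketched as follows. This shows the classic (unmodified) global spatial frequency as a stand-in; the paper's modified, windowed variant and the adaptive manifold filter are not reproduced.

```python
import numpy as np

def spatial_frequency(x):
    """Classic spatial-frequency activity measure: RMS of row and column differences."""
    x = np.asarray(x, float)
    rf = np.diff(x, axis=1) ** 2   # row-frequency terms (horizontal differences)
    cf = np.diff(x, axis=0) ** 2   # column-frequency terms (vertical differences)
    return np.sqrt(rf.mean() + cf.mean())
```

A texture-rich region scores much higher than a smooth gradient, which is why this measure discriminates well between detailed and flat areas during pixel selection.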


2020 ◽  
pp. 407-410
Author(s):  
Jakir Hussain G K ◽  
Tamilanban R ◽  
Tamilselvan K S ◽  
Vinoth Saravanan M

Multimodal image fusion is the process of combining relevant information from multiple imaging modalities. A fused image contains a richer description than the one provided by any single modality, and image fusion techniques are widely used in real-world applications such as agriculture, robotics and informatics, aeronautics, the military, medicine, and pedestrian detection. We give an outline of multimodal medical image fusion methods developed over time. Fusing medical images in various combinations assists in utilizing them for medical diagnostics and examination. There has been remarkable progress in the fields of deep learning, AI, and bio-inspired optimization techniques, and their effective utilization can further improve the effectiveness of image fusion algorithms.


Author(s):  
Guofen Wang ◽  
Yongdong Huang

The medical image fusion process integrates the information of multiple source images into a single image. This fused image can provide more comprehensive information and is helpful in clinical diagnosis and treatment. In this paper, a new medical image fusion algorithm is proposed. First, the original image is decomposed into a low-frequency sub-band and a series of high-frequency sub-bands using the nonsubsampled shearlet transform (NSST). For the low-frequency sub-band, the Kirsch operator is used to extract directional feature maps from eight directions, and the novel sum-modified-Laplacian (NSML) method is used to calculate the significant information of each directional feature map; then, combining a sigmoid function with the significant information updated by gradient domain guided image filtering (GDGF), the fusion weight coefficients of the directional feature maps are calculated. The fused feature map is obtained by summing the convolutions of the weight coefficients and the directional feature maps, and the final fused low-frequency sub-band is the linear combination of the eight fused directional feature maps. A modified pulse coupled neural network (MPCNN) model is used to calculate the firing times of each high-frequency sub-band coefficient, and the fused high-frequency sub-bands are selected according to these firing times. Finally, the inverse NSST is applied to the fused low-frequency and high-frequency sub-bands to obtain the fused image. The experimental results show that the proposed algorithm has advantages over classical medical image fusion algorithms in both objective and subjective evaluation.
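The eight-direction Kirsch feature extraction in the low-frequency branch can be sketched as follows. This is an illustrative sketch of the standard Kirsch compass kernels and a plain edge-padded correlation; the NSML, GDGF, and MPCNN stages are not reproduced, and the helper names are ours.

```python
import numpy as np

def kirsch_kernels():
    """Eight 3x3 Kirsch compass kernels: three 5s rotated around a ring of -3s."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for i, (r, c) in enumerate(ring):
            k[r, c] = vals[(i - shift) % 8]
        kernels.append(k)
    return kernels

def correlate_same(img, k):
    """'Same'-size 2-D correlation with edge padding (no SciPy dependency)."""
    p = np.pad(np.asarray(img, float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dr in range(3):
        for dc in range(3):
            out += k[dr, dc] * p[dr:dr + h, dc:dc + w]
    return out

def directional_maps(img):
    """The eight directional feature maps fed to the NSML significance step."""
    return [correlate_same(img, k) for k in kirsch_kernels()]
```

Each kernel sums to zero, so flat regions produce no response and each map isolates edge energy along one compass direction.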


Author(s):  
N. NAGARAJA KUMAR ◽  
T. JAYACHANDRA PRASAD ◽  
K. SATYA PRASAD

In recent times, multi-modal medical image fusion has emerged as an important medical application tool. An important goal is to fuse multi-modal medical images from diverse imaging modalities into a single fused image, which physicians broadly utilize for the precise identification and treatment of diseases. This medical image fusion approach helps the physician perform combined diagnosis, interventional treatment, pre-operative planning, and intra-operative guidance in various medical applications by developing the corresponding information from clinical images through different modalities. In this paper, a novel multi-modal medical image fusion method is adopted using an intelligent method. Initially, the images from two different modalities are processed with an optimized Dual-Tree Complex Wavelet Transform (DT-CWT), splitting the images into high-frequency and low-frequency subbands. As an improvement to the conventional DT-CWT, the filter coefficients are optimized by a hybrid meta-heuristic algorithm named Hybrid Beetle and Salp Swarm Optimization (HBSSO), which merges the Salp Swarm Algorithm (SSA) and Beetle Swarm Optimization (BSO). Moreover, the fusion of the source images’ high-frequency subbands is done by optimized type-2 fuzzy entropy, whose upper and lower membership limits are optimized by the same hybrid HBSSO; the optimized type-2 fuzzy entropy automatically selects the high-frequency coefficients. The fusion of the low-frequency sub-images is performed by an averaging approach. Further, the inverse optimized DT-CWT on the fused subband sets yields the final fused medical image. The main objective of the optimized DT-CWT and optimized type-2 fuzzy entropy is to maximize the SSIM. The experimental results confirm that the developed approach outperforms existing fusion algorithms on diverse performance measures.
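The idea of tuning fusion parameters to maximize SSIM can be illustrated in miniature. As a heavily simplified stand-in for the paper's HBSSO meta-heuristic over DT-CWT filter coefficients, the sketch below grid-searches a single blend weight to maximize the mean SSIM of the fused image against both sources; everything here (the single-window SSIM, the single scalar parameter, the grid search) is our own simplifying assumption.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between two images with intensities in [0, 1]."""
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fuse_max_ssim(a, b, steps=21):
    """Grid-search one blend weight maximizing the mean SSIM to both sources."""
    best_w, best_s = 0.5, -np.inf
    for w in np.linspace(0.0, 1.0, steps):
        f = w * a + (1.0 - w) * b
        s = 0.5 * (ssim_global(f, a) + ssim_global(f, b))
        if s > best_s:
            best_w, best_s = w, s
    return best_w * a + (1.0 - best_w) * b, best_w
```

A real meta-heuristic such as HBSSO would explore a much larger parameter space (the transform's filter coefficients and the fuzzy membership limits) with the same SSIM objective.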


2015 ◽  
Vol 2 (2) ◽  
pp. 78-91 ◽  
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed diversely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches, such as pyramidal, multi-resolution, and multi-scale methods. Each approach captures only a particular feature (i.e., the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet as a multi-resolution approach and the ridgelet as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm has been employed in both the ridgelet and wavelet domains to minimize redundancies. Simulations have been performed on different sets of MR and CT-scan images taken from ‘The Whole Brain Atlas'. The performance evaluation has been carried out using different image quality parameters: Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM) and Edge Strength (QFAB). The outcome of this analysis highlights the trade-off, in the wavelet and ridgelet domains, between the retrieval of information content and the morphological details in the finally fused image.


2011 ◽  
Vol 255-260 ◽  
pp. 2072-2076
Author(s):  
Yi Yong Han ◽  
Jun Ju Zhang ◽  
Ben Kang Chang ◽  
Yi Hui Yuan ◽  
Hui Xu

Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we present a new approach that uses the structural similarity index for assessing quality in image fusion. The advantages of our measures are that they do not require a reference image and can be easily computed. Numerous simulations demonstrate that our measures conform to subjective evaluations and can assess different image fusion methods.
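A no-reference, SSIM-based fusion quality index of this kind can be sketched as follows. This is a hedged sketch in the style of Piella-type metrics, not the authors' exact measure: per-block SSIM of the fused image against each source, weighted by the sources' local variance as a saliency proxy; block size and the variance weighting are our assumptions.

```python
import numpy as np

def _ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between two equally sized patches."""
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fusion_quality(a, b, f, bs=8):
    """No-reference index: per-block SSIM of the fused image f against each
    source, weighted by the sources' local variance (saliency)."""
    scores = []
    h, w = a.shape
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            pa, pb, pf = (x[i:i + bs, j:j + bs] for x in (a, b, f))
            va, vb = pa.var(), pb.var()
            lam = va / (va + vb) if va + vb > 0 else 0.5
            scores.append(lam * _ssim(pa, pf) + (1.0 - lam) * _ssim(pb, pf))
    return float(np.mean(scores))
```

Because only the two source images and the fused result are needed, no ground-truth reference image is required, matching the abstract's central claim.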

