Dual Filter Based Image Fusion Algorithm for CT and MRI Medical Images

A novel image fusion algorithm for Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) medical images is proposed, based on two filters: a Laplacian filter for denoising the detail coefficients, and a Guided Filter (GF) for refining both the approximation and detail coefficients. A wavelet transform decomposes each CT and MRI image into one approximation coefficient and three detail coefficients. After denoising, two weight maps are obtained by comparing the two approximation coefficients and the six detail coefficients. The GF is constructed from the approximation and detail coefficients, with the source image acting as guide for the corresponding weight map. The weight maps smoothed by the GF then serve as inputs to a weighted fusion rule that fuses the CT and MRI images, and the final fused image is obtained by reconstructing the fused coefficients with the inverse wavelet transform. Comparison results show that the proposed system gives better results than existing systems and preserves a maximal amount of detail from the input images.
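The guided-filter smoothing of the weight maps can be sketched in plain NumPy. This is not the authors' code: `box_mean` and `guided_filter` are illustrative names, the filter follows He et al.'s standard formulation, and the radius and epsilon values are arbitrary.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via an integral image."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so windowed sums are c[i+k]-c[i]
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Standard guided filter: the output is locally linear in the guide I."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)   # local linear coefficient
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)
```

With the source image as guide `I` and a weight map as `p`, the smoothed weight map follows the guide's edges, which is the behaviour the abstract relies on.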

2017 ◽  
Vol 10 (03) ◽  
pp. 1750001 ◽  
Author(s):  
Abdallah Bengueddoudj ◽  
Zoubeida Messali ◽  
Volodymyr Mosorov

In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach, using a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The results show the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics confirm the accuracy of the proposed fusion method.
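The PCA fusion rule for approximation coefficients can be illustrated with a short NumPy sketch (not the authors' implementation): the fusion weights come from the dominant eigenvector of the 2x2 covariance matrix of the two coefficient sets, so the source with more variance contributes more.

```python
import numpy as np

def pca_fusion(a, b):
    """Fuse two coefficient arrays with weights taken from the principal
    eigenvector of their 2x2 covariance matrix (a common PCA fusion rule)."""
    C = np.cov(np.stack([a.ravel(), b.ravel()]))
    _, vecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])          # dominant eigenvector
    w = w / w.sum()                  # normalize weights to sum to 1
    return w[0] * a + w[1] * b, w
```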


2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, the Laplacian pyramid transform, and the curvelet transform; these demonstrate better spatial and spectral quality in the fused image than purely spatial methods of fusion. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic does not extend easily to two or more dimensions: separable wavelets built by spanning one-dimensional wavelets have limited directionality. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against the others previously described to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.


2013 ◽  
Vol 722 ◽  
pp. 478-481
Author(s):  
Wei Dong Zhu ◽  
Wei Shen ◽  
Xin Ru Tu

In order to obtain a new image with high spatial resolution and an abundant spectrum by fusing panchromatic and multispectral images, a novel fusion algorithm based on the second-generation Bandelet transform is presented; the fusion rule for Bandelet coefficients selects the coefficient with the maximum absolute value. Fusion experiments with the new method, IHS, and the wavelet transform are carried out on panchromatic and multispectral Landsat-7 images. The experimental results show that the image fused by the new method is superior: image edges are more distinct, demonstrating that the Bandelet transform can track image edges adaptively.
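The maximum-absolute-value fusion rule is simple to state in NumPy (a generic sketch of the rule itself, not the paper's Bandelet-domain code): at each position, keep whichever coefficient has the larger magnitude.

```python
import numpy as np

def fuse_max_abs(A, B):
    """Keep, at each position, the coefficient with the larger magnitude."""
    return np.where(np.abs(A) >= np.abs(B), A, B)
```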


2013 ◽  
Vol 860-863 ◽  
pp. 2846-2849
Author(s):  
Ming Jing Li ◽  
Yu Bing Dong ◽  
Xiao Li Wang

Image fusion is the process of combining relevant information from two or more images into a single image; the aim of fusion is to extract the information relevant to further analysis. Depending on the application and the characteristics of the algorithm, image fusion can be used to improve image quality. This paper presents a comparative analysis of image fusion algorithms based on the wavelet transform and the Laplacian pyramid. The principles, operations, steps, and characteristics of each fusion algorithm are summarized, and the advantages and disadvantages of the different algorithms are compared. The fusion results of the different algorithms are produced in MATLAB. Experimental results show that the quality of the fused image is improved noticeably.
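A minimal Laplacian pyramid fusion can be sketched in NumPy. This is a simplified stand-in for the paper's MATLAB experiments, with assumptions: 2x2 block averaging replaces Gaussian filtering, detail layers are fused by maximum absolute value, and the coarse base layers are averaged. The construction still gives exact reconstruction.

```python
import numpy as np

def down2(img):
    """2x2 block average (a crude stand-in for Gaussian blur + decimate)."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def up2(img):
    """Nearest-neighbour upsampling back to twice the size."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    pyr = []
    for _ in range(levels):
        small = down2(img)
        pyr.append(img - up2(small))   # detail (Laplacian) layer
        img = small
    pyr.append(img)                    # coarsest Gaussian layer
    return pyr

def fuse_pyramids(pa, pb):
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)  # max-abs on details
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)            # average the base
    return fused

def reconstruct(pyr):
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = up2(img) + lap
    return img
```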


Electronics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 150
Author(s):  
Meicheng Zheng ◽  
Weilin Luo

Due to refraction, absorption, and scattering of light by suspended particles in water, underwater images are characterized by low contrast, blurred details, and color distortion. In this paper, a fusion algorithm to restore and enhance underwater images is proposed. It consists of a color restoration module, an end-to-end defogging module, and a brightness equalization module. In the color restoration module, a color balance algorithm based on the CIE Lab color model is proposed to alleviate color deviation in underwater images. In the end-to-end defogging module, one end is the input image and the other is the output image; a CNN is proposed to connect the two ends and to improve the contrast of the underwater images. Within the CNN, a sub-network is used to reduce the network depth needed to obtain the same features, and several depthwise separable convolutions are used to reduce the number of parameters to be trained. A basic attention module is introduced to highlight important areas in the image. To improve the defogging network's ability to extract global information, a cross-layer connection and a pooling pyramid module are added. In the brightness equalization module, a contrast-limited adaptive histogram equalization method is used to coordinate the overall brightness. The proposed fusion algorithm for underwater image restoration and enhancement is verified by experiments and by comparison with previous deep learning models and traditional methods. The comparison results show that the color correction and detail enhancement of the proposed method are superior.
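The color-restoration idea can be illustrated with a much simpler gray-world balance in RGB. This is only a hedged approximation: the paper's module works in CIE Lab, while the sketch below merely shows the underlying principle of correcting a per-channel color cast by equalizing channel means.

```python
import numpy as np

def gray_world_balance(img):
    """Scale each RGB channel so its mean matches the global mean intensity
    (a simple stand-in for the paper's Lab-space color balance)."""
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gray = means.mean()
    out = img * (gray / means)               # channel-wise gain
    return np.clip(out, 0.0, 1.0)
```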


Author(s):  
Alka Srivastava ◽  
Ashwani Kumar Aggarwal

Nowadays, there are a large number of medical images, and their numbers increase day by day. These medical images are stored in large databases. To minimize redundancy and optimize the storage capacity of images, medical image fusion is used. The main aim of medical image fusion is to combine complementary information from multiple imaging modalities (e.g. CT, MRI, PET) of the same scene. After medical image fusion, the resulting image is more informative and better suited to patient diagnosis. This chapter describes fusion techniques for obtaining the fused image, presenting two approaches: spatial-domain fusion and transform-domain fusion. The techniques described include Principal Component Analysis, a spatial-domain technique, and the Discrete Wavelet Transform and Stationary Wavelet Transform, which are transform-domain techniques. Performance metrics are implemented to evaluate the image fusion algorithms.
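The key practical difference between the decimated DWT and the stationary wavelet transform (SWT) is that SWT keeps coefficient arrays at the input size, avoiding shift sensitivity. A one-level undecimated decomposition and fusion can be sketched as follows (illustrative only: a real SWT uses wavelet filter banks, while this sketch uses a box-blur approximation plus its residual).

```python
import numpy as np

def swt_level(img, r=1):
    """One undecimated decomposition level: a box-blur approximation plus
    the detail residual. Output shapes match the input, the key SWT property."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    approx = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(k) for j in range(k)) / (k * k)
    return approx, img - approx

def swt_fuse(x, y):
    """Average the approximations, keep the stronger detail at each pixel."""
    ax, dx = swt_level(x)
    ay, dy = swt_level(y)
    return (ax + ay) / 2 + np.where(np.abs(dx) >= np.abs(dy), dx, dy)
```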


2019 ◽  
Vol 64 (2) ◽  
pp. 211-220
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Nowadays, infrared and visible image fusion is used in significant applications such as military, surveillance, remote sensing, and medical imaging. A discrete wavelet transform (DWT) based image fusion method using unsharp masking is presented. The DWT decomposes the input images (infrared and visible), generating approximation and detail coefficients. To improve contrast, unsharp masking is applied to the approximation coefficients. The approximation coefficients produced after unsharp masking are merged using the average fusion rule, while the detail coefficients are merged using the max fusion rule. Finally, the inverse DWT (IDWT) generates the fused image. The proposed fusion method provides good contrast and better performance in terms of mean, entropy, and standard deviation when compared with existing techniques.
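The pipeline described (one-level DWT, unsharp masking of the approximation, average rule for approximations, max rule for details, then IDWT) can be sketched with a Haar wavelet in NumPy. This is a minimal reconstruction of the method as described, not the authors' code; the unsharp amount and the Haar basis are assumptions.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar transform: approximation LL plus details LH, HL, HH."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (LL + LH + HL + HH) / 2
    x[0::2, 1::2] = (LL + LH - HL - HH) / 2
    x[1::2, 0::2] = (LL - LH + HL - HH) / 2
    x[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return x

def unsharp(A, amount=0.5):
    """Boost contrast by adding back the detail a 3x3 box blur removes."""
    p = np.pad(A, 1, mode='edge')
    blur = sum(p[i:i + A.shape[0], j:j + A.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return A + amount * (A - blur)

def dwt_fuse(x, y, amount=0.5):
    cx, cy = haar_dwt2(x), haar_dwt2(y)
    LL = (unsharp(cx[0], amount) + unsharp(cy[0], amount)) / 2  # average rule
    det = [np.where(np.abs(a) >= np.abs(b), a, b)               # max rule
           for a, b in zip(cx[1:], cy[1:])]
    return haar_idwt2(LL, *det)
```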


2021 ◽  
pp. 1-24
Author(s):  
F. Sangeetha Francelin Vinnarasi ◽  
Jesline Daniel ◽  
J.T. Anita Rose ◽  
R. Pugalenthi

Multi-modal image fusion techniques aid medical experts in better disease diagnosis by providing adequate complementary information from multi-modal medical images. These techniques enhance the effectiveness of medical disorder analysis and the classification of results. This study proposes a novel technique using deep learning for the fusion of multi-modal medical images. The modified 2D Adaptive Bilateral Filters (M-2D-ABF) algorithm is used in image pre-processing to filter various types of noise. Contrast and brightness are improved by applying the proposed energy-based CLAHE algorithm, preserving the high-energy regions of the multimodal images. Images from two different modalities are first registered using mutual information, and the registered images are then fused into a single image. In the proposed fusion scheme, images are fused using the Siamese Neural Network and Entropy (SNNE)-based image fusion algorithm. In particular, the medical images are fused using a Siamese convolutional neural network structure and the entropy of the images: fusion is performed on the basis of the SoftMax layer score and the image entropy. The fused image is segmented using the Fast Fuzzy C-Means Clustering Algorithm (FFCMC) and Otsu thresholding. Finally, various features are extracted from the segmented regions, and classification is performed on these features using a Logistic Regression classifier. Evaluation is performed using a publicly available benchmark dataset. Experimental results on various pairs of multi-modal medical images reveal that the proposed multi-modal image fusion and classification techniques are competitive with the existing state-of-the-art techniques reported in the literature.
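The entropy side of the SNNE rule can be illustrated in isolation with NumPy (a hedged sketch: the paper combines entropy with SoftMax scores from a Siamese network, which is omitted here, and the bin count is an assumption).

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy (bits) of the intensity histogram over [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_weighted_fuse(a, b):
    """Weight each source by its share of the total entropy."""
    ea, eb = entropy(a), entropy(b)
    wa = ea / (ea + eb)
    return wa * a + (1 - wa) * b, (wa, 1 - wa)
```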


In today's research era, image fusion is a step-by-step procedure for improving the visualization of an image: it integrates the essential features of two or more images into a single fused image without introducing artifacts. Multi-focus image fusion is a key case of the fusion process; it aims to increase the depth of field by extracting the focused parts from multiple differently focused images. In this paper, a multi-focus image fusion algorithm is proposed in which a non-local means technique is used within the stationary wavelet transform (SWT) to obtain a sharp yet smooth image. The non-local means function analyses the pixels belonging to the blurred regions and improves image quality. The proposed work is compared with several existing methods, and the results are analysed both visually and using performance metrics.
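The core multi-focus idea of extracting the focused part can be illustrated with a local-variance focus measure (a generic sketch, not the paper's SWT plus non-local-means method; window radius and the variance measure are assumptions): each output pixel is taken from whichever source is locally sharper.

```python
import numpy as np

def local_variance(img, r=2):
    """Variance over a (2r+1)x(2r+1) window; high values mark in-focus areas."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    win = lambda q: sum(q[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(k) for j in range(k)) / (k * k)
    m = win(p)
    return win(p * p) - m * m  # E[x^2] - E[x]^2

def multifocus_fuse(a, b, r=2):
    """Pick each pixel from whichever source is locally sharper."""
    return np.where(local_variance(a, r) >= local_variance(b, r), a, b)
```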

