Deep Learning Technique for Brain Tumor Detection using Medical Image Fusion

Brain tumor detection using medical image fusion plays an important role in the medical field. Using a fusion technique, a medical image can be enhanced so that the tumor is easier to detect. Image fusion is the process of combining multiple images of the same scene into a single fused image in order to reduce uncertainty and redundancy while extracting the vital information from the source images. The techniques used here to detect brain tumors are a deep belief network (DBN) and a convolutional neural network (CNN). This paper presents a new process for fusing the images to produce efficient and reliable results for detecting cancerous tissue and enabling early detection of brain tumors.
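The abstract does not include an implementation, so the following is only a hedged sketch of how a small CNN classifier might be applied to an already fused 2-D slice to predict tumor presence; the architecture, layer sizes, and the binary class labels are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class TumorCNN(nn.Module):
    """Minimal CNN for binary tumor / no-tumor classification of a fused 2-D slice."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),          # two classes assumed: tumor / no tumor
        )

    def forward(self, x):              # x: (batch, 1, H, W) fused image
        return self.classifier(self.features(x))

# Example forward pass on a dummy 256x256 fused image
model = TumorCNN()
logits = model(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 2])
```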

Author(s):  
Alka Srivastava ◽  
Ashwani Kumar Aggarwal

Nowadays, there are a large number of medical images, and their number increases day by day. These medical images are stored in large databases. Medical image fusion is used to minimize redundancy and optimize the storage required for these images. The main aim of medical image fusion is to combine complementary information from multiple imaging modalities (e.g. CT, MRI, PET) of the same scene. After fusion, the resulting image is more informative and better suited for patient diagnosis. This chapter describes several fusion techniques for obtaining a fused image and presents two approaches: spatial domain fusion and transform domain fusion. It covers Principal Component Analysis (PCA), a spatial domain technique, and the Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), which are transform domain techniques. Performance metrics are implemented to evaluate the image fusion algorithms.
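As a concrete illustration of the spatial-domain approach mentioned above, the sketch below implements a common PCA weighting rule for two co-registered grayscale source images; the variable names and the weight normalization are assumptions, not the chapter's exact algorithm.

```python
import numpy as np

def pca_fusion(img1, img2):
    """Fuse two co-registered grayscale images with PCA-derived weights."""
    # Stack the flattened images as two variables observed over all pixels.
    data = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    cov = np.cov(data)                               # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant eigenvector (sign-safe)
    w = principal / principal.sum()                  # normalize weights to sum to 1
    return w[0] * img1 + w[1] * img2

# Usage on two random stand-in "images" of the same size
a, b = np.random.rand(128, 128), np.random.rand(128, 128)
fused = pca_fusion(a, b)
```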


Author(s):  
Padmanjali A. Hagargi

Image fusion is a technique for combining two or more images. Because a fused image carries more information than any single image, fusing multiple images makes it possible to extract more information, which is why fusion is important in medical image analysis. The technique is useful for detecting different kinds of disease from different kinds of medical images. Brain tumors remain a major problem because improper diagnosis leads to inadequate treatment. T1- and T2-weighted MR images are two medical MR images acquired with different time constants during brain tumor scanning. These two or more images can be combined with various image fusion techniques to extract additional information.
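As one hedged example of the "various image fusion techniques" referred to above, the sketch below fuses co-registered T1- and T2-weighted slices with a single-level discrete wavelet transform: approximation bands are averaged and the larger-magnitude detail coefficients are kept. The wavelet choice and fusion rule are illustrative assumptions.

```python
import numpy as np
import pywt

def dwt_fuse(t1, t2, wavelet="db2"):
    """Single-level DWT fusion of two co-registered MR slices."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(t1.astype(np.float64), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(t2.astype(np.float64), wavelet)
    cA = (cA1 + cA2) / 2.0                                      # average low-frequency content
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # max-abs rule for details
    details = (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2))
    return pywt.idwt2((cA, details), wavelet)

t1_img = np.random.rand(256, 256)   # stand-ins for registered T1/T2 slices
t2_img = np.random.rand(256, 256)
fused = dwt_fuse(t1_img, t2_img)
```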


Author(s):  
M Mozaffarilegha ◽  
A Yaghobi Joybari ◽  
A Mostaar

Background: Medical image fusion is widely used to capture complementary information from images of different modalities. The aim of image fusion techniques is to combine the useful information present in medical images so that the fused image exhibits more information than the source images.
Objective: In the current study, a BEMD-based multi-modal medical image fusion technique is utilized, and the Teager-Kaiser energy operator (TKEO) is applied to the lower BIMFs. The results are compared with six routine methods.
Methods: An image fusion technique using bi-dimensional empirical mode decomposition (BEMD), the Teager-Kaiser energy operator (TKEO) for local feature selection, and the HMAX model is presented. The BEMD fusion technique can preserve much functional information. In the fusion process, we adopt TKEO as the fusion rule for the lower bi-dimensional intrinsic mode functions (BIMFs) of the two images and the HMAX visual cortex model as the fusion rule for the higher BIMFs, which is verified to be more appropriate for the human visual system. Integrating BEMD with this efficient fusion scheme retains more of the spatial and functional features of the input images.
Results: We compared our method with the IHS, DWT, LWT, PCA, NSCT and SIST methods. The simulation results and fusion performance show that the presented method is effective in terms of mutual information, fused image quality (QAB/F), standard deviation, peak signal-to-noise ratio and structural similarity, with considerably better results than the six typical fusion methods.
Conclusion: The statistical analyses revealed that our algorithm significantly improves spatial features and reduces color distortion compared with other fusion techniques. The proposed approach can be used in routine practice. Fusion of functional and morphological medical images is possible before, during and after the treatment of tumors in different organs, and fused images can support interventional procedures and further assessment.
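For readers unfamiliar with TKEO, the snippet below is a minimal sketch of one common 2-D discrete form of the operator and of how it could select, pixel-wise, between two lower-BIMF coefficient maps; the exact operator variant and selection rule used by the authors may differ.

```python
import numpy as np

def tkeo_2d(x):
    """One common 2-D discrete Teager-Kaiser energy operator (interior pixels only)."""
    c = x[1:-1, 1:-1]
    return 2.0 * c**2 - x[:-2, 1:-1] * x[2:, 1:-1] - x[1:-1, :-2] * x[1:-1, 2:]

def tkeo_select(bimf_a, bimf_b):
    """Keep, pixel-wise, the BIMF coefficient with higher local Teager-Kaiser energy."""
    ea = np.pad(tkeo_2d(bimf_a), 1, mode="edge")   # pad back to the original shape
    eb = np.pad(tkeo_2d(bimf_b), 1, mode="edge")
    return np.where(ea >= eb, bimf_a, bimf_b)
```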


2018 ◽  
Vol 11 (4) ◽  
pp. 1937-1946
Author(s):  
Nancy Mehta ◽  
Sumit Budhiraja

Multimodal medical image fusion aims at minimizing redundancy and collecting the relevant information from input images acquired from different medical sensors. The main goal is to produce a single fused image that carries more information and is more useful for medical applications. In this paper, a modified fusion method is proposed in which NSCT decomposition is applied to the wavelet coefficients obtained after wavelet decomposition; being a multidirectional, shift-invariant transform, NSCT provides better results. A guided filter is used for the fusion of the high-frequency coefficients on account of its edge-preserving property. Phase congruency is used for the fusion of the low-frequency coefficients because of its insensitivity to illumination and contrast, which makes it suitable for medical images. The simulation results show that the proposed technique performs better in terms of entropy, the structural similarity index and the Piella metric. The fusion response of the proposed technique is also compared with other fusion approaches, confirming the effectiveness of the obtained fusion results.
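The guided-filter step for the high-frequency coefficients can be read roughly as follows; this is a hedged sketch, not the paper's algorithm: a saliency map (absolute coefficient magnitude) yields a binary decision map that is smoothed by a guided filter into edge-aware weights. It assumes opencv-contrib (cv2.ximgproc) is installed, and the radius/eps values are arbitrary placeholders.

```python
import cv2
import numpy as np

def guided_weight_fuse(hf1, hf2, radius=8, eps=1e-3):
    """Fuse two high-frequency coefficient maps with guided-filter-smoothed weights."""
    s1, s2 = np.abs(hf1), np.abs(hf2)                       # saliency = coefficient magnitude
    w1 = (s1 >= s2).astype(np.float32)                      # initial binary decision map
    guide = hf1.astype(np.float32)                          # guide image preserves edges
    w1 = cv2.ximgproc.guidedFilter(guide, w1, radius, eps)  # edge-aware weight smoothing
    w1 = np.clip(w1, 0.0, 1.0)
    return w1 * hf1 + (1.0 - w1) * hf2
```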


2017 ◽  
pp. 711-723
Author(s):  
Vikrant Bhateja ◽  
Abhinav Krishn ◽  
Himanshi Patel ◽  
Akanksha Sahu

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed widely for computer-aided diagnosis of life-threatening diseases. Fusion has been performed using various approaches such as pyramidal, multi-resolution and multi-scale methods. Each fusion approach captures only a particular feature (i.e. the information content or the structural properties of an image). Therefore, this paper presents a comparative analysis and evaluation of multi-modal medical image fusion methodologies employing the wavelet transform as a multi-resolution approach and the ridgelet transform as a multi-scale approach. The current work highlights the utility of these approaches according to the features required in the fused image. A Principal Component Analysis (PCA) based fusion algorithm is employed in both the ridgelet and wavelet domains to minimise redundancy. Simulations have been performed for different sets of MR and CT-scan images taken from 'The Whole Brain Atlas'. The performance evaluation uses several image quality parameters: Entropy (E), Fusion Factor (FF), Structural Similarity Index (SSIM) and Edge Strength (QAB/F). The outcome of this analysis highlights the trade-off between the retrieval of information content and the morphological details in the final fused image in the wavelet and ridgelet domains.
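Two of the quality measures listed above, entropy and SSIM, can be computed with standard libraries; the sketch below shows one way to do so, assuming 8-bit grayscale inputs. Fusion Factor and edge strength (QAB/F) require custom implementations and are not shown.

```python
import numpy as np
from skimage.measure import shannon_entropy
from skimage.metrics import structural_similarity

def fusion_metrics(fused, src1, src2):
    """Entropy of the fused image and its SSIM against each 8-bit source image."""
    return {
        "entropy": shannon_entropy(fused),
        "ssim_src1": structural_similarity(fused, src1, data_range=255),
        "ssim_src2": structural_similarity(fused, src2, data_range=255),
    }

# Example with dummy 8-bit images
img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(fusion_metrics(img, img, img))
```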


Oncology ◽  
2017 ◽  
pp. 519-541
Author(s):  
Satishkumar S. Chavan ◽  
Sanjay N. Talbar

The process of enriching the important details from medical images of various modalities by combining them into a single image is called multimodality medical image fusion. It aids physicians with better visualization, more accurate diagnosis and an appropriate treatment plan for the cancer patient. The fused image is the result of merging anatomical and physiological variations; it allows accurate localization of cancerous tissue and is helpful for estimating the target volume for radiation. The details from both modalities (CT and MRI) are extracted in the frequency domain by applying various transforms and are combined using a variety of fusion rules to achieve the best image quality. The performance and effectiveness of each transform on the fusion results are evaluated both subjectively and objectively. According to this subjective and objective analysis, the algorithms in which feature extraction is performed with the M-Band Wavelet Transform and the Daubechies Complex Wavelet Transform produce fused images superior to those of the other frequency domain algorithms.
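As one hedged example of the "variety of fusion rules" applied to transform-domain detail coefficients (not the chapter's M-band or complex-wavelet pipeline itself), the sketch below keeps, per pixel, the CT or MRI coefficient whose local neighbourhood carries more energy; the window size is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy_rule(coef_ct, coef_mri, win=5):
    """Keep, pixel-wise, the detail coefficient with higher local (windowed) energy."""
    e_ct = uniform_filter(coef_ct.astype(np.float64) ** 2, size=win)
    e_mri = uniform_filter(coef_mri.astype(np.float64) ** 2, size=win)
    return np.where(e_ct >= e_mri, coef_ct, coef_mri)
```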


2018 ◽  
Vol 7 (2.31) ◽  
pp. 165
Author(s):  
M Shyamala Devi ◽  
P Balamurugan

Image processing applications may require either the full image or only the part of the image that is of interest to the user, such as the radius of an object. The main purpose of fusion is to minimise the dissimilarity error between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, so preserving the edge features of the image is a worthwhile goal when investigating image fusion; an image with higher contrast contains more edge-like features. Here we propose a new medical image fusion scheme, Local Energy Match NSCT, based on the non-subsampled contourlet transform, which is well suited to representing curved edges. It improves the edge information in the fused image by reducing distortion. The transform decomposes the multimodal image into finer and coarser details, and the finest details are further decomposed into different resolutions at different orientations. The input multimodal images, namely CT and MRI images, are first transformed by the Non-Subsampled Contourlet Transform (NSCT), which decomposes each image into low-frequency and high-frequency components. In our system, the low-frequency coefficients are fused by image averaging and a Gabor filter bank algorithm, while the high-frequency coefficients are fused by image averaging and a gradient-based fusion algorithm. The fused image is then obtained by the inverse NSCT with local-energy-matched coefficients. To evaluate the fusion accuracy, the Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE) and Correlation Coefficient are used in this work.
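The low-frequency fusion step combines image averaging with a Gabor filter bank; the sketch below shows one plausible reading of that rule, weighting each low-frequency band by its Gabor-response energy before averaging. The filter-bank parameters are placeholders, not the ones used in the paper.

```python
import cv2
import numpy as np

def gabor_energy(img, n_orient=4):
    """Summed squared responses to a small Gabor filter bank (placeholder parameters)."""
    img32 = img.astype(np.float32)
    energy = np.zeros_like(img32)
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        # ksize=(21, 21), sigma=4.0, theta, lambda=10.0, gamma=0.5
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5).astype(np.float32)
        energy += cv2.filter2D(img32, cv2.CV_32F, kern) ** 2
    return energy

def fuse_low_freq(low_ct, low_mri):
    """Weighted average of low-frequency bands, with weights from Gabor energy."""
    e1, e2 = gabor_energy(low_ct), gabor_energy(low_mri)
    w1 = e1 / (e1 + e2 + 1e-12)
    return w1 * low_ct + (1.0 - w1) * low_mri
```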


2020 ◽  
Vol 2020 ◽  
pp. 1-16 ◽  
Author(s):  
Bing Huang ◽  
Feng Yang ◽  
Mengxiao Yin ◽  
Xiaoying Mo ◽  
Cheng Zhong

Medical image fusion is the process of coalescing multiple images from multiple imaging modalities to obtain a fused image with a large amount of information, thereby increasing the clinical applicability of medical images. In this paper, we give an overview of multimodal medical image fusion methods, with emphasis on the most recent advances in the domain, covering (1) current fusion methods, including deep-learning-based approaches, (2) the imaging modalities used in medical image fusion, and (3) the performance of medical image fusion methods on the main datasets. We conclude that current multimodal medical image fusion research has produced significant results and that the field is growing, although many research challenges remain.


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1423
Author(s):  
Kai Guo ◽  
Xiongfei Li ◽  
Hongrui Zang ◽  
Tiehu Fan

In order to obtain the physiological information and key features of the source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture image details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, we propose a new encoder to capture all the medical information features in the source image. In the fusion layer, we calculate the weight of each feature map in the required fusion coefficient according to the trajectory of the feature map. Finally, the filtered medical information is concatenated and decoded to reproduce the required fused image. In the encoding and image reconstruction networks, a mixed loss function of cross entropy and structural similarity is adopted to greatly reduce the information loss in image fusion. To assess performance, we conducted three sets of experiments on medical images of different grayscales and colors. The experimental results show that the proposed algorithm has advantages not only in detail and structure recognition but also in visual features and time complexity compared with other algorithms.
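The encoder is described as DenseNet-inspired; as a hedged illustration of that feature-reuse idea (not the CIDN architecture itself), the block below concatenates each layer's output with all earlier feature maps before the next convolution, and the channel counts are arbitrary.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Tiny DenseNet-style block: each conv sees all previous feature maps (feature reuse)."""
    def __init__(self, in_ch=1, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth                      # next layer receives the concatenation

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)        # all features passed on for fusion/decoding

encoder = DenseBlock()
out = encoder(torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 49, 64, 64]) = 1 + 3*16 channels
```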


2016 ◽  
Vol 16 (04) ◽  
pp. 1650022 ◽  
Author(s):  
Deepak Gambhir ◽  
Meenu Manchanda

Medical image fusion is used at large by clinical professionals for improved diagnosis and treatment of diseases. The main aim of the image fusion process is to combine the complete information from all input images into a single fused image. Therefore, a novel fusion rule is proposed for fusing medical images based on the Daubechies complex wavelet transform (DCxWT). Input images are first decomposed using the DCxWT. The complex coefficients so obtained are then fused using a normalized-correlation-based fusion rule. Finally, the fused image is obtained by applying the inverse DCxWT to all the combined complex coefficients. The performance of the proposed method has been evaluated and compared, both visually and objectively, with DCxWT-based fusion methods using state-of-the-art fusion rules as well as with existing fusion techniques. Experimental results and the comparative study demonstrate that the proposed fusion technique generates better results than existing fusion rules and other fusion techniques.
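The abstract does not spell out the normalized-correlation rule; below is one hedged interpretation applied to generic real-valued coefficient maps (a real wavelet standing in for the DCxWT): where the local normalized correlation between the two sub-bands is high, the coefficients are averaged, otherwise the higher-magnitude coefficient is kept. The window size and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def norm_corr_fuse(c1, c2, win=7, thresh=0.6):
    """Fuse two coefficient maps using a local normalized-correlation rule."""
    c1 = np.asarray(c1, dtype=np.float64)
    c2 = np.asarray(c2, dtype=np.float64)
    m1, m2 = uniform_filter(c1, win), uniform_filter(c2, win)
    v1 = uniform_filter(c1 * c1, win) - m1 * m1            # local variances
    v2 = uniform_filter(c2 * c2, win) - m2 * m2
    cov = uniform_filter(c1 * c2, win) - m1 * m2            # local covariance
    corr = cov / np.sqrt(np.maximum(v1 * v2, 1e-12))        # local normalized correlation
    averaged = (c1 + c2) / 2.0
    selected = np.where(np.abs(c1) >= np.abs(c2), c1, c2)   # higher-magnitude coefficient
    return np.where(corr >= thresh, averaged, selected)
```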

