An Improved Multifocus Image Fusion Algorithm Using Deep Learning and Adaptive Fuzzy Filter

Author(s):  
Meenu Manchanda ◽  
Deepak Gambhir ◽  
Sanjeev Kr. Singh
2021 ◽  
pp. 1-24
Author(s):  
F. Sangeetha Francelin Vinnarasi ◽  
Jesline Daniel ◽  
J.T. Anita Rose ◽  
R. Pugalenthi

Multi-modal image fusion techniques aid medical experts in better disease diagnosis by providing adequate complementary information from multi-modal medical images. These techniques enhance the effectiveness of medical disorder analysis and the classification of results. This study proposes a novel deep-learning-based technique for the fusion of multi-modal medical images. The Modified 2D Adaptive Bilateral Filter (M-2D-ABF) algorithm is used during image pre-processing to filter various types of noise. Contrast and brightness are improved by applying the proposed energy-based CLAHE algorithm, which preserves the high-energy regions of the multi-modal images. Images from two different modalities are first registered using mutual information, and the registered images are then fused into a single image. In the proposed fusion scheme, images are fused using the Siamese Neural Network and Entropy (SNNE)-based image fusion algorithm. Specifically, the medical images are fused using a Siamese convolutional neural network and the entropy of the images; fusion is performed on the basis of the SoftMax layer score and the image entropy. The fused image is segmented using the Fast Fuzzy C-Means Clustering (FFCMC) algorithm and Otsu thresholding. Finally, various features are extracted from the segmented regions and used for classification with a Logistic Regression classifier. Evaluation is performed on a publicly available benchmark dataset. Experimental results on various pairs of multi-modal medical images show that the proposed multi-modal image fusion and classification techniques are competitive with existing state-of-the-art techniques reported in the literature.
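As an illustration of the scheme's final fusion step, the sketch below combines two registered images using per-image weights built from a Siamese network's SoftMax score and the image entropy. This is a minimal sketch, not the authors' implementation: the abstract does not state how score and entropy are combined, so the product rule, the `fuse_pair` helper, and the placeholder score values are assumptions.

```python
# Minimal sketch of an entropy- and score-weighted fusion step.
# Assumption: SoftMax scores come from a pre-trained Siamese CNN;
# using score * entropy as the weight is illustrative only.
import numpy as np

def shannon_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of a grayscale image with values in [0, 255]."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()          # bin probabilities
    return float(-np.sum(p * np.log2(p)))

def fuse_pair(img_a, img_b, score_a, score_b):
    """Weighted fusion of two registered single-channel images."""
    w_a = score_a * shannon_entropy(img_a)
    w_b = score_b * shannon_entropy(img_b)
    total = w_a + w_b
    return (w_a * img_a.astype(np.float64)
            + w_b * img_b.astype(np.float64)) / total

# Usage with random stand-ins for two registered modality images
rng = np.random.default_rng(0)
mri = rng.integers(0, 256, (128, 128)).astype(np.float64)
ct = rng.integers(0, 256, (128, 128)).astype(np.float64)
fused = fuse_pair(mri, ct, score_a=0.6, score_b=0.4)  # hypothetical scores
```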


2021 ◽  
Vol 7 ◽  
pp. e364
Author(s):  
Omar M. Elzeki ◽  
Mohamed Abd Elfattah ◽  
Hanaa Salem ◽  
Aboul Ella Hassanien ◽  
Mahmoud Shams

Background and Purpose: COVID-19 is caused by a new strain of coronavirus that is spreading rapidly across the world, bringing daily life to a halt and posing a threat to people’s health. Experimental medical tests and analyses have shown that lung infection occurs in almost all COVID-19 patients. Although computed tomography (CT) of the chest is a useful imaging method for diagnosing lung-related diseases, chest X-ray (CXR) is more widely available, mainly due to its lower cost and faster results. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze the large numbers of CXR images that are crucial to diagnostic performance. Materials and Methods: In this article, we propose a novel perceptual two-layer image fusion approach using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the performance of the proposed algorithm, the dataset used for this work includes 87 CXR images acquired from 25 cases, all confirmed with COVID-19. Dataset preprocessing is needed to facilitate the role of convolutional neural networks (CNNs). Thus, a hybrid scheme was used that decomposes and fuses images with the Nonsubsampled Contourlet Transform (NSCT) and employs CNN_VGG19 as the feature extractor. Results: Our experimental results show that the algorithm established here can reliably generate fused images from the imbalanced COVID-19 dataset. Compared to the original COVID-19 dataset, the fused images contain more features and characteristics. Six metrics, namely QAB/F, QMI, PSNR, SSIM, SF, and STD, are applied to evaluate the various medical image fusion (MIF) methods. In terms of QMI, PSNR, and SSIM, the proposed NSCT + CNN_VGG19 algorithm achieves the highest scores, and its fused images contain the largest number of features and characteristics. We can deduce that the proposed fusion algorithm is efficient enough to generate CXR COVID-19 images that are more useful for the examiner in exploring patient status. Conclusions: A novel DL-based image fusion algorithm for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results show that the proposed NSCT + CNN_VGG19 algorithm outperforms competing image fusion algorithms.
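Of the six metrics listed, four have widely agreed definitions and can be sketched directly; QAB/F and QMI require reference implementations and are omitted here. The snippet below is a minimal sketch assuming scikit-image is available and that images are 8-bit grayscale arrays; it is not the authors' evaluation code.

```python
# Sketch of four of the six listed fusion metrics: PSNR, SSIM, SF, STD.
# Assumption: `reference` is a source image (or ground truth) of the
# same shape as the fused image; QAB/F and QMI are omitted.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def spatial_frequency(img: np.ndarray) -> float:
    """SF: combined row/column gradient activity of the fused image."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def evaluate(fused: np.ndarray, reference: np.ndarray) -> dict:
    """Compute the four reference-based/statistical metrics."""
    return {
        "PSNR": peak_signal_noise_ratio(reference, fused, data_range=255),
        "SSIM": structural_similarity(reference, fused, data_range=255),
        "SF": spatial_frequency(fused),
        "STD": float(np.std(fused)),  # spread of intensities in the fused image
    }
```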


Author(s):  
Meenu Manchanda ◽  
Deepak Gambhir

Multifocus image fusion is a demanding research field owing to the widespread use of modern imaging devices. Generally, the scene to be captured contains objects at different distances from the device, so a set of multifocus images of the scene is captured with different objects in focus. To improve situational awareness of the captured scene, these images must be fused together. Therefore, a multifocus image fusion algorithm based on a Convolutional Neural Network (CNN) and a triangulated fuzzy filter is proposed. The CNN is used to extract information about the focused pixels of the input images, and this information serves as the fusion rule for fusing them. The focus information so extracted may still need refinement near the boundaries, so an asymmetrical triangular fuzzy filter with median center (ATMED) is employed to correctly classify pixels near the boundary. This refinement matters because fusion quality relies on precise detection results, and any misdetection may considerably degrade it. The performance of the proposed algorithm is compared with state-of-the-art image fusion algorithms, both subjectively and objectively, using parameters such as edge strength (QAB/F), fusion loss (FL), fusion artifacts (FA), entropy (E), standard deviation (SD), spatial frequency (SF), structural similarity index measure (SSIM), and feature similarity index measure (FSIM). Experimental results show that the proposed fusion algorithm produces an all-in-focus fused image and outperforms other popular and recent image fusion works.
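To make the boundary-refinement step concrete, the sketch below applies one common formulation of the ATMED filter (fuzzy-weighted averaging with an asymmetric triangular membership function centered at the window median) to a CNN-derived focus map before pixel selection. The window size, the 0.5 decision threshold, and the `fuse` wrapper are assumptions; the abstract does not specify them, and the CNN focus map itself is taken as given.

```python
# Sketch of decision-map refinement with an ATMED fuzzy filter,
# following one common formulation of the asymmetric triangular
# membership function with median center (an assumption, not the
# paper's verified implementation).
import numpy as np
from scipy.ndimage import generic_filter

def atmed(window: np.ndarray) -> float:
    """Fuzzy-weighted mean of a window; membership peaks at the median
    and falls off linearly (asymmetrically) toward the min and max."""
    lo, med, hi = window.min(), np.median(window), window.max()
    w = np.ones_like(window)
    left, right = window < med, window > med
    if med > lo:
        w[left] = 1.0 - (med - window[left]) / (med - lo)
    if hi > med:
        w[right] = 1.0 - (window[right] - med) / (hi - med)
    return float(np.sum(w * window) / np.sum(w))

def fuse(img_a, img_b, focus_map, size=5):
    """Refine a CNN focus map near boundaries, then select pixels.
    `focus_map` holds per-pixel focus scores in [0, 1] for img_a."""
    refined = generic_filter(focus_map.astype(np.float64), atmed, size=size)
    mask = refined >= 0.5  # hypothetical threshold: img_a judged in focus
    return np.where(mask, img_a, img_b)
```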


2018 ◽  
Vol 30 (9) ◽  
pp. 1637
Author(s):  
Zhong Xiang ◽  
Jianfeng Zhang ◽  
Miao Qian ◽  
Zhenyu Wu ◽  
Xudong Hu
