Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

Diagnostics ◽  
2019 ◽  
Vol 9 (2) ◽  
pp. 38 ◽  
Author(s):  
Incheol Kim ◽  
Sivaramakrishnan Rajaraman ◽  
Sameer Antani

Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer an improved explanation of convolutional neural network (CNN)-based DL model predictions. We demonstrate CRM effectiveness in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on a linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced from the last convolution layer that lead to correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better at detecting and localizing the discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanations through their different sizes, shapes, and locations for our multi-modality CNN model, which achieved over 98% performance on a dataset constructed from publicly available images.
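The incremental-MSE idea behind CRM can be sketched as follows: with a hypothetical global-average-pooling classification head (`weights` is an assumed dense-layer matrix and `target` a one-hot label, both placeholders), the relevance of each spatial element is the growth in output-layer MSE when that element is zeroed out. This is a minimal illustration of the concept, not the authors' implementation.

```python
import numpy as np

def crm_map(feature_maps, weights, target):
    # feature_maps: (C, H, W) from the last conv layer; weights: (K, C) of a
    # hypothetical dense output layer; target: one-hot vector of the true class
    C, H, W = feature_maps.shape
    gap = feature_maps.mean(axis=(1, 2))           # global average pooling
    base = weights @ gap                           # output-layer activations
    base_mse = np.mean((base - target) ** 2)
    crm = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ablated = feature_maps.copy()
            ablated[:, y, x] = 0.0                 # remove one spatial element
            out = weights @ ablated.mean(axis=(1, 2))
            # incremental MSE: how much the output-layer error w.r.t. the
            # target grows (or shrinks) when this element is removed
            crm[y, x] = np.mean((out - target) ** 2) - base_mse
    return crm
```

A large positive score marks a location whose removal hurts the correct prediction, i.e., a discriminative region; negative scores mark locations that work against it.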

2021 ◽  
Author(s):  
Lakpa Dorje Tamang

In this paper, we propose a symmetric series convolutional neural network (SS-CNN), which is a novel deep convolutional neural network (DCNN)-based super-resolution (SR) technique for ultrasound medical imaging. The proposed model comprises two parts: a feature extraction network (FEN) and an up-sampling layer. In the FEN, the low-resolution (LR) counterpart of the ultrasound image passes through a symmetric series of two different DCNNs. The low-level feature maps obtained from the subsequent layers of both DCNNs are concatenated in a feed-forward manner, aiding in robust feature extraction to ensure high reconstruction quality. Subsequently, the final concatenated features serve as an input map to the latter 2D convolutional layers, where the textural information of the input image is connected via skip connections. The second part of the proposed model is a sub-pixel convolutional (SPC) layer, which up-samples the output of the FEN by multiplying it with a multi-dimensional kernel followed by a periodic shuffling operation to reconstruct a high-quality SR ultrasound image. We validate the performance of the SS-CNN with publicly available ultrasound image datasets. Experimental results show that the proposed model achieves superior reconstruction performance for ultrasound images over conventional methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), while providing competitive SR reconstruction time.
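The periodic shuffling step of a sub-pixel convolutional layer can be illustrated in isolation. This sketch assumes the channel ordering used by standard pixel-shuffle implementations, which may differ from the paper's exact layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    # Periodic shuffling: rearrange (C*r*r, H, W) into (C, H*r, W*r), so each
    # group of r*r channels fills an r-by-r block of the upscaled image.
    Cr2, H, W = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)       # reorder to (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)
```

Because the up-sampling happens only at this final step, all convolutions in the FEN run at the cheaper low-resolution grid, which is the usual motivation for sub-pixel SR layers.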


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Lili Wang ◽  
Xiao Liu ◽  
Deyun Chen ◽  
Hailu Yang ◽  
Chengdong Wang

To address the problems of missing edges and prominent artifacts in Electrical Capacitance Tomography (ECT) reconstruction algorithms, an image reconstruction method based on a multiscale dual-channel convolutional neural network is proposed. First, the image reconstructed by the Landweber algorithm is input into the convolutional neural network, and four scales are selected for feature extraction. Feature unions are used across the scales to fuse the information of the output layer with the feature maps. To improve imaging accuracy, two frequency channels are designed for the input image. The middle layer of the network consists of two fully convolutional structures. Convolutional layers and jump connections are designed separately for different channels, which greatly improves the network’s ability to extract feature information and reduces the number of feature maps required for each layer. The network is shallow, which speeds up training, prevents the network from falling into local optima, and ensures the effective transmission of image details. Simulation experiments are carried out for four typical dual-media distributions. The edges of the reconstructed images are smoother and the image error is smaller. The method effectively resolves the lack of edges in the reconstructed image and reduces image edge artifacts in the ECT system.
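The Landweber algorithm that produces the network's input image is a simple gradient-descent-style iteration; this sketch uses assumed names for the sensitivity matrix `S` and the normalized capacitance vector `c`:

```python
import numpy as np

def landweber(S, c, alpha=0.1, iters=50):
    # Landweber iterative ECT reconstruction:
    #   g_{k+1} = g_k + alpha * S^T (c - S g_k)
    # S: sensitivity matrix (measurements x pixels), c: capacitance vector
    g = np.zeros(S.shape[1])
    for _ in range(iters):
        g = g + alpha * S.T @ (c - S @ g)
    return np.clip(g, 0.0, 1.0)          # keep normalized permittivity in [0, 1]
```

The blurred, artifact-prone output of this iteration is exactly what the multiscale dual-channel network is then trained to refine.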


2021 ◽  
Vol 5 (2) ◽  
pp. 78-89
Author(s):  
Khai Dinh Lai ◽  
Thuy Thanh Nguyen ◽  
Thai Hoang Le

The development of computer-aided diagnosis (CAD) systems for automatic lung nodule detection in thoracic computed tomography (CT) scans has been an active area of research in recent years. The Lung Nodule Analysis 2016 challenge (LUNA16) encourages researchers to propose a variety of successful nodule detection algorithms based on two key stages: (1) candidate detection and (2) false-positive reduction. In this paper, a new convolutional neural network (CNN) architecture is proposed to efficiently solve the second stage of LUNA16. Specifically, we find that typical CNN models pay little attention to the characteristics of the input data. To address this constraint, we apply an attention mechanism: we propose a technique that attaches a Squeeze-and-Excitation block (SE-Block) after each convolution layer of the CNN to emphasize important feature maps related to the characteristics of the input image, forming an Attention sub-Convnet. The new CNN architecture is constructed by connecting the Attention sub-Convnets. In addition, we analyze the choice of triplet loss versus softmax loss to boost the performance of the proposed CNN. Based on this analysis, we select softmax loss during the CNN training phase and triplet loss for the testing phase. Our CNN is used to minimize the number of redundant candidates in order to improve the efficiency of false-positive reduction on the LUNA database. The results obtained in comparison with previous models indicate the feasibility of the proposed model.
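An SE-Block in its standard form (squeeze by global average pooling, excitation through a two-layer bottleneck, then channel-wise rescaling) can be sketched as below; the weight matrices `w1` and `w2` are hypothetical placeholders for learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    # x: (C, H, W) feature maps; w1: (C//r, C) reduction; w2: (C, C//r) expansion
    s = x.mean(axis=(1, 2))                  # squeeze: global average pooling
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0))  # excitation: FC-ReLU-FC-sigmoid
    return x * e[:, None, None]              # recalibrate each channel
```

Attaching such a block after every convolution layer lets the network learn per-channel importance weights, which is the "attention" behavior the Attention sub-Convnet relies on.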


2020 ◽  
Vol 65 (6) ◽  
pp. 759-773
Author(s):  
Segu Praveena ◽  
Sohan Pal Singh

Abstract Leukaemia detection and diagnosis in advance is a trending topic in medical applications for reducing the death toll of patients with acute lymphoblastic leukaemia (ALL). For the detection of ALL, it is essential to analyse the white blood cells (WBCs), for which blood smear images are employed. This paper proposes a new technique for the segmentation and classification of acute lymphoblastic leukaemia. The proposed method of automatic leukaemia detection is based on a Deep Convolutional Neural Network (Deep CNN) that is trained using an optimization algorithm named the Grey wolf-based Jaya Optimization Algorithm (GreyJOA), which is developed from the Grey Wolf Optimizer (GWO) and the Jaya Optimization Algorithm (JOA) and improves global convergence. Initially, the input image is pre-processed, and segmentation is performed using the Sparse Fuzzy C-Means (Sparse FCM) clustering algorithm. Then, features such as Local Directional Patterns (LDP) and colour histogram-based features are extracted from the segments of the pre-processed input image. Finally, the extracted features are applied to the Deep CNN for classification. The experimental evaluation of the method using images from the ALL IDB2 database reveals that the proposed method achieved a maximal accuracy, sensitivity, and specificity of 0.9350, 0.9528, and 0.9389, respectively.
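The Jaya component of the hybrid optimizer moves each candidate solution toward the best and away from the worst member of the population. The sketch below shows only the standard Jaya update; the GWO coupling that forms GreyJOA is not reproduced here:

```python
import numpy as np

def jaya_step(pop, fitness, rng):
    # One iteration of the standard Jaya update (minimization):
    #   x' = x + r1 * (best - |x|) - r2 * (worst - |x|)
    # pop: (n_solutions, dim); fitness: (n_solutions,); rng: np.random.Generator
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    return pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
```

In the paper's hybrid, this update is combined with Grey Wolf Optimizer dynamics to tune the Deep CNN weights with better global convergence.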


2021 ◽  
Vol 11 (3) ◽  
pp. 352
Author(s):  
Isselmou Abd El Kader ◽  
Guizhi Xu ◽  
Zhang Shuai ◽  
Sani Saminu ◽  
Imran Javaid ◽  
...  

The classification of brain tumors is a difficult task in the field of medical image analysis. Improving algorithms and machine learning technology helps radiologists diagnose tumors without surgical intervention. In recent years, deep learning techniques have made excellent progress in the field of medical image processing and analysis. However, there are many difficulties in classifying brain tumors using magnetic resonance imaging: first, the complexity of brain structure and the intertwining of tissues within it; and second, the difficulty of classifying brain tumors due to the high-density nature of the brain. We propose a differential deep convolutional neural network model (differential deep-CNN) to classify different types of brain tumors, including abnormal and normal magnetic resonance (MR) images. Using differential operators in the differential deep-CNN architecture, we derive additional differential feature maps from the original CNN feature maps. The derivation process leads to an improvement in the performance of the proposed approach, as reflected in the evaluation parameters used. The advantage of the differential deep-CNN model is its analysis of the pixel directional patterns of images using contrast calculations and its high ability to classify a large database of images with high accuracy and without technical problems. Therefore, the proposed approach gives an excellent overall performance. To train and test the performance of this model, we used a dataset consisting of 25,000 brain magnetic resonance imaging (MRI) images, which includes abnormal and normal images. The experimental results showed that the proposed model achieved an accuracy of 99.25%. This study demonstrates that the proposed differential deep-CNN model can be used to facilitate the automatic classification of brain tumors.
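The idea of deriving differential feature maps from existing CNN feature maps can be illustrated with simple first-order directional difference operators; the paper's exact differential operators are not specified in the abstract, so this is only a hedged sketch of the general principle:

```python
import numpy as np

def directional_diff_maps(fmap):
    # Derive two extra feature maps from one (H, W) CNN feature map using
    # first-order directional differences, capturing local contrast patterns.
    dx = np.zeros_like(fmap)
    dy = np.zeros_like(fmap)
    dx[:, 1:] = fmap[:, 1:] - fmap[:, :-1]   # horizontal contrast
    dy[1:, :] = fmap[1:, :] - fmap[:-1, :]   # vertical contrast
    return dx, dy
```

Stacking such derived maps alongside the originals gives later layers explicit access to directional contrast information at no extra learned-parameter cost.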


2021 ◽  
Vol 7 (2) ◽  
pp. 37
Author(s):  
Isah Charles Saidu ◽  
Lehel Csató

We present a sample-efficient image segmentation method using active learning, which we call Active Bayesian UNet, or AB-UNet. This is a convolutional neural network using batch normalization and max-pool dropout. The Bayesian setup is achieved by exploiting the probabilistic extension of the dropout mechanism, making it possible to use the uncertainty inherently present in the system. We set up our experiments on various medical image datasets and highlight that, with a smaller annotation effort, our AB-UNet leads to stable training and better generalization. In addition, we can efficiently choose samples from an unlabelled dataset.
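One common way to exploit dropout-based uncertainty for active learning is to score unlabelled samples by the predictive entropy of several stochastic (dropout-on) forward passes and label the most uncertain ones; this is a generic sketch, not necessarily AB-UNet's exact acquisition function:

```python
import numpy as np

def predictive_entropy(probs):
    # probs: (T, K) softmax outputs from T dropout-on forward passes for one
    # sample; higher entropy of the mean prediction -> more uncertain.
    mean = probs.mean(axis=0)
    return float(-(mean * np.log(mean + 1e-12)).sum())

def select_batch(all_probs, n):
    # all_probs: (N, T, K); return indices of the n most uncertain samples.
    scores = np.array([predictive_entropy(p) for p in all_probs])
    return np.argsort(scores)[::-1][:n]
```

Retraining on the selected samples at each round is what yields the smaller annotation effort reported for AB-UNet.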


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Bambang Tutuko ◽  
Siti Nurmaini ◽  
Alexander Edo Tondas ◽  
Muhammad Naufal Rachmatullah ◽  
Annisa Darmawahyuni ◽  
...  

Abstract Background The generalization capacity of deep learning (DL) approaches for atrial fibrillation (AF) detection remains lacking. As can be seen from previous research, DL models have been formed using only a single sampling frequency from a specific device. Besides, each electrocardiogram (ECG) acquisition dataset produces a different length and sampling frequency to ensure sufficient precision of the R–R intervals used to determine heart rate variability (HRV). An accurate HRV is the gold standard for predicting the AF condition; therefore, a current challenge is to determine whether a DL approach can be used to analyze raw ECG data across a broad range of devices. This paper demonstrates powerful results for an end-to-end implementation of AF detection based on a convolutional neural network (AFibNet). The method uses a single learning system without considering the variety of signal lengths and sampling frequencies. For implementation, the AFibNet is processed with a computational cloud-based DL approach. This study utilized a one-dimensional convolutional neural network (1D-CNN) model for 11,842 subjects. It was trained and validated with 8232 records based on three datasets and tested with 3610 records based on eight datasets. The predicted results, when compared with diagnoses by human practitioners, showed 99.80% accuracy, sensitivity, and specificity. Results Meanwhile, when tested using unseen data, AF detection reaches 98.94% accuracy, 98.97% sensitivity, and 98.97% specificity at a sample period of 0.02 seconds using the DL Cloud System. To improve confidence in the AFibNet model, it was also validated with 18 arrhythmia conditions defined as a Non-AF class. Thus, the data were increased from 11,842 to 26,349 instances for three classes, i.e., Normal sinus (N), AF, and Non-AF. The results were 96.36% accuracy, 93.65% sensitivity, and 96.92% specificity.
Conclusion These findings demonstrate that the proposed approach can use unknown data to derive feature maps and reliably detect AF periods. We have found that our cloud-DL system is suitable for practical deployment.
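As background to the R–R interval analysis mentioned above, a standard HRV statistic such as RMSSD can be computed directly from R-peak times; this generic sketch is illustrative context, not part of the AFibNet pipeline:

```python
import numpy as np

def rmssd(r_peaks_sec):
    # RMSSD: root mean square of successive differences between R-R intervals,
    # a standard HRV statistic; r_peaks_sec holds R-peak times in seconds.
    rr = np.diff(r_peaks_sec) * 1000.0        # R-R intervals in milliseconds
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))
```

The irregular R–R intervals characteristic of AF produce sharply elevated RMSSD values, which is why precise R-peak timing across sampling frequencies matters for detection.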

