Grad Centroid Activation Mapping for Convolutional Neural Networks

Author(s):  
Baptiste Lafabregue ◽  
Jonathan Weber ◽  
Pierre Gancarski ◽  
Germain Forestier


2020 ◽  
Vol 10 (6) ◽  
pp. 2124 ◽  
Author(s):  
Ki-Sun Lee ◽  
Jae-Jun Ryu ◽  
Hyon Seok Jang ◽  
Dong-Yul Lee ◽  
Seok-Ki Jung

The aim of this study was to evaluate deep convolutional neural networks (DCNNs) for the analysis of cephalometric radiographs in the differential diagnosis of indications for orthognathic surgery. Among the DCNNs, Modified-Alexnet, MobileNet, and Resnet50 were used, and the accuracy of the models was evaluated by 4-fold cross validation. Additionally, gradient-weighted class activation mapping (Grad-CAM) was used to produce visual interpretations showing which regions influenced the DCNNs' class decisions. The prediction accuracy of the models was 96.4% for Modified-Alexnet, 95.4% for MobileNet, and 95.6% for Resnet50. According to the Grad-CAM analysis, the most influential regions for classification were the maxillary and mandibular teeth, the mandible, and the mandibular symphysis. This study suggests that DCNN-based analysis of cephalometric radiographs can be successfully applied to the differential diagnosis of indications for orthognathic surgery.
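For readers who want to reproduce this kind of visualization, a minimal Grad-CAM sketch in PyTorch is given below. This is not the authors' code: the torchvision ResNet-50 backbone, the choice of `layer4` as the target layer, and the dummy 224x224 input are illustrative assumptions.

```python
# Minimal Grad-CAM sketch (PyTorch); backbone, target layer and input size are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
store = {}

# Capture the feature maps of the last conv stage and the gradients w.r.t. them.
model.layer4.register_forward_hook(lambda m, i, o: store.update(acts=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grads=go[0]))

def grad_cam(image, class_idx=None):
    """Return a heatmap (H, W) in [0, 1] for one image tensor of shape (1, 3, H, W)."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()                            # backprop the class score
    weights = store["grads"].mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
    cam = F.relu((weights * store["acts"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

heatmap = grad_cam(torch.randn(1, 3, 224, 224))                # dummy radiograph-sized input
```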


2020 ◽  
Vol 34 (10) ◽  
pp. 13943-13944
Author(s):  
Kira Vinogradova ◽  
Alexandr Dibrov ◽  
Gene Myers

Convolutional neural networks have become state-of-the-art in a wide range of image recognition tasks. The interpretation of their predictions, however, is an active area of research. Whereas various interpretation methods have been suggested for image classification, the interpretation of image segmentation still remains largely unexplored. To address this, we propose Seg-Grad-CAM, a gradient-based method for interpreting semantic segmentation. Our method is an extension of the widely-used Grad-CAM method, applied locally to produce heatmaps showing the relevance of individual pixels for semantic segmentation.
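A rough sketch of the core idea, as the abstract describes it, is to backpropagate a class score summed over a chosen set of output pixels instead of a single classification logit. The snippet below is not the authors' implementation; the torchvision FCN-ResNet50 model, the backbone's last stage as target layer, and the hand-picked pixel region are placeholders.

```python
# Sketch of the Seg-Grad-CAM idea; model, target layer and pixel region are placeholders.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=21).eval()
store = {}
model.backbone.layer4.register_forward_hook(lambda m, i, o: store.update(acts=o))
model.backbone.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grads=go[0]))

def seg_grad_cam(image, class_idx, pixel_mask):
    """Heatmap explaining the prediction of `class_idx` inside the region `pixel_mask`."""
    logits = model(image)["out"]                           # (1, C, H, W) per-pixel class scores
    score = (logits[0, class_idx] * pixel_mask).sum()      # aggregate the class score over the region
    model.zero_grad()
    score.backward()
    weights = store["grads"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * store["acts"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

image = torch.randn(1, 3, 256, 256)
mask = torch.zeros(256, 256)
mask[100:150, 100:150] = 1.0                               # region of interest in the output
heatmap = seg_grad_cam(image, class_idx=15, pixel_mask=mask)
```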


Author(s):  
T. Oki ◽  
S. Kizawa

Abstract. This paper examines the possibility of impression evaluation based on gaze analysis of subjects and deep learning, using the example of evaluating street attractiveness in densely built-up wooden residential areas. Firstly, the relationship between the subjects' gazing tendency and their evaluation of street image attractiveness is analysed by measuring the subjects' gaze with an eye tracker. Next, we construct a model that can estimate an attractiveness evaluation result using convolutional neural networks (CNNs), combined with gradient-weighted class activation mapping (Grad-CAM), which is used to visualize which street components contribute to the attractiveness evaluation. Finally, we discuss the similarity between the subjects' gaze tendencies and the activation heatmaps created by Grad-CAM.
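The abstract does not specify how the similarity between gaze tendencies and Grad-CAM heatmaps is measured; one simple, illustrative option (an assumption, not necessarily the paper's metric) is the pixel-wise Pearson correlation of the two normalised maps:

```python
# Illustrative similarity measure between a gaze heatmap and a Grad-CAM heatmap (assumption).
import numpy as np

def normalise(h):
    h = h.astype(np.float64)
    return (h - h.min()) / (h.max() - h.min() + 1e-8)

def heatmap_similarity(gaze_map, cam_map):
    """Pearson correlation between two heatmaps of equal shape, each normalised to [0, 1]."""
    a = normalise(gaze_map).ravel()
    b = normalise(cam_map).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Random 64x64 maps stand in for real fixation and Grad-CAM data.
print(heatmap_similarity(np.random.rand(64, 64), np.random.rand(64, 64)))
```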


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. A degradation-parameter estimation network is also incorporated for cases where the degradation parameters of the degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
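A minimal sketch of the two-input idea described above: the degraded image passes through a CNN backbone, the degradation parameter through a small MLP, and the two feature vectors are concatenated before the classifier head. The ResNet-18 backbone, embedding sizes, and concatenation-based fusion are assumptions for illustration, not the authors' architecture.

```python
# Sketch of a classifier taking a degraded image plus a degradation parameter (not the paper's architecture).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DegradationAwareClassifier(nn.Module):
    def __init__(self, num_classes=10, param_dim=1):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features            # 512 for ResNet-18
        backbone.fc = nn.Identity()                   # keep the pooled image feature
        self.backbone = backbone
        self.param_mlp = nn.Sequential(               # embed the degradation parameter
            nn.Linear(param_dim, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Linear(feat_dim + 64, num_classes)

    def forward(self, image, degradation_param):
        img_feat = self.backbone(image)               # (B, 512)
        prm_feat = self.param_mlp(degradation_param)  # (B, 64)
        return self.head(torch.cat([img_feat, prm_feat], dim=1))

model = DegradationAwareClassifier()
logits = model(torch.randn(2, 3, 224, 224),           # degraded images
               torch.tensor([[0.3], [0.7]]))          # e.g. noise level or blur strength
```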

