Convolutional Neural Networks for the Evaluation of Cancer in Barrett's Esophagus: Explainable AI to Lighten Up the Black-Box

Author(s):  
Luis A. de Souza ◽  
Robert Mendel ◽  
Sophia Strasser ◽  
Alanna Ebigbo ◽  
Andreas Probst ◽  
...

Author(s):
Robert Mendel ◽  
Alanna Ebigbo ◽  
Andreas Probst ◽  
Helmut Messmann ◽  
Christoph Palm

2021 ◽  
pp. 1-11
Author(s):  
Tianshi Mu ◽  
Kequan Lin ◽  
Huabing Zhang ◽  
Jian Wang

Deep learning is gaining significant traction in a wide range of areas. However, recent studies have demonstrated that deep learning exhibits a fatal weakness to adversarial examples. Due to the black-box, opaque nature of deep learning, it is difficult to explain why adversarial examples exist and hard to defend against them. This study focuses on improving the adversarial robustness of convolutional neural networks. We first explore how adversarial examples behave inside the network through visualization. We find that adversarial examples produce perturbations in hidden activations, which form an amplification effect that fools the network. Motivated by this observation, we propose an approach, termed sanitizing hidden activations, that helps the network correctly recognize adversarial examples by eliminating or reducing the perturbations in hidden activations. To demonstrate the effectiveness of our approach, we conduct experiments on three widely used datasets: MNIST, CIFAR-10, and ImageNet, and compare against state-of-the-art defense techniques. The experimental results show that our sanitizing approach generalizes better across different kinds of attacks and can effectively improve the adversarial robustness of convolutional neural networks.
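
To make the idea concrete, here is a minimal, hypothetical sketch of sanitizing hidden activations. The abstract does not specify the sanitizer, so this illustration simply clamps each hidden feature map to per-channel ranges calibrated on clean inputs, suppressing the amplified perturbations that adversarial examples induce; the `ActivationSanitizer` class and the toy model are assumptions for illustration only, not the paper's method.

```python
# Hypothetical sketch: clamp hidden activations to ranges observed on clean
# data, so adversarially amplified activations are squashed at inference.
import torch
import torch.nn as nn

class ActivationSanitizer:
    """Forward hook that clamps a layer's activations to clean-data ranges."""
    def __init__(self, module):
        self.lo = None          # per-channel minima seen on clean data
        self.hi = None          # per-channel maxima seen on clean data
        self.calibrating = True
        module.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        # Reduce over batch and spatial dims, keep the channel dim (dim 1).
        dims = [d for d in range(output.dim()) if d != 1]
        if self.calibrating:
            lo = output.amin(dim=dims, keepdim=True)
            hi = output.amax(dim=dims, keepdim=True)
            self.lo = lo if self.lo is None else torch.minimum(self.lo, lo)
            self.hi = hi if self.hi is None else torch.maximum(self.hi, hi)
            return output
        # At inference, out-of-range (possibly adversarial) values are clamped.
        return output.clamp(min=self.lo, max=self.hi)

# Usage: calibrate on clean batches, then switch to sanitizing mode.
model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 26 * 26, 10))
sanitizer = ActivationSanitizer(model[1])      # hook the ReLU activations
with torch.no_grad():
    model(torch.randn(32, 1, 28, 28))          # clean calibration pass
sanitizer.calibrating = False
logits = model(torch.randn(4, 1, 28, 28))      # sanitized inference
```

In practice the calibration statistics would be estimated over the full clean training set, and a sanitizer could be attached to several layers at once.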


2019 ◽  
Vol 33 (3) ◽  
pp. 04019017 ◽  
Author(s):  
Somin Park ◽  
Seongdeok Bang ◽  
Hongjo Kim ◽  
Hyoungkwan Kim

2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Yosuke Toda ◽  
Fumio Okura

Deep learning with convolutional neural networks (CNNs) has achieved great success in the classification of various plant diseases. However, only a limited number of studies have elucidated the process of inference, leaving it as an untouchable black box. Opening up the CNN to extract the learned features in an interpretable form not only ensures its reliability but also enables humans to validate the authenticity of the model and of the training dataset. In this study, a variety of neuron-wise and layer-wise visualization methods were applied to a CNN trained with a publicly available plant disease image dataset. We showed that neural networks can capture the colors and textures of lesions specific to the respective diseases upon diagnosis, which resembles human decision-making. While several visualization methods were used as is, others had to be optimized to target a specific layer that fully captures the features and to generate consequential outputs. Moreover, by interpreting the generated attention maps, we identified several layers that did not contribute to inference and removed them from the network, decreasing the number of parameters by 75% without affecting the classification accuracy. The results provide an impetus for CNN users in the field of plant science to better understand the diagnosis process and to use deep learning for plant disease diagnosis more efficiently.
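
As a rough illustration of layer-wise visualization, the sketch below builds a crude attention map by averaging the absolute feature maps of an intermediate layer and upsampling them to image size. The paper applies several neuron-wise and layer-wise methods; this is only one simple, assumed proxy, and the toy model, layer choice, and `capture` helper are hypothetical.

```python
# Hypothetical layer-wise visualization: average an intermediate layer's
# feature maps into a single heatmap and upsample it to the input size.
import torch
import torch.nn as nn
import torch.nn.functional as F

features = {}

def capture(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 5),
)
model[4].register_forward_hook(capture("block2"))  # hook the second ReLU

image = torch.randn(1, 3, 224, 224)                # stand-in leaf image
model(image)

fmap = features["block2"]                          # (1, 32, 112, 112)
attention = fmap.abs().mean(dim=1, keepdim=True)   # channel-wise average
attention = F.interpolate(attention, size=image.shape[-2:],
                          mode="bilinear", align_corners=False)
attention = (attention - attention.min()) / (attention.max() - attention.min())
# `attention` can now be overlaid on the input image to inspect which
# lesion regions the layer responds to.
```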


2021 ◽  
Vol 21 (7) ◽  
pp. 9
Author(s):  
Zachary J. Cole ◽  
Karl M. Kuntzelman ◽  
Michael D. Dodd ◽  
Matthew R. Johnson

2020 ◽  
Vol 34 (05) ◽  
pp. 9394-9401
Author(s):  
Kai-Chou Yang ◽  
Hung-Yu Kao

In this paper, we propose the Self Inference Neural Network (SINN), a simple yet efficient sentence encoder that leverages knowledge from recurrent and convolutional neural networks. SINN gathers semantic evidence in an interaction space, which is subsequently fused by a shared vector gate to determine the most relevant mixture of contextual information. We evaluate the proposed method on four benchmarks across three NLP tasks. Experimental results demonstrate that our model sets a new state of the art among sentence-encoding methods on MultiNLI and SciTail and is competitive on the remaining two datasets. The encoding and inference process in our model is highly interpretable. Through visualizations of the fusion component, we open the black box of our network and explore the applicability of the base encoding methods case by case.
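
A hedged sketch of the gated-fusion idea follows: a sentence is encoded by a recurrent branch and a convolutional branch, and a shared sigmoid vector gate mixes the two contexts token by token. Layer sizes, the max-pooling step, and the `GatedSentenceEncoder` name are illustrative assumptions, not the architecture specified in the paper.

```python
# Hypothetical gated fusion of recurrent and convolutional sentence contexts.
import torch
import torch.nn as nn

class GatedSentenceEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True,
                           bidirectional=True)
        self.cnn = nn.Conv1d(emb_dim, 2 * hidden, kernel_size=3, padding=1)
        self.gate = nn.Linear(4 * hidden, 2 * hidden)

    def forward(self, token_ids):
        x = self.embed(token_ids)                        # (B, T, E)
        r, _ = self.rnn(x)                               # (B, T, 2H)
        c = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # (B, T, 2H)
        # Shared sigmoid vector gate picks the mixture of the two contexts.
        g = torch.sigmoid(self.gate(torch.cat([r, c], dim=-1)))
        fused = g * r + (1 - g) * c                      # gated mix per token
        return fused.max(dim=1).values                   # pool to a sentence vector

encoder = GatedSentenceEncoder(vocab_size=10000)
sentence = torch.randint(0, 10000, (2, 12))              # two toy sentences
print(encoder(sentence).shape)                           # torch.Size([2, 256])
```

Inspecting the gate values `g` per token is one way such a fusion component can be visualized to see which branch dominates where.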

