I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI

Author(s):  
Michael Chromik ◽  
Malin Eiband ◽  
Felicitas Buchner ◽  
Adrian Krüger ◽  
Andreas Butz
2020 ◽  

Author(s):  
Markus Jaeger ◽  
Stephan Krügel ◽  
Dimitri Marinelli ◽  
Jochen Papenbrock ◽  
Peter Schwendner

2020 ◽  
Vol 34 (03) ◽  
pp. 2594-2601
Author(s):  
Arjun Akula ◽  
Shuai Wang ◽  
Song-Chun Zhu

We present CoCoX (short for Conceptual and Counterfactual Explanations), a model for explaining decisions made by a deep convolutional neural network (CNN). In cognitive psychology, the factors (or semantic-level features) that humans zoom in on when they imagine an alternative to a model prediction are often referred to as fault-lines. Motivated by this, our CoCoX model explains decisions made by a CNN using fault-lines. Specifically, given an input image I for which a CNN classification model M predicts class c_pred, our fault-line based explanation identifies the minimal semantic-level features (e.g., stripes on a zebra, pointed ears of a dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class c_alt. We argue that, due to the conceptual and counterfactual nature of fault-lines, our CoCoX explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, showing that CoCoX significantly outperforms state-of-the-art explainable AI models. Our implementation is available at https://github.com/arjunakula/CoCoX
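The counterfactual idea in the abstract can be made concrete with a toy sketch: greedily toggle the presence of semantic concepts until the classifier's score for the target class c_alt crosses a decision threshold. The concept names, linear scoring function, and greedy strategy below are illustrative stand-ins, not the CoCoX algorithm itself.

```python
# Hypothetical sketch of a fault-line-style counterfactual search over
# explainable concepts; not the authors' implementation.

def fault_line_edits(x, concepts, score_alt, threshold=0.5):
    """Return a list of ("add"/"delete", concept) edits that push the
    target-class score of input x above threshold, or None if stuck.

    x         : dict mapping concept name -> 0/1 presence in the image
    concepts  : concepts the search may add to or delete from the image
    score_alt : maps a concept-presence dict to the target-class score
    """
    current = dict(x)
    edits = []
    while score_alt(current) < threshold and len(edits) < len(concepts):
        best, best_gain = None, 0.0
        for c in concepts:
            trial = dict(current)
            trial[c] = 1 - trial[c]          # toggle concept presence
            gain = score_alt(trial) - score_alt(current)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            return None                      # no single toggle raises the score
        edits.append(("add" if current[best] == 0 else "delete", best))
        current[best] = 1 - current[best]
    return edits

# Toy example: a linear "zebra score" over three invented concepts.
concepts = ["stripes", "pointed_ears", "mane"]
zebra_score = lambda d: (0.6 * d["stripes"] + 0.3 * d["mane"]
                         - 0.2 * d["pointed_ears"] + 0.1)
image = {"stripes": 0, "pointed_ears": 1, "mane": 0}
print(fault_line_edits(image, concepts, zebra_score))
```

The greedy loop mirrors the "minimal set of concepts to add or delete" framing: each iteration commits the single toggle with the largest score gain, so the edit list stays short.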


2021 ◽  
Vol 52 ◽  
pp. 171-178
Author(s):  
David Rößler ◽  
Julian Reisch ◽  
Florian Hauck ◽  
Natalia Kliewer

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Monika S. Mellem ◽  
Matt Kollada ◽  
Jane Tiller ◽  
Thomas Lauritzen

Abstract

Background: Heterogeneity among patients’ responses to treatment is prevalent in psychiatric disorders. Personalized medicine approaches, which involve parsing patients into subgroups better indicated for a particular treatment, could therefore improve patient outcomes and serve as a powerful tool in patient selection within clinical trials. Machine learning approaches can identify patient subgroups but are often not “explainable” due to the use of complex algorithms that do not mirror clinicians’ natural decision-making processes.

Methods: Here we combine two analytical approaches, the Personalized Advantage Index and Bayesian Rule Lists, to identify paliperidone-indicated schizophrenia patients in a way that emphasizes model explainability. We apply these approaches retrospectively to randomized, placebo-controlled clinical trial data to identify a paliperidone-indicated subgroup of schizophrenia patients who demonstrate a larger treatment effect (outcome on treatment superior to that on placebo) than that of the full randomized sample, as assessed with Cohen’s d. For this study, the outcome corresponded to a reduction in the Positive and Negative Syndrome Scale (PANSS) total score, which measures positive (e.g., hallucinations, delusions), negative (e.g., blunted affect, emotional withdrawal), and general psychopathological (e.g., disturbance of volition, uncooperativeness) symptoms in schizophrenia.

Results: Using our combined explainable AI approach to identify a subgroup more responsive to paliperidone than placebo, the treatment effect increased significantly over that of the full sample (p < 0.0001 for a one-sample t-test comparing the full-sample Cohen’s d = 0.82 and a generated distribution of subgroup Cohen’s d’s with mean d = 1.22, std d = 0.09). In addition, our modeling approach produces simple logical statements (if-then-else), termed a “rule list”, to ease interpretability for clinicians. A majority of the rule lists generated from cross-validation found two general psychopathology symptoms, disturbance of volition and uncooperativeness, to predict membership in the paliperidone-indicated subgroup.

Conclusions: These results help to technically validate our explainable AI approach to patient selection for a clinical trial by identifying a subgroup with an improved treatment effect. With these data, the explainable rule lists also suggest that paliperidone may provide an improved therapeutic benefit for the treatment of schizophrenia patients with either high disturbance of volition or high uncooperativeness.

Trial registration: clinicaltrials.gov identifier NCT00083668; prospectively registered May 28, 2004.
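A "rule list" of the kind the abstract describes is an ordered sequence of if-then-else statements, where the first matching rule decides. A minimal sketch: the two symptoms come from the paper's results, but the PANSS-item thresholds and rule ordering below are invented for illustration, not taken from the trial data.

```python
# Hypothetical rule list for subgroup membership; rules fire in order,
# mirroring a clinician's sequential decision process.

def in_indicated_subgroup(patient):
    """Return True if the patient matches the (hypothetical) rule list
    for the paliperidone-indicated subgroup."""
    if patient["disturbance_of_volition"] >= 4:   # high volition disturbance
        return True
    elif patient["uncooperativeness"] >= 4:       # high uncooperativeness
        return True
    else:
        return False

print(in_indicated_subgroup({"disturbance_of_volition": 5,
                             "uncooperativeness": 2}))
```

Because the whole model is a short ordered list of human-readable conditions, a clinician can audit every decision path directly, which is the explainability property the authors emphasize.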


2021 ◽  
Vol 2021 (5) ◽  
Author(s):  
Garvita Agarwal ◽  
Lauren Hay ◽  
Ia Iashvili ◽  
Benjamin Mannix ◽  
Christine McLean ◽  
...  

Abstract

A framework is presented to extract and understand decision-making information from a deep neural network (DNN) classifier of jet substructure tagging techniques. The general method studied is to provide expert variables that augment inputs (“eXpert AUGmented” variables, or XAUG variables), then apply layerwise relevance propagation (LRP) to networks both with and without XAUG variables. The XAUG variables are concatenated with the intermediate layers after network-specific operations (such as convolution or recurrence), and used in the final layers of the network. The results of comparing networks with and without the addition of XAUG variables show that XAUG variables can be used to interpret classifier behavior, increase discrimination ability when combined with low-level features, and in some cases capture the behavior of the classifier completely. The LRP technique can be used to find relevant information the network is using, and when combined with the XAUG variables, can be used to rank features, allowing one to find a reduced set of features that capture part of the network performance. In the studies presented, adding XAUG variables to low-level DNNs increased the efficiency of classifiers by as much as 30-40%. In addition to performance improvements, an approach to quantify numerical uncertainties in the training of these DNNs is presented.
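The LRP step the abstract relies on can be illustrated on a single dense layer. Below is a minimal sketch of the epsilon-LRP rule, with made-up weights, inputs, and relevances standing in for the paper's network and its concatenated XAUG variables.

```python
import numpy as np

# Sketch of the epsilon-LRP rule for one dense layer: relevance R_out assigned
# to the layer's outputs is redistributed to its inputs (which, in an
# XAUG-style network, would include the concatenated expert variables).

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """Propagate output relevance R_out back to the inputs x.

    x: (n_in,) input activations; W: (n_in, n_out); b: (n_out,)
    Returns R_in of shape (n_in,); total relevance is conserved up to eps.
    """
    z = x @ W + b                          # forward pre-activations
    s = R_out / (z + eps * np.sign(z))     # stabilized output-relevance ratio
    return x * (W @ s)                     # each input's share of relevance

x = np.array([1.0, 0.5, 0.2])              # e.g., 2 low-level + 1 expert input
W = np.array([[1.0, -0.5], [0.5, 1.0], [0.2, 0.3]])
b = np.zeros(2)
R_out = np.array([1.0, 0.5])               # relevance placed on the 2 outputs
R_in = lrp_epsilon(x, W, b, R_out)
print(R_in, R_in.sum())                    # per-input relevance; sum ~ 1.5
```

Relevance conservation (the input relevances sum to the output relevance, up to the eps stabilizer) is what lets the per-feature scores be compared and ranked, as done for the XAUG and low-level features in the paper.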


Author(s):  
Krishna Gade ◽  
Sahin Cem Geyik ◽  
Krishnaram Kenthapadi ◽  
Varun Mithal ◽  
Ankur Taly