Mitigating belief projection in explainable artificial intelligence via Bayesian teaching

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Scott Cheng-Hsin Yang ◽  
Wai Keen Vong ◽  
Ravi B. Sojitra ◽  
Tomas Folke ◽  
Patrick Shafto

Abstract. State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesian teaching, which evaluates explanations by how much they shift explainees’ inferences toward a desired goal. We assess Bayesian teaching in a binary image classification task across a variety of contexts. Absent intervention, participants predict that the AI’s classifications will match their own, but explanations generated by Bayesian teaching improve their ability to predict the AI’s judgements by moving them away from this prior belief. Bayesian teaching further allows each case to be broken down into sub-examples (here saliency maps). These sub-examples complement whole examples by improving error detection for familiar categories, whereas whole examples help predict correct AI judgements of unfamiliar cases.
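A minimal sketch of the selection principle behind Bayesian teaching, assuming an idealized Bayesian learner over simple one-dimensional threshold classifiers rather than the paper's human participants and deep image classifier; the hypothesis grid, example pool, and target threshold below are all illustrative:

```python
import numpy as np
from itertools import combinations

thresholds = np.linspace(0, 1, 21)           # hypothesis space: predict 1 iff x >= t
target_t = 0.6                               # the "AI" whose judgements we explain
examples = np.linspace(0.05, 0.95, 10)       # pool of candidate teaching examples
labels = (examples >= target_t).astype(int)  # the AI's judgements on the pool

def learner_posterior(shown):
    """Posterior over thresholds after seeing the chosen (example, label) pairs."""
    post = np.ones_like(thresholds)          # uniform prior
    for i in shown:
        post *= ((examples[i] >= thresholds).astype(int) == labels[i])
    return post / post.sum()

def teaching_score(shown):
    """Posterior mass the learner places on hypotheses that mimic the target AI."""
    post = learner_posterior(shown)
    return post[np.isclose(thresholds, target_t)].sum()

# Bayesian teaching: pick the pair of examples that best shifts the simulated
# learner's beliefs toward the AI's decision rule.
best = max(combinations(range(len(examples)), 2), key=teaching_score)
print("teach examples:", examples[list(best)], "score:", teaching_score(best))
```

The teaching score is the posterior mass the simulated learner assigns to hypotheses that behave like the target AI; the example set that maximizes it is the one predicted to move the explainee furthest from their prior belief.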

Author(s):  
Alex J. DeGrave ◽  
Joseph D. Janizek ◽  
Su-In Lee

Abstract. Artificial intelligence (AI) researchers and radiologists have recently reported AI systems that accurately detect COVID-19 in chest radiographs. However, the robustness of these systems remains unclear. Using state-of-the-art techniques in explainable AI, we demonstrate that recent deep learning systems for detecting COVID-19 from chest radiographs rely on confounding factors rather than medical pathology, creating an alarming situation in which the systems appear accurate but fail when tested in new hospitals. We observe that the approach used to obtain training data for these AI systems introduces a nearly ideal scenario for AI to learn such spurious “shortcuts.” Because this approach to data collection has also been used to obtain training data for the detection of COVID-19 in computed tomography scans and for medical imaging tasks related to other diseases, our study reveals a far-reaching problem in medical imaging AI. In addition, we show that evaluating a model on external data is insufficient to ensure that AI systems rely on medically relevant pathology, since the undesired “shortcuts” learned by AI systems may not impair performance in new hospitals. These findings demonstrate that explainable AI should be seen as a prerequisite to the clinical deployment of ML healthcare models.
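A minimal sketch of the kind of saliency audit used to expose such shortcuts, assuming a generic pretrained PyTorch classifier and a placeholder image path; it is not the authors' pipeline, which combines several state-of-the-art attribution techniques:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Generic pretrained classifier stands in for a COVID-19 detection model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "radiograph.png" is a placeholder path; any RGB-convertible image works.
img = preprocess(Image.open("radiograph.png").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)
pred = logits.argmax(dim=1).item()
logits[0, pred].backward()                   # gradient of the winning logit

# Saliency: pixels whose perturbation most changes the predicted score.
saliency = img.grad.abs().max(dim=1).values.squeeze()   # (224, 224) map
print(saliency.shape, saliency.max().item())
```

Saliency mass concentrated outside the anatomy, for example on text markers or image borders, is the kind of evidence of shortcut learning the study describes.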


Risks ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 137
Author(s):  
Alex Gramegna ◽  
Paolo Giudici

We propose an Explainable AI model that can be employed to explain why a customer buys or abandons a non-life insurance coverage. The method consists of applying similarity clustering to the Shapley values obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technologically based insurance service (Insurtech), making it possible to understand, in real time, the factors that most contribute to customers’ decisions and thereby gain proactive insights into their needs. We prove the validity of our model with an empirical analysis conducted on data regarding purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results from the analysis reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which can predict their buying or churn behaviour well.
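A minimal sketch of the two-step recipe (Shapley values from a tree ensemble, then similarity clustering in Shapley space), assuming synthetic data in place of the insurance micro-policy records and illustrative hyperparameters throughout:

```python
import pandas as pd
import shap
import xgboost as xgb
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

# Synthetic stand-in for the purchase/churn data (illustrative only).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"feat_{i}" for i in range(8)])

model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

# Shapley values: each customer's per-feature contributions to the prediction.
shap_values = shap.TreeExplainer(model).shap_values(X)   # (n_samples, n_features)

# Similarity clustering in Shapley space groups customers by *why* the model
# scores them as it does, not by their raw features.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(shap_values)
profile = pd.DataFrame(shap_values, columns=X.columns).assign(cluster=clusters)
print(profile.groupby("cluster").mean())                  # dominant drivers per group
```

The per-cluster means indicate which factors dominate each customer segment's predicted propensity to buy or churn.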


2021 ◽  
Vol 4 ◽  
Author(s):  
Lindsay Wells ◽  
Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years as a response to the need for increased transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. This review explores current approaches and limitations for XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 snowball sampled) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations of the studies are presented, in particular a lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.


Information ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 51 ◽  
Author(s):  
Melanie Mitchell

Today’s AI systems sorely lack the essence of human intelligence: understanding the situations we experience and grasping their meaning. The lack of humanlike understanding in machines is underscored by recent studies demonstrating the lack of robustness of state-of-the-art deep-learning systems. Deeper networks and larger datasets alone are not likely to unlock AI’s “barrier of meaning”; instead, the field will need to embrace its original roots as an interdisciplinary science of intelligence.


2021 ◽  
Author(s):  
Cor Steging ◽  
Silja Renooij ◽  
Bart Verheij

The justification of an algorithm’s outcomes is important in many domains, and in particular in the law. However, previous research has shown that machine learning systems can make the right decisions for the wrong reasons: despite high accuracies, not all of the conditions that define the domain of the training data are learned. In this study, we investigate what the system does learn, using state-of-the-art explainable AI techniques. Using SHAP and LIME, we are able to show which features impact the decision-making process and how that impact changes with different distributions of the training data. However, our results also show that even high accuracy and good relevant-feature detection are no guarantee of a sound rationale. Hence, these state-of-the-art explainable AI techniques cannot be used to fully expose unsound rationales, further advocating the need for a separate method for rationale evaluation.
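A minimal sketch of the kind of feature-influence inspection described, using LIME on a generic classifier over synthetic data; the condition names, model, and class labels are placeholders, not the paper's legal-domain setup:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a rule-governed decision domain.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = [f"condition_{i}" for i in range(6)]
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["deny", "grant"],
    mode="classification", random_state=0)

# Local explanation for one decision: which conditions pushed it which way.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=6)
print(exp.as_list())   # [(feature condition, signed weight), ...]
```

As the abstract cautions, agreement between such attributions and the expected features does not by itself establish that the model's underlying rationale is sound.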


Author(s):  
Alexey Ignatiev

Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges facing AI today. Although the majority of approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning to compute provably correct explanations for machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
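A toy illustration of the abductive idea: a subset-minimal explanation computed by brute-force entailment checking over a small Boolean classifier. The rigorous approach surveyed in the paper instead encodes the ML model logically and delegates the entailment checks to an automated reasoner; the classifier below is purely illustrative:

```python
from itertools import product

def model(x):
    # Any Boolean classifier over 4 features; this one is illustrative.
    return int((x[0] and x[1]) or x[3])

def entails(instance, keep, target):
    """Fixing the features in `keep` to their values in `instance`: does every
    completion of the remaining features still yield `target`?"""
    free = [i for i in range(len(instance)) if i not in keep]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if model(tuple(x)) != target:
            return False
    return True

def abductive_explanation(instance):
    target = model(instance)
    keep = set(range(len(instance)))        # start from the full instance
    for i in list(keep):                    # greedily drop redundant features
        if entails(instance, keep - {i}, target):
            keep.remove(i)
    return sorted(keep), target             # subset-minimal by construction

print(abductive_explanation((1, 1, 0, 0)))  # -> ([0, 1], 1): x0 and x1 suffice
```

The retained features provably entail the prediction for every completion of the dropped ones, which is exactly the guarantee heuristic attribution methods lack.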


2021 ◽  
Vol 4 ◽  
pp. 1-8
Author(s):  
Walter David ◽  
Michelle King-Okoye ◽  
Alessandro Capone ◽  
Gianluca Sensidoni ◽  
Silvia Elena Piovan

Abstract. The COVID-19 pandemic has exposed both national and organizational vulnerabilities to infectious diseases and has impacted many business sectors with devastating effects. The authors identify an urgent need to plan effectively for future threats by exploiting emerging technologies to forecast, predict, and anticipate action at the strategic, operational, and local levels, thus strengthening the capacity of national and international responders. Doing so requires an approach that increases the awareness of the actors involved. The purpose of this study is to investigate how improved medical intelligence, harvesting big data available from social media, scientific literature, and other resources such as the local press, can improve situational awareness and support more informed decisions in the context of safeguarding and protecting populations from medical threats. This paper focuses on the exploitation of large volumes of unstructured data from the microblogging service Twitter for mapping and analysing the health and sentiment situation. The authors tested an explainable artificial intelligence (AI) supported medical intelligence tool on a megacity scenario by processing and visualizing tweets on a GIS map. Results indicate that explainable AI provides a promising solution for measuring and tracking the evolution of disease and for providing health, sentiment, and emotion situational awareness.
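A minimal sketch of the map-and-sentiment step, assuming a handful of hand-written geotagged posts and a toy keyword lexicon in place of the Twitter harvest, NLP models, and GIS layers used in the study:

```python
import folium

tweets = [  # (text, latitude, longitude): illustrative records only
    ("Hospital queues are getting worse, no beds available", 28.61, 77.21),
    ("Vaccination drive in our district went smoothly today", 28.65, 77.15),
    ("Feeling feverish, pharmacy out of medicine again", 28.57, 77.25),
]

NEGATIVE = {"worse", "no", "feverish", "out"}
POSITIVE = {"smoothly", "recovered"}

def sentiment(text):
    """Toy lexicon score; a real pipeline would use a trained sentiment model."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

m = folium.Map(location=[28.61, 77.21], zoom_start=11)
for text, lat, lon in tweets:
    score = sentiment(text)
    folium.CircleMarker(
        location=[lat, lon], radius=8,
        color="green" if score > 0 else "red",
        popup=f"{text} (score={score})",
    ).add_to(m)
m.save("health_sentiment_map.html")   # open in a browser to inspect the layer
```

Aggregating such scored, geolocated posts over time is what turns the raw stream into the health and sentiment situational awareness the abstract describes.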


Author(s):  
Ruth M. J. Byrne

Counterfactuals about what could have happened are increasingly used in an array of Artificial Intelligence (AI) applications, and especially in explainable AI (XAI). Counterfactuals can aid the provision of interpretable models to make the decisions of inscrutable systems intelligible to developers and users. However, not all counterfactuals are equally helpful in assisting human comprehension. Discoveries about the nature of the counterfactuals that humans create are a helpful guide to maximize the effectiveness of counterfactual use in AI.
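A minimal sketch of counterfactual generation for a classifier: greedily nudging the input until the predicted label flips. The greedy one-feature-at-a-time search, synthetic data, and step size are illustrative and do not represent the psychologically informed methods the paper discusses:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.25, max_iter=200):
    """Greedily nudge the single most helpful feature until the label flips."""
    original = model.predict([x])[0]
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict([x_cf])[0] != original:
            return x_cf                      # a "what could have happened" input
        best, best_p = None, -np.inf
        for i in range(len(x_cf)):
            for d in (+step, -step):
                cand = x_cf.copy()
                cand[i] += d
                p = model.predict_proba([cand])[0][1 - original]
                if p > best_p:
                    best, best_p = cand, p
        x_cf = best
    return None                              # no counterfactual found

x = X[0]
print("original class:", model.predict([x])[0])
print("counterfactual:", counterfactual(x, model))
```

The paper's point is that which of the many possible counterfactuals is shown matters: the ones people find comprehensible resemble the counterfactuals they spontaneously imagine themselves.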


Author(s):  
Shane Mueller ◽  
Robert Hoffman ◽  
Gary Klein ◽  
Tauseef Mamun ◽  
Mohammadreza Jalaeian

The field of Explainable AI (XAI) has focused primarily on algorithms that can help explain decisions and classifications and help determine whether a particular action of an AI system is justified. These XAI algorithms provide a variety of means for answering the questions human users might have about an AI. However, explanation is also supported by non-algorithms: methods, tools, interfaces, and evaluations that might help develop or provide explanations for users, either on their own or in concert with algorithmic explanations. In this article, we introduce and describe a small number of non-algorithms we have developed. These include several sets of guidelines offering methodological guidance on evaluating systems, covering both formative and summative evaluation (such as the self-explanation scorecard and the stakeholder playbook), and several concepts for generating explanations that can augment or replace algorithmic XAI (such as the Discovery platform, Collaborative XAI, and the Cognitive Tutorial). We introduce and review several of these example systems and discuss how they might be useful in developing or improving algorithmic explanations, or even in providing complete and useful non-algorithmic explanations of AI and ML systems.


Author(s):  
Robert Hoffman ◽  
William Clancey

We reflect on progress in the Explainable AI (XAI) program relative to previous work in the area of intelligent tutoring systems (ITS). A great deal was learned about explanation, and many challenges were uncovered, in research that is directly relevant to XAI. We suggest opportunities for future XAI research deriving from ITS methods, as well as the challenges shared by both ITS and XAI in using AI to assist people in solving difficult problems effectively and efficiently.

