The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making

2021
pp. 101666
Author(s):  
Hans de Bruijn
Martijn Warnier
Marijn Janssen
Author(s):  
Tathagata Chakraborti
Sarath Sreedharan
Subbarao Kambhampati

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP) that have emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms. We hope that the survey will provide new researchers in automated planning with guidance on the role of explanations in the effective design of human-in-the-loop systems, as well as provide established researchers with some perspective on the evolution of the exciting world of explainable planning.


2020
Vol 34 (10)
pp. 13738-13739
Author(s):  
Silvia Tulli

The research presented herein addresses the topic of explainability in autonomous pedagogical agents. We will be investigating possible ways to explain the decision-making process of such pedagogical agents (which can be embodied as robots), with a focus on the effect of these explanations in concrete learning scenarios for children. The hypothesis is that the agents' explanations of their decision making will support mutual modeling and a better understanding of the learning tasks and of how learners perceive them. The objective is to develop a computational model that will allow agents to express internal states and actions and to adapt to human expectations of cooperative behavior accordingly. In addition, we would like to provide a comprehensive taxonomy of both the desiderata and the methods in explainable AI research applied to children's learning scenarios.


Author(s):  
Sam Hepenstal
David McNeish

Abstract In domains that involve high-risk, high-consequence decision making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper we examine what it means to provide explainable AI. Drawing on our research findings, we propose that explanations should be tailored to reflect different needs, depending on the role of the human interacting with the system and on the individual system components. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture the complexity of needs. Thus, designing explainable AI systems involves careful consideration of context, and within that the nature of both the human and AI components.
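As a purely illustrative toy sketch of the "tailored explanation" idea (not the authors' system), the explanation returned can be keyed on both the user's role and the system component being queried rather than a single one-size-fits-all message; the roles, components, and texts below are invented examples.

```python
# Toy illustration only: role- and component-specific explanation lookup.
EXPLANATIONS = {
    ("analyst", "entity_extraction"): "Entities were matched against the configured watchlist patterns.",
    ("analyst", "risk_scoring"): "The risk score combines recency, frequency and link density features.",
    ("reviewer", "risk_scoring"): "The score exceeded the audit threshold defined in the review policy.",
    ("operator", "risk_scoring"): "High risk: recommend escalation to a human reviewer.",
}

def explain(role: str, component: str) -> str:
    """Return an explanation tailored to the requesting role and system component."""
    return EXPLANATIONS.get(
        (role, component),
        "No tailored explanation available for this role/component.",
    )

print(explain("analyst", "risk_scoring"))
print(explain("operator", "risk_scoring"))
```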


Author(s):  
Shravan Sajja
Nupur Aggarwal
Sumanta Mukherjee
Kushagra Manglik
Satyam Dwivedi
...  

2020
pp. 089443932098011
Author(s):  
Marijn Janssen
Martijn Hartog
Ricardo Matheus
Aaron Yi Ding
George Kuk

Computational artificial intelligence (AI) algorithms are increasingly used to support decision making by governments. Yet algorithms often remain opaque to the decision makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision making in three situations: humans making decisions (1) without any support from algorithms, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make the correct decisions given various scenarios, while the BR and ML algorithms could provide correct or incorrect suggestions to the decision maker. This enabled us to evaluate whether the participants were able to understand the limitations of BR and ML. The experiment shows that algorithms help decision makers make more correct decisions. The findings suggest that explainable AI, combined with experience, helps decision makers detect incorrect suggestions made by algorithms. However, even experienced persons were not able to identify all mistakes. Ensuring the ability to understand and trace back decisions is not sufficient to avoid incorrect decisions. The findings imply that algorithms should be adopted with care, and that selecting appropriate algorithms for decision support and training decision makers are key factors in increasing accountability and transparency.
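To make the comparison concrete, here is a minimal analysis sketch in the spirit of the experiment described above (not the authors' code): it computes participant accuracy per support condition and, among trials where the algorithm's suggestion was wrong, how often participants still decided correctly. The column names and toy trial records are illustrative assumptions.

```python
# Illustrative sketch only; not the study's data or analysis code.
import pandas as pd

# Toy trial records: support condition, whether the algorithm's suggestion was
# correct (None = no algorithmic support), and whether the participant was correct.
trials = pd.DataFrame({
    "condition": ["none", "none", "BR", "BR", "ML", "ML"],
    "suggestion_correct": [None, None, True, False, True, False],
    "participant_correct": [False, True, True, False, True, True],
})

# Accuracy per support condition: no support vs. business rules vs. machine learning.
print(trials.groupby("condition")["participant_correct"].mean())

# Detection of incorrect suggestions: among trials where the algorithm was wrong,
# how often did the participant nevertheless make the correct decision?
wrong = trials[trials["suggestion_correct"] == False]
print(wrong.groupby("condition")["participant_correct"].mean())
```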


2021
Vol 12 (1)
pp. 287-296
Author(s):  
Kaustav Das
Yixiao Wang
Keith E. Green

Abstract Increasingly, robots are decision makers in manufacturing, finance, medicine, and other areas, but the technology may not be trusted enough, for reasons such as gaps between expectation and competency, challenges in explainable AI, and users’ level of exposure to the technology. To investigate the trust issues between users and robots, the authors employed in this study the case of robots acting as referees and making decisions in football (or “soccer,” as it is known in the US) games. More specifically, we presented a study on how the appearance of a human and three robotic linesmen (as presented in a study by Malle et al.) impacts fans’ trust in and preference for them. Our online study with 104 participants finds a positive correlation between “Trust” and “Preference” for the humanoid and human linesmen, but not for the “AI” and “mechanical” linesmen. Although no significant trust differences were observed across the types of linesmen, participants do prefer the human linesman to the mechanical and humanoid linesmen. Our qualitative study further validated these quantitative findings by probing possible reasons for people’s preference: when the appearance of a linesman is not humanlike, people focus less on trust and more on other reasons for their preference, such as efficiency, stability, and minimal robot design. These findings provide important insights for the design of trustworthy decision-making robots, which are increasingly integrated into more and more aspects of our everyday lives.
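As a purely illustrative sketch of the kind of analysis behind the reported Trust–Preference relationship (not the authors' code or data), one could compute a rank correlation between trust and preference ratings separately for each linesman type; the ratings below are invented placeholders.

```python
# Illustrative placeholder data and analysis; not the study's ratings.
from scipy.stats import spearmanr

ratings = {
    "human":      {"trust": [5, 4, 5, 3, 4], "preference": [5, 5, 4, 4, 5]},
    "humanoid":   {"trust": [4, 3, 4, 2, 3], "preference": [4, 3, 3, 2, 3]},
    "AI":         {"trust": [3, 4, 3, 3, 2], "preference": [2, 3, 3, 2, 2]},
    "mechanical": {"trust": [3, 3, 2, 4, 3], "preference": [2, 3, 2, 3, 4]},
}

# Rank correlation between trust and preference, computed per linesman type.
for linesman, r in ratings.items():
    rho, p = spearmanr(r["trust"], r["preference"])
    print(f"{linesman:<10s} Spearman rho = {rho:+.2f} (p = {p:.3f})")
```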


Author(s):  
Ahmad Kamal Mohd Nor

Deep learning is quickly becoming essential to the human ecosystem. However, the opacity of certain deep learning models poses a legal barrier to their adoption for more critical purposes. Explainable AI (XAI) is a recent paradigm intended to tackle this issue: it explains the predictions produced by black-box AI models, making it extremely practical for safety-, security- or financially important decision making. Moreover, most deep learning studies are based on point-estimate predictions with no measure of uncertainty, which is vital for decision making; such models are therefore not well suited to real-world applications. This paper presents a Remaining Useful Life (RUL) estimation problem for turbofan engines, equipped with prognostic explainability and uncertainty quantification. A single-input, multi-output probabilistic Long Short-Term Memory (LSTM) network is employed to predict the RUL distributions of the turbofans, and the SHapley Additive exPlanations (SHAP) approach is applied to explain the resulting prognostics. The explainable probabilistic LSTM is thus able to express its confidence in its predictions and to explain the estimates it produces. The performance of the proposed method is comparable to that of several other published works.
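As a rough illustration of the modelling idea described above (an LSTM whose output distribution carries its own uncertainty), here is a minimal PyTorch sketch, not the paper's implementation; the window length, sensor count, layer sizes, and dummy data are assumptions, and the SHAP attribution step is only indicated in a comment.

```python
# Minimal sketch (not the paper's code) of a probabilistic LSTM for RUL estimation:
# the network predicts a mean and a standard deviation, trained with a Gaussian
# negative log-likelihood so the predicted spread expresses the model's confidence.
import torch
import torch.nn as nn

class ProbabilisticRULLSTM(nn.Module):
    def __init__(self, n_sensors=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, 1)         # predicted mean RUL
        self.log_sigma = nn.Linear(hidden, 1)  # predicted log standard deviation

    def forward(self, x):                      # x: (batch, time, sensors)
        h, _ = self.lstm(x)
        last = h[:, -1, :]                     # final hidden state of each sequence
        return self.mu(last), self.log_sigma(last).exp()

def gaussian_nll(mu, sigma, y):
    """Negative log-likelihood of targets y under N(mu, sigma^2)."""
    return -torch.distributions.Normal(mu, sigma).log_prob(y).mean()

model = ProbabilisticRULLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 30, 14)        # dummy batch: 32 windows of 30 cycles x 14 sensors
y = torch.rand(32, 1) * 125.0      # dummy RUL targets (in cycles)
for _ in range(5):                 # a few illustrative training steps
    mu, sigma = model(x)
    loss = gaussian_nll(mu, sigma, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
# Attributions for the predicted mean could then be computed with a SHAP explainer
# (e.g. shap.GradientExplainer) over the sensor inputs, mirroring the paper's use of SHAP.
```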


2020
Author(s):  
Yasmeen Alufaisan
Laura Ranee Marusich
Jonathan Z Bakdash
Yan Zhou
Murat Kantarcioglu

Explainable AI provides users with insights into the why of model predictions, offering the potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interaction has typically focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings on whether explainable AI can improve actual human decision-making and the ability to identify problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but no conclusive evidence that explainable AI has a meaningful impact. Moreover, we observed that the strongest predictor of human decision accuracy was AI accuracy, and that users were somewhat able to detect when the AI was correct vs. incorrect, but this was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the why information provided in explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
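A hypothetical re-analysis sketch of the design described above (not the authors' analysis): it asks whether adding an explanation changes how often users accept incorrect AI predictions, using a two-sample proportions test. The counts are invented placeholders; the study's actual numbers are in the paper.

```python
# Invented counts for illustration only; not the study's results.
from statsmodels.stats.proportion import proportions_ztest

# Number of incorrect AI predictions accepted by users, and trials per condition:
# [AI prediction only, AI prediction + explanation]
accepted = [38, 35]
n_trials = [100, 100]

z, p = proportions_ztest(accepted, n_trials)
print(f"z = {z:.2f}, p = {p:.3f}")  # a non-significant p would mirror the finding that
                                    # explanations did not measurably improve error detection
```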


2018
Vol 41
Author(s):  
Patrick Simen
Fuat Balcı

Abstract Rahnev & Denison (R&D) argue against normative theories and in favor of a more descriptive “standard observer model” of perceptual decision making. We agree with the authors in many respects, but we argue that optimality (specifically, reward-rate maximization) has proved demonstrably useful as a hypothesis, contrary to the authors’ claims.
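For context on the optimality criterion invoked here, reward-rate maximization in perceptual decision making is commonly formalized along the following lines (a standard form in the sequential-sampling literature; the exact parameterization varies across studies and is given only as a sketch):

\[
\mathrm{RR} \;=\; \frac{1-\mathrm{ER}}{\mathrm{DT} + T_{0} + D + D_{p}\,\mathrm{ER}},
\]

where \(\mathrm{ER}\) is the error rate, \(\mathrm{DT}\) the mean decision time, \(T_{0}\) the non-decision (sensory and motor) latency, \(D\) the inter-trial interval, and \(D_{p}\) an additional post-error delay; an optimal observer sets its decision threshold to maximize \(\mathrm{RR}\).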

