Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning

Author(s):  
Ruth M. J. Byrne

Counterfactuals about what could have happened are increasingly used in an array of Artificial Intelligence (AI) applications, and especially in explainable AI (XAI). Counterfactuals can aid the provision of interpretable models to make the decisions of inscrutable systems intelligible to developers and users. However, not all counterfactuals are equally helpful in assisting human comprehension. Discoveries about the nature of the counterfactuals that humans create are a helpful guide to maximize the effectiveness of counterfactual use in AI.
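
The abstract does not specify an algorithm, but a concrete illustration of what a counterfactual explanation looks like in XAI may help: the sketch below searches, for a toy loan-approval model, for the smallest set of feature changes that would have flipped a rejection to an approval ("if your income had been 40k, you would have been approved"). The model, thresholds, and candidate edits are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

# Illustrative loan-approval rule standing in for an opaque model (assumption).
def model(applicant):
    score = ((applicant["income"] >= 40_000)
             + (applicant["credit"] >= 650)
             + (applicant["debt"] <= 10_000))
    return "approve" if score >= 2 else "reject"

# Candidate single-feature edits the explainer is allowed to propose (assumption).
EDITS = {"income": 40_000, "credit": 650, "debt": 10_000}

def counterfactual(applicant):
    """Return the smallest set of feature changes that flips a rejection to approval."""
    if model(applicant) == "approve":
        return []
    for size in range(1, len(EDITS) + 1):
        for features in combinations(EDITS, size):
            changed = dict(applicant, **{f: EDITS[f] for f in features})
            if model(changed) == "approve":
                return [(f, applicant[f], EDITS[f]) for f in features]
    return None

applicant = {"income": 35_000, "credit": 700, "debt": 15_000}
print(model(applicant))           # reject
print(counterfactual(applicant))  # [('income', 35000, 40000)]: "if income had been 40k, approved"
```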

Risks ◽ 2020 ◽ Vol 8 (4) ◽ pp. 137
Author(s):  
Alex Gramegna ◽  
Paolo Giudici

We propose an Explainable AI model that can be employed to explain why a customer buys or abandons non-life insurance coverage. The method consists of applying similarity clustering to the Shapley values obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technology-based insurance service (Insurtech), allowing the provider to understand, in real time, the factors that most contribute to customers’ decisions and thereby gain proactive insights into their needs. We demonstrate the validity of our model with an empirical analysis conducted on data regarding purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which can predict their buying or churn behaviour well.
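
The abstract names the building blocks (an XGBoost classifier, Shapley values, similarity clustering) but not the exact pipeline; a minimal sketch along those lines, using the shap and scikit-learn libraries with synthetic data standing in for the insurance records, might look like this.

```python
import numpy as np
import xgboost as xgb
import shap
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans

# Synthetic stand-in for the insurance purchase/churn data (assumption).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Fit the predictive classifier.
model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# Shapley values: one attribution vector per customer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Similarity clustering in Shapley-value space groups customers whose
# decisions are driven by similar factors (k chosen arbitrarily here).
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(shap_values)

# Inspect which feature dominates each group.
for c in range(4):
    mean_abs = np.abs(shap_values[clusters == c]).mean(axis=0)
    print(f"cluster {c}: top feature = x{mean_abs.argmax()}")
```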


2021 ◽ Vol 4
Author(s):  
Lindsay Wells ◽  
Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years in response to the need for greater transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. This review explores current approaches and limitations of XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 identified by snowball sampling) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations of the studies are presented, in particular the lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.


Author(s):  
Alexey Ignatiev

Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges currently faced by AI. Although the majority of approaches to XAI are heuristic in nature, recent work has proposed the use of abductive reasoning for computing provably correct explanations of machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustworthy explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustworthy XAI is of concern.
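
As a toy illustration of what "provably correct" means here: an abductive (subset-minimal sufficient) explanation is a smallest set of feature values that, on their own, entail the model's prediction. The sketch below verifies sufficiency by exhaustive enumeration over a small Boolean classifier; the classifier and features are assumptions for illustration, and real logic-based XAI instead encodes the model for a SAT/SMT reasoner rather than enumerating.

```python
from itertools import combinations, product

FEATURES = ["fever", "cough", "rash", "fatigue"]

# Toy Boolean classifier standing in for an ML model (assumption).
def predict(x):
    return int((x["fever"] and x["cough"]) or x["rash"])

def is_sufficient(instance, subset):
    """Check that fixing `subset` to the instance's values entails the prediction
    for every assignment of the remaining features (exhaustive verification)."""
    target = predict(instance)
    free = [f for f in FEATURES if f not in subset]
    for values in product([0, 1], repeat=len(free)):
        candidate = dict(instance)
        candidate.update(zip(free, values))
        if predict(candidate) != target:
            return False
    return True

def abductive_explanation(instance):
    """Smallest feature subset whose values alone force the model's prediction."""
    for size in range(len(FEATURES) + 1):
        for subset in combinations(FEATURES, size):
            if is_sufficient(instance, subset):
                return {f: instance[f] for f in subset}

patient = {"fever": 1, "cough": 1, "rash": 0, "fatigue": 1}
print(predict(patient))                # 1
print(abductive_explanation(patient))  # {'fever': 1, 'cough': 1}
```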


2021 ◽ Vol 4 ◽ pp. 1-8
Author(s):  
Walter David ◽  
Michelle King-Okoye ◽  
Alessandro Capone ◽  
Gianluca Sensidoni ◽  
Silvia Elena Piovan

Abstract. The COVID-19 pandemic has exposed both national and organizational vulnerabilities to infectious diseases and has impacted many business sectors with devastating effects. The authors have identified an urgent need to plan effectively for future threats by exploiting emerging technologies to forecast, predict and anticipate action at the strategic, operational and local levels, thus strengthening the capacity of national and international responders. To do this, we need an approach that increases the awareness of the actors involved. The purpose of this study is to investigate how improved medical intelligence, harvested from the big data available in social media, scientific literature and other resources such as the local press, can improve situational awareness and support more informed decisions in the context of safeguarding and protecting populations from medical threats. This paper focuses on the exploitation of large volumes of unstructured data from the microblogging service Twitter for mapping and analysing the health and sentiment situation. The authors tested an explainable artificial intelligence (AI) supported medical intelligence tool on a megacity scenario by processing and visualizing tweets on a GIS map. Results indicate that explainable AI provides a promising solution for measuring and tracking the evolution of disease and providing health, sentiment and emotion situational awareness.
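
The paper's tool is not described at code level; the sketch below only illustrates the general shape of such a pipeline, turning geotagged posts into per-district sentiment scores that could feed a GIS layer. The example records, districts, and keyword lexicon are invented for illustration; a real system would pull data from the Twitter API and use trained sentiment and emotion models.

```python
from collections import defaultdict

# Hand-made records standing in for harvested geotagged tweets (assumption).
TWEETS = [
    {"district": "North", "text": "hospital overwhelmed, no beds left"},
    {"district": "North", "text": "long queues for testing again today"},
    {"district": "South", "text": "vaccination centre running smoothly, great staff"},
    {"district": "South", "text": "feeling hopeful, cases dropping here"},
]

# Tiny illustrative lexicon; real systems use trained sentiment/emotion models.
NEGATIVE = {"overwhelmed", "queues", "no"}
POSITIVE = {"smoothly", "great", "hopeful", "dropping"}

def sentiment(text):
    """Crude keyword score: positive hits minus negative hits."""
    words = set(text.lower().replace(",", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# Aggregate per district: these scores would drive the choropleth layer of a GIS map.
scores = defaultdict(list)
for tweet in TWEETS:
    scores[tweet["district"]].append(sentiment(tweet["text"]))

for district, values in scores.items():
    print(district, sum(values) / len(values))
```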


Author(s):  
Shane Mueller ◽  
Robert Hoffman ◽  
Gary Klein ◽  
Tauseef Mamun ◽  
Mohammadreza Jalaeian

The field of Explainable AI (XAI) has focused primarily on algorithms that can help explain decisions and classifications and help determine whether a particular action of an AI system is justified. These XAI algorithms provide a variety of means for answering questions human users might have about an AI. However, explanation is also supported by non-algorithms: methods, tools, interfaces, and evaluations that can help develop or provide explanations for users, either on their own or in concert with algorithmic explanations. In this article, we introduce and describe a small number of non-algorithms we have developed. These include several sets of methodological guidelines for evaluating systems, covering both formative and summative evaluation (such as the self-explanation scorecard and the stakeholder playbook), and several concepts for generating explanations that can augment or replace algorithmic XAI (such as the Discovery platform, Collaborative XAI, and the Cognitive Tutorial). We introduce and review several of these example systems and discuss how they might be useful in developing or improving algorithmic explanations, or even in providing complete and useful non-algorithmic explanations of AI and ML systems.


Author(s):  
Robert Hoffman ◽  
William Clancey

We reflect on progress in the Explainable AI (XAI) Program relative to previous work in the area of intelligent tutoring systems (ITS). A great deal was learned about explanation, and many challenges were uncovered, in research that is directly relevant to XAI. We suggest opportunities for future XAI research deriving from ITS methods, as well as the challenges shared by ITS and XAI in using AI to help people solve difficult problems effectively and efficiently.


Author(s):  
Sam Hepenstal ◽  
David McNeish

Abstract In domains that require high-risk, high-consequence decision making, such as defence and security, there is a clear requirement for artificial intelligence (AI) systems to be able to explain their reasoning. In this paper we examine what it means to provide explainable AI. We report on research findings and propose that explanations should be tailored to the role of the human interacting with the system and to the individual system components, so as to reflect different needs. We demonstrate that a ‘one-size-fits-all’ explanation is insufficient to capture the complexity of needs. Thus, designing explainable AI systems involves careful consideration of context, and within that, of the nature of both the human and AI components.
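
One way to read the "no one-size-fits-all" argument is as a design mapping from user role and system component to explanation style. The sketch below encodes such a mapping as a simple lookup; the roles, components, and explanation styles are illustrative assumptions, not taken from the paper.

```python
# Illustrative mapping from (user role, system component) to the kind of
# explanation surfaced; roles and styles are assumptions, not from the paper.
EXPLANATION_POLICY = {
    ("analyst", "entity_extraction"):   "show source passages and extraction confidence",
    ("analyst", "recommendation"):      "show ranked evidence and counterfactual alternatives",
    ("developer", "entity_extraction"): "show model features, thresholds, and error cases",
    ("commander", "recommendation"):    "show a one-line rationale, risk level, and provenance",
}

def explain(role, component):
    """Select an explanation style for this role/component pair, with a safe default."""
    return EXPLANATION_POLICY.get(
        (role, component),
        "fall back to a generic summary and flag the gap for the design team",
    )

print(explain("analyst", "recommendation"))
print(explain("operator", "recommendation"))  # unmapped pair -> default
```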


2020
Author(s):  
Toshimichi Ikemura ◽  
Kennosuke Wada ◽  
Yoshiko Wada ◽  
Yuki Iwasaki ◽  
Takashi Abe

Abstract Unsupervised AI (artificial intelligence) can obtain novel knowledge from big data without particular models or prior knowledge and is highly desirable for unveiling hidden features in big data. SARS-CoV-2 poses a serious threat to public health, and one important issue in characterizing this fast-evolving virus is to elucidate various aspects of its genome sequence changes. We previously established an unsupervised AI, a BLSOM (batch-learning self-organizing map), which can analyze five million genomic sequences simultaneously. The present study applied the BLSOM to the oligonucleotide compositions of forty thousand SARS-CoV-2 genomes. Although only the oligonucleotide composition was given, the obtained clusters of genomes corresponded primarily to the known main clades and to internal divisions within them. Since the BLSOM is an explainable AI, it reveals which features of the oligonucleotide composition are responsible for clade clustering. The BLSOM also has powerful image display capabilities, enabling efficient knowledge discovery about viral evolutionary processes.
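
BLSOM itself is not distributed as a standard package; the sketch below only illustrates the general approach (oligonucleotide, i.e. k-mer, composition vectors fed to a self-organizing map), using the MiniSom library as a stand-in and random toy sequences in place of SARS-CoV-2 genomes.

```python
from itertools import product
import numpy as np
from minisom import MiniSom  # stand-in for BLSOM, which is not a public package

def kmer_composition(sequence, k=3):
    """Frequency vector over all 4**k oligonucleotides of length k."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        if kmer in index:
            counts[index[kmer]] += 1
    return counts / max(counts.sum(), 1)

# Toy random sequences standing in for SARS-CoV-2 genomes (assumption).
rng = np.random.default_rng(0)
genomes = ["".join(rng.choice(list("ACGT"), size=300)) for _ in range(50)]
data = np.array([kmer_composition(g) for g in genomes])

# Train a small self-organizing map on the composition vectors; groups of
# winning nodes would correspond to clades in the real analysis.
som = MiniSom(6, 6, data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, 1000)
print([som.winner(v) for v in data[:5]])  # map coordinates of the first five genomes
```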


Author(s):  
Krzysztof Fiok ◽  
Farzad V Farahani ◽  
Waldemar Karwowski ◽  
Tareq Ahram

Researchers and software users benefit from the rapid growth of artificial intelligence (AI) to an unprecedented extent in various domains where automated intelligent action is required. However, as they continue to engage with AI, they also begin to understand the limitations and risks associated with ceding control and decision-making to artificial computer agents that are not always transparent. Understanding “what is happening in the black box” becomes feasible with explainable AI (XAI) methods, which are designed to mitigate these risks and introduce trust into human-AI interactions. Our study reviews the essential capabilities, limitations, and desiderata of XAI tools developed over recent years and traces the history of XAI and of AI in education (AIED). We present different approaches to AI and XAI from the viewpoint of researchers focused on AIED, in comparison with researchers focused on AI and machine learning (ML). We conclude that both groups desire increased efforts to obtain improved XAI tools; however, they formulate different target user groups and expectations regarding XAI features, and they provide different examples of possible achievements. We summarize these viewpoints and provide guidelines for scientists looking to incorporate XAI into their own work.


Author(s):  
Stephen K. Reed

Deep connectionist learning has resulted in very impressive accomplishments, but it is unclear how it achieves its results. A dilemma in using the output of machine learning is that the best-performing methods are the least explainable. Explainable artificial intelligence seeks to develop systems that can explain their reasoning to a human user. The application of IBM’s WatsonPaths to medicine includes a diagnostic network that infers diagnoses from symptoms, with a degree of confidence associated with each diagnosis. The Semanticscience Integrated Ontology uses categories such as objects, processes, attributes, and relations to create networks of biological knowledge. The same categories are fundamental in representing other types of knowledge such as cognition. Extending an ontology requires a consistent use of semantic terms across different domains of knowledge.
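
As a toy illustration of a diagnostic network that attaches a degree of confidence to each diagnosis, the sketch below scores candidate conditions from observed symptoms with a hand-rolled naive Bayes calculation; the conditions, priors, and likelihoods are invented for illustration and are not WatsonPaths internals.

```python
# Made-up priors and symptom likelihoods (assumptions, not WatsonPaths data).
PRIORS = {"flu": 0.05, "cold": 0.20, "allergy": 0.10}
LIKELIHOOD = {          # P(symptom present | condition)
    "flu":     {"fever": 0.90, "cough": 0.80, "sneezing": 0.30},
    "cold":    {"fever": 0.20, "cough": 0.60, "sneezing": 0.70},
    "allergy": {"fever": 0.01, "cough": 0.30, "sneezing": 0.90},
}

def diagnose(symptoms):
    """Naive-Bayes style scoring: normalized confidence for each candidate diagnosis."""
    scores = {}
    for condition, prior in PRIORS.items():
        p = prior
        for symptom, present in symptoms.items():
            likelihood = LIKELIHOOD[condition][symptom]
            p *= likelihood if present else (1 - likelihood)
        scores[condition] = p
    total = sum(scores.values())
    return {c: round(p / total, 3) for c, p in scores.items()}

print(diagnose({"fever": True, "cough": True, "sneezing": False}))
# Highest confidence for flu; the network structure also shows *why*, which is the XAI point.
```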

