Knowledge representation for explainable artificial intelligence

Author(s):  
Joaquín Borrego-Díaz ◽  
Juan Galán Páez

Alongside the particular need to explain the behavior of black-box artificial intelligence (AI) systems, there is a general need to explain the behavior of any type of AI-based system (hence explainable AI, XAI), or of any complex system that integrates this type of technology, given the importance of its economic, political, or industrial-rights impact. The unstoppable development of AI-based applications in sensitive areas has led to what could be seen, from a formal and philosophical point of view, as a crisis in the foundations, which makes it necessary both to provide models of the fundamentals of explainability and to discuss the advantages and disadvantages of different proposals. The need for foundations is also linked to the permanent challenge that the notion of explainability represents in the Philosophy of Science. The paper aims to elaborate a general theoretical framework for discussing the foundational characteristics of explaining, as well as how solutions (events) would be justified (explained). The approach, epistemological in nature, is based on a phenomenological approach to complex systems reconstruction (which encompasses complex AI-based systems). The formalized perspective is close to ideas from argumentation and induction (as learning). The soundness and limitations of the approach are addressed from the Knowledge Representation and Reasoning paradigm and, in particular, from a Computational Logic point of view. With regard to the latter, the proposal is intertwined with several related notions of explanation from the Philosophy of Science.

Author(s):  
Krzysztof Fiok ◽  
Farzad V Farahani ◽  
Waldemar Karwowski ◽  
Tareq Ahram

Researchers and software users benefit from the rapid growth of artificial intelligence (AI) to an unprecedented extent in various domains where automated intelligent action is required. However, as they continue to engage with AI, they also begin to understand the limitations and risks of ceding control and decision-making to not-always-transparent artificial computer agents. Understanding “what is happening in the black box” becomes feasible with explainable AI (XAI) methods, which are designed to mitigate these risks and introduce trust into human-AI interactions. Our study reviews the essential capabilities, limitations, and desiderata of XAI tools developed in recent years, and surveys the history of XAI and of AI in education (AIED). We present the different approaches to AI and XAI taken by researchers focused on AIED, in comparison with researchers focused on AI and machine learning (ML). We conclude that both groups of interest desire increased efforts to obtain improved XAI tools; however, the groups identify different target user groups and expectations regarding XAI features, and provide different examples of possible achievements. We summarize these viewpoints and provide guidelines for scientists looking to incorporate XAI into their own work.


2021 ◽  
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.
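As an illustration of the experimental approach the authors advocate, the following is a minimal sketch of a psychophysics-style probe of an artificial black box: one stimulus dimension is varied systematically while the model's response is recorded, just as one would chart a human observer's response curve. The dataset and classifier are illustrative placeholders, not the authors' materials.

```python
# A minimal sketch of a psychophysics-style probe of a black-box model:
# systematically vary one stimulus dimension and record the model's
# response curve, as one would for a human observer. The dataset and
# classifier below are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Hold a baseline stimulus fixed and sweep a single feature (dimension 0),
# recording P(class = 1) at each level -- the machine's "response curve".
baseline = X.mean(axis=0)
levels = np.linspace(X[:, 0].min(), X[:, 0].max(), 21)
stimuli = np.tile(baseline, (len(levels), 1))
stimuli[:, 0] = levels
curve = model.predict_proba(stimuli)[:, 1]

for lvl, p in zip(levels, curve):
    print(f"feature_0 = {lvl:+.2f} -> P(class 1) = {p:.3f}")
```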


Author(s):  
David Mendes ◽  
Irene Pimenta Rodrigues

The ISO/HL7 27931:2009 standard intends to establish a global interoperability framework for healthcare applications. However, being a messaging-related protocol, it lacks a semantic foundation for interoperability at the machine-treatable level intended by the Semantic Web: there is no alignment between HL7 V2.xml message payloads and a meaning service such as a suitable ontology. Careful application of Semantic Web tools and concepts can ease the path to the fundamental concept of Shared Semantics. In this chapter, the Semantic Web and Artificial Intelligence tools and techniques that allow aligned ontology population are presented and their applicability discussed. The authors cover the inadequacy of the HL7 RIM for ontology mapping and how to circumvent it, NLP techniques for semi-automated ontology population, and the current trends in knowledge representation and reasoning that contribute to the proposed achievement.
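To make the idea of aligning message payloads with an ontology concrete, here is a minimal sketch that re-expresses a pipe-delimited HL7 V2.x patient segment as RDF triples with rdflib. The PID segment content, the EX namespace, and the property names are illustrative assumptions, not part of the HL7 standard or of the authors' mapping.

```python
# A minimal sketch of aligning an HL7 V2 message payload with an ontology:
# a pipe-delimited PID segment is parsed and re-expressed as RDF triples.
# The EX namespace and property names are illustrative stand-ins for a
# vetted healthcare ontology, not part of the HL7 standard itself.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/hl7#")  # hypothetical ontology namespace

segment = "PID|1||12345^^^Hospital^MR||Doe^John||19800101|M"
fields = segment.split("|")

g = Graph()
patient_id = fields[3].split("^")[0]
patient = EX[f"patient/{patient_id}"]
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.identifier, Literal(patient_id)))
family, given = fields[5].split("^")[:2]
g.add((patient, EX.familyName, Literal(family)))
g.add((patient, EX.givenName, Literal(given)))
g.add((patient, EX.birthDate, Literal(fields[7])))

print(g.serialize(format="turtle"))
```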


Author(s):  
Gonzalo Aranda-Corral ◽  
Joaquín Borrego-Díaz

In this chapter we present, from the point of view of Knowledge Representation and Reasoning, an analysis of the ontological dimensions needed to develop the Mobile Web 2.0 on top of the Semantic Web. The analysis focuses in particular on social networks and offers an outlook on the new knowledge challenges in this field. Some of these challenges are linked to the Semantic Web context, while others are inherent to the Semantic Mobile Web 2.0.


Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning on the Internet of Things (IoT) and cyber-physical systems (CPS), two important components of Industry 4.0, are intensifying. However, current deep learning models have important shortcomings. These artificial-neural-network-based models are black-box models that generalize and learn from the data transmitted to them, so the relational link between input and output is not observable. For these reasons, serious efforts are needed on the explainability and interpretability of black-box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems will affect a high level of virtualization and simulation infrastructure, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competition levels.


Risks ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 137
Author(s):  
Alex Gramegna ◽  
Paolo Giudici

We propose an Explainable AI model that can be employed to explain why a customer buys or abandons a non-life insurance coverage. The method consists of applying similarity clustering to the Shapley values obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technology-based insurance service (Insurtech), making it possible to understand, in real time, the factors that most contribute to customers’ decisions, thereby gaining proactive insights into their needs. We demonstrate the validity of our model with an empirical analysis of data on purchases of insurance micro-policies, investigating two aspects: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which predict their buying or churn behaviour well.
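A minimal sketch of the pipeline described above, with synthetic data standing in for the insurance purchase records: fit an XGBoost classifier, compute Shapley values with the shap library, then group customers by clustering those values. The feature count and number of clusters are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch: fit XGBoost, compute Shapley values, then cluster
# customers in Shapley space. Synthetic data stands in for the insurance
# purchase records; all sizes and cluster counts are illustrative.
import numpy as np
import shap
import xgboost
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = xgboost.XGBClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shapley values: one additive attribution per customer per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Similarity clustering in Shapley space groups customers whose decisions
# are driven by the same factors (4 clusters chosen arbitrarily here).
clusters = KMeans(n_clusters=4, random_state=0).fit_predict(shap_values)
for c in range(4):
    top = np.abs(shap_values[clusters == c]).mean(axis=0).argmax()
    print(f"cluster {c}: dominant feature = {top}")
```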


2021 ◽  
Vol 4 ◽  
Author(s):  
Lindsay Wells ◽  
Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years as a response to the need for increased transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, for which detailed systematic reviews have already been undertaken. This review explores current approaches and limitations of XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 obtained by snowball sampling) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations of the studies are presented, particularly a lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.


2020 ◽  
Vol 8 ◽  
pp. 61-72
Author(s):  
Kara Combs ◽  
Mary Fendley ◽  
Trevor Bihl

Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches that extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks an understanding of algorithmic explainability from a human factors perspective. This paper presents a repeatable human factors heuristic analysis for XAI, with a demonstration on four decision tree classifier algorithms.
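As an illustration of the kind of human-interpretable artifact such a heuristic analysis would assess, the sketch below renders a scikit-learn decision tree as plain if/then rules. The Iris data and depth limit are illustrative assumptions, not the paper's experimental setup.

```python
# A minimal sketch of extracting a human-interpretable decision process
# from a decision tree classifier: the learned splits are printed as
# readable if/then rules. Data and depth limit are illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the tree as nested rules -- the sort of artifact a
# human factors heuristic evaluation of explainability would examine.
print(export_text(tree, feature_names=list(iris.feature_names)))
```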


AI Magazine ◽  
2012 ◽  
Vol 33 (1) ◽  
pp. 99-103 ◽  
Author(s):  
Alexander Ferrein ◽  
Thomas Meyer

One of the consequences of the growth in AI research in South Africa in recent years is the establishment of a number of research hubs involved in AI activities ranging from mobile robotics and computational intelligence, to knowledge representation and reasoning, and human language technologies. In this survey we take the reader through a quick tour of the research being conducted at these hubs, and touch on an initiative to maintain and extend the current level of interest in AI research in the country.

