"The Ritual Food of Gut as an Explanation System of Korean Shamanism"

2017 ◽  
Vol 32 ◽  
pp. 186-218
Author(s):  
Yong Bhum Yi


1982 ◽  
Vol 21 (03) ◽  
pp. 127-136 ◽  
Author(s):  
J. W. Wallis ◽  
E. H. Shortliffe

This paper reports on experiments designed to identify and implement mechanisms for enhancing the explanation capabilities of reasoning programs for medical consultation. The goals of an explanation system are discussed, as is the additional knowledge needed to meet these goals in a medical domain. We have focused on generating explanations that are appropriate for different types of system users. This task requires knowledge of what is complex and what is important; it is further strengthened by classifying the associations or causal mechanisms inherent in the inference rules. A causal representation can also help refine a comprehensive knowledge base so that the reasoning and explanations are more adequate. We describe a prototype system that reasons from causal inference rules and generates explanations appropriate for the user.
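A minimal sketch, in Python, of the tailoring idea this abstract describes: each inference rule carries a causal or associational classification plus complexity and importance ratings, and the explanation generator filters on these according to the user's expertise. All names, ratings, and rules here are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Rule:
    premise: str
    conclusion: str
    mechanism: str      # "causal" or "associational" -- the rule classification
    complexity: int     # 1 (simple) .. 5 (specialist-level)
    importance: int     # 1 (minor) .. 5 (central to the reasoning chain)

def explain(rules, user_level):
    """Return explanation lines appropriate to the user's expertise level."""
    lines = []
    for r in rules:
        if r.complexity > user_level and r.importance < 3:
            continue  # omit detail that is both too complex and unimportant for this user
        link = "because" if r.mechanism == "causal" else "which is associated with"
        lines.append(f"{r.conclusion}, {link} {r.premise}")
    return lines

rules = [
    Rule("the organism is gram-negative", "a bacterial infection is likely",
         "causal", complexity=2, importance=5),
    Rule("the patient is over 60", "the risk of complication is elevated",
         "associational", complexity=4, importance=2),
]
print("\n".join(explain(rules, user_level=2)))  # a novice sees only the central causal step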


1993 ◽  
Vol 02 (01) ◽  
pp. 47-70
Author(s):  
SHARON M. TUTTLE ◽  
CHRISTOPH F. EICK

Forward-chaining rule-based programs, being data-driven, can function in changing environments where backward-chaining rule-based programs would have problems. But debugging forward-chaining programs can be tedious: to debug a forward-chaining rule-based program, certain ‘historical’ information about the program run is needed. Programmers should be able to request such information directly, instead of having to rerun the program one step at a time or search a trace of run details. As a first step in designing an explanation system for answering such questions, this paper discusses how the ‘historical’ details of a forward-chaining program run can be stored in its Rete inference network, the structure used to match rule conditions against working memory, without seriously affecting the network’s run-time performance. We call this generalization of the Rete network a historical Rete network. We discuss algorithms for maintaining this network, show how it can be used during debugging, and describe MIRO, a debugging tool that incorporates these techniques.
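A minimal sketch of the historical Rete idea described above: a memory node keeps, alongside the facts that currently match (the ordinary Rete state), a timestamped log of every addition and removal, so questions such as "when did this fact enter working memory, and when did it leave?" can be answered after the run without re-executing it. The structure and names are an illustrative reconstruction, not MIRO's actual code.

class HistoricalAlphaMemory:
    def __init__(self, condition):
        self.condition = condition   # predicate over working-memory elements
        self.current = set()         # facts matching right now (ordinary Rete state)
        self.history = []            # (cycle, "add" | "remove", fact) -- the extension

    def add(self, fact, cycle):
        if self.condition(fact):
            self.current.add(fact)
            self.history.append((cycle, "add", fact))

    def remove(self, fact, cycle):
        if fact in self.current:
            self.current.remove(fact)
            self.history.append((cycle, "remove", fact))

    def when_present(self, fact):
        """Reconstruct the cycles during which `fact` matched this condition."""
        intervals, start = [], None
        for cycle, op, f in self.history:
            if f != fact:
                continue
            if op == "add":
                start = cycle
            else:
                intervals.append((start, cycle))
                start = None
        if start is not None:
            intervals.append((start, None))  # still present at end of run
        return intervals

mem = HistoricalAlphaMemory(lambda f: f.startswith("goal"))
mem.add("goal-1", cycle=3)
mem.remove("goal-1", cycle=7)
print(mem.when_present("goal-1"))   # [(3, 7)]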


Author(s):  
Tauseef Ibne Mamun ◽  
Kenzie Baker ◽  
Hunter Malinowski ◽  
Robert R. Hoffman ◽  
Shane T. Mueller

Explainable AI represents an increasingly important category of systems that attempt to support human understanding of, and trust in, machine intelligence and automation. Typical systems rely on algorithms to expose the information underlying a decision and thereby establish justified trust and reliance. Researchers have proposed using goodness criteria to measure the quality of explanations as a formative evaluation of an XAI system, but these criteria have not been systematically investigated in the literature. To explore this, we present a novel collaborative explanation system (CXAI) and propose several goodness criteria to evaluate the quality of its explanations. Results suggest that the explanations provided by this system are typically correct, informative, written in understandable ways, and focused on explaining larger-scale data patterns than algorithmic XAI systems typically address. Implications for applying these criteria to other XAI systems are discussed.
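A minimal sketch of how such a formative evaluation might be tabulated: each explanation is rated against each goodness criterion, and results are aggregated per criterion. The criterion names and the 1-5 scale here are assumptions for illustration, not the paper's actual instrument.

from statistics import mean

CRITERIA = ["correctness", "informativeness", "understandability", "pattern_scope"]

def evaluate(ratings):
    """ratings: one {criterion: score in 1..5} dict per rated explanation."""
    return {c: round(mean(r[c] for r in ratings), 2) for c in CRITERIA}

ratings = [
    {"correctness": 5, "informativeness": 4, "understandability": 5, "pattern_scope": 4},
    {"correctness": 4, "informativeness": 4, "understandability": 3, "pattern_scope": 5},
]
print(evaluate(ratings))  # mean score per criterion across the rated explanations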


Author(s):  
Tomasz Muldner ◽  
Elhadi Shakshuki

This article presents a novel approach for explaining algorithms that aims to overcome various pedagogical limitations of current visualization systems. The main idea is that at any given time, a learner is able to focus on a single problem, which can be explained, studied, understood, and tested before the learner moves on to another problem. Toward this end, a visualization system that explains algorithms at various levels of abstraction has been designed and implemented. In this system, each abstraction focuses on a single operation of the algorithm, presented through various media including text and an associated visualization. The explanations are designed to help the user understand basic properties of the operation represented by the abstraction, for example its invariants. The explanation system allows the user to traverse the hierarchy graph either top-down (from general operations to primitive operations) or bottom-up. Since the system is implemented using a client-server architecture, it can be used both in the classroom and in distance education.
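A minimal sketch of the hierarchy-graph idea described above: each node explains a single operation at one level of abstraction (explanatory text, the invariants the learner should check, and links to sub-operations), and the learner can walk the graph top-down or climb it bottom-up via parent links. The structure and the heapsort example are illustrative assumptions, not the system's implementation.

from dataclasses import dataclass, field

@dataclass
class OperationNode:
    name: str
    explanation: str            # text explaining this single operation
    invariants: list            # properties the learner should verify
    children: list = field(default_factory=list)  # more primitive sub-operations
    parent: "OperationNode" = None

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

def top_down(node, depth=0):
    """Visit a general operation before its more primitive sub-operations."""
    print("  " * depth + f"{node.name}: {node.explanation}")
    for child in node.children:
        top_down(child, depth + 1)

root = OperationNode("heapsort", "repeatedly extract the maximum from a heap",
                     ["the heap property holds before each extraction"])
sift = root.add_child(OperationNode("sift-down", "restore the heap property at one node",
                                    ["both subtrees of the node are already heaps"]))

top_down(root)        # general operation first, then its sub-operations

node = sift           # bottom-up: climb from a primitive operation to the general one
while node:
    print(node.name)
    node = node.parent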


2017 ◽  
Vol 76 ◽  
pp. 36-48 ◽  
Author(s):  
Lara Quijano-Sanchez ◽  
Christian Sauer ◽  
Juan A. Recio-Garcia ◽  
Belen Diaz-Agudo
