Generating contrastive explanations for inductive logic programming based on a near miss approach

2021 ◽  
Author(s):  
Johannes Rabold ◽  
Michael Siebers ◽  
Ute Schmid

Abstract
In recent research, human-understandable explanations of machine learning models have received a lot of attention. Often explanations are given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by the alignment of a given instance for a concept with a similar counterexample. Contrasting a given instance with a structurally similar example which does not belong to the concept highlights what characteristics are necessary for concept membership. Such near misses have been proposed by Winston (Learning structural descriptions from examples, 1970) as efficient guidance for learning in relational domains. We introduce an explanation generation algorithm for relational concepts learned with Inductive Logic Programming (GeNME). The algorithm identifies near miss examples from a given set of instances and ranks these examples by their degree of closeness to a specific positive instance. A modified rule which covers the near miss but not the original instance is given as an explanation. We illustrate GeNME with the well-known family domain consisting of kinship relations, the visual relational Winston arches domain, and a real-world domain dealing with file management. We also present a psychological experiment comparing human preferences of rule-based, example-based, and near miss explanations in the family and the arches domains.
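The near-miss ranking step described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' GeNME code: instances are modeled as sets of ground facts, closeness to the positive instance is measured by the size of the symmetric difference, and the structurally closest negative instances are returned as near-miss candidates.

```python
# Hypothetical sketch of near-miss selection (not the authors' GeNME implementation).
# An instance is a frozenset of ground facts, each fact a (predicate, args...) tuple.

def near_misses(positive, negatives, top_k=3):
    """Rank negative instances by structural closeness to `positive`.

    positive  -- frozenset of ground facts describing the positive instance
    negatives -- list of frozensets describing instances outside the concept
    top_k     -- number of closest candidates to return
    """
    def distance(neg):
        # Fewer differing facts means the instance is structurally closer.
        return len(positive.symmetric_difference(neg))

    return sorted(negatives, key=distance)[:top_k]

# Toy kinship example in the spirit of the family domain:
pos = frozenset({("parent", "tom", "bob"), ("male", "tom")})
negs = [
    frozenset({("parent", "tom", "bob"), ("female", "tom")}),  # differs in 2 facts
    frozenset({("parent", "ann", "sue"), ("female", "ann")}),  # differs in 4 facts
]
ranked = near_misses(pos, negs)  # closest candidate first
```

The closest-ranked negative instance is the one a contrastive explanation would be built against: the rule is then modified so it covers the near miss but not the original positive instance.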

1998 ◽  
Vol 07 (01) ◽  
pp. 71-102
Author(s):  
PO-CHI CHEN ◽  
SUH-YIN LEE

One remarkable advance in recent machine learning research is inductive logic programming (ILP). In most ILP systems, clause specialization is one of the most important tasks. Usually, clause specialization is performed by adding one literal at a time using hill-climbing heuristics. However, single-literal addition can be trapped in local optima when more than one literal must be added at a time to increase the accuracy. Several techniques have been proposed for this problem but are restricted to relational domains. In this paper, we propose a technique called structure subtraction to construct a set of candidates for adding literals, single-literal or multiple-literal. This technique can be employed in any ILP system using top-down specialization and is not restricted to relational domains. A theory revision system is described to illustrate the use of structure subtraction.
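The core idea of structure subtraction can be sketched in a few lines. This is a hypothetical illustration, not the paper's exact procedure: the literals already in a clause body are subtracted from the structural description of an example, and the remainder forms the candidate pool from which single- or multiple-literal additions are drawn, so specialization is not limited to one hill-climbing step at a time.

```python
# Hypothetical illustration of the structure-subtraction idea (the paper's exact
# procedure differs). Literals are modeled as (predicate, args...) tuples.

def candidate_literals(example_structure, clause_body):
    """Literals present in the example's structure but absent from the clause
    body; any subset of these is a candidate specialization, allowing several
    literals to be added in a single step."""
    return example_structure - clause_body

# Toy example: a clause body missing one structural literal of the example.
example = frozenset({("touches", "b1", "b2"), ("left_of", "b1", "b2")})
body = frozenset({("touches", "b1", "b2")})
candidates = candidate_literals(example, body)  # the left_of literal remains
```

Because the candidate pool is computed by set subtraction rather than by scoring one literal at a time, a specialization step can add a conjunction of literals whose individual additions would each look unpromising to a hill-climbing heuristic.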


1996 ◽  
Vol 9 (4) ◽  
pp. 157-206 ◽  
Author(s):  
Nada Lavrač ◽  
Irene Weber ◽  
Darko Zupanič ◽  
Dimitar Kazakov ◽  
Olga Štěpánková ◽  
...  

Author(s):  
Rinaldo Lima ◽  
Bernard Espinasse ◽  
Hilário Oliveira ◽  
Rafael Ferreira ◽  
Luciano Cabral ◽  
...  

Author(s):  
Ashwin Srinivasan ◽  
Ross D. King ◽  
Stephen H. Muggleton ◽  
Michael J. E. Sternberg
