AN IMPROVEMENT TO TOP-DOWN CLAUSE SPECIALIZATION

1998 ◽  
Vol 07 (01) ◽  
pp. 71-102
Author(s):  
PO-CHI CHEN ◽  
SUH-YIN LEE

One remarkable advance in recent machine learning research is inductive logic programming (ILP). In most ILP systems, clause specialization is one of the most important tasks. Usually, clause specialization is performed by adding one literal at a time using hill-climbing heuristics. However, single-literal addition can become trapped in local optima when more than one literal needs to be added at a time to increase accuracy. Several techniques have been proposed for this problem, but they are restricted to relational domains. In this paper, we propose a technique called structure subtraction to construct a set of candidate additions, each consisting of a single literal or multiple literals. This technique can be employed in any ILP system using top-down specialization and is not restricted to relational domains. A theory revision system is described to illustrate the use of structure subtraction.
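To make the search concrete, here is a minimal Python sketch of greedy top-down specialization in which each candidate addition may contain one literal or several. The function names, the `covers` callback, and the pre-built `candidate_additions` collection (standing in for what structure subtraction would construct) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of top-down clause specialization with multi-literal
# candidates; not the paper's actual algorithm. A clause body is a list of
# literals, and `covers(body, example)` is a user-supplied coverage test.

def accuracy(body, covers, positives, negatives):
    """Fraction of examples classified correctly by the current clause body."""
    tp = sum(1 for e in positives if covers(body, e))
    tn = sum(1 for e in negatives if not covers(body, e))
    return (tp + tn) / (len(positives) + len(negatives))

def specialize(body, candidate_additions, covers, positives, negatives):
    """Greedy hill climbing over candidates that may hold one or several literals."""
    best, best_score = list(body), accuracy(body, covers, positives, negatives)
    improved = True
    while improved:
        improved = False
        for addition in candidate_additions:
            trial = best + list(addition)        # add the whole candidate at once
            score = accuracy(trial, covers, positives, negatives)
            if score > best_score:
                best, best_score, improved = trial, score, True
    return best
```

Because a candidate may bundle several literals, a step that looks useless literal-by-literal can still be taken when the combination improves accuracy, which is the situation single-literal hill climbing misses.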

2021 ◽  
pp. 338-354
Author(s):  
Ute Schmid

With the growing number of applications of machine learning in complex real-world domains, machine learning research has to meet new requirements: dealing with the imperfections of real-world data and with the legal as well as ethical obligations to make classifier decisions transparent and comprehensible. In this contribution, arguments for interpretable and interactive approaches to machine learning are presented. It is argued that visual explanations are often not expressive enough to capture critical information that relies on relations between different aspects or sub-concepts. Consequently, inductive logic programming (ILP) and the generation of verbal explanations from Prolog rules are advocated. Interactive learning in the context of ILP is illustrated with the Dare2Del system, which helps users manage their digital clutter. It is shown that verbal explanations overcome the explanatory one-way street from AI system to user. Interactive learning with mutual explanations allows the learning system to take into account not only class corrections but also corrections of explanations to guide learning. We propose mutual explanations as a building block for human-like computing and an important ingredient for human-AI partnership.
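As a rough illustration of how a verbal explanation can be generated from a Prolog-style rule, the following Python sketch renders a rule's head and body literals through hand-written templates in a digital-clutter setting. The rule representation, template strings, predicate names, and file names are hypothetical and do not reflect the actual Dare2Del system.

```python
# Illustrative sketch only: verbalizing a Prolog-style rule as an explanation.

def literal_to_text(literal, templates):
    """Render a (predicate, args) literal with a verbal template if one exists."""
    name, args = literal
    if name in templates:
        return templates[name].format(*args)
    return "{}({})".format(name, ", ".join(args))

def explain(rule, templates):
    """Turn a (head, body) rule into a 'conclusion because reasons' sentence."""
    head, body = rule
    reasons = " and ".join(literal_to_text(lit, templates) for lit in body)
    return "{} because {}.".format(literal_to_text(head, templates), reasons)

templates = {
    "irrelevant": "file {} is probably irrelevant",
    "older_than": "{} has not been modified for more than {} days",
    "duplicate_of": "{} is an exact duplicate of {}",
}

rule = (
    ("irrelevant", ("report_v1.doc",)),
    [("older_than", ("report_v1.doc", "365")),
     ("duplicate_of", ("report_v1.doc", "report_v2.doc"))],
)

print(explain(rule, templates))
```

A user could then object to a specific reason in the sentence rather than to the class label alone, which is the kind of correction of explanations the contribution argues for.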


2006 ◽  
Vol 11 (2) ◽  
pp. 209-243 ◽  
Author(s):  
Vincent Claveau ◽  
Marie-Claude L'Homme

This article presents a method for discovering and organizing noun-verb (N-V) combinations found in a French corpus on computing. Our aim is to find N-V combinations in which verbs convey a “realization meaning” as defined in the framework of lexical functions (Mel’čuk 1996, 1998). Our approach, chiefly corpus-based, uses a machine learning technique, namely Inductive Logic Programming (ILP). The whole acquisition process is divided into three steps: (1) isolating contexts in which specific N-V pairs occur; (2) inferring linguistically-motivated rules that reflect the behaviour of realization N-V pairs; (3) projecting these rules on corpora to find other valid N-V pairs. This technique is evaluated in terms of the relevance of the rules inferred and in terms of the quality (recall and precision) of the results. Results obtained show that our approach is able to find these very specific semantic relationships (the realization N-V pairs) with very good success rates.
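A hedged Python sketch of the projection and evaluation steps follows: a single hand-written rule stands in for the rules inferred by ILP, and the candidate pairs, gold standard, and verb list are toy assumptions used only to show how precision and recall would be computed over the extracted N-V pairs.

```python
# Toy sketch of step (3) and the evaluation; not the authors' implementation.

def project(rules, candidate_pairs):
    """Keep the noun-verb pairs accepted by at least one inferred rule."""
    return {pair for pair in candidate_pairs if any(rule(pair) for rule in rules)}

def precision_recall(predicted, gold):
    """Evaluate the projected pairs against a gold standard of valid pairs."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# One illustrative rule: accept pairs whose verb is on a "realization" verb list.
realization_verbs = {"exécuter", "lancer", "activer"}
rules = [lambda pair: pair[1] in realization_verbs]

candidates = {("programme", "exécuter"), ("commande", "lancer"), ("fichier", "trier")}
gold = {("programme", "exécuter"), ("commande", "lancer")}

predicted = project(rules, candidates)
print(predicted)
print(precision_recall(predicted, gold))    # (1.0, 1.0) on this toy data
```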


1994 ◽  
Vol 9 (1) ◽  
pp. 73-77 ◽  
Author(s):  
F. Bergadano ◽  
D. Gunetti

Inductive Logic Programming (ILP) is an emerging research area at the intersection of machine learning, logic programming and software engineering. The first workshop on this topic was held in 1991 in Portugal (Muggleton, 1991). Subsequently, there was a workshop tied to the Future Generation Computer System Conference in Japan in 1992, and a third one in Bled, Slovenia, in April 1993 (Muggleton, 1993). Ideas related to ILP are also appearing in major AI and machine learning conferences and journals. Although European-based and mainly sponsored by ESPRIT, ILP aims at becoming equally represented elsewhere; for example, among researchers in America who are investigating relational learning and first order theory revision (see, for example, the papers in Birnbaum and Collins, 1991) and within the computational learning theory community. This year's IJCAI workshop on ILP is a first step in this direction, and includes recent work with a broader range of perspectives and techniques.


2021 ◽  
Vol 5 (4) ◽  
pp. 1840-1857
Author(s):  
Clenio B. Gonçalves Junior ◽  
Murillo Rodrigo Petrucelli Homem

In Computer Music, knowledge representation is an essential element in the development of systems. Methods have been applied to provide the computer with the ability to draw conclusions based on previously established experience and definitions. In this sense, Inductive Logic Programming presents itself as a research field that incorporates concepts from Logic Programming and Machine Learning; its declarative character allows musical knowledge to be presented to non-specialist users in a naturally understandable way. The present work performs a systematic review of approaches that use Inductive Logic Programming to represent musical knowledge. We identify the questions these studies seek to address, as well as the characteristic aspects of their application.


2021 ◽  
Author(s):  
Johannes Rabold ◽  
Michael Siebers ◽  
Ute Schmid

In recent research, human-understandable explanations of machine learning models have received a lot of attention. Explanations are often given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by aligning a given instance of a concept with a similar counterexample. Contrasting a given instance with a structurally similar example which does not belong to the concept highlights which characteristics are necessary for concept membership. Such near misses were proposed by Winston (Learning structural descriptions from examples, 1970) as efficient guidance for learning in relational domains. We introduce GeNME, an explanation generation algorithm for relational concepts learned with Inductive Logic Programming. The algorithm identifies near miss examples from a given set of instances and ranks these examples by their degree of closeness to a specific positive instance. A modified rule which covers the near miss but not the original instance is given as an explanation. We illustrate GeNME with the well-known family domain consisting of kinship relations, the visual relational Winston arches domain, and a real-world domain dealing with file management. We also present a psychological experiment comparing human preferences for rule-based, example-based, and near miss explanations in the family and arches domains.
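The following Python sketch conveys the near-miss idea only, not the actual GeNME algorithm: instances are modelled as sets of ground facts (an assumption made here for illustration), and non-members of the concept are ranked by how many facts they share with a chosen positive instance.

```python
# Rough sketch of near-miss selection; the fact encoding and the overlap-based
# closeness measure are assumptions, not the GeNME ranking itself.

def near_misses(positive, negatives, top_k=3):
    """Rank concept non-members by the structure they share with `positive`."""
    return sorted(negatives, key=lambda neg: len(positive & neg), reverse=True)[:top_k]

# Toy kinship-style instances, each modelled as a set of ground facts.
pos = {"parent(ann,bob)", "parent(bob,carl)", "female(ann)"}
negatives = [
    {"parent(ann,bob)", "parent(bob,carl)", "male(ann)"},   # differs in a single fact
    {"parent(eve,sam)", "female(eve)"},                      # structurally distant
]
print(near_misses(pos, negatives, top_k=1))
```

The closest non-member differs from the positive instance in as little as possible, so the feature it lacks points at what actually makes the positive instance a member of the concept.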


Author(s):  
Andrew Cropper ◽  
Sebastijan Dumančić ◽  
Stephen H. Muggleton

Common criticisms of state-of-the-art machine learning include poor generalisation, a lack of interpretability, and a need for large amounts of training data. We survey recent work in inductive logic programming (ILP), a form of machine learning that induces logic programs from data, which has shown promise at addressing these limitations. We focus on new methods for learning recursive programs that generalise from few examples, a shift from using hand-crafted background knowledge to learning background knowledge, and the use of different technologies, notably answer set programming and neural networks. As ILP approaches 30, we also discuss directions for future research.

