Learning programs by learning from failures

2021
Author(s): Andrew Cropper, Rolf Morel

Abstract: We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
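The generate-test-constrain loop lends itself to a compact sketch. The following Python is a minimal illustration under our own assumed interfaces; `generate`, `entails`, and the constraint encoding are hypothetical stand-ins, not Popper's actual ASP/Prolog machinery:

```python
# Minimal sketch of the learning-from-failures loop. `generate` and `entails`
# are assumed interfaces; hypotheses are assumed hashable (e.g. tuples of rules).

def learn(positive, negative, generate, entails):
    constraints = set()
    while True:
        # generate: a hypothesis satisfying all constraints gathered so far
        hypothesis = generate(constraints)
        if hypothesis is None:
            return None  # (ii) no more hypotheses to test
        # test: check the hypothesis against the training examples
        too_specific = any(not entails(hypothesis, e) for e in positive)
        too_general = any(entails(hypothesis, e) for e in negative)
        if not too_specific and not too_general:
            return hypothesis  # (i) complete and consistent
        # constrain: prune the hypothesis space around the failure
        if too_general:
            constraints.add(("prune_generalisations_of", hypothesis))
        if too_specific:
            constraints.add(("prune_specialisations_of", hypothesis))
```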

2021
Author(s): Andrew Cropper, Sebastijan Dumančić, Richard Evans, Stephen H. Muggleton

Abstract: Inductive logic programming (ILP) is a form of logic-based machine learning. The goal is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we review the last decade of research. We focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs, (iii) new approaches for predicate invention, and (iv) the use of different technologies. We conclude by discussing current limitations of ILP and directions for future research.


2015
Vol 15 (4-5), pp. 511-525
Author(s): Mark Law, Alessandra Russo, Krysia Broda

Abstract: This paper contributes to the area of inductive logic programming by presenting a new learning framework that allows the learning of weak constraints in Answer Set Programming (ASP). The framework, called Learning from Ordered Answer Sets, generalises our previous work on learning ASP programs without weak constraints by considering a new notion of examples as ordered pairs of partial answer sets that exemplify which answer sets of a learned hypothesis (together with a given background knowledge) are preferred to others. In this new learning task, inductive solutions are searched within a hypothesis space of normal rules, choice rules, and hard and weak constraints. We propose a new algorithm, ILASP2, which is sound and complete with respect to our new learning framework. We investigate its applicability to learning preferences in an interview scheduling problem and also demonstrate that when restricted to the task of learning ASP programs without weak constraints, ILASP2 can be much more efficient than our previously proposed system.
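To make the notion of ordered examples concrete, the sketch below spells out the standard ASP preference semantics that such examples exemplify: each weak constraint carries a weight and a priority level, and one answer set is preferred to another if it incurs a lower penalty at the most significant level where they differ. The data structures are our own hypothetical simplification, not ILASP2's input format:

```python
from collections import defaultdict

def penalties(answer_set, weak_constraints):
    """Sum the weights of the violated weak constraints, per priority level."""
    totals = defaultdict(int)
    for violated, weight, level in weak_constraints:
        if violated(answer_set):
            totals[level] += weight
    return totals

def preferred(a, b, weak_constraints):
    """True if answer set `a` is strictly preferred to answer set `b`."""
    pa, pb = penalties(a, weak_constraints), penalties(b, weak_constraints)
    for level in sorted(set(pa) | set(pb), reverse=True):  # most significant first
        if pa[level] != pb[level]:
            return pa[level] < pb[level]
    return False

# An ordered example (a, b) is covered when the learned hypothesis's weak
# constraints make `preferred(a, b, ...)` hold.
```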


Author(s): Andrew Cropper

Children learn through play. We introduce the analogous idea of learning programs through play. In this approach, a program induction system (the learner) is given a set of user-supplied build tasks and initial background knowledge (BK). Before solving the build tasks, the learner enters an unsupervised playing stage where it creates its own play tasks to solve, tries to solve them, and saves any solutions (programs) to the BK. After the playing stage is finished, the learner enters the supervised building stage where it tries to solve the build tasks and can reuse solutions learnt whilst playing. The idea is that playing allows the learner to discover reusable general programs on its own, which can then help solve the build tasks. We claim that playing can improve learning performance. We show that playing can reduce the textual complexity of target concepts, which in turn reduces the sample complexity of a learner. We implement our idea in Playgol, a new inductive logic programming system. We experimentally test our claim on two domains: robot planning and real-world string transformations. Our experimental results suggest that playing can substantially improve learning performance.
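The two-stage play-then-build loop can be sketched in a few lines. All names here (`sample_play_task`, `solve`) are hypothetical stand-ins for illustration, not Playgol's actual interface:

```python
def play_then_build(build_tasks, bk, sample_play_task, solve, num_play_tasks):
    # Unsupervised playing stage: invent tasks and keep any solutions as BK.
    for _ in range(num_play_tasks):
        play_task = sample_play_task()
        program = solve(play_task, bk)
        if program is not None:
            bk = bk | {program}  # reusable sub-programs accumulate
    # Supervised building stage: solve the user-supplied tasks with the
    # enriched BK, reusing programs discovered while playing.
    return [solve(task, bk) for task in build_tasks]
```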


2020
Vol 34 (04), pp. 3676-3683
Author(s): Andrew Cropper

Most program induction approaches require predefined, often hand-engineered, background knowledge (BK). To overcome this limitation, we explore methods to automatically acquire BK through multi-task learning. In this approach, a learner adds learned programs to its BK so that they can be reused to help learn other programs. To improve learning performance, we explore the idea of forgetting, where a learner can additionally remove programs from its BK. We consider forgetting in an inductive logic programming (ILP) setting. We show that forgetting can significantly reduce both the size of the hypothesis space and the sample complexity of an ILP learner. We introduce Forgetgol, a multi-task ILP learner which supports forgetting. We experimentally compare Forgetgol against approaches that either remember or forget everything. Our experimental results show that Forgetgol outperforms the alternative approaches when learning from over 10,000 tasks.
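A compact sketch of the remember-versus-forget trade-off follows; `solve` and `worth_keeping` are assumed interfaces (Forgetgol's actual forgetting criteria differ and are the subject of the paper):

```python
def multitask_learn(tasks, bk, solve, worth_keeping):
    solutions = {}
    for task in tasks:
        program = solve(task, bk)
        if program is None:
            continue
        solutions[task] = program
        bk = bk | {program}  # remember: add the learned program to the BK
        # Forget: drop programs judged unlikely to be reused. A smaller BK
        # means a smaller hypothesis space and lower sample complexity for
        # the remaining tasks.
        bk = {p for p in bk if worth_keeping(p, bk)}
    return solutions
```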


AI Magazine
2016
Vol 37 (3), pp. 25-32
Author(s): Benjamin Kaufmann, Nicola Leone, Simona Perri, Torsten Schaub

Answer set programming is a declarative problem solving paradigm that rests upon a workflow involving modeling, grounding, and solving. While the former is described by Gebser and Schaub (2016), we focus here on key issues in grounding, or how to systematically replace object variables by ground terms in an effective way, and solving, or how to compute the answer sets of a propositional logic program obtained by grounding.
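As a toy illustration of what grounding does, the sketch below enumerates every substitution of ground terms for a rule's variables. Real grounders are far more sophisticated (they instantiate only what is derivable); the encoding here is our own simplification:

```python
from itertools import product

def ground_rule(variables, constants, instantiate):
    """Enumerate every substitution of constants for the rule's variables."""
    for values in product(constants, repeat=len(variables)):
        substitution = dict(zip(variables, values))
        yield instantiate(substitution)  # one propositional rule per substitution

# E.g. grounding p(X,Y) :- q(X), r(Y) over constants {a, b} yields four
# propositional rules, from p(a,a) :- q(a), r(a) to p(b,b) :- q(b), r(b).
```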


Author(s): Tobias Kaminski, Thomas Eiter, Katsumi Inoue

Meta-Interpretive Learning (MIL) is a recent approach for Inductive Logic Programming (ILP) implemented in Prolog. Alternatively, MIL problems can be solved by using Answer Set Programming (ASP), which may result in performance gains due to efficient conflict propagation. However, a straightforward MIL encoding results in a huge ground program and search space. To address these challenges, we encode MIL in the HEX extension of ASP, which mitigates grounding issues, and we develop novel pruning techniques.
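The grounding blow-up stems from MIL's core operation: instantiating the second-order variables of metarules with predicate symbols. A toy sketch of that instantiation space (our own simplified encoding, not the paper's HEX encoding):

```python
from itertools import product

# Chain metarule: P(A,B) :- Q(A,C), R(C,B). Instantiating its second-order
# variables Q and R with predicate symbols yields candidate clauses.
def chain_instances(target, predicates):
    for q, r in product(predicates, repeat=2):
        yield (f"{target}(A,B)", [f"{q}(A,C)", f"{r}(C,B)"])

# E.g. with predicates {"mother", "father"} one instance is
# grandparent(A,B) :- mother(A,C), father(C,B). With n predicates the chain
# metarule alone yields n^2 instances, which is why naive ground programs
# explode and pruning matters.
```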


2009
pp. 2261-2267
Author(s): Fernando Zacarías Flores, Dionicio Zacarías Flores, Rosalba Cuapa Canto, Luis Miguel Guzmán Muñoz

Updating is a central issue in relational databases and knowledge bases, and in recent years it has been well studied in the non-monotonic reasoning paradigm. Several semantics for logic program updates have been proposed (Brewka, Dix, & Konolige, 1997; De Schreye, Hermenegildo, & Pereira, 1999; Katsuno & Mendelzon, 1991). More recently, a number of proposals have put forward update mechanisms based on logic and logic programming. All these mechanisms are built on semantics based on structural properties (Eiter, Fink, Sabbatini, & Tompits, 2000; Leite, 2002; Banti, Alferes, & Brogi, 2003; Zacarias, 2005). Furthermore, all these semantics regard the AGM proposal as the standard model in update theory, owing to its wealth of properties. The AGM approach, introduced in (Alchourrón, Gärdenfors, & Makinson, 1985), is the dominating paradigm in the area, but it was formulated in the context of monotonic logic. These proposals analyze and reinterpret the AGM postulates under Answer Set Programming (ASP), such as (Eiter, Fink, Sabbatini, & Tompits, 2000). However, the majority of the adapted AGM and update postulates are violated by update programs, as shown in (De Schreye, Hermenegildo, & Pereira, 1999).


2019
Vol 109 (7), pp. 1323-1369
Author(s): Andrew Cropper, Sophie Tourret

Abstract: Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation reduced sets of metarules outperform subsumption and entailment reduced sets, both in terms of predictive accuracies and learning times.
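For intuition, subsumption reduction can be brute-forced on small metarules: a clause C subsumes a clause D if some substitution of C's variables makes C's literals a subset of D's, and a metarule is redundant if another one in the set subsumes it. The signed-literal encoding below is our own illustration, not the paper's:

```python
from itertools import product

def vars_of(clause):
    return sorted({s for _, args in clause for s in args})

def subsumes(c, d):
    """Brute force: does some substitution map c's literals into d's?"""
    c_vars, d_symbols, d_lits = vars_of(c), vars_of(d), set(d)
    for values in product(d_symbols, repeat=len(c_vars)):
        theta = dict(zip(c_vars, values))
        if all((sign, tuple(theta[s] for s in args)) in d_lits
               for sign, args in c):
            return True
    return False

# Identity metarule P(A,B) :- Q(A,B), as signed literals over second-order
# variables P, Q and first-order variables A, B:
identity = (("+", ("P", "A", "B")), ("-", ("Q", "A", "B")))
# P(A,B) :- Q(A,B), R(A,B) is subsumption-redundant given identity:
redundant = (("+", ("P", "A", "B")), ("-", ("Q", "A", "B")), ("-", ("R", "A", "B")))
assert subsumes(identity, redundant)
```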


1996
Vol 8 (3), pp. 625-628
Author(s): Peter L. Bartlett, Robert C. Williamson

We give upper bounds on the Vapnik-Chervonenkis dimension and pseudodimension of two-layer neural networks that use the standard sigmoid function or radial basis function and have inputs from {−D, …, D}^n. In Valiant's probably approximately correct (PAC) learning framework for pattern classification, and in Haussler's generalization of this framework to nonlinear regression, the results imply that the number of training examples necessary for satisfactory learning performance grows no more rapidly than W log(WD), where W is the number of weights. The previous best bound for these networks was O(W^4).
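Substituting the abstract's bound into the standard PAC sample-complexity bound makes the claim concrete. This is the textbook form with constants suppressed, not a statement copied from the paper:

```latex
% VC-dimension bound from the abstract (W weights, inputs from {-D,...,D}^n):
%   VCdim(F) = O(W log(WD))
% Standard PAC bound: error < epsilon with probability >= 1 - delta requires
%   m = O((1/epsilon)(VCdim(F) log(1/epsilon) + log(1/delta)))
% examples; substituting the VC bound gives
\[
  m \;=\; O\!\left(\frac{1}{\varepsilon}\left(W \log(WD)\log\frac{1}{\varepsilon}
        + \log\frac{1}{\delta}\right)\right).
\]
```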


2014
Vol 50, pp. 31-70
Author(s): Y. Wang, Y. Zhang, Y. Zhou, M. Zhang

The ability to discard or hide irrelevant information has been recognized as an important feature of knowledge-based systems, including answer set programming. The notion of strong equivalence in answer set programming plays an important role for different problems, as it gives rise to a substitution principle and amounts to knowledge equivalence of logic programs. In this paper, we propose a uniform semantic notion of knowledge forgetting, called HT- and FLP-forgetting, for logic programs under the stable model and FLP-stable model semantics, respectively. Our proposed knowledge forgetting discards exactly the knowledge of a logic program that is relevant to the forgotten variables. It thus preserves strong equivalence, in the sense that strongly equivalent logic programs remain strongly equivalent after forgetting the same variables. We show that the result of this semantic forgetting is always expressible, and we prove a representation theorem stating that HT- and FLP-forgetting can be precisely characterized by Zhang-Zhou's four forgetting postulates under the HT- and FLP-model semantics, respectively. We also reveal underlying connections between the proposed forgetting and forgetting in propositional logic, and provide complexity results for decision problems related to the forgetting. An application of the proposed forgetting in a conflict-solving scenario is also considered.
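The propositional forgetting that the paper connects to has a one-line definition: forget(φ, p) is φ with p set to true, disjoined with φ with p set to false. A small sketch, modelling formulas as Python predicates over interpretations (a hypothetical encoding of our own):

```python
def forget(phi, p):
    """Strongest consequence of `phi` that is independent of atom `p`."""
    def result(interp):
        return phi({**interp, p: True}) or phi({**interp, p: False})
    return result

# E.g. forgetting q from (p and q) yields a formula equivalent to p:
phi = lambda i: i["p"] and i["q"]
psi = forget(phi, "q")
assert psi({"p": True, "q": False})      # q's value no longer matters
assert not psi({"p": False, "q": True})  # but p's still does
```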

