Logical reduction of metarules

2019, Vol. 109(7), pp. 1323-1369
Authors: Andrew Cropper, Sophie Tourret

Abstract: Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and involves a trade-off between efficiency and expressivity: the hypothesis space grows with the number of metarules, so we wish to use fewer metarules, but if we use too few we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules in three domains: Michalski trains, string transformations, and game rules. In general, derivation-reduced sets of metarules outperform subsumption- and entailment-reduced sets in terms of both predictive accuracy and learning time.
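To make the efficiency/expressivity trade-off concrete, here is a small Python sketch (an illustration of ours, not the authors' implementation): metarules such as the chain rule P(A,B) ← Q(A,C), R(C,B) are encoded as templates, and instantiating their second-order variables over a toy predicate signature counts the candidate first-order clauses each metarule contributes.

```python
from itertools import product

# Hypothetical metarule encodings: (head, body) literal strings whose leading
# uppercase letter (P, Q, R) is a second-order variable ranging over predicate
# symbols; A, B, C are ordinary first-order variables.
METARULES = {
    "ident":   ("P(A,B)", ["Q(A,B)"]),
    "chain":   ("P(A,B)", ["Q(A,C)", "R(C,B)"]),
    "tailrec": ("P(A,B)", ["Q(A,C)", "P(C,B)"]),
}

def instantiations(metarule, predicates):
    """Yield the first-order clauses obtained by grounding every
    second-order variable with a predicate symbol."""
    head, body = metarule
    so_vars = sorted({lit[0] for lit in [head] + body})
    for combo in product(predicates, repeat=len(so_vars)):
        subst = dict(zip(so_vars, combo))
        yield subst[head[0]] + head[1:], [subst[lit[0]] + lit[1:] for lit in body]

predicates = ["f", "g", "h"]
for name, mr in METARULES.items():
    count = sum(1 for _ in instantiations(mr, predicates))
    print(f"{name}: {count} candidate clauses")  # ident: 9, chain: 27, tailrec: 9
```

With k second-order variables and n predicate symbols, a single metarule yields n^k candidate clauses, so each additional metarule enlarges the hypothesis space by a further n^k; this is why small, provably sufficient (reduced) metarule sets matter.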

Forgetting to Learn Logic Programs
2020, Vol. 34(04), pp. 3676-3683
Author: Andrew Cropper

Abstract: Most program induction approaches require predefined, often hand-engineered, background knowledge (BK). To overcome this limitation, we explore methods to automatically acquire BK through multi-task learning. In this approach, a learner adds learned programs to its BK so that they can be reused to help learn other programs. To improve learning performance, we explore the idea of forgetting, where a learner can additionally remove programs from its BK. We consider forgetting in an inductive logic programming (ILP) setting. We show that forgetting can significantly reduce both the size of the hypothesis space and the sample complexity of an ILP learner. We introduce Forgetgol, a multi-task ILP learner which supports forgetting. We experimentally compare Forgetgol against approaches that either remember or forget everything. Our experimental results show that Forgetgol outperforms the alternative approaches when learning from over 10,000 tasks.
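The remember/forget mechanics can be sketched as follows, assuming a hypothetical learn(task, bk) oracle that returns a learned program together with the BK programs it reused; this is an illustrative sketch, not Forgetgol's actual algorithm. Programs that fall below a reuse threshold are forgotten, shrinking both the BK and the hypothesis space searched on later tasks.

```python
from collections import Counter

def multitask_learn(tasks, learn, reuse_threshold=1):
    """Multi-task loop with forgetting: a BK program survives only if it
    has been reused at least reuse_threshold times (the newest program is
    always given one round to prove itself)."""
    bk, reuse_counts = [], Counter()
    for task in tasks:
        program, reused = learn(task, bk)  # hypothetical ILP oracle
        reuse_counts.update(reused)
        bk.append(program)
        # Forgetting: a smaller BK means fewer predicate symbols, hence
        # fewer clause instantiations and lower sample complexity.
        bk = [p for p in bk
              if reuse_counts[p] >= reuse_threshold or p == program]
    return bk

# Toy run with a stub learner that pretends to reuse every other BK program.
def stub_learn(task, bk):
    return f"prog_{task}", bk[::2]

print(multitask_learn([f"task{i}" for i in range(6)], stub_learn))
```

Rarely reused programs (here, those never returned in `reused`) are dropped after one round, which mirrors the paper's contrast with learners that remember everything (ever-growing BK) or forget everything (no reuse at all).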


1996, Vol. 9(4), pp. 157-206
Authors: Nada Lavrač, Irene Weber, Darko Zupanič, Dimitar Kazakov, Olga Štěpánková, et al.

Authors: Rinaldo Lima, Bernard Espinasse, Hilário Oliveira, Rafael Ferreira, Luciano Cabral, et al.

Authors: Ashwin Srinivasan, Ross D. King, Stephen H. Muggleton, Michael J. E. Sternberg
