The appropriateness of predicate invention as bias shift operation in ILP

1995 ◽  
Vol 20 (1-2) ◽  
pp. 95-117 ◽  
Author(s):  
Irene Stahl

Author(s):  
Céline Hocquette ◽  
Stephen H. Muggleton

Predicate invention in Meta-Interpretive Learning (MIL) is generally top-down: the search for a consistent hypothesis starts from the positive examples as goals. We consider augmenting top-down MIL systems with a bottom-up step in which the background knowledge is generalised using an extension of the immediate consequence operator to second-order logic programs. This new method provides a way to perform extensive predicate invention, which is useful for feature discovery. We demonstrate that the method is complete with respect to a fragment of dyadic Datalog. We prove that it reduces the number of clauses the top-down learner must learn, which in turn can reduce the sample complexity. We also formalise an equivalence relation on predicates that is used to eliminate redundant predicates. Our experimental results suggest that pairing the state-of-the-art MIL system Metagol with an initial bottom-up step can significantly improve learning performance.
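As a rough illustration of the bottom-up step described above, the classical immediate consequence operator T_P for a ground Datalog program can be sketched in Python. This is not the paper's second-order extension; the rule encoding (atoms as tuples, rules as head/body pairs of ground atoms) is an assumption chosen for the sketch.

```python
# Sketch of the (first-order, ground) immediate consequence operator T_P.
# An atom is a tuple like ("parent", "a", "b"); a rule is (head, body),
# where body is a list of ground atoms.

def immediate_consequence(rules, facts):
    """One application of T_P: add every rule head whose body holds."""
    derived = set(facts)
    for head, body in rules:
        if all(atom in facts for atom in body):
            derived.add(head)
    return derived

def least_fixpoint(rules, facts):
    """Iterate T_P until no new facts are derived (the least model)."""
    facts = set(facts)
    while True:
        new = immediate_consequence(rules, facts)
        if new == facts:
            return facts
        facts = new

# Usage: ground instances of the ancestor rules over parent facts.
rules = [(("anc", "a", "b"), [("par", "a", "b")]),
         (("anc", "b", "c"), [("par", "b", "c")]),
         (("anc", "a", "c"), [("par", "a", "b"), ("anc", "b", "c")])]
facts = {("par", "a", "b"), ("par", "b", "c")}
model = least_fixpoint(rules, facts)
```

Iterating the operator to a fixpoint is the standard bottom-up (forward-chaining) semantics of Datalog; the paper's contribution lifts this idea to second-order programs so that the saturated background knowledge yields invented predicates.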


2015 ◽  
Vol 100 (1) ◽  
pp. 49-73 ◽  
Author(s):  
Stephen H. Muggleton ◽  
Dianhuan Lin ◽  
Alireza Tamaddoni-Nezhad

2020 ◽  
Vol 34 (09) ◽  
pp. 13655-13658
Author(s):  
Andrew Cropper ◽  
Rolf Morel ◽  
Stephen H. Muggleton

A key feature of inductive logic programming (ILP) is its ability to learn first-order programs, which are intrinsically more expressive than propositional programs. In this paper, we introduce ILP techniques for learning higher-order programs. We implement our idea in Metagolho, an ILP system that can learn higher-order programs with higher-order predicate invention. Our experiments show that, compared to learning first-order programs, learning higher-order programs can significantly improve predictive accuracy and reduce learning times.
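To see why higher-order programs can be more compact, consider the standard higher-order predicate map (Prolog's map/3). The Python sketch below is purely illustrative and is not Metagolho's implementation: one generic higher-order definition, paired with an invented unary predicate, replaces a bespoke recursive first-order definition for each list transformation.

```python
# Illustrative only: contrasts a higher-order definition with the
# first-order recursive program it replaces.

def maplist(f, xs):
    """Higher-order: apply f to every element (analogue of map/3)."""
    return [f(x) for x in xs]

# First-order style: a fresh recursive definition per transformation.
def double_all(xs):
    if not xs:
        return []
    return [2 * xs[0]] + double_all(xs[1:])

# Higher-order style: reuse maplist with an "invented" predicate 2*x.
doubled = maplist(lambda x: 2 * x, [1, 2, 3])
```

A learner that can hypothesise maplist only needs to invent the small element-level predicate, rather than re-learning the recursive list-traversal skeleton each time, which is one intuition for the reported accuracy and speed gains.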


Author(s):  
Stefan Kramer

Learning higher-level representations from data has been on the agenda of AI research for several decades. In this paper, I give a survey of various approaches to learning symbolic higher-level representations: feature construction and constructive induction, predicate invention, propositionalization, pattern mining, and time-series pattern mining. Finally, I give an outlook on how approaches to learning higher-level representations, symbolic and neural, can benefit from each other to solve current issues in machine learning.
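Of the surveyed approaches, propositionalization is perhaps the simplest to sketch: relational (one-to-many) data is flattened into a fixed-length feature table that a standard propositional learner can consume. The toy relation and the chosen aggregate features below are a hypothetical illustration, not taken from the survey.

```python
# Hypothetical propositionalization sketch: summarise a one-to-many
# relation (customer -> purchases) into fixed-length feature vectors.

purchases = {
    "alice": [("book", 12.0), ("pen", 2.0)],
    "bob":   [("laptop", 900.0)],
}

def propositionalize(relation):
    """Aggregate each customer's purchase list into a flat feature row."""
    rows = {}
    for customer, items in relation.items():
        prices = [price for _, price in items]
        rows[customer] = {
            "n_purchases": len(items),
            "total_spend": sum(prices),
            "max_price": max(prices),
        }
    return rows

features = propositionalize(purchases)
```

The choice of aggregates (count, sum, max) is the representation-learning step: once fixed, any attribute-value learner can be applied to the resulting table.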

