Learning Large Logic Programs By Going Beyond Entailment

Author(s):  
Andrew Cropper ◽  
Sebastijan Dumančić

A major challenge in inductive logic programming (ILP) is learning large programs. We argue that a key limitation of existing systems is that they use entailment to guide the hypothesis search. This approach is limited because entailment is a binary decision: a hypothesis either entails an example or it does not, with no intermediate position. To address this limitation, we go beyond entailment and use 'example-dependent' loss functions to guide the search, so that a hypothesis can partially cover an example. We implement our idea in Brute, a new ILP system that uses best-first search, guided by an example-dependent loss function, to incrementally build programs. Our experiments on three diverse program synthesis domains (robot planning, string transformations, and ASCII art) show that Brute can substantially outperform existing ILP systems, both in predictive accuracy and learning time, and can learn programs 20 times larger than state-of-the-art systems.
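
To make the idea concrete, here is a minimal Python sketch of loss-guided best-first search. This is not Brute's actual implementation: the `refine` operator, the `run` interpreter, and the string-similarity loss are illustrative stand-ins for the example-dependent losses the system uses.

```python
import heapq
from difflib import SequenceMatcher

def graded_loss(output, expected):
    """Example-dependent loss: 0 means the example is fully covered;
    higher values measure how far the hypothesis is from covering it."""
    return 1.0 - SequenceMatcher(None, output, expected).ratio()

def best_first_search(initial, refine, run, examples, max_steps=10_000):
    """Best-first search guided by a summed example-dependent loss,
    instead of a binary entails/does-not-entail test."""
    loss = lambda h: sum(graded_loss(run(h, x), y) for x, y in examples)
    counter = 0                      # tie-breaker for the priority queue
    frontier = [(loss(initial), counter, initial)]
    seen = {initial}
    for _ in range(max_steps):
        if not frontier:
            break
        score, _, hyp = heapq.heappop(frontier)
        if score == 0:
            return hyp               # zero loss: every example covered
        for child in refine(hyp):    # incrementally grow the program
            if child not in seen:
                seen.add(child)
                counter += 1
                heapq.heappush(frontier, (loss(child), counter, child))
    return None

# Toy usage: hypotheses are strings of robot moves; run() is trivial.
examples = [("start", "rruu")]
print(best_first_search("", lambda h: [h + m for m in "rlud"],
                        lambda h, _x: h, examples))       # -> rruu
```

Because the loss is graded, partially correct hypotheses ("rr", "rru") rank ahead of useless ones, which a binary entailment test could not distinguish.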

Author(s):  
Farhad Shakerin ◽  
Gopal Gupta

We present a heuristic-based algorithm to induce nonmonotonic logic programs that explain the behavior of XGBoost-trained classifiers. We use a technique based on the LIME approach to locally select the most important features contributing to the classification decision. Then, to explain the model's global behavior, we propose the LIME-FOLD algorithm, a heuristic-based inductive logic programming (ILP) algorithm capable of learning nonmonotonic logic programs, which we apply to a transformed dataset produced by LIME. Our proposed approach is agnostic to the choice of ILP algorithm. Our experiments with standard UCI benchmarks suggest a significant improvement in classification evaluation metrics, while the number of induced rules decreases dramatically compared to ALEPH, a state-of-the-art ILP system.
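
The LIME preprocessing step can be sketched roughly as follows, assuming the `xgboost` and `lime` packages. This is not the authors' code; `induce_rules` stands in for whichever ILP learner is plugged in, and the label index assumes binary classification.

```python
import numpy as np
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

def lime_transformed_dataset(X, y, feature_names, num_features=5):
    """Keep, per instance, only the features LIME finds most relevant
    to the black-box prediction; zero out everything else."""
    model = XGBClassifier().fit(X, y)
    explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                     discretize_continuous=False)
    X_t = np.zeros_like(X)
    y_t = model.predict(X)           # explain the model, not the labels
    for i, row in enumerate(X):
        exp = explainer.explain_instance(row, model.predict_proba,
                                         num_features=num_features)
        for feat_idx, _weight in exp.as_map()[1]:   # label 1: binary case
            X_t[i, feat_idx] = row[feat_idx]
    return X_t, y_t

# The transformed dataset is then handed to the ILP learner of choice,
# e.g. rules = induce_rules(X_t, y_t)    # induce_rules is hypothetical
```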


Author(s):  
Daniele Gunetti

Though inductive logic programming (ILP for short) should mean the “induction of logic programs”, most research and applications in this area are only loosely related to logic programming. In fact, the automatic synthesis of “true” logic programs is a difficult task: it cannot be done without substantial information about the sought programs, and without the ability to describe well-restricted search spaces in a simple way. In this chapter, we argue that, when such knowledge is available, inductive logic programming can serve as a valid tool for software engineering, and we propose an integrated framework for the development, maintenance, reuse, testing, and debugging of logic programs.


Author(s):  
Andrew Cropper ◽  
Sebastijan Dumančić ◽  
Stephen H. Muggleton

Common criticisms of state-of-the-art machine learning include poor generalisation, a lack of interpretability, and a need for large amounts of training data. We survey recent work in inductive logic programming (ILP), a form of machine learning that induces logic programs from data, which has shown promise at addressing these limitations. We focus on new methods for learning recursive programs that generalise from few examples, a shift from using hand-crafted background knowledge to learning background knowledge, and the use of different technologies, notably answer set programming and neural networks. As ILP approaches 30, we also discuss directions for future research.


2020 ◽  
Vol 34 (04) ◽  
pp. 3676-3683
Author(s):  
Andrew Cropper

Most program induction approaches require predefined, often hand-engineered, background knowledge (BK). To overcome this limitation, we explore methods to automatically acquire BK through multi-task learning. In this approach, a learner adds learned programs to its BK so that they can be reused to help learn other programs. To improve learning performance, we explore the idea of forgetting, where a learner can additionally remove programs from its BK. We consider forgetting in an inductive logic programming (ILP) setting. We show that forgetting can significantly reduce both the size of the hypothesis space and the sample complexity of an ILP learner. We introduce Forgetgol, a multi-task ILP learner which supports forgetting. We experimentally compare Forgetgol against approaches that either remember or forget everything. Our experimental results show that Forgetgol outperforms the alternative approaches when learning from over 10,000 tasks.
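
A hedged sketch of the remember/forget loop described above, assuming a `learn_program` ILP call and a "least recently reused" forgetting policy; both are illustrative stand-ins, not Forgetgol's actual mechanism.

```python
from collections import OrderedDict

def multitask_learn(tasks, learn_program, capacity=100):
    """Learn tasks in sequence; remember solutions as background
    knowledge (BK), and forget the stalest entries once the BK
    outgrows its capacity, shrinking the hypothesis space."""
    bk = OrderedDict()               # program name -> program, reuse-ordered
    solutions = {}
    for name, examples in tasks:
        prog, reused = learn_program(examples, bk)   # may reuse BK entries
        solutions[name] = prog
        for r in reused:             # reused programs become "fresh" again
            bk.move_to_end(r)
        bk[name] = prog              # remember: add to background knowledge
        while len(bk) > capacity:    # forget: drop the least recently
            bk.popitem(last=False)   # reused program
    return solutions
```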


2018 ◽  
Vol 27 (07) ◽  
pp. 1860011
Author(s):  
Hippolyte Léger ◽  
Dominique Bouthinon ◽  
Mustapha Lebbah ◽  
Hanene Azzag

The θ-subsumption test is a known bottleneck in Inductive Logic Programming, and state-of-the-art learning systems in this field scale poorly. Last year, we created a distributed θ-subsumption process based on an actor model, with the aim of deciding subsumption on very large clauses. That model was correct and complete, but also very slow. We therefore introduce ANTS (Actor Network based Theta-Subsumption), a new model, likewise based on an actor network, which is significantly faster than its predecessor.
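
For reference, the sequential test itself: clause C θ-subsumes clause D iff some substitution θ maps every literal of C onto a literal of D. The naive backtracking checker below is a minimal sketch of that test, not the ANTS implementation; literals are tuples like ("p", "X", "a"), and by convention here uppercase names are variables.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify_literal(lit_c, lit_d, theta):
    """Extend substitution theta so lit_c maps onto lit_d, or None."""
    if lit_c[0] != lit_d[0] or len(lit_c) != len(lit_d):
        return None
    theta = dict(theta)
    for tc, td in zip(lit_c[1:], lit_d[1:]):
        if is_var(tc):
            if theta.get(tc, td) != td:
                return None          # variable already bound elsewhere
            theta[tc] = td
        elif tc != td:
            return None              # constant mismatch
    return theta

def subsumes(c, d, theta=None):
    """True iff some theta maps every literal of c onto a literal of d."""
    theta = theta or {}
    if not c:
        return True                  # every literal of c is mapped
    first, rest = c[0], c[1:]
    return any(subsumes(rest, d, t)
               for lit in d
               if (t := unify_literal(first, lit, theta)) is not None)

# p(X,Y), q(Y) subsumes p(a,b), q(b), r(c) via theta = {X: a, Y: b}.
print(subsumes([("p", "X", "Y"), ("q", "Y")],
               [("p", "a", "b"), ("q", "b"), ("r", "c")]))    # True
```

The combinatorial blow-up in the backtracking over candidate literal pairings is exactly the cost that grows intractable on very large clauses.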


Author(s):  
WILLIAM W. COHEN ◽  
PREMKUMAR T. DEVANBU

We evaluate a class of learning algorithms known as inductive logic programming (ILP) methods on the task of predicting fault density in C++ classes. Using these methods, a large space of possible hypotheses is searched in an automated fashion; further, the hypotheses are based directly on an abstract logical representation of the software, eliminating the need to manually propose numerical metrics that predict fault density. We compare two ILP systems, FOIL and FLIPPER, and conclude that FLIPPER generally outperforms FOIL on this problem. We analyze the reasons for the differing performance of these two systems, and based on the analysis, propose two extensions to FLIPPER: a user-directed bias towards easy-to-evaluate clauses, and an extension that allows FLIPPER to learn "counting clauses". Counting clauses augment logic programs with a variation of the "number restrictions" used in description logics, and significantly improve performance on this problem when prior knowledge is used. We also evaluate the use of ILP techniques for automatic generation of Boolean indicators and numeric metrics from the calling tree representation.
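
As a rough illustration of the counting-clause idea (not FLIPPER's actual syntax), such a clause flags a class as fault-prone when at least n distinct bindings satisfy a body literal, a number restriction in the description-logic sense; the `calls/2` facts would come from the calling tree representation.

```python
def holds_counting_clause(c, facts, pred="calls", threshold=10):
    """True iff at least `threshold` distinct bindings of M satisfy
    pred(c, M) -- a number restriction over the calling tree."""
    bindings = {m for (p, x, m) in facts if p == pred and x == c}
    return len(bindings) >= threshold

# fault_prone(C) :- 10 <= #{M : calls(C, M)}    (illustrative notation)
facts = [("calls", "Parser", "m%d" % i) for i in range(12)]
print(holds_counting_clause("Parser", facts))    # True: 12 >= 10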


1996 ◽  
Vol 9 (4) ◽  
pp. 157-206 ◽  
Author(s):  
Nada Lavrač ◽  
Irene Weber ◽  
Darko Zupanič ◽  
Dimitar Kazakov ◽  
Olga Štěpánková ◽  
...  

2021 ◽  
Vol 11 (15) ◽  
pp. 7046
Author(s):  
Jorge Francisco Ciprián-Sánchez ◽  
Gilberto Ochoa-Ruiz ◽  
Lucile Rossi ◽  
Frédéric Morandini

Wildfires stand among the most destructive natural disasters worldwide, increasingly so due to climate change and its impact at various societal and environmental levels. A significant amount of multi-disciplinary research has addressed this issue, deploying a wide variety of technologies. Computer vision, in particular, has played a fundamental role: it can extract and combine information from several imaging modalities for fire detection, characterization, and wildfire spread forecasting. In recent years, work on Deep Learning (DL)-based fire segmentation has shown very promising results. However, it is currently unclear whether a model's architecture, its loss function, or the image type employed (visible, infrared, or fused) has the greatest impact on segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving the segmentation results. We benchmark them to identify the top-performing combinations and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve the segmentation results. To the best of our knowledge, this is the first work to evaluate the impact of architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
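
The benchmark can be sketched as a simple grid over the three factors, scored here with the Dice coefficient, one common segmentation metric. The architecture and loss names, and the `model_factory`/`load_split` callables, are hypothetical stand-ins supplied by the caller, not the paper's actual components.

```python
import itertools
import numpy as np

def dice(pred, mask, eps=1e-7):
    """Dice coefficient between binary masks (1.0 = perfect overlap)."""
    inter = np.sum(pred * mask)
    return (2 * inter + eps) / (pred.sum() + mask.sum() + eps)

def benchmark(model_factory, load_split):
    """Grid over architecture x loss x image type; returns the best
    combination and the mean Dice score for every combination."""
    grid = itertools.product(
        ["unet", "deeplabv3plus", "fusenet"],      # assumed architectures
        ["dice", "focal", "cross_entropy"],        # assumed loss functions
        ["visible", "infrared", "fused"])          # image types
    results = {}
    for arch, loss, img in grid:
        model = model_factory(arch, loss)          # hypothetical constructor
        preds, masks = model.evaluate(load_split(img, "test"))
        results[(arch, loss, img)] = float(np.mean(
            [dice(p, m) for p, m in zip(preds, masks)]))
    return max(results, key=results.get), results
```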

