Probabilistic Logic Programming
Recently Published Documents


TOTAL DOCUMENTS: 50 (last five years: 18)
H-INDEX: 10 (last five years: 1)

Author(s): Felix Q. Weitkämper

Abstract: Probabilistic logic programming is a major part of statistical relational artificial intelligence, where approaches from logic and probability are brought together to reason about and learn from relational domains in a setting of uncertainty. However, the behaviour of statistical relational representations across variable domain sizes is complex, and scaling inference and learning to large domains remains a significant challenge. In recent years, connections have emerged between domain size dependence, lifted inference and learning from sampled subpopulations. The asymptotic behaviour of statistical relational representations has come under scrutiny, and projectivity was investigated as the strongest form of domain size dependence, in which query marginals are completely independent of the domain size. In this contribution we show that every probabilistic logic program under the distribution semantics is asymptotically equivalent to an acyclic probabilistic logic program consisting only of determinate clauses over probabilistic facts. We conclude that every probabilistic logic program inducing a projective family of distributions is in fact everywhere equivalent to a program from this fragment, and we investigate the consequences for the projective families of distributions expressible by probabilistic logic programs.
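
To make projectivity concrete, here is a minimal sketch using the ProbLog2 Python API (the problog package). The program, a single probabilistic clause over a domain of persons, is an invented illustration rather than an example from the paper; the clause desugars to one independent probabilistic fact per person plus a determinate clause, and the query marginal stays at 0.3 regardless of the domain size:

```python
from problog.program import PrologString
from problog import get_evaluatable

# Hypothetical projective program: the probabilistic clause desugars to an
# independent probabilistic fact per person plus a determinate clause.
TEMPLATE = """
{facts}
0.3::smokes(X) :- person(X).
query(smokes(alice)).
"""

for domain in (["alice", "bob"], ["alice", "bob", "carol", "dave", "eve"]):
    facts = "\n".join("person({}).".format(p) for p in domain)
    model = PrologString(TEMPLATE.format(facts=facts))
    result = get_evaluatable().create_from(model).evaluate()
    print(len(domain), result)   # the marginal of smokes(alice) is 0.3 in both cases
```

A non-projective contrast would be a clause whose body introduces a fresh variable ranging over the domain, e.g. influenced(X) :- friend(X, Y), smokes(Y), which typically yields marginals that shift as the domain grows.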


Author(s): Damiano Azzolini, Fabrizio Riguzzi

Abstract: Probabilistic logic programming is an effective formalism for encoding problems characterized by uncertainty. Some of these problems may require the optimization of probability values subject to constraints among probability distributions of random variables. Here, we introduce a new class of probabilistic logic programs, namely probabilistic optimizable logic programs, and we provide an effective algorithm to find the best assignment of probabilities to random variables such that a set of constraints is satisfied and an objective function is optimized.
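
As a brute-force illustration of the task (emphatically not the authors' algorithm), the sketch below uses the ProbLog2 Python API on an invented two-fact program: it minimizes the total probability mass p + q subject to the constraint that the query marginal is at least 0.8.

```python
from problog.program import PrologString
from problog import get_evaluatable

# Hypothetical optimizable program: two tunable probabilistic facts.
TEMPLATE = """
{p}::channel(tv).
{q}::channel(web).
reached :- channel(X).
query(reached).
"""

def prob_reached(p, q):
    model = PrologString(TEMPLATE.format(p=p, q=q))
    return next(iter(get_evaluatable().create_from(model).evaluate().values()))

best = None
steps = [round(i * 0.1, 1) for i in range(11)]   # coarse grid over [0, 1]
for p in steps:
    for q in steps:
        if prob_reached(p, q) >= 0.8 - 1e-9:     # constraint on the query marginal
            cost = p + q                          # objective to minimize
            if best is None or cost < best[0]:
                best = (cost, p, q)
print(best)
```

Here P(reached) = 1 - (1 - p)(1 - q), so on this grid the search lands on a corner solution that puts all mass on one channel; the point of the sketch is only the shape of the problem, a constraint on query marginals plus an objective over tunable probabilities.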


2021
Author(s): Robin Manhaeve, Giuseppe Marra, Luc De Raedt

DeepProbLog is a neural-symbolic framework that integrates probabilistic logic programming and neural networks. It is realized by providing an interface between the probabilistic logic and the neural networks. Inference in probabilistic neural-symbolic methods is hard, since it combines logical theorem proving with probabilistic inference and neural network evaluation. In this work, we make inference more efficient by extending an approximate inference algorithm from the field of statistical relational AI. Instead of considering all possible proofs for a certain query, the system searches for the best proof. However, training a DeepProbLog model using approximate inference introduces additional challenges, as the best proof is unknown at the start of training, which can lead to convergence towards a local optimum. To be able to apply DeepProbLog to larger tasks, we propose: 1) a method for approximate inference using an A*-like search, called DPLA*; 2) an exploration strategy for proving in a neural-symbolic setting; and 3) a parametric heuristic to guide the proof search. We empirically evaluate the performance and scalability of the new approach and compare it to other neural-symbolic systems. The experiments show that DPLA* achieves a speed-up of up to two to three orders of magnitude in some cases.
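
The following is a generic sketch of the A*-like idea on a toy propositional program, using only the standard library: states are lists of open goals, the path cost is the negative log-probability of the probabilistic facts used so far, and the heuristic is a placeholder for the parametric, learned heuristic of DPLA*. It is not the DeepProbLog implementation, and the program is invented.

```python
import heapq
import math

# Toy propositional definite-clause program (hypothetical):
# rules map a head to alternative bodies; probabilistic facts carry a probability.
RULES = {"win": [["heads1"], ["heads2"]]}
PFACTS = {"heads1": 0.4, "heads2": 0.6}

def heuristic(goals):
    # Placeholder for the parametric heuristic; 0 is trivially admissible.
    return 0.0

def best_proof(query):
    # A*-like search for the single most probable proof of `query`:
    # g = -log(product of fact probabilities used so far).
    frontier = [(heuristic([query]), 0.0, [query], [])]
    while frontier:
        _, g, goals, used = heapq.heappop(frontier)
        if not goals:
            return math.exp(-g), used          # probability and facts of the proof
        goal, rest = goals[0], goals[1:]
        if goal in PFACTS:                     # resolve against a probabilistic fact
            g2 = g - math.log(PFACTS[goal])
            heapq.heappush(frontier, (g2 + heuristic(rest), g2, rest, used + [goal]))
        for body in RULES.get(goal, []):       # resolve against each clause
            new_goals = body + rest
            heapq.heappush(frontier, (g + heuristic(new_goals), g, new_goals, used))
    return None

print(best_proof("win"))   # -> (0.6, ['heads2']): the single best proof
```

In the paper, the exploration strategy and the learned heuristic would replace the zero heuristic and the fixed expansion order used here.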


2021
Author(s): Arnaud Nguembang Fadja, Fabrizio Riguzzi, Evelina Lamma

Abstract: Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLP is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL), which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programming (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov Logic Network systems for parameter and structure learning, respectively: PHIL was compared with EMBLEM, ProbLog2 and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLN-BC, MLN-BT and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar and often better accuracy in a shorter time.
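
To give a flavour of the parameter-learning task (a toy sketch, not PHIL itself), the code below treats a tiny two-layer HPLP as its arithmetic circuit, a noisy-OR over an AND of probabilistic facts, reparameterizes each probability through a sigmoid, and fits a single invented target query probability by gradient descent; PHIL instead propagates exact gradients (or EM updates) through circuits built from data.

```python
import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

# Hypothetical two-layer HPLP seen as its arithmetic circuit:
#   P(q) = 1 - (1 - p1 * p2) * (1 - p3)   (noisy-OR over an AND gate)
def forward(w):
    p = [sigmoid(wi) for wi in w]
    return 1 - (1 - p[0] * p[1]) * (1 - p[2])

TARGET = 0.7          # invented target marginal for the query
w = [0.0, 0.0, 0.0]   # unconstrained weights, one per probabilistic fact
lr, eps = 0.5, 1e-6
for _ in range(2000):
    loss = (forward(w) - TARGET) ** 2
    # Finite-difference gradient for brevity; PHIL uses exact circuit gradients.
    grad = []
    for i in range(len(w)):
        w2 = list(w)
        w2[i] += eps
        grad.append(((forward(w2) - TARGET) ** 2 - loss) / eps)
    w = [wi - lr * gi for wi, gi in zip(w, grad)]

print(round(forward(w), 3))   # ~0.7 after training
```

The sigmoid reparameterization keeps each probability inside (0, 1) while the weights themselves are updated without constraints.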


2021, pp. 103504
Author(s): Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, Luc De Raedt

2020, Vol 20 (5), pp. 641-655
Author(s): Elena Bellodi, Marco Alberti, Fabrizio Riguzzi, Riccardo Zese

Abstract: In Probabilistic Logic Programming (PLP), the most commonly studied inference task is to compute the marginal probability of a query given a program. In this paper, we consider two other important tasks in the PLP setting: the Maximum-A-Posteriori (MAP) inference task, which determines the most likely values for a subset of the random variables given evidence on other variables, and the Most Probable Explanation (MPE) task, the instance of MAP where the query variables are the complement of the evidence variables. We present a novel algorithm, included in the PITA reasoner, which tackles these tasks by representing each problem as a Binary Decision Diagram and applying a dynamic programming procedure on it. We compare our algorithm with the version of ProbLog that admits annotated disjunctions and can perform MAP and MPE inference. Experiments on several synthetic datasets show that PITA outperforms ProbLog in many cases.
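
The dynamic programming step can be sketched on a hand-built diagram (an illustration of MPE over independent Boolean variables, not the PITA implementation): traverse the BDD once, take at each node the more probable branch, and multiply in max(p, 1 - p) for every variable skipped along an edge.

```python
from functools import lru_cache

# Toy reduced ordered BDD for the formula (a AND b) OR c, order a < b < c.
ORDER = ["a", "b", "c"]
PROB = {"a": 0.4, "b": 0.7, "c": 0.2}
# node name -> (variable, low child, high child); terminals are True / False.
BDD = {"n_a": ("a", "n_c", "n_b"),
       "n_b": ("b", "n_c", True),
       "n_c": ("c", False, True)}

def level(node):
    return len(ORDER) if node in (True, False) else ORDER.index(BDD[node][0])

def skip_factor(frm, to):
    # Variables skipped on a BDD edge are free: take their most likely value.
    f = 1.0
    for v in ORDER[frm + 1:to]:
        f *= max(PROB[v], 1 - PROB[v])
    return f

@lru_cache(maxsize=None)
def mpe(node):
    # Best probability of reaching the 1-terminal from `node` (one DP pass).
    if node is True:
        return 1.0
    if node is False:
        return 0.0
    var, lo, hi = BDD[node]
    i = level(node)
    lo_val = (1 - PROB[var]) * skip_factor(i, level(lo)) * mpe(lo)
    hi_val = PROB[var] * skip_factor(i, level(hi)) * mpe(hi)
    return max(lo_val, hi_val)

print(skip_factor(-1, level("n_a")) * mpe("n_a"))   # MPE probability
```

On this diagram the result is 0.224, which matches the assignment a = true, b = true, c = false found by exhaustive enumeration.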


2020, Vol 34 (4), pp. 501-507
Author(s): Timothy van Bremen, Anton Dries, Jean Christoph Jung

Abstract: We present onto2problog, a tool that supports ontology-mediated querying of probabilistic data via probabilistic logic programming engines. Our tool supports conjunctive queries on probabilistic data under ontologies encoded in the description logic $\mathcal{ELH}^{dr}$, thus capturing a large part of the OWL 2 EL profile.
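
The flavour of such an encoding can be sketched with an invented example (not the tool's actual output), using the ProbLog2 Python API: an ELH-style TBox axiom "Professor is subsumed by Teacher" becomes an ordinary ProbLog rule over probabilistic ABox facts, and the engine answers queries under the distribution semantics.

```python
from problog.program import PrologString
from problog import get_evaluatable

# Hypothetical encoding: one TBox axiom compiled to a rule, probabilistic ABox.
model = PrologString("""
0.8::professor(anna).
0.6::teacher(bob).
teacher(X) :- professor(X).
query(teacher(anna)).
query(teacher(bob)).
""")
print(get_evaluatable().create_from(model).evaluate())
# -> teacher(anna): 0.8, teacher(bob): 0.6
```

A real translation for $\mathcal{ELH}^{dr}$ must also handle existential restrictions and role inclusions; this sketch covers only atomic subsumption.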


2020, Vol 34 (06), pp. 10284-10291
Author(s): Efthymia Tsamoura, Victor Gutierrez-Basulto, Angelika Kimmig

Abstract: State-of-the-art inference approaches in probabilistic logic programming typically start by computing the relevant ground program with respect to the queries of interest, and then use this program for probabilistic inference using knowledge compilation and weighted model counting. We propose an alternative approach that uses efficient Datalog techniques to integrate knowledge compilation with forward reasoning over a non-ground program. This effectively eliminates the grounding bottleneck that has so far prevented the application of probabilistic logic programming in query-answering scenarios over knowledge graphs, while also providing fast approximations on classical benchmarks in the field.
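
A minimal sketch of the underlying idea, forward reasoning that carries explanations instead of grounding first: each derived atom accumulates the sets of probabilistic facts (a small DNF) that support it, and the query probability is read off that DNF at the end. The toy propositional program and the inclusion-exclusion step are invented for illustration; the actual system integrates knowledge compilation into the Datalog fixpoint.

```python
import itertools

# Hypothetical probabilistic facts and propositional rules.
PFACTS = {"e_ab": 0.6, "e_bc": 0.7, "e_ac": 0.2}
RULES = [("p_ac", ("e_ac",)),            # direct edge a -> c
         ("p_ac", ("e_ab", "e_bc"))]     # path a -> b -> c

# Forward fixpoint: atom -> set of explanations (frozensets of prob. facts).
expl = {f: {frozenset([f])} for f in PFACTS}
changed = True
while changed:
    changed = False
    for head, body in RULES:
        if all(atom in expl for atom in body):
            for combo in itertools.product(*(expl[a] for a in body)):
                conj = frozenset().union(*combo)
                if conj not in expl.setdefault(head, set()):
                    expl[head].add(conj)
                    changed = True

def dnf_prob(conjs):
    # Exact probability of a DNF over independent facts by inclusion-exclusion;
    # the real system compiles the formula instead of enumerating subsets.
    total = 0.0
    conjs = list(conjs)
    for r in range(1, len(conjs) + 1):
        for subset in itertools.combinations(conjs, r):
            p = 1.0
            for f in frozenset().union(*subset):
                p *= PFACTS[f]
            total += (-1) ** (r + 1) * p
    return total

print(round(dnf_prob(expl["p_ac"]), 3))   # 0.536 = P(e_ac OR (e_ab AND e_bc))
```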

