Bandit-based Monte-Carlo structure learning of probabilistic logic programs

2015 ◽  
Vol 100 (1) ◽  
pp. 127-156 ◽  
Author(s):  
Nicola Di Mauro ◽  
Elena Bellodi ◽  
Fabrizio Riguzzi

2021 ◽
Author(s):  
Arnaud Nguembang Fadja ◽  
Fabrizio Riguzzi ◽  
Evelina Lamma

Abstract: Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLP is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL), which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programs (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov Logic Network systems for parameter and structure learning, respectively. PHIL was compared with EMBLEM, ProbLog2 and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLN-BC, MLN-BT and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar and often better accuracies in a shorter time.
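The HPLP-to-arithmetic-circuit mapping described above is what makes gradient-based parameter learning cheap: AND nodes multiply probabilities and OR nodes combine them with a probabilistic sum (noisy-OR). The following is a minimal sketch of that idea in Python, not the PHIL implementation; the two-clause circuit, the fixed body probabilities and all names are hypothetical.

```python
import math

# Hypothetical toy circuit: one target atom provable via two clauses
# (an OR node), each clause contributing the product of its parameter
# and its body probability (AND nodes). Parameters are stored as
# unconstrained weights squashed through a sigmoid, so plain gradient
# descent keeps the clause probabilities in (0, 1).

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def forward(weights, body_probs):
    """Probability of the example: a probabilistic sum (noisy-OR)
    over the clauses of the circuit."""
    p_not = 1.0
    for w, b in zip(weights, body_probs):
        p_not *= 1.0 - sigmoid(w) * b
    return 1.0 - p_not

def grad(weights, body_probs, target):
    """Gradient of the cross-entropy loss w.r.t. each weight,
    computed analytically for this two-level circuit."""
    p = forward(weights, body_probs)
    dl_dp = -(target / max(p, 1e-12)) + (1 - target) / max(1 - p, 1e-12)
    grads = []
    for i, (w, b) in enumerate(zip(weights, body_probs)):
        s = sigmoid(w)
        # d p / d s_i = b_i * prod_{j != i} (1 - s_j * b_j)
        rest = 1.0
        for j, (wj, bj) in enumerate(zip(weights, body_probs)):
            if j != i:
                rest *= 1.0 - sigmoid(wj) * bj
        dp_dw = b * rest * s * (1.0 - s)  # chain rule through the sigmoid
        grads.append(dl_dp * dp_dw)
    return grads

# One positive example whose two clause bodies hold with these probabilities.
weights, body_probs, target, lr = [0.0, 0.0], [0.9, 0.4], 1.0, 0.5
for _ in range(200):
    g = grad(weights, body_probs, target)
    weights = [w - lr * gi for w, gi in zip(weights, g)]
print(round(forward(weights, body_probs), 3))  # approaches 0.94, the circuit's ceiling
```

Reparameterizing each clause probability through a sigmoid avoids constrained optimization entirely, which is one reason inference and learning on these circuits are so much cheaper than for general PLP.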


2014 ◽  
Vol 15 (2) ◽  
pp. 169-212 ◽  
Author(s):  
ELENA BELLODI ◽  
FABRIZIO RIGUZZI

Abstract: Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both the structure and the parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for "Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space." It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristic. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real-world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.
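To make the clause-space search concrete, here is a toy sketch of a beam search over clause refinements, not the SLIPCOVER code: clauses are represented as sets of body literals, refinement adds one literal, and score_ll is a hypothetical stand-in for the log likelihood that SLIPCOVER would actually obtain by running EMBLEM on the data.

```python
# Hypothetical literal vocabulary for a fixed clause head.
LITERALS = ["parent(X,Y)", "parent(Y,Z)", "female(X)", "male(X)"]

def refine(clause):
    """One refinement step: add a single literal not yet in the body."""
    return [clause | {lit} for lit in LITERALS if lit not in clause]

def score_ll(clause):
    # Toy stand-in for the EM-based log likelihood: rewards two target
    # literals and penalizes clause length (purely illustrative).
    reward = {"parent(X,Y)": 2.0, "parent(Y,Z)": 1.5}
    return sum(reward.get(l, 0.0) for l in clause) - 0.4 * len(clause)

def beam_search(beam_width=3, steps=3):
    beam = [frozenset()]            # start from the empty body
    best = max(beam, key=score_ll)
    for _ in range(steps):
        candidates = {c for clause in beam for c in refine(clause)}
        if not candidates:
            break
        # Keep only the beam_width highest-scoring refinements.
        beam = sorted(candidates, key=score_ll, reverse=True)[:beam_width]
        if score_ll(beam[0]) > score_ll(best):
            best = beam[0]
    return best, score_ll(best)

clause, ll = beam_search()
print(sorted(clause), round(ll, 2))  # ['parent(X,Y)', 'parent(Y,Z)'] 2.7
```

The real system scores each refinement by actually re-estimating parameters with EMBLEM, and wraps this clause-level beam search in a greedy search over theories; the sketch only shows the inner loop's shape.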


2008 ◽  
Vol 55 (3-4) ◽  
pp. 355-388 ◽  
Author(s):  
Alex Dekhtyar ◽  
Michael I. Dekhtyar

2018 ◽  
Vol 108 (7) ◽  
pp. 1111-1135 ◽  
Author(s):  
Arnaud Nguembang Fadja ◽  
Fabrizio Riguzzi

2021 ◽  
Author(s):  
Kaixian Yu ◽  
Zihan Cui ◽  
Xin Sui ◽  
Xing Qiu ◽  
Jinfeng Zhang

Abstract: Bayesian networks (BNs) provide a probabilistic, graphical framework for modeling high-dimensional joint distributions with complex correlation structures. BNs have wide applications in many disciplines, including biology, social science, finance and biomedical science. Despite extensive study in the past, learning network structure from data is still a challenging open question in BN research. In this study, we present a sequential Monte Carlo (SMC)-based three-stage approach, GRowth-based Approach with Staged Pruning (GRASP). A double filtering strategy was first used to discover the overall skeleton of the target BN. To search for the optimal network structures, we designed an adaptive SMC (adSMC) algorithm to increase the quality and diversity of sampled networks, which were further improved by a third stage that reclaims edges missed in the skeleton discovery step. GRASP gave very satisfactory results when tested on benchmark networks. Finally, BN structure learning using multiple types of genomics data illustrates GRASP's potential for discovering novel biological relationships in integrative genomic studies.
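The SMC stage can be pictured as a population of particles (candidate DAGs) grown node by node under a fixed topological order, with importance weights updated from a local score and resampling triggered when the weights degenerate. The sketch below illustrates plain SMC over structures under those assumptions; it is not the GRASP algorithm, and local_score, the node set and the resampling threshold are all hypothetical.

```python
import math
import random

random.seed(0)
NODES = ["A", "B", "C", "D"]  # fixed order guarantees acyclicity

def local_score(child, parents):
    # Hypothetical stand-in for a BIC/BDeu local score computed on data:
    # mildly favors one or two parents over none or many.
    return -abs(len(parents) - 1) + random.gauss(0, 0.1)

def propose_parents(i):
    """Candidate parent sets for the node at position i:
    one random subset of each size drawn from the preceding nodes."""
    preceding = NODES[:i]
    return [set(random.sample(preceding, k))
            for k in range(len(preceding) + 1)]

def smc(n_particles=50):
    particles = [dict() for _ in range(n_particles)]  # child -> parent set
    weights = [1.0] * n_particles
    for i, node in enumerate(NODES):
        for p in range(n_particles):
            choice = random.choice(propose_parents(i))
            particles[p][node] = choice
            weights[p] *= math.exp(local_score(node, choice))
        # Resample when the effective sample size collapses.
        total = sum(weights)
        probs = [w / total for w in weights]
        ess = 1.0 / sum(q * q for q in probs)
        if ess < n_particles / 2:
            particles = [{k: set(v)
                          for k, v in random.choices(particles, probs)[0].items()}
                         for _ in range(n_particles)]
            weights = [1.0] * n_particles
    best = max(range(n_particles), key=lambda p: weights[p])
    return particles[best]

print(smc())  # e.g. {'A': set(), 'B': {'A'}, 'C': {'B'}, 'D': {'C'}}
```

GRASP's contributions sit around this skeleton: the double-filtering stage restricts which parent sets may be proposed, the adaptive proposal improves particle diversity, and the third stage reclaims edges the filter dropped.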


Author(s):  
Gerardo I. Simari ◽  
Maria Vanina Martinez ◽  
Amy Sliva ◽  
V. S. Subrahmanian

2011 ◽  
Vol 11 (4-5) ◽  
pp. 433-449 ◽  
Author(s):  
FABRIZIO RIGUZZI ◽  
TERRANCE SWIFT

Abstract: Many real-world domains require the representation of a measure of uncertainty. The most common such representation is probability, and the combination of probability with logic programs has given rise to the field of Probabilistic Logic Programming (PLP), leading to languages such as the Independent Choice Logic, Logic Programs with Annotated Disjunctions (LPADs), ProbLog, PRISM, and others. These languages share a similar distribution semantics, and methods have been devised to translate programs between these languages. The complexity of computing the probability of queries to these general PLP programs is very high due to the need to combine the probabilities of explanations that may not be exclusive. As one alternative, the PRISM system reduces the complexity of query answering by restricting the form of programs it can evaluate. As an entirely different alternative, Possibilistic Logic Programs adopt a simpler metric of uncertainty than probability.

Each of these approaches—general PLP, restricted PLP, and Possibilistic Logic Programming—can be useful in different domains depending on the form of uncertainty to be represented, on the form of programs needed to model problems, and on the scale of the problems to be solved. In this paper, we show how the PITA system, which originally supported the general PLP language of LPADs, can also efficiently support restricted PLP and Possibilistic Logic Programs. PITA relies on tabling with answer subsumption and consists of a transformation along with an API for library functions that interface with answer subsumption. We show that, by adapting its transformation and library functions, PITA can be parameterized to PITA(IND, EXC), which supports the restricted PLP of PRISM, including optimizations that reduce non-discriminating arguments and the computation of Viterbi paths. Furthermore, we show PITA to be competitive with PRISM for complex queries to Hidden Markov Model examples, and sometimes much faster. We further show how PITA can be parameterized to PITA(COUNT), which computes the number of different explanations for a subgoal, and to PITA(POSS), which scalably implements Possibilistic Logic Programming. PITA is a supported package in version 3.3 of XSB.
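The parameterization idea, one program transformation plus swappable library functions combined through answer subsumption, can be illustrated outside of Prolog: merging the answers for a subgoal with a different lattice join yields a different semantics. The Python sketch below mimics this under stated assumptions; it is not the PITA library, and all function names are made up.

```python
# Each PITA variant corresponds to a different "join" used when a new
# answer for a tabled subgoal is merged with the stored one.

def join_ind(p, q):    # PITA(IND): independent explanations, noisy-OR
    return p + q - p * q

def join_exc(p, q):    # PITA(EXC): mutually exclusive explanations
    return p + q

def join_count(c, d):  # PITA(COUNT): number of distinct explanations
    return c + d

def join_poss(p, q):   # PITA(POSS): possibilistic logic, take the max
    return max(p, q)

def merge_answers(values, join, zero):
    """Mimics tabling with answer subsumption: each new answer for a
    subgoal is joined with the stored value instead of being added
    as a separate answer."""
    acc = zero
    for v in values:
        acc = join(acc, v)
    return acc

# Three explanations for the same query, each with weight 0.3:
expl = [0.3, 0.3, 0.3]
print(merge_answers(expl, join_ind, 0.0))      # 0.657 (independence)
print(merge_answers(expl, join_poss, 0.0))     # 0.3   (possibility)
print(merge_answers([1, 1, 1], join_count, 0)) # 3     (explanation count)
```

General LPAD inference cannot use a simple pairwise join because explanations may overlap, which is why full PITA builds BDDs instead; the IND/EXC joins are sound only under the independence or exclusiveness assumptions that PRISM-style programs guarantee by construction.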

