Probabilistic Logic
Recently Published Documents

TOTAL DOCUMENTS: 346 (FIVE YEARS: 65)
H-INDEX: 22 (FIVE YEARS: 2)

2021
Author(s): Hoifung Poon, Hai Wang, Hunter Lang

Deep learning has proven effective for various application tasks, but its applicability is limited by its reliance on annotated examples. Self-supervised learning has emerged as a promising direction for alleviating the supervision bottleneck, but existing work focuses on leveraging co-occurrences in unlabeled data for task-agnostic representation learning, as exemplified by masked language model pretraining. In this chapter, we explore task-specific self-supervision, which leverages domain knowledge to automatically annotate noisy training examples for end applications, either by introducing labeling functions for annotating individual instances or by imposing constraints over interdependent label decisions. We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning. DPL represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end via variational EM. Next, we present self-supervised self-supervision (S4), which adds to DPL the capability to learn new self-supervision automatically. Starting from an initial seed of self-supervision, S4 iteratively uses the deep neural network to propose new self-supervision; these proposals are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments on real-world applications such as biomedical machine reading and various text classification tasks show that task-specific self-supervision can effectively leverage domain expertise and often match the accuracy of supervised methods with a tiny fraction of the human effort.
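As a rough illustration of the labeling-function side of task-specific self-supervision, the sketch below uses plain Python with hypothetical labeling functions; it is not the DPL implementation, which instead treats the true labels as latent variables and refines them jointly with the network via variational EM. It only shows how domain rules can annotate unlabeled instances and how their noisy votes can be aggregated into training targets.

```python
# Toy illustration (not the authors' DPL implementation): labeling functions
# assign noisy labels to unlabeled instances, and a simple aggregation step
# turns their votes into training targets for a downstream classifier.
from collections import Counter
from typing import Callable, List, Optional

# A labeling function returns a class label or None (abstain).
LabelingFunction = Callable[[str], Optional[str]]

def lf_mentions_inhibits(text: str) -> Optional[str]:
    # Hypothetical rule for a biomedical relation task.
    return "interaction" if "inhibits" in text.lower() else None

def lf_mentions_no_effect(text: str) -> Optional[str]:
    return "no_interaction" if "no effect" in text.lower() else None

def aggregate(text: str, lfs: List[LabelingFunction]) -> Optional[str]:
    """Majority vote over the non-abstaining labeling functions."""
    votes = [lab for lf in lfs if (lab := lf(text)) is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None

if __name__ == "__main__":
    lfs = [lf_mentions_inhibits, lf_mentions_no_effect]
    print(aggregate("Drug A inhibits kinase B.", lfs))  # -> "interaction"
```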


2021
Author(s): Robin Manhaeve, Giuseppe Marra, Thomas Demeester, Sebastijan Dumančić, Angelika Kimmig, ...

There is a broad consensus that both learning and reasoning are essential to achieve true artificial intelligence. This has put the quest for neural-symbolic artificial intelligence (NeSy) high on the research agenda. In the past decade, neural networks have driven great advances in the field of machine learning. Meanwhile, the two most prominent frameworks for reasoning are logic and probability. While in the past they were studied by separate communities, a significant number of researchers have been working towards their integration, cf. the area of statistical relational artificial intelligence (StarAI). Generally, NeSy systems integrate logic with neural networks. However, probability theory has already been integrated with both logic (cf. StarAI) and neural networks. It therefore makes sense to consider the integration of logic, neural networks and probability. In this chapter, we first consider these three base paradigms separately. Then, we look at the well-established integrations, NeSy and StarAI. Next, we consider the integration of all three paradigms as neural probabilistic logic programming, and exemplify it with the DeepProbLog framework. Finally, we discuss the limitations of the state of the art, and consider future directions based on the parallels between StarAI and NeSy.
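To make the integration of the three paradigms concrete, the following toy sketch (not the DeepProbLog API) shows the core idea of a neural predicate: a classifier's softmax output is read as a probability distribution over the groundings of a logical fact, and probabilistic inference over a logical rule combines those distributions.

```python
# Minimal sketch of the neural-probabilistic-logic idea behind systems such as
# DeepProbLog (this is NOT the DeepProbLog API): a neural network supplies a
# distribution over the groundings of a predicate (here, the digit shown in an
# image), and probabilistic logical inference combines those distributions
# through a rule like addition(X,Y,Z) :- digit(X,DX), digit(Y,DY), Z is DX+DY.
from typing import Dict, List

def query_addition(p_digit_x: List[float], p_digit_y: List[float]) -> Dict[int, float]:
    """P(Z = z) for the rule Z = DX + DY, given per-image digit distributions
    (stand-ins for the softmax outputs of a digit classifier)."""
    p_sum: Dict[int, float] = {}
    for dx, px in enumerate(p_digit_x):
        for dy, py in enumerate(p_digit_y):
            p_sum[dx + dy] = p_sum.get(dx + dy, 0.0) + px * py
    return p_sum

if __name__ == "__main__":
    # Hypothetical classifier outputs: image 1 is probably a 3, image 2 probably a 5.
    px = [0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0]
    py = [0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0]
    print(max(query_addition(px, py).items(), key=lambda kv: kv[1]))  # most likely sum: (8, ~0.66)
```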


Author(s): Felix Q. Weitkämper

Abstract: Probabilistic logic programming is a major part of statistical relational artificial intelligence, where approaches from logic and probability are brought together to reason about and learn from relational domains in a setting of uncertainty. However, the behaviour of statistical relational representations across variable domain sizes is complex, and scaling inference and learning to large domains remains a significant challenge. In recent years, connections have emerged between domain size dependence, lifted inference and learning from sampled subpopulations. The asymptotic behaviour of statistical relational representations has come under scrutiny, and projectivity was investigated as the strongest form of domain size independence, in which query marginals are completely independent of the domain size. In this contribution, we show that every probabilistic logic program under the distribution semantics is asymptotically equivalent to an acyclic probabilistic logic program consisting only of determinate clauses over probabilistic facts. We conclude that every probabilistic logic program inducing a projective family of distributions is in fact everywhere equivalent to a program from this fragment, and we investigate the consequences for the projective families of distributions expressible by probabilistic logic programs.
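For orientation, the notion of projectivity referred to above can be made precise as follows; this is the standard definition from the statistical relational AI literature, and the notation is ours rather than the paper's.

```latex
% (P_n)_{n >= 1} is a family of distributions, where P_n is a distribution over
% relational structures with domain {1, ..., n}.
\text{The family } (P_n)_{n \ge 1} \text{ is \emph{projective} iff for all } m \le n
\text{ and every query } \varphi(a_1,\dots,a_m) \text{ over named individuals } a_1,\dots,a_m:
\qquad P_n\bigl(\varphi(a_1,\dots,a_m)\bigr) \;=\; P_m\bigl(\varphi(a_1,\dots,a_m)\bigr).
```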


2021
Author(s): Pilar Dellunde, Lluís Godo, Amanda Vidal

In this paper, we introduce a framework for probabilistic logic-based argumentation inspired by the DeLP formalism and making extensive use of conditional probability. We define probabilistic arguments built from possibly inconsistent probabilistic knowledge bases and study the notions of attack, defeat and preference between these arguments. Finally, we discuss consistency properties of admissible extensions of Dung's abstract argumentation graphs obtained from sets of probabilistic arguments and the attack relations between them.
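For readers unfamiliar with the abstract-argumentation layer this builds on, the sketch below implements the standard Dung notions of conflict-freeness, defence and admissibility over a plain attack relation; the probabilistic construction of the arguments themselves, which is the paper's contribution, is not reproduced here.

```python
# Minimal sketch of Dung-style admissibility over an abstract attack relation
# (illustrative only; arguments here are opaque names, not probabilistic arguments).
from typing import Dict, Set

Attacks = Dict[str, Set[str]]  # attacker -> set of attacked arguments

def is_conflict_free(s: Set[str], attacks: Attacks) -> bool:
    """No argument in s attacks another argument in s."""
    return not any(b in attacks.get(a, set()) for a in s for b in s)

def defends(s: Set[str], a: str, attacks: Attacks) -> bool:
    """s defends a iff every attacker of a is attacked by some member of s."""
    attackers = {x for x, targets in attacks.items() if a in targets}
    return all(any(x in attacks.get(d, set()) for d in s) for x in attackers)

def is_admissible(s: Set[str], attacks: Attacks) -> bool:
    return is_conflict_free(s, attacks) and all(defends(s, a, attacks) for a in s)

if __name__ == "__main__":
    # Toy attack graph: a attacks b, b attacks c.
    attacks = {"a": {"b"}, "b": {"c"}}
    print(is_admissible({"a", "c"}, attacks))  # True: a defends c against b
    print(is_admissible({"b"}, attacks))       # False: b cannot defend itself from a
```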


Author(s): Damiano Azzolini, Fabrizio Riguzzi

Abstract: Probabilistic logic programming is an effective formalism for encoding problems characterized by uncertainty. Some of these problems may require the optimization of probability values subject to constraints among the probability distributions of random variables. Here, we introduce a new class of probabilistic logic programs, namely probabilistic optimizable logic programs, and we provide an effective algorithm to find the best assignment to the probabilities of the random variables such that a set of constraints is satisfied and an objective function is optimized.
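A toy rendering of this problem class (not the authors' algorithm) is sketched below: a general-purpose constrained optimizer tunes the probabilities of two independent probabilistic facts so that a noisy-or query probability meets a constraint while the sum of the fact probabilities is minimized.

```python
# Toy illustration of a "probabilistic optimizable" setting using SciPy's
# generic constrained optimizer; this is not the algorithm from the paper.
from scipy.optimize import minimize

def query_prob(p):
    """P(q) for q :- a ; q :- b, with independent facts p1::a and p2::b (noisy-or)."""
    p1, p2 = p
    return p1 + p2 - p1 * p2

result = minimize(
    fun=lambda p: p[0] + p[1],                       # objective: total probability "cost"
    x0=[0.5, 0.5],
    method="SLSQP",
    bounds=[(0.0, 1.0), (0.0, 1.0)],
    constraints=[{"type": "ineq", "fun": lambda p: query_prob(p) - 0.9}],  # P(q) >= 0.9
)
print(result.x, query_prob(result.x))  # fact probabilities satisfying the constraint
```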


2021
Author(s): Robin Manhaeve, Giuseppe Marra, Luc De Raedt

DeepProbLog is a neural-symbolic framework that integrates probabilistic logic programming and neural networks. It is realized by providing an interface between the probabilistic logic and the neural networks. Inference in probabilistic neural-symbolic methods is hard, since it combines logical theorem proving with probabilistic inference and neural network evaluation. In this work, we make inference more efficient by extending an approximate inference algorithm from the field of statistical relational AI. Instead of considering all possible proofs for a given query, the system searches for the best proof. However, training a DeepProbLog model with approximate inference introduces additional challenges, as the best proof is unknown at the start of training, which can lead to convergence towards a local optimum. To be able to apply DeepProbLog to larger tasks, we propose: 1) a method for approximate inference using an A*-like search, called DPLA*; 2) an exploration strategy for proof search in a neural-symbolic setting; and 3) a parametric heuristic to guide the proof search. We empirically evaluate the performance and scalability of the new approach and compare it to other neural-symbolic systems. The experiments show that DPLA* achieves a speed-up of up to two to three orders of magnitude in some cases.
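The following schematic shows the A*-style skeleton of best-proof search, where step costs are negative log probabilities so that the cheapest proof is the most probable one. It assumes generic `expand`, `is_goal` and `heuristic` callables supplied by the caller and is not the actual DPLA* implementation, which additionally handles neural predicate evaluation, exploration and a learned heuristic.

```python
# Schematic A*-style search for a single most probable proof (illustrative only).
import heapq
import math
from typing import Callable, Iterable, Optional, Tuple, TypeVar

State = TypeVar("State")

def best_proof(start: State,
               expand: Callable[[State], Iterable[Tuple[State, float]]],
               is_goal: Callable[[State], bool],
               heuristic: Callable[[State], float]) -> Optional[Tuple[float, State]]:
    """Return (cost, proof state) of the cheapest complete proof, where each
    expansion step yields (next state, step probability) and the step cost is
    -log(probability)."""
    counter = 0  # tie-breaker so states are never compared directly
    frontier = [(heuristic(start), 0.0, counter, start)]  # (f, g, tie, state)
    while frontier:
        _, g, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return g, state
        for nxt, prob in expand(state):
            counter += 1
            g_next = g - math.log(prob)
            heapq.heappush(frontier, (g_next + heuristic(nxt), g_next, counter, nxt))
    return None  # no proof found
```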


Author(s): Nitesh Kumar, Ondřej Kuželka, Luc De Raedt

Abstract: Relational autocompletion is the problem of automatically filling in missing values in multi-relational data. We tackle this problem within the probabilistic logic programming framework of Distributional Clauses (DCs), which supports both discrete and continuous probability distributions. Within this framework, we introduce DiceML, an approach to learn both the structure and the parameters of DC programs from relational data (possibly with missing values). To realize this, DiceML integrates statistical modeling and DCs with rule learning. The distinguishing features of DiceML are that it (1) tackles autocompletion in relational data, (2) learns DCs extended with statistical models, (3) deals with both discrete and continuous distributions, (4) can exploit background knowledge, and (5) uses an expectation-maximization (EM) based algorithm to cope with missing data. The empirical results show the promise of the approach, even when there is missing data.
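As a minimal illustration of the EM idea used to cope with missing data (far simpler than DiceML's EM over distributional clauses and relational data), the sketch below fits a Gaussian to a single column containing missing entries.

```python
# Minimal EM sketch for a univariate Gaussian column with missing entries
# (illustrative only; not the DiceML algorithm).
def em_gaussian(values, n_iter=50):
    """values: list of floats or None (missing). Returns (mu, sigma2)."""
    observed = [v for v in values if v is not None]
    n_missing = len(values) - len(observed)
    n = len(values)
    mu, sigma2 = sum(observed) / len(observed), 1.0
    for _ in range(n_iter):
        # E-step: each missing entry contributes E[x] = mu and Var[x] = sigma2.
        # M-step: re-estimate parameters from the expected sufficient statistics.
        mu = (sum(observed) + n_missing * mu) / n
        sigma2 = (sum((v - mu) ** 2 for v in observed) + n_missing * sigma2) / n
    return mu, sigma2

if __name__ == "__main__":
    print(em_gaussian([1.0, 2.0, None, 3.0, None]))
```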


2021
pp. 375-397
Author(s): Yukio-Pegio Gunji, Yoshihiko Ohzawa, Terutaka Tanaka
