Lifted Inference
Recently Published Documents

TOTAL DOCUMENTS: 25 (five years: 12)
H-INDEX: 4 (five years: 1)

Author(s):
Felix Q. Weitkämper

Probabilistic logic programming is a major part of statistical relational artificial intelligence, where approaches from logic and probability are brought together to reason about and learn from relational domains in a setting of uncertainty. However, the behaviour of statistical relational representations across variable domain sizes is complex, and scaling inference and learning to large domains remains a significant challenge. In recent years, connections have emerged between domain size dependence, lifted inference and learning from sampled subpopulations. The asymptotic behaviour of statistical relational representations has come under scrutiny, and projectivity was investigated as the strongest form of domain size dependence, in which query marginals are completely independent of the domain size. In this contribution we show that every probabilistic logic program under the distribution semantics is asymptotically equivalent to an acyclic probabilistic logic program consisting only of determinate clauses over probabilistic facts. We conclude that every probabilistic logic program inducing a projective family of distributions is in fact everywhere equivalent to a program from this fragment, and we investigate the consequences for the projective families of distributions expressible by probabilistic logic programs.
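The projectivity notion can be made concrete with a small forward-sampling experiment. The sketch below (plain Python; predicate names and probabilities are hypothetical, and "determinate" is taken here to mean that the clause body introduces no variables beyond those in the head) estimates the query marginal P(q(0)) under the distribution semantics: with a determinate clause the marginal is the same for every domain size, while a clause with an existential body variable drifts as the domain grows.

```python
import random

def marginal_q0(n, determinate, trials=100_000, seed=0):
    """Monte Carlo estimate of P(q(0)) over the domain {0, ..., n-1}.

    Probabilistic facts (hypothetical probabilities):
        0.3::u(X).    0.1::w(X,Y).

    Determinate clause (body variables all occur in the head):
        q(X) :- u(X), w(X,X).
    Non-determinate clause (existential body variable Y):
        q(X) :- u(X), w(X,Y).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        u0 = rng.random() < 0.3
        if determinate:
            hits += u0 and rng.random() < 0.1          # only w(0,0) matters
        else:
            hits += u0 and any(rng.random() < 0.1 for _ in range(n))
    return hits / trials

for n in (2, 10, 50):
    print(f"n={n:3d}  determinate={marginal_q0(n, True):.3f}  "
          f"existential={marginal_q0(n, False):.3f}")
```

Running this shows the determinate marginal staying near 0.3 × 0.1 = 0.03 across all domain sizes, while the existential variant climbs toward 0.3 as n grows, which is exactly the domain size dependence that projectivity rules out.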


2021
Author(s):
Timothy van Bremen
Ondřej Kuželka

We consider the problem of weighted first-order model counting (WFOMC): given a first-order sentence ϕ and domain size n ∈ ℕ, determine the weighted sum of models of ϕ over the domain {1, ..., n}. Past work has shown that any sentence using at most two logical variables admits an algorithm for WFOMC that runs in time polynomial in the given domain size (Van den Broeck 2011; Van den Broeck, Meert, and Darwiche 2014). In this paper, we extend this result to any two-variable sentence ϕ with the addition of a tree axiom, stating that some distinguished binary relation in ϕ forms a tree in the graph-theoretic sense.
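To make the quantities concrete, here is a brute-force WFOMC computation for a small two-variable sentence, the symmetry axiom ∀x∀y (F(x,y) → F(y,x)), with hypothetical per-atom weights, alongside the closed-form lifted computation that a polynomial-time algorithm exploits. This is only an illustration of the WFOMC definition; the paper's tree-axiom extension is not reproduced here.

```python
from itertools import product

def wfomc_brute(n, w_true=2.0, w_false=1.0):
    """Brute-force WFOMC for phi = forall x, y: F(x,y) -> F(y,x) over the
    domain {0, ..., n-1}; weights are hypothetical. Enumerates all
    2**(n*n) interpretations, so it is only feasible for tiny n."""
    atoms = [(x, y) for x in range(n) for y in range(n)]
    total = 0.0
    for bits in product((False, True), repeat=len(atoms)):
        F = dict(zip(atoms, bits))
        # Model check: every true atom F(x,y) needs F(y,x) true as well.
        if all(F[(y, x)] for (x, y), v in F.items() if v):
            w = 1.0
            for v in bits:
                w *= w_true if v else w_false
            total += w
    return total

def wfomc_lifted(n, w_true=2.0, w_false=1.0):
    """Closed form exploiting symmetry: each diagonal atom F(x,x) is
    unconstrained; each unordered pair {x,y} must be both true or both false."""
    return (w_true + w_false) ** n * (w_true**2 + w_false**2) ** (n * (n - 1) // 2)

print(wfomc_brute(3), wfomc_lifted(3))   # both 3**3 * 5**3 = 3375.0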


2021
Author(s):
Ramy Shahin
Murad Akhundov
Marsha Chechik

Applying program analyses to Software Product Lines (SPLs) has been a fundamental research problem at the intersection of Product Line Engineering and software analysis. Different attempts have been made to "lift" particular product-level analyses to run on the entire product line. In this paper, we tackle the class of Datalog-based analyses (e.g., pointer and taint analyses), study the theoretical aspects of lifting Datalog inference, and implement a lifted inference algorithm inside the Soufflé Datalog engine. We evaluate our implementation on a set of Java and C-language benchmark product lines. We show significant savings in processing time and fact database size (billions of times faster on one of the benchmarks) compared to brute-force analysis of each product individually.
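The lifting idea can be sketched in a few lines: annotate each Datalog fact with a presence condition and propagate conditions through rule application. The toy below uses hypothetical facts and models presence conditions as sets of product configurations, rather than the propositional feature formulas an engine like Soufflé would manipulate, and computes a lifted transitive closure over an entire product line at once.

```python
# Lifted transitive closure: each fact carries a presence condition,
# modelled here as a frozenset of product configurations.
edge = {                                            # edge/2 facts
    ("a", "b"): frozenset({"base", "base+ext"}),    # present in all products
    ("b", "c"): frozenset({"base+ext"}),            # only with feature 'ext'
}

def lifted_tc(edge):
    """path(X,Y) :- edge(X,Y).  path(X,Z) :- path(X,Y), edge(Y,Z).
    A derived fact gets the intersection (conjunction) of its premises'
    presence conditions; facts with an empty condition are dropped."""
    path = dict(edge)
    changed = True
    while changed:
        changed = False
        for (x, y), pc1 in list(path.items()):
            for (y2, z), pc2 in edge.items():
                if y == y2:
                    pc = pc1 & pc2
                    if pc - path.get((x, z), frozenset()):
                        path[(x, z)] = path.get((x, z), frozenset()) | pc
                        changed = True
    return path

for fact, pc in sorted(lifted_tc(edge).items()):
    print(f"path{fact} holds in {sorted(pc)}")
```

One fixpoint run answers the query for every product simultaneously, instead of re-running the analysis once per product.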


Author(s):
Yuqiao Chen
Yibo Yang
Sriraam Natarajan
Nicholas Ruozzi

Lifted inference algorithms exploit model symmetry to reduce computational cost in probabilistic inference. However, most existing lifted inference algorithms operate only over discrete domains or continuous domains with restricted potential functions. We investigate two approximate lifted variational approaches that apply to domains with general hybrid potentials, and are expressive enough to capture multi-modality. We demonstrate that the proposed variational methods are highly scalable and can exploit approximate model symmetries even in the presence of a large amount of continuous evidence, outperforming existing message-passing-based approaches in a variety of settings. Additionally, we present a sufficient condition for the Bethe variational approximation to yield a non-trivial estimate over the marginal polytope.
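The symmetry argument underlying such methods is easiest to see in a fully discrete toy model (the paper's contribution is handling general hybrid potentials, which this sketch does not attempt): with n exchangeable binary variables and a potential that depends only on how many of them are on, the 2^n states collapse into n + 1 orbits, and the partition function becomes a sum of n + 1 binomially weighted terms.

```python
from itertools import product
from math import comb, exp

def log_potential(k, n):
    """Hypothetical symmetric potential: depends only on the number k of
    the n exchangeable binary variables that are on."""
    return 0.5 * k - 0.1 * k * (n - k)

def partition_brute(n):
    # Ground inference: enumerate all 2**n joint states.
    return sum(exp(log_potential(sum(x), n)) for x in product((0, 1), repeat=n))

def partition_lifted(n):
    # Lifted inference: group states into n+1 orbits by their count of ones.
    return sum(comb(n, k) * exp(log_potential(k, n)) for k in range(n + 1))

print(partition_brute(12), partition_lifted(12))   # agree; lifted is O(n)
```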


Author(s):
Manfred Jaeger
Oliver Schulte

A generative probabilistic model for relational data consists of a family of probability distributions for relational structures over domains of different sizes. In most existing statistical relational learning (SRL) frameworks, these models are not projective, in the sense that the marginal of the distribution for size-n structures on induced substructures of size k < n need not equal the given distribution for size-k structures. Projectivity is very beneficial in that it directly enables lifted inference and statistically consistent learning from sub-sampled relational structures. In earlier work, some simple fragments of SRL languages have been identified that represent projective models. However, no complete characterization of, and representation framework for, projective models has been given. In this paper we fill this gap: exploiting representation theorems for infinite exchangeable arrays, we introduce a class of directed graphical latent variable models that precisely correspond to the class of projective relational models. As a by-product we also obtain a characterization of when a given distribution over size-k structures is the statistical frequency distribution of size-k substructures in much larger size-n structures. These results shed new light on the old open problem of how to apply Halpern et al.'s "random worlds approach" for probabilistic inference to general relational signatures.
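A minimal sketch of the directed latent-variable construction, in the spirit of the exchangeable-array representation theorems the paper builds on (the edge function and latent distributions below are hypothetical): independent per-element latents U_i and independent per-pair noise drive every edge, so the marginal probability of any fixed atom is the same for every domain size.

```python
import random

def sample_structure(n, rng):
    """Sample a size-n binary relation from a latent-variable model in the
    style of the exchangeable-array representation: i.i.d. per-element
    latents U_i, i.i.d. per-pair noise W_ij, and an edge rule
    edge(i,j) iff U_i * U_j > W_ij. The rule itself is hypothetical."""
    U = [rng.random() for _ in range(n)]
    return {(i, j): U[i] * U[j] > rng.random()          # W_ij drawn inline
            for i in range(n) for j in range(n) if i != j}

def marginal_edge01(n, trials=20_000, seed=1):
    rng = random.Random(seed)
    return sum(sample_structure(n, rng)[(0, 1)] for _ in range(trials)) / trials

# Projectivity: P(edge(0,1)) does not depend on the domain size.
for n in (2, 5, 15):
    print(n, round(marginal_edge01(n), 3))
```

Here P(edge(0,1)) = E[U_0 · U_1] = 1/4 for every n, which is exactly the projectivity property the paper characterizes.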


Author(s):
Yuqiao Chen
Nicholas Ruozzi
Sriraam Natarajan

Lifted inference algorithms for first-order logic models, e.g., Markov logic networks (MLNs), have been of significant interest in recent years. Lifted inference methods exploit model symmetries in order to reduce the size of the model and, consequently, the computational cost of inference. In this work, we consider the problem of lifted inference in MLNs with continuous or both discrete and continuous groundings. Existing work on lifting with continuous groundings has mostly been limited to special classes of models, e.g., Gaussian models, for which variable elimination or message-passing updates can be computed exactly. Here, we develop approximate lifted inference schemes based on particle sampling. We demonstrate empirically that our approximate lifting schemes perform comparably to the existing state of the art on Gaussian MLNs, while having the flexibility to be applied to models with arbitrary potential functions.
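A toy version of the particle-based lifting idea (everything below is hypothetical: the potential, the proposal, and the fully symmetric model): when n continuous groundings are interchangeable and share the same unary potential, one importance-sampled particle set for a single representative grounding suffices, and the joint normalizing constant is obtained by exponentiation rather than by sampling each grounding separately.

```python
import math
import random

def phi(x):
    """Hypothetical unnormalized unary potential shared by all n
    interchangeable continuous groundings (a bimodal bump)."""
    return math.exp(-0.5 * (x - 1) ** 2) + math.exp(-0.5 * (x + 1) ** 2)

def lifted_particle_Z(n, particles=100_000, seed=0):
    """One particle set serves the whole symmetry group: estimate
    z = integral of phi by importance sampling from a uniform proposal
    on [-6, 6], then lift to Z = z**n instead of sampling n times."""
    rng = random.Random(seed)
    width = 12.0   # reciprocal of the uniform proposal density
    z = sum(phi(rng.uniform(-6, 6)) for _ in range(particles)) * width / particles
    return z ** n

# Sanity check: each factor integrates to 2 * sqrt(2*pi) (two unit Gaussians).
print(lifted_particle_Z(10), (2 * math.sqrt(2 * math.pi)) ** 10)
```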

