Approximate Weighted First-Order Model Counting: Exploiting Fast Approximate Model Counters and Symmetry

Author(s):  
Timothy van Bremen ◽  
Ondrej Kuzelka

We study the symmetric weighted first-order model counting task and present ApproxWFOMC, a novel anytime method for efficiently bounding the weighted first-order model count of a sentence given an unweighted first-order model counting oracle. The algorithm has applications to inference in a variety of first-order probabilistic representations, such as Markov logic networks and probabilistic logic programs. Crucially for many applications, no assumptions are made on the form of the input sentence. Instead, the algorithm exploits the symmetry inherent in the problem by imposing cardinality constraints on the number of possible true groundings of a sentence's literals. Realising the first-order model counting oracle in practice using the approximate hashing-based model counter ApproxMC3, we show that our algorithm is competitive with existing approximate and exact techniques for inference in first-order probabilistic models. We additionally provide PAC guarantees on the accuracy of the bounds generated.
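The cardinality-constraint idea can be illustrated with a small propositional sketch (a hypothetical toy sentence and weights, not the paper's implementation): when weights are symmetric per predicate, grouping the models of a ground sentence by how many atoms of each predicate are true lets a purely unweighted counter, called once per cardinality cell, recover the weighted count exactly.

```python
from collections import Counter
from itertools import product

# Hypothetical toy setup: domain {0, 1}, unary predicate S, binary predicate
# F, and the ground sentence: forall x, y. F(x,y) -> (S(x) <-> S(y)).
n = 2
S_atoms = [("S", x) for x in range(n)]
F_atoms = [("F", x, y) for x in range(n) for y in range(n)]
atoms = S_atoms + F_atoms

def satisfies(world):
    return all(not world[("F", x, y)] or (world[("S", x)] == world[("S", y)])
               for x in range(n) for y in range(n))

# Symmetric weights: w(pred) for a true atom, w_bar(pred) for a false one
w = {"S": 2.0, "F": 3.0}
w_bar = {"S": 1.0, "F": 1.0}

# Direct weighted model count, enumerating all worlds
direct = 0.0
for bits in product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    if satisfies(world):
        weight = 1.0
        for atom, val in world.items():
            weight *= w[atom[0]] if val else w_bar[atom[0]]
        direct += weight

# Same count via *unweighted* counting under cardinality constraints:
# count models per (#true S atoms, #true F atoms) cell, then reweight.
cells = Counter()
for bits in product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    if satisfies(world):
        ks = sum(world[a] for a in S_atoms)
        kf = sum(world[a] for a in F_atoms)
        cells[(ks, kf)] += 1  # this is one unweighted-oracle answer per cell

via_cardinality = sum(
    c * w["S"]**ks * w_bar["S"]**(len(S_atoms) - ks)
      * w["F"]**kf * w_bar["F"]**(len(F_atoms) - kf)
    for (ks, kf), c in cells.items())

assert abs(direct - via_cardinality) < 1e-9
```

The number of cardinality cells grows polynomially in the number of ground atoms, which is what makes replacing the weighted counter with an unweighted one attractive.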

Author(s):  
Paul Beame ◽  
Guy Van den Broeck ◽  
Eric Gribkoff ◽  
Dan Suciu

Author(s):  
Omar Adjali ◽  
Amar Ramdane-Cherif

This article describes a semantic framework that demonstrates an approach for modeling and reasoning based on the environment knowledge representation language (EKRL) to enhance interaction between robots and their environment. Unlike EKRL, standard binary approaches such as the OWL language fail to represent knowledge in an expressive way. The authors show in this work how to model the environment and interactions expressively with first-order and second-order EKRL data structures, and how to reason for decision-making using inference capabilities based on a complex unification algorithm. Because robot environments are inherently subject to noise and partial observability, the authors extend the EKRL framework with probabilistic reasoning based on Markov logic networks to manage uncertainty.


2021 ◽  
Author(s):  
Timothy van Bremen ◽  
Ondřej Kuželka

We consider the problem of weighted first-order model counting (WFOMC): given a first-order sentence ϕ and domain size n ∈ ℕ, determine the weighted sum of models of ϕ over the domain {1, ..., n}. Past work has shown that any sentence using at most two logical variables admits an algorithm for WFOMC that runs in time polynomial in the given domain size (Van den Broeck 2011; Van den Broeck, Meert, and Darwiche 2014). In this paper, we extend this result to any two-variable sentence ϕ with the addition of a tree axiom, stating that some distinguished binary relation in ϕ forms a tree in the graph-theoretic sense.
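The WFOMC definition can be made concrete with a brute-force sketch (a toy sentence and weights of our own choosing, not the paper's tree-axiom construction): for the two-variable sentence ∀x∀y. R(x,y) → R(y,x), the diagonal atoms are unconstrained and the off-diagonal atoms pair up, so the count admits a closed form that is polynomial to evaluate in the domain size, which enumeration confirms on small domains.

```python
from itertools import product
from math import comb

def wfomc_bruteforce(n, w_true, w_false):
    """WFOMC of forall x,y. R(x,y) -> R(y,x) over domain {0,...,n-1},
    by explicit enumeration of all 2^(n^2) interpretations of R."""
    atoms = [(x, y) for x in range(n) for y in range(n)]
    total = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if all(not world[(x, y)] or world[(y, x)]
               for x in range(n) for y in range(n)):
            weight = 1.0
            for v in bits:
                weight *= w_true if v else w_false
            total += weight
    return total

def wfomc_closed(n, w, wb):
    # Diagonal atoms R(x,x) are free: factor (w + wb) each.
    # Each unordered pair {x, y} must have R(x,y) = R(y,x):
    # both true (w^2) or both false (wb^2).
    return (w + wb)**n * (w**2 + wb**2)**comb(n, 2)

for n in (1, 2, 3):
    assert abs(wfomc_bruteforce(n, 2.0, 1.0) - wfomc_closed(n, 2.0, 1.0)) < 1e-9
```

The enumeration is exponential in n², while the closed form is a product of n + C(n, 2) factors: a miniature instance of the polynomial-time behaviour the two-variable results guarantee.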


Author(s):  
MAI XU ◽  
MARIA PETROU ◽  
JIANHUA LU

In this paper, we propose a novel logic-rule learning approach for the Tower of Knowledge (ToK) architecture, based on Markov logic networks (MLNs), for scene interpretation. This approach is in the spirit of the recently proposed Markov logic networks for machine learning. Its purpose is to learn the soft-constraint logic rules for labeling the components of a scene. In our approach, FOIL (First-Order Inductive Learner) is applied to learn the logic rules for the MLN, and gradient ascent search is then used to compute the weight attached to each rule, softening the rules. The approach also benefits from the architecture of ToK in reasoning about whether a component in a scene has the right characteristics to fulfil the functions a label implies, from the logic point of view. One significant advantage of the proposed approach over previous versions of ToK is its automatic logic-learning capability: manual insertion of logic rules is not necessary. Experiments on labeling the identified components of buildings, for building scene interpretation, illustrate the promise of this approach.
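The gradient-ascent weight-learning step can be sketched on a hypothetical one-formula MLN (our own toy example, not the authors' system): the gradient of the log-likelihood with respect to a rule's weight is the observed count of true groundings minus its expected count under the model, so ascent drives the expected count toward the data.

```python
import math
from itertools import product

# Hypothetical toy MLN: one formula F(x) over a domain of size n, with a
# single weight theta. P(world) is proportional to exp(theta * n_true),
# where n_true is the number of true groundings of F in that world.
n = 4
worlds = list(product([0, 1], repeat=n))

def expected_true(theta):
    """E[n_true] under the MLN distribution, by exact enumeration."""
    scores = [math.exp(theta * sum(w)) for w in worlds]
    z = sum(scores)
    return sum(sum(w) * s for w, s in zip(worlds, scores)) / z

# Observed training world: 3 of the 4 groundings are true
observed = 3

theta, lr = 0.0, 0.5
for _ in range(2000):
    # Log-likelihood gradient: observed count minus expected count
    theta += lr * (observed - expected_true(theta))

# At the optimum, the model's expected count matches the data count
assert abs(expected_true(theta) - observed) < 1e-3
```

For this independent-groundings example the fixed point can be checked analytically: E[n_true] = n·σ(θ), so the learned weight approaches ln 3 ≈ 1.1; real MLN learners approximate the expectation (e.g. via pseudo-likelihood or sampling) because exact enumeration is infeasible.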


Author(s):  
Yuanhong Wang ◽  
Timothy van Bremen ◽  
Juhua Pu ◽  
Yuyi Wang ◽  
Ondrej Kuzelka

We study the problem of constructing the relational marginal polytope (RMP) of a given set of first-order formulas. Past work has shown that the RMP construction problem can be reduced to weighted first-order model counting (WFOMC). However, existing reductions in the literature are intractable in practice, since they typically require an infeasibly large number of calls to a WFOMC oracle. In this paper, we propose an algorithm to construct RMPs using fewer oracle calls. As an application, we also show how to apply this new algorithm to improve an existing approximation scheme for WFOMC. We demonstrate the efficiency of the proposed approaches experimentally, and find that our method provides speed-ups of a full order of magnitude over the baseline for RMP construction.


2011 ◽  
Vol 474-476 ◽  
pp. 1874-1880
Author(s):  
Zhen Zi Chen ◽  
Yi Chen

Customer credit evaluation is very important for customer relationship management in enterprise resource planning. However, evaluating customers' credit is a complicated problem. In this paper, we present a Customer Credit Visual Analysis Model (CCVAM) that can be used to evaluate and classify the credibility of new customers according to historical data about past customers. The model is based on Markov logic networks (MLNs), which combine probability and first-order logic with a weight attached to each formula. In this model, the basic rules or indexes based on expert knowledge are transformed into a normal-form representation. An MLN is then obtained by combining first-order logic and probabilistic graphical models of the rules. After acquiring the weights attached to the rules in a first-order logic knowledge base, the model provides an interface for visualizing the corresponding relationships among the rules and weights. The model has been applied to grade clients' credit in an international enterprise and achieved the anticipated results. It can also be used in other areas where level evaluation is required.


Author(s):  
Ondrej Kuzelka ◽  
Jesse Davis ◽  
Steven Schockaert

The field of statistical relational learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented using some kind of weighted logical formulas, which makes them considerably more interpretable than those obtained by e.g. neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways and weights do not always have an intuitive meaning. To address this, we propose a new SRL method which uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.


2021 ◽  
Author(s):  
Arnaud Nguembang Fadja ◽  
Fabrizio Riguzzi ◽  
Evelina Lamma

Abstract: Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLPs is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL), which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programs (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov logic network systems for parameter and structure learning, respectively. PHIL was compared with EMBLEM, ProbLog2 and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLN-BC, MLN-BT and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar and often better accuracies in a shorter time.
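The conversion of HPLPs to arithmetic circuits can be illustrated with a minimal hypothetical two-layer program (our own example, not from the paper): a clause's contribution is its probability multiplied by the probabilities of its body atoms, and alternative clauses for the same head combine by probabilistic sum (noisy-or), which is what makes inference a single bottom-up circuit evaluation.

```python
# Minimal sketch of evaluating a hierarchical PLP as an arithmetic circuit.
# Hypothetical program:
#   hidden1 :0.9: a.          hidden1 :0.6: b.
#   out     :0.7: hidden1, b.
# Clause bodies multiply probabilities; clauses sharing a head combine by
# probabilistic sum: p (+) q = 1 - (1 - p)(1 - q)  (noisy-or).

def noisy_or(contributions):
    out = 1.0
    for p in contributions:
        out *= 1.0 - p
    return 1.0 - out

# Leaf probabilities for the input atoms (assumed evidence)
p_a, p_b = 0.8, 0.5

# Hidden layer: two alternative clauses for hidden1
hidden1 = noisy_or([0.9 * p_a, 0.6 * p_b])   # 1 - (1-0.72)(1-0.30) = 0.804

# Output layer: one clause whose body is (hidden1, b)
p_out = noisy_or([0.7 * hidden1 * p_b])       # 0.7 * 0.804 * 0.5 = 0.2814

assert 0.0 <= p_out <= 1.0
```

Each node of the circuit is a product or a probabilistic sum, so the whole network is differentiable in the clause probabilities, which is what PHIL's gradient-descent parameter learning relies on.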


2014 ◽  
Vol 15 (2) ◽  
pp. 169-212 ◽  
Author(s):  
ELENA BELLODI ◽  
FABRIZIO RIGUZZI

Abstract: Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both the structure and the parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for "Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space." It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log-likelihood of the data as the guiding heuristic. To estimate the log-likelihood, SLIPCOVER performs expectation maximization with EMBLEM. The algorithm has been tested on five real-world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov logic networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and receiver operating characteristic curves in most cases.


2016 ◽  
Vol 44 (1) ◽  
pp. 91-109 ◽  
Author(s):  
Sina Dami ◽  
Ahmad Abdollahzadeh Barforoush ◽  
Hossein Shirazi

Predicting future events from text data has been a controversial and much-disputed topic in the field of text analytics. However, far too little attention has been paid to efficient prediction in textual environments. This study aims to develop a novel and efficient method for news event prediction. The proposed method is based on the Markov logic network (MLN) framework, which enables us to concisely represent complex events with the full expressivity of first-order logic (FOL), as well as to reason about uncertain events with probabilities. In our framework, we first extract text news events via an event representation model at a semantic level and then transform them into web ontology language (OWL) as a posteriori knowledge. A set of domain-specific causal rules in FOL, associated with weights, is also fed into the system as a priori (common-sense) knowledge. Additionally, several large-scale ontologies including DBpedia, VerbNet and WordNet are used to model common-sense logic rules as contextual knowledge. Finally, all types of such knowledge are integrated into OWL for performing causal inference. The resulting OWL knowledge base is augmented by an MLN, which uses weighted first-order formulas to represent probabilistic knowledge. Empirical evaluation on real news showed that our method of news event prediction outperforms the baselines in terms of precision, coverage and diversity.

