Probabilistic Logical Models
Recently Published Documents


TOTAL DOCUMENTS: 9 (FIVE YEARS: 2)

H-INDEX: 3 (FIVE YEARS: 0)

Author(s): Elena Bellodi, Marco Gavanelli, Riccardo Zese, Evelina Lamma, Fabrizio Riguzzi

Abstract Uncertain information is taken into account in an increasing number of application fields. At the same time, abduction has proved to be a powerful tool for hypothetical reasoning and for handling incomplete knowledge. Probabilistic logical models are a suitable framework for uncertain information, and in the last decade many probabilistic logical languages have been proposed, together with inference and learning systems for them. In the realm of Abductive Logic Programming (ALP), a variety of proof procedures have also been defined. In this paper, we consider a richer logic language that copes with probabilistic abduction with variables. In particular, we consider an ALP program enriched with integrity constraints à la IFF, possibly annotated with a probability value. We first present the overall abductive language and its semantics according to the Distribution Semantics. We then introduce a proof procedure, obtained by extending a previously presented one, and prove its soundness and completeness.
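Under the Distribution Semantics mentioned in this abstract, the probability of a query is the sum of the probabilities of the worlds (total choices over the probabilistic facts) in which the query holds. A minimal sketch of that idea, with hypothetical coin-flip facts and naive enumeration of all worlds (real inference systems for these languages are far more efficient):

```python
from itertools import product

# Hypothetical probabilistic facts: (ground fact, probability of being true).
prob_facts = [("heads(coin1)", 0.6), ("heads(coin2)", 0.5)]

def query_prob(query_holds):
    """Sum the probabilities of every world in which the query holds.

    A "world" is one truth assignment to the probabilistic facts; its
    probability is the product of p (fact chosen) or 1 - p (fact not chosen).
    """
    total = 0.0
    for choices in product([True, False], repeat=len(prob_facts)):
        world = {fact for (fact, _), chosen in zip(prob_facts, choices) if chosen}
        world_prob = 1.0
        for (_, p), chosen in zip(prob_facts, choices):
            world_prob *= p if chosen else 1.0 - p
        if query_holds(world):
            total += world_prob
    return total

# Query: at least one coin landed heads.
p = query_prob(lambda world: len(world) > 0)  # 1 - 0.4 * 0.5 = 0.8
```

The enumeration is exponential in the number of probabilistic facts; it serves only to make the semantics concrete, not as a practical inference procedure.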


2020
Author(s): Fabrizio Riguzzi, Elena Bellodi, Riccardo Zese, Marco Alberti, Evelina Lamma

Abstract Probabilistic logical models deal effectively with the uncertain relations and entities typical of many real-world domains. In probabilistic logic programming, the aim is usually to learn such models to predict specific atoms or predicates of the domain, called target atoms/predicates. However, it can also be useful to learn classifiers for interpretations as a whole. To this end, we consider the models produced by the inductive constraint logic system, represented by sets of integrity constraints, and propose a probabilistic version of them. Each integrity constraint is annotated with a probability, and the resulting probabilistic logical constraint model assigns to each interpretation a probability of being positive. To learn both the structure and the parameters of such probabilistic models we propose the system PASCAL, for "probabilistic inductive constraint logic". Parameter learning can be performed using gradient descent or L-BFGS. PASCAL has been tested on 11 datasets and compared with a few statistical relational systems and with a system that builds relational decision trees (TILDE): it achieves better or comparable results in terms of area under the precision-recall and receiver operating characteristic curves, in comparable execution time.
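The probabilistic constraint model described in this abstract can be sketched as follows. Assuming, as a simplification, that each violated grounding of a constraint annotated with probability p_i independently renders the interpretation negative, the probability of being positive is the product over constraints of (1 - p_i)^n_i, where n_i counts the violated groundings; the atom names and the constraint below are purely hypothetical:

```python
def prob_positive(interpretation, constraints):
    """Probability that an interpretation is classified positive.

    constraints is a list of (p_i, count_violations) pairs, where
    count_violations(interpretation) returns n_i, the number of violated
    groundings of constraint i. Each violation independently makes the
    interpretation negative with probability p_i, hence:
        P(positive) = prod_i (1 - p_i) ** n_i
    """
    prob = 1.0
    for p_i, count_violations in constraints:
        prob *= (1.0 - p_i) ** count_violations(interpretation)
    return prob

# Toy interpretation (a set of ground atoms) and one hypothetical constraint,
# annotated with probability 0.3, that forbids red(_) atoms; it is violated
# twice here, giving P(positive) = 0.7 ** 2 = 0.49.
interp = {"red(a)", "red(b)", "green(c)"}
constraints = [(0.3, lambda i: sum(atom.startswith("red(") for atom in i))]
prob = prob_positive(interp, constraints)
```

Because the expression is a differentiable function of the p_i, its log-likelihood over a set of labeled interpretations can be optimized with gradient descent or L-BFGS, as the abstract states.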


2008, Vol. 54 (1-3), pp. 99-133
Author(s): Daan Fierens, Jan Ramon, Maurice Bruynooghe, Hendrik Blockeel

2007, Vol. 70 (2-3), pp. 169-188
Author(s): Jan Ramon, Tom Croonenborghs, Daan Fierens, Hendrik Blockeel, Maurice Bruynooghe

Author(s): Jan Ramon, Tom Croonenborghs, Daan Fierens, Hendrik Blockeel, Maurice Bruynooghe
