Exploiting Contextual Independence In Probabilistic Inference

2003 ◽  
Vol 18 ◽  
pp. 263-313 ◽  
Author(s):  
D. Poole ◽  
N. L. Zhang

Bayesian belief networks have grown to prominence because they provide compact representations for many problems for which probabilistic inference is appropriate, and there are algorithms to exploit this compactness. The next step is to allow compact representations of the conditional probabilities of a variable given its parents. In this paper we present such a representation that exploits contextual independence in terms of parent contexts: which variables act as parents may depend on the values of other variables. The internal representation is in terms of contextual factors (confactors), each of which is simply a pair of a context and a table. The algorithm, contextual variable elimination, is based on the standard variable elimination algorithm, which eliminates the non-query variables in turn; when eliminating a variable, however, the tables that need to be multiplied can depend on the context. This algorithm reduces to standard variable elimination when there is no contextual independence structure to exploit. We show how this can be much more efficient than variable elimination when there is structure to exploit. We explain why this new method can exploit more structure than previous methods for structured belief network inference, as well as an analogous algorithm that uses trees.
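The confactor idea described in the abstract can be illustrated with a toy example. The sketch below uses hypothetical variable names and numbers (not the paper's implementation): a variable y whose dependence on b exists only in the context a=1, so the a=0 case collapses to a single small table.

```python
# Minimal sketch of confactors (hypothetical numbers): P(y | a, b) where y
# depends on b only in the context a=1. Each confactor pairs a context
# (a partial assignment) with a table over the remaining variables.
confactors = [
    ({"a": 0}, ["y"], {(0,): 0.9, (1,): 0.1}),            # b irrelevant when a=0
    ({"a": 1, "b": 0}, ["y"], {(0,): 0.7, (1,): 0.3}),
    ({"a": 1, "b": 1}, ["y"], {(0,): 0.2, (1,): 0.8}),
]

def prob(confactors, assignment):
    """Look up the probability in the confactor whose context matches."""
    for context, var_order, table in confactors:
        if all(assignment[v] == val for v, val in context.items()):
            return table[tuple(assignment[v] for v in var_order)]
    raise KeyError("no matching context")

# In context a=0 the answer is the same whatever value b takes:
assert prob(confactors, {"a": 0, "b": 0, "y": 0}) == \
       prob(confactors, {"a": 0, "b": 1, "y": 0})
```

A flat table for P(y | a, b) would need eight entries; the confactors store six, and the gap widens as more parents become context-specific.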


Author(s):  
Ya. S. Bondarenko ◽  
D. O. Rachko ◽  
A. O. Rozlyvan

In this paper, a technique is presented for predicting compensation for the financial losses caused by a road traffic accident. Exact inference is carried out using the Sum-Product Variable Elimination algorithm for computing conditional probabilities, the Max-Product Variable Elimination algorithm for MAP queries, and the Max-Sum-Product Variable Elimination algorithm for marginal MAP queries. Reasoning patterns are presented graphically and descriptively.
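The sum-product and max-product steps these algorithms share can be sketched on a two-node network. This is a minimal illustration under assumed numbers, not the paper's accident model: eliminating A from A → B yields the marginal P(B), and replacing the sum with a max gives the max-product quantity used for MAP.

```python
# Sum-product variable elimination on a toy network A -> B (hypothetical
# numbers): multiply the factors that mention A, then sum A out.
P_a = {0: 0.6, 1: 0.4}                    # prior P(A)
P_b_given_a = {(0, 0): 0.9, (0, 1): 0.1,  # conditional P(B | A), keyed (a, b)
               (1, 0): 0.3, (1, 1): 0.7}

# Eliminate A by summation: P(b) = sum_a P(a) * P(b | a)
P_b = {b: sum(P_a[a] * P_b_given_a[(a, b)] for a in (0, 1)) for b in (0, 1)}

# The max-product variant for MAP replaces the sum with a max:
max_b = {b: max(P_a[a] * P_b_given_a[(a, b)] for a in (0, 1)) for b in (0, 1)}
```

Real variable elimination repeats this multiply-then-marginalize step once per non-query variable, choosing at each step only the factors that mention the variable being eliminated.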



Networks ◽  
1990 ◽  
Vol 20 (5) ◽  
pp. 661-685 ◽  
Author(s):  
R. Martin Chavez ◽  
Gregory F. Cooper




2016 ◽  
Vol 70 ◽  
pp. 13-35 ◽  
Author(s):  
Rafael Cabañas ◽  
Andrés Cano ◽  
Manuel Gómez-Olmedo ◽  
Anders L. Madsen






2013 ◽  
Vol 47 ◽  
pp. 393-439 ◽  
Author(s):  
N. Taghipour ◽  
D. Fierens ◽  
J. Davis ◽  
H. Blockeel

Lifted probabilistic inference algorithms exploit regularities in the structure of graphical models to perform inference more efficiently. More specifically, they identify groups of interchangeable variables and perform inference once per group, as opposed to once per variable. The groups are defined by means of constraints, so the flexibility of the grouping is determined by the expressivity of the constraint language. Existing approaches for exact lifted inference use specific languages for (in)equality constraints, which often have limited expressivity. In this article, we decouple lifted inference from the constraint language. We define operators for lifted inference in terms of relational algebra operators, so that they operate on the semantic level (the constraints' extension) rather than on the syntactic level, making them language-independent. As a result, lifted inference can be performed using more powerful constraint languages, which provide more opportunities for lifting. We empirically demonstrate that this can improve inference efficiency by orders of magnitude, allowing exact inference where until now only approximate inference was feasible.
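The payoff of grouping interchangeable variables can be seen in a toy case. The sketch below uses assumed numbers and is not the article's relational-algebra operators: for n exchangeable Bernoulli variables, ground inference enumerates all 2**n assignments, while the lifted computation reasons once per count of true variables, weighting each group by a binomial coefficient.

```python
from math import comb
from itertools import product

n, p = 10, 0.3  # n interchangeable Bernoulli(p) variables (assumed numbers)

# Lifted: one term per *group* of assignments sharing the same count k.
lifted = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Ground: one term per assignment -- 2**n of them.
ground = [0.0] * (n + 1)
for bits in product((0, 1), repeat=n):
    k = sum(bits)
    ground[k] += p**k * (1 - p)**(n - k)

# Both give the same distribution over counts; lifted used n+1 terms, not 2**n.
assert all(abs(a - b) < 1e-9 for a, b in zip(lifted, ground))
```

The constraint language determines which variables may be grouped this way; the article's point is that a more expressive language exposes more such groups.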



2009 ◽  
Vol 3 (3) ◽  
pp. 173-180 ◽  
Author(s):  
C.J. Butz ◽  
J. Chen ◽  
K. Konkel ◽  
P. Lingras

