Marginal Inference in Continuous Markov Random Fields Using Mixtures

Author(s):  
Yuanzhen Guo ◽  
Hao Xiong ◽  
Nicholas Ruozzi

Exact marginal inference in continuous graphical models is computationally challenging outside of a few special cases. Existing work on approximate inference has focused on approximately computing the messages as part of the loopy belief propagation algorithm either via sampling methods or moment matching relaxations. In this work, we present an alternative family of approximations that, instead of approximating the messages, approximates the beliefs in the continuous Bethe free energy using mixture distributions. We show that these types of approximations can be combined with numerical quadrature to yield algorithms with both theoretical guarantees on the quality of the approximation and significantly better practical performance in a variety of applications that are challenging for current state-of-the-art methods.
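A minimal sketch of the core idea (not the authors' implementation): represent a univariate belief as a Gaussian mixture and use simple numerical quadrature to estimate the average-energy and entropy terms that appear in a Bethe-style free energy. The mixture parameters and the potential below are illustrative assumptions.

```python
# A sketch only: a mixture-of-Gaussians belief plus quadrature, with
# illustrative (assumed) parameters rather than anything from the paper.
import numpy as np

def mixture_pdf(x, weights, means, stds):
    """Evaluate b(x) = sum_k w_k N(x; mu_k, sigma_k^2) at the points in x."""
    comps = np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / (stds * np.sqrt(2.0 * np.pi))
    return comps @ weights

weights = np.array([0.6, 0.4])                  # assumed mixture belief
means   = np.array([-1.0, 2.0])
stds    = np.array([0.8, 1.2])
log_potential = lambda x: -0.25 * x**4 + x**2   # assumed non-Gaussian potential

# Uniform quadrature grid wide enough to capture essentially all mixture mass.
x  = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
b  = mixture_pdf(x, weights, means, stds)

# Two ingredients of a Bethe-style free energy, estimated by quadrature.
avg_energy = np.sum(-log_potential(x) * b) * dx     # E_b[-log psi(x)]
entropy    = -np.sum(b * np.log(b + 1e-300)) * dx   # H[b]
print("quadrature estimate of E_b[-log psi]:", avg_energy)
print("quadrature estimate of H[b]:         ", entropy)
```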

2001 ◽  
Vol 13 (10) ◽  
pp. 2173-2200 ◽  
Author(s):  
Yair Weiss ◽  
William T. Freeman

Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. Local “belief propagation” rules of the sort proposed by Pearl (1988) are guaranteed to converge to the correct posterior probabilities in singly connected graphs. Recently, good performance has been obtained by using these same rules on graphs with loops, a method we refer to as loopy belief propagation. Perhaps the most dramatic instance is the near Shannon-limit performance of “Turbo codes,” whose decoding algorithm is equivalent to loopy propagation. Except for the case of graphs with a single loop, there has been little theoretical understanding of loopy propagation. Here we analyze belief propagation in networks with arbitrary topologies when the nodes in the graph describe jointly Gaussian random variables. We give an analytical formula relating the true posterior probabilities to those calculated using loopy propagation. We give sufficient conditions for convergence and show that when belief propagation converges, it gives the correct posterior means for all graph topologies, not just networks with a single loop. These results motivate using the powerful belief propagation algorithm in a broader class of networks and help clarify the empirical performance results.
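A minimal sketch of Gaussian belief propagation on a small loopy model (the model below is an illustrative assumption, not taken from the paper): at convergence the BP means coincide with the exact posterior means, as the analysis predicts, while the BP variances are generally only approximate.

```python
# A sketch only: synchronous Gaussian BP on an assumed 3-variable loopy model
# p(x) ∝ exp(-x'Ax/2 + b'x); not the paper's experiments.
import numpy as np

A = np.array([[ 4.0, -1.0,  0.5],     # diagonally dominant precision matrix
              [-1.0,  3.0, -0.8],     # (ensures convergence in this example)
              [ 0.5, -0.8,  5.0]])
b = np.array([1.0, -2.0, 0.5])
n = len(b)

P = np.zeros((n, n))                  # P[i, j]: precision of message i -> j
M = np.zeros((n, n))                  # M[i, j]: mean of message i -> j

for _ in range(100):                  # synchronous message updates
    P_new, M_new = np.zeros_like(P), np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            if i == j or A[i, j] == 0.0:
                continue
            rest = [k for k in range(n) if k not in (i, j) and A[k, i] != 0.0]
            prec = A[i, i] + sum(P[k, i] for k in rest)
            mean = (b[i] + sum(P[k, i] * M[k, i] for k in rest)) / prec
            P_new[i, j] = -A[i, j] ** 2 / prec
            M_new[i, j] = -A[i, j] * mean / P_new[i, j]
    P, M = P_new, M_new

bp_prec  = A.diagonal() + P.sum(axis=0)           # marginal precisions (approximate)
bp_means = (b + (P * M).sum(axis=0)) / bp_prec    # marginal means (exact at convergence)
print("loopy BP means:", bp_means)
print("exact means:   ", np.linalg.solve(A, b))
```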


2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Author(s):  
S. Y. Chen ◽  
Hanyang Tong ◽  
Zhongjie Wang ◽  
Sheng Liu ◽  
Ming Li ◽  
...  

Generalized belief propagation (GBP) is a region-based belief propagation algorithm that achieves good convergence in Markov random fields. However, its computation time is too high for practical engineering applications. This paper proposes a method to accelerate GBP. A caching technique and a chessboard message-passing strategy are used to speed up the algorithm, and a direction set method reduces the complexity of computing clique messages from quartic to cubic. With these strategies, the processing speed can be greatly increased. In addition, this is the first attempt to apply GBP to the stereo matching problem. Experiments show that the proposed algorithm achieves speed-ups of more than 15 times on typical stereo matching problems and infers more plausible results.
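A minimal sketch of the chessboard (checkerboard) passing idea on a toy grid MRF, not the paper's GBP implementation: min-sum loopy BP with a Potts smoothness term, where pixels of even and odd (row + column) parity are updated in alternating half-sweeps, so each half-sweep reuses the other half's freshly updated messages. Grid size, label count, and costs are assumed for illustration.

```python
# A sketch only: min-sum loopy BP on a toy grid with a checkerboard schedule.
import numpy as np

H, W, L = 8, 8, 4                     # grid size and number of labels
LAM = 1.0                             # Potts smoothness weight
rng = np.random.default_rng(0)
unary = rng.random((H, W, L))         # D_p(l): toy data costs

# msg[d, i, j, :] is the message arriving at pixel (i, j) from its neighbor in
# direction d (0: up, 1: down, 2: left, 3: right); d ^ 1 is the reverse direction.
msg = np.zeros((4, H, W, L))
offs = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def send(i, j):
    """Recompute all outgoing messages of pixel (i, j)."""
    cost = unary[i, j] + msg[:, i, j, :].sum(axis=0)
    for d, (di, dj) in enumerate(offs):
        ni, nj = i + di, j + dj
        if not (0 <= ni < H and 0 <= nj < W):
            continue
        h = cost - msg[d, i, j, :]                 # exclude the reverse message
        out = np.minimum(h, h.min() + LAM)         # closed-form Potts min-sum step
        msg[d ^ 1, ni, nj, :] = out - out.min()    # normalize for stability

for sweep in range(20):
    for parity in (0, 1):                          # checkerboard half-sweeps
        for i in range(H):
            for j in range(W):
                if (i + j) % 2 == parity:
                    send(i, j)

labels = (unary + msg.sum(axis=0)).argmin(axis=-1) # final per-pixel labeling
print(labels)
```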


2006 ◽  
Vol 26 ◽  
pp. 153-190 ◽  
Author(s):  
T. Heskes

Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms have been shown to correspond to extrema of the Bethe and Kikuchi free energy, both of which are approximations of the exact Helmholtz free energy. However, belief propagation does not always converge, which motivates approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP and UPS. Here we describe a class of algorithms that solves this typically non-convex constrained minimization problem through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. Several ideas are applied to obtain tight convex bounds that yield dramatic speed-ups over CCCP.
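A minimal sketch of the bound-minimization principle these double-loop algorithms share, applied to a toy non-convex objective rather than the Kikuchi free energy: split the objective into convex and concave parts, linearize the concave part to obtain a convex upper bound that touches the objective at the current iterate, and minimize that bound repeatedly. The objective and its split below are illustrative assumptions, not the paper's bounds.

```python
# A sketch only: a CCCP-style outer loop of convex upper-bound minimizations
# on an assumed toy objective.
import numpy as np

f_vex  = lambda x: 0.25 * x**4 + 0.3 * x   # convex part
f_cave = lambda x: -x**2                   # concave part
f      = lambda x: f_vex(x) + f_cave(x)    # non-convex objective

x = 3.0                                    # arbitrary starting point
for t in range(25):
    # Convex upper bound at x_t: f_vex(x) + f_cave(x_t) + f_cave'(x_t) (x - x_t).
    # Its minimizer solves x^3 + 0.3 - 2*x_t = 0, a convex inner problem with a
    # closed-form solution in this toy case.
    x_new = float(np.cbrt(2.0 * x - 0.3))
    assert f(x_new) <= f(x) + 1e-12        # each outer step decreases f
    x = x_new
print("stationary point:", x, "objective value:", f(x))
```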

