Loopy Belief Propagation
Recently Published Documents


TOTAL DOCUMENTS: 78 (five years: 6)
H-INDEX: 10 (five years: 1)

Author(s): Pu Tian

Factorization reduces computational complexity and is therefore an important tool in statistical machine learning for high-dimensional systems. Conventional molecular modeling, including molecular dynamics and Monte Carlo simulation of molecular systems, is a large research field based on approximate factorization of molecular interactions. Recently, local distribution theory was proposed to factorize the global joint distribution of a given molecular system into trainable local distributions. Belief propagation algorithms are a family of exact factorization algorithms for trees; they extend to approximate loopy belief propagation algorithms for graphs with loops. Although factorization of probability distributions is their common foundation, computational research on molecular systems and machine learning studies utilizing belief propagation algorithms have proceeded independently, each with its own track of algorithm development. This perspective briefly presents the connections and differences among these factorization algorithms, in the hope of stimulating further development of factorization algorithms for physical modeling of complex molecular systems.
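
To make the shared foundation concrete, the following is a minimal sketch of sum-product loopy belief propagation on a toy pairwise model. The three-variable cycle, the potentials, and all names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of sum-product loopy belief propagation on a toy
# three-variable cycle; potentials and names are illustrative only.
import numpy as np

unary = {v: np.array([0.6, 0.4]) for v in range(3)}          # node potentials
edges = [(0, 1), (1, 2), (2, 0)]
pairwise = {e: np.array([[1.2, 0.8], [0.8, 1.2]]) for e in edges}
neighbors = {v: [u for e in edges for u in e if v in e and u != v]
             for v in range(3)}

# Messages msgs[(i, j)] from variable i to variable j, initialized uniform.
msgs = {(i, j): np.ones(2) for e in edges for (i, j) in (e, e[::-1])}

for _ in range(50):                       # fixed-point iterations
    new = {}
    for (i, j) in msgs:
        # Potential indexed [state of sender i, state of receiver j].
        psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
        # Product of i's unary potential and all incoming messages except j's.
        prod = unary[i].copy()
        for k in neighbors[i]:
            if k != j:
                prod *= msgs[(k, i)]
        m = psi.T @ prod                  # marginalize over the sender's state
        new[(i, j)] = m / m.sum()         # normalize for numerical stability
    msgs = new

# Beliefs (approximate marginals): unary potential times incoming messages.
for v in range(3):
    b = unary[v] * np.prod([msgs[(k, v)] for k in neighbors[v]], axis=0)
    print(v, b / b.sum())
```

On a tree the same updates yield exact marginals; on the cycle they converge, when they converge, to the Bethe approximation of the marginals.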


2020, Vol 30 (4), pp. 955-969
Author(s): Bin Sheng, Ping Li, Xiaoxin Fang, Ping Tan, Enhua Wu

2019
Author(s): Julianus Pfeuffer, Timo Sachsenberg, Tjeerd M. H. Dijkstra, Oliver Serang, Knut Reinert, ...

Abstract

Accurate protein inference in the presence of shared peptides is still one of the key problems in bottom-up proteomics. Most protein inference tools employing simple heuristic inference strategies are efficient but exhibit reduced accuracy. More advanced probabilistic methods often exhibit better inference quality but tend to be too slow for large data sets.

Here we present a novel protein inference method, EPIFANY, combining a loopy belief propagation algorithm with convolution trees for efficient processing of Bayesian networks. We demonstrate that EPIFANY combines the reliable protein inference of Bayesian methods with significantly shorter runtimes. On the 2016 iPRG protein inference benchmark data, EPIFANY is the only tested method that finds all true-positive proteins at a 5% protein FDR without strict pre-filtering on the PSM level, yielding an increase in identification performance (+10% in the number of true positives and +35% in partial AUC) compared to previous approaches. Even very large data sets with hundreds of thousands of spectra, which are intractable with other Bayesian and some non-Bayesian tools, can be processed with EPIFANY within minutes. The increased inference quality, including the handling of shared peptides, results in better protein inference and thus more robust biological hypotheses.

EPIFANY is available as open-source software for all major platforms at https://OpenMS.de/epifany.
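
The convolution trees mentioned above address the bottleneck of passing messages through factors that depend on sums of many parent variables. Below is a minimal sketch of that idea, assuming independent discrete parents; the function name conv_tree and the Bernoulli example are illustrative assumptions, not EPIFANY's actual API.

```python
# Minimal sketch of the convolution-tree idea: the probability mass
# function (PMF) of a sum of independent discrete variables is built
# by pairwise convolutions in a balanced tree rather than one long
# chain, keeping intermediate PMFs as short as possible.
import numpy as np

def conv_tree(pmfs):
    """PMF of the sum of independent variables, given their PMFs."""
    layer = [np.asarray(p, dtype=float) for p in pmfs]
    while len(layer) > 1:
        nxt = [np.convolve(a, b) for a, b in zip(layer[::2], layer[1::2])]
        if len(layer) % 2:            # odd element is carried up unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Example: the number of observed peptides of a protein modeled as a
# sum of eight Bernoulli indicators with P(present) = 0.7 each.
pmf = conv_tree([[0.3, 0.7]] * 8)
print(pmf, pmf.sum())                 # length 9, sums to 1.0
```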


Author(s): Yuanzhen Guo, Hao Xiong, Nicholas Ruozzi

Exact marginal inference in continuous graphical models is computationally challenging outside of a few special cases. Existing work on approximate inference has focused on approximately computing the messages of the loopy belief propagation algorithm, either via sampling methods or via moment-matching relaxations. In this work, we present an alternative family of approximations that, instead of approximating the messages, approximates the beliefs in the continuous Bethe free energy using mixture distributions. We show that these approximations can be combined with numerical quadrature to yield algorithms with both theoretical guarantees on the quality of the approximation and significantly better practical performance in a variety of applications that are challenging for current state-of-the-art methods.
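
As a concrete illustration of the two ingredients named here, the sketch below represents a one-dimensional belief as a Gaussian mixture and evaluates the expectations such a representation induces with Gauss-Hermite quadrature. The mixture parameters and test integrands are assumptions for illustration, not the paper's construction.

```python
# Sketch: a Gaussian-mixture "belief" plus Gauss-Hermite quadrature.
# For each component, \int f(x) N(x; m, s^2) dx is approximated by
# (1/sqrt(pi)) * sum_k w_k f(m + sqrt(2) s t_k).
import numpy as np

weights = np.array([0.4, 0.6])        # mixture weights (sum to 1)
means = np.array([-1.0, 2.0])
stds = np.array([0.7, 1.1])

nodes, ghw = np.polynomial.hermite.hermgauss(20)   # 20-point rule

def mixture_expectation(f):
    """E_b[f(x)] under the mixture, one quadrature per component."""
    total = 0.0
    for w, m, s in zip(weights, means, stds):
        x = m + np.sqrt(2.0) * s * nodes
        total += w * np.sum(ghw * f(x)) / np.sqrt(np.pi)
    return total

print(mixture_expectation(lambda x: x))       # mean: 0.4*(-1) + 0.6*2 = 0.8
print(mixture_expectation(lambda x: x ** 2))  # second moment
```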


2018, Vol 7 (3), pp. 531-562
Author(s): Alyson K Fletcher, Sundeep Rangan

Abstract We consider the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix. The probabilistic model can impose constraints on the factors, such as sparsity and positivity, that arise commonly in learning problems. We propose a family of algorithms that reduce the problem to a sequence of scalar estimation computations. These algorithms are similar to approximate message-passing techniques based on Gaussian approximations of loopy belief propagation that have recently been used in compressed sensing. Leveraging analysis methods by Bayati and Montanari, we show that the asymptotic behavior of the algorithm is described by a simple scalar equivalent model, in which the distribution of the estimates at each iteration is identical to that of certain scalar estimates of the variables in Gaussian noise. Moreover, the effective Gaussian noise level is described by a set of state evolution equations. The proposed approach to deriving algorithms thus provides a computationally simple and general method for rank-one estimation problems with a precise analysis in certain high-dimensional settings.
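
To make the "sequence of scalar estimation computations" concrete, here is a schematic sketch of rank-one estimation by alternating scalar denoising. It is deliberately simpler than the algorithms analyzed in the paper: the Onsager-style correction terms and the state-evolution analysis are omitted, and the sparse prior, threshold, and problem sizes are illustrative assumptions.

```python
# Schematic rank-one estimation Y ≈ u v^T + noise by alternating
# scalar denoising (no Onsager correction; illustration only).
import numpy as np

rng = np.random.default_rng(0)
n, m, noise = 200, 300, 0.1

u_true = np.where(rng.random(n) < 0.1, rng.normal(size=n), 0.0)  # sparse factor
v_true = rng.normal(size=m)                                      # dense factor
Y = np.outer(u_true, v_true) + noise * rng.normal(size=(n, m))

def soft(x, t):
    """Entrywise soft threshold: the scalar denoiser for a sparse prior."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

v = rng.normal(size=m)                     # random initialization
for _ in range(30):
    u = soft(Y @ v / (v @ v), t=0.05)      # scalar estimates of u given v
    if u @ u == 0.0:
        break                              # threshold killed the estimate
    v = Y.T @ u / (u @ u)                  # least-squares estimate of v given u

cos = abs(u @ u_true) / (np.linalg.norm(u) * np.linalg.norm(u_true) + 1e-12)
print("alignment with true u:", cos)
```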

