Variational Bayes
Recently Published Documents


TOTAL DOCUMENTS: 292 (FIVE YEARS: 75)
H-INDEX: 20 (FIVE YEARS: 3)

2021
Author(s): Yiming Wang, Ximing Li, Jihong Ouyang, Zeqi Guo, Yimeng Wang

2021, Vol 32 (1)
Author(s): Nathaniel Tomasetti, Catherine Forbes, Anastasios Panagiotelis

2021, Vol 31 (6)
Author(s): Minh-Ngoc Tran, Dang H. Nguyen, Duy Nguyen

2021, pp. 1-39
Author(s): Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry

Abstract: Catastrophic forgetting is the notorious vulnerability of neural networks to changes in the data distribution during learning. This phenomenon has long been considered a major obstacle to using learning agents in realistic continual learning settings. A large body of continual learning research assumes that task boundaries are known during training. However, only a few works consider scenarios in which task boundaries are unknown or not well defined: task-agnostic scenarios. The optimal Bayesian solution to this problem requires an intractable online Bayes update of the weight posterior. We aim to approximate the online Bayes update as accurately as possible. To do so, we derive novel fixed-point equations for the online variational Bayes optimization problem for multivariate Gaussian parametric distributions. By iterating the posterior through these fixed-point equations, we obtain an algorithm (FOO-VB) for continual learning that can handle nonstationary data distributions using a fixed architecture and without external memory (i.e., without access to previous data). We demonstrate that FOO-VB outperforms existing methods in task-agnostic scenarios. A PyTorch implementation of FOO-VB is available at https://github.com/chenzeno/FOO-VB.
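
The chained-posterior structure the abstract describes can be illustrated with a minimal sketch: a diagonal Gaussian posterior over the weights is refined with a stochastic gradient step on the negative ELBO, and the updated posterior then serves as the prior for the next batch. This is not the FOO-VB fixed-point iteration itself (the paper's equations are not reproduced here); the single-step update, the helper names, and the toy likelihood are illustrative assumptions.

```python
# Minimal sketch (not the paper's FOO-VB equations): one online variational
# Bayes step with a diagonal Gaussian posterior over a weight vector. The
# posterior learned on the current batch becomes the prior for the next one.
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    # KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), summed over weights.
    return (torch.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2) - 0.5).sum()

def online_vb_step(mu, rho, prior_mu, prior_sigma, neg_log_lik, lr=1e-2):
    # One stochastic step on the negative ELBO via the reparameterization
    # trick; sigma = softplus(rho) keeps the posterior scale positive.
    mu = mu.clone().requires_grad_(True)
    rho = rho.clone().requires_grad_(True)
    sigma = F.softplus(rho)
    w = mu + sigma * torch.randn_like(mu)          # sample weights
    loss = neg_log_lik(w) + gaussian_kl(mu, sigma, prior_mu, prior_sigma)
    loss.backward()
    with torch.no_grad():
        mu -= lr * mu.grad
        rho -= lr * rho.grad
    return mu.detach(), rho.detach()

# Usage: after each batch the new (mu, sigma) become the prior, so no
# previous data needs to be stored.
mu, rho = torch.zeros(10), torch.zeros(10)
prior_mu, prior_sigma = torch.zeros(10), torch.ones(10)
for _ in range(100):                               # stream of batches
    nll = lambda w: 0.5 * ((w - 1.0) ** 2).sum()   # toy Gaussian likelihood
    mu, rho = online_vb_step(mu, rho, prior_mu, prior_sigma, nll)
    prior_mu, prior_sigma = mu, F.softplus(rho)
```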


Author(s): Eduardo A. Aponte, Yu Yao, Sudhir Raman, Stefan Frässle, Jakob Heinzle, ...

Abstract: In generative modeling of neuroimaging data, such as dynamic causal modeling (DCM), one typically considers several alternative models, either to determine the most plausible explanation for observed data (Bayesian model selection) or to account for model uncertainty (Bayesian model averaging). Both procedures rest on estimates of the model evidence, a principled trade-off between model accuracy and complexity. In the context of DCM, the log evidence is usually approximated using variational Bayes. Although this approach is highly efficient, it makes distributional assumptions and is vulnerable to local extrema. This paper introduces the use of thermodynamic integration (TI) for Bayesian model selection and averaging in the context of DCM. TI is based on Markov chain Monte Carlo sampling, which is asymptotically exact but orders of magnitude slower than variational Bayes. We explain the theoretical foundations of TI, covering key concepts such as the free energy and its origins in statistical physics, with the aim of conveying an in-depth understanding of the method. In addition, we demonstrate the practical application of TI via a series of examples that serve to guide the user in applying the method. These examples also demonstrate that, given an efficient implementation and hardware capable of parallel processing, the high computational demand of TI can be overcome. The TI implementation presented in this paper is freely available as part of the open-source software TAPAS.
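
To make the TI identity concrete, here is a minimal sketch (not the TAPAS implementation) on a toy conjugate Gaussian model. It estimates the log evidence as log p(y) = ∫₀¹ E_β[log p(y|θ)] dβ, where the expectation is taken under the power posterior p_β(θ|y), proportional to p(y|θ)^β p(θ). Conjugacy lets us sample each power posterior exactly, so no MCMC is needed for the illustration; the model, the temperature ladder, and all names are illustrative assumptions.

```python
# Minimal TI sketch on a toy model (illustrative, not the TAPAS code):
# y_i ~ N(theta, sigma^2) with prior theta ~ N(0, tau^2). The power
# posterior p_beta(theta|y), proportional to p(y|theta)^beta * p(theta),
# is Gaussian, so it can be sampled exactly at every temperature beta.
import numpy as np

rng = np.random.default_rng(0)
sigma, tau = 1.0, 2.0                        # likelihood sd, prior sd
y = rng.normal(0.5, sigma, size=20)          # synthetic observations
n, s = len(y), y.sum()

def log_lik(theta):
    # log p(y | theta) evaluated for a vector of theta samples.
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - 0.5 * ((y[:, None] - theta) ** 2).sum(axis=0) / sigma**2)

betas = np.linspace(0.0, 1.0, 32) ** 5       # ladder packed near beta = 0
expected_ll = []
for beta in betas:
    prec = 1 / tau**2 + beta * n / sigma**2  # conjugate power posterior
    mean = (beta * s / sigma**2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec), size=5000)
    expected_ll.append(log_lik(theta).mean())

# log p(y) = integral of E_beta[log p(y|theta)] over beta in [0, 1],
# approximated with the trapezoidal rule along the ladder.
ell = np.array(expected_ll)
log_evidence = np.sum(np.diff(betas) * (ell[1:] + ell[:-1]) / 2)
print(f"TI estimate of log evidence: {log_evidence:.3f}")
```

Packing the ladder near β = 0 is a common design choice because the integrand changes fastest there, where the power posterior transitions away from the prior.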


2021, pp. 107335
Author(s): Pavle Boškoski, Matija Perne, Martina Rameša, Biljana Mileva Boshkoska
