Variational Inference
Recently Published Documents

TOTAL DOCUMENTS: 348 (five years: 198)
H-INDEX: 22 (five years: 5)

Author(s): Qingxiu Guo, Jianchang Liu, Shubin Tan, Dongsheng Yang, Yuan Li, ...

For multimode process monitoring, accurate mode information is difficult to obtain, and monitoring each mode separately increases the complexity of the system. This paper proposes a multimode process monitoring strategy based on an improved variational inference Gaussian mixture model with locality preserving projections (IVIGMM-LPP). First, the raw data are projected into a feature space in which samples retain their original neighborhood structure. Second, a new discriminant condition is introduced to reduce the influence of the initial category parameters on the iteration results of the VIGMM model. Then, the data are updated using mode information, so that the scales of different modes are adjusted to the same level. Next, a deviation vector is introduced to eliminate the multi-center structure of the data. Finally, a monitoring statistic is constructed to monitor the process. IVIGMM-LPP establishes one model for monitoring without the premise of knowing the mode information, which reduces the complexity of the monitoring process and improves the fault detection rate. The experimental results of a numerical case and the Tennessee Eastman (TE) process verify the effectiveness of IVIGMM-LPP.
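A minimal sketch of the general pipeline such a method builds on (an assumed illustration, not the authors' IVIGMM-LPP): a locality preserving projection computed from a graph Laplacian, a variational-inference Gaussian mixture (scikit-learn's BayesianGaussianMixture), and a negative log-likelihood monitoring statistic with a control limit set from training data. The neighborhood size, kernel width, projection dimension, and quantile are illustrative choices.

```python
# Illustrative sketch only: LPP projection + variational GMM + NLL monitoring
# statistic. All hyperparameters below (k, t, n_dims, n_components, quantile)
# are assumptions for the toy example.
import numpy as np
from scipy.linalg import eigh
from sklearn.mixture import BayesianGaussianMixture
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_dims=2, k=10, t=1.0):
    """Locality preserving projection: solve X^T L X a = lam X^T D X a."""
    W = kneighbors_graph(X, k, mode="distance", include_self=False).toarray()
    W = np.exp(-W ** 2 / t) * (W > 0)            # heat-kernel weights on k-NN edges
    W = np.maximum(W, W.T)                       # symmetrize the affinity graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for numerical stability
    _, vecs = eigh(A, B)                         # generalized symmetric eigenproblem
    return vecs[:, :n_dims]                      # directions with smallest eigenvalues

rng = np.random.default_rng(0)
# training data drawn from two operating modes
X_train = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(4, 1, (200, 5))])
P = lpp(X_train)
Z_train = X_train @ P

# variational-inference Gaussian mixture fitted in the projected space
gmm = BayesianGaussianMixture(n_components=2, covariance_type="full",
                              random_state=0).fit(Z_train)
stat_train = -gmm.score_samples(Z_train)        # monitoring statistic: negative log-likelihood
threshold = np.quantile(stat_train, 0.99)       # 99% control limit from normal data

X_fault = rng.normal(8, 1, (50, 5))             # samples far from both known modes
alarms = -gmm.score_samples(X_fault @ P) > threshold
print(f"fault detection rate on the faulty batch: {alarms.mean():.2f}")
```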


Entropy, 2021, Vol. 23 (12), pp. 1629
Author(s): Ali Unlu, Laurence Aitchison

We developed Variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without stochastic sampling of the neural-network weights. The Variational Laplace objective is simple to evaluate, as it amounts to the log-likelihood plus a weight-decay term plus a squared-gradient regularizer. Variational Laplace gave better test performance and better expected calibration error than maximum a posteriori inference and standard sampling-based variational inference, despite using the same variational approximate posterior. Finally, we emphasize the care needed when benchmarking standard VI, as there is a risk of stopping early, before the variance parameters have converged. We show that such early stopping can be avoided by increasing the learning rate for the variance parameters.
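As a rough illustration of the shape of such an objective (an assumed toy setup, not the authors' implementation or hyperparameters), the loss below combines a Gaussian negative log-likelihood, a weight-decay term, and a squared-gradient regularizer on the weights for a small regression network in PyTorch.

```python
# Sketch of a "log-likelihood + weight decay + squared-gradient" objective.
# The coefficients weight_decay and sigma2 are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)

def objective(model, x, y, weight_decay=1e-4, sigma2=1e-3):
    nll = nn.functional.mse_loss(model(x), y)     # Gaussian negative log-likelihood (up to constants)
    params = list(model.parameters())
    # squared-gradient regularizer: 0.5 * sigma^2 * ||d nll / d w||^2,
    # built with create_graph=True so it can itself be differentiated
    grads = torch.autograd.grad(nll, params, create_graph=True)
    sq_grad = sum((g ** 2).sum() for g in grads)
    wd = sum((p ** 2).sum() for p in params)
    return nll + weight_decay * wd + 0.5 * sigma2 * sq_grad

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = objective(model, x, y)
    loss.backward()
    opt.step()
print(f"final objective: {loss.item():.4f}")
```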


Author(s): Lancelot Da Costa, Karl Friston, Conor Heins, Grigorios A. Pavliotis

This paper develops a Bayesian mechanics for adaptive systems. First, we model the interface between a system and its environment with a Markov blanket. This affords conditions under which states internal to the blanket encode information about external states. Second, we introduce dynamics and represent adaptive systems as Markov blankets at steady state. This allows us to identify a wide class of systems whose internal states appear to infer external states, consistent with variational inference in Bayesian statistics and theoretical neuroscience. Finally, we partition the blanket into sensory and active states. It follows that active states can be seen as performing active inference and well-known forms of stochastic control (such as PID control), which are prominent formulations of adaptive behaviour in theoretical biology and engineering.
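A small numerical illustration of the Markov blanket idea (an assumed Gaussian toy example, not the paper's model): if the precision matrix has a zero internal-external block, internal and external states are conditionally independent given the blanket, and the blanket alone carries the information internal states have about external states.

```python
# Toy Gaussian Markov blanket: external (eta), blanket (b), internal (mu).
# The zero eta-mu entry in the precision matrix encodes conditional
# independence of internal and external states given the blanket.
import numpy as np

rng = np.random.default_rng(0)
# index convention (assumed): 0 = eta (external), 1 = b (blanket), 2 = mu (internal)
precision = np.array([[ 2.0, -0.8,  0.0],
                      [-0.8,  2.0, -0.6],
                      [ 0.0, -0.6,  2.0]])
cov = np.linalg.inv(precision)
eta, b, mu = rng.multivariate_normal(np.zeros(3), cov, size=20000).T

# For a Gaussian, E[eta | b] is linear in b: the blanket mediates everything
# the internal states can "know" about the external states.
beta = cov[0, 1] / cov[1, 1]
print("corr(eta, E[eta|b]):", np.corrcoef(eta, beta * b)[0, 1])

# Partial correlation of eta and mu given b, read off the precision matrix;
# it is exactly zero here because of the zero block (the blanket condition).
pcorr = -precision[0, 2] / np.sqrt(precision[0, 0] * precision[2, 2])
print("partial corr(eta, mu | b):", pcorr)
```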


2021
Author(s): Ahmed Hammam, Seyed Eghbal Ghobadi, Frank Bonarens, Christoph Stiller

2021
Author(s): Robert Hu, Geoff K. Nicholls, Dino Sejdinovic

We outline an inherent flaw of tensor factorization models when latent factors are expressed as a function of side information and propose a novel method to mitigate this. We coin our methodology Kernel Fried Tensor (KFT) and present it as a large-scale prediction and forecasting tool for high-dimensional data. Our results show superior performance against LightGBM and field-aware factorization machines (FFM), two algorithms with proven track records that are widely used in large-scale prediction. We also develop a variational inference framework for KFT that enables associating its predictions and forecasts with calibrated uncertainty estimates on several datasets.
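As a generic illustration of the setting being analysed (an assumed toy example in PyTorch, not the authors' KFT), the sketch below fits a CP-style factorization of a third-order tensor in which each latent factor matrix is a linear function of per-mode side information.

```python
# Illustrative CP factorization with side-information-parameterized factors:
# each factor matrix is U_mode = S_mode @ W_mode, so the latent factors are
# functions of side features rather than free parameters.
import torch

torch.manual_seed(0)
I, J, K, R = 20, 15, 10, 4                                        # tensor sizes and CP rank (assumed)
S = [torch.randn(I, 6), torch.randn(J, 5), torch.randn(K, 3)]     # per-mode side information
Y = torch.randn(I, J, K)                                          # observed tensor (random for the sketch)

W = [torch.randn(s.shape[1], R, requires_grad=True) for s in S]   # one weight matrix per mode
opt = torch.optim.Adam(W, lr=0.05)

for step in range(500):
    U = [s @ w for s, w in zip(S, W)]                             # factors as functions of side info
    Y_hat = torch.einsum("ir,jr,kr->ijk", U[0], U[1], U[2])       # CP reconstruction
    loss = ((Y - Y_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction MSE: {loss.item():.4f}")
```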


Entropy, 2021, Vol. 23 (11), pp. 1475
Author(s): Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato

Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).
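For context, the baseline being refined can be sketched as standard mean-field Gaussian variational inference trained by maximizing a Monte Carlo estimate of the ELBO with the reparameterization trick; the toy correlated-Gaussian target below (an assumed example, not one of the paper's tasks) also shows why a coarse mean-field approximation misses posterior dependencies. This is not the authors' local-refinement procedure.

```python
# Minimal mean-field Gaussian VI: maximize a Monte Carlo ELBO estimate for a
# correlated 2D Gaussian target. The mean-field posterior cannot represent the
# correlation, illustrating the expressiveness limitation discussed above.
import torch

torch.manual_seed(0)
target = torch.distributions.MultivariateNormal(
    torch.zeros(2), torch.tensor([[1.0, 0.8], [0.8, 1.0]]))

mu = torch.zeros(2, requires_grad=True)            # variational mean
log_std = torch.zeros(2, requires_grad=True)       # variational log std (mean-field)
opt = torch.optim.Adam([mu, log_std], lr=0.05)

for step in range(2000):
    q = torch.distributions.Normal(mu, log_std.exp())
    z = q.rsample((64,))                           # reparameterized samples
    elbo = (target.log_prob(z) - q.log_prob(z).sum(-1)).mean()
    opt.zero_grad()
    (-elbo).backward()
    opt.step()

print(f"ELBO: {elbo.item():.3f}, fitted std: {log_std.exp().detach().numpy()}")
```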

