Stochastic Variational Inference
Recently Published Documents


TOTAL DOCUMENTS: 28 (FIVE YEARS: 15)
H-INDEX: 4 (FIVE YEARS: 3)

2021, Vol. 11 (19), pp. 8976
Author(s): Junghyun Oh, Gyuho Eoh

As mobile robots perform long-term operations in large-scale environments, coping with perceptual changes has recently become an important issue. This paper introduces a stochastic variational inference and learning architecture that extracts condition-invariant features for visual place recognition in changing environments. Under the assumption that the latent representation of a variational autoencoder can be divided into condition-invariant and condition-sensitive features, a new variational autoencoder structure is proposed and a variational lower bound is derived to train the model. After training, condition-invariant features are extracted from test images to compute a similarity matrix, so that places can be recognized even under severe environmental changes. Experiments were conducted to verify the proposed method, and the results showed that the assumption is reasonable and effective for recognizing places in changing environments.
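The split-latent idea can be sketched in a few lines. The following is a minimal PyTorch illustration, not the authors' implementation: the class name, layer sizes, and the plain Gaussian ELBO are our assumptions, and the paper's derived lower bound for separating the two feature groups is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitLatentVAE(nn.Module):
    """Hypothetical sketch: a VAE whose latent code is split into
    condition-invariant and condition-sensitive parts."""
    def __init__(self, x_dim=784, z_inv=16, z_sens=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU())
        self.mu = nn.Linear(h, z_inv + z_sens)
        self.logvar = nn.Linear(h, z_inv + z_sens)
        self.dec = nn.Sequential(nn.Linear(z_inv + z_sens, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))
        self.z_inv = z_inv

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

    def invariant_code(self, x):
        # Only the first z_inv latent dimensions are used for place matching.
        return self.mu(self.enc(x))[:, :self.z_inv]

def elbo_loss(recon, x, mu, logvar):
    # Standard VAE lower bound: reconstruction term plus KL to N(0, I).
    recon_term = F.mse_loss(recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl

def similarity_matrix(model, queries, database):
    # S[i, j] = cosine similarity between the condition-invariant codes
    # of query image i and database image j.
    q = F.normalize(model.invariant_code(queries), dim=1)
    d = F.normalize(model.invariant_code(database), dim=1)
    return q @ d.t()
```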


2021
Author(s): Yuan Jin, Jin Chai, Olivier Jung

Thanks to their flexibility and robustness to overfitting, Gaussian Processes (GPs) are widely used as black-box function approximators. Deep Gaussian Processes (DGPs) are multilayer generalizations of GPs. The deep architecture alleviates the kernel dependence of GPs while complicating model inference. The so-called doubly stochastic variational approach, which does not force independence between layers, has shown its effectiveness for large-dataset classification and regression in the literature. Meanwhile, like deep neural networks, DGPs require application-specific architectures. In addition, the doubly stochastic process introduces extra hyperparameters, which further increases the difficulty of model definition and training. In this study, we apply doubly stochastic variational inference for DGPs as a surrogate model for high-dimensional structural data regression drawn from the turbomachinery domain. A discrete optimizer, based on a classifier that discriminates good solutions from bad ones, is used to automate DGP model design and tuning. Empirical experiments are first performed on analytical functions to demonstrate the capability of DGPs to handle high-dimensional and non-stationary data. Two industrial turbomachinery problems, with 80 and 180 input dimensions respectively, are then addressed. The first application is a turbine frame design problem. In the second, a DGP is used to describe the correlation between the 3D blade profiles of a multi-stage low-pressure turbine and the corresponding turbine total-to-total efficiency. Through these two applications, we show the applicability of the proposed automatically designed DGPs in the turbomachinery domain by highlighting that they outperform classic GPs.
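The "doubly stochastic" part of the approach refers to two sources of randomness in the ELBO estimate: mini-batch subsampling of the data and Monte Carlo sampling of every layer's output, so that correlations between layers are preserved. The sketch below illustrates only this structure; gp_layer_sample is a hypothetical stand-in for a full sparse variational GP layer, the KL terms are omitted, and a one-dimensional Gaussian likelihood is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def gp_layer_sample(x, params):
    """Stand-in for a sparse variational GP layer: returns a sample from
    the layer's marginal posterior q(f(x)). A real layer would compute
    the mean and variance from inducing points; here a linear mean plus
    learned noise is used purely for illustration."""
    mean = x @ params["W"]
    std = np.exp(params["log_std"])
    return mean + std * rng.standard_normal(mean.shape)

def doubly_stochastic_elbo_estimate(X, y, layers, batch_size=64):
    N = X.shape[0]
    # Stochasticity 1: mini-batch subsampling of the training data.
    idx = rng.choice(N, size=batch_size, replace=False)
    f = X[idx]
    # Stochasticity 2: sampling the output of every layer, so no
    # independence between layers is forced.
    for params in layers:
        f = gp_layer_sample(f, params)
    # Rescaled log-likelihood term (final layer assumed 1-dimensional);
    # the KL penalties of the real DGP ELBO are omitted in this sketch.
    log_lik = -0.5 * np.sum((y[idx] - f.squeeze()) ** 2)
    return (N / batch_size) * log_lik
```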


2021
Author(s): Aristeidis Panos, Petros Dellaportas, Michalis K. Titsias

We introduce a Gaussian process latent factor model for multi-label classification that captures correlations among class labels by using a small set of latent Gaussian process functions. To address the computational challenges that arise when the number of training instances is very large, we introduce several techniques based on variational sparse Gaussian process approximations and stochastic optimization. Specifically, we apply doubly stochastic variational inference that sub-samples both data instances and classes, which allows us to cope with big data. Furthermore, we show that it is possible and beneficial to optimize over the inducing points using gradient-based methods, even in very high-dimensional input spaces involving up to hundreds of thousands of dimensions. We demonstrate the usefulness of our approach on several real-world large-scale multi-label learning problems.
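The double subsampling can be made concrete with a small estimator: subsample rows (instances) and columns (classes) of the label matrix and rescale so the estimate of the likelihood term stays unbiased. The sketch below is our illustration, not the authors' code; in the paper the scores would come from the latent GP functions mixed by a loading matrix, whereas here they are passed in directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def doubly_subsampled_log_lik(Y, scores, n_batch=128, c_batch=32):
    """Unbiased estimate of the ELBO's expected log-likelihood term when
    both data instances and class labels are subsampled.
    Y: (N, C) binary label matrix; scores: (N, C) latent model scores."""
    N, C = Y.shape
    rows = rng.choice(N, size=n_batch, replace=False)
    cols = rng.choice(C, size=c_batch, replace=False)
    s = scores[np.ix_(rows, cols)]
    y = Y[np.ix_(rows, cols)]
    # Bernoulli log-likelihood with a logistic link:
    # log p(y|s) = y*log(sigmoid(s)) + (1-y)*log(1-sigmoid(s)).
    log_p = y * (-np.logaddexp(0.0, -s)) + (1 - y) * (-np.logaddexp(0.0, s))
    # Rescale so the estimator is unbiased for the full N x C double sum.
    return (N / n_batch) * (C / c_batch) * log_p.sum()
```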


2020, Vol. 34 (04), pp. 4477-4484
Author(s): Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo

Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions over the network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. We propose the MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks. We formulate a two-stage hierarchical model: we first find the maximum likelihood estimates of the weights with a DNN, and then set the weight priors using an empirical Bayes approach to infer the posterior with variational inference. We empirically evaluate the proposed approach on real-world tasks, including image classification, video activity recognition, and audio classification, with neural network architectures of varying complexity. We also evaluate our approach on a diabetic retinopathy diagnosis task and benchmark it against state-of-the-art Bayesian deep learning techniques. We demonstrate that the MOPED method enables scalable variational inference and provides reliable uncertainty quantification.
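The two-stage recipe lends itself to a compact sketch: train an ordinary deterministic DNN, then use its maximum likelihood weights to set the prior and to initialize the variational posterior. The function below is our reading of that recipe, not the released implementation; the unit prior scale and the delta hyperparameter are assumptions for illustration.

```python
import torch

def moped_init(w_mle, delta=0.1):
    """Sketch of a MOPED-style initialization for one weight tensor.
    Stage 1 is assumed done: w_mle holds maximum likelihood weights from
    a deterministic DNN with the same architecture."""
    prior_mu = w_mle.detach().clone()        # empirical Bayes prior mean
    prior_sigma = torch.ones_like(w_mle)     # assumed unit prior scale
    q_mu = w_mle.detach().clone()            # posterior mean starts at the MLE
    # Posterior scale proportional to each weight's magnitude, stored as
    # rho with sigma = softplus(rho), a common mean-field parameterization.
    sigma = delta * w_mle.abs() + 1e-6
    q_rho = torch.log(torch.expm1(sigma))    # inverse softplus
    return prior_mu, prior_sigma, q_mu, q_rho
```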


2020, Vol. 4 (POPL), pp. 1-33
Author(s): Wonyeol Lee, Hangyeol Yu, Xavier Rival, Hongseok Yang
