Stochastic variational inference for probabilistic optimal power flows

2021 ◽  
Vol 200 ◽  
pp. 107465
Author(s):  
Markus Löschenbrand


Author(s):
Yuto Yamaguchi ◽  
Kohei Hayashi

How can we decompose a data tensor if the indices are partially missing? Tensor decomposition is a fundamental tool for analyzing tensor data. Suppose, for example, we have a 3rd-order tensor X where each element Xijk takes 1 if user i posts word j at location k on Twitter. Standard tensor decomposition expects all indices to be observed, but in some tweets the location k can be missing. In this paper, we study a tensor decomposition problem where the indices (i, j, or k) of some observed elements are partially missing. To address this problem, we propose a probabilistic tensor decomposition model that handles missing indices as latent variables. To infer them, we derive an algorithm based on stochastic variational inference, which makes it possible to scalably leverage the information in the incomplete data. Experiments on both synthetic and real datasets show that the proposed method achieves higher accuracy in the tensor completion task than baselines that cannot handle missing indices.
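
The core idea, treating a missing index as a latent variable and averaging updates over its posterior, can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm (which fits a full probabilistic model with stochastic variational inference); the EM-style point-estimate updates, tensor dimensions, Gaussian noise model, and step size below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3                      # tensor dimensions and CP rank

# Ground-truth CP factors generate a synthetic tensor X
U, V, W = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
X = np.einsum('ir,jr,kr->ijk', U, V, W)

draw = lambda n, d: rng.integers(0, d, n)
full = list(zip(draw(60, I), draw(60, J), draw(60, K)))  # all indices observed
part = list(zip(draw(30, I), draw(30, J), draw(30, K)))  # k will be hidden

A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
lr, sigma2 = 0.05, 0.05

for step in range(4000):
    # Fully indexed observation: ordinary stochastic gradient step on
    # the squared CP reconstruction error
    i, j, k = full[step % len(full)]
    g = A[i] @ (B[j] * C[k]) - X[i, j, k]
    gA, gB, gC = g * B[j] * C[k], g * A[i] * C[k], g * A[i] * B[j]
    A[i] -= lr * gA; B[j] -= lr * gB; C[k] -= lr * gC

    # Observation with missing index k: compute a posterior over the
    # possible k values (E-step), then take a responsibility-weighted
    # gradient step (M-step)
    i, j, k_true = part[step % len(part)]
    x = X[i, j, k_true]                       # value is observed, k_true is not
    preds = np.einsum('r,kr->k', A[i] * B[j], C)
    logp = -0.5 * (x - preds) ** 2 / sigma2   # Gaussian log-likelihood per k
    resp = np.exp(logp - logp.max()); resp /= resp.sum()
    Ai, Bj = A[i].copy(), B[j].copy()
    for k in range(K):
        g = resp[k] * (preds[k] - x)
        A[i] -= lr * g * Bj * C[k]
        B[j] -= lr * g * Ai * C[k]
        C[k] -= lr * g * Ai * Bj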


2019 ◽  
Author(s):  
Adam Gayoso ◽  
Romain Lopez ◽  
Zoë Steier ◽  
Jeffrey Regier ◽  
Aaron Streets ◽  
...  

Cellular indexing of transcriptomes and epitopes by sequencing (CITE-seq) combines unbiased single-cell transcriptome measurements with surface protein quantification comparable to flow cytometry, the gold standard for cell type identification. However, current analysis pipelines cannot address the two primary challenges of CITE-seq data: combining both modalities in a shared latent space that harnesses the power of the paired measurements, and handling the technical artifacts of the protein measurement, which is obscured by non-negligible background noise. Here we present Total Variational Inference (totalVI), a fully probabilistic end-to-end framework for normalizing and analyzing CITE-seq data, based on a hierarchical Bayesian model. In totalVI, the mRNA and protein measurements for each cell are generated from a low-dimensional latent random variable unique to that cell, representing its cellular state. totalVI uses deep neural networks to specify conditional distributions. By leveraging advances in stochastic variational inference, it scales easily to millions of cells. Explicit modeling of nuisance factors enables totalVI to produce denoised data in both domains, as well as a batch-corrected latent representation of cells for downstream analysis tasks.
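
The generative structure described here, a per-cell latent variable that produces both measurement modalities, with neural networks parameterizing the conditional distributions, can be sketched as a small two-decoder VAE. The sketch below is a simplified stand-in, not totalVI itself: the real model uses negative-binomial count likelihoods, a protein-background mixture, and batch covariates, whereas this version assumes Gaussian likelihoods and invented dimensions.

import torch
import torch.nn as nn

class JointVAE(nn.Module):
    """Toy two-modality VAE: one latent cell state generates both RNA and
    protein observations (a simplified stand-in for totalVI's model)."""
    def __init__(self, n_genes, n_proteins, n_latent=10, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_genes + n_proteins, n_hidden), nn.ReLU())
        self.z_mean = nn.Linear(n_hidden, n_latent)
        self.z_logvar = nn.Linear(n_hidden, n_latent)
        self.dec_rna = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.ReLU(),
                                     nn.Linear(n_hidden, n_genes))
        self.dec_pro = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.ReLU(),
                                     nn.Linear(n_hidden, n_proteins))

    def forward(self, rna, protein):
        h = self.encoder(torch.cat([rna, protein], dim=-1))
        mu, logvar = self.z_mean(h), self.z_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec_rna(z), self.dec_pro(z), mu, logvar

def elbo(model, rna, protein):
    rna_hat, pro_hat, mu, logvar = model(rna, protein)
    # Gaussian reconstruction terms stand in for totalVI's count likelihoods
    recon = ((rna - rna_hat) ** 2).sum(-1) + ((protein - pro_hat) ** 2).sum(-1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    return (recon + kl).mean()

# Minibatch gradient steps are what let stochastic VI scale to many cells
model = JointVAE(n_genes=200, n_proteins=14)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
rna, protein = torch.randn(128, 200), torch.randn(128, 14)  # fake minibatch
opt.zero_grad()
loss = elbo(model, rna, protein)
loss.backward()
opt.step()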


2020 ◽  
Vol 34 (04) ◽  
pp. 4477-4484
Author(s):  
Ranganath Krishnan ◽  
Mahesh Subedar ◽  
Omesh Tickoo

Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions over neural network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. We propose the MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks. We formulate a two-stage hierarchical modeling approach: first, find the maximum likelihood estimates of the weights with a deterministic DNN; then, set the weight priors using an empirical Bayes approach and infer the posterior with variational inference. We empirically evaluate the proposed approach on real-world tasks including image classification, video activity recognition, and audio classification with neural network architectures of varying complexity. We also evaluate our approach on a diabetic retinopathy diagnosis task and benchmark it against state-of-the-art Bayesian deep learning techniques. We demonstrate that MOPED enables scalable variational inference and provides reliable uncertainty quantification.
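
The two-stage recipe is easy to sketch for a single layer: take pretrained MLE weights, center a Gaussian prior on them, and initialize a mean-field variational posterior at the same point before running variational inference. The sketch below is a hedged illustration, not the paper's exact formulation: the prior/posterior scale delta * |w| and the toy regression likelihood are assumptions made for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Mean-field Gaussian layer whose prior and variational posterior are
    both seeded from pretrained MLE weights, in the spirit of MOPED."""
    def __init__(self, w_mle, b_mle, delta=0.1):
        super().__init__()
        # Prior Normal(w_mle, (delta*|w_mle|)^2), set via empirical Bayes
        self.register_buffer("prior_mu", w_mle)
        self.register_buffer("prior_sigma", delta * w_mle.abs() + 1e-6)
        # Variational posterior initialized at the MLE solution
        self.mu = nn.Parameter(w_mle.clone())
        self.rho = nn.Parameter(
            torch.log(torch.expm1(self.prior_sigma)))  # inverse softplus
        self.bias = nn.Parameter(b_mle.clone())

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)  # reparameterized sample
        return x @ w.t() + self.bias

    def kl(self):
        # KL(q || p) between two diagonal Gaussians, summed over weights
        sigma = F.softplus(self.rho)
        return (torch.log(self.prior_sigma / sigma)
                + (sigma ** 2 + (self.mu - self.prior_mu) ** 2)
                  / (2 * self.prior_sigma ** 2) - 0.5).sum()

# Stage 1 (assumed done elsewhere): ordinary training yields MLE weights
w_mle, b_mle = torch.randn(3, 5) * 0.5, torch.zeros(3)
layer = BayesianLinear(w_mle, b_mle)
x = torch.randn(8, 5)
nll = F.mse_loss(layer(x), torch.randn(8, 3))  # stand-in likelihood term
loss = nll + layer.kl() / 8                    # negative ELBO, KL scaled by batch
loss.backward()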

