posterior sampling
Recently Published Documents

TOTAL DOCUMENTS: 61 (FIVE YEARS: 25)
H-INDEX: 7 (FIVE YEARS: 1)

2022 ◽  
Author(s):  
Shogo Hayashi ◽  
Junya Honda ◽  
Hisashi Kashima

Bayesian optimization (BO) is an approach to optimizing an expensive-to-evaluate black-box function that sequentially determines the values of the input variables at which to evaluate it. In some cases, however, it is expensive or difficult to specify values for all input variables, for example in outsourcing scenarios where producing input queries with many input variables involves significant cost. In this paper, we propose a novel Gaussian process bandit problem, BO with partially specified queries (BOPSQ). In BOPSQ, unlike the standard BO setting, a learner specifies only the values of some input variables, and the values of the unspecified input variables are determined randomly according to a known or unknown distribution. We propose two algorithms based on posterior sampling for the cases of known and unknown input distributions, and derive regret bounds for them that are sublinear for popular kernels. We demonstrate the effectiveness of the proposed algorithms using test functions and real-world datasets.
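The posterior-sampling idea behind such algorithms, a Thompson-sampling loop over a Gaussian process, can be sketched in a few lines. A minimal numpy sketch on a 1-D toy objective, not the paper's BOPSQ setting; the RBF kernel, length scale, and objective below are illustrative assumptions:

```python
import numpy as np

def rbf(X1, X2, ls=0.3):
    # squared-exponential kernel matrix
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # GP posterior mean and covariance on a candidate grid Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    cov = rbf(Xs, Xs) - Ks.T @ sol
    return mu, cov

rng = np.random.default_rng(0)
f = lambda x: -(x - 0.6) ** 2              # toy objective, maximized at 0.6
Xs = np.linspace(0.0, 1.0, 200)            # candidate inputs
X = list(rng.uniform(0.0, 1.0, 3))         # a few initial observations
y = [f(x) for x in X]

for _ in range(20):                        # posterior-sampling loop
    mu, cov = gp_posterior(np.array(X), np.array(y), Xs)
    sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(Xs)))
    x_next = float(Xs[np.argmax(sample)])  # argmax of one posterior draw
    X.append(x_next)
    y.append(f(x_next))

best = float(X[int(np.argmax(y))])
```

Each round draws one function from the GP posterior and evaluates the objective at that draw's argmax, so exploration comes entirely from posterior uncertainty.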


2021 ◽  
Author(s):  
Chong Zhong ◽  
Zhihua Ma ◽  
Junshan Shen ◽  
Catherine Liu

Compared with the frequentist paradigm, the Bayesian paradigm accommodates well-fitting, complicated survival models and feasible computing in survival analysis, owing to its superiority in tackling complex censoring schemes. In this chapter, we aim to illustrate the latest trend in Bayesian computing, namely automating the posterior sampling, through a Bayesian analysis of survival modeling for multivariate survival outcomes with a complicated data structure. Motivated by relaxing the strong assumption of proportionality and the restriction to a common baseline population, we propose a generalized shared frailty model that includes both parametric and nonparametric frailty random effects to incorporate both treatment-wise and temporal variation for multiple events. We develop a survival-function version of the ANOVA dependent Dirichlet process to model the dependency among the baseline survival functions. Posterior sampling is implemented automatically by the No-U-Turn Sampler in Stan, a contemporary Bayesian computing tool. The proposed model is validated by an analysis of bladder cancer recurrence data, with estimates consistent with existing results. Our model and Bayesian inference provide evidence that the Bayesian paradigm fosters complex modeling and feasible computing in survival analysis, and that Stan eases posterior inference.


Author(s):  
Yuming Ba ◽  
Jana de Wiljes ◽  
Dean S. Oliver ◽  
Sebastian Reich

Minimization of a stochastic cost function is commonly used for approximate sampling in high-dimensional Bayesian inverse problems with Gaussian prior distributions and multimodal posterior distributions. The density of the samples generated by minimization is not the desired target density, unless the observation operator is linear, but the distribution of samples is useful as a proposal density for importance sampling or for Markov chain Monte Carlo methods. In this paper, we focus on applications to sampling from multimodal posterior distributions in high dimensions. We first show that sampling from multimodal distributions is improved by computing all critical points instead of only minimizers of the objective function. For applications to high-dimensional geoscience inverse problems, we demonstrate an efficient approximate weighting that uses a low-rank Gauss-Newton approximation of the determinant of the Jacobian. The method is applied to two toy problems with known posterior distributions and a Darcy flow problem with multiple modes in the posterior.
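The reweighting step that corrects minimization-based proposals is ordinary self-normalized importance sampling. A minimal numpy sketch with a bimodal 1-D target and a broad Gaussian standing in for the proposal; the actual method instead derives its proposal from the critical points and a Gauss-Newton determinant approximation:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # bimodal posterior: equal mixture of N(-2, 0.5^2) and N(2, 0.5^2),
    # unnormalized (self-normalized weights absorb the constant)
    a = np.exp(-0.5 * ((x + 2.0) / 0.5) ** 2)
    b = np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2)
    return np.log(0.5 * (a + b))

# proposal: a wide Gaussian covering both modes
sigma_q = 3.0
xs = rng.normal(0.0, sigma_q, 20000)
log_q = -0.5 * (xs / sigma_q) ** 2          # unnormalized proposal density

log_w = log_target(xs) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()                                # self-normalized weights

post_mean = float(np.sum(w * xs))           # ~0 by symmetry of the target
ess = float(1.0 / np.sum(w ** 2))           # effective sample size
```

The effective sample size diagnoses how well the proposal matches the target; a very low value would signal that the weights are dominated by a few samples.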


Energies ◽  
2021 ◽  
Vol 14 (22) ◽  
pp. 7628
Author(s):  
Anand Selveindran ◽  
Zeinab Zargar ◽  
Seyed Mahdi Razavi ◽  
Ganesh Thakur

Optimal injector selection is a key oilfield development endeavor that can be computationally costly. Methods proposed in the literature to reduce the number of function evaluations are often designed for pattern-level analysis and do not scale easily to full-field analysis. These methods are rarely applied to both water and miscible gas floods with carbon storage objectives; reservoir management decision making under geological uncertainty is also relatively underexplored. In this work, several innovations are proposed to efficiently determine the optimal injector location under geological uncertainty. A geomodel ensemble is prepared in order to capture the range of geological uncertainty. In these models, the reservoir is divided into multiple well regions that are delineated through spatial clustering. Streamline simulation results are used to train a meta-learner proxy. A posterior sampling algorithm evaluates injector locations across multiple geological realizations. The proposed methodology was applied to a producing field in Asia. The proxy predicted optimal injector locations for water and CO2 EOR and storage floods within several seconds (94–98% R2 scores). Blind tests with geomodels not used in training yielded accuracies greater than 90% (R2 scores). Posterior sampling selected optimal injection locations within minutes, compared to hours using numerical simulation. This methodology enabled the rapid evaluation of injector well locations for a variety of flood projects and will help reservoir managers rapidly make development decisions for field-scale injection and storage projects under geological uncertainty.
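Posterior sampling over discrete candidate locations can be sketched as Gaussian Thompson sampling, with a cheap noisy proxy standing in for the reservoir simulator. Everything below (candidate values, noise level, prior) is a synthetic illustration, not the paper's meta-learner pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)

# proxy-predicted recovery values for 10 candidate injector locations
# (synthetic: location 9 is clearly best)
true_value = np.array([0.2, 0.3, 0.25, 0.4, 0.35, 0.3, 0.45, 0.5, 0.4, 0.9])

def proxy(loc):
    # cheap noisy stand-in for the trained meta-learner proxy
    return true_value[loc] + 0.1 * rng.standard_normal()

counts = np.zeros(10)
means = np.zeros(10)
for _ in range(500):
    # Thompson sampling: draw one value per candidate from a Gaussian
    # posterior (weak prior at 0) and evaluate the argmax candidate
    sigma = 1.0 / np.sqrt(counts + 1.0)
    draw = rng.normal(means, sigma)
    loc = int(np.argmax(draw))
    r = proxy(loc)
    counts[loc] += 1
    means[loc] += (r - means[loc]) / counts[loc]  # running mean update

best_loc = int(np.argmax(means))
```

Because each evaluation is a fast proxy call rather than a full simulation, hundreds of posterior-sampling rounds finish in a fraction of a second.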


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1475
Author(s):  
Marton Havasi ◽  
Jasper Snoek ◽  
Dustin Tran ◽  
Jonathan Gordon ◽  
José Miguel Hernández-Lobato

Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).
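The refinement idea, starting from a coarse approximation and moving samples toward high-density regions, can be mimicked by a few gradient-ascent steps on the log posterior. A numpy sketch with a bimodal target that the coarse Gaussian misses; the step size and iteration count are illustrative, and the paper's method differs in how refinements are selected and weighted:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_p(x):
    # bimodal target that a single broad Gaussian cannot capture
    return np.log(0.5 * np.exp(-0.5 * (x + 3.0) ** 2)
                  + 0.5 * np.exp(-0.5 * (x - 3.0) ** 2))

def grad_log_p(x, eps=1e-5):
    # numerical gradient keeps the sketch self-contained
    return (log_p(x + eps) - log_p(x - eps)) / (2 * eps)

# coarse initial approximation: one broad Gaussian centered at 0
samples = rng.normal(0.0, 3.0, 2000)
coarse_mean_logp = log_p(samples).mean()

# local refinement: push each sample uphill on the log posterior
refined = samples.copy()
for _ in range(50):
    refined += 0.1 * grad_log_p(refined)

refined_mean_logp = log_p(refined).mean()
```

After refinement the samples concentrate near the two modes at ±3, capturing multi-modality that the initial unimodal approximation missed.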


2021 ◽  
Author(s):  
Guy Ohayon ◽  
Theo Adrai ◽  
Gregory Vaksman ◽  
Michael Elad ◽  
Peyman Milanfar

Author(s):  
Ksenia Balabaeva ◽  
Sergey Kovalchuk

The present study is devoted to interpretable artificial intelligence in medicine. In our previous work, we proposed an approach to interpreting clustering results based on Bayesian inference, using the explanation of clinical pathway clustering as an application case. However, that approach worked only for binary features. In this work, we extend the method and adapt it to model posterior distributions of continuous features. To solve this task, we apply the BEST algorithm for Bayesian t-testing and use the NUTS algorithm for posterior sampling. The results of both the binary and continuous interpretations produced by the algorithm have been compared with the interpretations of two medical experts.
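For intuition, posterior sampling of a two-group difference in a continuous feature can be sketched with a conjugate-normal stand-in; BEST instead places a t likelihood on each group and samples it with NUTS. The data and prior below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# two groups of a continuous feature (synthetic stand-ins)
g1 = rng.normal(1.0, 1.0, 200)
g2 = rng.normal(0.0, 1.0, 200)

def posterior_mean_draws(x, n_draws=5000, prior_var=100.0):
    # conjugate normal posterior for the group mean, variance treated as
    # known (BEST instead uses a t likelihood sampled with NUTS)
    s2 = x.var(ddof=1)
    post_var = 1.0 / (1.0 / prior_var + len(x) / s2)
    post_mean = post_var * (x.sum() / s2)     # zero-centered prior
    return rng.normal(post_mean, np.sqrt(post_var), n_draws)

diff = posterior_mean_draws(g1) - posterior_mean_draws(g2)
prob_positive = (diff > 0).mean()             # P(mean1 > mean2 | data)
ci = np.percentile(diff, [2.5, 97.5])         # 95% credible interval
```

The posterior probability and credible interval for the difference are exactly the kind of quantities the interpretation method reports for each continuous feature.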


2021 ◽  
Vol 8 ◽  
Author(s):  
Nima Vakili ◽  
Michael Habeck

Random tomography is a common problem in imaging science and refers to the task of reconstructing a three-dimensional volume from two-dimensional projection images acquired in unknown random directions. We present a Bayesian approach to random tomography. At the center of our approach is a meshless representation of the unknown volume as a mixture of spherical Gaussians. Each Gaussian can be interpreted as a particle such that the unknown volume is represented by a particle cloud. The particle representation allows us to speed up the computation of projection images and to represent a large variety of structures accurately and efficiently. We develop Markov chain Monte Carlo algorithms to infer the particle positions as well as the unknown orientations. Posterior sampling is challenging due to the high dimensionality and multimodality of the posterior distribution. We tackle these challenges by using Hamiltonian Monte Carlo and a global rotational sampling strategy. We test the approach on various simulated and real datasets.
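The Hamiltonian Monte Carlo workhorse used here can be sketched in plain numpy: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. A minimal sketch on a standard Gaussian target; the step size and trajectory length are illustrative, and the paper adds a global rotational sampling strategy on top:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_p(x):
    return -0.5 * np.sum(x ** 2)       # standard Gaussian target

def grad_log_p(x):
    return -x

def hmc_step(x, step=0.1, n_leap=20):
    # one HMC transition: leapfrog integration + Metropolis correction
    p = rng.normal(size=x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_p(x_new)   # initial half step
    for _ in range(n_leap - 1):
        x_new += step * p_new
        p_new += step * grad_log_p(x_new)
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_p(x_new)   # final half step
    h_old = -log_p(x) + 0.5 * np.sum(p ** 2)
    h_new = -log_p(x_new) + 0.5 * np.sum(p_new ** 2)
    if np.log(rng.uniform()) < h_old - h_new:
        return x_new, True
    return x, False

x = np.zeros(5)
draws, accepts = [], 0
for _ in range(2000):
    x, accepted = hmc_step(x)
    accepts += accepted
    draws.append(x.copy())
draws = np.array(draws)[500:]          # discard burn-in
```

Because leapfrog integration nearly conserves the Hamiltonian, long gradient-informed moves are accepted with high probability, which is what makes HMC viable in high dimensions.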


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Alexander Fengler ◽  
Lakshmi N Govindarajan ◽  
Tony Chen ◽  
Michael J Frank

In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
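The amortization idea, paying a one-off simulation cost to obtain a reusable approximate likelihood, can be illustrated with a simple histogram surrogate in place of the paper's neural network. All specifics below (the toy simulator, grid, and bin widths) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# toy simulator standing in for a model without a closed-form likelihood
def simulate(theta, size):
    return theta + rng.standard_normal(size)

# one-off amortization: for each theta on a grid, learn an empirical density
# of simulated data (a histogram stand-in for the paper's neural network)
thetas = np.linspace(-3, 3, 61)
edges = np.linspace(-8, 8, 161)
density = np.empty((len(thetas), len(edges) - 1))
for i, t in enumerate(thetas):
    h, _ = np.histogram(simulate(t, 20000), bins=edges, density=True)
    density[i] = h + 1e-12                    # floor to avoid log(0)

def approx_loglik(data):
    # reusable approximate log-likelihood, evaluated over the whole grid
    idx = np.clip(np.digitize(data, edges) - 1, 0, density.shape[1] - 1)
    return np.log(density[:, idx]).sum(axis=1)

# fast grid posterior for new observed data: no new simulations needed
observed = simulate(1.0, 50)
logpost = approx_loglik(observed)             # flat prior over the grid
post = np.exp(logpost - logpost.max())
post /= post.sum()
post_mean = float(np.sum(post * thetas))
```

Once the surrogate is built, posterior inference for any new dataset reuses it at negligible cost, which is the amortization the paper achieves with a trained network rather than a histogram.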


2021 ◽  
Author(s):  
Josue E. Rodriguez ◽  
Donald Ray Williams

We propose the Bayesian bootstrap (BB) as a generic, simple, and accessible method for sampling from the posterior distribution of various correlation coefficients that are commonly used in the social-behavioral sciences. In a series of examples, we demonstrate how the BB can be used to estimate Pearson's, Spearman's, Gaussian rank, Kendall's tau, and polychoric correlations. We also describe an approach based on a region of practical equivalence to evaluate differences and null associations among the estimated correlations. Key advantages of the proposed methods are illustrated using two psychological datasets. In addition, we have implemented the methodology in the R package BBcor.
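The Bayesian bootstrap itself is short: draw Dirichlet(1, ..., 1) weights over the observations and recompute the weighted statistic under each draw. A minimal numpy sketch for a Pearson correlation on synthetic data, not the BBcor implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic bivariate data with true correlation ~0.7
n = 300
x = rng.normal(size=n)
y = 0.7 * x + np.sqrt(1 - 0.7 ** 2) * rng.normal(size=n)

def weighted_pearson(x, y, w):
    # Pearson correlation under observation weights w (summing to 1)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    vx = np.sum(w * (x - mx) ** 2)
    vy = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(vx * vy)

# Bayesian bootstrap: Dirichlet(1, ..., 1) weights instead of resampling
draws = np.array([
    weighted_pearson(x, y, rng.dirichlet(np.ones(n)))
    for _ in range(2000)
])

post_mean = float(draws.mean())
ci = np.percentile(draws, [2.5, 97.5])  # 95% credible interval
```

Swapping `weighted_pearson` for a weighted Spearman or Kendall statistic yields the other coefficients the method covers, which is what makes the approach generic.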

