Rates of contraction of posterior distributions based on Gaussian process priors

2008 ◽  
Vol 36 (3) ◽  
pp. 1435-1463 ◽  
Author(s):  
A. W. van der Vaart ◽  
J. H. van Zanten

2013 ◽
Vol 10 (78) ◽  
pp. 20120616 ◽  
Author(s):  
Nick S. Jones ◽  
John Moriarty

Biological data objects often have both of the following features: (i) they are functions rather than single numbers or vectors, and (ii) they are correlated owing to phylogenetic relationships. In this paper, we give a flexible statistical model for such data, by combining assumptions from phylogenetics with Gaussian processes. We describe its use as a non-parametric Bayesian prior distribution, both for prediction (placing posterior distributions on ancestral functions) and model selection (comparing rates of evolution across a phylogeny, or identifying the most likely phylogenies consistent with the observed data). Our work is integrative, extending the popular phylogenetic Brownian motion and Ornstein–Uhlenbeck models to functional data and Bayesian inference, and extending Gaussian process regression to phylogenies. We provide a brief illustration of the application of our method.
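The phylogenetic Gaussian process described above can be illustrated with a minimal sketch: an Ornstein–Uhlenbeck covariance evaluated on tip-to-tip tree distances, so that closely related taxa are more strongly correlated. The three-taxon distance matrix, the kernel parameters, and the function name `phylo_ou_cov` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-taxon tree encoded by patristic (tip-to-tip) distances;
# taxa 0 and 1 are close relatives, taxon 2 is distant (illustrative values).
D = np.array([[0.0, 2.0, 6.0],
              [2.0, 0.0, 6.0],
              [6.0, 6.0, 0.0]])

def phylo_ou_cov(D, alpha=0.5, sigma2=1.0):
    """Ornstein-Uhlenbeck covariance between tips as a function of tree distance."""
    return (sigma2 / (2.0 * alpha)) * np.exp(-alpha * D)

K = phylo_ou_cov(D)
# Closely related taxa (small tree distance) are more strongly correlated.
assert K[0, 1] > K[0, 2]

# Draw one correlated trait value per tip from the phylogenetic GP prior.
rng = np.random.default_rng(0)
trait = rng.multivariate_normal(np.zeros(3), K)
```

The same covariance construction extends to function-valued traits by combining the tree kernel with a kernel over the function's input space.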


2022 ◽  
Author(s):  
Hanne Kekkonen

Abstract: We consider the statistical non-linear inverse problem of recovering the absorption term f > 0 in the heat equation, where the boundary and initial values are given by sufficiently smooth functions. The data consist of N discrete noisy point evaluations of the solution u_f. We study the statistical performance of Bayesian nonparametric procedures based on a large class of Gaussian process priors. We show that, as the number of measurements increases, the resulting posterior distributions concentrate around the true parameter generating the data, and we derive a convergence rate for the reconstruction error of the associated posterior means. We also consider the optimality of the contraction rates: we prove a lower bound on the minimax convergence rate for inferring f from the data, and show that optimal rates can be achieved with truncated Gaussian priors.
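The contraction phenomenon the abstract describes — posteriors concentrating around the truth as the number of measurements grows — can be sketched in a much simpler linear setting. The example below is plain Gaussian process regression with an RBF kernel on a sine function, an illustrative stand-in for the paper's nonlinear heat-equation problem; all function names and parameter values are assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel (illustrative choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def posterior_mean(x_train, y_train, x_test, noise=0.1):
    """Conjugate GP regression posterior mean."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(1)
x_test = np.linspace(0.0, 2 * np.pi, 50)
truth = np.sin(x_test)

rmse = {}
for N in (20, 500):
    x = rng.uniform(0.0, 2 * np.pi, N)
    y = np.sin(x) + 0.1 * rng.standard_normal(N)
    rmse[N] = np.sqrt(np.mean((posterior_mean(x, y, x_test) - truth)**2))

# More measurements -> the posterior mean concentrates around the truth.
assert rmse[500] < rmse[20]
```

The rate at which this error shrinks with N is exactly the kind of quantity the contraction-rate theory makes precise.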


Author(s):  
Pengju He ◽  
Mi Qi ◽  
Wenhui Li ◽  
Mengyang Tang ◽  
Ziwei Zhao

Most source separation algorithms for nonstationary, time-varying mixtures are based on an instantaneous mixing model. In a reverberant environment, however, the observed signal is a convolutive mixture, as with mobile speech received by indoor microphone arrays. In this paper, a time-varying convolutive blind source separation (BSS) algorithm for nonstationary signals is proposed that can separate both time-varying instantaneous mixtures and time-varying convolutive mixtures. We employ variational Bayesian (VB) inference with a Gaussian process (GP) prior to separate the nonstationary sources frame by frame from the time-varying convolutive signal: the prior information on the mixing matrix and the source signal is obtained by a Gaussian autoregressive method, and the posterior distributions of the parameters (source signal and mixing matrix) are obtained by VB learning. In the learning process, the learned parameters and hyperparameters are propagated to the next frame as the prior for VB inference, where they are combined with the likelihood function to obtain the posterior distribution. Experimental results show that the proposed algorithm is effective in separating time-varying mixed speech signals.
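The frame-by-frame propagation step — the posterior learned on one frame becoming the prior for the next — can be sketched with a conjugate Gaussian mean update, a deliberately simplified stand-in for the full VB machinery with GP priors; all numerical values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
true_means = [0.0, 0.5, 1.0]   # slowly drifting source parameter, one value per frame
noise_var = 0.25
mu, var = 0.0, 10.0            # diffuse initial Gaussian prior on the mean

for m in true_means:
    # Inflate the prior variance between frames (random-walk drift) so the
    # estimate can track a time-varying parameter -- an assumption of this sketch.
    var += 1.0
    frame = m + np.sqrt(noise_var) * rng.standard_normal(200)
    # Conjugate Gaussian update: combine the propagated prior with the frame likelihood.
    post_var = 1.0 / (1.0 / var + len(frame) / noise_var)
    mu = post_var * (mu / var + frame.sum() / noise_var)
    var = post_var             # this posterior becomes the next frame's prior

# After the last frame, the estimate tracks that frame's mean.
assert abs(mu - true_means[-1]) < 0.2
```

In the paper's setting the propagated quantities are the full posterior distributions of the mixing matrix and source signal rather than a scalar mean, but the prior-to-posterior handoff between frames follows the same pattern.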


2009 ◽  
Vol 21 (3) ◽  
pp. 786-792 ◽  
Author(s):  
Manfred Opper ◽  
Cédric Archambeau

The variational approximation of posterior distributions by multivariate Gaussians has been much less popular in the machine learning community than the corresponding approximation by factorizing distributions. This is for a good reason: the Gaussian approximation is in general plagued by an O(N^2) number of variational parameters to be optimized, N being the number of random variables. In this letter, we discuss the relationship between the Laplace and the variational approximation, and we show that for models with Gaussian priors and factorizing likelihoods, the number of variational parameters is actually O(N). The approach is applied to Gaussian process regression with non-Gaussian likelihoods.
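The parameter-count reduction can be sketched directly: for a Gaussian prior N(0, K) and a factorizing likelihood, the optimal variational Gaussian covariance takes the structured form (K^{-1} + diag(lam))^{-1}, leaving only the N entries of lam free rather than the N(N+1)/2 entries of an unstructured covariance. The kernel and lam values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5

# Gaussian prior covariance K: an RBF kernel on a 1-D grid (illustrative choice).
x = np.linspace(0.0, 1.0, N)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.3**2) + 1e-8 * np.eye(N)

# With a factorizing likelihood, the optimal variational Gaussian covariance
# has the structured form (K^{-1} + diag(lam))^{-1}: only the N entries of
# lam need to be optimized, versus N*(N+1)/2 for a general covariance.
lam = rng.uniform(0.5, 2.0, N)
Sigma = np.linalg.inv(np.linalg.inv(K) + np.diag(lam))

n_structured = len(lam)          # O(N) free parameters
n_general = N * (N + 1) // 2     # O(N^2) for an unstructured covariance
assert n_structured < n_general
```

The structured form also guarantees that the variational covariance stays symmetric positive definite for any positive lam, which an unconstrained parameterization would not.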


2007 ◽  
Vol 44 (02) ◽  
pp. 393-408 ◽  
Author(s):  
Allan Sly

Multifractional Brownian motion is a Gaussian process which has changing scaling properties generated by varying the local Hölder exponent. We show that multifractional Brownian motion is very sensitive to changes in the selected Hölder exponent and has extreme changes in magnitude. We suggest an alternative stochastic process, called integrated fractional white noise, which retains the important local properties but avoids the undesirable oscillations in magnitude. We also show how the Hölder exponent can be estimated locally from discrete data in this model.
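Fractional Brownian motion — the constant-exponent special case of the multifractional process above — can be simulated exactly by Cholesky factorisation of its covariance 0.5*(s^{2H} + t^{2H} - |s - t|^{2H}); letting H vary with t gives multifractional BM, which needs more care than this sketch. Grid size and seeds are illustrative.

```python
import numpy as np

def fbm_path(H, n=100, T=1.0, seed=0):
    """Sample fractional Brownian motion with constant Hurst/Hölder exponent H
    via Cholesky factorisation of its exact covariance
    0.5*(s^{2H} + t^{2H} - |s - t|^{2H})."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

t, rough = fbm_path(H=0.3)    # low exponent: rough, oscillatory path
t, smooth = fbm_path(H=0.8)   # high exponent: much smoother path

# Rougher paths have larger mean-square increments at small lags (~ dt^{2H}).
assert np.mean(np.diff(rough)**2) > np.mean(np.diff(smooth)**2)
```

The sensitivity the paper analyses concerns what happens when the constant H here is replaced by a varying function h(t).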


1987 ◽  
Vol 26 (03) ◽  
pp. 117-123
Author(s):  
P. Tautu ◽  
G. Wagner

Summary: A continuous-parameter, stationary Gaussian process is introduced as a first approach to the probabilistic representation of the phenotype inheritance process. With some specific assumptions about the components of the covariance function, it may describe the temporal behaviour of the “cancer-proneness phenotype” (CPF) as a quantitative continuous trait. Upcrossing a fixed level (“threshold”) u and reaching level zero are the two extremes of the Gaussian process considered; it is assumed that they may be interpreted as the transformation of CPF into a “neoplastic disease phenotype” or as non-proneness to cancer, respectively.


2014 ◽  
Vol 134 (11) ◽  
pp. 1708-1715
Author(s):  
Tomohiro Hachino ◽  
Kazuhiro Matsushita ◽  
Hitoshi Takata ◽  
Seiji Fukushima ◽  
Yasutaka Igarashi
