Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits

2016 ◽  
Vol 53 (2) ◽  
pp. 410-420 ◽  
Author(s):  
Gareth O. Roberts ◽  
Jeffrey S. Rosenthal

Abstract: We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the computer science notion of algorithm complexity. Our main result states that any weak limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously known MCMC diffusion limit results to prove that, under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^{1/3}) iterations to converge to stationarity.
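For concreteness, here is a minimal illustrative Python sketch (not the authors' construction) of the random-walk Metropolis algorithm in d dimensions with proposal variance scaled as ℓ²/d, the regime in which the diffusion limits behind the O(d) bound are derived; the product-normal target, step constant ℓ, and iteration count are assumptions chosen purely for illustration.

```python
# Illustrative sketch (assumptions: i.i.d. standard-normal target, ell = 2.38):
# random-walk Metropolis in d dimensions with proposal variance ell^2 / d,
# the scaling regime in which the diffusion limit is obtained.
import numpy as np

def rwm(d=50, n_iter=10_000, ell=2.38, seed=0):
    rng = np.random.default_rng(seed)
    log_target = lambda x: -0.5 * np.sum(x**2)   # product of N(0,1) densities (assumed target)
    x = rng.standard_normal(d)
    sigma = ell / np.sqrt(d)                     # proposal standard deviation ~ d^{-1/2}
    accepts = 0
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal(d)   # symmetric random-walk proposal
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x, accepts = y, accepts + 1
    return x, accepts / n_iter

if __name__ == "__main__":
    _, acc = rwm()
    print(f"acceptance rate: {acc:.3f}")         # roughly 0.234 in this scaling regime
```

In this scaling the acceptance rate settles near the well-known 0.234 value, which is the setting in which the O(d) convergence statement above applies.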

2015 ◽  
Vol 52 (3) ◽  
pp. 811-825
Author(s):  
Yves Atchadé ◽  
Yizao Wang

In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^{-1}). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its mixing time is of order O(n^{-1/2}).
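As a heavily hedged schematic (not the algorithm analysed in the paper), the sketch below illustrates the general interacting-tempering idea behind a simplified equi-energy sampler: an auxiliary chain targets a tempered density π^β, and the main chain occasionally proposes to jump to one of the auxiliary chain's past states. The bimodal one-dimensional target, inverse temperature β, jump probability ε, and step size are all illustrative assumptions.

```python
# Schematic interacting-tempering sketch (all settings are assumptions for illustration).
import numpy as np

def log_pi(x):
    # bimodal 1-D target: equal mixture of N(-3,1) and N(3,1), up to a constant
    return np.logaddexp(-0.5 * (x + 3)**2, -0.5 * (x - 3)**2)

def interacting_tempering(n_iter=20_000, beta=0.3, eps=0.1, step=1.0, seed=1):
    rng = np.random.default_rng(seed)
    x_aux, x = 0.0, 0.0
    history, samples = [], []            # past states of the auxiliary (tempered) chain
    for _ in range(n_iter):
        # auxiliary chain: random-walk Metropolis on the flattened target pi^beta
        y = x_aux + step * rng.standard_normal()
        if np.log(rng.uniform()) < beta * (log_pi(y) - log_pi(x_aux)):
            x_aux = y
        history.append(x_aux)
        # main chain: with probability eps, propose jumping to a past auxiliary state
        if rng.uniform() < eps:
            z = history[rng.integers(len(history))]
            # equi-energy-type acceptance ratio (pi(z)/pi(x))^{1-beta}
            if np.log(rng.uniform()) < (1 - beta) * (log_pi(z) - log_pi(x)):
                x = z
        else:
            y = x + step * rng.standard_normal()
            if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
                x = y
        samples.append(x)
    return np.array(samples)
```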


Author(s):  
Michael Hynes

A ubiquitous problem in physics is to determine expectation values of observables associated with a system. This problem is typically formulated as an integration of some likelihood over a multidimensional parameter space. In Bayesian analysis, numerical Markov chain Monte Carlo (MCMC) algorithms are employed to solve such integrals using a fixed number of samples in the Markov chain. In general, MCMC algorithms are computationally expensive for large datasets and have difficulty sampling from multimodal parameter spaces. An MCMC implementation that is robust and inexpensive for researchers is therefore desirable. Distributed computing systems have shown the potential to act as virtual supercomputers, as in the SETI@home project, in which millions of private computers participate. We propose that a clustered peer-to-peer (P2P) computer network serves as an ideal structure on which to run Markovian state-exchange algorithms such as Parallel Tempering (PT). PT overcomes the difficulty of sampling from multimodal distributions by running multiple chains in parallel with different target distributions and exchanging their states in a Markovian manner. To demonstrate the feasibility of peer-to-peer Parallel Tempering (P2P PT), a simple two-dimensional dataset consisting of two Gaussian signals separated by a region of low probability was used in a Bayesian parameter-fitting algorithm. A small connected peer-to-peer network was constructed using separate processes on a Linux kernel, and P2P PT was applied to the dataset. These sampling results were compared with those obtained from sampling the parameter space with a single chain. The single chain was unable to sample both modes effectively, while the P2P PT method explored the target distribution well, visiting both modes approximately equally. Future work will involve scaling to many dimensions and large networks, and establishing convergence conditions under the highly heterogeneous computing capabilities of members within the network.
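Below is a minimal Python sketch of Parallel Tempering on a two-dimensional target with two well-separated Gaussian modes, echoing the test problem described above, but run in a single process rather than over a peer-to-peer network; the temperature ladder, step size, and swap schedule are illustrative assumptions rather than details taken from the paper.

```python
# Hedged single-process Parallel Tempering sketch on a bimodal 2-D target
# (temperatures, step size, and swap schedule are assumptions for illustration).
import numpy as np

def log_target(x):
    # two Gaussian "signals" separated by a region of low probability
    m1, m2 = np.array([-4.0, -4.0]), np.array([4.0, 4.0])
    return np.logaddexp(-0.5 * np.sum((x - m1)**2), -0.5 * np.sum((x - m2)**2))

def parallel_tempering(n_iter=20_000, betas=(1.0, 0.4, 0.1), step=1.0, seed=2):
    rng = np.random.default_rng(seed)
    K = len(betas)
    xs = [rng.standard_normal(2) for _ in range(K)]
    cold_samples = []
    for _ in range(n_iter):
        # within-chain random-walk Metropolis update at each temperature
        for k in range(K):
            y = xs[k] + step * rng.standard_normal(2)
            if np.log(rng.uniform()) < betas[k] * (log_target(y) - log_target(xs[k])):
                xs[k] = y
        # Markovian state exchange between a randomly chosen adjacent pair of chains
        k = rng.integers(K - 1)
        log_alpha = (betas[k] - betas[k + 1]) * (log_target(xs[k + 1]) - log_target(xs[k]))
        if np.log(rng.uniform()) < log_alpha:
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
        cold_samples.append(xs[0].copy())
    return np.array(cold_samples)

if __name__ == "__main__":
    s = parallel_tempering()
    print("fraction of cold-chain samples in each mode:",
          np.mean(s[:, 0] < 0), np.mean(s[:, 0] > 0))
```

Because the hotter chains see flattened versions of the target, they cross the low-probability region easily, and the exchange moves carry those crossings down to the cold chain, which is why both modes are visited roughly equally while a single cold chain tends to stay stuck in one mode.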


2011 ◽  
Vol 39 (6) ◽  
pp. 3262-3289 ◽  
Author(s):  
G. Fort ◽  
E. Moulines ◽  
P. Priouret
