Convergence rates for optimised adaptive importance samplers

2021 ◽  
Vol 31 (2) ◽  
Author(s):  
Ömer Deniz Akyildiz ◽  
Joaquín Míguez

Abstract: Adaptive importance samplers are adaptive Monte Carlo algorithms for estimating expectations with respect to a target distribution; they adapt themselves to obtain better estimators over a sequence of iterations. Although it is straightforward to show that they have the same $$\mathcal{O}(1/\sqrt{N})$$ convergence rate as standard importance samplers, where N is the number of Monte Carlo samples, the behaviour of adaptive importance samplers over the number of iterations has been left relatively unexplored. In this work, we investigate an adaptation strategy based on convex optimisation which leads to a class of adaptive importance samplers termed optimised adaptive importance samplers (OAIS). These samplers rely on the iterative minimisation of the $$\chi^2$$-divergence between an exponential-family proposal and the target. The analysed algorithms are closely related to the class of adaptive importance samplers that minimise the variance of the weight function. We first prove non-asymptotic error bounds for the mean squared errors (MSEs) of these algorithms, which depend explicitly on both the number of iterations and the number of samples. The non-asymptotic bounds derived in this paper imply that when the target belongs to the exponential family, the $$L_2$$ errors of the optimised samplers converge to the optimal rate $$\mathcal{O}(1/\sqrt{N})$$, and the rate of convergence in the number of iterations is provided explicitly. When the target does not belong to the exponential family, the rate of convergence is the same, but the asymptotic $$L_2$$ error increases by a factor $$\sqrt{\rho^\star} > 1$$, where $$\rho^\star - 1$$ is the minimum $$\chi^2$$-divergence between the target and an exponential-family proposal.
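The adaptation scheme the abstract describes can be illustrated with a minimal sketch: a unit-variance Gaussian proposal (a simple exponential-family member) whose mean is adapted by stochastic gradient steps on a self-normalised estimate of the $$\chi^2$$-divergence, which is equivalent to minimising the second moment of the importance weights. The target, step size, and sample counts below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalised log-density of an illustrative target, here N(3, 1).
    return -0.5 * (x - 3.0) ** 2

def log_proposal(x, mu):
    # Unit-variance Gaussian proposal with an adaptable mean mu.
    return -0.5 * (x - mu) ** 2

mu, step, n = 0.0, 0.1, 256
for t in range(200):
    x = rng.normal(mu, 1.0, size=n)
    logw = log_target(x) - log_proposal(x, mu)
    w2 = np.exp(2.0 * (logw - logw.max()))   # squared weights, rescaled for stability
    # Self-normalised gradient estimate of the chi^2 objective rho(mu):
    # grad rho(mu) = -E_q[w(x)^2 * d/dmu log q(x; mu)], with d/dmu log q = x - mu.
    grad = -np.sum(w2 * (x - mu)) / np.sum(w2)
    mu -= step * grad

# Final self-normalised importance-sampling estimate of E[X] under the target.
x = rng.normal(mu, 1.0, size=4096)
w = np.exp(log_target(x) - log_proposal(x, mu))
estimate = np.sum(w * x) / np.sum(w)
```

As the proposal mean approaches the target mean, the weights flatten and the gradient vanishes, so the adapted sampler approaches the minimum-variance member of this proposal family.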

2014 ◽  
Vol 46 (04) ◽  
pp. 1059-1083 ◽  
Author(s):  
Qifan Song ◽  
Mingqi Wu ◽  
Faming Liang

In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on this theory, we give an explicit ratio of the convergence rates of the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that population SAMCMC algorithms are asymptotically more efficient than single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
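The gain-factor condition above concerns the stochastic approximation update at the heart of SAMCMC. A minimal single-chain sketch, with a toy discrete state space, illustrative energies, and the common gain sequence gamma_t = t0 / max(t0, t) (all assumptions for illustration, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)

m = 5
U = np.arange(m, dtype=float)   # illustrative energies of m partitions
theta = np.zeros(m)             # adaptive log-weights, updated by stochastic approximation
x = 0
t0 = 100.0

for t in range(1, 50001):
    gain = t0 / max(t0, t)      # gain factor sequence, O(1/t) after burn-in
    # Random-walk Metropolis move on the weight-adjusted density
    # p_theta(i) proportional to exp(-U[i] - theta[i]).
    y = (x + rng.integers(-1, 2)) % m
    if np.log(rng.random()) < (U[x] + theta[x]) - (U[y] + theta[y]):
        x = y
    # Stochastic approximation step driving visit frequencies toward uniform.
    e = np.zeros(m)
    e[x] = 1.0
    theta += gain * (e - 1.0 / m)
```

At equilibrium the visiting distribution is uniform, so theta tracks the negative energies up to an additive constant; population variants run several such chains in parallel and share the theta update.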


2015 ◽  
Vol 52 (3) ◽  
pp. 811-825
Author(s):  
Yves Atchadé ◽  
Yizao Wang

In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^{-1}). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its mixing time is of order O(n^{-1/2}).


2008 ◽  
Vol 78 (4) ◽  
Author(s):  
Fergal P. Casey ◽  
Joshua J. Waterfall ◽  
Ryan N. Gutenkunst ◽  
Christopher R. Myers ◽  
James P. Sethna

Author(s):  
Anna Dzougoutov ◽  
Kyoung-Sook Moon ◽  
Erik von Schwerin ◽  
Anders Szepessy ◽  
Raúl Tempone

2002 ◽  
Vol 731 ◽  
Author(s):  
Qiang Yu ◽  
Sven K. Esche

Abstract: Both isotropic and anisotropic single-phase grain growth processes modeled using a modified Monte Carlo method exhibit parabolic growth kinetics, and the degree of anisotropy affects only the rate of change of the mean grain area. In some cases with significantly anisotropic grain boundary energies, the normalized grain size distributions are not time-invariant during the lattice evolution.
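The underlying simulation technique is Potts-model Monte Carlo grain growth. A minimal isotropic sketch is below; it does not reproduce the paper's modified method or its anisotropic boundary energies (the lattice size, number of orientations, and temperature are illustrative assumptions). Each step proposes re-orienting a site to a neighbour's orientation and accepts by a Metropolis rule on the boundary energy, so grains coarsen over time.

```python
import numpy as np

rng = np.random.default_rng(2)

L, Q = 32, 8                                 # lattice size, number of grain orientations
spins = rng.integers(0, Q, size=(L, L))      # random initial microstructure

def boundary_energy(s):
    # Count unlike nearest-neighbour pairs (isotropic boundary energy, J = 1).
    return int((s != np.roll(s, 1, 0)).sum() + (s != np.roll(s, 1, 1)).sum())

def sweep(s, T=0.1):
    for _ in range(s.size):
        i, j = rng.integers(0, L, size=2)
        # Propose switching site (i, j) to a randomly chosen neighbour's orientation.
        ni, nj = [((i + 1) % L, j), ((i - 1) % L, j),
                  (i, (j + 1) % L), (i, (j - 1) % L)][rng.integers(0, 4)]
        new, old = s[ni, nj], s[i, j]
        if new == old:
            continue
        nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j],
                s[i, (j + 1) % L], s[i, (j - 1) % L]]
        dE = sum(new != n for n in nbrs) - sum(old != n for n in nbrs)
        # Metropolis acceptance on the change in boundary energy.
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = new

e0 = boundary_energy(spins)
for _ in range(20):
    sweep(spins)
e1 = boundary_energy(spins)
```

As the total boundary energy falls, the mean grain area grows; in such simulations it grows roughly linearly in Monte Carlo time, which is the parabolic kinetics (mean grain radius proportional to the square root of time) that the abstract refers to.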

