Exponential convergence of adaptive importance sampling for Markov chains

2000 ◽  
Vol 37 (2) ◽  
pp. 342-358 ◽  
Author(s):  
Keith Baggerly ◽  
Dennis Cox ◽  
Rick Picard

We consider adaptive importance sampling for a Markov chain with scoring. It is shown that convergence to the zero-variance importance sampling chain for the mean total score occurs exponentially fast under general conditions. These results extend previous work in Kollman (1993) and in Kollman et al. (1999) for finite state spaces.
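A minimal sketch of the quantity studied here, using a fixed (non-adaptive) proposal chain rather than the paper's adaptive scheme: the mean total score of an absorbing Markov chain is estimated by importance sampling with a path likelihood ratio. The chain, scores, and proposal below are illustrative assumptions.

```python
# Fixed-proposal importance sampling for the mean total score (here, the mean
# number of steps to absorption) of a small absorbing Markov chain.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],   # states 0, 1 transient; state 2 absorbing
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
Q = np.array([[0.4, 0.4, 0.2],   # proposal chain used for sampling
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])

# Exact mean total score (score = 1 per step until absorption) from state 0:
# solve (I - P_TT) m = 1 over the transient states.
m = np.linalg.solve(np.eye(2) - P[:2, :2], np.ones(2))
exact = m[0]

rng = np.random.default_rng(0)
n_paths = 20000
est = 0.0
for _ in range(n_paths):
    s, w, score = 0, 1.0, 0
    while s != 2:
        t = rng.choice(3, p=Q[s])
        w *= P[s, t] / Q[s, t]   # path likelihood ratio P(path)/Q(path)
        score += 1
        s = t
    est += w * score
est /= n_paths
print(exact, est)
```

The zero-variance chain of the article is the proposal for which every weighted path score equals the exact mean; the adaptive scheme drives Q toward it across iterations.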


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

In this note we study the probability of absorption and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and in connecting it with other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, with applications to the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A_1, A_2, …, A_k.
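An illustrative companion computation, via the linear-system route rather than the generating-function approach of the note: for a symmetric random walk on {0, …, N} absorbed at 0 and N, the mean absorption time and the probability of reaching {N} before {0} have the closed forms k(N − k) and k/N.

```python
# Mean absorption time and "reach A before B" probability for a symmetric
# random walk on {0, ..., N} with absorbing barriers at 0 and N.
import numpy as np

N = 10
p = 0.5
# Transient states 1..N-1; Qm is the transient-to-transient block.
Qm = np.zeros((N - 1, N - 1))
for i in range(N - 1):
    if i > 0:
        Qm[i, i - 1] = 1 - p
    if i < N - 2:
        Qm[i, i + 1] = p

# Mean absorption times: (I - Q) t = 1.  For p = 1/2, t_k = k (N - k).
t = np.linalg.solve(np.eye(N - 1) - Qm, np.ones(N - 1))

# Probability of reaching {N} before {0}: (I - Q) h = b, where b holds the
# one-step probability of jumping straight into {N}.  For p = 1/2, h_k = k / N.
b = np.zeros(N - 1)
b[-1] = p
h = np.linalg.solve(np.eye(N - 1) - Qm, b)
print(t[4], h[4])  # 25.0 and 0.5 for the middle state k = 5
```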


1982 ◽  
Vol 19 (2) ◽  
pp. 272-288 ◽  
Author(s):  
P. J. Brockwell ◽  
S. I. Resnick ◽  
N. Pacheco-Santiago

A study is made of the maximum, minimum and range on [0, t] of the integral process of a finite state-space Markov chain S. Approximate results are derived by establishing weak convergence of a sequence of such processes to a Wiener process. For a particular family of two-state stationary Markov chains we show that the corresponding centered integral processes exhibit the Hurst phenomenon to a remarkable degree in their pre-asymptotic behaviour.
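A simulation sketch of such an integral process (parameters are illustrative assumptions): for a two-state continuous-time chain S and f = the indicator of state 1, the integral of f(S(u)) over [0, t] is the occupation time of state 1. We track the centered process at jump times and record its maximum, minimum, and range.

```python
# Occupation-time integral of a two-state continuous-time Markov chain,
# centered by its stationary drift, with running max / min / range.
import random

random.seed(1)
a, b = 1.0, 2.0          # jump rates 0 -> 1 and 1 -> 0
pi1 = a / (a + b)        # stationary probability of state 1
t_end = 1000.0

s, t, occ = 0, 0.0, 0.0
lo = hi = 0.0            # running min / max of the centered integral
while t < t_end:
    rate = a if s == 0 else b
    hold = min(random.expovariate(rate), t_end - t)
    if s == 1:
        occ += hold      # accumulate time spent in state 1
    t += hold
    centered = occ - pi1 * t
    lo, hi = min(lo, centered), max(hi, centered)
    s = 1 - s
range_width = hi - lo
print(occ / t_end, range_width)
```

Over long horizons the occupation fraction settles near pi1, while the centered process fluctuates on the Wiener scale sqrt(t) described by the weak-convergence result.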


2000 ◽  
Vol 37 (1) ◽  
pp. 15-28 ◽  
Author(s):  
Olivier François

This article describes new estimates for the second largest eigenvalue in absolute value of reversible and ergodic Markov chains on finite state spaces. These estimates apply when the stationary distribution assigns a probability higher than 0.702 to some given state of the chain. Geometric tools are used. The bounds mainly involve the isoperimetric constant of the chain, and hence generalize famous results obtained for the second eigenvalue. Comparison estimates are also established, using the isoperimetric constant of a reference chain. These results apply to the Metropolis-Hastings algorithm in order to solve minimization problems, when the probability of obtaining the solution from the algorithm can be chosen beforehand. For these dynamics, robust bounds are obtained at moderate levels of concentration.
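A companion sketch using the classical Cheeger-type bounds that the article generalizes (not its sharper concentration-dependent estimates): for a reversible lazy chain, the second-largest eigenvalue lambda_2 satisfies 1 − 2Φ ≤ lambda_2 ≤ 1 − Φ²/2, where Φ is the isoperimetric (conductance) constant. The 4-state lazy walk below is an illustrative assumption.

```python
# Second-largest eigenvalue vs. the isoperimetric constant for a reversible
# lazy walk on the path 0-1-2-3 (doubly stochastic, so pi is uniform).
import itertools
import numpy as np

W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
P = 0.5 * (np.eye(4) + W)        # laziness keeps the spectrum nonnegative
pi = np.full(4, 0.25)

lam = np.sort(np.linalg.eigvalsh(P))   # P is symmetric here, so eigvalsh applies
lam2 = lam[-2]

# Conductance: Phi = min over S with pi(S) <= 1/2 of Q(S, S^c) / pi(S),
# where Q(x, y) = pi(x) P(x, y) is the edge flow.
Qf = pi[:, None] * P
states = range(4)
Phi = min(
    Qf[np.ix_(S, [x for x in states if x not in S])].sum() / (0.25 * len(S))
    for r in (1, 2)
    for S in map(list, itertools.combinations(states, r))
)
print(lam2, Phi, 1 - 2 * Phi, 1 - Phi ** 2 / 2)
```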


2020 ◽  
Vol 2020 ◽  
pp. 1-11 ◽  
Author(s):  
Zhanfeng Li ◽  
Min Huang ◽  
Xiaohua Meng ◽  
Xiangyu Ge

This paper studies limit theorems for functions of a Markov chain in a single infinite Markovian environment. The strong law of large numbers (SLLN) in the infinite environment is established by constructing a martingale difference sequence for the measure under several different sufficient conditions. When the sequence of even functions g_n(x), n ≥ 0, satisfies different conditions over different ranges of x, we obtain an SLLN for functions of the Markov chain in a single infinite Markovian environment. In addition, the paper studies the strong convergence of weighted sums of functions of finite-state Markov chains in single infinite Markovian environments. Although similar conclusions have been obtained before, our results differ from previous work in that the strong convergence of the weighted sums is established under weaker sufficient conditions.
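A fixed-environment sanity check of the phenomenon the paper extends (this is the classical SLLN for an ergodic finite chain, not the random-environment theorems): ergodic averages of g(X_k) converge to the stationary expectation of g. The chain and test function are illustrative assumptions.

```python
# Ergodic average of g(X_k) for a 3-state chain vs. the stationary expectation.
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

g = np.array([1.0, -2.0, 5.0])   # an arbitrary test function on the states
target = pi @ g

rng = np.random.default_rng(7)
n = 50000
s, total = 0, 0.0
for _ in range(n):
    total += g[s]
    s = rng.choice(3, p=P[s])
avg = total / n
print(avg, target)
```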


2007 ◽  
Vol 24 (6) ◽  
pp. 813-829 ◽  
Author(s):  
JEFFREY J. HUNTER

The derivation of mean first passage times in Markov chains involves the solution of a family of linear equations. By exploring the solution of a related set of equations, using suitable generalized inverses of the Markovian kernel I - P, where P is the transition matrix of a finite irreducible Markov chain, we are able to derive elegant new results for finding the mean first passage times. As a by-product we derive the stationary distribution of the Markov chain without the necessity of any further computational procedures. Standard techniques in the literature, using for example Kemeny and Snell's fundamental matrix Z, require the initial derivation of the stationary distribution followed by the computation of Z, the inverse of I - P + eπ^T, where e^T = (1, 1, …, 1) and π^T is the stationary probability vector. The procedures of this paper involve only the derivation of the inverse of a matrix of simple structure, based upon known characteristics of the Markov chain together with simple elementary vectors. No prior computations are required. Various possible families of matrices are explored, leading to different related procedures.
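A worked example of the classical route described above (the route the article improves upon): Kemeny and Snell's fundamental matrix Z = (I − P + eπ^T)^{-1} yields the mean first passage times via m_ij = (z_jj − z_ij)/π_j, with mean return times m_jj = 1/π_j. The 3-state chain is an illustrative assumption.

```python
# Mean first passage times from the fundamental matrix, cross-checked against
# a direct first-passage linear system.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

# Stationary distribution pi (left eigenvector for eigenvalue 1).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = (np.diag(Z)[None, :] - Z) / pi[None, :]   # m_ij for i != j; diagonal is 0
np.fill_diagonal(M, 1.0 / pi)                 # mean return times m_jj = 1/pi_j

# Cross-check passage times into state 2: m_i2 = 1 + sum_{k != 2} p_ik m_k2.
idx = [0, 1]
mj = np.linalg.solve(np.eye(2) - P[np.ix_(idx, idx)], np.ones(2))
print(M[0, 2], mj[0])  # the two methods agree
```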


2007 ◽  
Vol 21 (3) ◽  
pp. 381-400 ◽  
Author(s):  
Bernd Heidergott ◽  
Arie Hordijk ◽  
Miranda van Uitert

This article provides series expansions of the stationary distribution of a finite Markov chain. This leads to an efficient numerical algorithm for computing the stationary distribution of a finite Markov chain. Numerical examples are given to illustrate the performance of the algorithm.
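One standard form of such an expansion, sketched under the assumption of a small perturbation (the article's algorithm may differ in detail): with deviation matrix D = (I − P + eπ^T)^{-1} − eπ^T and perturbation Δ = Q − P, the stationary distribution of Q satisfies π_Q = π_P Σ_{k≥0} (ΔD)^k whenever the series converges.

```python
# Series expansion of the stationary distribution of a perturbed chain,
# verified against a direct eigenvector computation.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
Delta = np.array([[-0.05, 0.05, 0.00],
                  [0.00, -0.05, 0.05],
                  [0.05, 0.00, -0.05]])
Q = P + Delta                      # rows still sum to 1

def stationary(M):
    w, v = np.linalg.eig(M.T)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

pi_P = stationary(P)
Pi = np.outer(np.ones(3), pi_P)
D = np.linalg.inv(np.eye(3) - P + Pi) - Pi    # deviation matrix

# Partial sums of the series pi_P (Delta D)^k.
approx = pi_P.copy()
term = pi_P.copy()
for _ in range(30):
    term = term @ (Delta @ D)
    approx = approx + term

print(approx, stationary(Q))  # partial sum matches the exact pi_Q
```

Truncating the series after a few terms is what makes the numerical algorithm efficient: each extra term costs one matrix-vector product.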


2001 ◽  
Vol 38 (1) ◽  
pp. 262-269 ◽  
Author(s):  
Geoffrey Pritchard ◽  
David J. Scott

We consider the problem of estimating the rate of convergence to stationarity of a continuous-time, finite-state Markov chain. This is done via an estimator of the second-largest eigenvalue of the transition matrix, which in turn is based on conventional inference in a parametric model. We obtain a limiting distribution for the eigenvalue estimator. As an example we treat an M/M/c/c queue, and show that the method allows us to estimate the time to stationarity τ within a time comparable to τ.
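An illustrative computation of the target quantity (not the paper's estimator, which is built from observed transition data): the spectral gap of an M/M/c/c generator and the corresponding relaxation time τ = 1/gap. The rates and server count below are assumed values.

```python
# Spectral gap and relaxation time of an M/M/c/c (Erlang loss) generator.
import numpy as np

lam, mu, c = 2.0, 1.0, 3          # arrival rate, per-server rate, servers
n = c + 1                         # states: 0..c customers in service
A = np.zeros((n, n))
for i in range(n):
    if i < c:
        A[i, i + 1] = lam         # arrival (blocked when i == c)
    if i > 0:
        A[i, i - 1] = i * mu      # departure, rate proportional to busy servers
    A[i, i] = -A[i].sum()         # generator rows sum to zero

ev = np.sort(np.real(np.linalg.eigvals(A)))
# Birth-death generators are reversible, so the spectrum is real:
# 0 = ev[-1] > ev[-2] >= ...; the gap is -ev[-2].
gap = -ev[-2]
tau = 1.0 / gap
print(gap, tau)
```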

