Lumpings of Markov Chains, Entropy Rate Preservation, and Higher-Order Lumpability

2014 ◽  
Vol 51 (4) ◽  
pp. 1114-1132 ◽  
Author(s):  
Bernhard C. Geiger ◽  
Christoph Temmel

A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
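A minimal sketch of the basic object (not from the paper): a lumping applies a function coordinatewise to each state of a trajectory. The three-state chain, its transition probabilities, and the lumping function g below are all hypothetical, chosen only to show the projection.

```python
import random

random.seed(0)

# Hypothetical 3-state chain on {0, 1, 2}: state -> list of (next state, prob).
P = {0: [(1, 0.5), (2, 0.5)],
     1: [(0, 1.0)],
     2: [(0, 0.5), (1, 0.5)]}

def step(x):
    """One transition of the chain from state x."""
    u, acc = random.random(), 0.0
    for y, p in P[x]:
        acc += p
        if u < acc:
            return y
    return y

def g(x):
    """Lumping function: merge states 1 and 2 into a single symbol 'b'."""
    return 'a' if x == 0 else 'b'

x, traj = 0, [0]
for _ in range(10):
    x = step(x)
    traj.append(x)

lumped = [g(s) for s in traj]   # coordinatewise projection of the trajectory
print(traj)
print(lumped)
```

The lumped sequence is generally no longer Markov; the paper's k-lumpability question asks when it is a kth-order Markov chain.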


2005 ◽  
Vol 37 (4) ◽  
pp. 1015-1034 ◽  
Author(s):  
Saul D. Jacka ◽  
Zorana Lazic ◽  
Jon Warren

Let (X_t)_{t≥0} be a continuous-time irreducible Markov chain on a finite state space E, let v be a map v: E→ℝ\{0}, and let (φ_t)_{t≥0} be an additive functional defined by φ_t = ∫_0^t v(X_s) ds. We consider the case in which the process (φ_t)_{t≥0} is oscillating and that in which (φ_t)_{t≥0} has a negative drift. In each of these cases, we condition the process (X_t, φ_t)_{t≥0} on the event that (φ_t)_{t≥0} is nonnegative until time T and prove weak convergence of the conditioned process as T→∞.
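A crude Monte Carlo sketch of this conditioning via rejection sampling (not the paper's method, which proves weak convergence analytically): simulate (X, φ) on [0, T] and keep only paths on which φ stays nonnegative. The two-state rates, the function v, and the horizon T are hypothetical.

```python
import random

random.seed(1)

# Hypothetical 2-state chain: state -> (total jump rate, next state).
rates = {0: (1.0, 1), 1: (1.0, 0)}
v = {0: 1.0, 1: -1.0}   # v takes both signs, so phi_t can oscillate

def final_state_if_nonneg(T):
    """Simulate (X, phi) on [0, T]; return X_T if phi >= 0 throughout, else None."""
    x, phi, t = 0, 0.0, 0.0
    while True:
        dt = min(random.expovariate(rates[x][0]), T - t)
        phi_end = phi + v[x] * dt          # phi is linear between jumps
        if min(phi, phi_end) < 0:
            return None                    # reject: phi went negative
        phi, t = phi_end, t + dt
        if t >= T:
            return x
        x = rates[x][1]

T, accepted = 5.0, []
for _ in range(2000):
    s = final_state_if_nonneg(T)
    if s is not None:
        accepted.append(s)

# Empirical law of X_T under the conditioning (a crude approximation;
# rejection sampling degrades rapidly as T grows).
frac0 = accepted.count(0) / len(accepted)
print(f"P(X_T = 0 | phi >= 0 on [0,T]) ~ {frac0:.2f}")
```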


2005 ◽  
Vol 37 (4) ◽  
pp. 1035-1055 ◽  
Author(s):  
Saul D. Jacka ◽  
Zorana Lazic ◽  
Jon Warren

Let (X_t)_{t≥0} be a continuous-time irreducible Markov chain on a finite state space E, let v: E→ℝ\{0}, and let (φ_t)_{t≥0} be defined by φ_t = ∫_0^t v(X_s) ds. We consider the case in which the process (φ_t)_{t≥0} is oscillating and that in which (φ_t)_{t≥0} has a negative drift. In each of these cases, we condition the process (X_t, φ_t)_{t≥0} on the event that (φ_t)_{t≥0} hits level y before hitting 0 and prove weak convergence of the conditioned process as y→∞. In addition, we relate the conditioning of the process (φ_t)_{t≥0} with a negative drift to oscillate to the conditioning of it to stay nonnegative for a long time, and the conditioning of (φ_t)_{t≥0} with a negative drift to drift to ∞ to the conditioning of it to hit large levels before hitting 0.



1993 ◽  
Vol 113 (2) ◽  
pp. 381-386
Author(s):  
Martin Baxter

In Baxter and Williams [1] we began a study of Abel averages A_λ := λ∫_0^∞ e^{-λs} δ_{X_s} ds, as opposed to the oft-studied Cesàro averages C_t := t^{-1}∫_0^t δ_{X_s} ds. In Baxter and Williams [2], hereinafter referred to as [BW2], we studied the large-deviation behaviour of these averages. In the case where X is an irreducible Markov chain on a finite state-space S = {1, …, n}, we observed that C_t → π as t → ∞ and A_λ → π as λ → 0, where π is the invariant distribution of X. We noted that lim_{t→∞} t^{-1} log E[exp(t⟨v, C_t⟩)] = δ(v), where v is an n-vector, δ(v) := sup{Re(z): z ∈ spect(Q + V)}, spect(·) denotes the spectrum (here the set of eigenvalues), Q is the Q-matrix of X, and V denotes the diagonal matrix diag(v_i). It is also true that the large-deviation property holds for C_t with rate function I defined on M = {(x_i)_{i∈S}: x_i ≥ 0, Σ_i x_i = 1}.
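The quantity δ(v) = sup{Re(z): z ∈ spect(Q + V)} is directly computable. A sketch with NumPy, for a hypothetical two-state Q-matrix and an arbitrary vector v (both invented for illustration):

```python
import numpy as np

# Hypothetical Q-matrix of a two-state irreducible chain (rows sum to zero).
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
v = np.array([0.5, -0.25])      # an arbitrary n-vector on the state space
V = np.diag(v)                   # V = diag(v_i)

# delta(v) = sup{Re(z) : z in spect(Q + V)}
delta = max(np.linalg.eigvals(Q + V).real)
print(delta)

# Sanity check: for v = 0, Q itself is a generator, so delta(0) = 0.
assert abs(max(np.linalg.eigvals(Q).real)) < 1e-10
```

For a generator Q, δ(v) always lies between min_i v_i and max_i v_i, which gives a quick check on the computation.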



2009 ◽  
Vol 46 (2) ◽  
pp. 309-329 ◽  
Author(s):  
Wojciech Niemiro ◽  
Piotr Pokarowski

The standard Markov chain Monte Carlo method of estimating an expected value is to generate a Markov chain which converges to the target distribution and then compute correlated sample averages. In many applications the quantity of interest θ is represented as a product of expected values, θ = µ_1 ⋯ µ_k, and a natural estimator is a product of averages. To increase the confidence level, we can compute a median of independent runs. The goal of this paper is to analyze such an estimator θ̂, i.e. an estimator which is a ‘median of products of averages’ (MPA). Sufficient conditions are given for θ̂ to have fixed relative precision at a given level of confidence, that is, to satisfy P(|θ̂ − θ| ≤ εθ) ≥ 1 − α. Our main tool is a new bound on the mean-square error, valid also for nonreversible Markov chains on a finite state space.
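A toy sketch of the MPA structure, simplified to i.i.d. samples in place of Markov chain output (so it is not the paper's MCMC setting, only the estimator's shape). The target θ = µ_1 · µ_2 and all numbers are hypothetical.

```python
import random
import statistics

random.seed(2)

# theta = mu1 * mu2; each mu_j is estimated by a sample average.
mu1, mu2 = 0.5, 1.5          # hypothetical expected values; theta = 0.75

def average(mu, n):
    """Sample average of n i.i.d. draws from Uniform(mu - 0.5, mu + 0.5)."""
    return sum(random.uniform(mu - 0.5, mu + 0.5) for _ in range(n)) / n

def product_of_averages(n):
    """One run: a product of independent sample averages."""
    return average(mu1, n) * average(mu2, n)

# Taking a median over m independent runs boosts the confidence level:
# the median is within tolerance unless more than half of the runs fail.
m, n = 11, 1000
mpa = statistics.median(product_of_averages(n) for _ in range(m))
print(mpa)
```

Each individual run has fixed relative precision only with moderate probability; the median of m runs fails only if a majority of runs fail, so the failure probability decays exponentially in m.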



1982 ◽  
Vol 19 (2) ◽  
pp. 272-288 ◽  
Author(s):  
P. J. Brockwell ◽  
S. I. Resnick ◽  
N. Pacheco-Santiago

A study is made of the maximum, minimum and range on [0, t] of the integral process Y_t = ∫_0^t f(S_u) du, where S is a finite state-space Markov chain and f is a real-valued function on its state space. Approximate results are derived by establishing weak convergence of a sequence of such processes to a Wiener process. For a particular family of two-state stationary Markov chains we show that the corresponding centered integral processes exhibit the Hurst phenomenon to a remarkable degree in their pre-asymptotic behaviour.
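A simulation sketch of the objects studied: the running maximum, minimum and range of Y_t = ∫_0^t f(S_u) du for a hypothetical two-state chain with unit holding rates and f = (1, −1). Between jumps Y is linear, so extremes occur at jump endpoints.

```python
import random

random.seed(3)

# Hypothetical two-state chain S on {0, 1}, Exp(1) holding times,
# alternating states; f(0) = 1, f(1) = -1.
f = [1.0, -1.0]

def max_min_range(T):
    """Maximum, minimum and range of Y on [0, T] along one sample path."""
    s, y, t = 0, 0.0, 0.0
    lo = hi = 0.0
    while t < T:
        dt = min(random.expovariate(1.0), T - t)
        y_end = y + f[s] * dt           # Y is linear between jumps
        lo, hi = min(lo, y_end), max(hi, y_end)
        y, t, s = y_end, t + dt, 1 - s  # jump to the other state
    return hi, lo, hi - lo

hi, lo, rng = max_min_range(T=100.0)
print(hi, lo, rng)
```

For large t the paper's weak-convergence result says Y (suitably centered and scaled) behaves like a Wiener process, so such functionals can be approximated by their Brownian counterparts.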


2019 ◽  
Vol 23 ◽  
pp. 739-769
Author(s):  
Paweł Lorek

For a given absorbing Markov chain X* on a finite state space, a chain X is a sharp antidual of X* if the fastest strong stationary time (FSST) of X is equal, in distribution, to the absorption time of X*. In this paper, we show a systematic way of finding such an antidual based on some partial ordering of the state space. We use a theory of strong stationary duality developed recently for Möbius monotone Markov chains. We give several sharp antidual chains for the Markov chain corresponding to a generalized coupon collector problem. As a consequence, utilizing known results on the limiting distribution of the absorption time, we indicate separation cutoffs (with their window sizes) in several chains. We also present a chain which (under some conditions) has a prescribed stationary distribution and whose FSST is distributed as a prescribed mixture of sums of geometric random variables.
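A small illustration of the "sums of geometric random variables" that appear here: for the classical coupon collector with n coupons, the collection (absorption) time is a sum of independent geometrics with success probabilities (n − i)/n. The simulation below (n = 5 is a hypothetical small example) checks the two constructions agree in mean.

```python
import random

random.seed(4)

n = 5  # number of coupon types (hypothetical small example)

def collect_time():
    """Number of draws until all n coupon types have been seen."""
    seen, t = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        t += 1
    return t

def geometric_sum_time():
    """Sum of independent Geometric((n-i)/n) variables, i = 0, ..., n-1."""
    t = 0
    for i in range(n):
        p = (n - i) / n
        t += 1                           # trial that succeeds
        while random.random() >= p:      # preceding failures
            t += 1
    return t

trials = 20000
m1 = sum(collect_time() for _ in range(trials)) / trials
m2 = sum(geometric_sum_time() for _ in range(trials)) / trials
print(m1, m2)   # both close to n * H_n = 5 * (1 + 1/2 + ... + 1/5) = 11.4167
```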

