Conditioning an additive functional of a Markov chain to stay nonnegative. II. Hitting a high level

2005 ◽  
Vol 37 (4) ◽  
pp. 1035-1055 ◽  
Author(s):  
Saul D. Jacka ◽  
Zorana Lazic ◽  
Jon Warren

Let (X_t)_{t≥0} be a continuous-time irreducible Markov chain on a finite state space E, let v: E → ℝ\{0}, and let (φ_t)_{t≥0} be defined by φ_t = ∫_0^t v(X_s) ds. We consider the case in which the process (φ_t)_{t≥0} is oscillating and that in which (φ_t)_{t≥0} has a negative drift. In each of these cases, we condition the process (X_t, φ_t)_{t≥0} on the event that (φ_t)_{t≥0} hits level y before hitting 0 and prove weak convergence of the conditioned process as y → ∞. In addition, we show the relationship between the conditioning of the process (φ_t)_{t≥0} with a negative drift to oscillate and the conditioning of it to stay nonnegative for a long time, and the relationship between the conditioning of (φ_t)_{t≥0} with a negative drift to drift to ∞ and the conditioning of it to hit large levels before hitting 0.
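As a toy illustration of the object studied here (not taken from the paper), one can simulate a two-state chain with made-up rates, chosen so that the additive functional φ_t = ∫_0^t v(X_s) ds has negative drift, and estimate by Monte Carlo the probability that φ hits a level y before hitting 0:

```python
import random

# Toy sketch (rates and v are made up): a two-state chain whose additive
# functional phi_t = int_0^t v(X_s) ds has stationary drift -0.5 < 0, and
# a Monte Carlo estimate of P(phi hits level y before hitting 0).
RATES = {0: 1.0, 1: 1.0}   # exponential holding rates of the chain
V = {0: 1.0, 1: -2.0}      # v: E -> R\{0}

def hits_y_before_0(y, x=0, phi=0.5):
    while 0.0 < phi < y:
        hold = random.expovariate(RATES[x])
        # phi moves linearly with slope v(x) during the holding time
        if V[x] > 0 and phi + V[x] * hold >= y:
            return True
        if V[x] < 0 and phi + V[x] * hold <= 0.0:
            return False
        phi += V[x] * hold
        x = 1 - x              # jump to the other state
    return phi >= y

random.seed(0)
N = 20000
est = sum(hits_y_before_0(3.0) for _ in range(N)) / N
print(est)  # small: the negative drift makes high levels rare
```

The event being estimated is exactly the conditioning event of the paper; under the negative drift its probability decays as y grows, which is why the y → ∞ limit requires a weak-convergence argument rather than direct conditioning.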


2005 ◽  
Vol 37 (4) ◽  
pp. 1015-1034 ◽  
Author(s):  
Saul D. Jacka ◽  
Zorana Lazic ◽  
Jon Warren

Let (X_t)_{t≥0} be a continuous-time irreducible Markov chain on a finite state space E, let v be a map v: E → ℝ\{0}, and let (φ_t)_{t≥0} be an additive functional defined by φ_t = ∫_0^t v(X_s) ds. We consider the case in which the process (φ_t)_{t≥0} is oscillating and that in which (φ_t)_{t≥0} has a negative drift. In each of these cases, we condition the process (X_t, φ_t)_{t≥0} on the event that (φ_t)_{t≥0} is nonnegative until time T and prove weak convergence of the conditioned process as T → ∞.



1993 ◽  
Vol 113 (2) ◽  
pp. 381-386
Author(s):  
Martin Baxter

In Baxter and Williams [1] we began a study of Abel averages, as opposed to the oft-studied Cesàro averages. In Baxter and Williams [2], hereinafter referred to as [BW2], we studied the large-deviation behaviour of these averages. In the case where X is an irreducible Markov chain on a finite state-space S = {1, …, n}, we observed that both averages converge to π, the invariant distribution of X. We noted the role of δ(v) := sup{Re(z): z ∈ spect(Q + V)}, where v is an n-vector, spect(·) denotes the spectrum (here, the set of eigenvalues), Q is the Q-matrix of X, and V denotes the diagonal matrix diag(v_i). It is also true that the large-deviation property holds for C_t with rate function I defined on M = {(x_i)_{i∈S}: x_i ≥ 0, Σ_i x_i = 1}.
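The spectral quantity δ(v) defined in this abstract is directly computable for a small chain; a minimal numpy sketch, with a made-up Q-matrix and vector v:

```python
import numpy as np

# Sketch: delta(v) = sup{Re(z): z in spect(Q + V)}, where Q is a Q-matrix
# and V = diag(v_i).  The rates and v below are made up for illustration.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.5,  0.5],
              [ 0.5,  0.5, -1.0]])     # rows sum to 0, as for any Q-matrix
v = np.array([1.0, -1.0, 0.5])

delta = max(np.linalg.eigvals(Q + np.diag(v)).real)
print(delta)

# Sanity check: for v = 0 the top eigenvalue of Q itself is 0.
print(max(np.linalg.eigvals(Q).real))  # ~ 0.0
```

By a Gershgorin/Jensen argument δ(v) always lies between min_i v_i and max_i v_i, which gives a quick consistency check on the computed value.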


2014 ◽  
Vol 51 (4) ◽  
pp. 1114-1132 ◽  
Author(s):  
Bernhard C. Geiger ◽  
Christoph Temmel

A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
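As a toy illustration (a made-up chain, not from the paper): the entropy rate of the original chain is an upper bound on that of any lumping, since a projection can only lose information. A numpy sketch of the entropy rate and of a lumping as a projection of trajectories:

```python
import numpy as np

# Toy 3-state chain (transition matrix made up); its entropy rate bounds
# the entropy rate of any lumped version from above.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, vl = np.linalg.eig(P.T)
pi = vl[:, np.argmin(abs(w - 1))].real
pi /= pi.sum()

# entropy rate H = -sum_i pi_i sum_j P_ij log P_ij  (in nats)
H = -sum(pi[i] * sum(p * np.log(p) for p in P[i] if p > 0)
         for i in range(3))
print(H)

# a lumping: coordinatewise projection of a trajectory onto symbols
lump = {0: 'a', 1: 'a', 2: 'b'}
rng = np.random.default_rng(0)
x, traj = 0, []
for _ in range(10):
    traj.append(lump[x])
    x = rng.choice(3, p=P[x])
print(''.join(traj))   # the lumped image of the sampled trajectory
```

States 0 and 1 share the symbol 'a', so reconstructing the original trajectory from its lumped image requires extra information; quantifying that information is exactly the criterion the abstract describes.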


1972 ◽  
Vol 9 (1) ◽  
pp. 129-139 ◽  
Author(s):  
P. J. Brockwell

The distribution of the times to first emptiness and first overflow, together with the limiting distribution of content are determined for a dam of finite capacity. It is assumed that the rate of change of the level of the dam is a continuous-time Markov chain with finite state-space (suitably modified when the dam is full or empty).
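A toy simulation of the model described (all rates made up, and without the paper's boundary modifications, since we stop at the first boundary hit): the dam level moves with a slope determined by the current state of a two-state chain, and we record the first time the dam empties or overflows.

```python
import random

# Toy sketch: the dam level changes at rate SLOPE[x] while the modulating
# chain sits in state x; C is the capacity.  Rates are made up.
C = 5.0
RATE = {0: 1.0, 1: 1.0}      # holding rates of the two-state chain
SLOPE = {0: 2.0, 1: -1.5}    # rate of change of the level in each state

def first_exit(level=2.0, x=0, horizon=1000.0):
    t = 0.0
    while t < horizon:
        hold = random.expovariate(RATE[x])
        s = SLOPE[x]
        if s > 0 and level + s * hold >= C:      # overflow during this hold
            return t + (C - level) / s, 'overflow'
        if s < 0 and level + s * hold <= 0.0:    # emptiness during this hold
            return t + level / (-s), 'empty'
        level += s * hold
        t += hold
        x = 1 - x                                # chain jumps
    return horizon, 'neither'

random.seed(1)
t_exit, kind = first_exit()
print(t_exit, kind)
```

Repeating this over many runs gives empirical versions of the first-emptiness and first-overflow time distributions that the paper determines analytically.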



2007 ◽  
Vol 2007 ◽  
pp. 1-14 ◽  
Author(s):  
Frank G. Ball ◽  
Robin K. Milne ◽  
Geoffrey F. Yeo

Patch clamp recordings from ion channels often show bursting behaviour, that is, periods of repetitive activity, which are noticeably separated from each other by periods of inactivity. A number of authors have obtained results for important properties of theoretical and empirical bursts when channel gating is modelled by a continuous-time Markov chain with a finite-state space. We show how the use of marked continuous-time Markov chains can simplify the derivation of (i) the distributions of several burst properties, including the total open time, the total charge transfer, and the number of openings in a burst, and (ii) the form of these distributions when the underlying gating process is time reversible and in equilibrium.
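For the simplest gating schemes the number of openings per burst is geometric, which a short simulation can check; a toy sketch with made-up parameters (not the marked-chain machinery of the paper):

```python
import random

# Toy gating scheme: from the open state, a closure is a brief
# within-burst gap with probability P_SHORT (after which the channel
# reopens) or a long between-burst closure otherwise.  So the number of
# openings per burst is geometric with mean 1/(1 - P_SHORT).
P_SHORT = 0.8

def openings_in_burst():
    n = 1
    while random.random() < P_SHORT:   # reopen after a short gap
        n += 1
    return n

random.seed(2)
N = 50000
mean = sum(openings_in_burst() for _ in range(N)) / N
print(mean)  # ~ 1 / (1 - 0.8) = 5
```

Other burst properties mentioned in the abstract, such as total open time or total charge transfer, accumulate marks over the same within-burst excursions, which is what the marked-chain formulation exploits.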



1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
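In the continuous-time finite-state setting, the quasi-stationary distribution can be computed as the normalised left eigenvector, for the eigenvalue of largest real part, of the Q-matrix restricted to the transient states; a minimal numpy sketch with made-up rates:

```python
import numpy as np

# Sketch: QSD of an absorbing continuous-time chain.  Q_sub is the
# Q-matrix restricted to the two transient states (rates made up);
# strictly negative row sums reflect absorption.  Eigenvalues here are
# -1 and -4, so the QSD is the left eigenvector for -1.
Q_sub = np.array([[-3.0,  2.0],
                  [ 1.0, -2.0]])

w, vl = np.linalg.eig(Q_sub.T)       # left eigenvectors of Q_sub
top = np.argmax(w.real)              # eigenvalue with largest real part
qsd = np.abs(vl[:, top].real)
qsd /= qsd.sum()
print(qsd)  # limiting conditional law of X_t given non-absorption
```

For this Q_sub the QSD works out to (1/3, 2/3), and Perron–Frobenius theory guarantees the chosen eigenvector can be taken strictly positive.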

