Lumpability for non-irreducible finite Markov chains

1984 ◽  
Vol 21 (3) ◽  
pp. 567-574 ◽  
Author(s):  
Atef M. Abdel-Moneim ◽  
Frederick W. Leysieffer

Conditions under which a function of a finite, discrete-time Markov chain, X(t), is again Markov are given when X(t) is not irreducible. These conditions are given in terms of the interrelationship between two partitions of the state space of X(t): the partition induced by the minimal essential classes of X(t) and the partition with respect to which lumping is to be considered.
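
For a concrete feel for the partition conditions involved, the following minimal sketch (the matrix and partition are invented, not taken from the paper) checks the classical Kemeny–Snell strong-lumpability test on a toy reducible chain whose blocks coincide with its minimal essential classes.

    import numpy as np

    def is_strongly_lumpable(P, partition):
        """Kemeny-Snell test: for every pair of blocks (B, C), the row sums
        sum_{j in C} P[i, j] must be the same for all states i in B."""
        for block in partition:
            for target in partition:
                sums = [P[i, list(target)].sum() for i in block]
                if not np.allclose(sums, sums[0]):
                    return False
        return True

    # Toy reducible chain whose minimal essential classes are {0, 1} and {2, 3}.
    P = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.3, 0.7, 0.0, 0.0],
                  [0.0, 0.0, 0.6, 0.4],
                  [0.0, 0.0, 0.6, 0.4]])
    print(is_strongly_lumpable(P, [(0, 1), (2, 3)]))  # True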


2011 ◽  
Vol 2011 ◽  
pp. 1-15 ◽  
Author(s):  
Mario Lefebvre ◽  
Moussa Kounta

We consider a discrete-time Markov chain with state space {1, 1+Δx, …, 1+kΔx = N}. We compute explicitly the probability p_j that the chain, starting from 1+jΔx, will hit N before 1, as well as the expected number d_j of transitions needed to end the game. In the limit when Δx and the time Δt between the transitions decrease to zero appropriately, the Markov chain tends to a geometric Brownian motion. We show that p_j and d_jΔt tend to the corresponding quantities for the geometric Brownian motion.
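
For illustration, p_j and d_j can be obtained by first-step analysis: writing u_j and l_j for the up- and down-probabilities, p_j = u_j p_{j+1} + l_j p_{j-1} + (1 − u_j − l_j)p_j with p_0 = 0 and p_k = 1, and d_j satisfies the same linear system with an extra constant term 1 and zero boundary values. The sketch below solves both systems for a generic birth–death chain on the grid; the transition probabilities are placeholders, whereas the paper's are obtained by discretizing the geometric Brownian motion.

    import numpy as np

    def hit_and_duration(up, down, k):
        """First-step analysis on states 0..k (0 is the barrier x = 1, k is x = N):
        p_j = P(hit k before 0 | start at j), d_j = expected steps to absorption."""
        A = np.zeros((k - 1, k - 1))
        bp = np.zeros(k - 1)          # right-hand side for the p system
        bd = np.ones(k - 1)           # right-hand side for the d system
        for row, j in enumerate(range(1, k)):
            A[row, row] = up[j] + down[j]     # 1 - (holding probability)
            if j + 1 < k:
                A[row, row + 1] = -up[j]
            else:
                bp[row] += up[j]              # boundary value p_k = 1
            if j - 1 > 0:
                A[row, row - 1] = -down[j]
        return np.linalg.solve(A, bp), np.linalg.solve(A, bd)

    # Placeholder up/down probabilities; the paper's come from discretizing GBM.
    k = 10
    up = {j: 0.55 for j in range(1, k)}
    down = {j: 0.40 for j in range(1, k)}
    p, d = hit_and_duration(up, down, k)
    print(p[0], d[0])   # p_1 and d_1, for the state next to the lower barrier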


Author(s):  
Marcel F. Neuts

We consider a stationary discrete-time Markov chain with a finite number m of possible states, which we designate by 1, …, m. We assume that at time t = 0 the process is in an initial state i with probability p_i (i = 1, …, m), where p_i ≥ 0 and p_1 + ⋯ + p_m = 1.


2001 ◽  
Vol 33 (2) ◽  
pp. 505-519 ◽  
Author(s):  
James Ledoux ◽  
Laurent Truffet

In this paper, we obtain Markovian bounds on a function of a homogeneous discrete-time Markov chain. To derive such bounds, we use well-known results on stochastic majorization of Markov chains and the Rogers–Pitman lumpability criterion. The proposed method of comparison between functions of Markov chains is not equivalent to the generalized coupling method of Markov chains, although we obtain the same kind of majorization. We derive necessary and sufficient conditions for the existence of our Markovian bounds. We also discuss the choice of the geometric invariant related to the lumpability condition that we use.
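
For intuition, the Rogers–Pitman criterion can be verified numerically as an intertwining relation ΛP = P̂Λ, where the rows of the link Λ are probability distributions concentrated on the blocks of the partition and P̂ is the lumped matrix. A minimal sketch with invented numbers (the paper's method additionally brings in stochastic majorization):

    import numpy as np

    # Toy chain, lumped over the partition {0, 1}, {2} (numbers invented).
    P = np.array([[0.2, 0.2, 0.6],
                  [0.2, 0.2, 0.6],
                  [0.3, 0.3, 0.4]])
    Phat = np.array([[0.4, 0.6],          # lumped transition matrix
                     [0.6, 0.4]])
    Lam = np.array([[0.5, 0.5, 0.0],      # link: one distribution per block
                    [0.0, 0.0, 1.0]])

    # Rogers-Pitman / intertwining condition: Lam @ P == Phat @ Lam.
    print(np.allclose(Lam @ P, Phat @ Lam))   # True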


1976 ◽  
Vol 8 (4) ◽  
pp. 737-771 ◽  
Author(s):  
R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).
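
In the countable case, Foster's (1953) criterion for positivity asks for a nonnegative function V whose one-step drift E[V(X_{n+1}) − V(X_n) | X_n = x] is at most −ε outside a finite set. A minimal numerical sketch, with an invented chain and Lyapunov function (not from the paper):

    # Reflected random walk on {0, 1, 2, ...} with state-dependent up-probability.
    def up(x):
        return 0.35 if x >= 5 else 0.6    # assumed toy kernel

    def drift(x):
        """One-step drift of the Lyapunov function V(x) = x, for x >= 1."""
        return up(x) * 1 + (1 - up(x)) * (-1)

    # Foster's criterion: drift <= -eps outside a finite set => positive recurrence.
    eps = 0.1
    print(all(drift(x) <= -eps for x in range(5, 10_000)))   # True: drift = -0.3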


1990 ◽  
Vol 4 (1) ◽  
pp. 89-116 ◽  
Author(s):  
Ushio Sumita ◽  
Maria Rieders

A novel algorithm is developed which computes the ergodic probability vector for large Markov chains. Decomposing the state space into lumps, the algorithm generates a replacement process on each lump, where any exit from a lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for a replacement process on one lump will be proportional to the ergodic probability vector of the original Markov chain restricted to that lump. Inverse matrices computed in the algorithm are of size (M – 1), where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
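
The basic construction can be sketched as follows: for a given lump and a given replacement distribution r, keep the within-lump transitions and redirect all exit probability back into the lump according to r. In the sketch below (toy matrix), r is a placeholder; the algorithm of the paper constructs the correct r recursively.

    import numpy as np

    def replacement_process(P, lump, r):
        """Replacement process on `lump`: keep within-lump transitions and
        replace any exit from the lump by a state inside it drawn from r."""
        A = P[np.ix_(lump, lump)]
        exit_mass = 1.0 - A.sum(axis=1)     # probability of leaving the lump
        return A + np.outer(exit_mass, r)

    def stationary(Q):
        """Ergodic probability vector of a small stochastic matrix Q."""
        w, v = np.linalg.eig(Q.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        return pi / pi.sum()

    # Toy 4-state chain with lumps {0, 1} and {2, 3}; r is a placeholder.
    P = np.array([[0.6, 0.2, 0.1, 0.1],
                  [0.3, 0.4, 0.2, 0.1],
                  [0.1, 0.1, 0.5, 0.3],
                  [0.2, 0.1, 0.3, 0.4]])
    Q = replacement_process(P, [0, 1], np.array([0.5, 0.5]))
    print(stationary(Q))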


2000 ◽  
Vol 37 (3) ◽  
pp. 795-806 ◽  
Author(s):  
Laurent Truffet

We propose in this paper two methods to compute Markovian bounds for monotone functions of a discrete-time homogeneous Markov chain evolving in a totally ordered state space. The main interest of such methods is to provide algorithms that simplify the analysis of transient characteristics such as the output process of a queue, or the sojourn time in a subset of states. The construction of the bounds is based on two kinds of results: well-known results on stochastic comparison between Markov chains with the same state space; and the fact that in some cases a function of a Markov chain is again a homogeneous Markov chain, but with a smaller state space. Indeed, computation of the bounds uses knowledge of the whole initial model; however, only part of these data is necessary at each step of the algorithms.
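
Both methods rest on the strong stochastic order on a totally ordered state space: a matrix Q bounds P from above when every row of Q has larger upper-tail sums than the corresponding row of P, and Q itself is st-monotone. A minimal sketch with invented matrices:

    import numpy as np

    def tails(M):
        """Row-wise upper-tail sums: tails(M)[i, k] = sum_{j >= k} M[i, j]."""
        return np.cumsum(M[:, ::-1], axis=1)[:, ::-1]

    def st_dominates(P, Q):
        """Q >=st P row by row on a totally ordered state space."""
        return np.all(tails(Q) >= tails(P) - 1e-12)

    def st_monotone(Q):
        """Each row of Q stochastically dominates the previous one."""
        return st_dominates(Q[:-1], Q[1:])

    # Toy matrices (invented): Q pushes mass upward relative to P and is
    # monotone, so the Q-chain is a stochastic upper bound for the P-chain.
    P = np.array([[0.5, 0.4, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])
    Q = np.array([[0.4, 0.4, 0.2],
                  [0.2, 0.4, 0.4],
                  [0.1, 0.3, 0.6]])
    print(st_dominates(P, Q), st_monotone(Q))   # True True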


1980 ◽  
Vol 17 (1) ◽  
pp. 33-46 ◽  
Author(s):  
S. Tavaré

The connection between the age distribution of a discrete-time Markov chain and a certain time-reversed Markov chain is exhibited. A method for finding properties of age distributions follows simply from this approach. The results, which have application in several areas in applied probability, are illustrated by examples from population genetics.
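
The reversal in question is the standard one: if π is the stationary distribution of P, the reversed chain has transition probabilities P̃(i, j) = π(j)P(j, i)/π(i). A minimal sketch with an invented chain:

    import numpy as np

    def stationary(P):
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        return pi / pi.sum()

    def reversed_chain(P, pi):
        """Time reversal w.r.t. pi: Ptilde[i, j] = pi[j] * P[j, i] / pi[i]."""
        return P.T * pi[None, :] / pi[:, None]

    # Toy irreducible chain (invented numbers).
    P = np.array([[0.1, 0.6, 0.3],
                  [0.4, 0.4, 0.2],
                  [0.3, 0.3, 0.4]])
    pi = stationary(P)
    Pt = reversed_chain(P, pi)
    print(Pt.sum(axis=1))                    # each row sums to 1
    print(np.allclose(stationary(Pt), pi))   # reversal preserves pi: True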


2005 ◽  
Vol 2005 (3) ◽  
pp. 345-351 ◽  
Author(s):  
Lakhdar Aggoun

We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameters of the model via the expectation maximization (EM) algorithm.
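
To fix ideas, a forward filter of the same flavour can be written for a hidden chain X with transition matrix A observed through a chain Y whose one-step kernel depends on the current hidden state. The dependence convention and all numbers below are assumptions for illustration, not the paper's exact model; the paper's smoothing and EM recursions build on such a filter.

    import numpy as np

    def filter_step(alpha, y_prev, y_new, A, C):
        """One filtering step for a chain X observed through a chain Y whose
        kernel C[x] depends on the current hidden state (assumed convention):
        alpha'(x') propto sum_x alpha(x) * A[x, x'] * C[x'][y_prev, y_new]."""
        alpha_new = (alpha @ A) * C[:, y_prev, y_new]
        return alpha_new / alpha_new.sum()

    # Toy model (all numbers invented): 2 hidden states, 2 observed states.
    A = np.array([[0.9, 0.1],
                  [0.2, 0.8]])                 # hidden-chain transitions
    C = np.array([[[0.7, 0.3], [0.4, 0.6]],    # Y-kernel when X = 0
                  [[0.2, 0.8], [0.5, 0.5]]])   # Y-kernel when X = 1
    alpha = np.array([0.5, 0.5])
    for y_prev, y_new in [(0, 1), (1, 1), (1, 0)]:
        alpha = filter_step(alpha, y_prev, y_new, A, C)
    print(alpha)   # filtered distribution of the hidden state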


2009 ◽  
Vol 46 (3) ◽  
pp. 812-826 ◽  
Author(s):  
Saul Jacka

Motivated by Feller's coin-tossing problem, we consider the problem of conditioning an irreducible Markov chain never to wait too long at 0. Denoting by τ the first time that the chain, X, waits for at least one unit of time at the origin, we consider conditioning the chain on the event (τ > T). We show that there is a weak limit as T → ∞ in the cases where either the state space is finite or X is transient. We give sufficient conditions for the existence of a weak limit in other cases and show that we have vague convergence to a defective limit if the time to hit zero has a lighter tail than τ and τ is subexponential.
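
A naive way to visualise the conditioned law is rejection sampling: simulate paths of the chain and keep those on which τ has not occurred by time T. In the discrete-time sketch below, "waiting one unit of time at 0" is read as two consecutive visits to 0, and the chain is an invented 3-state example.

    import numpy as np

    rng = np.random.default_rng(0)

    def tau(path):
        """First time the chain waits one full unit at 0 (two consecutive 0's)."""
        for t in range(len(path) - 1):
            if path[t] == 0 and path[t + 1] == 0:
                return t
        return len(path)

    # Toy irreducible chain on {0, 1, 2} (numbers invented).
    P = np.array([[0.2, 0.6, 0.2],
                  [0.5, 0.3, 0.2],
                  [0.6, 0.2, 0.2]])
    T, n_paths, kept = 30, 20_000, []
    for _ in range(n_paths):
        x, path = 0, [0]
        for _ in range(T + 1):
            x = rng.choice(3, p=P[x])
            path.append(x)
        if tau(path) > T:                # rejection step: condition on (tau > T)
            kept.append(path[T])
    # Empirical law of X_T given (tau > T): an approximation of the weak limit.
    print(np.bincount(kept, minlength=3) / len(kept))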

