Chaînes colorées: trois extensions d'une formule de P. Nelson (Coloured chains: three extensions of a formula of P. Nelson)

1978 ◽  
Vol 15 (2) ◽  
pp. 321-339 ◽  
Author(s):  
Gérard Letac

Nelson [9], [10] has computed the generating function of the return probabilities to the initial state for a particular Markov chain on the permutations of three objects. The present paper studies three distinct Markov chains generalizing the Nelson chain: the so-called three-coloured chain, which includes some birth-and-death processes on ℤ as a particular case; a chain on a graph close to the edge graph of a cube; and the daisy library. Two further themes tie these chains together: the notion of a coloured chain and the technique of computation by additive processes.
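As a hedged illustration of the kind of quantity Nelson computed, the sketch below builds a random-transposition walk on the six permutations of three objects (an assumed stand-in; the paper's particular chain may differ) and computes the return probabilities p_n to the identity together with a truncated generating function F(z) = Σ p_n zⁿ.

```python
from itertools import permutations

# Illustrative chain, not necessarily Nelson's: at each step, apply one of
# the three transpositions (0,1), (0,2), (1,2) chosen uniformly at random.
states = list(permutations(range(3)))
index = {s: i for i, s in enumerate(states)}
n = len(states)

def swap(perm, i, j):
    p = list(perm)
    p[i], p[j] = p[j], p[i]
    return tuple(p)

# Transition matrix of the random-transposition walk.
P = [[0.0] * n for _ in range(n)]
for s in states:
    for (i, j) in [(0, 1), (0, 2), (1, 2)]:
        P[index[s]][index[swap(s, i, j)]] += 1.0 / 3.0

def mat_mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# Return probabilities p_k = P^k(id, id) and a truncated generating function.
ident = index[(0, 1, 2)]
p = [1.0]
Pn = [[float(r == c) for c in range(n)] for r in range(n)]
for _ in range(30):
    Pn = mat_mul(Pn, P)
    p.append(Pn[ident][ident])

def F(z, probs=p):
    # Truncated generating function of the return probabilities.
    return sum(pk * z**k for k, pk in enumerate(probs))
```

Since every step changes the parity of the permutation, the return probabilities vanish at odd times, and p_2 = 1/3 (the second transposition must undo the first).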





2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract: In this note we study the probability of absorption and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect this with other known results. By computing a suitable probability generating function we estimate the mean time to absorption when absorption is not certain, with applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A₁, A₂, …, Aₖ.
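The quantities the abstract describes can be computed by first-step analysis; the sketch below (an illustrative setup, not the paper's method) does this for the symmetric gambler's-ruin walk on {0, …, 4} with both endpoints absorbing, solving the first-step equations by fixed-point iteration.

```python
# h[i] = P(absorbed at 0 before N | start at i)  -- "reach A before B"
# t[i] = mean number of steps until absorption from i
N = 4
p = 0.5  # probability of stepping up; 1 - p of stepping down

h = [0.0] * (N + 1)
t = [0.0] * (N + 1)
h[0] = 1.0  # absorbed at 0 immediately

# First-step equations for interior states:
#   h[i] = (1-p) h[i-1] + p h[i+1]
#   t[i] = 1 + (1-p) t[i-1] + p t[i+1]
# iterated to a fixed point (Gauss-Seidel style).
for _ in range(10000):
    for i in range(1, N):
        h[i] = (1 - p) * h[i - 1] + p * h[i + 1]
        t[i] = 1 + (1 - p) * t[i - 1] + p * t[i + 1]
```

For the symmetric walk the classical closed forms are h[i] = (N − i)/N and t[i] = i(N − i), which the iteration reproduces.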



1976 ◽  
Vol 13 (02) ◽  
pp. 357-360
Author(s):  
Pedro Vit

It is shown that an irreducible and aperiodic Markov chain can be altered, preserving irreducibility, without changing the nature of the chain, in the sense that the modified chain is transient (recurrent) if and only if the original chain is transient (recurrent). Furthermore, it is shown by means of a counterexample that ergodicity (null recurrence) is not preserved. An interesting application of this result is a simple proof of Pakes's generalization of Foster's criterion for a chain to be recurrent.



Author(s):  
H. D. Miller

Summary: This paper is essentially a continuation of the previous one (5) and the notation established therein will be freely repeated. The sequence {ξr} of random variables is defined on a positively regular finite Markov chain {kr} as in (5), and the partial sums ζr = ξ1 + ⋯ + ξr are considered. Let ζn be the first positive ζr and let πjk(y), the 'ruin' function or absorption probability, be defined accordingly. The main result (Theorem 1) is an asymptotic expression for πjk(y) for large y in the case when E(ξ1) < 0, the expectation of ξ1 being computed under the unique stationary distribution for k0, the initial state of the chain, and unconditionally on k1.
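A minimal numerical sketch of the ruin-function flavour of this result, in the simplest i.i.d. case rather than Miller's Markov-modulated setting: for a simple random walk with negative drift (up-probability p < 1/2), the probability of ever reaching level y is (p/q)^y with q = 1 − p, which we check against an exact two-barrier gambler's-ruin computation with the lower barrier pushed far away.

```python
# Parameters of the illustration (all assumed, not from the paper).
p, y, M = 0.3, 3, 40       # up-probability, target level, distance to lower barrier
q = 1 - p
r = q / p                   # > 1 because the drift is negative

# Exact gambler's-ruin formula: starting at M with barriers at 0 and M + y,
# P(hit the upper barrier first) = (1 - r**M) / (1 - r**(M + y)).
two_barrier = (1 - r**M) / (1 - r**(M + y))

# As M -> infinity this tends to the ruin asymptotic (p/q)**y.
asymptotic = (p / q) ** y
```

With M = 40 the two quantities already agree to many decimal places, because the boundary correction decays like r^(−M).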



1995 ◽  
Vol 27 (3) ◽  
pp. 652-691 ◽  
Author(s):  
Harry Kesten

We consider positive matrices Q, indexed by {1, 2, …}. Assume that there exists a constant 1 ≤ L < ∞ and sequences u1 < u2 < ··· and d1 < d2 < ··· such that Q(i, j) = 0 whenever i < ur < ur + L < j or i > dr + L > dr > j for some r. If Q satisfies some additional uniform irreducibility and aperiodicity assumptions, then for s > 0, Q has at most one positive s-harmonic function and at most one s-invariant measure µ. We use this result to show that if Q is also substochastic, then it has the strong ratio limit property, with a suitable R, some R⁻¹-harmonic function f, and some R⁻¹-invariant measure µ. Under additional conditions µ can be taken as a probability measure on {1, 2, …} and the corresponding limit exists. An example shows that this limit may fail to exist if Q does not satisfy the restrictions imposed above, even though Q may have a minimal normalized quasi-stationary distribution (i.e. a probability measure µ for which R⁻¹µ = µQ). The results have an immediate interpretation for Markov chains on {0, 1, 2, …} with 0 as an absorbing state. They give ratio limit theorems for such a chain, conditioned on not yet being absorbed at 0 by time n.
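For a finite substochastic matrix the objects in the abstract are easy to compute: left power iteration recovers the Perron root (playing the role of R⁻¹) and a probability measure µ with µQ = R⁻¹µ, i.e. a quasi-stationary distribution. The matrix below is an illustrative toy example, not taken from the paper.

```python
# A small substochastic matrix: rows sum to less than 1, the missing mass
# representing absorption (state 0 of the abstract's interpretation).
Q = [[0.4, 0.3, 0.0],
     [0.2, 0.4, 0.3],
     [0.0, 0.3, 0.4]]
n = len(Q)

# Left power iteration: mu_{k+1} proportional to mu_k Q, renormalized.
mu = [1.0 / n] * n
rho = 0.0
for _ in range(2000):
    nxt = [sum(mu[i] * Q[i][j] for i in range(n)) for j in range(n)]
    rho = sum(nxt)                    # converges to the Perron root (R^{-1})
    mu = [x / rho for x in nxt]       # converges to the quasi-stationary law
```

At convergence µ is a probability vector satisfying µQ = ρµ with 0 < ρ < 1, matching the abstract's R⁻¹µ = µQ.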



2005 ◽  
Vol 37 (04) ◽  
pp. 1075-1093 ◽  
Author(s):  
Quan-Lin Li ◽  
Yiqiang Q. Zhao

In this paper, we consider the asymptotic behavior of stationary probability vectors of Markov chains of GI/G/1 type. The generating function of the stationary probability vector is explicitly expressed in terms of the R-measure. This expression of the generating function is more convenient for asymptotic analysis than those in the literature. The RG-factorization of both the repeating row and the Wiener-Hopf equations for the boundary row is used to provide the necessary spectral properties. The stationary probability vector of a Markov chain of GI/G/1 type is shown to be light-tailed if the blocks of the repeating row and the blocks of the boundary row are light-tailed. We derive two classes of explicit expressions for the asymptotic behavior, the geometric tail and the semi-geometric tail, based on the repeating row, the boundary row, or the minimal positive solution of a crucial equation involved in the generating function, and we discuss the singularity classes of the stationary probability vector.



1982 ◽  
Vol 92 (3) ◽  
pp. 527-534 ◽  
Author(s):  
Harry Cohn

Abstract: Suppose that {Xn} is a countable non-homogeneous Markov chain and … If … converges for any i, l, m, j with …, then … whenever lim …, whereas if … converges, then … where … and …. The behaviour of transition probabilities between various groups of states is studied and criteria for recurrence and transience are given.



Author(s):  
Marcel F. Neuts

We consider a stationary discrete-time Markov chain with a finite number m of possible states, which we designate by 1, …, m. We assume that at time t = 0 the process is in an initial state i with probability pi (i = 1, …, m), where the pi are non-negative and sum to 1.



Author(s):  
J. Keilson ◽  
D. M. G. Wishart

In a previous paper (3), to which this is a sequel, a central limit theorem was presented for the homogeneous additive processes defined on a finite Markov chain, a class of processes treated extensively by Miller (4). A typical homogeneous process {R(t), X(t)} takes its values in the space … and is described by a vector distribution F(x, t) with components … and an increment matrix distribution B(x) governing the transitions. The present paper treats the motion of the process in the same space when its homogeneity is modified by the presence of a set of boundary states in x. Such bounded processes have many applications to the theory of queues, dams, and inventories. Indeed, this paper and its predecessor were motivated initially by a desire to discuss queueing systems with many servers and many service phases. We will treat both absorbing boundaries, with the associated passage time densities, and reflecting boundaries. For the latter our main objective is an asymptotic discussion of the tails of the ergodic distribution.



2006 ◽  
Vol 43 (01) ◽  
pp. 127-140 ◽  
Author(s):  
Joseph Glaz ◽  
Martin Kulldorff ◽  
Vladimir Pozdnyakov ◽  
J. Michael Steele

Methods using gambling teams and martingales are developed and applied to find formulae for the expected value and the generating function of the waiting time to observation of an element of a finite collection of patterns in a sequence generated by a two-state Markov chain of first, or higher, order.
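The expected waiting time for a single pattern can also be obtained by elementary first-step analysis, shown below for the pattern '11' in a two-state Markov chain with P(1|1) = a and P(1|0) = b (an illustrative computation; the paper's gambling-team martingale method handles general collections of patterns and higher-order chains).

```python
def expected_wait_11(a, b, pi1):
    """Expected number of steps until '11' first appears in a two-state
    Markov chain with P(1|1) = a, P(1|0) = b, and P(first symbol = 1) = pi1.

    First-step equations, with E0 / E1 the expected further steps given
    the last symbol was 0 / 1:
        E1 = 1 + (1 - a) * E0        (a further '1' completes '11')
        E0 = 1 + b * E1 + (1 - b) * E0
    which solve to E0 = (1 + 1/b) / a.
    """
    E0 = (1 + 1 / b) / a
    E1 = 1 + (1 - a) * E0
    # One step to draw the first symbol, then continue from its state.
    return 1 + pi1 * E1 + (1 - pi1) * E0
```

For a fair i.i.d. coin (a = b = 1/2, pi1 = 1/2) this recovers the classical expected waiting time of 6 for the pattern '11', and making 1s stickier (larger a) shortens the wait.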


