The Number of Visits Until Absorption to Subsets of the State Space by a Discrete-Parameter Markov Chain: the Multivariate Case

Author(s): Attila Csenki
1976 ◽ Vol 8 (04) ◽ pp. 737-771 ◽ Author(s): R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).
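
Where the state space is countable, the reduction to Foster's criterion can be made concrete. The sketch below is not the paper's general-state-space machinery, only a numerical illustration of Foster's positive-recurrence test: find a function V and a finite set F with E[V(X_{n+1}) | X_n = i] ≤ V(i) − ε for all i outside F. The function name, the choice V(i) = i, and the truncation of the walk to 100 states are all illustrative assumptions.

```python
import numpy as np

def foster_drift_margin(P, V, finite_set):
    # Worst one-step drift E[V(X_{n+1}) | X_n = i] - V(i) over states outside
    # the finite set F; a strictly negative value on this truncated space is
    # the numerical shadow of Foster's positive-recurrence criterion.
    drift = P @ V - V
    outside = [i for i in range(len(V)) if i not in finite_set]
    return drift[outside].max()

# Example: reflected random walk on {0, ..., 99} with downward drift,
# truncated purely so the check is finite; V(i) = i, F = {0}.
n = 100
P = np.zeros((n, n))
P[0, 0], P[0, 1] = 0.6, 0.4
for i in range(1, n - 1):
    P[i, i - 1], P[i, i + 1] = 0.6, 0.4
P[n - 1, n - 2] = 1.0
V = np.arange(n, dtype=float)
print(foster_drift_margin(P, V, {0}))  # -0.2 < 0: drift condition holds
```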


1990 ◽ Vol 4 (1) ◽ pp. 89-116 ◽ Author(s): Ushio Sumita, Maria Rieders

A novel algorithm is developed which computes the ergodic probability vector for large Markov chains. Decomposing the state space into lumps, the algorithm generates a replacement process on each lump, where any exit from a lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for a replacement process on one lump will be proportional to the ergodic probability vector of the original Markov chain restricted to that lump. Inverse matrices computed in the algorithm are of size (M – 1), where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
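
The replacement-process construction itself is involved; as rough orientation, here is a minimal sketch of the kind of aggregation–disaggregation iteration, in the spirit of Takahashi [29] to which the paper relates its algorithm, alternating a lump-level solve of size M with a within-lump correction. The eigen-solve and the single power-step smoothing are our simplifications, not the paper's replacement distributions.

```python
import numpy as np

def aggregation_disaggregation(P, lumps, n_iter=200):
    # P: (n, n) row-stochastic matrix; lumps: list of index arrays
    # partitioning {0, ..., n-1}. Returns an estimate of the ergodic vector.
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)                     # initial guess
    for _ in range(n_iter):
        # Aggregation: M x M lump-level chain induced by the current estimate.
        M = len(lumps)
        A = np.zeros((M, M))
        for I, idx_I in enumerate(lumps):
            w = pi[idx_I] / pi[idx_I].sum()      # conditional law within lump I
            for J, idx_J in enumerate(lumps):
                A[I, J] = w @ P[np.ix_(idx_I, idx_J)].sum(axis=1)
        # Stationary vector of the small aggregated chain (Perron eigenvector).
        evals, evecs = np.linalg.eig(A.T)
        xi = np.real(evecs[:, np.argmax(np.real(evals))])
        xi = xi / xi.sum()
        # Disaggregation: one power step, then rescale each lump to mass xi[I].
        pi = pi @ P
        for I, idx_I in enumerate(lumps):
            pi[idx_I] = xi[I] * pi[idx_I] / pi[idx_I].sum()
    return pi
```

On a well-behaved irreducible chain this alternation typically converges to the stationary vector; the rank reduction claimed in the abstract comes from only ever solving systems of size M − 1 at the lump level.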


1984 ◽ Vol 21 (03) ◽ pp. 567-574 ◽ Author(s): Atef M. Abdel-Moneim, Frederick W. Leysieffer

Conditions are given under which a function of a finite, discrete-time Markov chain X(t) is again Markov when X(t) is not irreducible. These conditions are given in terms of an interrelationship between two partitions of the state space of X(t): the partition induced by the minimal essential classes of X(t), and the partition with respect to which lumping is to be considered.
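
For orientation, the classical baseline such conditions refine is Kemeny and Snell's strong-lumpability test: a partition yields a Markov lumped chain for every initial law exactly when the aggregated row sums are constant within each lump. A minimal check (function name and tolerance are ours):

```python
import numpy as np

def is_strongly_lumpable(P, lumps, tol=1e-12):
    # Kemeny-Snell test: for every pair of lumps (I, J), the probabilities
    # P(i, J) = sum_{j in J} P[i, j] must agree for all states i in I.
    for idx_I in lumps:
        for idx_J in lumps:
            if np.ptp(P[np.ix_(idx_I, idx_J)].sum(axis=1)) > tol:
                return False
    return True
```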


1973 ◽ Vol 73 (1) ◽ pp. 119-138 ◽ Author(s): Gerald S. Goodman, S. Johansen

1. Summary. We shall consider a non-stationary Markov chain on a countable state space E. The transition probabilities {P(s, t), 0 ≤ s ≤ t < t₀ ≤ ∞} are assumed to be continuous in (s, t), uniformly in the state i ∈ E.
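
For context, these transition matrices are the ones satisfying the Chapman–Kolmogorov equations, a standard identity recalled here for orientation rather than quoted from the paper:

```latex
P(s, t) = P(s, u)\, P(u, t), \qquad 0 \le s \le u \le t < t_0, \qquad P(s, s) = I .
```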


2009 ◽ Vol 46 (03) ◽ pp. 812-826 ◽ Author(s): Saul Jacka

Motivated by Feller's coin-tossing problem, we consider the problem of conditioning an irreducible Markov chain never to wait too long at 0. Denoting by τ the first time that the chain, X, waits for at least one unit of time at the origin, we consider conditioning the chain on the event (τ > T). We show that there is a weak limit as T → ∞ in the cases where either the state space is finite or X is transient. We give sufficient conditions for the existence of a weak limit in other cases, and show that we have vague convergence to a defective limit if the time to hit zero has a lighter tail than τ and τ is subexponential.
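
For a finite chain the conditioning can be illustrated by brute force. The sketch below assumes a discrete-time analogue in which "waiting one unit of time at the origin" means two consecutive steps at state 0, and estimates the law of X_T given τ > T by rejection; the paper's interest is the analytic weak limit as T → ∞, which this simulation only gestures at. The start at the origin is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def law_of_X_T_given_tau_gt_T(P, T, n_paths=100_000):
    # tau = first time the chain spends two consecutive steps at state 0
    # (our discrete-time reading of "waiting one unit at the origin").
    n = P.shape[0]
    counts = np.zeros(n)
    for _ in range(n_paths):
        x, survived = 0, True              # start at the origin (assumed)
        for _ in range(T):
            x_next = rng.choice(n, p=P[x])
            if x == 0 and x_next == 0:     # waited a full unit at 0: tau <= T
                survived = False
                break
            x = x_next
        if survived:
            counts[x] += 1
    return counts / counts.sum()           # estimate of P(X_T = . | tau > T)
```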


1989 ◽ Vol 26 (03) ◽ pp. 446-457 ◽ Author(s): Gerardo Rubino

We analyse the conditions under which the aggregated process constructed from a homogeneous Markov chain over a given partition of the state space is again a homogeneous Markov chain. Past work on the subject is reviewed, and new properties are obtained.
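
Concretely, the aggregated process has one-step statistics obtained by averaging P over each lump with the current within-lump law; whether these kernels stay the same at every step (so the aggregated process is a homogeneous Markov chain) is what such conditions decide. A sketch of the one-step aggregated kernel under an initial distribution alpha (names are ours):

```python
import numpy as np

def one_step_aggregated_kernel(P, alpha, lumps):
    # Entry [I, J] = P(lump J at time 1 | lump I at time 0) when X_0 ~ alpha.
    M = len(lumps)
    A = np.zeros((M, M))
    for I, idx_I in enumerate(lumps):
        w = alpha[idx_I] / alpha[idx_I].sum()   # law within lump I under alpha
        for J, idx_J in enumerate(lumps):
            A[I, J] = w @ P[np.ix_(idx_I, idx_J)].sum(axis=1)
    return A
```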


1969 ◽ Vol 1 (02) ◽ pp. 123-187 ◽ Author(s): Erhan Çinlar

Consider a stochastic process X(t) (t ≥ 0) taking values in a countable state space, say, {1, 2, 3, …}. To be picturesque, we think of X(t) as the state which a particle is in at epoch t. Suppose the particle moves from state to state in such a way that the successive states visited form a Markov chain, and that the particle stays in a given state a random amount of time depending on the state it is in as well as on the state to be visited next. Below is a possible realization of such a process.
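
Such a realization is easy to generate. The sketch below draws the embedded jump chain from a matrix P and a sojourn time whose law depends on both the current state and the next one; the exponential parameterization rate[i][j] is purely an assumption for brevity (the process described allows arbitrary sojourn distributions).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_semi_markov(P, rate, x0=0, horizon=10.0):
    # P: row-stochastic numpy array for the embedded jump chain.
    # Sojourn in state i before moving to j: Exponential(rate[i][j]) here,
    # standing in for the general (i, j)-dependent holding-time law.
    t, x = 0.0, x0
    path = [(0.0, x0)]
    while t < horizon:
        x_next = rng.choice(P.shape[0], p=P[x])
        t += rng.exponential(1.0 / rate[x][x_next])
        x = x_next
        path.append((t, x))
    return path                          # list of (jump epoch, new state)
```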

