P Systems Computing the Period of Irreducible Markov Chains

Author(s):  
Mónica Cardona-Roca ◽  
M. Ángels Colomer-Cugat ◽  
Agustín Riscos-Núñez ◽  
Miquel Rius-Font

It is well known that any irreducible and aperiodic Markov chain has exactly one stationary distribution, and for any arbitrary initial distribution, the sequence of distributions at time n converges to the stationary distribution; that is, the Markov chain approaches equilibrium as n → ∞. In this paper, a characterization of aperiodicity in terms of the existence of a certain state is given. At the same time, a P system with external output is associated with any irreducible Markov chain. The designed system decides the aperiodicity of that Markov chain and spends a polynomial amount of resources with respect to the size of the input. A comparative analysis with respect to another known solution is described.
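
The aperiodicity question addressed here can be illustrated, outside the P-system framework, by the classical graph-theoretic definition: the period of a state is the gcd of the lengths of all return paths to it, and an irreducible chain is aperiodic iff that gcd is 1. A minimal numerical sketch, assuming NumPy; the matrices `cycle` and `lazy` are invented examples, not from the paper:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, state=0, max_len=60):
    """Period of `state`: gcd of all n <= max_len with P^n[state, state] > 0.

    Classical definition; for an irreducible chain every state has the
    same period, and the chain is aperiodic iff the period is 1.
    """
    n = len(P)
    Pk = np.eye(n)
    lengths = []
    for k in range(1, max_len + 1):
        Pk = Pk @ P
        if Pk[state, state] > 0:
            lengths.append(k)
    return reduce(gcd, lengths)

# Invented examples: a deterministic 3-cycle has period 3; mixing in a
# self-loop (a "lazy" version of the same chain) makes it aperiodic.
cycle = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
lazy = 0.5 * cycle + 0.5 * np.eye(3)
```

Note that this brute-force gcd computation is exponential in nothing but transparent; the point of the paper's P system is to settle the same question within polynomial resources.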

2020 ◽  
Vol 52 (4) ◽  
pp. 1249-1283
Author(s):  
Masatoshi Kimura ◽  
Tetsuya Takine

This paper considers ergodic, continuous-time Markov chains $\{X(t)\}_{t \in (-\infty,\infty)}$ on $\mathbb{Z}^+=\{0,1,\ldots\}$. For an arbitrarily fixed $N \in \mathbb{Z}^+$, we study the conditional stationary distribution $\boldsymbol{\pi}(N)$ given that the Markov chain is in $\{0,1,\ldots,N\}$. We first characterize $\boldsymbol{\pi}(N)$ via systems of linear inequalities and identify simplices that contain $\boldsymbol{\pi}(N)$, by examining the $(N+1) \times (N+1)$ northwest corner block of the infinitesimal generator $\textbf{\textit{Q}}$ and the subset of the first $N+1$ states whose members are directly reachable from at least one state in $\{N+1,N+2,\ldots\}$. These results are closely related to the augmented truncation approximation (ATA), and we provide some practical implications for the ATA. Next we consider an extension of the above results, using the $(K+1) \times (K+1)$ ($K > N$) northwest corner block of $\textbf{\textit{Q}}$ and the subset of the first $K+1$ states whose members are directly reachable from at least one state in $\{K+1,K+2,\ldots\}$. Furthermore, we introduce new state transition structures called $(K, N)$-skip-free sets, using which we obtain the minimum convex polytope that contains $\boldsymbol{\pi}(N)$.
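
By definition, the conditional stationary distribution given a set of states is the stationary distribution restricted to that set and renormalized. A sketch on an invented finite birth-death CTMC (a finite stand-in for the infinite chains studied in the paper), assuming NumPy; the rates are illustrative:

```python
import numpy as np

# Invented finite birth-death CTMC on {0,...,5}: birth rate 1, death rate 2.
n, birth, death = 6, 1.0, 2.0
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = birth
    Q[i + 1, i] = death
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Conditional stationary distribution given the chain is in {0,...,N}:
# restrict pi to those states and renormalize.
N = 2
pi_N = pi[: N + 1] / pi[: N + 1].sum()
```

For an infinite chain this direct route is unavailable, which is what motivates the paper's bounds on $\boldsymbol{\pi}(N)$ computed from finite northwest corner blocks of $\textbf{\textit{Q}}$.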


1982 ◽  
Vol 19 (3) ◽  
pp. 692-694 ◽  
Author(s):  
Mark Scott ◽  
Barry C. Arnold ◽  
Dean L. Isaacson

Characterizations of strong ergodicity for Markov chains using mean visit times have been found by several authors (Huang and Isaacson (1977), Isaacson and Arnold (1978)). In this paper a characterization of uniform strong ergodicity for a continuous-time non-homogeneous Markov chain is given. This extends the characterization, using mean visit times, that was given by Isaacson and Arnold.


1968 ◽  
Vol 5 (2) ◽  
pp. 401-413 ◽  
Author(s):  
Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and fundamental matrix with respect to the transition probabilities. Semi-group properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple subchain case if and only if the change in the transition probabilities does not alter the number of, or intermix the various subchains. The formalism is presented when this condition is satisfied.
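
The single-irreducible-set case admits a compact check: differentiating $\pi^T(I-P)=0$ subject to $\pi^T e = 1$ gives $\dot{\pi}^T = \pi^T \dot{P}\, (I - P + e\pi^T)^{-1}$. A numerical sketch under invented data (the two-state family `P_theta` is illustrative, not from the paper), assuming NumPy:

```python
import numpy as np

def stationary(P):
    """Stationary vector of an irreducible stochastic matrix P."""
    n = len(P)
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def P_theta(t):
    # Invented one-parameter family of two-state chains.
    return np.array([[1 - t, t], [0.3, 0.7]])

t = 0.2
P = P_theta(t)
pi = stationary(P)

# Perturbation formula: dpi^T = pi^T (dP/dt) (I - P + e pi^T)^{-1}.
Z = np.linalg.inv(np.eye(2) - P + np.outer(np.ones(2), pi))
dP = np.array([[-1.0, 1.0], [0.0, 0.0]])  # dP/dt for this family
dpi = pi @ dP @ Z

# Finite-difference check of the derivative.
h = 1e-6
dpi_fd = (stationary(P_theta(t + h)) - stationary(P_theta(t - h))) / (2 * h)
```

Since each row of $\dot{P}$ sums to zero and $Z e = e$, the components of `dpi` automatically sum to zero, as a derivative of a probability vector must.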


1994 ◽  
Vol 26 (3) ◽  
pp. 756-774 ◽  
Author(s):  
Dimitris N. Politis

A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.


1983 ◽  
Vol 20 (01) ◽  
pp. 191-196 ◽  
Author(s):  
R. L. Tweedie

We give conditions under which the stationary distribution $\pi$ of a Markov chain admits moments of the general form $\int f(x)\,\pi(dx)$, where $f$ is a general function; specific examples include $f(x) = x^r$ and $f(x) = e^{sx}$. In general the time-dependent moments of the chain then converge to the stationary moments. We show that in special cases this convergence of moments occurs at a geometric rate. The results are applied to random walk on $[0, \infty)$.
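
A concrete instance: the reflected random walk on the nonnegative integers with downward drift has a geometric stationary distribution, so both example moments are finite (the exponential one provided $\rho e^s < 1$). A sketch assuming NumPy; the truncation level `K` and parameters are invented for illustration:

```python
import numpy as np

# Reflected random walk on {0,...,K} with downward drift (p < 1/2); K is
# an illustrative truncation of the walk on [0, inf) in the abstract.
K, p = 200, 0.3
q = 1 - p
P = np.zeros((K + 1, K + 1))
P[0, 0], P[0, 1] = q, p
for i in range(1, K):
    P[i, i - 1], P[i, i + 1] = q, p
P[K, K - 1], P[K, K] = q, p

# Detailed balance gives a geometric stationary distribution.
rho = p / q
pi = (1 - rho) * rho ** np.arange(K + 1)
pi /= pi.sum()

# Stationary moments of the abstract's two example functions.
x = np.arange(K + 1)
s = 0.1                                   # needs rho * e^s < 1
moment_power = (x**2 * pi).sum()          # f(x) = x^r with r = 2
moment_exp = (np.exp(s * x) * pi).sum()   # f(x) = e^{sx}
```

With $\rho = 3/7$ the truncation error is astronomically small, so both sums agree with the closed-form geometric moments to machine precision.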


1977 ◽  
Vol 14 (04) ◽  
pp. 740-747 ◽  
Author(s):  
Ester Samuel-Cahn ◽  
Shmuel Zamir

We consider an infinite Markov chain with states E 0, E 1, …, such that E 1, E 2, … is not closed, and for i ≧ 1 movement to the right is limited by one step. Simple algebraic characterizations are given for persistency of all states, and, if E 0 is absorbing, simple expressions are given for the probabilities of staying forever among the transient states. Examples are furnished, and simple necessary conditions and sufficient conditions for the above characterizations are given.


2019 ◽  
Vol 29 (08) ◽  
pp. 1431-1449
Author(s):  
John Rhodes ◽  
Anne Schilling

We show that the stationary distribution of a finite Markov chain can be expressed as the sum of certain normal distributions. These normal distributions are associated to planar graphs consisting of a straight line with attached loops. The loops touch only at one vertex either of the straight line or of another attached loop. Our analysis is based on our previous work, which derives the stationary distribution of a finite Markov chain using semaphore codes on the Karnofsky–Rhodes and McCammond expansion of the right Cayley graph of the finite semigroup underlying the Markov chain.


1992 ◽  
Vol 29 (01) ◽  
pp. 21-36 ◽  
Author(s):  
Masaaki Kijima

Let $\{X_n,\ n = 0, 1, 2, \ldots\}$ be a transient Markov chain which, when restricted to the state space $\mathcal{N}^+ = \{1, 2, \ldots\}$, is governed by an irreducible, aperiodic and strictly substochastic matrix $\mathbf{P} = (p_{ij})$, and let $p_{ij}(n) = P[X_n = j,\ X_k \in \mathcal{N}^+ \text{ for } k = 0, 1, \ldots, n \mid X_0 = i]$, $i, j \in \mathcal{N}^+$. The prime concern of this paper is conditions for the existence of the limits, $q_{ij}$ say, of $q_{ij}(n) = p_{ij}(n)\big/\sum_{k \in \mathcal{N}^+} p_{ik}(n)$ as $n \to \infty$. If the limits exist, the distribution $(q_{ij})$ is called the quasi-stationary distribution of $\{X_n\}$ and has considerable practical importance. It will be shown that, under some conditions, if a non-negative non-trivial vector $\mathbf{x} = (x_i)$ satisfying $r\mathbf{x}^{\mathrm{T}} = \mathbf{x}^{\mathrm{T}}\mathbf{P}$ and $\sum_{i \in \mathcal{N}^+} x_i < \infty$ exists, where $r$ is the convergence norm of $\mathbf{P}$, i.e. $r = R^{-1}$ with $R$ the convergence radius of $\mathbf{P}$, and $\mathrm{T}$ denotes transpose, then it is unique, positive elementwise, and $q_{ij}(n)$ necessarily converge to $x_j$ as $n \to \infty$. Unlike existing results in the literature, our results can be applied even to the $R$-null and $R$-transient cases. Finally, an application to a left-continuous random walk whose governing substochastic matrix is $R$-transient is discussed to demonstrate the usefulness of our results.
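
In the finite (hence $R$-positive) analogue of this setting, the vector $\mathbf{x}$ is simply the left Perron eigenvector of the substochastic matrix, $r$ its spectral radius, and the conditional laws given survival converge to $\mathbf{x}$. A sketch with an invented $3 \times 3$ substochastic matrix, assuming NumPy; the infinite, $R$-transient cases that are the paper's actual contribution have no such direct computation:

```python
import numpy as np

# Invented strictly substochastic matrix on three transient states
# (probability mass is lost from state 1, whose row sums to 0.7).
P = np.array([[0.2, 0.5, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.5, 0.5]])

# Finite analogue of the condition r x^T = x^T P: x is the left Perron
# eigenvector and r the decay parameter (convergence norm).
evals, evecs = np.linalg.eig(P.T)
k = np.argmax(evals.real)     # Perron root: real, equals spectral radius
r = evals[k].real
x = np.abs(evecs[:, k].real)
x /= x.sum()                  # quasi-stationary distribution

# The conditional laws P[X_n = j | still in {1,2,3}] converge to x.
cond = np.ones(3) / 3
for _ in range(200):
    cond = cond @ P
    cond /= cond.sum()
```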


2007 ◽  
Vol 24 (06) ◽  
pp. 813-829 ◽  
Author(s):  
JEFFREY J. HUNTER

The derivation of mean first passage times in Markov chains involves the solution of a family of linear equations. By exploring the solution of a related set of equations, using suitable generalized inverses of the Markovian kernel $I - P$, where $P$ is the transition matrix of a finite irreducible Markov chain, we are able to derive elegant new results for finding the mean first passage times. As a by-product we derive the stationary distribution of the Markov chain without the necessity of any further computational procedures. Standard techniques in the literature, using for example Kemeny and Snell's fundamental matrix $Z$, require the initial derivation of the stationary distribution followed by the computation of $Z$, the inverse of $I - P + \mathbf{e}\boldsymbol{\pi}^T$, where $\mathbf{e}^T = (1, 1, \ldots, 1)$ and $\boldsymbol{\pi}^T$ is the stationary probability vector. The procedures of this paper involve only the derivation of the inverse of a matrix of simple structure, based upon known characteristics of the Markov chain together with simple elementary vectors. No prior computations are required. Various possible families of matrices are explored leading to different related procedures.
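
The "standard technique" the abstract contrasts with can be spelled out: compute $\boldsymbol{\pi}$ first, form $Z = (I - P + \mathbf{e}\boldsymbol{\pi}^T)^{-1}$, then read off $m_{ij} = (z_{jj} - z_{ij})/\pi_j$ for $i \neq j$ and $m_{jj} = 1/\pi_j$. A sketch on an invented 3-state chain, assuming NumPy (the paper's own generalized-inverse route, which avoids computing $\boldsymbol{\pi}$ first, is not reproduced here):

```python
import numpy as np

# Invented 3-state irreducible chain.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
n = len(P)

# Step 1: stationary distribution (required up front by this route).
A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Step 2: Kemeny-Snell fundamental matrix Z = (I - P + e pi^T)^{-1}.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

# Step 3: mean first passage times m_ij = (z_jj - z_ij)/pi_j;
# the diagonal holds the mean recurrence times m_jj = 1/pi_j.
M = (np.diag(Z)[None, :] - Z) / pi[None, :]
np.fill_diagonal(M, 1.0 / pi)
```

The resulting matrix satisfies the first-step equations $m_{ij} = 1 + \sum_{k \neq j} p_{ik} m_{kj}$, which is a convenient correctness check for any such computation.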

