A Numerical Algorithm on the Computation of the Stationary Distribution of a Discrete Time Homogenous Finite Markov Chain

2012 · Vol 2012 · pp. 1-10
Author(s): Di Zhao, Hongyi Li, Donglin Su

The transition matrix, which characterizes a discrete-time homogeneous Markov chain, is a stochastic matrix: a special nonnegative matrix with each row summing to 1. In this paper, we focus on computing the stationary distribution of a transition matrix from the viewpoint of the Perron vector of a nonnegative matrix, and propose an algorithm for the stationary distribution on that basis. The algorithm can also be used to compute the Perron root and the corresponding Perron vector of any nonnegative irreducible matrix. Furthermore, a numerical example is given to demonstrate the validity of the algorithm.
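The abstract does not spell out the authors' algorithm, but the underlying idea (the stationary distribution is the left Perron vector of the transition matrix) can be sketched with plain power iteration; the matrix `P` below is a made-up example, not one from the paper:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Left Perron vector of a row-stochastic matrix via power iteration."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # uniform starting distribution
    for _ in range(max_iter):
        nxt = pi @ P                  # advance the chain one step
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return nxt / nxt.sum()            # renormalize against round-off

# hypothetical 3-state chain (aperiodic, irreducible)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = stationary_distribution(P)       # satisfies pi @ P = pi
```

For this reversible example the stationary distribution can be checked by detailed balance: pi = (1/4, 1/2, 1/4).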

1991 · Vol 28 (1) · pp. 96-103
Author(s): Daniel P. Heyman

We are given a Markov chain with states 0, 1, 2, ···. We want to get a numerical approximation of the steady-state balance equations. To do this, we truncate the chain, keeping the first n states, make the resulting matrix stochastic in some convenient way, and solve the finite system. The purpose of this paper is to provide some sufficient conditions that imply that as n tends to infinity, the stationary distributions of the truncated chains converge to the stationary distribution of the given chain. Our approach is completely probabilistic, and our conditions are given in probabilistic terms. We illustrate how to verify these conditions with five examples.


2000 · Vol 37 (03) · pp. 795-806
Author(s): Laurent Truffet

We propose in this paper two methods to compute Markovian bounds for monotone functions of a discrete time homogeneous Markov chain evolving in a totally ordered state space. The main interest of such methods is to propose algorithms to simplify the analysis of transient characteristics such as the output process of a queue, or the sojourn time in a subset of states. Construction of the bounds is based on two kinds of results: well-known results on stochastic comparison between Markov chains with the same state space; and the fact that in some cases a function of a Markov chain is again a homogeneous Markov chain but with a smaller state space. Indeed, computation of the bounds uses knowledge of the whole initial model. However, only part of this data is necessary at each step of the algorithms.


2013 · Vol 50 (04) · pp. 918-930
Author(s): Marie-Anne Guerry

When a discrete-time homogeneous Markov chain is observed at time intervals that correspond to its time unit, the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider a situation in which a Markov chain is observed on time intervals with length equal to twice the time unit of the Markov chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices. This characterization is referred to in the literature as the embedding problem for discrete time Markov chains. A probability matrix that has probability root(s) is called embeddable. In this paper, necessary and sufficient conditions for embeddability are formulated for two-state Markov chains, and the probability square roots of the transition matrix are presented in analytic form. In finding conditions for the existence of probability square roots for (k × k) transition matrices, properties of row-normalized matrices are examined. Besides the existence of probability square roots, the uniqueness of these solutions is discussed: in the case of nonuniqueness, a procedure is introduced to identify a transition matrix that takes into account the specificity of the concrete context. In the case of nonexistence of a probability root, the concept of an approximate probability root is introduced as a solution of an optimization problem related to approximate nonnegative matrix factorization.
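The analytic conditions in the paper are not reproduced here, but the question itself (does a transition matrix have a stochastic square root?) can be probed numerically by taking the principal matrix square root via eigendecomposition and checking whether it is itself a probability matrix; both test matrices below are illustrative, not from the paper:

```python
import numpy as np

def stochastic_sqrt(P, tol=1e-10):
    """Principal square root of a transition matrix via eigendecomposition;
    returned only if it is itself a valid transition matrix, else None."""
    w, V = np.linalg.eig(P)
    R = (V * np.sqrt(w.astype(complex))) @ np.linalg.inv(V)
    if np.abs(R.imag).max() < tol:
        R = R.real                    # imaginary part is numerical noise
    ok = (np.isrealobj(R)
          and (R >= -tol).all()
          and np.abs(R.sum(axis=1) - 1).max() < tol)
    return R if ok else None

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
R = stochastic_sqrt(P)                # a "half-step" transition matrix
```

For a two-state chain the second eigenvalue is 1 - p12 - p21; when it is negative (e.g. [[0.1, 0.9], [0.9, 0.1]]), the principal root is complex and no probability square root of this kind exists. Note this only inspects the principal root, not every square root as the paper does.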


2019 · Vol 44 (3) · pp. 282-308
Author(s): Brian G. Vegetabile, Stephanie A. Stout-Oswald, Elysia Poggi Davis, Tallie Z. Baram, Hal S. Stern

Predictability of behavior is an important characteristic in many fields including biology, medicine, marketing, and education. When a sequence of actions performed by an individual can be modeled as a stationary time-homogeneous Markov chain, the predictability of the individual's behavior can be quantified by the entropy rate of the process. This article compares three estimators of the entropy rate of finite Markov processes. The first two methods directly estimate the entropy rate through estimates of the transition matrix and stationary distribution of the process. The third method is related to the sliding-window Lempel–Ziv compression algorithm. The methods are compared via a simulation study and in the context of a study of interactions between mothers and their children.
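The abstract does not detail the two direct estimators; a generic plug-in version (estimate the transition matrix by counting, then evaluate the entropy-rate formula H = -Σᵢ πᵢ Σⱼ pᵢⱼ log pᵢⱼ) can be sketched as follows, on simulated data rather than the study's:

```python
import numpy as np

def entropy_rate(seq, k):
    """Plug-in entropy rate (in nats) of a k-state sequence: count
    transitions, row-normalize, then H = -sum_i pi_i sum_j p_ij log p_ij."""
    C = np.zeros((k, k))
    for a, b in zip(seq, seq[1:]):
        C[a, b] += 1
    P = C / C.sum(axis=1, keepdims=True)
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, V = np.linalg.eig(P.T)
    pi = V[:, np.argmax(w.real)].real
    pi = pi / pi.sum()
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

rng = np.random.default_rng(0)
seq = list(rng.integers(0, 2, size=5000))   # i.i.d. fair coin flips
H = entropy_rate(seq, 2)                    # close to log 2 ~ 0.693 nats
```

An i.i.d. fair binary process is maximally unpredictable, so the estimate should sit near log 2; a more predictable sequence yields a lower rate.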


1999 · Vol 36 (3) · pp. 644-653
Author(s): Philippe Carette

An open hierarchical (manpower) system divided into a totally ordered set of k grades is discussed. The transitions occur only from one grade to the next or to an additional (k+1)th grade representing the external environment of the system. The model used to describe the dynamics of the system is a continuous-time homogeneous Markov chain with k+1 states and infinitesimal generator R = (rij) satisfying rij = 0 if i > j or i + 1 < j ≤ k (i, j = 1,…,k+1), the transition matrix P between times 0 and 1 being P = expR. In this paper, two-wave panel data about the hierarchical system are considered, along with the resulting fact that, in general, the maximum likelihood estimate of the transition matrix cannot be written as the exponential of an infinitesimal generator R of the form described above. The purpose of this paper is to investigate when this can be ascribed to the effect of sampling variability.


Author(s): M. Vidyasagar

This chapter deals with nonnegative matrices, which are relevant in the study of Markov processes because the state transition matrix of such a process is a special kind of nonnegative matrix, known as a stochastic matrix. However, it turns out that practically all of the useful properties of a stochastic matrix also hold for the more general class of nonnegative matrices. Hence it is desirable to present the theory in the more general setting, and then specialize to Markov processes. The chapter first considers the canonical form for nonnegative matrices, including irreducible matrices and periodic irreducible matrices, before discussing the Perron–Frobenius theorem for primitive matrices and for irreducible matrices.
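A classical criterion behind the canonical-form discussion is that a nonnegative n × n matrix A is irreducible if and only if (I + A)^(n-1) is elementwise positive, i.e. every state communicates with every other. A small sketch of that test (the example matrices are illustrative, not from the chapter):

```python
import numpy as np

def is_irreducible(A):
    """Irreducibility test for a nonnegative square matrix A:
    A is irreducible iff (I + A)^(n-1) has all entries positive."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)
    return bool((M > 0).all())

cycle = np.array([[0, 1, 0],       # 0 -> 1 -> 2 -> 0: irreducible (and periodic)
                  [0, 0, 1],
                  [1, 0, 0]])
upper = np.triu(np.ones((3, 3)))   # no path back down: reducible
```

The cycle matrix also shows why primitivity is a strictly stronger notion than irreducibility: it is irreducible with period 3, so the Perron–Frobenius conclusions for primitive matrices do not all apply to it.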


2012 · Vol 479-481 · pp. 971-976
Author(s): Chun Hui Chen, Ya Fei Nie

Markov prediction is an important method for predicting the availability of repairable systems. Estimating the transition matrix and the profit matrix (if one exists) plays a fundamental role in Markov prediction. This article briefly introduces the homogeneous Markov chain prediction method and studies the estimation of, and algorithms for computing, the transition matrix and the profit matrix associated with transitions, based on historical data about system state and profit. Finally, following the algorithm, we write customized functions in R and describe the calling method in detail. This is groundwork for using data about system failure and maintenance to estimate the system failure rate and steady-state availability, enriching the theory and technology of decision models for reliability-centered maintenance and system maintainability modeling.
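The article's functions are written in R and are not reproduced here; the core estimation step it describes (the maximum-likelihood transition matrix from an observed state history, p̂ᵢⱼ = nᵢⱼ / nᵢ) can be sketched like this, with a made-up three-state up/degraded/down log:

```python
import numpy as np

def estimate_transition_matrix(states, k):
    """MLE of a k-state transition matrix from an observed state history:
    count transitions n_ij, then row-normalize; rows never visited get NaN."""
    C = np.zeros((k, k))
    for a, b in zip(states, states[1:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.full_like(C, np.nan), where=rows > 0)

# hypothetical state log: 0 = up, 1 = degraded, 2 = down
history = [0, 0, 1, 2, 1, 0, 0, 1, 2, 2, 1, 0]
P_hat = estimate_transition_matrix(history, 3)
```

A profit matrix, where available, would be estimated analogously by averaging the observed profit over each transition pair.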


2014 · Vol 14 (01) · pp. 1550003
Author(s): Liu Yang, Kai-Xuan Zheng, Neng-Gang Xie, Ye Ye, Lu Wang

For a multi-agent spatial Parrondo model composed of games A and B, we use discrete time Markov chains to derive the probability transition matrix. We then deduce the stationary distributions for games A and B played individually and for the randomized combination of game A + B. We notice that, under a specific set of parameters, game B has two absorbing states instead of a fixed stationary distribution. However, the randomized game A + B can escape the absorbing states of game B and has a fixed stationary distribution because of the "agitating" role of game A. Moreover, starting at different initial states, we deduce the probabilities of absorption at the two absorbing barriers.


Author(s): Jeffrey J. Hunter

Questions are posed regarding the influence that the column sums of the transition probabilities of a stochastic matrix (with row sums all one) have on the stationary distribution, the mean first passage times, and the Kemeny constant of the associated irreducible discrete time Markov chain. Some new relationships, including some inequalities, and partial answers to the questions are given using a special generalized matrix inverse that has not previously been considered in the literature on Markov chains.
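The particular generalized inverse of the paper is not reproduced here, but the quantities it studies are standard; for instance, the Kemeny constant can be sketched via the fundamental matrix Z = (I - P + 1π′)⁻¹, using an illustrative two-state chain:

```python
import numpy as np

def kemeny_constant(P):
    """Kemeny's constant K = sum_j pi_j m_ij, which is the same for every
    starting state i. Computed as tr(Z) with Z = (I - P + 1 pi')^{-1};
    this convention counts the diagonal term m_jj = 1/pi_j, so authors
    who exclude it would report tr(Z) - 1 instead."""
    n = P.shape[0]
    w, V = np.linalg.eig(P.T)
    pi = V[:, np.argmax(w.real)].real   # left eigenvector for eigenvalue 1
    pi = pi / pi.sum()
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return float(np.trace(Z))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
K = kemeny_constant(P)   # second eigenvalue 0.7, so K = 1 + 1/(1 - 0.7)
```

Equivalently, under this convention K equals 1 plus the sum of 1/(1 - λ) over the non-unit eigenvalues λ of P, which gives 13/3 for the example above.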

