Addenda to processes defined on a finite Markov chain

Author(s):  
Julian Keilson ◽  
David M. G. Wishart

Introduction. Two previous papers (1, 2) have dealt with additive processes defined on finite Markov chains. Such a process in discrete time may be treated as a bivariate Markov process {R(k), X(k)}. The process R(k) is an irreducible Markov chain on states r = 1, 2, …, R governed by a stochastic transition matrix B0 with components brs. The marginal process X(k) ‘defined’ on the chain R(k) is a sum of random increments ξ(i) dependent on the chain, i.e. if the ith transition takes the chain from state r to state s, ξ(i) is chosen from a distribution function Drs(x) indexed by r and s. The distribution of the bivariate process may be represented by a vector F(x, k) with components Fs(x, k) = P{X(k) ≤ x, R(k) = s}. These are generated recursively by the relation Fs(x, k + 1) = Σr ∫ Fr(x − y, k) dBrs(y), where the increment matrix distribution B(x) has components brsDrs(x). We denote the nth moment of B(x) by Bn = ∫ xn dB(x), so that B0 = B(∞).
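As a concrete illustration of the bivariate process described above, the following minimal sketch simulates {R(k), X(k)} for a hypothetical three-state chain, taking each Drs(x) to be a normal distribution with a state-pair-dependent mean. The matrices B0 and mu below, the unit variances, and the simulate function are invented example values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain: stochastic transition matrix B0 = (b_rs).
B0 = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.2, 0.2, 0.6]])

# Assumed increment distributions D_rs: normal with mean mu[r, s], unit variance.
mu = np.array([[ 1.0, -0.5, 0.0],
               [ 0.5,  0.0, 1.5],
               [-1.0,  0.5, 0.2]])

def simulate(k_steps, r0=0):
    """Simulate {R(k), X(k)}: at each step draw the next chain state s from
    row r of B0, then add an increment drawn from D_rs."""
    r, x = r0, 0.0
    path = [(r, x)]
    for _ in range(k_steps):
        s = rng.choice(3, p=B0[r])
        x += rng.normal(mu[r, s], 1.0)   # xi(i) drawn from D_rs
        r = s
        path.append((r, x))
    return path

print(simulate(5))
```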

1973 ◽  
Vol 5 (3) ◽  
pp. 541-553 ◽  
Author(s):  
John Bather

This paper is concerned with the general problem of finding an optimal transition matrix for a finite Markov chain, where the probabilities for each transition must be chosen from a given convex family of distributions. The immediate cost is determined by this choice, but it is required to minimise the average expected cost in the long run. The problem is investigated by classifying the states according to the accessibility relations between them. If an optimal policy exists, it can be found by considering the convex subsystems associated with the states at different levels in the classification scheme.
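The sketch below is not Bather's method; it only illustrates the quantity being minimised, evaluating the long-run average expected cost of a candidate transition matrix through its stationary distribution. The two-state example, the two matrices whose mixtures stand in for the convex family, and the simplification of a fixed per-state cost are all assumptions for illustration.

```python
import numpy as np

def average_cost(P, c):
    """Long-run average expected cost of an irreducible transition matrix P with
    immediate cost c[i] in state i: g = sum_i pi_i * c_i, where pi solves
    pi P = pi, sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi @ c, pi

# Illustrative numbers only: any mixture alpha*P1 + (1 - alpha)*P2 is admissible,
# so the family of candidate transition matrices is convex.
P1 = np.array([[0.9, 0.1], [0.4, 0.6]])
P2 = np.array([[0.5, 0.5], [0.2, 0.8]])
c = np.array([2.0, 1.0])
for alpha in (0.0, 0.5, 1.0):
    g, _ = average_cost(alpha * P1 + (1 - alpha) * P2, c)
    print(alpha, g)
```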


Author(s):  
J. Keilson ◽  
D. M. G. Wishart

We shall be concerned in this paper with a class of temporally homogeneous Markov processes, {R(t), X(t)}, in discrete or continuous time taking values in the space R1 × {1, 2, …, R}. The marginal process {X(t)} in discrete time is, in the terminology of Miller (10), a sequence of random variables defined on a finite Markov chain. Probability measures associated with these processes are vectors of the form F(x, t) = (F1(x, t), …, FR(x, t)), where Fr(x, t) = P{X(t) ≤ x, R(t) = r}. We shall call a vector of this form a vector distribution.
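A minimal sketch of the vector-distribution idea: for an invented two-state chain whose increments are ±1 depending on the state entered, the components Fr(x, t) = P{X(t) ≤ x, R(t) = r} at a fixed time are estimated by Monte Carlo. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-state chain; the increment is +1 or -1 according to the state entered.
B0 = np.array([[0.7, 0.3],
               [0.4, 0.6]])
step = np.array([1.0, -1.0])

def F_hat(k, x, n_paths=20000):
    """Monte Carlo estimate of the vector distribution at time k:
    F_r(x, k) = P{X(k) <= x, R(k) = r} for r = 0, 1."""
    counts = np.zeros(2)
    for _ in range(n_paths):
        r, X = 0, 0.0
        for _ in range(k):
            r = rng.choice(2, p=B0[r])
            X += step[r]
        if X <= x:
            counts[r] += 1
    return counts / n_paths

F = F_hat(k=10, x=2.0)
print(F, F.sum())   # components, and their sum P{X(10) <= 2}
```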


Author(s):  
H. D. Miller

Summary. Let {kr} (r = 0, 1, 2, …; 1 ≤ kr ≤ h) be a positively regular, finite Markov chain with transition matrix P = (pjk). For each possible transition j → k let gjk(x) (−∞ ≤ x ≤ ∞) be a given distribution function. The sequence of random variables {ξr} is defined by letting ξr have the distribution gjk(x) if the rth transition takes the chain from state j to state k. It is supposed that each distribution gjk(x) admits a two-sided Laplace-Stieltjes transform mjk(t) in a real t-interval surrounding t = 0. Let P(t) denote the matrix {pjk mjk(t)}. It is shown, using probability arguments, that I − sP(t) admits a Wiener-Hopf type of factorization in two ways for suitable values of s, where the plus-factors are non-singular, bounded and have regular elements in a right half of the complex t-plane and the minus-factors have similar properties in an overlapping left half-plane (Theorem 1).
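The factorization itself is not reproduced here. The following sketch only builds the matrix P(t) for an assumed two-state chain with Gaussian increment distributions, whose two-sided transforms exist for every real t, and evaluates its Perron root, the quantity that governs where I − sP(t) is invertible. All parameter values are invented.

```python
import numpy as np

# Hypothetical chain and Gaussian increment distributions g_jk ~ N(mean_jk, 1);
# their two-sided Laplace-Stieltjes transforms are m_jk(t) = exp(-mean_jk*t + t^2/2).
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
mean = np.array([[ 0.5, -1.0],
                 [ 1.5, -0.5]])

def P_of_t(t):
    """The matrix P(t) = (p_jk * m_jk(t)) from the summary above."""
    return P * np.exp(-mean * t + 0.5 * t * t)

for t in (-0.5, 0.0, 0.5):
    # Perron root of the nonnegative matrix P(t); equals 1 at t = 0 since P(0) = P.
    lam = max(np.linalg.eigvals(P_of_t(t)).real)
    print(t, lam)
```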


Author(s):  
H. D. Miller

Summary. This paper is essentially a continuation of the previous one (5) and the notation established therein will be freely repeated. The sequence {ξr} of random variables is defined on a positively regular finite Markov chain {kr} as in (5) and the partial sums ζr = ξ1 + ξ2 + ⋯ + ξr are considered. Let ζn be the first positive ζr and let πjk(y), the ‘ruin’ function or absorption probability, be defined by πjk(y) = P{ζn > y, kn = k | k0 = j}. The main result (Theorem 1) is an asymptotic expression for πjk(y) for large y in the case when E(ξ1) < 0, the expectation of ξ1 being computed under the unique stationary distribution for k0, the initial state of the chain, and unconditional on k1.
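A rough Monte Carlo check of the object studied: for an assumed two-state chain with Gaussian increments and negative stationary drift, the probability that the first positive partial sum exceeds y (summed over the terminal chain state k) is estimated by simulation. The parameters, truncation limits, and path counts are invented for illustration; the paper's asymptotic analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed example: 2-state chain, Gaussian increments, overall negative drift,
# so the partial sums need not ever become positive.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
mean = np.array([[-1.0,  0.5],
                 [ 0.5, -2.0]])

def ruin_prob(y, j=0, n_paths=20000, max_steps=400):
    """Monte Carlo estimate of P{first positive partial sum exceeds y | k0 = j},
    i.e. the sum over k of pi_jk(y)."""
    hits = 0
    for _ in range(n_paths):
        k, s = j, 0.0
        for _ in range(max_steps):
            k_next = rng.choice(2, p=P[k])
            s += rng.normal(mean[k, k_next], 1.0)
            k = k_next
            if s > 0.0:          # zeta_n, the first positive partial sum
                hits += s > y
                break
            if s < -25.0:        # drift has taken over; treat as never positive
                break
    return hits / n_paths

for y in (0.0, 1.0, 2.0, 4.0):
    print(y, ruin_prob(y))
```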


1970 ◽  
Vol 7 (03) ◽  
pp. 699-711 ◽  
Author(s):  
Julian Keilson ◽  
S. Subba Rao

Additive processes on finite Markov chains have been investigated by Miller ([8], [9]), Keilson and Wishart ([2], [3], [4]) and by Fukushima and Hitsuda [1]. These papers study a two-dimensional Markov process {X(t), R(t)} whose state space is R1 × {1, 2, ···, R}, characterized by the following properties: (i) R(t) is an irreducible Markov chain on states 1, 2, …, R governed by a transition probability matrix B0 = {brs}. (ii) X(t) is a sum X(t) = ξ(1) + ξ(2) + ⋯ + ξ(Nt) of random increments dependent on the chain, i.e., if the ith transition takes the chain from state r to state s, then the increment ξ(i) has the distribution Drs(x). (iii) Nt is t in discrete time, while in the continuous-time case Nt might be an independent Poisson process.
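For the continuous-time case in (iii), a minimal sketch under assumed parameters: Nt is taken to be an independent Poisson process of rate lam, so (R(t), X(t)) can be sampled by drawing Nt ~ Poisson(lam·t) and then making that many chain transitions with their increments. The matrices B0 and mu, the rate lam, and the unit variances are invented example values.

```python
import numpy as np

rng = np.random.default_rng(3)

B0 = np.array([[0.5, 0.5],
               [0.2, 0.8]])            # transition probability matrix {b_rs}
mu = np.array([[ 0.0, 2.0],
               [-1.0, 0.5]])           # means of the increment distributions D_rs

def sample_X(t, lam=3.0, r0=0):
    """One draw of (R(t), X(t)) in continuous time, with N_t an independent
    Poisson process of rate lam, so N_t ~ Poisson(lam * t)."""
    n = rng.poisson(lam * t)
    r, x = r0, 0.0
    for _ in range(n):
        s = rng.choice(2, p=B0[r])
        x += rng.normal(mu[r, s], 1.0)  # increment drawn from D_rs
        r = s
    return r, x

print([sample_X(1.0) for _ in range(3)])
```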


2019 ◽  
Vol 44 (3) ◽  
pp. 282-308 ◽  
Author(s):  
Brian G. Vegetabile ◽  
Stephanie A. Stout-Oswald ◽  
Elysia Poggi Davis ◽  
Tallie Z. Baram ◽  
Hal S. Stern

Predictability of behavior is an important characteristic in many fields including biology, medicine, marketing, and education. When a sequence of actions performed by an individual can be modeled as a stationary time-homogeneous Markov chain, the predictability of the individual’s behavior can be quantified by the entropy rate of the process. This article compares three estimators of the entropy rate of finite Markov processes. The first two methods directly estimate the entropy rate through estimates of the transition matrix and stationary distribution of the process. The third method is related to the sliding-window Lempel–Ziv compression algorithm. The methods are compared via a simulation study and in the context of a study of interactions between mothers and their children.
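A sketch of the first, plug-in style of estimator (not necessarily the exact estimators compared in the article): estimate the transition matrix from transition counts, compute its stationary distribution, and combine them into the entropy rate in bits per step. The example chain and sequence length are assumptions.

```python
import numpy as np

def entropy_rate_plugin(seq, n_states):
    """Plug-in entropy rate estimate for an observed state sequence:
    H = -sum_i pi_i sum_j P_ij log2 P_ij (bits per step), with P estimated
    from transition counts and pi its stationary distribution."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    # stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return -np.sum(pi[:, None] * P * logP)

# Example: sequence generated from a known 2-state chain
rng = np.random.default_rng(4)
P_true = np.array([[0.9, 0.1], [0.3, 0.7]])
seq, s = [], 0
for _ in range(10000):
    s = rng.choice(2, p=P_true[s])
    seq.append(s)
print(entropy_rate_plugin(seq, 2))
```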


2019 ◽  
Vol 29 (08) ◽  
pp. 1431-1449
Author(s):  
John Rhodes ◽  
Anne Schilling

We show that the stationary distribution of a finite Markov chain can be expressed as the sum of certain normal distributions. These normal distributions are associated to planar graphs consisting of a straight line with attached loops. The loops touch only at one vertex either of the straight line or of another attached loop. Our analysis is based on our previous work, which derives the stationary distribution of a finite Markov chain using semaphore codes on the Karnofsky–Rhodes and McCammond expansion of the right Cayley graph of the finite semigroup underlying the Markov chain.


Author(s):  
J. Keilson ◽  
D. M. G. Wishart

In a previous paper (3), to which this is a sequel, a central limit theorem was presented for the homogeneous additive processes defined on a finite Markov chain, a class of processes treated extensively by Miller (4). A typical homogeneous process {R(t), X(t)} takes its values in the space R1 × {1, 2, …, R} and is described by a vector distribution F(x, t) with components Fr(x, t) = P{X(t) ≤ x, R(t) = r}, and an increment matrix distribution B(x) governing the transitions. The present paper treats the motion of the process in the same space when its homogeneity is modified by the presence of a set of boundary states in x. Such bounded processes have many applications to the theory of queues, dams, and inventories. Indeed, this paper and its predecessor were motivated initially by a desire to discuss queuing systems with many servers and many service phases. We will treat both absorbing boundaries and associated passage time densities, and reflecting boundaries. For the latter our main objective is an asymptotic discussion of the tails of the ergodic distribution.
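The analytic treatment of boundaries is not reproduced here. The sketch below only simulates the reflecting-boundary case for an invented two-state example, a Markov-modulated random walk reflected at zero, and reads off the empirical tail of its long-run distribution; the chain, increment means, run length, and burn-in are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed example: 2-state modulating chain, Gaussian increments with negative
# overall drift, so the reflected process has an ergodic distribution.
B0 = np.array([[0.9, 0.1],
               [0.5, 0.5]])
mu = np.array([[-0.5,  1.0],
               [ 1.0, -1.5]])

def reflected_path(n_steps, r0=0):
    """Markov-modulated random walk reflected at 0 (a Lindley-type recursion):
    X(k+1) = max(0, X(k) + xi(k+1)), xi drawn from D_rs for the transition made."""
    r, x = r0, 0.0
    xs = np.empty(n_steps)
    for k in range(n_steps):
        s = rng.choice(2, p=B0[r])
        x = max(0.0, x + rng.normal(mu[r, s], 1.0))
        r = s
        xs[k] = x
    return xs

xs = reflected_path(200000)[50000:]          # drop burn-in
for level in (2.0, 4.0, 6.0, 8.0):
    print(level, np.mean(xs > level))        # empirical tail of the long-run distribution
```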

