On a characterization property of finite irreducible Markov chains

1970 ◽  
Vol 7 (3) ◽  
pp. 771-775
Author(s):  
I. V. Basawa

Let {X_k}, k = 1, 2, ···, be a sequence of random variables forming a homogeneous Markov chain on a finite state-space S = {1, 2, ···, s}. X_k could be thought of as the state at time k of some physical system for which p_ij = Pr{X_{k+1} = j | X_k = i} are the (one-step) transition probabilities. It is assumed that all the states are inter-communicating, so that the transition matrix P = ((p_ij)) is irreducible.
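As an illustrative aside (not part of the paper), irreducibility of a finite transition matrix can be checked by verifying that every state is reachable from every other state; a minimal Python sketch with a hypothetical 3-state matrix:

```python
# Check irreducibility of a finite transition matrix by graph reachability.
# The 3x3 matrix below is a hypothetical example, not taken from the paper.

def is_irreducible(P):
    """Return True if every state communicates with every other state."""
    s = len(P)
    for start in range(s):
        # Depth-first search over edges with positive probability.
        seen = {start}
        frontier = [start]
        while frontier:
            i = frontier.pop()
            for j in range(s):
                if P[i][j] > 0 and j not in seen:
                    seen.add(j)
                    frontier.append(j)
        if len(seen) < s:
            return False
    return True

P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]   # all states inter-communicate

print(is_irreducible(P))  # True
```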


1968 ◽  
Vol 5 (2) ◽ 
pp. 350-356 ◽  
Author(s):  
R. G. Khazanie

Consider a finite Markov process {X_n} described by its one-step transition probabilities. In describing the transition probabilities we adopt the convention that 0^0 = 1, so that the states 0 and M are absorbing, and the states 1, 2, ···, M − 1 are transient.
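The displayed transition probabilities are not reproduced in this abstract; as a hedged sketch, a binomial-type chain on {0, ···, M} (a hypothetical stand-in, chosen so that the convention 0^0 = 1 makes the endpoints absorbing) can be classified as follows:

```python
# Classify states of a finite chain on {0, ..., M} as absorbing or transient.
# The binomial-style matrix here is a hypothetical stand-in for the paper's
# (unreproduced) transition probabilities; Python's convention 0**0 == 1
# likewise makes the endpoint rows degenerate.
from math import comb

M = 4

def p(i, j):
    """A binomial-type one-step probability with success chance i/M."""
    q = i / M
    return comb(M, j) * q**j * (1 - q)**(M - j)

P = [[p(i, j) for j in range(M + 1)] for i in range(M + 1)]

absorbing = [i for i in range(M + 1) if P[i][i] == 1.0]
print(absorbing)  # [0, 4]: states 0 and M absorb; 1..M-1 are transient
```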


1963 ◽  
Vol 3 (3) ◽  
pp. 351-358 ◽  
Author(s):  
P. D. Finch

Let R denote the set of real numbers and B the σ-field of all Borel subsets of R. A homogeneous Markov chain with state space a Borel subset Ω of R is a sequence {a_n}, n ≧ 0, of random variables taking values in Ω, with one-step transition probabilities P^(1)(ξ, A) defined by

P^(1)(ξ, A) = Pr{a_{n+1} ∈ A | a_n = ξ, a_{n−1} = ξ_{n−1}, ···, a_0 = ξ_0}   (1.1)

for each choice of ξ, ξ_0, ···, ξ_{n−1} in Ω and all Borel subsets A of Ω. The fact that the right-hand side of (1.1) does not depend on the ξ_i, 0 ≦ i < n, is of course the Markovian property; the non-dependence on n is the homogeneity of the chain.
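As a hedged illustration (not from the paper), a homogeneous chain on the real line can be simulated from any one-step kernel; here the kernel is a hypothetical Gaussian autoregression:

```python
# Simulate a homogeneous Markov chain on the real line whose one-step
# transition kernel P(xi, A) is hypothetical: a_{n+1} = 0.5 * a_n + noise.
# The next state depends only on the current state (Markov property),
# and the kernel does not depend on n (homogeneity).
import random

random.seed(0)

def step(xi):
    """Draw a_{n+1} from the one-step kernel given a_n = xi."""
    return 0.5 * xi + random.gauss(0.0, 1.0)

a = 10.0
path = [a]
for _ in range(50):
    a = step(a)
    path.append(a)

print(len(path))  # 51
```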


1988 ◽  
Vol 20 (1) ◽ 
pp. 99-111 ◽  
Author(s):  
Nico M. Van Dijk

Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite horizon and average reward function. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1-queue and an overflow queueing model with an error bound in the arrival rate.
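A minimal sketch of the setting, with a hypothetical two-state chain and reward vector: the finite-horizon reward V_n = r + P V_{n−1} is computed for the original and a perturbed transition matrix, and the resulting deviation is small when the perturbation is.

```python
# Finite-horizon expected reward V_n = r + P V_{n-1} for a Markov reward
# process, and the effect of perturbing the one-step transition
# probabilities.  The two-state chain and the rewards are hypothetical.

def horizon_reward(P, r, n):
    """Expected total reward over n steps, started from each state."""
    s = len(P)
    V = [0.0] * s
    for _ in range(n):
        V = [r[i] + sum(P[i][j] * V[j] for j in range(s)) for i in range(s)]
    return V

P  = [[0.9, 0.1], [0.2, 0.8]]
Pe = [[0.89, 0.11], [0.2, 0.8]]   # slightly perturbed first row
r  = [1.0, 0.0]

V, Ve = horizon_reward(P, r, 10), horizon_reward(Pe, r, 10)
d = max(abs(a - b) for a, b in zip(V, Ve))
print(d)  # a small deviation, of the order of the perturbation
```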


1999 ◽  
Vol 12 (4) ◽  
pp. 371-392
Author(s):  
Bong Dae Choi ◽  
Sung Ho Choi ◽  
Dan Keun Sung ◽  
Tae-Hee Lee ◽  
Kyu-Seog Song

We analyze the transient behavior of a Markovian arrival queue with congestion control based on double thresholds, where the arrival process is a queue-length-dependent Markovian arrival process. We consider the Markov chain embedded at arrival epochs and derive the one-step transition probabilities. From these results, we obtain the mean delay and the loss probability of the nth arrival packet. Before studying this complex model, we first give a transient analysis of an MAP/M/1 queueing system without congestion control at arrival epochs. We apply our result to a signaling system No. 7 network with a congestion control based on thresholds.
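A hedged sketch of threshold-based congestion control (the parameters and dynamics are hypothetical, not the paper's MAP model): arrivals are dropped once the queue reaches an upper threshold and accepted again after it falls to a lower one.

```python
# Congestion control with two thresholds on the queue length: arrivals are
# dropped while the queue is at or above the upper threshold and accepted
# again once it falls to the lower one (hysteresis).  All parameters are
# hypothetical illustration values.
import random

random.seed(1)
LOW, HIGH, CAP = 3, 7, 10

q, dropping, dropped = 0, False, 0
for _ in range(10_000):
    if random.random() < 0.55:           # arrival epoch (load > service rate)
        if dropping or q >= CAP:
            dropped += 1                 # packet lost under congestion
        else:
            q += 1
    elif q > 0:                          # service completion
        q -= 1
    # update the hysteresis state from the double threshold
    if q >= HIGH:
        dropping = True
    elif q <= LOW:
        dropping = False

print(dropped > 0)  # True: some packets are lost under overload
```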


1969 ◽  
Vol 6 (3) ◽  
pp. 704-707 ◽  
Author(s):  
Thomas L. Vlach ◽  
Ralph L. Disney

The departure process from the GI/G/1 queue is shown to be a semi-Markov process imbedded at departure points with a two-dimensional state space. Transition probabilities for this process are defined and derived from the distributions of the arrival and service processes. The one-step transition probabilities and a stationary distribution are obtained for the imbedded two-dimensional Markov chain.
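As an illustrative sketch (the two-state matrix is a hypothetical stand-in for the paper's imbedded two-dimensional chain), a stationary distribution can be obtained by iterating π ← πP:

```python
# Stationary distribution of an imbedded Markov chain by repeated
# multiplication pi <- pi * P.  The 2-state transition matrix is a
# hypothetical example, not the GI/G/1 departure-process chain itself.

def stationary(P, iters=200):
    """Approximate the stationary distribution by power iteration."""
    s = len(P)
    pi = [1.0 / s] * s
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(s)) for j in range(s)]
    return pi

P = [[0.7, 0.3], [0.4, 0.6]]
pi = stationary(P)
print([round(x, 3) for x in pi])  # [0.571, 0.429], i.e. (4/7, 3/7)
```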


Author(s):  
J. G. Mauldon

Consider a Markov chain with an enumerable infinity of states, labelled 0, 1, 2, ···, whose one-step transition probabilities p_ij are independent of time. I write p_ij^(n) for the n-step transition probabilities and, departing slightly from the usual convention, π_ij = lim_{N→∞} N^{−1} Σ_{n=1}^{N} p_ij^(n). Then it is known ((1), pp. 324–34, or (6)) that the limits π_ij always exist.
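Reading the limits π_ij as Cesàro averages of the n-step transition probabilities (an assumption consistent with the claim that they always exist), a minimal numerical sketch with a hypothetical period-2 chain, whose plain limits fail to exist:

```python
# Cesaro averages (1/N) * sum_{n=1}^{N} p_ij^(n) converge even when the
# plain limits of p_ij^(n) do not; the period-2 chain below is a
# hypothetical example in which p_00^(n) alternates 0, 1, 0, 1, ...

def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.0, 1.0], [1.0, 0.0]]     # deterministic swap between two states
N = 1000
Pn = P
acc = [[0.0, 0.0], [0.0, 0.0]]   # running Cesaro average of P^n
for _ in range(N):
    for i in range(2):
        for j in range(2):
            acc[i][j] += Pn[i][j] / N
    Pn = mat_mul(Pn, P)

print(round(acc[0][0], 2))  # 0.5: the Cesaro limit pi_00
```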

