Perturbation theory for unbounded Markov reward processes with applications to queueing

1988 ◽  
Vol 20 (1) ◽  
pp. 99-111 ◽  
Author(s):  
Nico M. Van Dijk

Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite-horizon and average reward functions. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1 queue and an overflow queueing model with an error bound in the arrival rate.
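The effect of such a perturbation can be illustrated numerically. The sketch below is a hypothetical setup, not the paper's construction: a uniformized M/M/1 chain truncated at a finite level, with the unbounded reward r(n) = n (so the average reward is the mean queue length), comparing a nominal and a perturbed arrival rate.

```python
import numpy as np

def mm1_transition_matrix(lam, mu, n_max):
    """Uniformized one-step transition matrix of an M/M/1 queue,
    truncated at n_max customers (truncation chosen for illustration)."""
    rate = lam + mu  # uniformization constant
    P = np.zeros((n_max + 1, n_max + 1))
    for i in range(n_max + 1):
        up = lam / rate if i < n_max else 0.0
        down = mu / rate if i > 0 else 0.0
        if i < n_max:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
        P[i, i] = 1.0 - up - down
    return P

def stationary(P):
    """Normalized left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

lam, mu, eps, n_max = 0.5, 1.0, 0.05, 200
pi0 = stationary(mm1_transition_matrix(lam, mu, n_max))
pi1 = stationary(mm1_transition_matrix(lam + eps, mu, n_max))
reward = np.arange(n_max + 1)        # unbounded one-step reward r(n) = n
g0, g1 = pi0 @ reward, pi1 @ reward  # nominal vs. perturbed average reward
```

With these rates the truncation error is negligible, so g0 and g1 recover the exact M/M/1 mean queue lengths ρ/(1−ρ) for ρ = 0.5 and ρ = 0.55; their difference is the quantity a perturbation bound controls.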


1999 ◽  
Vol 12 (4) ◽  
pp. 371-392
Author(s):  
Bong Dae Choi ◽  
Sung Ho Choi ◽  
Dan Keun Sung ◽  
Tae-Hee Lee ◽  
Kyu-Seog Song

We analyze the transient behavior of a Markovian arrival queue with congestion control based on a pair of thresholds, where the arrival process is a queue-length-dependent Markovian arrival process. We consider the Markov chain embedded at arrival epochs and derive the one-step transition probabilities. From these results, we obtain the mean delay and the loss probability of the nth arriving packet. Before studying this complex model, we first give a transient analysis of a MAP/M/1 queueing system without congestion control at arrival epochs. We apply our result to a Signaling System No. 7 network with a congestion control based on thresholds.
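As a concrete instance of a chain embedded at arrival epochs, the sketch below computes one row of the one-step transition matrix for a plain M/M/1 queue (not the MAP model of the paper): given i customers just before an arrival, the next embedded state is i + 1 minus the number of exponential service completions before the next arrival.

```python
def embedded_arrival_row(lam, mu, i, n_max):
    """Row i of the one-step transition matrix of an M/M/1 queue embedded
    just before arrival epochs: X_{n+1} = X_n + 1 - (departures between
    the nth and (n+1)th arrivals), truncated at n_max for illustration."""
    q = mu / (lam + mu)            # P(a departure occurs before the next arrival)
    row = [0.0] * (n_max + 1)
    for k in range(i + 1):         # k departures; the queue does not empty
        row[min(i + 1 - k, n_max)] += q**k * (1 - q)
    row[0] += q**(i + 1)           # all i + 1 customers leave before the next arrival
    return row

row = embedded_arrival_row(1.0, 2.0, 3, 10)
```

The geometric terms sum to 1 by construction, which is a quick sanity check on any hand-derived embedded transition row.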


1969 ◽  
Vol 6 (3) ◽  
pp. 704-707 ◽  
Author(s):  
Thomas L. Vlach ◽  
Ralph L. Disney

The departure process from the GI/G/1 queue is shown to be a semi-Markov process imbedded at departure points with a two-dimensional state space. Transition probabilities for this process are defined and derived from the distributions of the arrival and service processes. The one-step transition probabilities and a stationary distribution are obtained for the imbedded two-dimensional Markov chain.


1970 ◽  
Vol 7 (3) ◽  
pp. 771-775
Author(s):  
I. V. Basawa

Let {Xk}, k = 1, 2, ··· be a sequence of random variables forming a homogeneous Markov chain on a finite state space, S = {1, 2, ···, s}. Xk can be thought of as the state at time k of some physical system for which pij = P(Xk+1 = j | Xk = i) are the (one-step) transition probabilities. It is assumed that all the states intercommunicate, so that the transition matrix P = ((pij)) is irreducible.


1991 ◽  
Vol 5 (4) ◽  
pp. 415-428 ◽  
Author(s):  
Bennett L. Fox ◽  
Paul Glasserman

Let x(j) be the expected reward accumulated up to hitting an absorbing set in a Markov chain, starting from state j. Suppose the transition probabilities and the one-step reward function depend on a parameter, and denote by y(j) the derivative of x(j) with respect to that parameter. We estimate y(0) starting from the respective Poisson equations that x = [x(0), x(1), …] and y = [y(0), y(1), …] satisfy. Relative to a likelihood-ratio-method (LRM) estimator, our estimator generally has (much) smaller variance; in a certain sense, it is a conditional expectation of that estimator given x. Unlike LRM, however, we have to estimate certain components of x. Our method has broader scope than LRM: we can estimate sensitivity to opening arcs.
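The pair of Poisson equations can be made concrete on a toy example. The sketch below uses a hypothetical chain with two transient states and one absorbing state (all numbers invented for illustration): solve (I − Q)x = r for the expected reward to absorption, then differentiate that equation in the parameter θ to get (I − Q)y = (dQ/dθ)x.

```python
import numpy as np

def x_of(theta):
    """Expected reward x(j) accumulated before absorption, for a toy
    chain whose transient-to-transient block Q depends on theta."""
    Q = np.array([[0.1, theta],
                  [0.3, 0.2]])                 # transient-to-transient block
    r = np.array([1.0, 2.0])                   # one-step rewards
    return np.linalg.solve(np.eye(2) - Q, r)   # Poisson equation (I - Q) x = r

theta = 0.4
x = x_of(theta)
# differentiating (I - Q) x = r in theta gives (I - Q) y = (dQ/dtheta) x
Q = np.array([[0.1, theta], [0.3, 0.2]])
dQ = np.array([[0.0, 1.0], [0.0, 0.0]])        # only Q[0,1] depends on theta
y = np.linalg.solve(np.eye(2) - Q, dQ @ x)     # derivative vector y = dx/dtheta
```

A central finite difference of x_of around θ agrees with y, which is the deterministic analogue of the estimation problem the paper treats by simulation.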


1968 ◽  
Vol 5 (2) ◽  
pp. 350-356 ◽  
Author(s):  
R. G. Khazanie

Consider a finite Markov process {Xn} described by its one-step transition probabilities. In describing the transition probabilities in this manner we adopt the convention that 0^0 = 1, so that the states 0 and M are absorbing, and the states 1, 2, ···, M−1 are transient.
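The role of the 0^0 = 1 convention can be seen in a small numerical sketch. The binomial transition law below is a hypothetical example of this form (not necessarily the paper's exact formula): with the convention, rows 0 and M put all mass on themselves, and absorption probabilities follow from the fundamental matrix.

```python
from math import comb
import numpy as np

M = 4  # hypothetical chain size, for illustration only

def p(i, j):
    # binomial one-step transition probabilities; Python's 0.0**0 == 1.0
    # realizes the 0^0 = 1 convention, making rows 0 and M absorbing
    return comb(M, j) * (i / M)**j * (1 - i / M)**(M - j)

P = np.array([[p(i, j) for j in range(M + 1)] for i in range(M + 1)])
Q = P[1:M, 1:M]                            # transient block
R = P[1:M, [0, M]]                         # transient-to-absorbing block
B = np.linalg.solve(np.eye(M - 1) - Q, R)  # absorption probabilities
```

For this binomial law the chain is a martingale, so from state i the probability of absorbing at M is i/M, which the computed matrix B reproduces.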


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 729
Author(s):  
Miquel Montero

Random walks with invariant loop probabilities comprise a wide family of Markov processes with site-dependent one-step transition probabilities. The whole family, which includes the simple random walk, emerges from geometric considerations related to the stereographic projection of an underlying geometry onto a line. After a general introduction, we focus our attention on the elliptic case: random walks on a circle with built-in reflecting boundaries.
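A generic simulation sketch of a walk with site-dependent one-step probabilities and reflecting endpoints; the probability function below is hypothetical and is not the elliptic family constructed in the paper.

```python
import random

def walk(n_sites, n_steps, p_right, seed=0):
    """Simulate a nearest-neighbour walk on sites 0..n_sites-1 with a
    site-dependent probability p_right(x) of stepping right; attempted
    steps outside the interval are reflected (the walker stays put)."""
    rng = random.Random(seed)
    x, path = n_sites // 2, []
    for _ in range(n_steps):
        nxt = x + 1 if rng.random() < p_right(x) else x - 1
        if 0 <= nxt < n_sites:  # reflecting boundaries
            x = nxt
        path.append(x)
    return path

# hypothetical site-dependent law: drift toward the centre site
path = walk(11, 1000, lambda x: 0.5 + 0.04 * (5 - x))
```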

