Estimating Derivatives Via Poisson's Equation

1991 ◽  
Vol 5 (4) ◽  
pp. 415-428 ◽  
Author(s):  
Bennett L. Fox ◽  
Paul Glasserman

Let x(j) be the expected reward accumulated up to hitting an absorbing set in a Markov chain, starting from state j. Suppose the transition probabilities and the one-step reward function depend on a parameter, and denote by y(j) the derivative of x(j) with respect to that parameter. We estimate y(0) starting from the respective Poisson equations that x = [x(0),x(1),…] and y = [y(0),y(1),…] satisfy. Relative to a likelihood-ratio-method (LRM) estimator, our estimator generally has (much) smaller variance; in a certain sense, it is a conditional expectation of that estimator given x. Unlike LRM, however, we have to estimate certain components of x. Our method has broader scope than LRM: we can estimate sensitivity to opening arcs.
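
As a back-of-the-envelope illustration (not the authors' simulation-based estimator): for a chain small enough to solve directly, the two Poisson equations x = r + Px and y = r′ + P′x + Py can be checked numerically against a finite difference of x. The transition block P(θ) and reward r(θ) below are invented for the sketch.

```python
import numpy as np

# Hypothetical 3-state chain: states 0 and 1 are transient, state 2 absorbs.
# P(theta) is the transient-to-transient block; r(theta) the one-step rewards.
def P(theta):
    return np.array([[0.2, 0.5 * theta],
                     [0.3 * theta, 0.1]])

def r(theta):
    return np.array([1.0, 2.0 * theta])

def x_of(theta):
    # Poisson equation: x = r + P x, i.e. (I - P) x = r
    return np.linalg.solve(np.eye(2) - P(theta), r(theta))

def y_of(theta, h=1e-6):
    # Differentiated Poisson equation: y = r' + P' x + P y
    dP = (P(theta + h) - P(theta - h)) / (2 * h)
    dr = (r(theta + h) - r(theta - h)) / (2 * h)
    x = x_of(theta)
    return np.linalg.solve(np.eye(2) - P(theta), dr + dP @ x)

theta = 0.8
y = y_of(theta)
# sanity check: y should match a finite difference of x itself
fd = (x_of(theta + 1e-5) - x_of(theta - 1e-5)) / 2e-5
```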

1988 ◽  
Vol 20 (1) ◽  
pp. 99-111 ◽  
Author(s):  
Nico M. Van Dijk

Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite horizon and average reward function. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1 queue and an overflow queueing model with an error bound in the arrival rate.
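
A minimal numerical sketch of the setting, with invented numbers: perturb the arrival rate of a truncated M/M/1 queue and compare the average reward before and after, taking the (unbounded) one-step reward to be the queue length.

```python
import numpy as np

# Truncated M/M/1 birth-death chain on {0,...,N}: a toy stand-in for the
# paper's setting, with reward(n) = n (the queue length).
def stationary(lam, mu, N):
    rho = lam / mu
    pi = rho ** np.arange(N + 1)   # geometric stationary weights
    return pi / pi.sum()

def avg_reward(lam, mu, N=200):
    pi = stationary(lam, mu, N)
    return float(pi @ np.arange(N + 1))

base = avg_reward(0.5, 1.0)        # rho = 0.5 -> mean number in system = 1.0
perturbed = avg_reward(0.55, 1.0)  # small error in the arrival rate
```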


1969 ◽  
Vol 6 (3) ◽  
pp. 704-707 ◽  
Author(s):  
Thomas L. Vlach ◽  
Ralph L. Disney

The departure process from the GI/G/1 queue is shown to be a semi-Markov process imbedded at departure points with a two-dimensional state space. Transition probabilities for this process are defined and derived from the distributions of the arrival and service processes. The one-step transition probabilities and a stationary distribution are obtained for the imbedded two-dimensional Markov chain.
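
The departure epochs themselves are easy to simulate: under FIFO the n-th departure satisfies D_n = max(A_n, D_{n-1}) + S_n. A sketch of that standard recursion (not the paper's semi-Markov construction):

```python
def departure_times(arrivals, services):
    """FIFO GI/G/1: the n-th departure is D_n = max(A_n, D_{n-1}) + S_n,
    since service n starts once customer n has arrived and the server is free."""
    deps = []
    last = 0.0
    for a, s in zip(arrivals, services):
        last = max(a, last) + s
        deps.append(last)
    return deps

# deterministic check: arrivals at 1, 2, 3 with unit services never queue
d = departure_times([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```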


1977 ◽  
Vol 14 (2) ◽  
pp. 298-308 ◽  
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B1, B2, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as is necessary to vacate position k. Those arrangements of books where after some finite position all the books are in natural order (book i occupies position i) are considered as states in an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
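
The shelf dynamics are straightforward to simulate on a finite truncation; for k = 1 this is the classical move-to-front rule. The selection probabilities below are placeholders:

```python
import random

def simulate_shelf(probs, k, steps, seed=0):
    """Repeatedly select a book with the given probabilities, remove it,
    and reinsert it at position k (1-indexed); the other books shift over."""
    rng = random.Random(seed)
    shelf = list(range(len(probs)))   # natural order: book i in position i
    for _ in range(steps):
        b = rng.choices(shelf, weights=[probs[i] for i in shelf])[0]
        shelf.remove(b)
        shelf.insert(k - 1, b)
    return shelf
```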


1983 ◽  
Vol 20 (3) ◽  
pp. 482-504 ◽  
Author(s):  
C. Cocozza-Thivent ◽  
C. Kipnis ◽  
M. Roussignol

We investigate how the property of null-recurrence is preserved for Markov chains under a perturbation of the transition probability. After recalling some useful criteria in terms of the one-step transition nucleus we present two methods to determine barrier functions, one in terms of taboo potentials for the unperturbed Markov chain, and the other based on Taylor's formula.
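
A toy illustration of the barrier-function idea (far simpler than the criteria in the paper): for a random walk on the nonnegative integers reflecting at 0, the one-step drift of V(x) = x separates the null-recurrent symmetric case from a perturbed transient one.

```python
def drift(p, V, x):
    """One-step drift E[V(X_1) - V(X_0) | X_0 = x] for a walk on {0, 1, ...}
    stepping up with probability p, down with 1 - p, reflecting at 0."""
    up, down = x + 1, max(x - 1, 0)
    return p * V(up) + (1 - p) * V(down) - V(x)

V = lambda x: x
d_unperturbed = drift(0.5, V, 10)   # symmetric walk: zero drift (null recurrent)
d_perturbed = drift(0.55, V, 10)    # perturbed walk: positive drift (transient)
```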


1999 ◽  
Vol 12 (4) ◽  
pp. 371-392 ◽  
Author(s):  
Bong Dae Choi ◽  
Sung Ho Choi ◽  
Dan Keun Sung ◽  
Tae-Hee Lee ◽  
Kyu-Seog Song

We analyze the transient behavior of a Markovian arrival queue with congestion control based on two thresholds, where the arrival process is a queue-length-dependent Markovian arrival process. We consider the Markov chain embedded at arrival epochs and derive the one-step transition probabilities. From these results, we obtain the mean delay and the loss probability of the nth arrival packet. Before studying this complex model, we first give a transient analysis of a MAP/M/1 queueing system without congestion control at arrival epochs. We apply our result to a signaling system No. 7 network with congestion control based on thresholds.
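
For the plain M/M/1 case (no congestion control), the one-step transition probabilities of the queue length embedded at arrival epochs have a closed form: each departure beats the next arrival with probability μ/(λ + μ). A sketch:

```python
def embedded_transition(i, j, lam, mu):
    """P(next arrival finds j customers | current arrival found i) in M/M/1.
    After the arrival there are i + 1 customers; each departure wins the race
    against the next arrival with probability q = mu / (lam + mu)."""
    p = lam / (lam + mu)
    q = mu / (lam + mu)
    if j == 0:
        return q ** (i + 1)          # all i + 1 customers depart first
    if 1 <= j <= i + 1:
        return p * q ** (i + 1 - j)  # exactly i + 1 - j departures, then arrival
    return 0.0

# from state i = 3 the next arrival can find 0..4 customers
row = [embedded_transition(3, j, 1.0, 2.0) for j in range(5)]
```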


1987 ◽  
Vol 24 (4) ◽  
pp. 1006-1011 ◽  
Author(s):  
G. Abdallaoui

Our concern is with a particular problem which arises in connection with a discrete-time Markov chain model for a graded manpower system. In this model, the members of an organisation are classified into distinct classes. As time passes, they move from one class to another, or to the outside world, in a random way governed by fixed transition probabilities. In this paper, the emphasis is placed on evaluating exact values of the probabilities of attaining and maintaining a structure.
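
A sketch of the underlying model with invented numbers: expected grade stocks evolve as n ↦ nP + Rr, where the row deficits of P are wastage and R recruits enter according to r. (The attainability and maintainability probabilities studied in the paper require the full distribution of the flows, not just these means.)

```python
import numpy as np

# Hypothetical 3-grade system: P[i][j] = prob. a member moves grade i -> j;
# each row sums to less than 1, the deficit being wastage to the outside world.
P = np.array([[0.7, 0.2, 0.0],
              [0.0, 0.6, 0.2],
              [0.0, 0.0, 0.8]])
r = np.array([1.0, 0.0, 0.0])   # all recruits enter the bottom grade
R = 20                          # recruits per period

def expected_stocks(n, periods):
    n = np.asarray(n, dtype=float)
    for _ in range(periods):
        n = n @ P + R * r       # expected internal flows plus recruitment
    return n

n1 = expected_stocks([100.0, 50.0, 20.0], 1)
```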


2003 ◽  
Vol 03 (04) ◽  
pp. L389-L398 ◽  
Author(s):  
ZORAN MIHAILOVIĆ ◽  
MILAN RAJKOVIĆ

A discrete-time Markov chain solution with exact rules for general computation of the transition probabilities of the one-dimensional cooperative Parrondo's games is presented. We show that winning and the occurrence of the paradox depend on the number of players. Analytical results are compared with the results of computer simulation and with those based on the mean-field approach.
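
For contrast, a sketch of the classical single-player capital-dependent games (not the cooperative multi-player version analysed here): the capital mod 3 is a 3-state Markov chain, so each game's exact expected gain per step follows from its stationary distribution. The bias eps = 0.005 is the canonical choice in the Parrondo literature.

```python
import numpy as np

eps = 0.005

def stationary(T):
    """Stationary distribution of a 3-state chain: solve pi T = pi, sum(pi) = 1."""
    A = np.vstack([T.T - np.eye(3), np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def gain(p0, p1):
    """Expected capital gain per step when the win probability is p0 if
    capital % 3 == 0 and p1 otherwise."""
    T = np.array([[0.0,    p0,     1 - p0],   # from residue 0: win -> 1, lose -> 2
                  [1 - p1, 0.0,    p1    ],   # from residue 1: win -> 2, lose -> 0
                  [p1,     1 - p1, 0.0   ]])  # from residue 2: win -> 0, lose -> 1
    pi = stationary(T)
    return pi[0] * (2 * p0 - 1) + (pi[1] + pi[2]) * (2 * p1 - 1)

pA = 0.5 - eps                        # game A: a simple biased coin
gain_A = 2 * pA - 1                   # loses on average
gain_B = gain(0.1 - eps, 0.75 - eps)  # game B alone also loses
# random mixture of A and B: average the two win probabilities in each state
gain_AB = gain((pA + 0.1 - eps) / 2, (pA + 0.75 - eps) / 2)
```

Both games are losing on their own, yet the random mixture wins: the paradox.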

