Regenerative derivatives of regenerative sequences

1993 ◽  
Vol 25 (1) ◽  
pp. 116-139 ◽  
Author(s):  
Paul Glasserman

Given a parametric family of regenerative processes on a common probability space, we investigate when the derivatives (with respect to the parameter) are regenerative. We primarily consider sequences satisfying explicit, Lipschitz recursions, such as the waiting times in many queueing systems, and show that derivatives regenerate together with the original sequence under reasonable monotonicity or continuity assumptions. The inputs to our recursions are i.i.d. or, more generally, governed by a Harris-ergodic Markov chain. For i.i.d. input we identify explicit regeneration points; otherwise, we use coupling arguments. We give conditions for the expected steady-state derivative to be the derivative of the steady-state mean of the original sequence. Under these conditions, the derivative of the steady-state mean has a cycle-formula representation.
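
As a minimal sketch of the kind of recursion treated here (not taken from the paper): the Lindley waiting-time recursion with service times S_n = theta*X_n, where the IPA derivative follows the same indicator-driven recursion, both quantities reset to zero at the start of each busy cycle, and the derivative of the steady-state mean is estimated by a cycle (ratio) formula. The model choices below (M/M/1 rates, the scaling parameter theta) are illustrative assumptions.

```python
# Sketch: IPA derivative of the Lindley recursion W_{n+1} = max(W_n + S_n - A_{n+1}, 0)
# with S_n = theta * X_n.  Waiting time and derivative regenerate together at W = 0,
# and steady-state quantities are estimated by the cycle formula
# E[cycle sum] / E[cycle length].
import random

def ipa_cycle_estimate(theta, lam, n_customers=200_000, seed=1):
    rng = random.Random(seed)
    W, dW = 0.0, 0.0                    # waiting time and its derivative in theta
    cyc_W = cyc_dW = cyc_len = 0.0      # sums over the current regenerative cycle
    tot_W = tot_dW = tot_len = 0.0      # sums over completed cycles
    for _ in range(n_customers):
        X = rng.expovariate(1.0)        # base service variable, S = theta * X
        A = rng.expovariate(lam)        # interarrival time to the next customer
        drift = W + theta * X - A
        if drift > 0.0:
            W, dW = drift, dW + X       # next customer is in the same busy cycle
        else:
            tot_W += cyc_W; tot_dW += cyc_dW; tot_len += cyc_len
            cyc_W = cyc_dW = cyc_len = 0.0
            W, dW = 0.0, 0.0            # regeneration: both reset to zero
        cyc_W += W; cyc_dW += dW; cyc_len += 1
    return tot_W / tot_len, tot_dW / tot_len

mean_W, dmean_W = ipa_cycle_estimate(theta=1.0, lam=0.5)
# For M/M/1 with arrival rate 0.5 and mean service time theta = 1.0, the exact
# values are E[W] = 1.0 and dE[W]/dtheta = 3.0; the estimates should be close.
print(mean_W, dmean_W)
```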


2018 ◽  
Vol 16 (1) ◽  
pp. 986-998
Author(s):  
Chun Wen ◽  
Ting-Zhu Huang ◽  
Xian-Ming Gu ◽  
Zhao-Li Shen ◽  
Hong-Fan Zhang ◽  
...  

Stochastic Automata Networks (SANs) have a wide range of applications in modelling queueing and communication systems. Finding the steady-state probability distribution of a SAN typically requires solving linear systems that involve its generator matrix. However, classical iterative methods such as Jacobi and Gauss-Seidel are inefficient because of the huge size of the generator matrices. In this paper, the multipreconditioned GMRES method (MPGMRES), which uses two or more preconditioners simultaneously, is considered. A selective version of MPGMRES is also presented to curb the rapid growth of the storage requirements and make the method practical. Numerical results on two SAN models are reported to illustrate the effectiveness of the proposed methods.
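
A hedged sketch of the surrounding pipeline, not of MPGMRES itself: for a SAN with two non-interacting automata the descriptor reduces to a Kronecker sum of local generators, and the singular system pi*Q = 0 can be solved by a Krylov method once one balance equation is replaced by the normalisation. A single ILU preconditioner from SciPy stands in for the paper's combination of several preconditioners; all sizes and rates below are made up for illustration.

```python
# Sketch: steady state of a SAN-like CTMC generator via preconditioned GMRES.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def birth_death_generator(n, lam, mu):
    """Generator of a birth-death automaton with n states."""
    up, down = np.full(n - 1, lam), np.full(n - 1, mu)
    Q = sp.diags([down, np.zeros(n), up], offsets=[-1, 0, 1], format="csr")
    return Q - sp.diags(np.asarray(Q.sum(axis=1)).ravel())   # rows sum to zero

Q1 = birth_death_generator(30, 1.0, 1.5)
Q2 = birth_death_generator(40, 0.8, 1.2)
# Descriptor for two non-interacting automata: a Kronecker sum of local generators.
Q = sp.kron(Q1, sp.identity(40)) + sp.kron(sp.identity(30), Q2)

# pi Q = 0 with sum(pi) = 1: replace one equation of Q^T x = 0 by the normalisation.
n = Q.shape[0]
A = Q.T.tolil()
A[n - 1, :] = np.ones(n)
A = A.tocsc()
b = np.zeros(n); b[n - 1] = 1.0

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)           # one ILU preconditioner
prec = spla.LinearOperator(A.shape, matvec=ilu.solve)
pi, info = spla.gmres(A, b, M=prec, restart=50)
print("info =", info, " sum(pi) =", pi.sum(), " min(pi) =", pi.min())
```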


1991 ◽  
Vol 28 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Daniel P. Heyman

We are given a Markov chain with states 0, 1, 2, ···. We want to compute a numerical approximation to the solution of the steady-state balance equations. To do this, we truncate the chain, keeping the first n states, make the resulting matrix stochastic in some convenient way, and solve the finite system. The purpose of this paper is to provide some sufficient conditions that imply that as n tends to infinity, the stationary distributions of the truncated chains converge to the stationary distribution of the given chain. Our approach is completely probabilistic, and our conditions are given in probabilistic terms. We illustrate how to verify these conditions with five examples.
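
A small illustrative sketch (the chain and the augmentation rule are chosen for convenience, not taken from the paper): truncate a positive-recurrent birth-death chain to its first n states, return the lost probability mass to the diagonal so that rows stay stochastic, solve the finite balance equations, and watch the truncated stationary vectors approach the known geometric limit as n grows.

```python
# Sketch: truncation + diagonal augmentation for a birth-death chain on {0,1,2,...}
# with up-probability p and down-probability q (p < q); exact stationary
# distribution of the infinite chain is pi_i = (1 - p/q)*(p/q)^i.
import numpy as np

p, q = 0.3, 0.5

def truncated_stationary(n):
    P = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            P[i, i + 1] = p
        if i > 0:
            P[i, i - 1] = q
        P[i, i] = 1.0 - P[i].sum()        # diagonal augmentation keeps rows stochastic
    # solve pi P = pi with sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

exact = (1 - p / q) * (p / q) ** np.arange(10)
for n in (5, 10, 20, 40):
    approx = truncated_stationary(n)
    k = min(n, 10)
    print(n, float(np.max(np.abs(approx[:k] - exact[:k]))))   # error shrinks with n
```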


1980 ◽  
Vol 17 (3) ◽  
pp. 814-821 ◽  
Author(s):  
J. G. Shanthikumar

Some properties of the number of up- and downcrossings of level u in a special case of regenerative processes are discussed. Two basic relations between the density functions and the expected number of upcrossings of this process are derived. Using these results, two examples of controlled M/G/1 queueing systems are solved. Simple relations are derived for the waiting time distribution conditioned on the phase of control encountered by an arriving customer. The Laplace-Stieltjes transform of the distribution function of the waiting time of an arbitrary customer is also derived for each of these two examples.
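
A minimal numerical sketch, not of the controlled models studied in the paper: for a plain M/M/1 workload process, the basic level-crossing identity says that the long-run rate of upcrossings of a level u equals the stationary workload density at u, here rho*(mu-lambda)*exp(-(mu-lambda)*u). The simulation below checks this; the rates are arbitrary choices.

```python
# Sketch: count upcrossings of level u by the M/M/1 workload and compare the
# long-run rate with the stationary density at u.
import math, random

def upcrossing_rate(u, lam=0.5, mu=1.0, n=500_000, seed=3):
    rng = random.Random(seed)
    V, total_time, crossings = 0.0, 0.0, 0
    for _ in range(n):
        S = rng.expovariate(mu)           # service requirement of this arrival
        if V < u <= V + S:                # the jump carries the workload across u
            crossings += 1
        A = rng.expovariate(lam)          # time until the next arrival
        V = max(V + S - A, 0.0)           # workload seen by the next arrival
        total_time += A
    return crossings / total_time

lam, mu = 0.5, 1.0
for u in (0.5, 1.0, 2.0):
    exact = (lam / mu) * (mu - lam) * math.exp(-(mu - lam) * u)
    print(u, round(upcrossing_rate(u, lam, mu), 4), round(exact, 4))
```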


1973 ◽  
Vol 10 (4) ◽  
pp. 886-890 ◽  
Author(s):  
W. J. Hendricks

In a single-shelf library of N books we suppose that books are selected one at a time and returned to the kth position on the shelf before another selection is made. Books are moved to the right or left as necessary to vacate position k. The probability of selecting each book is assumed to be known, and the N! arrangements of the books are considered as states of an ergodic Markov chain for which we find the stationary distribution.
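
A brute-force sketch for a toy shelf (the probabilities and shelf size are illustrative): with N = 4 books, each of the N! orderings is a state; selecting book b with probability p_b and reinserting it at position k defines a transition matrix whose stationary distribution can be extracted numerically. Here k is 0-based, so k = 0 corresponds to move-to-front.

```python
# Sketch: stationary distribution of the "select, then reinsert at position k"
# shelf model, by enumerating all N! orderings.
import itertools
import numpy as np

def stationary_library(p, k):
    """p: selection probabilities (sum to 1); k: 0-based reinsertion position."""
    n = len(p)
    states = list(itertools.permutations(range(n)))
    index = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for s in states:
        for pos, book in enumerate(s):
            rest = list(s[:pos] + s[pos + 1:])          # remove the selected book
            new = tuple(rest[:k] + [book] + rest[k:])   # reinsert it at slot k
            P[index[s], index[new]] += p[book]
    # stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    return dict(zip(states, pi))

pi = stationary_library(p=[0.4, 0.3, 0.2, 0.1], k=0)     # k = 0: move-to-front
print(sorted(pi.items(), key=lambda kv: -kv[1])[:3])     # most likely orderings
```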


1968 ◽  
Vol 5 (2) ◽  
pp. 401-413 ◽  
Author(s):  
Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and fundamental matrix with respect to the transition probabilities. Semi-group properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple subchain case if and only if the change in the transition probabilities does not alter the number of subchains or intermix the various subchains. The formalism is presented when this condition is satisfied.
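
A numerical sketch of the first-order formula in the single-chain case (the chain and the perturbation direction are made up): with Z = (I - P + 1pi)^(-1) and a perturbation P + eps*C whose direction C has zero row sums, differentiating pi(eps)(I - P - eps*C) = 0 gives dpi/deps = pi C Z at eps = 0; a finite-difference quotient serves as an independent check.

```python
# Sketch: derivative of the stationary distribution with respect to a
# perturbation of the transition matrix, checked against finite differences.
import numpy as np

def stationary(P):
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
C = np.array([[ 0.1, -0.1,  0.0],       # rows sum to zero, so P + eps*C stays
              [ 0.0,  0.1, -0.1],       # stochastic for small enough eps
              [-0.1,  0.0,  0.1]])

pi = stationary(P)
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))   # fundamental-type matrix
analytic = pi @ C @ Z

eps = 1e-6
numeric = (stationary(P + eps * C) - pi) / eps
print(analytic)
print(numeric)          # should agree to several digits
```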


2013 ◽  
Vol 339 ◽  
pp. 236-241
Author(s):  
Ying Zhang ◽  
Shi Hang Huang ◽  
De Peng Dang ◽  
Hui Ruan

How to ensure a smooth, fast and efficient emergency response procedure has become an issue of great concern. In practice, however, an emergency-plan procedure may be confusing and inefficient because of delays caused by waiting for decisions, responding to conflicts, and limited resources while the emergency is being handled. In this paper, we propose a colored stochastic Petri net to evaluate the security and complexity of an emergency response procedure and the reasonableness of its resource flow, so as to effectively analyze the potential deficiencies of the procedure. We first establish a colored stochastic Petri net and then convert it to an isomorphic Markov chain. Studying the structural properties of the colored stochastic Petri net and the steady-state properties of the Markov chain provides a scientific basis for improving the emergency plan. It also ensures an ordered and efficient implementation of the emergency response procedure in an emergency.
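
A stripped-down sketch of the "Petri net to isomorphic Markov chain" step, for a plain (uncoloured) stochastic Petri net invented for illustration: a small resource model whose reachability graph is built by search, turned into a CTMC generator, and solved for its steady-state distribution.

```python
# Sketch: reachability graph of a tiny stochastic Petri net, its CTMC generator,
# and the steady-state distribution over markings.
from collections import deque
import numpy as np

# transitions: (name, input-place demands, output-place increments, firing rate)
TRANSITIONS = [
    ("start",  {"idle": 1}, {"busy": 1}, 2.0),
    ("finish", {"busy": 1}, {"idle": 1}, 3.0),
]
INITIAL = {"idle": 3, "busy": 0}        # three resource tokens, all idle

def enabled(marking, demand):
    return all(marking.get(p, 0) >= k for p, k in demand.items())

def fire(marking, demand, produce):
    m = dict(marking)
    for p, k in demand.items():
        m[p] -= k
    for p, k in produce.items():
        m[p] = m.get(p, 0) + k
    return m

key = lambda m: tuple(sorted(m.items()))
states, edges, queue = {key(INITIAL): 0}, [], deque([INITIAL])
while queue:                             # breadth-first reachability search
    m = queue.popleft()
    for _, demand, produce, rate in TRANSITIONS:
        if enabled(m, demand):
            m2 = fire(m, demand, produce)
            if key(m2) not in states:
                states[key(m2)] = len(states)
                queue.append(m2)
            edges.append((states[key(m)], states[key(m2)], rate))

n = len(states)                          # assemble the generator and solve pi Q = 0
Q = np.zeros((n, n))
for i, j, rate in edges:
    Q[i, j] += rate
    Q[i, i] -= rate
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
for mk, idx in states.items():
    print(dict(mk), round(pi[idx], 4))
```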

