A method of approximating Markov jump processes

1988 ◽  
Vol 20 (1) ◽ 
pp. 33-58
Author(s):  
Keith N. Crank ◽  
Prem S. Puri

We present a method of approximating Markov jump processes which was used by Fuhrmann [7] in a special case. We generalize the method and prove weak convergence results under mild assumptions. In addition we obtain bounds on the rates of convergence of the probabilities at arbitrary fixed times. The technique is demonstrated using a state-dependent branching process as an example.
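As a point of reference for the example mentioned in the abstract, here is a minimal sketch of how a state-dependent branching process can be simulated directly as a Markov jump process: hold for an exponential time with rate equal to the total jump intensity, then pick a jump in proportion to its rate. This illustrates the object being approximated, not the authors' approximation scheme; the rate functions and parameters are hypothetical.

    import random

    def simulate_branching(n0, t_max, birth_rate, death_rate, rng=random.Random(0)):
        # Direct simulation of a birth-death Markov jump process with
        # state-dependent rates: exponential holding time at the total
        # jump intensity, then a birth or death chosen in proportion
        # to its rate.
        t, n = 0.0, n0
        path = [(t, n)]
        while t < t_max and n > 0:
            b, d = birth_rate(n), death_rate(n)
            if b + d == 0.0:
                break                                 # absorbing state
            t += rng.expovariate(b + d)               # exponential holding time
            n += 1 if rng.random() < b / (b + d) else -1
            path.append((t, n))
        return path

    # Hypothetical state-dependent rates: linear death, crowding-damped birth.
    path = simulate_branching(n0=5, t_max=10.0,
                              birth_rate=lambda n: n / (1.0 + 0.01 * n),
                              death_rate=lambda n: 0.5 * n)
    print(path[-1])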

Automated state-dependent importance sampling for Markov jump processes via sampling from the zero-variance distribution

2014 ◽ 
Vol 51 (3) ◽  
pp. 741-755
Author(s):  
Adam W. Grace ◽  
Dirk P. Kroese ◽  
Werner Sandmann

Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
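The sketch below illustrates the basic mechanics of importance sampling for a rare event in a Markov jump process, using a fixed exponential change of measure (swapping arrival and service rates) for an M/M/1 queue. It is a minimal illustration of the general idea, not the authors' adaptive, MCMC-based scheme; the rates and the overflow level L are hypothetical. Since the overflow event depends only on the embedded jump chain, the exponential holding times can be ignored.

    import random

    def mm1_overflow_is(lam, mu, L, n_runs, rng=random.Random(1)):
        # Importance-sampling estimate of P(queue reaches level L before
        # emptying, starting from 1 customer) for an M/M/1 queue with
        # arrival rate lam and service rate mu. Change of measure: swap
        # lam and mu, then correct each run with its likelihood ratio.
        p_tilt = mu / (lam + mu)            # up-move probability under the tilt
        ratio_up = lam / mu                 # likelihood ratio per up-move
        ratio_down = mu / lam               # likelihood ratio per down-move
        total = 0.0
        for _ in range(n_runs):
            n, w = 1, 1.0
            while 0 < n < L:
                if rng.random() < p_tilt:   # arrival under the tilted measure
                    n += 1
                    w *= ratio_up
                else:                       # departure under the tilted measure
                    n -= 1
                    w *= ratio_down
            if n == L:                      # rare event observed: add its weight
                total += w
        return total / n_runs

    # Hypothetical rates: a stable queue (lam < mu) where reaching L = 30 is rare.
    print(mm1_overflow_is(lam=1.0, mu=2.0, L=30, n_runs=10_000))

For this birth-death example the estimator has low variance because every successful path carries the same weight, (lam/mu)**(L-1); the paper's adaptive method targets the analogous zero-variance change of measure for general Markov jump processes.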

