Automated State-Dependent Importance Sampling for Markov Jump Processes via Sampling from the Zero-Variance Distribution

2014 ◽  
Vol 51 (3) ◽  
pp. 741-755
Author(s):  
Adam W. Grace ◽  
Dirk P. Kroese ◽  
Werner Sandmann

Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
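The zero-variance change of measure above is model specific, but the basic mechanics of importance sampling for rare events in a Markov jump process can be illustrated on a birth–death chain. The sketch below is not the authors' adaptive MCMC method: it estimates the rare probability that a birth–death process started in state 1 reaches a high level before hitting 0, using the classical exponential tilt that swaps the birth and death rates; all names and parameter values are illustrative.

```python
import random

def is_hit_prob(lam=1.0, mu=2.0, level=10, n_paths=4000, seed=1):
    """Estimate P(reach `level` before 0, starting from 1) for a birth-death
    Markov jump process with birth rate lam and death rate mu (mu > lam, so
    the event is rare).  Importance sampling: simulate the embedded jump
    chain with lam and mu swapped (the classical exponential change of
    measure) and reweight each path by its likelihood ratio.  Only the
    embedded chain matters here; jump times do not affect the hitting
    probability."""
    rng = random.Random(seed)
    p  = lam / (lam + mu)        # original up-step probability
    pt = mu  / (lam + mu)        # tilted (swapped) up-step probability
    total = 0.0
    for _ in range(n_paths):
        state, lr = 1, 1.0
        while 0 < state < level:
            if rng.random() < pt:              # tilted up step
                lr *= p / pt
                state += 1
            else:                              # tilted down step
                lr *= (1.0 - p) / (1.0 - pt)
                state -= 1
        if state == level:                     # rare event observed
            total += lr
    return total / n_paths
```

For lam = 1, mu = 2 and level = 10, the gambler's-ruin formula gives the exact value (1 − r)/(1 − r^level) with r = mu/lam, i.e. 1/1023 ≈ 9.8 × 10⁻⁴; a few thousand tilted paths reproduce this to within a few percent, whereas crude Monte Carlo would need millions of paths for comparable accuracy.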


2019 ◽  
Author(s):  
Colin S. Gillespie ◽  
Andrew Golightly

Rare event probabilities play an important role in the understanding of the behaviour of biochemical systems. Due to the intractability of the most natural Markov jump process representation of a system of interest, rare event probabilities are typically estimated using importance sampling. While the resulting algorithm is reasonably well developed, the problem of choosing a suitable importance density is far from straightforward. We therefore leverage recent developments in the simulation of conditioned jump processes to propose an importance density that is simple to implement and requires no tuning. Our results demonstrate superior performance over some existing approaches.


1994 ◽  
Vol 46 (6) ◽  
pp. 1238-1262
Author(s):  
I. Iscoe ◽  
D. McDonald ◽  
K. Qian

We approximate the exit distribution of a Markov jump process into a set of forbidden states and we apply these general results to an ATM multiplexor. In this case the forbidden states represent an overloaded multiplexor. Statistics for this overload or busy period are difficult to obtain since this is such a rare event. Starting from the approximate exit distribution, one may simulate the busy period without wasting simulation time waiting for the overload to occur.
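The simulation shortcut described in this abstract — start each run from the (approximate) exit distribution instead of waiting for the rare overload to occur — can be sketched as follows for a simple M/M/1-style buffer. The exit distribution, threshold, and rates here are hypothetical placeholders, not values from the paper.

```python
import random

def mean_busy_period(exit_dist, lam=0.8, mu=1.0, threshold=5,
                     n_paths=1000, seed=7):
    """Estimate the mean overload (busy) period of an M/M/1-style buffer
    with arrival rate lam and service rate mu.  Rather than simulating
    until the rare overload happens, each replication samples the buffer
    state at the moment of overload directly from `exit_dist` (a dict
    mapping entry states >= threshold to probabilities) and then runs
    forward until the load drops back below the threshold."""
    rng = random.Random(seed)
    states, probs = zip(*exit_dist.items())
    total = 0.0
    for _ in range(n_paths):
        x = rng.choices(states, weights=probs)[0]  # sampled entry state
        t = 0.0
        while x >= threshold:                      # still overloaded
            t += rng.expovariate(lam + mu)         # time to next event
            if rng.random() < lam / (lam + mu):
                x += 1                             # arrival
            else:
                x -= 1                             # service completion
        total += t
    return total / n_paths
```

Because lam < mu, the busy period ends with probability 1, so the estimator is well defined; the accuracy of the overall scheme depends entirely on how well `exit_dist` approximates the true exit distribution.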


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 391
Author(s):  
Oluseyi Odubote ◽  
Daniel F. Linder

Reaction networks are important tools for modeling a variety of biological phenomena across a wide range of scales, for example as models of gene regulation within a cell or infectious disease outbreaks in a population. Hence, calibrating these models to observed data is useful for predicting future system behavior. However, the statistical estimation of the parameters of reaction networks is often challenging due to intractable likelihoods. Here we explore estimating equations to estimate the reaction rate parameters of density dependent Markov jump processes (DDMJP). The variance–covariance weights we propose to use in the estimating equations are obtained from an approximating process, derived from the Fokker–Planck approximation of the chemical master equation for stochastic reaction networks. We investigate the performance of the proposed methodology in a simulation study of the Lotka–Volterra predator–prey model and by fitting a susceptible, infectious, removed (SIR) model to real data from the historical plague outbreak in Eyam, England.
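As a concrete instance of the density-dependent Markov jump processes studied above, the stochastic Lotka–Volterra predator–prey model can be simulated exactly with Gillespie's stochastic simulation algorithm; trajectories of this kind are what an estimation procedure would be calibrated against. The rate constants and initial counts below are illustrative, not those used in the paper.

```python
import random

def gillespie_lv(x=100, y=100, c1=1.0, c2=0.01, c3=1.0,
                 t_max=5.0, seed=42):
    """Gillespie SSA for the stochastic Lotka-Volterra reaction network:
    prey birth X -> 2X (rate c1*x), predation X + Y -> 2Y (rate c2*x*y),
    predator death Y -> 0 (rate c3*y).  Returns the piecewise-constant
    trajectory as a list of (time, prey, predators) triples."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, x, y)]
    while t < t_max:
        rates = (c1 * x, c2 * x * y, c3 * y)   # reaction propensities
        total = sum(rates)
        if total == 0.0:                        # both species extinct
            break
        t += rng.expovariate(total)             # exponential waiting time
        u = rng.random() * total                # pick a reaction
        if u < rates[0]:
            x += 1                              # prey birth
        elif u < rates[0] + rates[1]:
            x -= 1; y += 1                      # predation
        else:
            y -= 1                              # predator death
        path.append((t, x, y))
    return path
```

Each propensity vanishes when its reactant count reaches zero, so populations can never go negative; the likelihood of such a trajectory is what makes direct maximum-likelihood estimation intractable and motivates the estimating-equation approach of the paper.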


1988 ◽  
Vol 20 (1) ◽  
pp. 33-58
Author(s):  
Keith N. Crank ◽  
Prem S. Puri

We present a method of approximating Markov jump processes which was used by Fuhrmann [7] in a special case. We generalize the method and prove weak convergence results under mild assumptions. In addition we obtain bounds on the rates of convergence of the probabilities at arbitrary fixed times. The technique is demonstrated using a state-dependent branching process as an example.


Author(s):  
Michael Backenköhler ◽  
Luca Bortolussi ◽  
Gerrit Großmann ◽  
Verena Wolf

Many probabilistic inference problems such as stochastic filtering or the computation of rare event probabilities require model analysis under initial and terminal constraints. We propose a solution to this bridging problem for the widely used class of population-structured Markov jump processes. The method is based on a state-space lumping scheme that aggregates states in a grid structure. The resulting approximate bridging distribution is used to iteratively refine relevant and truncate irrelevant parts of the state-space. This way, the algorithm learns a well-justified finite-state projection yielding guaranteed lower bounds for the system behavior under endpoint constraints. We demonstrate the method’s applicability to a wide range of problems such as Bayesian inference and the analysis of rare events.
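The guaranteed-lower-bound property of a finite-state projection is easy to see in a minimal setting: truncate the state space, drop any probability mass that would leave the truncation, and every computed probability under-estimates the true one. The sketch below applies plain FSP (not the paper's grid-lumping and refinement scheme) to an immigration–death process, solved by uniformization; all parameters are illustrative.

```python
import math

def fsp_transient(lam=2.0, mu=1.0, K=30, t=1.0, n_terms=150):
    """Finite-state projection of an immigration-death process (birth rate
    lam, per-capita death rate mu*n), truncated to {0,...,K} and started
    in state 0.  Transient probabilities are computed by uniformization;
    mass that would jump out of the projection is simply dropped, so each
    returned entry is a guaranteed lower bound on the true P(X(t)=n)."""
    Lam = lam + mu * K                       # uniformization rate bound

    def step(p):
        """One step of the uniformized jump chain restricted to {0..K}."""
        q = [0.0] * (K + 1)
        for n, pn in enumerate(p):
            if pn == 0.0:
                continue
            up, down = lam, mu * n
            if n < K:
                q[n + 1] += pn * up / Lam    # birth inside the projection
            # (a birth out of state K leaves the projection: dropped)
            if n > 0:
                q[n - 1] += pn * down / Lam  # death
            q[n] += pn * (1.0 - (up + down) / Lam)  # self-loop
        return q

    p = [0.0] * (K + 1)
    p[0] = 1.0                               # start in state 0
    dist = [0.0] * (K + 1)
    w = math.exp(-Lam * t)                   # Poisson(Lam*t) weight, k = 0
    for k in range(n_terms):                 # Poisson mixture over jump counts
        for n in range(K + 1):
            dist[n] += w * p[n]
        p = step(p)
        w *= Lam * t / (k + 1)
    return dist
```

For this model the exact transient law is Poisson with mean (lam/mu)(1 − e^(−mu·t)), so the bound can be checked directly; with K = 30 the dropped mass is negligible and the projection matches the exact probabilities to near machine precision.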

