Analysis of Markov Jump Processes under Terminal Constraints

Author(s):  
Michael Backenköhler ◽  
Luca Bortolussi ◽  
Gerrit Großmann ◽  
Verena Wolf

Abstract: Many probabilistic inference problems such as stochastic filtering or the computation of rare event probabilities require model analysis under initial and terminal constraints. We propose a solution to this bridging problem for the widely used class of population-structured Markov jump processes. The method is based on a state-space lumping scheme that aggregates states in a grid structure. The resulting approximate bridging distribution is used to iteratively refine relevant and truncate irrelevant parts of the state-space. This way, the algorithm learns a well-justified finite-state projection yielding guaranteed lower bounds for the system behavior under endpoint constraints. We demonstrate the method’s applicability to a wide range of problems such as Bayesian inference and the analysis of rare events.
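The bridging computation at the core of this approach can be illustrated on a fixed truncation. The sketch below is not the authors' lumping-and-refinement algorithm; it only shows, for a small hypothetical birth-death process with assumed rates, how a bridge distribution conditioned on both endpoints combines a forward solution of the truncated master equation with a backward hitting function, here applying the matrix exponential via uniformization (the function name `bridge_distribution` is illustrative):

```python
import math

def bridge_distribution(Q, x0, xT, T, s, terms=200):
    """Bridge distribution at time s of a CTMC with generator Q (list of
    row lists) on a truncated state space, conditioned on X(0)=x0 and
    X(T)=xT. The matrix exponential is applied via uniformization."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) + 1e-9   # uniformization rate
    # P = I + Q/lam is a (sub)stochastic matrix with nonnegative entries.
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam
          for j in range(n)] for i in range(n)]

    def expQ_vec(t, v, transpose):
        # e^{Qt} v (or e^{Q^T t} v) as the Poisson-weighted series
        # sum_k e^{-lam t} (lam t)^k / k! * P^k v.
        out = [0.0] * n
        term = v[:]
        w = math.exp(-lam * t)
        for k in range(terms):
            for i in range(n):
                out[i] += w * term[i]
            nxt = [0.0] * n
            for i in range(n):
                for j in range(n):
                    if transpose:
                        nxt[j] += P[i][j] * term[i]   # P^T term
                    else:
                        nxt[i] += P[i][j] * term[j]   # P term
            term = nxt
            w *= lam * t / (k + 1)
        return out

    # Forward: probability of being in x at time s, started from x0.
    fwd = expQ_vec(s, [1.0 if i == x0 else 0.0 for i in range(n)], True)
    # Backward: probability of hitting xT at time T, started from x.
    bwd = expQ_vec(T - s, [1.0 if i == xT else 0.0 for i in range(n)], False)
    unnorm = [f * b for f, b in zip(fwd, bwd)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

The iterative scheme described in the abstract would then inspect such a bridge distribution to decide which states carry probability mass under both constraints and which can be truncated.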

2014 ◽  
Vol 51 (3) ◽  
pp. 741-755
Author(s):  
Adam W. Grace ◽  
Dirk P. Kroese ◽  
Werner Sandmann

Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
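For intuition about why importance sampling helps here, the following minimal sketch estimates a Poisson tail probability with an exponentially tilted proposal. It is a textbook, state-independent scheme, not the adaptive, state-dependent sampler with MCMC-learned parameters that the abstract describes; the function name and the choice of tilting the proposal mean to the threshold are illustrative assumptions:

```python
import math
import random

def is_estimate_poisson_tail(lam, k, tilt, n=50_000, seed=1):
    """Estimate the rare-event probability P(N >= k) for N ~ Poisson(lam)
    by sampling from a tilted proposal Poisson(tilt) and reweighting
    each hit with the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Draw x ~ Poisson(tilt) by CDF inversion.
        u = rng.random()
        x, p = 0, math.exp(-tilt)
        cdf = p
        while u > cdf:
            x += 1
            p *= tilt / x
            cdf += p
        if x >= k:
            # Likelihood ratio Poisson(lam; x) / Poisson(tilt; x).
            total += math.exp(tilt - lam) * (lam / tilt) ** x
    return total / n
```

With `lam=2` and `k=10` the target probability is about 5e-5, so crude Monte Carlo with the same sample size would observe only a handful of hits, whereas the tilted proposal (e.g. `tilt=10`) makes the event common and recovers the small probability through the weights.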


2000 ◽  
Vol 32 (3) ◽  
pp. 779-799 ◽  
Author(s):  
Ole E. Barndorff-Nielsen ◽  
Fred Espen Benth ◽  
Jens Ledet Jensen

Certain types of Markov jump processes x(t) with continuous state space and one or more absorbing states are studied. Cases where the transition rate in state x is of the form λ(x) = |x|^δ in a neighbourhood of the origin in ℝ^d are considered, in particular. This type of problem arises from quantum physics in the study of laser cooling of atoms, and the present paper connects to recent work in the physics literature. The main question addressed is that of the asymptotic behaviour of x(t) near the origin for large t. The study involves solution of a renewal equation problem in continuous state space.


1987 ◽  
Vol 12 (3) ◽  
pp. 562-568 ◽  
Author(s):  
Moshe Shaked ◽  
J. George Shanthikumar



1994 ◽  
Vol 46 (6) ◽  
pp. 1238-1262 ◽  
Author(s):  
I. Iscoe ◽  
D. Mcdonald ◽  
K. Qian

Abstract: We approximate the exit distribution of a Markov jump process into a set of forbidden states, and we apply these general results to an ATM multiplexor. In this case the forbidden states represent an overloaded multiplexor. Statistics for this overload or busy period are difficult to obtain since this is such a rare event. Starting from the approximate exit distribution, one may simulate the busy period without wasting simulation time waiting for the overload to occur.


Author(s):  
Mark A. Peletier ◽  
Riccarda Rossi ◽  
Giuseppe Savaré ◽  
Oliver Tse

Abstract: We develop a functional framework for a class of non-metric gradient systems. The state space is a space of nonnegative measures, and the class of systems includes the forward Kolmogorov equations for the laws of Markov jump processes on Polish spaces. The framework comprises a notion of solution, a method to prove existence, and an archetype uniqueness result. It uses only the structure provided directly by the dissipation functional, which need not be homogeneous, and does not appeal to any metric structure.

