Controllable Markov Jump Processes. I. Optimum Filtering Based on Complex Observations

2018 · Vol. 57 (6) · pp. 890-906
Author(s): A. V. Borisov, G. B. Miller, A. I. Stefanovich

2013 · Vol. 150 (1) · pp. 181-203
Author(s): Paolo Muratore-Ginanneschi, Carlos Mejía-Monasterio, Luca Peliti

Mathematics · 2020 · Vol. 8 (4) · p. 506
Author(s): Andrey Borisov, Igor Sokolov

The paper is devoted to the optimal state filtering of finite-state Markov jump processes, given indirect continuous-time observations corrupted by Wiener noise. The crucial feature is that the observation noise intensity is a function of the estimated state, which invalidates straightforward filtering approaches based on the passage to the innovation process and Girsanov's measure change. We propose an equivalent observation transform that allows the use of the classical nonlinear filtering framework. We obtain the optimal estimate as a solution to a discrete-continuous stochastic differential system with both continuous and counting processes on the right-hand side. For efficient computer implementation, we present a new class of numerical algorithms based on the exact solution to the optimal filtering problem given time-discretized observations. The proposed estimate approximations are stable, i.e., they have non-negative components and satisfy the normalization condition. We prove assertions characterizing the approximation accuracy as a function of the observation system parameters, the time discretization step, the maximal number of allowed state transitions, and the applied numerical integration scheme.
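
To make the time-discretized scheme concrete, here is a minimal sketch (not the authors' exact algorithm) of a filter for a two-state Markov jump process observed through increments dY = f(X) dt + g(X) dW, where the noise intensity g depends on the state. The generator LAM, drift f, intensity g, and step h are illustrative assumptions; the update mirrors the stability property above, since the posterior stays non-negative and normalized by construction.

```python
# Minimal sketch, assuming a two-state Markov jump process X with
# generator LAM, observed via dY = f(X) dt + g(X) dW (g state-dependent).
# All parameter values are illustrative, not taken from the paper.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

LAM = np.array([[-1.0,  1.0],    # transition rate matrix (generator)
                [ 0.5, -0.5]])
f = np.array([0.0, 2.0])         # observation drift per state
g = np.array([0.3, 1.0])         # state-dependent noise intensity
h = 0.01                         # time discretization step
P = expm(LAM * h)                # one-step transition probabilities

x = 0                            # true state
pi = np.array([1.0, 0.0])        # filter posterior, initialized at x
for k in range(int(5.0 / h)):
    x = rng.choice(2, p=P[x])                          # jump dynamics
    dY = f[x] * h + g[x] * np.sqrt(h) * rng.normal()   # noisy increment
    pi = pi @ P                  # prediction through the jump dynamics
    # Correction: Gaussian likelihood of dY under each state; the 1/g
    # factor matters precisely because the noise intensity varies by state.
    lik = np.exp(-(dY - f * h) ** 2 / (2 * g ** 2 * h)) / g
    pi = pi * lik
    pi /= pi.sum()               # components stay non-negative, sum to one
print("true state:", x, "posterior:", np.round(pi, 3))
```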


2014 · Vol. 51 (3) · pp. 741-755
Author(s): Adam W. Grace, Dirk P. Kroese, Werner Sandmann

Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
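
As a concrete illustration (using a classical static change of measure rather than the paper's adaptive, state-dependent MCMC scheme), the sketch below estimates a rare-event probability for an M/M/1-type jump process: the chance that the queue, started at level 1, reaches level K before emptying. Swapping the arrival and service rates is the textbook importance sampling tilt for this example; the rates lam, mu and the level K are illustrative assumptions.

```python
# Minimal sketch: importance sampling for a rare event in a Markov jump
# process (M/M/1 queue level crossing). Static rate-swap tilt, not the
# adaptive MCMC-based method of the paper; lam, mu, K are illustrative.
import numpy as np

rng = np.random.default_rng(1)
lam, mu, K = 0.5, 1.0, 20        # arrival rate, service rate, rare level

def estimate(n_paths, tilt):
    a, s = (mu, lam) if tilt else (lam, mu)   # tilt swaps the rates
    w = np.zeros(n_paths)
    for i in range(n_paths):
        x, lr = 1, 1.0
        while 0 < x < K:
            p_up = a / (a + s)                # embedded-chain step prob.
            up = rng.random() < p_up
            if tilt:                          # accumulate likelihood ratio
                q_up = lam / (lam + mu)       # original step probability
                lr *= (q_up / p_up) if up else ((1 - q_up) / (1 - p_up))
            x += 1 if up else -1
        w[i] = lr if x == K else 0.0          # weight only paths hitting K
    return w.mean(), w.std(ddof=1) / np.sqrt(n_paths)

print("crude MC :", estimate(10_000, tilt=False))  # almost surely 0.0 here
print("IS (tilt):", estimate(10_000, tilt=True))   # true value ~9.5e-7
                                                   # (gambler's ruin)
```

Under the original dynamics almost no path of 10,000 reaches K = 20, so the crude estimator degenerates; under the swapped rates most paths do, and the likelihood-ratio weights recover an unbiased, low-variance estimate from the same small sample.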

