Non-Markov Processes in Continuous Time with Discrete State Spaces

1974 ◽  
pp. 77-85
Author(s):  
Rodney Coleman

2012 ◽  
Vol 24 (1) ◽  
pp. 49-58 ◽  
Author(s):  
Jerzy Girtler

Abstract The paper justifies the need to define the reliability of diagnosing systems (SDG) used to develop a diagnosis of the state of any technical mechanism treated as a diagnosed system (SDN). It is shown that knowledge of SDG reliability makes it possible to define diagnosis reliability. Diagnosis reliability is taken to be a property of the diagnosis that specifies the degree to which the diagnosing system (SDG) recognizes the actual state of the diagnosed system (SDN), with the conditional probability p(S*/K*) serving as the diagnosis measure: the probability that the mechanism (SDN) is in state S*, given a specified SDG reliability, when the vector K* of values of diagnostic parameters implied by that state is observed. The measure of SDG reliability is the probability that the SDG remains in the state of ability during the diagnostic tests and the subsequent diagnostic inference leading to a diagnosis of the SDN state. The theory of semi-Markov processes is used to define SDG reliability, which allows a reliability model of the SDG to be developed in the form of a seven-state, continuous-time, discrete-state semi-Markov process of changes of SDG states.
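As a rough illustration of the diagnosis measure p(S*/K*), the sketch below applies Bayes' rule to hypothetical priors and observation likelihoods, then down-weights the posterior by an assumed probability that the SDG was in the state of ability. All numerical values, and the simple multiplicative treatment of SDG reliability, are illustrative assumptions, not the paper's seven-state semi-Markov model.

```python
import numpy as np

# Hypothetical illustration of the diagnosis measure p(S*/K*):
# given prior probabilities of the mechanism states S and the
# likelihood of observing the diagnostic-parameter vector K* in
# each state, Bayes' rule yields the conditional probability used
# as the diagnosis measure. The SDG reliability r_sdg (assumed
# value) scales the result; all numbers below are made up.

prior = np.array([0.70, 0.20, 0.10])        # p(S) for three mechanism states
likelihood = np.array([0.05, 0.60, 0.90])   # p(K*|S) for the observed vector K*
r_sdg = 0.95                                # prob. SDG was in the state of ability

posterior = prior * likelihood
posterior /= posterior.sum()                # p(S|K*) assuming a reliable SDG
diagnosis_measure = r_sdg * posterior       # down-weighted by SDG reliability

for i, p in enumerate(diagnosis_measure):
    print(f"p(S{i}*/K*) ~= {p:.3f}")
```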


2020 ◽  
Vol 26 ◽  
pp. 22
Author(s):  
Olivier Guéant ◽  
Iuliia Manziuk

The literature on continuous-time stochastic optimal control seldom deals with the case of discrete state spaces. In this paper, we provide a general framework for the optimal control of continuous-time Markov chains on finite graphs. In particular, we provide results on the long-term behavior of value functions and optimal controls, along with results on the associated ergodic Hamilton-Jacobi equation.
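A minimal sketch of the kind of problem treated here, under assumptions of my own: a discounted (rather than ergodic) control problem on a three-node graph, where each action selects a set of transition rates and the Hamilton-Jacobi fixed point rV(x) = max_a [f_a(x) + sum_y lambda_a(x,y)(V(y) - V(x))] is solved by value iteration. The graph, rates, and rewards are invented for illustration.

```python
import numpy as np

# Controlled continuous-time Markov chain on a 3-node directed cycle.
# Each action chooses off-diagonal transition rates (zero diagonal);
# "fast" moves more quickly but earns less reward per unit time.
n = 3
rates = {
    "slow": np.array([[0., 1., 0.],
                      [0., 0., 1.],
                      [1., 0., 0.]]),
    "fast": np.array([[0., 3., 0.],
                      [0., 0., 3.],
                      [3., 0., 0.]]),
}
reward = {"slow": np.array([1.0, 0.0, 2.0]),
          "fast": np.array([0.5, -0.5, 1.5])}
r = 0.1  # discount rate

# Fixed-point iteration: V(x) = max_a (f_a(x) + Q_a V) / (r + total exit rate),
# a contraction with modulus total/(r + total) < 1, so it converges.
V = np.zeros(n)
for _ in range(5000):
    candidates = []
    for a, Q in rates.items():
        total = Q.sum(axis=1)                  # total exit rate per state
        candidates.append((reward[a] + Q @ V) / (r + total))
    V_new = np.max(candidates, axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("value function:", V)
```

The ergodic case studied in the paper corresponds, loosely, to the limit of small discount rate r, where rV converges to the long-run average reward.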


2015 ◽  
Vol 12 (107) ◽  
pp. 20150225 ◽  
Author(s):  
C. M. Pooley ◽  
S. C. Bishop ◽  
G. Marion

Bayesian statistics provides a framework for the integration of dynamic models with incomplete data to enable inference of model parameters and unobserved aspects of the system under study. An important class of dynamic models is discrete state space, continuous-time Markov processes (DCTMPs). Simulated via the Doob–Gillespie algorithm, these have been used to model systems ranging from chemistry to ecology to epidemiology. A new type of proposal, termed a ‘model-based proposal’ (MBP), is developed for the efficient implementation of Bayesian inference in DCTMPs using Markov chain Monte Carlo (MCMC). This new method, which in principle can be applied to any DCTMP, is compared (using simple epidemiological SIS and SIR models as easy-to-follow exemplars) to a standard MCMC approach and a recently proposed particle MCMC (PMCMC) technique. When measurements are made on a single state variable (e.g. the number of infected individuals in a population during an epidemic), model-based proposal MCMC (MBP-MCMC) is marginally faster than PMCMC (by a factor of 2–8 for the tests performed), and significantly faster than the standard MCMC scheme (by a factor of 400 at least). However, when model complexity increases and measurements are made on more than one state variable (e.g. simultaneously on the number of infected individuals in spatially separated subpopulations), MBP-MCMC is significantly faster than PMCMC (more than 100-fold for just four subpopulations) and this difference becomes increasingly large.
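For readers unfamiliar with the Doob–Gillespie algorithm mentioned above, the following is a minimal simulation of an SIS epidemic of the sort used as an exemplar here: draw an exponential waiting time from the total event rate, then pick infection or recovery in proportion to their rates. Parameter values are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Doob-Gillespie simulation of a simple SIS epidemic (assumed,
# illustrative parameter values).
N, I = 100, 5            # population size, initial number infected
beta, gamma = 0.3, 0.1   # infection and recovery rates
t, t_end = 0.0, 100.0
times, infected = [t], [I]

while t < t_end and I > 0:
    rate_inf = beta * I * (N - I) / N   # S -> I event rate
    rate_rec = gamma * I                # I -> S event rate
    total = rate_inf + rate_rec
    t += rng.exponential(1.0 / total)   # waiting time to the next event
    I += 1 if rng.random() < rate_inf / total else -1
    times.append(t)
    infected.append(I)

print(f"final time {times[-1]:.1f}, infected {infected[-1]}")
```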


2014 ◽  
Vol 51 ◽  
pp. 725-778 ◽  
Author(s):  
C. R. Shelton ◽  
G. Ciardo

A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing "flat" or unstructured Markov processes and then move to structured Markov processes (those arising from state spaces consisting of assignments to variables) including Kronecker, decision-diagram, and continuous-time Bayesian network representations. We provide the first connection between decision diagrams and continuous-time Bayesian networks.
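A small sketch of exact filtering in a "flat" CTMP, under assumptions of my own: the state distribution (a row vector) is propagated between observation times by the matrix exponential of the rate matrix Q, then conditioned on a noisy observation and renormalized. The rate matrix, observation model, and observation sequence below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Flat CTMP filtering: p(t) = p(0) exp(Qt) between observations,
# then a Bayes update against the observation likelihood.
Q = np.array([[-2.0, 1.5, 0.5],
              [0.5, -1.0, 0.5],
              [1.0, 1.0, -2.0]])       # generator: rows sum to zero

B = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])        # B[s, o] = p(observe o | state s)

p = np.array([1.0, 0.0, 0.0])          # initial state distribution
obs_times = [0.5, 1.0, 2.0]            # hypothetical observation times
obs = [0, 1, 1]                        # hypothetical observed symbols

t_prev = 0.0
for t, o in zip(obs_times, obs):
    p = p @ expm(Q * (t - t_prev))     # propagate the distribution
    p = p * B[:, o]                    # condition on the observation
    p /= p.sum()                       # renormalize
    t_prev = t

print("filtered distribution:", p)
```

The structured representations surveyed in the paper (Kronecker, decision-diagram, continuous-time Bayesian network) exist precisely because this dense-matrix approach becomes infeasible when the state space is a product of many variables.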


2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Jonathan A. Ward ◽  
Martín López-García

Abstract We propose a unified framework to represent a wide range of continuous-time discrete-state Markov processes on networks, and show how many network dynamics models in the literature can be represented within it. We show how a particular subset of these models, referred to here as single-vertex-transition (SVT) processes, leads to the analysis of quasi-birth-and-death (QBD) processes in the theory of continuous-time Markov chains. We illustrate how to analyse a number of summary statistics for these processes, such as absorption probabilities and first-passage times. We extend the graph-automorphism lumping approach [Kiss, Miller, Simon, Mathematics of Epidemics on Networks, 2017; Simon, Taylor, Kiss, J. Math. Bio. 62(4), 2011] by providing a matrix-oriented representation of this technique, and show how it can be applied to a very wide range of dynamical processes on networks. This approach can be used not only to solve the master equation of the system, but also to analyse the summary statistics of interest. We also show the interplay between the graph-automorphism lumping approach and the QBD structures when dealing with SVT processes. Finally, we illustrate our theoretical results with examples from the areas of opinion dynamics and mathematical epidemiology.
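The first-passage statistics mentioned above reduce to linear algebra on the generator. As a generic illustration (a hypothetical four-state chain, not one of the paper's network models): with the generator restricted to the transient states, Q_TT, the vector m of mean absorption times solves Q_TT m = -1.

```python
import numpy as np

# Hypothetical 4-state continuous-time Markov chain with state 3
# absorbing. Mean first-passage (absorption) times from each
# transient state solve the linear system Q_TT m = -1.
Q = np.array([[-3.0,  2.0,  1.0, 0.0],
              [ 1.0, -2.0,  0.5, 0.5],
              [ 0.0,  1.0, -2.0, 1.0],
              [ 0.0,  0.0,  0.0, 0.0]])  # last row: absorbing state

Q_TT = Q[:3, :3]                         # restriction to transient states
mean_fpt = np.linalg.solve(Q_TT, -np.ones(3))

print("mean absorption times:", mean_fpt)
```

The lumping techniques developed in the paper matter because, for dynamics on networks, the full generator is exponentially large; exploiting graph automorphisms shrinks systems like this one to a tractable size.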

