Randomization of intensities in a Markov chain

1979 ◽  
Vol 11 (2) ◽  
pp. 397-421 ◽  
Author(s):  
M. Yadin ◽  
R. Syski

The matrix of intensities of a Markov process with discrete state space and continuous time parameter undergoes random changes in time in such a way that it stays constant between random instants. The resulting non-Markovian process is analyzed with the help of a supplementary process defined in terms of the variations of the intensity matrix. Several examples are presented.
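The setup in this abstract can be illustrated with a small simulation: a finite-state chain whose intensity matrix is re-drawn at random (here exponentially spaced) instants and held constant in between. This is only a minimal sketch of the forward process under assumed rates, not the authors' supplementary-process analysis; the number of states, the switching rate and the rate distributions are made up.

```python
# Sketch: a chain whose generator Q is re-drawn at random instants and held
# constant in between (the randomized-intensities setup described above).
import numpy as np

rng = np.random.default_rng(0)

def random_generator(n):
    """Draw a random intensity matrix: positive off-diagonal rates, rows summing to zero."""
    Q = rng.uniform(0.1, 1.0, size=(n, n))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def simulate(n_states=3, switch_rate=0.5, t_end=10.0):
    """Record (time, state) at each jump of the chain."""
    t, state = 0.0, 0
    Q = random_generator(n_states)
    next_switch = rng.exponential(1.0 / switch_rate)
    path = [(t, state)]
    while t < t_end:
        hold = rng.exponential(-1.0 / Q[state, state])   # holding time under the current Q
        if t + hold < next_switch:
            # The chain jumps before the intensity matrix changes.
            t += hold
            probs = np.maximum(Q[state], 0.0)            # jump probabilities proportional to rates
            probs /= probs.sum()
            state = rng.choice(n_states, p=probs)
            path.append((t, state))
        else:
            # A random instant: Q is re-drawn; by memorylessness the residual
            # holding time can simply be resampled under the new matrix.
            t = next_switch
            Q = random_generator(n_states)
            next_switch = t + rng.exponential(1.0 / switch_rate)
    return path

print(simulate()[:5])
```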


1974 ◽  
Vol 11 (4) ◽  
pp. 669-677 ◽  
Author(s):  
D. R. Grey

Results on the behaviour of Markov branching processes as time goes to infinity, hitherto obtained for models which assume a discrete state-space or discrete time or both, are here generalised to a model with both state-space and time continuous. The results are similar, but the methods are not always so.


Author(s):  
Atefeh Einafshar ◽  
Farrokh Sassani

A new approach to Vulnerability, Uncertainty and Probability (VUP) quantification for a network of interacting satellites, based on Stochastic Petri Nets (SPNs), is presented. An SPN-based model is developed to quantify VUP in the network, and three indicators are proposed to capture the VUP definitions for the interacting satellites. The proposed quantification scheme uses the SPN for quantitative analysis of the behavior of the network: through the random variables associated with the Petri net transitions, the dynamic behavior of the cooperating satellites in the SPN model can be mapped onto a continuous-time Markov chain with a discrete state space. Once this Markov SPN model is generated, the probability of a given network condition at a specified time can be computed, and the vulnerability and uncertainty of the system can be quantified using the identified indicators.
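As a rough illustration of the SPN-to-Markov-chain mapping described above, the sketch below builds a toy two-marking net with exponential firing rates (a hypothetical fail/repair pair, not the satellite model or the VUP indicators from the paper) and computes transient marking probabilities from the generator of the underlying continuous-time Markov chain.

```python
# Toy example: a stochastic Petri net with two reachable markings and two
# exponentially timed transitions, mapped to its underlying CTMC. Transient
# probabilities follow from p(t) = p(0) exp(Q t). Rates are invented.
import numpy as np
from scipy.linalg import expm

# Reachable markings: (token in "operational", token in "degraded")
markings = [(1, 0), (0, 1)]

fail_rate, repair_rate = 0.2, 1.0    # hypothetical transition rates

# Generator of the underlying continuous-time Markov chain over the markings
Q = np.array([[-fail_rate,  fail_rate],
              [repair_rate, -repair_rate]])

p0 = np.array([1.0, 0.0])            # start fully operational
t = 5.0
p_t = p0 @ expm(Q * t)               # transient distribution at time t

print(f"P(operational at t={t}) = {p_t[0]:.3f}")
print(f"P(degraded    at t={t}) = {p_t[1]:.3f}")
```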


2015 ◽  
Vol 12 (107) ◽  
pp. 20150225 ◽  
Author(s):  
C. M. Pooley ◽  
S. C. Bishop ◽  
G. Marion

Bayesian statistics provides a framework for the integration of dynamic models with incomplete data to enable inference of model parameters and unobserved aspects of the system under study. An important class of dynamic models is discrete state space, continuous-time Markov processes (DCTMPs). Simulated via the Doob–Gillespie algorithm, these have been used to model systems ranging from chemistry to ecology to epidemiology. A new type of proposal, termed ‘model-based proposal’ (MBP), is developed for the efficient implementation of Bayesian inference in DCTMPs using Markov chain Monte Carlo (MCMC). This new method, which in principle can be applied to any DCTMP, is compared (using simple epidemiological SIS and SIR models as easy-to-follow exemplars) to a standard MCMC approach and a recently proposed particle MCMC (PMCMC) technique. When measurements are made on a single state variable (e.g. the number of infected individuals in a population during an epidemic), model-based proposal MCMC (MBP-MCMC) is marginally faster than PMCMC (by a factor of 2–8 for the tests performed) and significantly faster than the standard MCMC scheme (by a factor of at least 400). However, when model complexity increases and measurements are made on more than one state variable (e.g. simultaneously on the number of infected individuals in spatially separated subpopulations), MBP-MCMC is significantly faster than PMCMC (more than 100-fold for just four subpopulations), and this difference becomes increasingly large.
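For context, the Doob–Gillespie algorithm referenced above can be sketched for the SIS exemplar: infections occur at rate βSI/N and recoveries at rate γI, and the time and type of the next event are drawn from the corresponding exponentials. This simulates the forward model only; it is not the MBP-MCMC or PMCMC inference schemes compared in the paper, and the parameter values are arbitrary.

```python
# Doob-Gillespie simulation of the SIS model (forward model only).
import numpy as np

def gillespie_sis(beta, gamma, n_pop, i0, t_end, rng):
    t, i = 0.0, i0
    times, infected = [t], [i]
    while t < t_end and i > 0:
        s = n_pop - i
        rate_inf = beta * s * i / n_pop   # infection: S -> I
        rate_rec = gamma * i              # recovery:  I -> S
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)        # time to the next event
        if rng.random() < rate_inf / total:      # choose which event fires
            i += 1
        else:
            i -= 1
        times.append(t)
        infected.append(i)
    return np.array(times), np.array(infected)

rng = np.random.default_rng(1)
times, infected = gillespie_sis(beta=0.3, gamma=0.1, n_pop=100, i0=5, t_end=100.0, rng=rng)
print(f"{len(times) - 1} events simulated; final number infected: {infected[-1]}")
```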


1972 ◽  
Vol 4 (2) ◽  
pp. 258-270 ◽  
Author(s):  
E. Arjas

A fundamental identity, due to Miller (1961a), (1962a, b) and Kemperman (1961), is generalized to semi-Markov processes. Thus the identity applies to processes defined on a Markov chain with discrete state space and to random walks with Markov-dependent steps (Section 2). Wald's identity is discussed briefly in Section 3. Section 4 is a study of the maxima of partial sums, and Section 5 of maxima in a semi-Markov process.
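As a concrete (and entirely illustrative) picture of the objects in Sections 2 and 4, the sketch below simulates a random walk whose step distribution is modulated by a two-state Markov chain and tracks the running maximum of its partial sums. It does not reproduce the fundamental identity itself, and all parameter values are invented.

```python
# A random walk with Markov-dependent steps: the step mean depends on the
# current state of a two-state modulating Markov chain; the running maximum
# of the partial sums is recorded.
import numpy as np

rng = np.random.default_rng(2)

P = np.array([[0.9, 0.1],            # transition matrix of the modulating chain
              [0.2, 0.8]])
step_mean = np.array([-0.5, 1.0])    # mean step size in each environment state
n_steps = 1000

state = 0
partial_sum, running_max = 0.0, 0.0
for _ in range(n_steps):
    state = rng.choice(2, p=P[state])                  # Markov-dependent environment
    partial_sum += rng.normal(step_mean[state], 1.0)   # step drawn given the current state
    running_max = max(running_max, partial_sum)

print(f"S_n = {partial_sum:.2f}, max_k S_k = {running_max:.2f}")
```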

