The relationship between intensity and stochastic matrices for continuous-time discrete value stochastic non-homogeneous processes with Markov property

2017 ◽  
Vol 13 (3) ◽  
pp. 7244-7256
Author(s):  
Miłosława Sokol

The matrices of non-homogeneous Markov processes consist of time-dependent functions whose values at any fixed time form typical intensity matrices. For solving some problems they must be changed into stochastic matrices. A stochastic matrix for a non-homogeneous Markov process consists of time-dependent functions whose values are probabilities, and it depends on the assumed time period. In this paper formulas for these functions are derived. Although the formula is not simple, it allows some theorems well known for homogeneous Markov stochastic processes to be proved for non-homogeneous ones, and the proofs turn out to be shorter.
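The conversion the abstract describes can be sketched numerically: for a non-homogeneous chain, the transition matrix P(s, t) can be approximated by a product of small Euler steps (I + Q(u)·du) over the interval. The two-state intensity functions below are invented for illustration; this is not the paper's derived closed-form formula.

```python
# Sketch: turning a time-dependent intensity matrix Q(t) into a stochastic
# matrix P(s, t) for a two-state non-homogeneous chain, via an Euler
# product approximation. The intensity functions are made-up examples.

def Q(t):
    # Time-dependent intensity matrix; each row sums to zero.
    q01 = 1.0 + 0.5 * t   # hypothetical 0 -> 1 intensity, growing in time
    q10 = 2.0             # hypothetical constant 1 -> 0 intensity
    return [[-q01, q01], [q10, -q10]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transition_matrix(s, t, steps=10000):
    # P(s, t) ~ product over small subintervals of (I + Q(u) * du)
    du = (t - s) / steps
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(steps):
        Qu = Q(s + k * du)
        step = [[(1.0 if i == j else 0.0) + Qu[i][j] * du for j in range(2)]
                for i in range(2)]
        P = matmul(P, step)
    return P

P = transition_matrix(0.0, 1.0)
# Each row of P(s, t) is a probability distribution.
assert all(abs(sum(row) - 1.0) < 1e-6 for row in P)
assert all(p >= 0 for row in P for p in row)
```

Each Euler factor is itself row-stochastic (for small enough du), so the product is too, which is what the assertions check.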

1983 ◽  
Vol 20 (01) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming the proportionality of the intensity functions at each time point for a continuous-time non-homogeneous Markov process, strong ergodicity for the process is determined through strong ergodicity of a related discrete-time Markov process. For processes having proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
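The invariance L · P(s, t) = L can be checked numerically in the simplest special case, a homogeneous chain (constant intensities are trivially proportional), where each row of L is the stationary distribution π. The two-state rates below are made up for illustration.

```python
# Numerical illustration of L . P(s, t) = L for a homogeneous two-state
# chain: pi Q = 0 implies pi P(t) = pi for all t. Rates are invented.

q01, q10 = 1.5, 0.5
Q = [[-q01, q01], [q10, -q10]]

def P(t, steps=20000):
    # Euler product approximation of the transition matrix exp(Qt).
    du = t / steps
    S = [[(1.0 if i == j else 0.0) + Q[i][j] * du for j in range(2)]
         for i in range(2)]
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(steps):
        M = [[sum(M[i][k] * S[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return M

# Stationary row vector pi solves pi Q = 0; each row of the limiting
# matrix L equals pi.
pi = [q10 / (q01 + q10), q01 / (q01 + q10)]
Pt = P(5.0)
piP = [sum(pi[k] * Pt[k][j] for k in range(2)) for j in range(2)]
assert all(abs(piP[j] - pi[j]) < 1e-4 for j in range(2))
```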


1985 ◽  
Vol 22 (04) ◽  
pp. 804-815 ◽  
Author(s):  
J. Gani ◽  
Pyke Tin

This paper considers a certain class of continuous-time Markov processes, whose time-dependent and stationary distributions are studied. In the stationary case, the analogy with Whittle's relaxed Markov process is pointed out. The derivation of the probability generating functions of the general process provides useful results for the analysis of some population and queueing processes.


1994 ◽  
Vol 31 (3) ◽  
pp. 626-634 ◽  
Author(s):  
James Ledoux ◽  
Gerardo Rubino ◽  
Bruno Sericola

We characterize the conditions under which an absorbing Markovian finite process (in discrete or continuous time) can be transformed into a new aggregated process conserving the Markovian property, whose states are elements of a given partition of the original state space. To obtain this characterization, a key tool is the quasi-stationary distribution associated with absorbing processes. It allows the absorbing case to be related to the irreducible one. We are able to calculate the set of all initial distributions of the starting process leading to an aggregated homogeneous Markov process by means of a finite algorithm. Finally, it is shown that the continuous-time case can always be reduced to the discrete one using the uniformization technique.
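The uniformization step mentioned in the final sentence can be sketched in a few lines: a continuous-time generator Q is converted to a discrete-time stochastic matrix P = I + Q/λ, for any λ at least as large as the biggest exit rate. The three-state absorbing generator below is an invented example, not one from the paper.

```python
# Sketch of uniformization: generator Q of a CTMC (state 2 absorbing)
# becomes the stochastic matrix P = I + Q / lam of a related DTMC.
# The generator entries are made up for illustration.

Q = [[-3.0, 2.0, 1.0],
     [1.0, -2.0, 1.0],
     [0.0, 0.0, 0.0]]   # absorbing state: no outgoing intensity

lam = max(-Q[i][i] for i in range(3))   # uniformization rate
P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(3)]
     for i in range(3)]

assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
assert all(p >= 0 for row in P for p in row)
assert P[2] == [0.0, 0.0, 1.0]   # absorbing state stays absorbing
```

The discrete chain P, run for a Poisson(λt) number of steps, reproduces the continuous-time law at time t, which is why the continuous-time case reduces to the discrete one.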


2014 ◽  
Vol 51 ◽  
pp. 725-778 ◽  
Author(s):  
C. R. Shelton ◽  
G. Ciardo

A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing "flat" or unstructured Markov processes and then move to structured Markov processes (those arising from state spaces consisting of assignments to variables) including Kronecker, decision-diagram, and continuous-time Bayesian network representations. We provide the first connection between decision-diagrams and continuous-time Bayesian networks.
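A minimal simulator for a "flat" (unstructured) continuous-time Markov process, in the spirit of the survey's starting point: draw an exponential holding time with rate −q_ii, then jump according to the embedded chain. The two-state generator is a toy example, not taken from the survey.

```python
# Minimal flat CTMP simulator: exponential holding times, embedded-chain
# jumps. The two-state generator Q is an invented example.
import random

Q = [[-1.0, 1.0], [2.0, -2.0]]

def simulate(state, t_end, rng):
    t = 0.0
    while True:
        rate = -Q[state][state]          # total exit rate of current state
        t += rng.expovariate(rate)       # exponential holding time
        if t >= t_end:
            return state
        # Embedded jump probabilities: q_ij / (-q_ii) for j != i
        probs = [Q[state][j] / rate if j != state else 0.0 for j in range(2)]
        state = 0 if rng.random() < probs[0] else 1

rng = random.Random(0)
samples = [simulate(0, 10.0, rng) for _ in range(5000)]
# The stationary distribution here is pi = (2/3, 1/3).
frac0 = samples.count(0) / len(samples)
assert abs(frac0 - 2/3) < 0.05
```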


Author(s):  
Uwe Franz

We show how classical Markov processes can be obtained from quantum Lévy processes. It is shown that quantum Lévy processes are quantum Markov processes, and sufficient conditions for restrictions to subalgebras to remain quantum Markov processes are given. A classical Markov process (which has the same time-ordered moments as the quantum process in the vacuum state) exists whenever we can restrict to a commutative subalgebra without losing the quantum Markov property. Several examples, including the Azéma martingale, are presented with explicit calculations. In particular, the action of the generator of the classical Markov processes on polynomials or their moments is calculated using Hopf algebra duality.


1999 ◽  
Vol 36 (01) ◽  
pp. 48-59 ◽  
Author(s):  
George V. Moustakides

Let ξ_0, ξ_1, ξ_2, … be a homogeneous Markov process and let S_n denote the partial sum S_n = θ(ξ_1) + … + θ(ξ_n), where θ(ξ) is a scalar nonlinearity. If N is a stopping time with 𝔼N < ∞ and the Markov process satisfies certain ergodicity properties, we then show that 𝔼S_N = [lim_{n→∞} 𝔼θ(ξ_n)]𝔼N + 𝔼ω(ξ_0) − 𝔼ω(ξ_N). The function ω(ξ) is a well-defined scalar nonlinearity directly related to θ(ξ) through a Poisson integral equation, with the characteristic that ω(ξ) becomes zero in the i.i.d. case. Consequently our result constitutes an extension of Wald's first lemma to the case of Markov processes. We also show that, when 𝔼N → ∞, the correction term is negligible compared to 𝔼N in the sense that 𝔼ω(ξ_0) − 𝔼ω(ξ_N) = o(𝔼N).
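In the i.i.d. special case the correction term ω vanishes and the identity reduces to Wald's first lemma, 𝔼S_N = 𝔼θ(ξ)·𝔼N. The Monte Carlo check below uses that special case with an invented Bernoulli setup, where stopping at S_n = 3 forces S_N = 3 exactly.

```python
# Monte Carlo check of the i.i.d. special case (omega = 0): Wald's first
# lemma gives E N = E S_N / E theta(xi) = target / p. Toy Bernoulli setup.
import random

rng = random.Random(1)
p, target = 0.4, 3
trials = 20000
total_N = 0
for _ in range(trials):
    s = n = 0
    while s < target:
        s += 1 if rng.random() < p else 0   # theta(xi) = xi ~ Bernoulli(p)
        n += 1
    total_N += n

EN = total_N / trials
# S_N = target exactly at stopping, so Wald predicts E N = target / p = 7.5.
assert abs(EN - target / p) < 0.2
```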


1973 ◽  
Vol 5 (01) ◽  
pp. 66-102 ◽  
Author(s):  
J. F. C. Kingman

If x_0 is a particular state for a continuous-time Markov process X, the random time set {t : X(t) = x_0} is often of both practical and theoretical interest. Ignoring trivial or pathological cases, there are four different types of structure which this random set can display. To some extent, it is possible to treat all four cases in a unified way, but they raise different questions and require different modes of description. The distributions of various random quantities associated with this set can be related to one another by simple and useful formulae.

