Homecomings of Markov processes

1973 ◽  
Vol 5 (01) ◽  
pp. 66-102 ◽  
Author(s):  
J. F. C. Kingman

If x0 is a particular state for a continuous-time Markov process X, the random time set {t ≥ 0 : X(t) = x0} is often of both practical and theoretical interest. Ignoring trivial or pathological cases, there are four different types of structure which this random set can display. To some extent, it is possible to treat all four cases in a unified way, but they raise different questions and require different modes of description. The distributions of various random quantities associated with this set can be related to one another by simple and useful formulae.
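As a concrete illustration of such a random time set, the sketch below (purely illustrative, not from the paper; the three-state generator Q, the target state x0 and the horizon T are all invented) simulates a finite continuous-time Markov chain and records the intervals during which it sits in x0; their union is one realisation of {t : X(t) = x0}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state generator matrix (rows sum to zero).
Q = np.array([[-1.0, 0.6, 0.4],
              [0.5, -1.5, 1.0],
              [0.3, 0.7, -1.0]])
x0 = 0          # distinguished state
T = 50.0        # time horizon

state, t = x0, 0.0
visits = []     # intervals [a, b) during which X(t) == x0
while t < T:
    rate = -Q[state, state]
    hold = rng.exponential(1.0 / rate)          # exponential holding time
    if state == x0:
        visits.append((t, min(t + hold, T)))    # record a homecoming interval
    probs = Q[state].clip(min=0.0) / rate       # jump according to off-diagonal rates
    t += hold
    state = rng.choice(len(Q), p=probs)

print(f"time spent at x0: {sum(b - a for a, b in visits):.2f} out of {T}")
```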


1983 ◽  
Vol 20 (01) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming the proportionality of the intensity functions at each time point for a continuous-time non-homogeneous Markov process, strong ergodicity for the process is determined through strong ergodicity of a related discrete-time Markov process. For processes having proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
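A quick numerical check of the stated property, under assumptions of my own choosing (a made-up base generator Q0 and proportionality function c(t), so Q(t) = c(t)·Q0): the limiting matrix L has identical rows equal to the stationary distribution of Q0, and L·P(s, t) = L.

```python
import numpy as np
from scipy.linalg import expm, null_space
from scipy.integrate import quad

# Proportional intensities: Q(t) = c(t) * Q0, with Q0 and c(t) invented here.
Q0 = np.array([[-2.0, 1.5, 0.5],
               [1.0, -1.0, 0.0],
               [0.5, 0.5, -1.0]])
c = lambda t: 1.0 + 0.5 * np.sin(t)

def P(s, t):
    # Because all Q(u) = c(u) Q0 commute, P(s, t) = expm(Q0 * integral of c over [s, t]).
    integral, _ = quad(c, s, t)
    return expm(Q0 * integral)

# Limiting matrix L: every row equals the stationary distribution pi of Q0 (pi Q0 = 0).
pi = null_space(Q0.T)[:, 0]
pi /= pi.sum()
L = np.tile(pi, (3, 1))

print(np.allclose(L @ P(1.0, 4.0), L))   # True: L · P(s, t) = L
```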


1975 ◽  
Vol 12 (02) ◽  
pp. 289-297
Author(s):  
Andrew D. Barbour

Let X(t) be a continuous time Markov process on the integers such that, if σ is a time at which X makes a jump, X(σ) – X(σ–) is distributed independently of X(σ–), and has finite mean μ and variance. Let q(j) denote the residence time parameter for the state j. If tn denotes the time of the nth jump and Xn ≡ X(tn), it is easy to deduce limit theorems for Xn from those for sums of independent identically distributed random variables. In this paper, it is shown how, for μ > 0 and for suitable q(·), these theorems can be translated into limit theorems for X(t), by using the continuous mapping theorem.
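The translation device can be seen in a small simulation (a sketch with invented ingredients: the jump distribution, the residence-time parameter q(j) and the horizon are all assumptions): X(t) is simply the embedded random walk Xn evaluated at the random index N(t), the number of jumps by time t, which is what lets limit theorems transfer via the continuous mapping theorem.

```python
import numpy as np

rng = np.random.default_rng(1)

def q(j):
    return 1.0                 # constant residence-time parameter (an assumption)

def simulate(T=1000.0):
    """Simulate X(t) on the integers with i.i.d. jumps of mean mu = 0.9 > 0."""
    x, t, path = 0, 0.0, [(0.0, 0)]
    while t < T:
        t += rng.exponential(1.0 / q(x))                  # residence time in state x
        x += rng.choice([-1, 1, 2], p=[0.2, 0.5, 0.3])    # i.i.d. jump, mean 0.9
        path.append((t, x))
    return path

t_final, x_final = simulate()[-1]
# With q constant, jumps occur at unit rate, so X(T)/T should be close to mu = 0.9.
print(f"X(T)/T ≈ {x_final / t_final:.3f} (mu = 0.9)")
```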


2012 ◽  
Vol 24 (1) ◽  
pp. 49-58 ◽  
Author(s):  
Jerzy Girtler

Abstract The paper justifies the need to define the reliability of diagnosing systems (SDG) in order to develop a diagnosis of the state of any technical mechanism treated as a diagnosed system (SDN). It is shown that knowledge of SDG reliability makes it possible to define diagnosis reliability. Diagnosis reliability is taken to be a property of the diagnosis that specifies the degree to which a diagnosing system (SDG) recognizes the actual state of the diagnosed system (SDN), which may be any mechanism. The diagnosis measure is the conditional probability p(S*/K*) of occurrence (existence) of state S* of the mechanism (SDN), given that, at a specified reliability of the SDG, the vector K* of diagnostic parameter values implied by that state is observed. The probability that the SDG is in the state of ability during the diagnostic tests and the subsequent diagnostic inference leading to a diagnosis of the SDN state is adopted as the measure of SDG reliability. The theory of semi-Markov processes is used to define SDG reliability, which made it possible to develop an SDG reliability model in the form of a seven-state (continuous-time, discrete-state) semi-Markov process of changes of SDG states.
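A minimal numerical sketch of this diagnosis measure (every number below is hypothetical, and the calculation is a drastic simplification of the paper's seven-state semi-Markov model): the diagnosis is weighted by the probability that the SDG is in the state of ability, and p(S*/K*) is obtained here by Bayes' rule from invented priors and likelihoods.

```python
# Hypothetical illustration of weighting a diagnosis by SDG reliability.
# p_up: probability that the diagnosing system (SDG) is in the state of
#       ability throughout the tests and inference (a made-up value here;
#       in the paper it comes from a seven-state semi-Markov model).
p_up = 0.95

# p(S* | K*): conditional probability of a state of the diagnosed system (SDN)
# given that the vector K* of diagnostic parameter values is observed,
# computed by Bayes' rule from invented prior and likelihood values.
prior = {"S1": 0.7, "S2": 0.2, "S3": 0.1}          # prior state probabilities
likelihood = {"S1": 0.9, "S2": 0.3, "S3": 0.05}    # P(K* | state)

evidence = sum(prior[s] * likelihood[s] for s in prior)
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}

s_star = max(posterior, key=posterior.get)
diagnosis_reliability = p_up * posterior[s_star]
print(s_star, round(posterior[s_star], 3), round(diagnosis_reliability, 3))
```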


1959 ◽  
Vol 55 (2) ◽  
pp. 177-180 ◽  
Author(s):  
R. A. Sack

1. Introduction. Ledermann (1) has treated the problem of calculating the asymptotic probabilities that a system will be found in any one of a finite number N of possible states if transitions between these states occur as Markov processes with a continuous time parameter t. If we denote by pi(t) the probability that at time t the system is in the ith state and by aij (≥ 0) the constant probability per unit time for transitions from the jth to the ith state, the rate of change of pi is given by

dpi(t)/dt = Σ [aij pj(t) − aji pi(t)],

where the sum is to be taken over all j ≠ i. This set of equations can be written in matrix form as

dP(t)/dt = A P(t),

where P(t) is the vector with components pi(t) and the constant matrix A has elements Aij = aij for i ≠ j and Aii = −Σj≠i aji.
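Under this setup the asymptotic probabilities are the components of the normalised null vector of A; a small numerical sketch (the rate matrix below is invented) checks that this agrees with the long-time solution of dP/dt = AP.

```python
import numpy as np
from scipy.linalg import expm, null_space

# Invented transition rates a[i][j]: rate per unit time from state j to state i.
a = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])

# A_ij = a_ij for i != j, A_ii = -sum_{j != i} a_ji  (columns of A sum to zero).
A = a - np.diag(a.sum(axis=0))

p0 = np.array([1.0, 0.0, 0.0])           # start in state 1
p_long = expm(A * 100.0) @ p0            # long-time solution of dP/dt = A P

p_inf = null_space(A)[:, 0]
p_inf /= p_inf.sum()                     # normalised null vector of A

print(np.allclose(p_long, p_inf, atol=1e-8))   # True
```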


2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Guglielmo D'Amico ◽  
Jacques Janssen ◽  
Raimondo Manca

Monounireducible nonhomogeneous semi-Markov processes are defined and investigated. The monounireducible topological structure is a sufficient condition that guarantees the absorption of the semi-Markov process in a state of the process. This situation is of fundamental importance in the modelling of credit rating migrations because it permits the derivation of the distribution function of the time of default. An application to credit rating modelling is given in order to illustrate the results.
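To show why absorption matters for default modelling, here is a deliberately simplified sketch (a homogeneous discrete-time rating chain with made-up probabilities, not the paper's non-homogeneous semi-Markov machinery): with the default state D absorbing, the distribution function of the time of default is F(n) = 1 − P(still in a non-default rating after n steps).

```python
import numpy as np

# Toy rating chain with states A, B, D (default); D is absorbing.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
T = P[:2, :2]                 # transitions among the non-default ratings only

start = np.array([1.0, 0.0])  # issuer starts in rating A
F = [1.0 - (start @ np.linalg.matrix_power(T, n)).sum() for n in range(11)]

for n, f in enumerate(F):
    print(f"P(default time <= {n:2d}) = {f:.4f}")
```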


2017 ◽  
Vol 13 (3) ◽  
pp. 7244-7256
Author(s):  
Miłosława Sokol

The matrices of non-homogeneous Markov processes consist of time-dependent functions whose values at each time t form typical intensity matrices. For solving some problems they must be changed into stochastic matrices. A stochastic matrix for a non-homogeneous Markov process consists of time-dependent functions whose values are probabilities and depend on the assumed time period. In this paper formulas for these functions are derived. Although the formulas are not simple, they allow some theorems well known for homogeneous Markov processes to be proved for non-homogeneous ones, and the proofs turn out to be shorter.
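One common way to pass from a time-dependent intensity matrix Q(t) to a stochastic matrix over a period [s, t] is a product-integral approximation, multiplying matrix exponentials over short sub-intervals; the sketch below (with an invented Q(t), and not the paper's explicit formulas) illustrates the idea.

```python
import numpy as np
from scipy.linalg import expm

def Q(t):
    """Invented time-dependent intensity matrix (rows sum to zero)."""
    a, b = 1.0 + 0.5 * np.cos(t), 2.0 + np.sin(t)
    return np.array([[-a, a],
                     [b, -b]])

def transition_matrix(s, t, steps=1000):
    """Approximate the stochastic matrix P(s, t) as a product of exponentials
    of Q over short sub-intervals (a product-integral approximation)."""
    h = (t - s) / steps
    P = np.eye(2)
    for k in range(steps):
        P = P @ expm(Q(s + (k + 0.5) * h) * h)
    return P

P = transition_matrix(0.0, 3.0)
print(P, P.sum(axis=1))   # rows are probabilities and sum to 1
```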


1970 ◽  
Vol 7 (02) ◽  
pp. 388-399 ◽  
Author(s):  
C. K. Cheong

Our main concern in this paper is the convergence, as t → ∞, of certain quantities indexed by i, j ∈ E, where Pij(t) is the transition probability of a semi-Markov process whose state space E is irreducible but not closed (i.e., escape from E is possible), and ri is the probability of eventual escape from E conditional on the initial state being i. The theorems proved here generalize some results of Seneta and Vere-Jones ([8] and [11]) for Markov processes.



Author(s):  
G. E. H. Reuter ◽  
W. Ledermann ◽  
M. S. Bartlett

Let pik(s, t) (i, k = 1, 2, …; s ≤ t) be the transition probabilities of a Markov process in a system with an enumerable set of states. The states are labelled by positive integers, and pik(s, t) is the conditional probability that the system be in state k at time t, given that it was in state i at an earlier time s. If certain regularity conditions are imposed on the pik, they can be shown to satisfy the well-known Kolmogorov equations.
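For reference, a standard form of these equations is sketched below, written with an assumed intensity notation qij(t) (with row sums zero) that is not taken from the paper itself.

```latex
% Time-inhomogeneous Kolmogorov differential equations for the transition
% probabilities p_{ik}(s,t); the q_{ij}(t) are transition intensities with
% \sum_k q_{ik}(t) = 0 (notation assumed here, not from the source).
\begin{aligned}
\frac{\partial p_{ik}(s,t)}{\partial t} &= \sum_{j} p_{ij}(s,t)\, q_{jk}(t)
  && \text{(forward equation)} \\
-\frac{\partial p_{ik}(s,t)}{\partial s} &= \sum_{j} q_{ij}(s)\, p_{jk}(s,t)
  && \text{(backward equation)}
\end{aligned}
```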

