On invariant measures of extensions of Markov transition functions

1983 ◽  
Vol 113 (1) ◽  
pp. 163-169 ◽  
Author(s):  
Olaf Böhme

2013 ◽  
Vol 83 (3) ◽  
pp. 943-951 ◽  
Author(s):  
Joanna Jaroszewska

1996 ◽  
Vol 33 (1) ◽  
pp. 18-27 ◽  
Author(s):  
F. Papangelou

In the Bayesian estimation of higher-order Markov transition functions on finite state spaces, a prior distribution may assign positive probability to arbitrarily high orders. If there are n observations available, we show (for natural priors) that, with probability one, as n → ∞ the Bayesian posterior distribution ‘discriminates accurately’ for orders up to β log n, if β is smaller than an explicitly determined β₀. This means that the ‘large deviations’ of the posterior are controlled by the relative entropies of the true transition function with respect to all others, much as the large deviations of the empirical distributions are governed by their relative entropies with respect to the true transition function. An example shows that the result can fail even for orders β log n if β is large.
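As an illustration of the quantity driving these large-deviation statements, here is a minimal sketch (not from the paper; the function names and the use of NumPy are my own assumptions) computing the relative entropy rate of one first-order transition function with respect to another on a finite state space, i.e. the sum over i of μ_i times the sum over j of P(i,j) log(P(i,j)/Q(i,j)), with μ the stationary distribution of P.

import numpy as np

def stationary_distribution(P):
    # Left eigenvector of P for eigenvalue 1, normalised to a probability vector.
    vals, vecs = np.linalg.eig(P.T)
    mu = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return mu / mu.sum()

def relative_entropy_rate(P, Q):
    # sum_i mu_i sum_j P[i, j] * log(P[i, j] / Q[i, j]); terms with P[i, j] == 0 contribute 0.
    mu = stationary_distribution(P)
    ratio = np.divide(P, Q, out=np.ones_like(P), where=P > 0)
    return float(np.sum(mu[:, None] * np.where(P > 0, P * np.log(ratio), 0.0)))

# Hypothetical example: a 'true' kernel P versus an alternative kernel Q on two states.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.7, 0.3], [0.4, 0.6]])
print(relative_entropy_rate(P, Q))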


1986 ◽  
Author(s):  
E. A. Carlen ◽  
S. Kusuoka ◽  
D. W. Stroock

1973 ◽  
Vol 10 (01) ◽  
pp. 84-99 ◽  
Author(s):  
Richard L. Tweedie

The problem considered is that of estimating the limit probability distribution (equilibrium distribution) π of a denumerable continuous-time Markov process using only the matrix Q of derivatives of transition functions at the origin. We utilise relationships between the limit vector π and invariant measures for the jump-chain of the process (whose transition matrix we write P∗), and apply truncation theorems from Tweedie (1971) to P∗. When Q is regular, we derive algorithms for estimating π from truncations of Q; these extend results in Tweedie (1971), Section 4, from q-bounded processes to arbitrary regular processes. Finally, we show that this method can be extended even to non-regular chains of a certain type.
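A minimal numerical sketch of the jump-chain route described above, under my own assumptions (NumPy, a northwest-corner truncation with renormalised rows, invented function names): build P∗ from Q, compute an invariant vector of the n × n truncation, and rescale by the holding rates to approximate π.

import numpy as np

def jump_chain(Q):
    # P*[i, j] = Q[i, j] / q_i for i != j, zero diagonal, where q_i = -Q[i, i] is the holding rate.
    q = -np.diag(Q)
    P_star = Q / q[:, None]
    np.fill_diagonal(P_star, 0.0)
    return P_star, q

def truncated_equilibrium(Q, n):
    # Renormalise the n x n northwest-corner truncation of P*, take its invariant
    # vector mu, and set pi_i proportional to mu_i / q_i.
    P_star, q = jump_chain(Q)
    T = P_star[:n, :n]
    T = T / T.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(T.T)
    mu = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    pi = mu / q[:n]
    return pi / pi.sum()

# Hypothetical example: a small birth-death generator, truncated to its first 3 states.
Q = np.array([[-1.0, 1.0, 0.0, 0.0],
              [2.0, -3.0, 1.0, 0.0],
              [0.0, 2.0, -3.0, 1.0],
              [0.0, 0.0, 2.0, -2.0]])
print(truncated_equilibrium(Q, 3))

Under the regularity assumptions referred to in the abstract, such truncation estimates converge to π as the truncation size grows.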


Author(s):  
Azam A. Imomov ◽  

The paper discusses the continuous-time Markov branching process allowing immigration. We consider a critical case in which the second moment of the offspring law and the first moment of the immigration law are possibly infinite. Assuming that the nonlinear parts of the appropriate generating functions are regularly varying in the sense of Karamata, we prove theorems on the convergence of the transition functions of the process to invariant measures. We deduce the rate of this convergence, provided that the slowly varying factors are with remainder.
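For reference, one common formulation of the Karamata conditions invoked here (my notation, not the paper's), in LaTeX:

A function $R$ is regularly varying at infinity with index $\rho$ (slowly varying when $\rho = 0$) if
\[
  \lim_{x \to \infty} \frac{R(\lambda x)}{R(x)} = \lambda^{\rho}
  \quad \text{for every } \lambda > 0,
\]
equivalently $R(x) = x^{\rho} L(x)$ with $L$ slowly varying; and $L$ is slowly varying with remainder if, for some function $g(x) \to 0$,
\[
  \frac{L(\lambda x)}{L(x)} - 1 = O\bigl(g(x)\bigr)
  \quad \text{as } x \to \infty .
\]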

