On the relative entropy of discrete-time Markov processes with given end-point densities

1996 ◽  
Vol 42 (5) ◽  
pp. 1529-1535 ◽  
Author(s):  
A. Beghi

Cybernetics ◽  
1976 ◽  
Vol 11 (6) ◽  
pp. 970-977
Author(s):  
N. V. Andreev ◽  
D. V. Karachenets ◽  
G. E. Massal'skii

1983 ◽  
Vol 20 (1) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming proportionality of the intensity functions at each time point for a continuous-time non-homogeneous Markov process, strong ergodicity of the process is established through strong ergodicity of a related discrete-time Markov process. For processes with proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
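
As a rough illustration of the discrete-time side of this statement, the sketch below iterates the powers of a transition matrix until the rows agree, takes the resulting constant-row matrix as the limiting matrix L, and checks numerically that L · P = L. The 3-state matrix P and the tolerances are assumptions chosen only for the example, not taken from the paper.

    import numpy as np

    # Hypothetical 3-state transition matrix, chosen only for illustration.
    P = np.array([
        [0.5, 0.3, 0.2],
        [0.2, 0.6, 0.2],
        [0.3, 0.3, 0.4],
    ])

    # Iterate P^n until successive powers agree: for a strongly ergodic
    # chain the powers converge to a constant-row limiting matrix L.
    L = P.copy()
    for _ in range(1000):
        L_next = L @ P
        if np.allclose(L_next, L, atol=1e-12):
            L = L_next
            break
        L = L_next

    print("limiting matrix L:\n", L)

    # Invariance property quoted in the abstract (discrete-time analogue):
    # the limiting matrix is unchanged by a further transition step.
    print("L @ P == L ?", np.allclose(L @ P, L, atol=1e-10))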


1986 ◽  
Vol 18 (3) ◽  
pp. 724-746
Author(s):  
W. J. R. Eplett

Gittins's theory of allocation indices for defining the optimal policy in multi-armed bandit problems is presented in the continuous-time case, where the projects (or ‘arms’) are strong Markov processes. Complications peculiar to the continuous-time case are discussed. These motivate investigating whether approximating the continuous-time problems by discrete-time versions is a valid technique, yielding convergent allocation indices and optimal expected rewards. Conditions under which this convergence holds are presented.
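
For the discrete-time approximations referred to here, one standard way to compute an allocation (Gittins) index numerically is the retirement/calibration characterization: the index of a state is the standard-arm rate lam at which one is indifferent between continuing the project and retiring for the lump value lam / (1 - beta). The sketch below applies this by bisection on lam with an inner value iteration; the 3-state transition matrix, rewards, and discount factor are illustrative assumptions and not taken from Eplett's paper.

    import numpy as np

    def gittins_index(P, r, beta, state, tol=1e-6):
        """Gittins index of `state` for a discounted Markov reward process,
        via the retirement formulation: bisect on the standard-arm rate lam
        until the value of the project with a retirement option equals the
        pure retirement value lam / (1 - beta)."""
        n = len(r)

        def retire_value(lam):
            M = lam / (1.0 - beta)             # value of retiring immediately
            V = np.full(n, M)                  # value iteration with retirement option
            for _ in range(10000):
                V_new = np.maximum(M, r + beta * (P @ V))
                if np.max(np.abs(V_new - V)) < 1e-10:
                    break
                V = V_new
            return V[state], M

        lo, hi = r.min(), r.max()              # the index lies between min and max reward
        while hi - lo > tol:
            lam = 0.5 * (lo + hi)
            v, M = retire_value(lam)
            if v > M + 1e-12:                  # continuing still beats retiring: index > lam
                lo = lam
            else:
                hi = lam
        return 0.5 * (lo + hi)

    # Hypothetical 3-state project used only to exercise the routine.
    P = np.array([
        [0.6, 0.3, 0.1],
        [0.1, 0.7, 0.2],
        [0.2, 0.2, 0.6],
    ])
    r = np.array([1.0, 0.5, 0.2])
    beta = 0.9

    for s in range(3):
        print(f"state {s}: Gittins index ~ {gittins_index(P, r, beta, s):.4f}")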

