Solution to the risk-sensitive average cost optimality equation in a class of Markov decision processes with finite state space

2003 ◽ Vol. 57 (2) ◽ pp. 263-285 ◽ Author(s): Rolando Cavazos-Cadena

2011 ◽ Vol. 2011 ◽ pp. 1-11 ◽ Author(s): Epaminondas G. Kyriakidis

We introduce a continuous-time Markov decision process for the optimal control of a simple symmetric immigration-emigration process through the introduction of total catastrophes. It is proved that a particular control-limit policy is average cost optimal within the class of all stationary policies, by verifying that the relative values of this policy satisfy the corresponding average cost optimality equation.
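As a rough sketch of the verification step mentioned above (the symbols q(y|x,a), c(x,a), g, and h are generic illustrative notation, not taken from the paper itself), the relative values of an average cost optimal stationary policy satisfy an equation of the form

% Generic average cost optimality equation for a continuous-time controlled Markov chain;
% illustrative notation only, not the paper's own.
\[
  \min_{a \in A(x)}
  \Bigl\{ c(x,a) - g + \sum_{y \neq x} q(y \mid x, a)\,\bigl[ h(y) - h(x) \bigr] \Bigr\} = 0,
  \qquad x \in S,
\]

where g is the optimal average cost, h(.) collects the relative values, c(x,a) is the cost rate, and q(y|x,a) are the controlled transition rates of the chain; a stationary policy attaining the minimum at every state, such as the control-limit policy in question, is then average cost optimal.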


2018 ◽ Vol. 50 (1) ◽ pp. 204-230 ◽ Author(s): Rolando Cavazos-Cadena ◽ Daniel Hernández-Hernández

Abstract: This work concerns Markov decision chains on a finite state space. The decision-maker has a constant, nonnull risk-sensitivity coefficient, and the performance of a control policy is measured by two different indices, namely the discounted and average criteria. Motivated by well-known results for the risk-neutral case, the problem of approximating the optimal risk-sensitive average cost in terms of the optimal risk-sensitive discounted value functions is addressed. Under suitable communication assumptions, it is shown that, as the discount factor increases to 1, appropriate normalizations of the optimal discounted value functions converge to the optimal average cost and to the functional part of the solution of the risk-sensitive average cost optimality equation.
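For orientation only (the symbols lambda, C(x,a), p(y|x,a), g, and h are generic and not copied from the paper), the risk-sensitive average cost optimality equation referred to above can be written, for a positive risk-sensitivity coefficient lambda, as

% Standard form of the risk-sensitive average cost optimality equation on a finite
% state space S, assuming lambda > 0; illustrative notation, not the paper's own.
\[
  e^{\lambda\bigl(g + h(x)\bigr)}
  = \min_{a \in A(x)}
    \Bigl\{ e^{\lambda C(x,a)} \sum_{y \in S} p(y \mid x, a)\, e^{\lambda h(y)} \Bigr\},
  \qquad x \in S,
\]

where g is the optimal risk-sensitive average cost and h(.) is the functional part mentioned in the abstract. A natural guess for the normalizations alluded to above, in the spirit of the risk-neutral theory, is that they are of the type $(1-\alpha)V_\alpha(x) \to g$ and $V_\alpha(x) - V_\alpha(z) \to h(x) - h(z)$ as the discount factor $\alpha$ increases to 1, where $V_\alpha$ denotes the optimal risk-sensitive discounted value function; the precise normalizations used in the paper may differ.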

