On the relative value iteration with a risk-sensitive criterion

2020, Vol. 122, pp. 9-24
Author(s): Ari Arapostathis, Vivek S. Borkar
2005, Vol. 42 (4), pp. 905-918
Author(s): Rolando Cavazos-Cadena, Raúl Montes-de-Oca

This work concerns Markov decision chains with finite state spaces and compact action sets. The performance index is the long-run risk-sensitive average cost criterion, and it is assumed that, under each stationary policy, the state space is a communicating class and that the cost function and the transition law depend continuously on the action. These latter data are not directly available to the decision-maker, but convergent approximations are known or are more easily computed. In this context, the nonstationary value iteration algorithm is used to approximate the solution of the optimality equation, and to obtain a nearly optimal stationary policy.
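
The abstract describes the algorithm only in prose. As a rough illustration, here is a minimal Python sketch of value iteration for the risk-sensitive average-cost criterion in its multiplicative form, with relative-value normalization at a reference state. It assumes a finite action set (the paper allows compact action sets) and exact knowledge of the cost and transition law; the paper's nonstationary scheme would instead substitute step-n approximations of (P, c) inside the loop. All names and the toy instance are illustrative, not taken from the paper.

```python
import numpy as np

def risk_sensitive_rvi(P, c, lam, ref=0, tol=1e-10, max_iter=100_000):
    """Relative value iteration for a finite risk-sensitive average-cost MDP.

    P   : (A, S, S) array, P[a, x, y] = Pr(next state y | state x, action a)
    c   : (A, S) array, c[a, x] = one-stage cost of action a in state x
    lam : risk-sensitivity parameter (lam > 0 corresponds to risk aversion)
    ref : reference state used to normalize the iterates
    Returns (g, h, policy): average-cost estimate, relative value, greedy policy.
    """
    num_actions, num_states, _ = P.shape
    V = np.ones(num_states)          # multiplicative value, V = exp(lam * h)
    g = 0.0
    for _ in range(max_iter):
        # The paper's nonstationary scheme would use step-n approximations
        # (P_n, c_n) here in place of the exact (P, c).
        Q = np.exp(lam * c) * (P @ V)    # Q[a, x] = e^{lam c(x,a)} sum_y P[a,x,y] V[y]
        V_new = Q.min(axis=0)
        norm = V_new[ref]                # multiplicative growth at the reference state
        V_new /= norm
        g_new = np.log(norm) / lam       # current estimate of the optimal average cost
        if np.max(np.abs(V_new - V)) < tol:
            V, g = V_new, g_new
            break
        V, g = V_new, g_new
    policy = (np.exp(lam * c) * (P @ V)).argmin(axis=0)
    return g, np.log(V) / lam, policy

# Toy instance: 2 actions, 3 states, random communicating transition law.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(2, 3))   # each row P[a, x, :] sums to 1
c = rng.uniform(0.0, 2.0, size=(2, 3))
g, h, policy = risk_sensitive_rvi(P, c, lam=0.5)
print(f"risk-sensitive average cost = {g:.4f}, policy = {policy}")
```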


2018
Author(s): Igor Oliveira Borges, Karina Valdivia Delgado, Valdinei Freire

This paper presents an empirical study of the Value Iteration Risk Sensitive algorithm proposed by Mihatsch and Neuneier (2002). The approach uses a risk factor to model different risk attitudes (risk-prone, risk-neutral, or risk-averse) in combination with a discount factor. We report experiments on the Crossing the River domain in two different scenarios and analyze the influence of the discount factor and the risk factor on two aspects: the optimal policy and the processing time to convergence. We observed that: (i) the processing cost of extreme-risk policies is high for both risk-averse and risk-prone attitudes; (ii) a high discount factor increases the time to convergence and reinforces the chosen risk attitude; and (iii) policies with intermediate risk-factor values have a low computational cost and exhibit a risk sensitivity that depends on the discount factor.
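
The abstract does not reproduce the algorithm itself. The sketch below is one plausible model-based reading: it applies the Mihatsch-Neuneier transform, which weights a temporal difference d by (1 - kappa * sgn(d)), to the greedy Bellman residual of value iteration. The names `omega` and `risk_sensitive_vi`, the damping step `alpha`, and the random test MDP are assumptions for illustration, not the paper's Crossing the River setup.

```python
import numpy as np

def omega(d, kappa):
    """Mihatsch-Neuneier transform: weight temporal differences asymmetrically.
    kappa in (-1, 1); kappa > 0 overweights negative surprises (risk-averse),
    kappa < 0 overweights positive ones (risk-prone), kappa = 0 is risk-neutral."""
    return (1.0 - kappa * np.sign(d)) * d

def risk_sensitive_vi(P, R, gamma, kappa, alpha=0.5, tol=1e-8, max_sweeps=100_000):
    """Value iteration with the risk transform applied to the Bellman residual.

    P     : (A, S, S) transition probabilities
    R     : (A, S) expected immediate rewards
    gamma : discount factor in (0, 1)
    kappa : risk factor in (-1, 1)
    Returns (V, policy, sweeps) so convergence time can be compared
    across (gamma, kappa) settings.
    """
    num_actions, num_states, _ = P.shape
    V = np.zeros(num_states)
    for sweep in range(max_sweeps):
        Q = R + gamma * (P @ V)              # (A, S) one-step lookahead
        residual = Q.max(axis=0) - V         # greedy Bellman residual per state
        if np.max(np.abs(residual)) < tol:
            break
        V = V + alpha * omega(residual, kappa)
    policy = (R + gamma * (P @ V)).argmax(axis=0)
    return V, policy, sweep

# Compare sweeps to convergence across risk attitudes on a random MDP.
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(4), size=(2, 4))
R = rng.uniform(-1.0, 1.0, size=(2, 4))
for kappa in (-0.9, 0.0, 0.9):
    _, policy, sweeps = risk_sensitive_vi(P, R, gamma=0.95, kappa=kappa)
    print(f"kappa={kappa:+.1f}: policy={policy}, sweeps={sweeps}")
```

On this toy instance, extreme values of the risk factor typically need more sweeps than kappa = 0, qualitatively in line with finding (i) above; the sketch is, of course, not the paper's benchmark.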

