Optimal Control of Markov Processes.

Author(s): Wendell H. Fleming
Cybernetics, 1976, Vol. 11 (6), pp. 970-977

Author(s): N. V. Andreev, D. V. Karachenets, G. É. Massal'skii
1969, Vol. 5 (3), pp. 273-278

Author(s): Kenji ONO, Masayuki KIMURA
2009, Vol. 46 (04), pp. 1157-1183

Author(s): O. L. V. Costa, F. Dufour

This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions for the existence of an optimal control were derived using the vanishing discount approach. These conditions were expressed mainly in terms of the relative difference of the α-discount value functions. The main goal of this paper is to derive tractable conditions, stated directly in terms of the primitive data of the PDMP, that ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of integro-differential inequalities related to the so-called expected growth condition and of the geometric convergence of the post-jump location kernel associated with the PDMP. An example based on the capacity expansion problem is presented to illustrate possible applications of the results developed in the paper.
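
Background note (not part of the abstract above): the vanishing discount approach referred to here is the standard route from discounted to long-run average control. In generic notation, with the symbols V_α, c, x_0, h_α, and ρ chosen here for illustration rather than taken from the paper, one studies the α-discounted value function and its relative difference,

\[
V_\alpha(x) \;=\; \inf_{u(\cdot)} \mathbb{E}_x\!\left[\int_0^\infty e^{-\alpha t}\, c\bigl(X_t, u_t\bigr)\, dt\right],
\qquad
h_\alpha(x) \;=\; V_\alpha(x) - V_\alpha(x_0),
\]

and seeks conditions under which, as \(\alpha \downarrow 0\),

\[
\alpha\, V_\alpha(x_0) \;\to\; \rho
\quad\text{and}\quad
\sup_{0<\alpha\le\alpha_0} \lvert h_\alpha(x)\rvert \;<\; \infty \ \text{ for each } x,
\]

so that a limit of \((\rho, h_\alpha)\) along a subsequence yields an average-cost optimality inequality with optimal long-run average cost ρ. In Costa and Dufour (2008) such conditions are placed on the value functions themselves; the abstract above describes replacing them with conditions on the primitive data of the PDMP, typically its deterministic flow, jump intensity, and post-jump transition kernel.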

