Solving the Hamilton-Jacobi-Bellman equation of stochastic control by a semigroup perturbation method
We consider the optimal control of deterministic processes with countably many (non-accumulating) random jumps. A necessary and sufficient optimality condition can be given in the form of a Hamilton-Jacobi-Bellman equation, which in the case considered is a functional-differential equation with boundary conditions. Its solution, the value function, is continuously differentiable along the deterministic trajectories if only the random jumps are controlled, and it can be represented as a supremum of smooth subsolutions in the general case, i.e. when both the deterministic motion and the random jumps are controlled (cf. the survey by M. H. A. Davis (p.14)).
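For orientation, the Hamilton-Jacobi-Bellman equation for this class of processes (deterministic flow punctuated by controlled random jumps) typically takes the following schematic form; the notation below is illustrative and not taken from the paper itself:

```latex
% Schematic HJB equation for a piecewise deterministic process.
% Illustrative notation: g = controlled drift, \lambda = jump intensity,
% Q = post-jump distribution, r = running reward, V = value function.
\sup_{u \in U} \Bigl\{ \, g(x,u) \cdot \nabla V(x)
  \;+\; \lambda(x,u) \int_E \bigl( V(y) - V(x) \bigr) \, Q(dy \mid x, u)
  \;+\; r(x,u) \Bigr\} \;=\; 0 .
```

The gradient term acts along the deterministic trajectories, which is why differentiability of $V$ along those trajectories (rather than full smoothness) is the natural regularity notion; the integral term is the non-local, functional part that makes the equation a functional-differential rather than a classical partial differential equation.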
2018 ◽ Vol 24 (1) ◽ pp. 355-376