Stochastic Optimal Control
We study a general stochastic optimal control problem within the framework of a controlled SDE. The problem is approached via dynamic programming, from which we derive the Hamilton–Jacobi–Bellman (HJB) PDE. By stating and proving a verification theorem, we show that solving this PDE is equivalent to solving the control problem. As an example, the theory is then applied to the linear quadratic regulator.
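For concreteness, the HJB equation for this kind of problem can be sketched in a standard form. The notation below (drift $\mu$, diffusion $\sigma$, running cost $F$, terminal cost $\Phi$, value function $V$, minimization convention) is a common textbook convention and is assumed here, not taken from the abstract itself.

```latex
% Assumed controlled SDE:  dX_t = \mu(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t
% Assumed cost to minimize:  J(u) = E\Bigl[ \int_0^T F(t, X_t, u_t)\,dt + \Phi(X_T) \Bigr]
% The value function V then (formally) satisfies the HJB equation:
\frac{\partial V}{\partial t}(t,x)
  + \inf_{u \in U} \Bigl\{
      F(t,x,u)
      + \mu(t,x,u)^{\top} \nabla_x V(t,x)
      + \tfrac{1}{2}\operatorname{tr}\!\bigl(
          \sigma \sigma^{\top}(t,x,u)\, \nabla_x^2 V(t,x)
        \bigr)
    \Bigr\} = 0,
\qquad V(T,x) = \Phi(x).
```

For the linear quadratic regulator, where the drift is affine in the state and control ($\mu = Ax + Bu$) and the costs are quadratic, the quadratic ansatz $V(t,x) = x^{\top}P(t)\,x + q(t)$ reduces the HJB equation to a matrix Riccati ODE for $P(t)$, which is what makes this example explicitly solvable.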