Dynamic Programming Principle for One Kind of Stochastic Recursive Optimal Control Problem and Hamilton–Jacobi–Bellman Equation

2008 · Vol 47 (5) · pp. 2616–2641
Author(s): Zhen Wu, Zhiyong Yu

2018 · Vol 24 (1) · pp. 355–376
Author(s): Jiangyan Pu, Qi Zhang

In this work we study the stochastic recursive control problem in which the aggregator (or generator) of the backward stochastic differential equation describing the running cost is continuous, monotonic with respect to the first unknown variable, but not necessarily Lipschitz with respect to the first unknown variable and the control. The dynamic programming principle and the connection between the value function and the viscosity solution of the associated Hamilton–Jacobi–Bellman equation are established in this setting by means of the generalized comparison theorem for backward stochastic differential equations and the stability of viscosity solutions. Finally, we take the control problem of continuous-time Epstein–Zin utility with a non-Lipschitz aggregator as an example to demonstrate the application of our results.
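For orientation, a standard formulation of the stochastic recursive control problem described above (the notation here is generic and not taken from the paper): the running cost is the first component Y of a controlled backward SDE coupled to a forward state SDE, and the value function solves a Hamilton–Jacobi–Bellman equation whose Hamiltonian contains the aggregator f.

```latex
% Illustrative forward-backward system and HJB equation (generic notation):
% X: state, (Y, Z): BSDE solution, u: admissible control, W: Brownian motion.
\begin{aligned}
  dX_s  &= b(s, X_s, u_s)\,ds + \sigma(s, X_s, u_s)\,dW_s,
          & X_t  &= x, \\
  -dY_s &= f(s, X_s, Y_s, Z_s, u_s)\,ds - Z_s\,dW_s,
          & Y_T  &= \Phi(X_T), \\
  V(t,x) &= \operatorname*{ess\,sup}_{u}\, Y_t^{\,t,x,u}, \\
  0 &= \partial_t V
     + \sup_{u \in U}\Big\{ \mathcal{L}^{u} V
       + f\big(t, x, V, \sigma^{\top}(t,x,u)\,\nabla_x V, u\big) \Big\},
          & V(T,x) &= \Phi(x),
\end{aligned}
```

where \(\mathcal{L}^{u}\) is the second-order generator of the controlled diffusion. The paper's contribution is that this connection survives when f is only continuous and monotone in Y rather than Lipschitz.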


Author(s): Tomas Björk

We study a general stochastic optimal control problem within the framework of a controlled SDE. The problem is approached by dynamic programming, which leads to the Hamilton–Jacobi–Bellman PDE. By stating and proving a verification theorem, we show that solving this PDE is equivalent to solving the control problem. As an example, the theory is then applied to the linear quadratic regulator.
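The linear quadratic regulator is the one case where the HJB/verification machinery closes in elementary terms: a quadratic value-function ansatz turns the PDE into a Riccati ODE. A minimal scalar sketch, with parameter names (a, b, q, r, h) chosen here for illustration rather than taken from the text:

```python
# Illustrative scalar LQ regulator (notation assumed, not taken from the text):
#   dynamics  dX = (a X + b u) dt + sigma dW,
#   cost      E[ integral_0^T (q X^2 + r u^2) dt + h X_T^2 ].
# The quadratic ansatz V(t, x) = P(t) x^2 + g(t) reduces the HJB PDE to the
# Riccati ODE  P'(t) = -2 a P(t) + (b^2 / r) P(t)^2 - q,  P(T) = h,
# with optimal feedback  u*(t, x) = -(b / r) P(t) x.

def riccati_backward(a, b, q, r, h, T, n=10_000):
    """Integrate the Riccati ODE backward from t = T to t = 0 (explicit Euler)."""
    dt = T / n
    P = h
    for _ in range(n):
        dP = -2.0 * a * P + (b ** 2 / r) * P ** 2 - q  # dP/dt at the current P
        P -= dt * dP                                   # step backward in time
    return P

# With a = 0 and b = q = r = 1, the stationary Riccati equation P^2 = q gives
# P -> 1 over a long horizon, so P(0) should be close to 1 for T = 10.
P0 = riccati_backward(a=0.0, b=1.0, q=1.0, r=1.0, h=0.0, T=10.0)
print(P0)
```

The verification theorem mentioned in the abstract is exactly what licenses the last step: once P solves the Riccati equation, V(t, x) = P(t)x² + g(t) is the value function and the linear feedback above is optimal.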


Author(s): Abolhassan Razminia, Mehdi Asadizadehshiraz, Delfim F. M. Torres

We consider an extension of the well-known Hamilton–Jacobi–Bellman (HJB) equation to fractional-order dynamical systems, in which a generalized performance index is considered for the related optimal control problem. Owing to the nonlocality of fractional-order operators, the classical HJB equation, in its usual form, does not hold for fractional problems. The effectiveness of the proposed technique is illustrated through a numerical example.
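The nonlocality referred to above comes from the memory kernel of the fractional operator. For the Caputo derivative, a common choice in fractional optimal control (the abstract does not say which definition the paper adopts), the standard form for order 0 < α < 1 is:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1; the integral over
% the whole history [0, t] is the source of the nonlocality.
{}^{C}\!D_t^{\alpha} f(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\, ds .
```

Because the derivative at time t depends on the entire trajectory on [0, t], the usual Markovian dynamic-programming argument behind the classical HJB equation breaks down, which is why a modified equation is needed.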


2019 · Vol 19 (03) · pp. 1950019
Author(s): R. C. Hu, X. F. Wang, X. D. Gu, R. H. Huan

In this paper, nonlinear stochastic optimal control of multi-degree-of-freedom (MDOF) partially observable linear systems subjected to combined harmonic and wide-band random excitations is investigated. Based on the separation principle, the control problem for a partially observable system is converted into one for a completely observable system. The dynamic programming equation for the completely observable problem is then set up by means of the stochastic averaging method and the stochastic dynamic programming principle, from which the nonlinear optimal control law is derived. To illustrate the feasibility and efficiency of the proposed control strategy, the responses of the uncontrolled and optimally controlled systems are obtained by solving the associated Fokker–Planck–Kolmogorov (FPK) equation. Numerical results show that the proposed control strategy can dramatically reduce the response of stochastic systems subjected to both harmonic and wide-band random excitations.
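The FPK equation mentioned above governs the probability density of the system response; the paper solves it for the averaged MDOF dynamics. As a much simpler self-contained sketch of the same object (a scalar gradient system, not the paper's setup), the stationary FPK equation admits a closed-form density whose probability flux vanishes, which we can check numerically:

```python
import math

# Illustrative sketch (not the paper's MDOF setup): for the scalar Ito SDE
#   dX = -U'(X) dt + sqrt(2 D) dW,
# the stationary Fokker-Planck-Kolmogorov (FPK) equation
#   0 = d/dx [ U'(x) p(x) ] + D p''(x)
# is solved by p(x) proportional to exp(-U(x) / D): the probability flux
#   J(x) = -U'(x) p(x) - D p'(x)
# vanishes identically. We verify this for the quartic potential U(x) = x^4/4.

def U(x):
    return x ** 4 / 4.0

def dU(x):
    return x ** 3

D = 0.5  # diffusion intensity (illustrative value)

def p_unnorm(x):
    """Unnormalized stationary density exp(-U(x)/D)."""
    return math.exp(-U(x) / D)

def flux(x, eps=1e-5):
    """Probability flux J(x), with p'(x) approximated by a central difference."""
    dp = (p_unnorm(x + eps) - p_unnorm(x - eps)) / (2.0 * eps)
    return -dU(x) * p_unnorm(x) - D * dp

# Largest |J(x)| on a grid over [-2, 2]; should be near finite-difference error.
max_flux = max(abs(flux(-2.0 + 0.05 * i)) for i in range(81))
print(max_flux)
```

In the paper's setting the controlled and uncontrolled FPK equations are solved (generally numerically) for the averaged amplitude or energy process, and the comparison of the two stationary densities quantifies the response reduction.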

