Dynamic programming principle for stochastic recursive optimal control problem with delayed systems

2012 · Vol. 18 (4) · pp. 1005–1026 · Author(s): Li Chen, Zhen Wu

2019 · Vol. 19 (03) · pp. 1950019 · Author(s): R. C. Hu, X. F. Wang, X. D. Gu, R. H. Huan

In this paper, nonlinear stochastic optimal control of multi-degree-of-freedom (MDOF) partially observable linear systems subjected to combined harmonic and wide-band random excitations is investigated. Based on the separation principle, the control problem for the partially observable system is converted into one for a completely observable system. The dynamic programming equation for the completely observable control problem is then set up using the stochastic averaging method and the stochastic dynamic programming principle, and the nonlinear optimal control law is derived from it. To illustrate the feasibility and efficiency of the proposed control strategy, the responses of the uncontrolled and optimally controlled systems are obtained by solving the associated Fokker–Planck–Kolmogorov (FPK) equations. Numerical results show that the proposed control strategy can dramatically reduce the response of stochastic systems subjected to combined harmonic and wide-band random excitations.
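
As a rough, illustrative sketch only (not the paper's method): the snippet below simulates a single-degree-of-freedom linear oscillator under combined harmonic and Gaussian white-noise excitation (a wide-band surrogate) by Euler–Maruyama Monte Carlo, and compares the stationary RMS displacement without control and with a placeholder linear velocity-feedback law standing in for the optimal law derived via stochastic averaging and dynamic programming. All parameters, the feedback gain, and the use of Monte Carlo in place of the FPK equation are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not taken from the paper).
omega0, zeta = 1.0, 0.02      # natural frequency, damping ratio
E0, Omega = 0.3, 1.0          # harmonic excitation amplitude and frequency
D = 0.05                      # white-noise intensity (wide-band surrogate)
gain = 0.5                    # gain of the placeholder velocity feedback

dt, T, n_paths = 1e-3, 100.0, 200
n_steps = int(T / dt)

def rms_displacement(controlled):
    """Euler-Maruyama Monte Carlo; returns the stationary RMS displacement."""
    x = np.zeros(n_paths)
    v = np.zeros(n_paths)
    acc, count = 0.0, 0
    for k in range(n_steps):
        t = k * dt
        u = -gain * v if controlled else 0.0           # control force
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)     # Wiener increments
        a = -2 * zeta * omega0 * v - omega0**2 * x + E0 * np.cos(Omega * t) + u
        x = x + v * dt
        v = v + a * dt + np.sqrt(2 * D) * dW
        if t > 0.5 * T:                                # discard the transient
            acc += np.mean(x**2)
            count += 1
    return np.sqrt(acc / count)

print("uncontrolled RMS displacement:", rms_displacement(False))
print("controlled   RMS displacement:", rms_displacement(True))

Solving the stationary FPK equation of the averaged system, as the paper does, would give the response statistics without sampling error; the Monte Carlo route is used here only to keep the sketch short and self-contained.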


2018 · Vol. 24 (2) · pp. 873–899 · Author(s): Mingshang Hu, Falei Wang

The present paper considers a stochastic optimal control problem in which the cost function is defined through a backward stochastic differential equation with infinite horizon driven by G-Brownian motion. We study the regularity of the value function and establish the dynamic programming principle. Moreover, we prove that the value function is the unique viscosity solution of the related Hamilton–Jacobi–Bellman–Isaacs (HJBI) equation.
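
For orientation only, the schematic below records, in generic notation that is not taken from the paper, the usual shape of such a value function, the dynamic programming principle, and the resulting HJBI equation in a simplified setting: the controlled state equation is assumed to carry no d⟨B⟩-drift term, and b, σ, f, the control set 𝒰, the covariance-uncertainty set Γ of the G-expectation, and the backward-semigroup notation 𝔾 are all placeholders.

% Schematic only: generic forms assumed for illustration, not the paper's exact statements.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Value function of a recursive control problem under G-expectation, with cost
$Y^{x,u}$ given by the (infinite-horizon) G-BSDE with driver $f$ associated with
the controlled state $X^{x,u}$:
\begin{equation*}
  V(x) \;=\; \inf_{u \in \mathcal{U}} Y^{x,u}_{0},
  \qquad
  dX^{x,u}_{s} = b(X^{x,u}_{s},u_{s})\,ds + \sigma(X^{x,u}_{s},u_{s})\,dB_{s},
  \quad X^{x,u}_{0} = x .
\end{equation*}

Dynamic programming principle, written with the backward semigroup
$\mathbb{G}^{x,u}_{0,\delta}[\,\cdot\,]$ generated by the G-BSDE on $[0,\delta]$:
\begin{equation*}
  V(x) \;=\; \inf_{u \in \mathcal{U}}
  \mathbb{G}^{x,u}_{0,\delta}\bigl[V(X^{x,u}_{\delta})\bigr],
  \qquad \delta > 0 .
\end{equation*}

HJBI equation; the supremum over $\Gamma$, combined with the infimum over
controls, is what produces the Isaacs structure:
\begin{equation*}
  \inf_{u \in U}\,\sup_{\gamma \in \Gamma}
  \Bigl\{
    \tfrac{1}{2}\operatorname{tr}\!\bigl(\gamma\,\sigma^{\top}(x,u)\,D^{2}V(x)\,\sigma(x,u)\bigr)
    + b(x,u)\cdot DV(x)
    + f\bigl(x,V(x),\sigma^{\top}(x,u)DV(x),u\bigr)
  \Bigr\} = 0 .
\end{equation*}

\end{document}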

