Necessary condition for optimal control of doubly stochastic systems

2019, Vol. 0 (0), pp. 0-0
Author(s): Liangquan Zhang, Qing Zhou, Juan Yang
Symmetry, 2021, Vol. 13 (1), pp. 118
Author(s): Qingfeng Zhu, Yufeng Shi, Jiaqiang Wen, Hui Zhang

This paper is concerned with a type of time-symmetric stochastic system, the so-called forward–backward doubly stochastic differential equations (FBDSDEs), in which the forward equations are delayed doubly stochastic differential equations (SDEs) and the backward equations are anticipated backward doubly SDEs. Under some monotonicity assumptions, the existence and uniqueness of measurable solutions to FBDSDEs are obtained. The future evolution of many processes depends on both their current and historical states, and such processes can usually be represented by stochastic differential systems with time delay; accordingly, a class of nonzero-sum differential games for doubly stochastic systems with time delay is studied in this paper. A necessary condition of Pontryagin type for an open-loop Nash equilibrium point is established, and a sufficient condition for the Nash equilibrium point is obtained. These results are then applied to nonzero-sum differential games for linear-quadratic backward doubly stochastic systems with delay: based on the solution of the FBDSDEs, an explicit expression for the Nash equilibrium points of such game problems is derived.
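For context, a minimal sketch of the kind of equation involved: a backward doubly SDE in the sense of Pardoux–Peng is driven by two mutually independent Brownian motions W and B, with a forward Itô integral in dW and a backward Itô integral in dB. The following delay-free form is an illustration only; the system studied in the paper additionally carries time-delay terms in the forward equation and anticipated terms in the backward one:

\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + \int_t^T g(s, Y_s, Z_s)\,\overleftarrow{dB}_s - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T.
\]

The backward integral in dB is what makes the system "doubly" stochastic and time-symmetric.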


2021, Vol. 2021, pp. 1-13
Author(s): Jie Xu, Ruiqiang Lin

In this paper, we study a class of near-optimal control problems described by linear-quadratic doubly stochastic differential equations with time delay. We consider near-optimality for the linear delayed doubly stochastic system with a convex control domain, treating the case in which all the time-delay variables are distinct. We establish a maximum principle of near-optimal control for this kind of time-delay system: the necessary condition for a control to be near-optimal is deduced from Ekeland's variational principle together with estimates on the state and adjoint processes associated with the system.
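As a reminder of the main tool (stated generically, not in the paper's notation): Ekeland's variational principle says that if (V, d) is a complete metric space, F : V → ℝ is lower semicontinuous and bounded below, and u satisfies F(u) ≤ inf_V F + ε for some ε > 0, then there exists u^ε with

\[
F(u^\varepsilon) \le F(u), \qquad d(u^\varepsilon, u) \le \sqrt{\varepsilon}, \qquad
F(u^\varepsilon) \le F(v) + \sqrt{\varepsilon}\, d(v, u^\varepsilon) \quad \text{for all } v \in V.
\]

Applied to the cost functional over admissible controls, this turns a near-optimal control into an exact minimizer of a slightly perturbed functional, from which the near-maximum principle follows by the usual variational estimates.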


1975, Vol. 7 (1), pp. 154-178
Author(s): N. U. Ahmed, K. L. Teo

In this paper, the authors consider a class of stochastic systems described by Itô differential equations in which both controls and parameters are to be chosen optimally with respect to a certain performance index over a fixed time interval. The controls to be optimized depend only on partially observed current states, as in a work of Fleming; Fleming, however, considered optimal control of systems governed by stochastic Itô differential equations with a Markov terminal time. Fixed-time problems usually give rise to Cauchy problems (unbounded domain), whereas Markov-time problems give rise to first boundary value problems (bounded domain); this makes the former relatively more involved than the latter. For the latter problems, Fleming has reported a necessary condition for optimality and an existence theorem for optimal controls. In this paper, a necessary condition for optimality with respect to controls and parameters combined is presented for the former problems.
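Schematically, and in a generic fixed-interval formulation rather than the authors' exact partially observed setup, the combined control-and-parameter problem is of the form

\[
dx_t = f(t, x_t, u_t, \theta)\,dt + \sigma(t, x_t, u_t, \theta)\,dW_t, \qquad
J(u, \theta) = \mathbb{E}\Big[\int_0^T \ell(t, x_t, u_t, \theta)\,dt + h(x_T)\Big],
\]

to be minimized jointly over admissible controls u and parameters θ on the fixed horizon [0, T]; the associated partial differential equations are then posed on the whole space (a Cauchy problem) rather than on a bounded domain.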


2012, Vol. 2012, pp. 1-29
Author(s): Shaolin Ji, Qingmeng Wei, Xiumin Zhang

We study the optimal control problem of a controlled time-symmetric forward-backward doubly stochastic differential equation with initial-terminal state constraints. Applying the terminal perturbation method and Ekeland's variational principle, a necessary condition for the stochastic optimal control, that is, a stochastic maximum principle, is derived. Applications to backward doubly stochastic linear-quadratic control models are investigated.
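In schematic terms (generic notation assumed here, not the paper's exact system), the terminal perturbation method treats the terminal datum as the control variable: one minimizes over terminal states ξ subject to an initial-state constraint,

\[
\min_{\xi}\ J(\xi) \qquad \text{subject to} \qquad Y_0^{\xi} \in K,
\]

where (Y^ξ, Z^ξ) solves the backward (doubly stochastic) equation with terminal value ξ and K is a given convex set; Ekeland's variational principle then yields the multipliers behind the stated maximum principle.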


2020, Vol. 26, pp. 87
Author(s): Tao Hao, Qingxin Meng

In this paper we prove a maximum principle for an optimal control problem for a class of general mean-field forward-backward stochastic systems with jumps, in the case where the diffusion coefficients depend on the control, the control set need not be convex, the coefficients of the jump terms are independent of the control, and the coefficients of the mean-field backward stochastic differential equations depend on the joint law of (X(t), Y(t)). Since the coefficients depend on this law, higher-order mean-field terms can arise. To analyse them, two new adjoint equations are introduced and several new generic estimates of their solutions are established. Using these estimates, we obtain the second-order expansion of the cost functional, which is the key step in deriving the necessary condition, and from it the stochastic maximum principle. An illustrative application to a mean-field game is considered.
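In generic form (suppressing the jump terms and the second-order adjoint processes needed when the control domain is nonconvex), such a necessary condition is a pointwise maximum condition on a Hamiltonian that also depends on the joint law of the state:

\[
H\big(t, \bar X_t, \mathcal{L}_{(\bar X_t, \bar Y_t)}, \bar u_t, p_t, q_t\big)
= \max_{v \in U} H\big(t, \bar X_t, \mathcal{L}_{(\bar X_t, \bar Y_t)}, v, p_t, q_t\big), \qquad \text{a.e. } t,\ \mathbb{P}\text{-a.s.},
\]

where (p, q) solve the adjoint equations; the dependence on the law \(\mathcal{L}_{(\bar X_t, \bar Y_t)}\) is what forces the additional adjoint equations and estimates mentioned above.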


2014, Vol. 2014, pp. 1-12
Author(s): Qingmeng Wei

We focus on fully coupled forward-backward stochastic differential equations with jumps and investigate the associated stochastic optimal control problem (with a nonconvex control domain and a convex state constraint) together with the corresponding stochastic maximum principle. To derive the necessary condition (i.e., the stochastic maximum principle) for an optimal control, we first transform the fully coupled forward-backward stochastic control system into a fully coupled backward one; then, by the terminal perturbation method, we obtain the stochastic maximum principle. Finally, we study a linear-quadratic model.
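A generic fully coupled controlled forward-backward system with jumps (an illustrative form, with W a Brownian motion and \(\tilde N\) a compensated Poisson random measure; the paper's exact coefficients may differ) reads

\[
\begin{cases}
dX_t = b(t, \Theta_t, u_t)\,dt + \sigma(t, \Theta_t, u_t)\,dW_t + \displaystyle\int_E \varphi(t, \Theta_{t-}, u_t, e)\,\tilde N(dt, de),\\
-dY_t = f(t, \Theta_t, K_t(\cdot), u_t)\,dt - Z_t\,dW_t - \displaystyle\int_E K_t(e)\,\tilde N(dt, de),\\
X_0 = x_0, \qquad Y_T = \Phi(X_T),
\end{cases}
\qquad \Theta_t := (X_t, Y_t, Z_t),
\]

with the coupling arising because the forward coefficients depend on (Y, Z) and the backward ones on X; rewriting the whole system as a single backward system is what enables the terminal perturbation argument.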

