Optimal controls for stochastic systems with singular noise

1986 ◽ Vol 7 (1) ◽ pp. 55-59 ◽ Author(s): Nigel J. Cutland

2018 ◽ Vol 24 (4) ◽ pp. 1849-1879 ◽ Author(s): Tianxiao Wang

This paper is concerned with linear quadratic control problems for stochastic differential equations (SDEs, for short) and stochastic Volterra integral equations (SVIEs, for short). Notably, for these stochastic systems the control weight in the cost functional is allowed to be indefinite; this feature is treated here via open-loop optimal controls, rather than being restricted to the closed-loop optimal controls studied in the literature. For the linear quadratic problem of SDEs, examples are given to point out issues left open by existing papers, and new characterizations of optimal controls are obtained in several ways. For the study of SVIEs with deterministic coefficients, a class of stochastic Fredholm-Volterra integral equations is introduced to replace the conventional forward-backward SVIEs. Finally, instead of convex variation, spike variation is used to obtain additional optimality conditions for linear quadratic problems of SVIEs.
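For orientation, the stochastic LQ problem for SDEs discussed in this abstract can be sketched in its standard generic form; the coefficient matrices A, B, C, D and the weights Q, R, G below are placeholder notation, and the paper's precise assumptions may differ:

```latex
\[
\begin{aligned}
  dX(t) &= \bigl(A X(t) + B u(t)\bigr)\,dt
          + \bigl(C X(t) + D u(t)\bigr)\,dW(t), \qquad X(0) = x,\\
  J(u)  &= \mathbb{E}\Bigl[\int_0^T \bigl(\langle Q X(t), X(t)\rangle
          + \langle R u(t), u(t)\rangle\bigr)\,dt
          + \langle G X(T), X(T)\rangle\Bigr].
\end{aligned}
\]
```

An indefinite control weight means that R is not required to be positive semidefinite, so the cost J need not be convex in the control u; this is what makes the characterization of optimal controls delicate.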


1975 ◽  
Vol 7 (1) ◽  
pp. 154-178 ◽  
Author(s):  
N. U. Ahmed ◽  
K. L. Teo

In this paper, the authors consider a class of stochastic systems described by Itô differential equations, for which both controls and parameters are to be chosen optimally with respect to a certain performance index over a fixed time interval. The controls to be optimized depend only on partially observed current states, as in a work of Fleming. Fleming, however, considered a problem of optimal control of systems governed by stochastic Itô differential equations with a Markov terminal time. Fixed-time problems usually give rise to Cauchy problems (unbounded domain), whereas Markov-time problems give rise to first boundary value problems (bounded domain); this makes the former relatively more involved than the latter. For the latter problems, Fleming has reported a necessary condition for optimality and an existence theorem for optimal controls. In this paper, a necessary condition for optimality, for controls and parameters taken together, is presented for the former problems.
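As a sketch, the fixed-time control problem described above takes the following generic form; the drift f, diffusion σ, running cost L, and terminal cost Φ are placeholder notation, and the paper's exact setting may differ:

```latex
\[
  dX(t) = f\bigl(t, X(t), u(t)\bigr)\,dt
        + \sigma\bigl(t, X(t)\bigr)\,dW(t), \qquad t \in [0, T],
\]
\[
  J(u) = \mathbb{E}\Bigl[\int_0^T L\bigl(t, X(t), u(t)\bigr)\,dt
       + \Phi\bigl(X(T)\bigr)\Bigr].
\]
```

With a fixed horizon T, the associated dynamic-programming (HJB) equation is posed as a Cauchy problem on the whole state space, whereas a Markov terminal time (e.g. the exit time from a bounded domain) leads to a first boundary value problem on that domain, which explains the contrast drawn in the abstract.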


