Optimal feedback control of stochastic McShane differential systems

1974 ◽  
Vol 11 (2) ◽  
pp. 302-309 ◽  
Author(s):  
N. U. Ahmed ◽  
K. L. Teo

In this paper, the optimal control problem for systems described by stochastic McShane differential equations is considered. It is shown that this problem can be reduced to an equivalent optimal control problem for distributed parameter systems of parabolic type, with the controls appearing in the coefficients of the differential operator. For this reduced problem, necessary conditions for optimality and an existence theorem for optimal controls are given.
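As a hedged illustration of the objects involved (the coefficients f, g, h, the noise processes z^\rho and the density p below are assumed notation for this sketch, not taken from the paper), a McShane-type system in canonical form and the parabolic equation governing its transition density look roughly like

\[
% sketch only; notation assumed, not taken from the paper
dx^i_t = f^i(t,x_t,u)\,dt + g^i_{\rho}(t,x_t,u)\,dz^{\rho}_t + h^i_{\rho\sigma}(t,x_t,u)\,dz^{\rho}_t\,dz^{\sigma}_t ,
\]
\[
% the control u enters the coefficients of the parabolic (Kolmogorov forward) operator
\frac{\partial p}{\partial t}
 = \sum_{i,j}\frac{\partial^{2}}{\partial x_i\,\partial x_j}\bigl(a_{ij}(t,x,u)\,p\bigr)
 - \sum_{i}\frac{\partial}{\partial x_i}\bigl(b_{i}(t,x,u)\,p\bigr),
\]

which is the sense in which the stochastic problem becomes a distributed parameter control problem with controls in the operator coefficients.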


1992 ◽  
Vol 45 (2) ◽  
pp. 305-326 ◽  
Author(s):  
Jiongmin Yong ◽  
Pingjian Zhang

The optimal control problem for semilinear evolutionary distributed parameter systems with impulse controls is considered. Necessary conditions for optimal controls are derived. The result generalises the usual Pontryagin maximum principle.
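As a hedged sketch of the generic setting (the operator A, nonlinearity f, impulse maps g_i, impulse instants \tau_i and impulse controls u_i are assumed notation), such a system has the shape

\[
% sketch; notation assumed, not taken from the paper
\dot y(t) = A y(t) + f\bigl(t, y(t)\bigr), \quad t \neq \tau_i, \qquad
y(\tau_i^{+}) = y(\tau_i^{-}) + g_i\bigl(y(\tau_i^{-}), u_i\bigr),
\]

and the maximum principle supplies first-order necessary conditions on the impulse controls u_i minimising a cost functional J(y, u_1, \dots, u_N).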


2012 ◽  
Vol 2012 ◽  
pp. 1-50 ◽  
Author(s):  
Jingtao Shi

This paper deals with the general optimal control problem for fully coupled forward-backward stochastic differential equations with random jumps (FBSDEJs). The control domain is not assumed to be convex, and the control variable appears in both the diffusion and jump coefficients of the forward equation. Necessary conditions of Pontryagin type for the optimal controls are derived by means of a spike variation technique and Ekeland's variational principle. A linear-quadratic stochastic optimal control problem is discussed as an illustrative example.
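A hedged sketch of a fully coupled FBSDEJ in the generic form usually studied (the coefficients b, \sigma, c, f, g, the Brownian motion W and the compensated Poisson random measure \tilde N are assumed notation, not taken from the paper):

\[
% sketch; \Theta_t := (x_t, y_t, z_t, k_t), so forward and backward equations are fully coupled
dx_t = b(t,\Theta_t,u_t)\,dt + \sigma(t,\Theta_t,u_t)\,dW_t + \int_{E} c(t,\Theta_{t-},u_t,e)\,\tilde N(de,dt), \qquad x_0 = a,
\]
\[
% the control u_t appears in both the diffusion \sigma and the jump coefficient c
dy_t = -f(t,\Theta_t,u_t)\,dt + z_t\,dW_t + \int_{E} k_t(e)\,\tilde N(de,dt), \qquad y_T = g(x_T),
\]

with a cost of the form J(u) = \mathbb{E}\bigl[\int_0^T l(t,\Theta_t,u_t)\,dt + \Phi(x_T) + \gamma(y_0)\bigr] to be minimised over admissible controls u.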


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Eun-Young Ju ◽  
Jin-Mun Jeong

We deal with optimal control problems governed by semilinear parabolic-type equations, in particular those described by variational inequalities. We also characterize the optimal controls by deriving necessary conditions for optimality, which rests on proving the Gâteaux differentiability of the solution mapping with respect to the control variables.
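As a hedged sketch (the operators A, B, the convex function \phi, target y_d and cost J below are assumed notation, not taken from the paper), a parabolic variational inequality of this type can be written in subdifferential form as

\[
% sketch; the inclusion is equivalent to a variational inequality
\frac{\partial y}{\partial t} + A y + \partial\phi(y) \ni f + B u, \qquad y(0) = y_0,
\]

and, writing u \mapsto y(u) for the solution mapping, a necessary condition for an optimal u^{*} of a cost such as J(u) = \tfrac12\|y(u) - y_d\|^{2} + \tfrac12\|u\|^{2} reads DJ(u^{*})(v - u^{*}) \ge 0 for all admissible v, which is where the Gâteaux differentiability of y(\cdot) enters.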


1991 ◽  
Vol 43 (2) ◽  
pp. 211-224
Author(s):  
Nikolaos S. Papageorgiou

In this paper we examine a Lagrange optimal control problem driven by a nonlinear evolution equation involving a nonmonotone, state-dependent perturbation term. For this problem we establish the existence of optimal admissible pairs. For the same system we also examine a time-optimal control problem involving a moving target set. Finally, we work out in detail an example of a strongly nonlinear parabolic distributed parameter system.
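A hedged sketch of the generic Lagrange problem (the monotone operator A, the nonmonotone perturbation f, the integrand L and the way the control enters are assumptions of this sketch, not the paper's):

\[
% sketch; notation assumed, not taken from the paper
\min_{u}\ J(x,u) = \int_{0}^{T} L\bigl(t, x(t), u(t)\bigr)\,dt
\quad\text{subject to}\quad
\dot x(t) + A\bigl(t, x(t)\bigr) = f\bigl(t, x(t), u(t)\bigr), \quad x(0) = x_0,
\]

with the time-optimal variant asking for the smallest t at which x(t) reaches a moving target set K(t).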


2018 ◽  
Vol 36 (3) ◽  
pp. 779-833
Author(s):  
Daniel Bankmann ◽  
Matthias Voigt

In this work we investigate explicit and implicit difference equations and the corresponding infinite-time-horizon linear-quadratic optimal control problem. We derive conditions for feasibility of the optimal control problem as well as existence and uniqueness of optimal controls under assumptions weaker than those of the standard approaches in the literature, which rely on algebraic Riccati equations. To this end, we introduce and analyse a discrete-time Lur’e equation and a corresponding Kalman–Yakubovich–Popov (KYP) inequality. We show that solvability of the KYP inequality can be characterized via the spectral structure of a certain palindromic matrix pencil. The deflating subspaces of this pencil are finally used to construct solutions of the Lur’e equation. The results of this work are transferred from the continuous-time case; however, many additional technical difficulties arise in this context.
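As a hedged sketch of the objects named here (the system matrices A, B, the cost weights Q, S, R and the solution variables X, C, D are assumed notation, not taken from the paper), for a discrete-time system x_{k+1} = A x_k + B u_k with quadratic cost the KYP inequality and the Lur’e equation have the generic shape

\[
% KYP inequality (sketch; X = X^{\top})
\begin{bmatrix} A^{\top} X A - X + Q & A^{\top} X B + S \\ S^{\top} + B^{\top} X A & B^{\top} X B + R \end{bmatrix} \succeq 0,
\]
\[
% Lur'e equation (sketch): the same block matrix is required to admit a low-rank factorisation
\begin{bmatrix} A^{\top} X A - X + Q & A^{\top} X B + S \\ S^{\top} + B^{\top} X A & B^{\top} X B + R \end{bmatrix}
 = \begin{bmatrix} C^{\top} \\ D^{\top} \end{bmatrix} \begin{bmatrix} C & D \end{bmatrix},
\]

and solvability is then read off the spectral structure of an associated palindromic matrix pencil.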

