Necessary Condition for Optimal Controls, the Case of Convex Control Domains

Author(s):  
Qi Lü ◽  
Xu Zhang


2021 ◽
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jie Xu ◽  
Ruiqiang Lin

In this paper, we study a class of near-optimal control problems described by linear quadratic doubly stochastic differential equations with time delay. We consider near-optimality for the linear delayed doubly stochastic system with a convex control domain, and we treat the case in which all the time-delay variables are different. We give a maximum principle of near-optimal control for this kind of time-delay system. The necessary condition for a control to be near-optimal is deduced from Ekeland's variational principle together with some estimates on the state and adjoint processes corresponding to the system.
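As a generic illustration of the shape such results usually take (a sketch only, not the paper's exact statement, and with notation introduced here), Ekeland's variational principle on a convex control domain $U$ typically produces an approximate variational inequality for the Hamiltonian $H$ along an $\varepsilon$-optimal pair $(x^{\varepsilon}, u^{\varepsilon})$ with adjoint processes $(p^{\varepsilon}, q^{\varepsilon})$:

\[
\big\langle H_u\big(t, x^{\varepsilon}(t), u^{\varepsilon}(t), p^{\varepsilon}(t), q^{\varepsilon}(t)\big),\; u - u^{\varepsilon}(t) \big\rangle \;\ge\; -C\,\varepsilon^{1/3},
\qquad \forall\, u \in U,\ \text{a.e. } t,\ \text{a.s.},
\]

where $C$ is a constant independent of $\varepsilon$; in the delayed, doubly stochastic setting the Hamiltonian and the adjoint equations additionally carry the delay terms, and the exponent $1/3$ is only the value commonly obtained in near-optimality estimates of this type.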


Author(s):  
Evgenii Khailov ◽  
Nikolai Grigorenko ◽  
Ellina Grigorieva ◽  
Anna Klimenkova

This book is devoted to a consistent presentation of the recent results obtained by the authors on controlled systems built from the Lotka-Volterra competition model, as well as to the theoretical and numerical study of the corresponding optimal control problems. These controlled systems describe various modern methods of treating blood cancers, and the optimal control problems stated for such systems reflect the search for optimal treatment strategies. The main tool of the theoretical analysis used in this book is the Pontryagin maximum principle, a necessary condition for optimality in optimal control problems. Possible types of optimal blood cancer treatment, that is, the optimal controls, are obtained through analytical investigation and confirmed by corresponding numerical calculations. This book can be used as a supplementary text in courses on mathematical modeling for upper undergraduate and graduate students. We believe that this text will be of interest to all professors teaching such or similar courses, as well as to everyone interested in modern optimal control theory and its biomedical applications.
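To make the modeling setting concrete, here is a minimal simulation sketch of a controlled two-population Lotka-Volterra competition system with a bang-bang treatment control, which is the qualitative shape of control the Pontryagin maximum principle often predicts; the equations, coefficients, and switching time below are hypothetical placeholders and are not taken from the book.

import numpy as np

# Hypothetical controlled Lotka-Volterra competition model (illustrative only):
# healthy cells x, leukemic cells y, treatment intensity u(t) in [0, u_max].
#   x' = x (a1 - b11 x - b12 y) - k1 u x
#   y' = y (a2 - b21 x - b22 y) - k2 u y
# All coefficients below are made-up placeholders, not the book's values.

a1, a2 = 1.0, 1.2          # intrinsic growth rates
b11, b12 = 1.0, 0.8        # self- and cross-competition, population x
b21, b22 = 1.1, 1.0        # self- and cross-competition, population y
k1, k2 = 0.2, 1.0          # drug toxicity on x, kill rate on y
u_max = 1.0                # maximal admissible treatment intensity

def control(t, t_switch=3.0):
    """Bang-bang control with a single hypothetical switching time."""
    return u_max if t < t_switch else 0.0

def simulate(x0=0.5, y0=0.5, T=10.0, dt=1e-3):
    """Forward Euler integration of the controlled competition system."""
    n = int(T / dt)
    x, y = x0, y0
    traj = np.empty((n, 3))
    for i in range(n):
        t = i * dt
        u = control(t)
        dx = x * (a1 - b11 * x - b12 * y) - k1 * u * x
        dy = y * (a2 - b21 * x - b22 * y) - k2 * u * y
        x, y = x + dt * dx, y + dt * dy
        traj[i] = (t, x, y)
    return traj

if __name__ == "__main__":
    traj = simulate()
    print("final (x, y):", traj[-1, 1], traj[-1, 2])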


1975 ◽  
Vol 7 (1) ◽  
pp. 154-178 ◽  
Author(s):  
N. U. Ahmed ◽  
K. L. Teo

In this paper, the authors consider a class of stochastic systems described by Ito differential equations in which both controls and parameters are to be chosen optimally with respect to a certain performance index over a fixed time interval. The controls to be optimized depend only on partially observed current states, as in a work of Fleming. Fleming, however, considered a problem of optimal control of systems governed by stochastic Ito differential equations with a Markov terminal time. Fixed-time problems usually give rise to Cauchy problems (unbounded domain), whereas Markov-time problems give rise to first boundary value problems (bounded domain); this makes the former relatively more involved than the latter. For the latter problems, Fleming has reported a necessary condition for optimality and an existence theorem for optimal controls. In this paper, a necessary condition for optimality with respect to controls and parameters combined is presented for the former problems.
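In generic notation (an illustrative formulation under naming chosen here, not the authors' exact statement), the fixed-time problem class has the form

\[
dx(t) = f\big(t, x(t), u(t), \theta\big)\,dt + \sigma\big(t, x(t), \theta\big)\,dW(t), \qquad t \in [0, T],
\]
\[
J(u, \theta) = \mathbb{E}\left[ \int_0^T \ell\big(t, x(t), u(t), \theta\big)\,dt + g\big(x(T)\big) \right],
\]

where the control $u$ is restricted to depend only on the partially observed current state and both $u$ and the parameter $\theta$ are chosen to optimize $J$ over the fixed interval $[0, T]$; in the Markov-terminal-time version studied by Fleming, $T$ is replaced by a first exit time from a bounded domain, which is what turns the associated Cauchy problem into a first boundary value problem.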


Fluids ◽  
2020 ◽  
Vol 5 (3) ◽  
pp. 144
Author(s):  
Leonardo Chirco ◽  
Sandro Manservisi

Fluid–structure interaction (FSI) systems consist of a fluid that flows against and deforms one or more surrounding solid structures. In this paper, we study inverse FSI problems, where the goal is to find the optimal value of some control parameters such that the FSI solution is close to a desired one. The optimal control problems are formulated with the Lagrange multiplier and adjoint variable formalism. In order to recover the symmetry of the stationary state-adjoint system, an auxiliary displacement field is introduced and used to extend the velocity field from the fluid into the structure domain. As a consequence, the adjoint interface forces are balanced automatically. We present three different FSI optimal controls: inverse parameter estimation, boundary control, and distributed control. The optimality system is derived from the first-order necessary condition by taking the Fréchet derivatives of the augmented Lagrangian with respect to all the variables involved. The optimal solution is obtained through a gradient-based algorithm applied to the optimality system. Numerical tests are performed to support the proposed approach and to compare the three optimal control strategies.
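The structure of such a gradient-based loop (state solve, adjoint solve, gradient assembly, control update) can be sketched on a small linear-quadratic model problem; the code below is purely illustrative, with arbitrary matrices, data, and a fixed step size, and does not reproduce the FSI solver, the auxiliary displacement field, or the paper's augmented Lagrangian.

import numpy as np

# Toy illustration of an adjoint-based gradient loop on a linear-quadratic
# model problem (not the FSI system):
#   state equation    A x = B u + f
#   cost              J(u) = 0.5 ||x - x_d||^2 + 0.5 * alpha * ||u||^2
#   adjoint equation  A^T p = x - x_d
#   reduced gradient  dJ/du = alpha * u + B^T p
# Matrices, data, and the step size below are arbitrary placeholders.

rng = np.random.default_rng(0)
n, m = 20, 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
f = rng.standard_normal(n)
x_d = rng.standard_normal(n)   # desired state
alpha = 1e-2                   # control regularization weight
step = 0.02                    # fixed gradient step (no line search)

u = np.zeros(m)
for it in range(500):
    x = np.linalg.solve(A, B @ u + f)      # state solve
    p = np.linalg.solve(A.T, x - x_d)      # adjoint solve
    grad = alpha * u + B.T @ p             # reduced gradient
    u -= step * grad                       # descent update
    if np.linalg.norm(grad) < 1e-8:
        break

print("iterations:", it + 1, "cost:",
      0.5 * np.linalg.norm(x - x_d) ** 2 + 0.5 * alpha * np.linalg.norm(u) ** 2)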


2021 ◽  
Vol 58 ◽  
pp. 48-58
Author(s):  
I.V. Izmestyev ◽  
V.I. Ukhobotov

In a normed space of finite dimension, a discrete game problem with fixed duration is considered. The terminal set is determined by the condition that the norm of the phase vector belongs to a segment with positive endpoints; in this paper, a set defined by this condition is called a ring. At each moment, the vectogram of the first player's controls is a certain ring, while the controls of the second player are taken from balls with given radii. The goal of the first player is to steer the phase vector to the terminal set at the fixed terminal time; the goal of the second player is the opposite. Necessary and sufficient termination conditions are found, and optimal controls of the players are constructed.
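In symbols (a restatement of the setup with notation chosen here), the terminal set and the players' constraint sets read

\[
M = \{\, x : a \le \|x\| \le b \,\},\ \ 0 < a \le b, \qquad
u(t) \in \{\, u : r_1(t) \le \|u\| \le r_2(t) \,\}, \qquad
\|v(t)\| \le \rho(t),
\]

where $u$ and $v$ are the first and second players' controls and $r_1, r_2, \rho$ denote the given time-dependent radii.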


1997 ◽  
Vol 161 ◽  
pp. 267-282 ◽  
Author(s):  
Thierry Montmerle

For life to develop, planets are a necessary condition. Likewise, for planets to form, stars must be surrounded by circumstellar disks, at least for some time during their pre-main sequence evolution. Much progress has been made recently in the study of young solar-like stars. In the optical domain, these stars are known as «T Tauri stars». A significant number show IR excess and other phenomena indirectly suggesting the presence of circumstellar disks. The current wisdom is that there is an evolutionary sequence from protostars to T Tauri stars. This sequence is characterized by the initial presence of disks, with lifetimes of ~ 1-10 Myr after the initial collapse of a dense envelope having given birth to a star. While they are present, about 30% of the disks have masses larger than the minimum solar nebula. Their disappearance may correspond to the growth of dust grains, followed by planetesimal and planet formation, but this is not yet demonstrated.

