Pontryagin’s Maximum Principle for Optimal Control of Stochastic SEIR Models

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-5
Author(s):  
Ruimin Xu ◽  
Rongwei Guo

In this paper, we study necessary conditions as well as sufficient conditions for optimality of a stochastic SEIR model. The most distinguishing feature, compared with the well-studied deterministic SEIR model, is that the system follows stochastic differential equations (SDEs) driven by Brownian motions. A Hamiltonian function is introduced to derive the necessary conditions. Using the explicit formulation of the adjoint variables, the desired necessary conditions for optimal control are obtained. We also establish a sufficient condition, known as a verification theorem, for the stochastic SEIR model.
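
For orientation, a controlled stochastic SEIR system of the kind referred to here can be written as a set of Itô SDEs; the particular drift, diffusion and control terms below (vaccination rate u, noise intensities σ_i) are illustrative assumptions, not the authors' exact model:

\[
\begin{aligned}
dS_t &= \bigl(\Lambda - \beta S_t I_t - \mu S_t - u_t S_t\bigr)\,dt + \sigma_1 S_t\,dB_t^{1},\\
dE_t &= \bigl(\beta S_t I_t - (\mu + \kappa)E_t\bigr)\,dt + \sigma_2 E_t\,dB_t^{2},\\
dI_t &= \bigl(\kappa E_t - (\mu + \gamma)I_t\bigr)\,dt + \sigma_3 I_t\,dB_t^{3},\\
dR_t &= \bigl(\gamma I_t + u_t S_t - \mu R_t\bigr)\,dt + \sigma_4 R_t\,dB_t^{4}.
\end{aligned}
\]

Writing $x_t = (S_t, E_t, I_t, R_t)$ with drift $b$, diffusion $\sigma$ and running cost $L$, the Hamiltonian of the stochastic maximum principle takes (up to sign conventions) the standard form

\[
H(t, x, u, p, q) = \langle b(t, x, u), p\rangle + \operatorname{tr}\bigl(\sigma(t, x, u)^{\top} q\bigr) + L(t, x, u),
\]

where the adjoint pair $(p_t, q_t)$ solves a backward SDE whose terminal condition is given by the gradient of the terminal cost.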

1968 ◽  
Vol 8 (1) ◽  
pp. 114-118 ◽  
Author(s):  
A. W. J. Stoddart

In [4], Hanson has obtained necessary conditions and sufficient conditions for optimality of a program in stochastic systems. However, in many cases, especially in a general treatment, a program satisfying these conditions cannot be determined explicitly, so that the question of existence of an optimal program in such systems is significant. In this paper, we obtain conditions sufficient for existence of an optimal program by applying the direct methods of the calculus of variations [9], [6] and the theory of optimal control [7], [5].
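
As background, the direct-method argument referred to here runs along standard lines; the functional $J$ below is generic, not Stoddart's specific stochastic program:

\[
J(x, u) = \int_{0}^{T} L\bigl(t, x(t), u(t)\bigr)\,dt \longrightarrow \min,
\]

and existence is obtained by taking a minimizing sequence of admissible pairs, extracting a subsequence that converges in a suitable weak topology (compactness of the admissible class), and passing to the limit using sequential lower semicontinuity of $J$, typically guaranteed by convexity of $L$ in the control variable together with coercivity (growth) conditions.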


1974 ◽  
Vol 6 (04) ◽  
pp. 622-635 ◽  
Author(s):  
R. Morton ◽  
K. H. Wickwire

A control scheme for the immunisation of susceptibles in the Kermack-McKendrick epidemic model for a closed population is proposed. The bounded control appears linearly in both the dynamics and the integral cost functionals, and any optimal policies are of the “bang-bang” type. The approach uses Dynamic Programming and Pontryagin's Maximum Principle and allows one, for certain values of the cost and removal rates, to apply necessary and sufficient conditions for optimality and show that a one-switch candidate is the optimal control. In the remaining cases we are still able to show that an optimal control, if it exists, has at most one switch.
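
A minimal numerical sketch of this setting: a Kermack-McKendrick (SIR) model with a bounded immunisation rate entering the dynamics and the cost linearly, integrated under a one-switch bang-bang policy. The parameter values, the form of the immunisation term, and the switching time are illustrative assumptions, not values from the paper.

def simulate_sir_bang_bang(beta=0.5, gamma=0.2, u_max=0.1, t_switch=10.0,
                           S0=0.99, I0=0.01, T=50.0, dt=0.01,
                           c_infection=1.0, c_control=0.5):
    """Forward-Euler integration of a controlled Kermack-McKendrick (SIR) model

        S' = -beta*S*I - u(t)*S,   I' = beta*S*I - gamma*I,

    under the one-switch bang-bang immunisation policy u(t) = u_max for
    t < t_switch and u(t) = 0 afterwards, with the linear running cost
    c_infection*I + c_control*u(t)*S.  All parameter values and the switching
    time are illustrative assumptions, not taken from the paper.
    """
    steps = int(T / dt)
    S, I, cost = S0, I0, 0.0
    for k in range(steps):
        t = k * dt
        u = u_max if t < t_switch else 0.0   # bang-bang: full effort, then off
        cost += (c_infection * I + c_control * u * S) * dt
        # explicit Euler step using the current (old) state values
        S, I = S + (-beta * S * I - u * S) * dt, I + (beta * S * I - gamma * I) * dt
    return S, I, cost

if __name__ == "__main__":
    S_end, I_end, total_cost = simulate_sir_bang_bang()
    print(f"final S = {S_end:.4f}, final I = {I_end:.4f}, cost = {total_cost:.4f}")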


2013 ◽  
Vol 2013 ◽  
pp. 1-4
Author(s):  
Clara Carlota ◽  
Sílvia Chá ◽  
António Ornelas

In applications of the Calculus of Variations, Optimal Control, and Differential Inclusions, very important real-life problems are nonconvex, vectorial, and subject to pointwise constraints. The classical Liapunov convexity theorem is a crucial tool allowing researchers to solve nonconvex vectorial problems involving single integrals. However, the possibility of extending this theorem to deal with pointwise constraints, in the more realistic case of variable vectorial velocities, has remained an open problem for two decades. We have recently solved it, in the sense of proving necessary conditions and sufficient conditions for solvability of such problems. A quick overview of our results is presented here, the main point being that, roughly speaking, convex constrained nonuniqueness a.e. implies nonconvex constrained existence.
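
For reference, the classical Liapunov convexity theorem invoked above states the following (the constrained, variable-velocity extension proved by the authors is not reproduced here):

\[
\mu = (\mu_1, \dots, \mu_m) \ \text{a finite nonatomic vector measure on } (\Omega, \Sigma)
\;\Longrightarrow\;
\{\mu(A) : A \in \Sigma\} \ \text{is a convex, compact subset of } \mathbb{R}^m.
\]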


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Maoning Tang

This paper makes a first attempt to investigate the near-optimal control of systems governed by fully nonlinear coupled forward-backward stochastic differential equations (FBSDEs) under the assumption of a convex control domain. By Ekeland's variational principle and some basic estimates for the state and adjoint processes, we establish necessary conditions for any ε-near optimal control in a local form, with an error order of exactly ε^{1/2}. Moreover, under additional convexity conditions on the Hamiltonian function, we prove that an ε-maximum condition on the Hamiltonian in integral form is sufficient for near-optimality of order ε^{1/2}.
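
Fully coupled controlled FBSDEs are, up to notation, of the standard form below; the paper's specific coefficients and cost functional are not reproduced here:

\[
\begin{aligned}
dX_t &= b(t, X_t, Y_t, Z_t, u_t)\,dt + \sigma(t, X_t, Y_t, Z_t, u_t)\,dW_t, & X_0 &= x_0,\\
dY_t &= -f(t, X_t, Y_t, Z_t, u_t)\,dt + Z_t\,dW_t, & Y_T &= g(X_T),
\end{aligned}
\]

and a control $u^{\varepsilon}$ is called ε-optimal (near-optimal of order ε) when its cost exceeds the infimum over admissible controls by at most ε.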


2021 ◽  
Vol 18 (5) ◽  
pp. 6452-6483
Author(s):  
Keguo Ren ◽  
Xining Li ◽  
Qimin Zhang

Near-optimization is as sensible and important as optimization, for both theory and applications. This paper concerns the near-optimal control of an avian influenza model with saturation on heterogeneous complex networks. Firstly, the basic reproduction number $\mathcal{R}_{0}$ is defined for the model, which can be used to govern the threshold dynamics of the disease. Secondly, the near-optimal control problem is formulated in terms of slaughtering poultry and treating infected humans while keeping the loss and cost to a minimum. Thanks to the maximum condition on the Hamiltonian function and Ekeland's variational principle, we establish both necessary and sufficient conditions for near-optimality through several delicate estimates for the state and adjoint processes. Finally, a number of examples are presented to illustrate our theoretical results.
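
For context, the basic reproduction number of compartmental epidemic models is commonly defined via the next-generation matrix; whether the authors use exactly this construction for the network model is an assumption here:

\[
\mathcal{R}_0 = \rho\bigl(F V^{-1}\bigr),
\]

where $F$ collects the rates of new infections, $V$ the transition rates between infected compartments (both linearized at the disease-free equilibrium), and $\rho(\cdot)$ denotes the spectral radius; the disease-free equilibrium is typically stable when $\mathcal{R}_0 < 1$ and the disease can persist when $\mathcal{R}_0 > 1$.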

