Optimal control of stochastic Ito differential systems with fixed terminal time

1975 ◽  
Vol 7 (1) ◽ 
pp. 154-178
Author(s):  
N. U. Ahmed ◽  
K. L. Teo

In this paper, the authors consider a class of stochastic systems described by Itô differential equations in which both controls and parameters are to be chosen optimally with respect to a certain performance index over a fixed time interval. The controls to be optimized depend only on partially observed current states, as in the work of Fleming, who considered instead the optimal control of systems governed by stochastic Itô differential equations with a Markov terminal time. Fixed-time problems usually give rise to Cauchy problems (unbounded domain), whereas Markov-time problems give rise to first boundary value problems (bounded domain); this makes the former problems more involved than the latter. For the latter problems, Fleming has reported a necessary condition for optimality and an existence theorem for optimal controls. In this paper, a necessary condition for optimality, jointly in the controls and the parameters, is presented for the former problems.
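As a concrete illustration of the kind of system involved, the following sketch simulates a controlled scalar Itô SDE over a fixed time interval and estimates a quadratic performance index by Monte Carlo. The drift, noise level, cost functional, and restriction to constant controls are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): simulate the controlled
# scalar Ito SDE  dX = (a*X + u) dt + sigma dW  on a fixed interval [0, T]
# by Euler-Maruyama and estimate a quadratic cost by Monte Carlo. The drift
# a, noise level sigma, and the constant controls are assumptions.
rng = np.random.default_rng(0)

def cost(u, a=-1.0, sigma=0.3, x0=1.0, T=1.0, n=400):
    """One Euler-Maruyama sample of J(u) = int_0^T (X_t^2 + u^2) dt."""
    dt = T / n
    x, j = x0, 0.0
    for _ in range(n):
        j += (x * x + u * u) * dt
        x += (a * x + u) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return j

# Crude comparison of a few constant controls for the fixed-terminal-time problem.
costs = {u: np.mean([cost(u) for _ in range(100)]) for u in (-0.5, 0.0, 0.5)}
best = min(costs, key=costs.get)
print(f"best constant control: {best}, estimated cost: {costs[best]:.3f}")
```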


2019 ◽  
Vol 2019 ◽  
pp. 1-13 ◽  
Author(s):  
Fernando Saldaña ◽  
Andrei Korobeinikov ◽  
Ignacio Barradas

We investigate optimal vaccination and screening strategies to minimize human papillomavirus (HPV) associated morbidity and the cost of the interventions. We propose a two-sex compartmental model of HPV infection with time-dependent controls (vaccination of adolescents, vaccination of adults, and screening) which can act simultaneously. We formulate optimal control problems complementing our model with two different objective functionals. The first functional corresponds to the protection of the vulnerable group, and the control problem consists of minimizing the cumulative level of infected females over a fixed time interval. The second functional aims to eliminate the infection, so the control problem consists of minimizing the total prevalence at the end of the time interval. We prove the existence of solutions for the control problems, characterize the optimal controls, and carry out numerical simulations using various initial conditions. The results, together with the properties and drawbacks of the model, are discussed.
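The two objectives can be contrasted on a toy system. The sketch below is a hedged stand-in, not the authors' model: a symmetric two-sex SIS system in which constant surrogates for the time-dependent controls (vaccination scaling down transmission, screening adding to the removal rate) are compared against the uncontrolled baseline; all parameter values are assumptions.

```python
# Hedged toy sketch (not the authors' HPV model): a symmetric two-sex SIS
# system. u_vac scales down cross-sex transmission, u_scr adds to the
# removal rate; both are constant stand-ins for the time-dependent controls.
def simulate(u_vac=0.0, u_scr=0.0, beta=0.8, gamma=0.2, T=50.0, n=5000):
    """Forward Euler for the infected fractions (If, Im).
    Returns (cumulative female infection over [0, T], terminal prevalence)."""
    dt = T / n
    If = Im = 0.01
    cum_f = 0.0
    for _ in range(n):
        cum_f += If * dt
        dIf = beta * (1 - u_vac) * (1 - If) * Im - (gamma + u_scr) * If
        dIm = beta * (1 - u_vac) * (1 - Im) * If - (gamma + u_scr) * Im
        If += dIf * dt
        Im += dIm * dt
    return cum_f, If + Im

# The first objective tracks the first return value (cumulative infected
# females); the second tracks the terminal prevalence.
base = simulate()
ctrl = simulate(u_vac=0.5, u_scr=0.1)
print(f"no control: {base}, with control: {ctrl}")
```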


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 118
Author(s):  
Qingfeng Zhu ◽  
Yufeng Shi ◽  
Jiaqiang Wen ◽  
Hui Zhang

This paper is concerned with a type of time-symmetric stochastic system, namely the so-called forward–backward doubly stochastic differential equations (FBDSDEs), in which the forward equations are delayed doubly stochastic differential equations (SDEs) and the backward equations are anticipated backward doubly SDEs. Under some monotonicity assumptions, the existence and uniqueness of measurable solutions to FBDSDEs are obtained. The future development of many processes depends on both their current state and their history, and such processes can usually be represented by stochastic differential systems with time delay. A class of nonzero-sum differential games for doubly stochastic systems with time delay is therefore studied in this paper. A necessary condition of Pontryagin maximum principle type for an open-loop Nash equilibrium point is established, and a sufficient condition for the Nash equilibrium point is obtained. Furthermore, the above results are applied to nonzero-sum differential games for linear quadratic backward doubly stochastic systems with delay. Based on the solution of the FBDSDEs, an explicit expression for the Nash equilibrium points of such game problems is established.


Author(s):  
Mohammad A. Kazemi

In this paper a class of optimal control problems with distributed parameters is considered. The governing equations are nonlinear first-order partial differential equations that arise in the study of heterogeneous reactors and the control of chemical processes. A conditional gradient method is used to devise an algorithm for solving such optimal control problems; the main focus of the present paper is the mathematical theory underlying this algorithm. A formula for the Fréchet derivative of the objective function is obtained, and its properties are studied. A necessary condition for optimality in terms of the Fréchet derivative is presented, and it is then shown that any accumulation point of the sequence of admissible controls generated by the algorithm satisfies this necessary condition for optimality.
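The conditional gradient (Frank–Wolfe) iteration referred to above can be sketched on a toy finite-dimensional problem rather than the paper's PDE-constrained one: at each step the linearized objective is minimized over the constraint set, and the iterate moves a convex step toward that minimizer. The box constraints, quadratic objective, and step-size rule below are illustrative assumptions.

```python
import numpy as np

# Hedged illustration of the conditional gradient (Frank-Wolfe) method on a
# toy box-constrained quadratic, not the paper's distributed-parameter problem.
def conditional_gradient(grad, lo, hi, x0, steps=200):
    """Minimize over the box [lo, hi]^n: at each step, minimize the
    linearized objective over the box (solved coordinate-wise, giving a
    box vertex) and take a convex combination step."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        g = grad(x)
        s = np.where(g > 0, lo, hi)      # vertex minimizing <g, s> over the box
        x += 2.0 / (k + 2.0) * (s - x)   # classical step size 2/(k+2)
    return x

# Toy objective f(x) = 0.5 * ||x - c||^2 with c partly outside the box;
# the iterates approach [0.5, 1.0, -1.0], the projection of c onto the box.
c = np.array([0.5, 2.0, -1.5])
x = conditional_gradient(lambda x: x - c, lo=-1.0, hi=1.0, x0=np.zeros(3))
print(np.round(x, 3))
```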


2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Rui Zhang ◽  
Yinjing Guo ◽  
Xiangrong Wang ◽  
Xueqing Zhang

This paper extends the stochastic stability criteria in terms of two measures to mean stability and proves stability criteria for a class of stochastic Itô systems. Moreover, by applying optimal control approaches, mean stability criteria in terms of two measures are also obtained for stochastic systems with coefficient uncertainty.


2012 ◽  
Vol 29 (06) ◽  
pp. 1250033
Author(s):  
VIRTUE U. EKHOSUEHI ◽  
AUGUSTINE A. OSAGIEDE

In this study, we apply optimal control theory to determine the optimum value of tax revenues accruing to a state, given the range of budgeted expenditure on enforcing tax laws and on creating awareness about paying the correct tax. This is achieved by maximizing the state's net tax revenue over a fixed time interval subject to certain constraints. Assuming that the satisfaction the Federal Government of Nigeria derives from an individual state's ability to generate tax revenue close to the optimum (via the state's control problem) is described by the logarithmic form of the Cobb–Douglas utility function, a formula for horizontal revenue allocation in Nigeria is derived in its raw form. Afterwards, we illustrate the use of the proposed horizontal revenue allocation formula using hypothetical data.
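One well-known consequence of a logarithmic Cobb–Douglas utility, which a sketch can make concrete: maximizing U(x) = Σᵢ aᵢ log(xᵢ) subject to a fixed pool Σᵢ xᵢ = R yields the proportional split xᵢ = (aᵢ/Σⱼ aⱼ)·R. This is a textbook fact, not the paper's derived allocation formula, and the weights below (stand-ins for each state's revenue performance) are hypothetical.

```python
import numpy as np

# Hedged sketch, not the paper's raw formula: with logarithmic Cobb-Douglas
# utility U(x) = sum_i a_i * log(x_i) and a fixed pool R to be shared, the
# utility-maximizing split is proportional: x_i = a_i / sum(a) * R.
# The weights (stand-ins for state revenue performance) are hypothetical.
def allocate(weights, pool):
    w = np.asarray(weights, dtype=float)
    return pool * w / w.sum()

shares = allocate([3.0, 1.0, 2.0], pool=600.0)
print(shares)  # → [300. 100. 200.]
```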


Author(s):  
Anatolii Fedorovich Kleimenov

The equations of motion of the controlled system in the two-step problem under consideration, on a fixed time interval, contain the controls of either one or two players. In the first step (stage) of the controlled process (from the initial moment to a certain predetermined moment), only the first player controls the system, solving an optimal control problem with a given terminal functional. In the second step (stage), the first player decides whether or not the second player will participate in the control process for the remainder of the time. It is assumed that, in order to participate, the second player must pay the first player a side payment of a fixed amount. If so, a non-antagonistic positional differential game is played out, in which a Nash equilibrium is taken as the solution; in addition, the players can use "abnormal" behaviors, which may allow them to increase their payoffs. If not, the first player continues to solve the optimal control problem until the end of the process.
