Numerical Optimal Control of HIV Transmission in Octave/MATLAB

2019 ◽  
Vol 25 (1) ◽  
pp. 1 ◽  
Author(s):  
Carlos Campos ◽  
Cristiana J. Silva ◽  
Delfim F. M. Torres

We provide easy and readable GNU Octave/MATLAB code for the simulation of mathematical models described by ordinary differential equations and for the solution of optimal control problems through Pontryagin’s maximum principle. For that, we consider a normalized HIV/AIDS transmission dynamics model based on the one proposed in our recent contribution (Silva, C.J.; Torres, D.F.M. A SICA compartmental model in epidemiology with application to HIV/AIDS in Cape Verde. Ecol. Complex. 2017, 30, 70–75), given by a system of four ordinary differential equations. An HIV initial value problem is solved numerically using the ode45 GNU Octave function and three standard methods implemented by us in Octave/MATLAB: the Euler method and second-order and fourth-order Runge–Kutta methods. Afterwards, a control function is introduced into the normalized HIV model and an optimal control problem is formulated, where the goal is to find the optimal HIV prevention strategy that maximizes the fraction of HIV-uninfected individuals with the fewest new HIV infections and the lowest cost associated with the control measures. The optimal control problem is characterized analytically using Pontryagin’s maximum principle, and the extremals are computed numerically by implementing a forward-backward fourth-order Runge–Kutta method. Complete algorithms for both the uncontrolled initial value problem and the optimal control problem, developed under the free GNU Octave software and compatible with MATLAB, are provided along with the article.
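The forward-backward sweep described above can be sketched in a few lines. The following is a minimal Python illustration on a toy linear-quadratic problem; the dynamics, cost, and all function names here are assumptions chosen for a self-contained example, not the authors' SICA model or their Octave code:

```python
import numpy as np

def state_rhs(x, u):
    # Toy dynamics x' = -x + u (illustrative only; not the SICA model).
    return -x + u

def adjoint_rhs(lam, x):
    # For H = x^2 + u^2 + lam * (-x + u):  lam' = -dH/dx = lam - 2x.
    return lam - 2.0 * x

def forward_backward_sweep(T=1.0, N=1000, x0=1.0, tol=1e-8, max_iter=500):
    """Forward-backward sweep with RK4 for min  integral of (x^2 + u^2) dt,
    subject to x' = -x + u, x(0) = x0.  Optimality condition: u = -lam / 2."""
    h = T / N
    u = np.zeros(N + 1)
    x = np.zeros(N + 1)
    lam = np.zeros(N + 1)
    for _ in range(max_iter):
        u_old = u.copy()
        # Forward RK4 sweep for the state.
        x[0] = x0
        for i in range(N):
            um = 0.5 * (u[i] + u[i + 1])          # midpoint control value
            k1 = state_rhs(x[i], u[i])
            k2 = state_rhs(x[i] + 0.5 * h * k1, um)
            k3 = state_rhs(x[i] + 0.5 * h * k2, um)
            k4 = state_rhs(x[i] + h * k3, u[i + 1])
            x[i + 1] = x[i] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        # Backward RK4 sweep for the adjoint, with terminal condition lam(T) = 0.
        lam[N] = 0.0
        for i in range(N, 0, -1):
            xm = 0.5 * (x[i] + x[i - 1])
            k1 = adjoint_rhs(lam[i], x[i])
            k2 = adjoint_rhs(lam[i] - 0.5 * h * k1, xm)
            k3 = adjoint_rhs(lam[i] - 0.5 * h * k2, xm)
            k4 = adjoint_rhs(lam[i] - h * k3, x[i - 1])
            lam[i - 1] = lam[i] - h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        # Relaxed update of the control from dH/du = 2u + lam = 0.
        u = 0.5 * (u_old - lam / 2.0)
        if np.max(np.abs(u - u_old)) < tol:
            break
    return x, lam, u

x_opt, lam_opt, u_opt = forward_backward_sweep()
```

The relaxation in the control update (averaging the new control with the previous one) is a standard device to keep the sweep convergent; the same loop structure carries over to vector-valued states such as the four-compartment SICA system.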

Author(s):  
Shahla Rasulzade ◽  

One specific optimal control problem with distributed parameters of the Moskalenko type with a multipoint quality functional is considered. To date, the theory of necessary first-order optimality conditions, such as the Pontryagin maximum principle and its consequences, has been sufficiently developed for optimal control problems described by ordinary differential equations, i.e., for problems with lumped parameters. Many controlled processes, however, are described by partial differential equations (processes with distributed parameters). Optimal control problems with distributed parameters have some inherent features, and therefore non-trivial difficulties arise in their study, in particular when deriving the various necessary optimality conditions; fundamental difficulties arise in studying the cases where the established necessary optimality conditions degenerate. In the present work, we study an optimal control problem described by a system of first-order partial differential equations with a controlled initial condition, under the assumption that the initial function is a solution to a Cauchy problem for ordinary differential equations. The objective functional (quality criterion) is multipoint. It therefore becomes necessary to introduce an unconventional adjoint equation, not in differential (classical) but in integral form. Using one version of the increment method together with explicit linearization of the original system, the necessary optimality condition is proved in the form of an analog of the maximum principle of L.S. Pontryagin. Pontryagin's maximum principle is known to be the strongest necessary optimality condition for various optimal control problems; being a first-order necessary condition, however, it often degenerates.
Such cases are called singular, and the corresponding controls, singular controls. Based on these considerations, we study the case of degeneration of Pontryagin's maximum principle for the problem under consideration. For this purpose, a second-order increment formula for the quality functional is constructed. By introducing auxiliary matrix functions, a second-order increment formula of a constructive nature is obtained. The necessary optimality condition for singular controls, in the sense of Pontryagin's maximum principle, is proved, and the proved necessary optimality conditions are explicit.
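For orientation, the first-order condition whose degeneration gives rise to singular controls can be recalled in its classical lumped-parameter form (standard textbook notation, with adjoint ψ and control set U; the paper's distributed-parameter analog replaces the adjoint system with an integral-form equation):

```latex
% Classical Pontryagin maximum principle for x'(t) = f(x, u, t), x(t_0) = x_0:
\begin{aligned}
H(x, u, \psi, t) &= \langle \psi, f(x, u, t) \rangle, \\
\dot{\psi}(t) &= -\frac{\partial H}{\partial x}\bigl(x^*(t), u^*(t), \psi(t), t\bigr), \\
H\bigl(x^*(t), u^*(t), \psi(t), t\bigr) &= \max_{v \in U} H\bigl(x^*(t), v, \psi(t), t\bigr)
\quad \text{for a.e. } t.
\end{aligned}
```

A control is singular when the maximum condition holds identically without determining u*, which is exactly why second-order increment formulas are needed.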


2009 ◽  
Vol 06 (07) ◽  
pp. 1221-1233 ◽  
Author(s):  
MARÍA BARBERO-LIÑÁN ◽  
MIGUEL C. MUÑOZ-LECANDA

A geometric method is described to characterize the different kinds of extremals in optimal control theory. It rests on a presymplectic constraint algorithm that starts from the necessary conditions given by Pontryagin's Maximum Principle. The algorithm must be run twice so as to obtain suitable sets that, once projected, must be compared. Apart from the design of this general algorithm, useful for any optimal control problem, it is shown how to classify the set of extremals and, in particular, how to characterize strict abnormality. An example of a strict abnormal extremal for a particular control-affine system is also given.


Games ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 23
Author(s):  
Alexander Arguchintsev ◽  
Vasilisa Poplevko

This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations whose right-hand side is determined from controlled bilinear ordinary differential equations. These ordinary differential equations are linear with respect to the state functions, with controlled coefficients. Such problems arise in the simulation of some processes of chemical technology and population dynamics. Normally, general optimal control methods are used for these problems because of the bilinear ordinary differential equations. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classical exact increment formulas for the cost functional. This treatment allows a number of efficient optimal control methods to be applied to the problem. An example illustrates the approach.


1974 ◽  
Vol 96 (1) ◽  
pp. 19-24
Author(s):  
P. J. Starr

Dynamic Path Synthesis refers to a class of linkage synthesis problems in which constraint paths between specified positions are determined in such a way as to optimize some measure of the resulting dynamic behavior. These problems can be transformed into nonlinear optimal control problems which are generally non-autonomous. The physical nature of the system allows general comments to be made regarding uniqueness, controllability, and singular control. The ideas are developed in the context of a two-link device yielding a fourth-order nonlinear control problem, for which a numerical example is presented.


2019 ◽  
Vol 14 (3) ◽  
pp. 310
Author(s):  
Beyza Billur İskender Eroglu ◽  
Dilara Yapişkan

In this paper, we introduce the transversality conditions of optimal control problems formulated with the conformable derivative. Since optimal control theory is based on the calculus of variations, the transversality conditions for variational problems are first investigated and then supported by some illustrative examples. Building on these formulations, the transversality conditions for optimal control problems are obtained by using the Hamiltonian formalism and the Lagrange multiplier technique. To illustrate the results, the dynamical system on which the optimal control problem is constructed is taken to be a diffusion process modeled in terms of the conformable derivative. The optimal control law is achieved by analytically solving the time-dependent conformable differential equations arising from the eigenfunction expansions of the state and control functions. All figures are plotted using MATLAB.
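For reference, the conformable derivative used in these formulations is standardly defined (Khalil et al., 2014) by

```latex
% Conformable derivative of order \alpha \in (0, 1]:
T_\alpha(f)(t) = \lim_{\varepsilon \to 0}
\frac{f\bigl(t + \varepsilon\, t^{1-\alpha}\bigr) - f(t)}{\varepsilon},
\qquad t > 0,
```

which for differentiable f reduces to T_α(f)(t) = t^(1−α) f′(t). It is this local, limit-based structure that lets the variational and Hamiltonian machinery carry over with only minor changes.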


2018 ◽  
Vol 21 (6) ◽  
pp. 1439-1470 ◽  
Author(s):  
Xiuwen Li ◽  
Yunxiang Li ◽  
Zhenhai Liu ◽  
Jing Li

Abstract In this paper, a sensitivity analysis of the optimal control problem for a class of systems described by nonlinear fractional evolution inclusions (NFEIs, for short) on Banach spaces is investigated. First, the nonemptiness and compactness of the mild solution set S(ζ) (ζ being the initial condition) for the NFEIs are obtained, and we also present an extension of Filippov's theorem, whose proof differs from previous work only in some technical details. Finally, the optimal control problems described by NFEIs depending on the initial condition ζ and the parameter η are considered, and the sensitivity properties of the optimal control problem are established.


2000 ◽  
Vol 23 (9) ◽  
pp. 605-616 ◽  
Author(s):  
R. Enkhbat

The problem of maximizing a nonsmooth convex function over an arbitrary set is considered. Based on the optimality condition obtained by Strekalovsky in 1987, an algorithm for solving the problem is proposed. We show that the algorithm can be applied to nonconvex optimal control problems as well. We illustrate the method by describing some computational experiments performed on a few nonconvex optimal control problems.

