Solution to an Optimal Control Problem via Canonical Dual Method

2009 ◽  
Vol 2009 ◽  
pp. 1-5 ◽  
Author(s):  
Jinghao Zhu ◽  
Jiani Zhou

The analytic solution to an optimal control problem is investigated using the canonical dual method. By means of the Pontryagin principle and a transformation of the cost functional, the optimal control of a nonconvex problem is obtained. It turns out that the optimal control can be expressed in terms of the costate via canonical dual variables. Several examples illustrate the method.

Author(s):  
Freya Bachmann ◽  
Gilbert Koch ◽  
Marc Pfister ◽  
Gabor Szinnai ◽  
Johannes Schropp

Abstract
Providing the optimal dosing strategy of a drug for an individual patient is an important task in pharmaceutical sciences and daily clinical practice. We developed and validated an optimal dosing algorithm (OptiDose) that computes the optimal individualized dosing regimen for pharmacokinetic–pharmacodynamic models in substantially different scenarios with various routes of administration by solving an optimal control problem. The aim is to compute a control that brings the underlying system as close as possible to a desired reference function by minimizing a cost functional. In pharmacokinetic–pharmacodynamic modeling, the controls are the administered doses and the reference function can be the disease progression. Drug administration at certain time points provides a finite number of discrete controls, the drug doses, which determine the drug concentration and its effect on disease progression. Consequently, rewriting the cost functional yields a finite-dimensional optimal control problem that depends only on the doses. Adjoint techniques allow the gradient of the cost functional to be computed efficiently, so the optimal control problem can be solved with robust algorithms such as quasi-Newton methods from finite-dimensional optimization. OptiDose is applied to three relevant but substantially different pharmacokinetic–pharmacodynamic examples.
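As a rough illustration of the finite-dimensional structure described in the abstract, the following sketch optimizes a handful of doses for a hypothetical one-compartment pharmacokinetic model with first-order elimination. The rate constant `k`, volume `V`, dose times, and flat target concentration are all invented for this sketch, and a closed-form gradient stands in for the paper's adjoint computation; this is not the OptiDose implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-compartment PK model with first-order elimination:
# c(t) = sum_j d_j / V * exp(-k * (t - t_j)) for t >= t_j (dose superposition).
k, V = 0.3, 10.0                            # elimination rate [1/h], volume [L] (assumed)
dose_times = np.array([0.0, 8.0, 16.0])     # dosing instants [h] (assumed)
obs_times = np.linspace(0.0, 24.0, 49)      # grid on which we match the reference
c_ref = np.full_like(obs_times, 2.0)        # desired target concentration [mg/L] (assumed)

def basis(t):
    """phi_j(t): concentration response at time t to a unit dose given at t_j."""
    dt = t[:, None] - dose_times[None, :]
    return np.where(dt >= 0.0, np.exp(-k * np.maximum(dt, 0.0)) / V, 0.0)

Phi = basis(obs_times)

def cost(d):
    # squared deviation of the concentration profile from the reference
    r = Phi @ d - c_ref
    return r @ r

def grad(d):
    # analytic gradient of the least-squares cost (stand-in for adjoints)
    return 2.0 * Phi.T @ (Phi @ d - c_ref)

# quasi-Newton solve over the finite-dimensional dose vector, doses kept nonnegative
res = minimize(cost, x0=np.zeros(len(dose_times)), jac=grad,
               method="L-BFGS-B", bounds=[(0.0, None)] * len(dose_times))
d_opt = res.x
```

Because the dose-to-concentration map of this toy model is linear, the cost is a convex least-squares function, so L-BFGS-B converges reliably; the paper's nonlinear pharmacodynamic models would require the adjoint-based gradients it describes.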


Automatica ◽  
2014 ◽  
Vol 50 (4) ◽  
pp. 1227-1234 ◽  
Author(s):  
Patrizio Colaneri ◽  
Richard H. Middleton ◽  
Zhiyong Chen ◽  
Danilo Caporale ◽  
Franco Blanchini

Games ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 23
Author(s):  
Alexander Arguchintsev ◽  
Vasilisa Poplevko

This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations whose right-hand side is determined by controlled bilinear ordinary differential equations. These ordinary differential equations are linear in the state functions, with controlled coefficients. Such problems arise in the simulation of certain processes in chemical technology and population dynamics. Because of the bilinear ordinary differential equations, general-purpose optimal control methods are normally used for these problems. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classical exact increment formulas for the cost functional. This treatment allows a number of efficient optimal control methods to be applied to the problem. An example illustrates the approach.
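The bilinear control structure mentioned above can be illustrated on a toy problem. The sketch below is not the paper's reduction: it simply optimizes a piecewise-constant control in a scalar bilinear ODE x' = (a + b·u(t))·x, with all coefficients, bounds, and the regularization weight chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Scalar bilinear ODE x' = (a + b*u(t)) * x with piecewise-constant control
# on N equal intervals; all problem data below is assumed for illustration.
a, b, T, N = -0.2, 1.0, 2.0, 10
dt = T / N
x0, x_target = 1.0, 3.0

def terminal_state(u):
    # exact propagation on each interval: x <- x * exp((a + b*u_i) * dt)
    return x0 * np.exp(np.sum((a + b * u) * dt))

def cost(u):
    # terminal tracking error plus a small control penalty
    return (terminal_state(u) - x_target) ** 2 + 0.01 * np.sum(u ** 2) * dt

res = minimize(cost, np.zeros(N), method="L-BFGS-B",
               bounds=[(-2.0, 2.0)] * N)
u_opt = res.x
```

The scalar case admits the exact exponential propagation used here; for systems of bilinear ODEs one would integrate numerically and, as in the paper, rely on increment formulas or adjoints for efficient gradients.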


2012 ◽  
Vol 53 (4) ◽  
pp. 292-307 ◽  
Author(s):  
K. H. WONG ◽  
W. M. TANG

Abstract
We develop a computational method for solving an optimal control problem governed by a switched impulsive dynamical system with time delay. At each time instant, only one subsystem is active. The time spent by the state in each subsystem is treated as a new parameter; these durations, together with the jump strengths of the impulses, are the decision parameters to be optimized. The gradient formula for the cost function is derived in terms of the solutions of a number of delay differential equations integrated forward in time. On this basis, the optimal control problem can be solved as a finite-dimensional optimization problem.
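A minimal sketch of the parametrization idea, under strong simplifying assumptions: a single switch, scalar linear subsystems, no time delay, and numerical finite-difference gradients in place of the paper's delay-differential-equation-based gradient formula. The switch duration `tau` and jump strength `v` are the decision parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Toy switched impulsive system (assumed data, one switch, no delay):
# subsystem 1: x' = a1*x on [0, tau], then impulsive jump x <- x + v,
# subsystem 2: x' = a2*x on [tau, T].
a1, a2, T = 0.5, -1.0, 4.0
x0, x_target, rho = 1.0, 2.0, 0.1   # initial state, target, jump penalty weight

def terminal_state(tau, v):
    x_switch = x0 * np.exp(a1 * tau) + v       # flow in subsystem 1, then jump
    return x_switch * np.exp(a2 * (T - tau))   # flow in subsystem 2

def cost(p):
    tau, v = p
    return (terminal_state(tau, v) - x_target) ** 2 + rho * v ** 2

# optimize the switch duration and jump strength as ordinary decision variables
res = minimize(cost, x0=[T / 2, 0.0],
               bounds=[(0.0, T), (None, None)], method="L-BFGS-B")
tau_opt, v_opt = res.x
```

In the setting of the paper, each subsystem duration and each jump strength enters the same way; the delay differential equations only change how the gradient of the cost is computed, not the finite-dimensional structure exploited here.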

