Stable Sequential Pontryagin Maximum Principle as a Tool for Solving Unstable Optimal Control and Inverse Problems for Distributed Systems

Author(s):
Mikhail Iosifovich Sumin

We consider the regularization of the classical Lagrange principle and the Pontryagin maximum principle in convex problems of mathematical programming and optimal control. Using the example of the "simplest" problems of constrained infinite-dimensional optimization, we discuss two main questions: why is regularization of the classical optimality conditions necessary, and what does it give?
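As a rough illustration of the instability in question (a minimal model problem, not the general setting of the paper), consider a convex problem with a linear operator equality constraint in a Hilbert space and approximately known data:

$$
\min_{z \in D} f(z) \quad \text{subject to} \quad Az = h, \qquad
L^\delta(z,\lambda) = f(z) + \langle \lambda, A^\delta z - h^\delta \rangle,
$$

where $\|A^\delta - A\| \le \delta$ and $\|h^\delta - h\| \le \delta$. The classical Lagrange principle selects a multiplier by solving the dual problem $\max_\lambda \inf_{z \in D} L^\delta(z,\lambda)$, but with perturbed data this dual problem may have no solution, or its solutions may behave unstably as $\delta \to 0$, so the corresponding primal elements need not approximate a solution of the unperturbed problem. Dual regularization replaces the exact dual maximizer by

$$
\lambda^\delta \in \operatorname{Argmax}_{\lambda} \Big( \inf_{z \in D} L^\delta(z,\lambda) - \alpha(\delta) \|\lambda\|^2 \Big), \qquad \alpha(\delta) \to 0, \quad \delta/\alpha(\delta) \to 0,
$$

and the elements $z[\lambda^\delta]$ minimizing $L^\delta(\cdot,\lambda^\delta)$ form a minimizing approximate sequence. This stability with respect to data perturbations is what regularization "gives"; the precise problem classes and consistency conditions on $\alpha(\delta)$ are those stated in the paper, not reproduced here.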


2017
Vol 890
pp. 012042
Author(s):
Mardlijah
Ahmad Jamil
Lukman Hanafi
Suharmadi Sanjaya

Author(s):  
V.I. Sumin
M.I. Sumin

We consider the regularization of the classical optimality conditions (COCs), namely the Lagrange principle and the Pontryagin maximum principle, in a convex optimal control problem with functional constraints of equality and inequality type. The controlled system is given by a general linear functional-operator equation of the second kind in the space $L^m_2$, and the main operator on the right-hand side of the equation is assumed to be quasinilpotent. The objective functional of the problem is strongly convex. The regularized COCs in iterative form are obtained by means of the iterative dual regularization method. Their main purpose is the stable generation of minimizing approximate solutions in the sense of J. Warga. The regularized COCs in iterative form are formulated as existence theorems for minimizing approximate solutions in the original problem. They "overcome" the ill-posedness of the COCs and serve as regularizing algorithms for solving optimization problems. As an illustrative example, we consider an optimal control problem associated with a hyperbolic system of first-order differential equations.
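For orientation, a schematic version of such an iterative construction, written for a simplified model with a single operator equality constraint $Az = h$ and a strongly convex objective $f$ (the paper's setting, with equality and inequality functional constraints and the maximum-principle form of the conditions, is more general), might read:

$$
z^k = \arg\min_{z \in D} \big( f(z) + \langle \lambda^k, Az - h \rangle \big), \qquad
\lambda^{k+1} = \lambda^k + \beta_k \big( A z^k - h - \alpha_k \lambda^k \big),
$$

with regularization parameters $\alpha_k \to 0$ and step sizes $\beta_k$ tied to $\alpha_k$ and to the error level of the input data by consistency conditions. Strong convexity of $f$ makes each $z^k$ uniquely determined. The regularized COCs then assert, roughly, that the elements $z^k$ (characterized pointwise via a maximum condition in the optimal control setting) constitute a minimizing approximate solution in the sense of J. Warga, and that the construction is stable with respect to errors in the data.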

