A class of nonlinear optimal control problems governed by Fredholm integro-differential equations with delay

2019, Vol. 93(9), pp. 2199–2211
Author(s):  
Hamid Reza Marzban ◽  
Mehrdad Rostami Ashani

Author(s):  
Inseok Hwang ◽  
Jinhua Li ◽  
Dzung Du

In this paper, a novel numerical method based on the differential transformation is proposed for solving nonlinear optimal control problems. The differential transformation is a linear operator that maps a function from the original time and/or space domain into another domain in order to simplify differential calculations. The optimality conditions for optimal control problems can be represented by algebraic and differential equations. Using the differential transformation, these algebraic and differential equations, together with their boundary conditions, are first converted into a system of nonlinear algebraic equations. The numerical optimal solutions are then obtained in the form of finite-term Taylor series by solving this system. The differential transformation algorithm is similar to spectral element methods in that the computational region is split into several subregions, but it uses polynomials of high degree while keeping the number of subregions small. The algorithm can solve finite- (or infinite-) time horizon optimal control problems formulated either as algebraic and ordinary differential equations via Pontryagin’s minimum principle or as the Hamilton–Jacobi–Bellman partial differential equation via dynamic programming, within one unified framework. In addition, it can efficiently handle optimal control problems with piecewise continuous dynamics and/or nonsmooth controls. The performance is demonstrated through illustrative examples.
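As a hedged illustration of the workflow this abstract describes (not the authors' implementation), the Python sketch below applies the differential transformation to a toy linear-quadratic problem assumed purely for illustration: minimize ½∫₀¹(x² + u²) dt subject to ẋ = u, x(0) = 1. Pontryagin's minimum principle gives a two-point boundary-value problem, the differential transform turns its differential equations into an algebraic recurrence for Taylor coefficients, the terminal condition becomes a single algebraic equation in the unknown initial costate, and the state and control are recovered as finite-term Taylor series.

```python
# Minimal sketch of the differential transformation idea, on an assumed toy problem.
#
# Problem (illustrative assumption): minimize J = 1/2 * int_0^1 (x^2 + u^2) dt
# subject to x' = u, x(0) = 1.  Pontryagin's minimum principle gives u = -lambda and
#     x'      = -lambda,   x(0) = 1,
#     lambda' = -x,        lambda(1) = 0.
# The differential transform X(k) = x^(k)(0)/k! converts these ODEs into the
# algebraic recurrence (k+1) X(k+1) = -L(k), (k+1) L(k+1) = -X(k); the terminal
# condition lambda(1) = sum_k L(k) = 0 is one algebraic equation in s = lambda(0).

import numpy as np
from scipy.optimize import brentq

N = 20          # number of Taylor coefficients kept (finite-term series)
T = 1.0         # fixed final time

def transform_coefficients(s, n=N):
    """Taylor (differential-transform) coefficients of x and lambda about t = 0
    for a guessed initial costate s = lambda(0)."""
    X = np.zeros(n + 1)
    L = np.zeros(n + 1)
    X[0], L[0] = 1.0, s              # boundary value and guessed initial costate
    for k in range(n):
        X[k + 1] = -L[k] / (k + 1)   # from x' = -lambda
        L[k + 1] = -X[k] / (k + 1)   # from lambda' = -x
    return X, L

def terminal_costate(s):
    """lambda(T) reconstructed from the finite-term series; the algebraic
    equation to solve is terminal_costate(s) = 0."""
    _, L = transform_coefficients(s)
    return float(L @ T ** np.arange(len(L)))

# Solve the (here scalar) algebraic system for the unknown initial costate.
s_star = brentq(terminal_costate, 0.0, 2.0)

# Recover the optimal state and control as finite-term Taylor series.
X, L = transform_coefficients(s_star)
t = np.linspace(0.0, T, 5)
x_num = np.polyval(X[::-1], t)       # x(t) = sum_k X(k) t^k
u_num = -np.polyval(L[::-1], t)      # u(t) = -lambda(t)

x_exact = np.cosh(t) - np.tanh(1.0) * np.sinh(t)   # closed-form solution of the toy problem
print("lambda(0) =", s_star, " (exact: tanh(1) =", np.tanh(1.0), ")")
print("max |x_num - x_exact| =", np.abs(x_num - x_exact).max())
```

With N = 20 coefficients the recovered initial costate matches tanh(1) to machine precision on this toy problem; nonlinear dynamics would make the terminal condition a genuinely nonlinear algebraic equation, which is the case the abstract addresses.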


Author(s):  
Mohammad A. Kazemi

In this paper, a class of optimal control problems with distributed parameters is considered. The governing equations are nonlinear first-order partial differential equations that arise in the study of heterogeneous reactors and the control of chemical processes. The main focus of the present paper is the mathematical theory underlying the algorithm. A conditional gradient method is used to devise an algorithm for solving such optimal control problems. A formula for the Fréchet derivative of the objective function is obtained, and its properties are studied. A necessary condition for optimality in terms of the Fréchet derivative is presented, and it is then shown that any accumulation point of the sequence of admissible controls generated by the algorithm satisfies this necessary condition for optimality.
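The conditional gradient scheme described here operates in a function space of controls coupled to the governing partial differential equations. As a hedged, finite-dimensional analogue only (not the paper's PDE-constrained setting), the sketch below runs a Frank-Wolfe style iteration on an assumed quadratic tracking objective over a box of admissible controls; the duality gap plays the role of the necessary optimality condition stated via the Fréchet derivative, and all problem data and parameters are illustrative assumptions.

```python
# Finite-dimensional sketch of a conditional gradient (Frank-Wolfe) iteration.
# The admissible set is a box [lo, hi]^n and the objective is a quadratic
# tracking functional; both are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
lo, hi = -1.0, 1.0                      # box of admissible controls

def objective(u):
    r = A @ u - b
    return 0.5 * r @ r

def gradient(u):
    # Discrete analogue of the Frechet derivative of the objective.
    return A.T @ (A @ u - b)

def conditional_gradient(u0, max_iter=500, tol=1e-8):
    u = u0.copy()
    for k in range(max_iter):
        g = gradient(u)
        # Linear subproblem: minimize <g, v> over the box -> extreme points.
        v = np.where(g > 0.0, lo, hi)
        gap = g @ (u - v)               # duality gap; gap <= tol means the
        if gap <= tol:                  # condition <grad J(u), v - u> >= 0
            break                       # holds (approximately) for all v
        alpha = 2.0 / (k + 2.0)         # classical open-loop step size
        u = u + alpha * (v - u)
    return u, gap

u_star, gap = conditional_gradient(np.zeros(n))
print("objective =", objective(u_star), " final gap =", gap)
```

The stopping test mirrors the paper's optimality condition: at an accumulation point of the iterates, the directional derivative toward any admissible control is nonnegative, so the gap vanishes in the limit.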

