Optimal Control Problems for Some First and Second Order Differential Equations

Author(s):
Nicolae H. Pavel, G. S. Wang, Yong Kang Huang
Author(s):
K.B. Mansimov, R.O. Mastaliyev

The article considers a second-order system of linear stochastic partial differential equations of hyperbolic type with Goursat boundary conditions. Earlier, in a number of papers, representations of the solution of the Goursat problem for linear stochastic hyperbolic equations were obtained in the classical way, under the assumption that the coefficients of the terms on the right-hand side of the equation are sufficiently smooth. However, in the study of many applied stochastic optimal control problems described by linear or nonlinear second-order stochastic partial differential equations of hyperbolic type, the assumption of sufficient smoothness of these coefficients is not natural. For this reason, in the Goursat problem considered here, in contrast to the known works, no smoothness of the coefficients of the terms on the right-hand side of the equation is assumed: they are only required to be measurable and bounded matrix functions. These natural assumptions make it possible to investigate a wide class of optimal control problems described by systems of second-order stochastic hyperbolic equations. In this work, a stochastic analogue of the Riemann matrix is introduced, and an integral representation of the solution of the boundary value problem under consideration is obtained in explicit form through the boundary conditions. The analogue of the Riemann matrix is introduced as the solution of a two-dimensional matrix integral equation of Volterra type with one-dimensional terms, and a number of its properties are studied.
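
The abstract does not reproduce the explicit form of the Volterra-type equation defining the stochastic analogue of the Riemann matrix, so the following Python sketch is only illustrative: it solves, by successive approximations on a uniform grid, a generic two-dimensional matrix Volterra equation with one-dimensional terms of the kind described above. The kernels `A`, `B`, `C`, the base point at the origin, and the left-endpoint quadrature rule are assumptions made for the example, not the paper's actual construction.

```python
import numpy as np

def riemann_matrix_analogue(A, B, C, x_grid, y_grid, n_dim, tol=1e-10, max_iter=200):
    """Illustrative solver (not the paper's construction) for a matrix
    Volterra-type equation with one-dimensional terms of the assumed form

        R(x, y) = I + int_0^x A(s) R(s, y) ds
                    + int_0^y B(t) R(x, t) dt
                    + int_0^x int_0^y C(s, t) R(s, t) dt ds,

    solved by successive approximations on a uniform grid with a
    left-endpoint quadrature rule.  A and B map a scalar to an (n, n)
    matrix; C maps two scalars to an (n, n) matrix.
    """
    nx, ny = len(x_grid), len(y_grid)
    hx = x_grid[1] - x_grid[0]
    hy = y_grid[1] - y_grid[0]
    I = np.eye(n_dim)
    R = np.tile(I, (nx, ny, 1, 1))                     # initial guess R = I

    A_vals = np.array([A(x) for x in x_grid])          # (nx, n, n)
    B_vals = np.array([B(y) for y in y_grid])          # (ny, n, n)
    C_vals = np.array([[C(x, y) for y in y_grid] for x in x_grid])  # (nx, ny, n, n)

    for _ in range(max_iter):
        R_new = np.tile(I, (nx, ny, 1, 1))
        for i in range(nx):
            for j in range(ny):
                # one-dimensional terms over [0, x_i] and [0, y_j]
                term_x = hx * sum(A_vals[k] @ R[k, j] for k in range(i))
                term_y = hy * sum(B_vals[m] @ R[i, m] for m in range(j))
                # two-dimensional Volterra term over [0, x_i] x [0, y_j]
                term_xy = hx * hy * sum(C_vals[k, m] @ R[k, m]
                                        for k in range(i) for m in range(j))
                R_new[i, j] = I + term_x + term_y + term_xy
        if np.max(np.abs(R_new - R)) < tol:            # successive approximations converged
            R = R_new
            break
        R = R_new
    return R

if __name__ == "__main__":
    # Purely illustrative constant 2x2 coefficients.
    n = 2
    A = lambda x: 0.1 * np.eye(n)
    B = lambda y: 0.2 * np.eye(n)
    C = lambda x, y: 0.05 * np.ones((n, n))
    xs = np.linspace(0.0, 1.0, 21)
    ys = np.linspace(0.0, 1.0, 21)
    R = riemann_matrix_analogue(A, B, C, xs, ys, n)
    print(R[-1, -1])   # approximation at the far corner of the grid
```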


Author(s):  
Mohammad A. Kazemi

Abstract
In this paper a class of optimal control problems with distributed parameters is considered. The governing equations are nonlinear first-order partial differential equations that arise in the study of heterogeneous reactors and the control of chemical processes. A conditional gradient method is used to devise an algorithm for solving such optimal control problems; the main focus of the present paper is the mathematical theory underlying this algorithm. A formula for the Fréchet derivative of the objective function is obtained, and its properties are studied. A necessary condition for optimality in terms of the Fréchet derivative is presented, and it is then shown that any accumulation point of the sequence of admissible controls generated by the algorithm satisfies this necessary condition for optimality.
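
The abstract describes the algorithm only in outline, and the state equation and the evaluation of the Fréchet derivative are problem-specific, so the sketch below shows a generic conditional-gradient (Frank-Wolfe) iteration on a discretized control under assumed pointwise box constraints. The names `J`, `grad_J`, `u_min`, and `u_max` are placeholders: in the setting of the paper, `J` would be evaluated by solving the governing first-order PDE and `grad_J` would return the discretized Fréchet derivative of the objective. The stopping test uses the linearized gap, which vanishes exactly when the necessary condition for optimality of the type stated in the abstract holds at the current control.

```python
import numpy as np

def conditional_gradient(J, grad_J, u0, u_min, u_max, max_iter=100, tol=1e-8):
    """Generic conditional-gradient (Frank-Wolfe) iteration for
        minimize J(u)  subject to  u_min <= u <= u_max  (pointwise),
    where u is a discretized control (numpy array) and grad_J(u) plays the
    role of the Frechet derivative of the objective.  Illustrative sketch,
    not the paper's algorithm in its PDE setting.
    """
    u = u0.copy()
    for _ in range(max_iter):
        g = grad_J(u)
        # Linearized subproblem: minimize <g, v> over the box of admissible
        # controls; the minimizer sits at a bound, componentwise.
        v = np.where(g > 0, u_min, u_max)
        d = v - u
        gap = -np.sum(g * d)      # >= 0; zero iff the necessary condition holds
        if gap < tol:
            break
        # Armijo-type backtracking along the feasible direction d
        alpha, J_u = 1.0, J(u)
        while J(u + alpha * d) > J_u - 1e-4 * alpha * gap and alpha > 1e-12:
            alpha *= 0.5
        u = u + alpha * d
    return u

if __name__ == "__main__":
    # Illustrative quadratic objective; in the paper's setting J(u) would be
    # evaluated by solving the governing first-order PDE for the state.
    target = np.array([0.5, 2.0, -1.0])
    J = lambda u: 0.5 * np.sum((u - target) ** 2)
    grad_J = lambda u: u - target
    u_opt = conditional_gradient(J, grad_J, u0=np.zeros(3), u_min=0.0, u_max=1.0)
    print(u_opt)   # approximately [0.5, 1.0, 0.0]
```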

