A time-fuel optimal control problem of a cruise missile

2010 ◽  
Vol 51 ◽  
Author(s):  
Rui Li ◽  
Y J Shi


2020 ◽  
Vol 7 (3) ◽  
pp. 11-22
Author(s):  
Valery Andreev ◽  
Alexander Popov

A reduced model has been developed to describe the time evolution of a discharge in an iron-core tokamak, taking into account the nonlinear behavior of the ferromagnetic core during the discharge. The calculation of the discharge scenario and program regime in the tokamak is formulated as an inverse problem, namely an optimal control problem. Methods for solving the problem are compared, and the correctness and stability of the control problem are analyzed. A model of “quasi-optimal” control is proposed, which makes it possible to take real power sources into account. Discharge scenarios are calculated for the T-15 tokamak with an iron core.
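As a loose illustration of the inverse-problem formulation above, the sketch below recovers a control waveform that reproduces a prescribed "program regime" in a toy scalar linear model via Tikhonov-regularized least squares. The model x' = a·x + b·u, its parameters, and the regularization weight are assumptions made purely for illustration; the paper's reduced tokamak model, its nonlinear ferromagnetic terms, and its actual solution methods are not reproduced here.

```python
# Toy illustration only: recover a control waveform u(t) that reproduces a
# prescribed state trajectory in the assumed scalar model x' = a*x + b*u.
# Tikhonov regularization is used because such inverse problems are ill-posed.
import numpy as np

a, b = -0.5, 1.0          # assumed toy model parameters
dt, n = 0.05, 200
t = np.arange(n) * dt

# Forward map G: piecewise-constant control sequence u -> sampled state x
# (explicit Euler discretization with x[0] = 0)
G = np.zeros((n, n))
for k in range(1, n):
    G[k] = G[k - 1] * (1.0 + a * dt)
    G[k, k - 1] += b * dt

x_target = np.tanh(t)     # prescribed "program regime" (a smooth ramp)

# Tikhonov-regularized least squares: min ||G u - x_target||^2 + alpha ||u||^2
alpha = 1e-3
u = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ x_target)

print("residual norm:", np.linalg.norm(G @ u - x_target))
```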


Games ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 23
Author(s):  
Alexander Arguchintsev ◽  
Vasilisa Poplevko

This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations whose right-hand side is determined from controlled bilinear ordinary differential equations. These ordinary differential equations are linear with respect to the state functions, with controlled coefficients. Such problems arise in the simulation of some processes of chemical technology and population dynamics. Because of the bilinear ordinary differential equations, general optimal control methods are normally used for these problems. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classical exact increment formulas for the cost functional. This treatment allows a number of efficient optimal control methods to be applied to the problem. An example illustrates the approach.
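To give a sense of the kind of efficient ODE optimal control method that such a reduction makes available, the sketch below applies a standard adjoint-based projected gradient method to a toy scalar problem. The problem, step size, and discretization are assumptions for illustration; the paper's hyperbolic system and its exact increment formulas are not reproduced.

```python
# Toy illustration only: a projected adjoint-gradient method for the assumed
# ODE optimal control problem
#   minimize J(u) = 0.5 * x(T)^2,   x' = -x + u,   x(0) = 1,   |u(t)| <= 1.
import numpy as np

dt, n = 0.01, 300          # time grid, T = n*dt = 3
x0 = 1.0
u = np.zeros(n)            # initial control guess

def forward(u):
    """Explicit Euler integration of x' = -x + u."""
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    return x

def gradient(u):
    """Adjoint-based gradient: p' = p, p(T) = x(T); then dJ/du(t) = p(t)."""
    x = forward(u)
    p = np.empty(n + 1)
    p[n] = x[n]
    for k in range(n, 0, -1):
        p[k - 1] = p[k] * (1.0 - dt)   # Euler step backward in time
    return p[:n]

step = 0.5
for _ in range(100):
    g = gradient(u)
    u = np.clip(u - step * g, -1.0, 1.0)   # projected gradient step

print("terminal state after optimization:", forward(u)[-1])
```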


Author(s):  
Andrea Pesare ◽  
Michele Palladino ◽  
Maurizio Falcone

In this paper, we will deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we will suppose that the knowledge that an agent has of the current system is represented by a probability distribution π on the space of matrices. Furthermore, we will assume that this probability measure is suitably updated to take into account the increased experience that the agent obtains while exploring the environment, approximating the underlying dynamics with increasing accuracy. Under these assumptions, we will show that the optimal control obtained by solving the “average” linear quadratic optimal control problem with respect to a certain π converges to the optimal control of the linear quadratic problem governed by the actual, underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, where prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we will show a numerical test that confirms the theoretical results.
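A simplified numerical illustration of the convergence statement (not the paper's construction) is sketched below: as a Gaussian distribution π over the entries of the system matrix concentrates around the true dynamics, the LQR gain computed from π's sample mean approaches the gain of the true system. The specific matrices, the Gaussian form of π, and the use of the sample mean in place of the paper's averaged problem are all assumptions made for illustration.

```python
# Simplified illustration only: as pi concentrates around the true A, the LQR
# gain computed from pi's sample mean converges to the gain of the true system.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)

# Assumed "true" (unknown) dynamics and LQ weights
A_true = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def lqr_gain(A):
    """Continuous-time LQR gain K = R^{-1} B^T P from the Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K_true = lqr_gain(A_true)

# Shrinking uncertainty: sample matrices around A_true and use the sample mean
for sigma in [0.5, 0.1, 0.01]:
    samples = A_true + sigma * rng.standard_normal((200, 2, 2))
    K_mean = lqr_gain(samples.mean(axis=0))
    print(f"sigma={sigma}:  ||K_pi - K_true|| = {np.linalg.norm(K_mean - K_true):.4f}")
```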

