Convergence results for an averaged LQR problem with applications to reinforcement learning

Author(s): Andrea Pesare, Michele Palladino, Maurizio Falcone

Abstract: In this paper, we deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we suppose that the knowledge an agent has of the current system is represented by a probability distribution $$\pi$$ on the space of matrices. Furthermore, we assume that this probability measure is suitably updated to take into account the increased experience that the agent obtains while exploring the environment, so that it approximates the underlying dynamics with increasing accuracy. Under these assumptions, we show that the optimal control obtained by solving the “average” linear quadratic optimal control problem with respect to a certain $$\pi$$ converges to the optimal control of the linear quadratic optimal control problem governed by the actual, underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, where prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we present a numerical test that confirms the theoretical results.
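The convergence claim admits a quick numerical sanity check. Below is a minimal sketch (not the paper's algorithm: the system matrices, the cost weights, and the Gaussian form of the belief are illustrative assumptions) in which the certainty-equivalent LQR gain computed from an increasingly accurate estimate of the dynamics approaches the gain of the true system:

```python
# Illustrative sketch: as the belief pi over the dynamics matrix concentrates
# around the true A_star, the LQR gain computed from the belief's mean
# approaches the gain for the true system. A_star, B, Q, R and the Gaussian
# perturbation are assumptions made for this example, not taken from the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Gain K of the continuous-time LQR feedback u = -K x via the Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

rng = np.random.default_rng(0)
A_star = np.array([[0.0, 1.0], [-2.0, -0.5]])   # true (unknown) dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K_star = lqr_gain(A_star, B, Q, R)              # gain for the true system
for sigma in [1.0, 0.3, 0.1, 0.01]:             # shrinking uncertainty in pi
    # mean of pi: an estimate of A_star that improves with experience
    A_mean = A_star + sigma * rng.standard_normal(A_star.shape)
    K_pi = lqr_gain(A_mean, B, Q, R)
    print(f"sigma = {sigma:5.2f}   ||K_pi - K_star|| = {np.linalg.norm(K_pi - K_star):.4f}")
```

As $$\pi$$ concentrates (sigma shrinks), the printed gap between the two gains goes to zero, mirroring the convergence result stated above.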

Axioms, 2021, Vol. 10 (3), p. 137
Author(s): Vladimir Turetsky

Two inverse ill-posed problems are considered. The first is the restoration of the input of a linear system. The second is the restoration of the time-dependent coefficients of a linear ordinary differential equation. Both problems are reformulated as auxiliary optimal control problems with a regularizing cost functional. For the coefficient restoration problem, two control models are proposed. In the first model, the control coefficients are approximated using the output and estimates of its derivatives. This model yields an approximating linear-quadratic optimal control problem with a known explicit solution; the derivatives themselves are obtained as auxiliary linear-quadratic tracking controls. The second control model is exact and leads to a bilinear-quadratic optimal control problem, which is tackled in two ways: by an iterative procedure and by feedback linearization. Simulation results show that the bilinear model provides more accurate coefficient estimates.
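To make the role of the regularizing cost functional concrete, here is a minimal sketch (the scalar system, the Euler discretization, and the weight alpha are illustrative assumptions, not the control models proposed above): the unknown input is restored by minimizing an output-mismatch term plus a regularizing penalty on the input.

```python
# Illustrative Tikhonov-style restoration of the input u(t) of a scalar linear
# system x' = a*x + u from a noisy output: minimize ||L u - y||^2 + alpha*||u||^2.
# The system, grid, noise level, and alpha are assumptions for this example.
import numpy as np

n, dt, a, alpha = 200, 0.01, -1.0, 1e-4
t = np.arange(n) * dt
u_true = np.sin(2 * np.pi * t)                 # input to be restored

# Forward map u -> x by explicit Euler, assembled as a lower-triangular matrix L:
# x[k+1] = (1 + a*dt) * x[k] + dt * u[k],  x[0] = 0
L = np.zeros((n, n))
for k in range(1, n):
    L[k] = (1 + a * dt) * L[k - 1]
    L[k, k - 1] += dt

y = L @ u_true + 0.01 * np.random.default_rng(1).standard_normal(n)  # noisy output

# Regularized normal equations: (L^T L + alpha*I) u = L^T y
u_hat = np.linalg.solve(L.T @ L + alpha * np.eye(n), L.T @ y)
print("relative restoration error:", np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))
```

Without the alpha term the normal equations are severely ill-conditioned and the noise in y is amplified; the penalty trades a small bias for stability, which is the essence of the regularization used in both problems.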


2018, Vol. 36 (3), pp. 779-833
Author(s): Daniel Bankmann, Matthias Voigt

Abstract: In this work we investigate explicit and implicit difference equations and the corresponding infinite-time-horizon linear-quadratic optimal control problem. We derive conditions for feasibility of the optimal control problem, as well as existence and uniqueness of optimal controls, under assumptions weaker than those of the standard approaches in the literature, which rely on algebraic Riccati equations. To this end, we introduce and analyse a discrete-time Lur’e equation and a corresponding Kalman–Yakubovich–Popov (KYP) inequality. We show that solvability of the KYP inequality can be characterized via the spectral structure of a certain palindromic matrix pencil. The deflating subspaces of this pencil are finally used to construct solutions of the Lur’e equation. The results of this work are transferred from the continuous-time case; however, many additional technical difficulties arise in the discrete-time setting.
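A small numerical illustration of the spectral structure involved (an illustrative sketch, not the paper's construction): for a real T-palindromic pencil $$\lambda Z + Z^\top$$, the identity $$\det(\lambda Z + Z^\top) = \lambda^n \det(\lambda^{-1} Z + Z^\top)$$ forces the eigenvalues to come in reciprocal pairs $$(\lambda, 1/\lambda)$$, the kind of symmetry exploited when reading solvability off the pencil's spectral structure.

```python
# Check numerically that the eigenvalues of the T-palindromic pencil
# lambda*Z + Z^T come in reciprocal pairs. Z is a random matrix chosen
# for illustration; det(lambda*Z + Z^T) = 0  <=>  Z^T v = lambda*(-Z) v.
import numpy as np
from scipy.linalg import eigvals

rng = np.random.default_rng(2)
Z = rng.standard_normal((4, 4))

lam = eigvals(Z.T, -Z)                          # eigenvalues of the pencil
print("eigenvalues:       ", np.round(np.sort_complex(lam), 4))
print("their reciprocals: ", np.round(np.sort_complex(1 / lam), 4))
# Both lines agree (up to rounding): the spectrum is closed under lambda -> 1/lambda.
```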

