Discounted-cost Linear Quadratic Regulation of Switched Linear Systems

Author(s):  
Ma Ruicheng ◽  
Aoxue Xiang

Abstract In this paper, we investigate the design of a discounted-cost linear quadratic regulator for switched linear systems. The distinguishing feature of the proposed method is that the designed regulator achieves not only the desired optimization index but also exponential convergence of the state trajectory of the closed-loop switched linear system. First, we adopt an embedding transformation to recast the studied problem as a quadratic-programming problem. The bang-bang-type solution of the embedded optimal control problem on a finite time horizon is then shown to be the optimal solution of the original problem. Next, computable sufficient conditions on the discounted-cost linear quadratic regulator are proposed for the finite-time and infinite-time horizon cases, respectively. Finally, an example is provided to demonstrate the effectiveness of the proposed method.
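The embedding transformation and bang-bang characterization above are specific to the paper, but the discounted-cost LQR ingredient can be sketched. Below is a minimal, illustrative Python sketch, not the paper's method: per-mode discounted Riccati gains obtained by scaling (A, B) with the square root of the discount factor, combined with a simple greedy min-switching rule over the mode set. All matrices and the discount factor are hypothetical.

```python
# Illustrative heuristic (not the paper's embedding method): greedy
# discounted-cost control of a discrete-time switched system x+ = A_i x + B_i u.
import numpy as np
from scipy.linalg import solve_discrete_are

gamma = 0.95                      # discount factor (hypothetical value)
Q = np.eye(2)
R = np.eye(1)

# Two hypothetical modes of the switched system.
modes = [
    (np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.0], [0.1]])),
    (np.array([[0.9, 0.0], [0.2, 1.1]]), np.array([[0.1], [0.0]])),
]

# Discounting the stage cost by gamma^t is equivalent to scaling (A, B)
# by sqrt(gamma), so each mode's discounted Riccati equation reduces to
# a standard discrete algebraic Riccati equation (DARE).
P, K = [], []
for A, B in modes:
    As, Bs = np.sqrt(gamma) * A, np.sqrt(gamma) * B
    Pi = solve_discrete_are(As, Bs, Q, R)
    Ki = np.linalg.solve(R + Bs.T @ Pi @ Bs, Bs.T @ Pi @ As)
    P.append(Pi)
    K.append(Ki)

def step(x):
    """Pick the mode with the smallest quadratic cost-to-go estimate
    (a vertex of the embedded simplex), then apply its LQR gain."""
    i = min(range(len(modes)), key=lambda j: float(x.T @ P[j] @ x))
    A, B = modes[i]
    u = -K[i] @ x
    return i, u, A @ x + B @ u

x = np.array([[1.0], [-1.0]])
for t in range(50):
    i, u, x = step(x)
```

The mode choice here mimics the bang-bang structure of the embedded problem in the loosest sense: the relaxed mode weights are always taken at a vertex of the simplex.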

Author(s):  
Tomas Björk

We study a general stochastic optimal control problem within the framework of a controlled SDE. The problem is studied using dynamic programming, and we derive the Hamilton–Jacobi–Bellman PDE. By stating and proving a verification theorem, we show that solving this PDE is equivalent to solving the control problem. As an example, the theory is then applied to the linear quadratic regulator.
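For reference, the HJB equation and its LQR specialization can be written out explicitly. This is the standard textbook statement in generic notation, which may differ from the chapter's own symbols.

```latex
% HJB equation for the value function V(t,x) of a controlled SDE
%   dX_t = b(t, X_t, u_t) dt + \sigma(t, X_t, u_t) dW_t,
% with running cost F and terminal cost \Phi:
\[
\partial_t V(t,x)
  + \inf_{u}\Big\{ F(t,x,u) + b(t,x,u)^{\top} \nabla_x V(t,x)
  + \tfrac{1}{2}\,\mathrm{tr}\big[\sigma\sigma^{\top}(t,x,u)\,\nabla_x^2 V(t,x)\big]\Big\} = 0,
\qquad V(T,x) = \Phi(x).
\]
% LQR specialization: F = x^{\top} Q x + u^{\top} R u, b = Ax + Bu,
% constant \sigma. The ansatz V(t,x) = x^{\top} P(t) x + q(t) reduces
% the PDE to the Riccati ODE
\[
\dot P(t) = -A^{\top} P(t) - P(t) A + P(t) B R^{-1} B^{\top} P(t) - Q,
\qquad P(T) = P_T,
\]
% with optimal feedback u^{*}(t,x) = -R^{-1} B^{\top} P(t)\, x.
```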


Author(s):  
Andrea Pesare ◽  
Michele Palladino ◽  
Maurizio Falcone

Abstract In this paper, we deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we suppose that the knowledge an agent has of the current system is represented by a probability distribution $\pi$ on the space of matrices. Furthermore, we assume that this probability measure is suitably updated to take into account the increased experience that the agent obtains while exploring the environment, approximating the underlying dynamics with increasing accuracy. Under these assumptions, we show that the optimal control obtained by solving the "average" linear quadratic optimal control problem with respect to a certain $\pi$ converges to the optimal control of the linear quadratic optimal control problem governed by the actual, underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, where prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we present a numerical test that confirms the theoretical results.
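The convergence claim can be illustrated with a toy numerical experiment. The sketch below approximates the "average" problem by certainty equivalence on the mean of sampled dynamics; the paper's actual averaged formulation is richer, and the system matrices, noise scales, and sample counts here are all illustrative assumptions.

```python
# Toy illustration: as the distribution pi concentrates around the true
# dynamics (shrinking sigma mimics accumulating experience), the LQR gain
# computed from the mean system approaches the true LQR gain.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
A_true = np.array([[0.0, 1.0], [-1.0, -0.5]])
B_true = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

def lqr_gain(A, B):
    """Continuous-time LQR gain K = R^{-1} B^T P from the algebraic
    Riccati equation A^T P + P A - P B R^{-1} B^T P + Q = 0."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K_true = lqr_gain(A_true, B_true)

for sigma in [0.5, 0.1, 0.01]:
    samples = [A_true + sigma * rng.standard_normal((2, 2)) for _ in range(200)]
    A_mean = np.mean(samples, axis=0)
    K = lqr_gain(A_mean, B_true)
    print(f"sigma={sigma}: ||K - K_true|| = {np.linalg.norm(K - K_true):.4f}")
```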


Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 137
Author(s):  
Vladimir Turetsky

Two ill-posed inverse problems are considered. The first problem is the restoration of the input of a linear system. The second is the restoration of the time-dependent coefficients of a linear ordinary differential equation. Both problems are reformulated as auxiliary optimal control problems with a regularizing cost functional. For the coefficient restoration problem, two control models are proposed. In the first model, the control coefficients are approximated by the output and the estimates of its derivatives. This model yields an approximating linear-quadratic optimal control problem with a known explicit solution. The derivatives are also obtained as auxiliary linear-quadratic tracking controls. The second control model is accurate and leads to a bilinear-quadratic optimal control problem. The latter is tackled in two ways: by an iterative procedure and by feedback linearization. Simulation results show that the bilinear model provides more accurate coefficient estimates.
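As a concrete toy version of the first problem (input restoration), the following sketch discretizes a scalar system and recovers the input by Tikhonov-regularized least squares. The system, noise level, and regularization weight alpha are illustrative assumptions; the paper's auxiliary optimal control formulation is more general than this.

```python
# Recover the input u of the scalar system x' = a x + u from a noisy
# observation of x, via min ||G u - y||^2 + alpha ||u||^2 after an
# explicit-Euler discretization (all parameters are hypothetical).
import numpy as np

dt, n = 0.01, 500
a = -1.0
t = np.arange(n) * dt
u_true = np.sin(2 * np.pi * t)                  # hypothetical true input

# Forward simulate x_{k+1} = x_k + dt*(a x_k + u_k), x_0 = 0.
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (a * x[k] + u_true[k])
y = x + 0.001 * np.random.default_rng(1).standard_normal(n)   # noisy output

# Build the linear map G: u -> x. Unrolling the recursion with
# phi = 1 + dt*a gives x_{k+1} = sum_{j<=k} phi^{k-j} * dt * u_j.
G = np.zeros((n, n))
phi = 1.0 + dt * a
for k in range(n - 1):
    G[k + 1, : k + 1] = dt * phi ** np.arange(k, -1, -1)

alpha = 1e-4                                     # regularization weight
u_hat = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ y)
```

The regularization term is what makes the ill-posed restoration tractable: without it, G is badly conditioned and the estimate amplifies the measurement noise.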

