The generalised discrete algebraic Riccati equation in linear-quadratic optimal control

Automatica ◽  
2013 ◽  
Vol 49 (2) ◽  
pp. 471-478 ◽  
Author(s):  
Augusto Ferrante ◽  
Lorenzo Ntogramatzidis


2019 ◽  
Vol 2019 ◽  
pp. 1-17
Author(s):  
Avinash Kumar ◽  
Tushar Jain

This paper revisits the problem of synthesizing the optimal control law for linear systems with a quadratic cost. Traditionally, the state feedback gain matrix of the optimal controller is computed by solving the Riccati equation, which is obtained primarily via calculus-of-variations (CoV) or Hamilton–Jacobi–Bellman (HJB) equation-based approaches. Both approaches require additional assumptions in the solution procedure: the former introduces the notion of costates and exploits their relationship with the states to obtain a closed-form expression for the optimal control law, while the latter requires a priori knowledge of the optimal cost function. In this paper, we propose a novel method for computing linear quadratic optimal control laws using the global optimal control framework introduced by V. F. Krotov. As illustrated in this article, this framework requires neither the notion of costates nor any a priori information about the optimal cost function. However, under this framework the optimal control problem is translated into a nonconvex optimization problem. The novelty of the proposed method lies in transforming this nonconvex optimization problem into a convex one. Imposing convexity yields a linear matrix inequality (LMI), whose analysis is reported in this work; upon imposing optimality requirements, this LMI reduces to the Riccati equation. Insights and future research directions are presented at appropriate points in the article. Finally, numerical results demonstrate the proposed methodology.
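For reference, the classical route the abstract contrasts itself with — computing the state-feedback gain from the discrete algebraic Riccati equation — can be sketched as follows; the system matrices are illustrative values, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2-state, 1-input discrete-time system (illustrative only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Solve the discrete algebraic Riccati equation
#   P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q
P = solve_discrete_are(A, B, Q, R)

# Optimal state-feedback gain K, with u_k = -K x_k
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The closed loop A - BK is Schur stable (all eigenvalues inside the unit circle)
eigs = np.linalg.eigvals(A - B @ K)
print(np.max(np.abs(eigs)) < 1.0)  # True
```

The LMI-based route described in the abstract would recover the same gain once optimality is imposed; the Riccati solver above is simply the standard shortcut.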


2015 ◽  
Vol 2015 ◽  
pp. 1-11
Author(s):  
Xikui Liu ◽  
Guiling Li ◽  
Yan Li

The Karush-Kuhn-Tucker (KKT) theorem is used to study stochastic linear quadratic optimal control with a terminal constraint for discrete-time systems, allowing the control weighting matrices in the cost to be indefinite. A generalized difference Riccati equation is derived, which differs from that of the unconstrained case. It is proved that well-posedness and attainability of the stochastic linear quadratic optimal control problem are equivalent. Moreover, the optimal control can be expressed in terms of the solution of the generalized difference Riccati equation.
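For background, the deterministic, unconstrained analogue of such a difference Riccati equation is the standard backward recursion of finite-horizon discrete-time LQ control. A minimal sketch with illustrative matrices (the paper's generalized equation additionally carries stochastic terms and handles indefinite weights):

```python
import numpy as np

def riccati_recursion(A, B, Q, R, Qf, N):
    """Backward difference Riccati recursion for a finite-horizon LQ problem.

    Deterministic sketch only: P_N = Qf and, for k = N-1, ..., 0,
      P_k = Q + A' P_{k+1} A - A' P_{k+1} B (R + B' P_{k+1} B)^{-1} B' P_{k+1} A.
    """
    P = Qf
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # stage feedback gain, u_k = -K x_k
        P = Q + A.T @ P @ (A - B @ K)          # difference Riccati update
        gains.append(K)
    gains.reverse()  # gains[k] applies at time k
    return P, gains

# Illustrative data (not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = np.eye(2)

P0, gains = riccati_recursion(A, B, Q, R, Qf, N=500)
```

Over a long horizon the recursion settles to the stationary (algebraic) Riccati solution, which is why the finite-horizon and infinite-horizon gains agree far from the terminal time.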


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yan Chen ◽  
Jie Xu

In this paper, the delayed doubly stochastic linear quadratic optimal control problem is discussed. The expression of the optimal control is derived for a general delayed doubly stochastic control system containing time delays in both the state variable and the control variable, and its uniqueness is proved using the classical parallelogram rule. The paper then considers the generalized matrix-valued Riccati equation for a special delayed doubly stochastic linear quadratic control system and expresses the optimal control and the value function in terms of the solution of this Riccati equation.


Author(s):  
Andrea Pesare ◽  
Michele Palladino ◽  
Maurizio Falcone

In this paper, we deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we suppose that the knowledge an agent has of the current system is represented by a probability distribution π on the space of matrices. Furthermore, we assume that this probability measure is suitably updated to account for the increased experience the agent obtains while exploring the environment, approximating the underlying dynamics with increasing accuracy. Under these assumptions, we show that the optimal control obtained by solving the “average” linear quadratic optimal control problem with respect to a certain π converges to the optimal control of the linear quadratic optimal control problem governed by the actual underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, where prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we present a numerical test that confirms the theoretical results.
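The convergence statement can be illustrated numerically. The sketch below is a simplified surrogate, not the paper's construction: instead of solving the full "average" LQ problem over π, it uses the π-mean matrix as a stand-in, and checks that the resulting gain approaches the true-dynamics gain as the belief π concentrates around the true matrix. All matrices and distribution parameters are assumed for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)

A_star = np.array([[0.0, 1.0], [-1.0, -0.5]])   # true (unknown) dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]])

def lqr_gain(A):
    """Continuous-time LQR gain K with u = -K x."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K_star = lqr_gain(A_star)

errors = []
for sigma in [0.5, 0.1, 0.02]:
    # belief pi: Gaussian perturbations of A_star with entrywise std sigma
    samples = [A_star + sigma * rng.standard_normal((2, 2)) for _ in range(50)]
    A_mean = np.mean(samples, axis=0)           # surrogate for the pi-average
    errors.append(np.linalg.norm(lqr_gain(A_mean) - K_star))

# As sigma shrinks (the belief concentrates), the gain error shrinks with it.
```

This mirrors the paper's message at a toy scale: a more concentrated posterior over the dynamics yields a control closer to the one for the actual system.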


Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 137
Author(s):  
Vladimir Turetsky

Two inverse ill-posed problems are considered. The first is the restoration of an input to a linear system. The second is the restoration of the time-dependent coefficients of a linear ordinary differential equation. Both problems are reformulated as auxiliary optimal control problems with a regularizing cost functional. For the coefficient restoration problem, two control models are proposed. In the first model, the coefficients are approximated using the output and estimates of its derivatives; this model yields an approximating linear-quadratic optimal control problem with a known explicit solution, the derivatives themselves being obtained as auxiliary linear-quadratic tracking controls. The second control model is exact and leads to a bilinear-quadratic optimal control problem, which is tackled in two ways: by an iterative procedure and by feedback linearization. Simulation results show that the bilinear model provides more accurate coefficient estimates.
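The first reformulation — input restoration stabilized by a regularizing (Tikhonov-type) cost — can be sketched in discrete time. The scalar system, noise level, and regularization weight below are assumed for illustration; the paper works with a continuous-time optimal-control formulation rather than this batch least-squares stand-in:

```python
import numpy as np

# Scalar discrete-time system x_{k+1} = a x_k + b u_k, measured output y_k = x_k.
a, b, N = 0.9, 1.0, 50
t = np.arange(N)
u_true = np.sin(0.2 * t)                       # input signal to be restored

# Simulate a noisy output record.
rng = np.random.default_rng(1)
x = 0.0
y = np.empty(N)
for k in range(N):
    x = a * x + b * u_true[k]
    y[k] = x + 0.01 * rng.standard_normal()

# Linear map y = G u (zero initial state): y_k = sum_{j<=k} a^{k-j} b u_j.
G = np.array([[a ** (k - j) * b if j <= k else 0.0 for j in range(N)]
              for k in range(N)])

# Tikhonov-regularized restoration: min ||G u - y||^2 + alpha ||u||^2.
alpha = 1e-2                                   # regularization weight (assumed)
u_hat = np.linalg.solve(G.T @ G + alpha * np.eye(N), G.T @ y)

rel_err = np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true)
```

The regularizing term plays the same role as the regularizing cost functional in the auxiliary optimal control problem: it trades a small bias for robustness of the restored input to measurement noise.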

