Linear–quadratic optimal steady state controllers for engineering students and practicing engineers

Author(s):  
Verica Radisavljevic-Gajic

This paper is an overview of fundamental linear–quadratic optimal control techniques for linear dynamic systems. The presentation is suitable for undergraduate and graduate students and practicing engineers, and the paper can be used by instructors as supplemental material for undergraduate and graduate control system courses. The paper shows how to solve a dynamic optimization problem, optimizing an integral quadratic performance criterion along trajectories of a linear dynamic system over an infinite time horizon (the steady-state linear–quadratic optimal control problem), by solving a static optimization problem. All derivations in the paper require only elementary knowledge of linear algebra and state-space linear system analysis. Results are also presented for the observer-driven linear–quadratic steady-state optimal controller, the output feedback-based linear–quadratic optimal controller, and the Kalman filter-driven linear–quadratic stochastic optimal controller. With a full understanding of the derivations of the linear–quadratic optimal controller, the observer-driven linear–quadratic optimal controller, the optimal linear–quadratic output feedback controller, and the optimal linear–quadratic stochastic controller, students and engineers will feel confident using these controllers in numerous engineering and scientific applications. Several optimal linear–quadratic control case studies involving models of real physical systems, with the corresponding Simulink block diagrams and MATLAB code, are included in the paper.
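
As a complement to the derivations summarized above, the following is a minimal sketch of the steady-state linear–quadratic regulator in Python (the paper itself supplies MATLAB code and Simulink diagrams); the double-integrator model and the weighting matrices are illustrative placeholders, not taken from the paper, and the optimal gain is obtained from the algebraic Riccati equation via SciPy.

import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator model (placeholder, not from the paper):
#   x_dot = A x + B u,  cost J = integral of (x' Q x + u' R u) dt
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Steady-state LQR: solve the algebraic Riccati equation
#   A'P + PA - P B inv(R) B'P + Q = 0,  then K = inv(R) B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("Optimal gain K =", K)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))  # should all have negative real parts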

2019, Vol. 2019, pp. 1-17
Author(s):  
Avinash Kumar ◽  
Tushar Jain

This paper revisits the problem of synthesizing the optimal control law for linear systems with a quadratic cost. Traditionally, the state feedback gain matrix of the optimal controller is computed by solving the Riccati equation, which is primarily obtained using calculus-of-variations (CoV) and Hamilton–Jacobi–Bellman (HJB) equation-based approaches. To obtain the Riccati equation, these approaches require additional assumptions in the solution procedure: the former approach requires the notion of costates, whose relationship with the states is then exploited to obtain the closed-form expression of the optimal control law, while the latter requires a priori knowledge of the optimal cost function. In this paper, we propose a novel method for computing linear quadratic optimal control laws using the global optimal control framework introduced by V. F. Krotov. As illustrated in this article, this framework requires neither the notion of costates nor any a priori information about the optimal cost function. However, under this framework the optimal control problem is translated into a nonconvex optimization problem. The novelty of the proposed method lies in transforming this nonconvex optimization problem into a convex one. The convexity imposition results in a linear matrix inequality (LMI), whose analysis is reported in this work; furthermore, this LMI reduces to the Riccati equation upon imposing optimality requirements. Insights and future research directions are presented at appropriate points in the article. Finally, numerical results are provided to demonstrate the proposed methodology.
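
The connection between the Riccati equation and an LMI can be illustrated with the standard semidefinite-programming relaxation of the LQR problem; the sketch below is a hedged illustration and does not reproduce the Krotov-based construction of the paper. Maximizing the trace of P subject to the Riccati inequality, written as an LMI through a Schur complement, recovers the stabilizing Riccati solution. The system matrices and the cvxpy/SCS toolchain are illustrative assumptions.

import numpy as np
import cvxpy as cp

# Illustrative data (placeholders, not from the paper)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Riccati inequality A'P + PA + Q - P B inv(R) B'P >= 0 as an LMI (Schur complement):
#   [[A'P + PA + Q,  P B],
#    [B'P,           R  ]]  >>  0
P = cp.Variable((2, 2), symmetric=True)
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ B],
               [B.T @ P, R]])
prob = cp.Problem(cp.Maximize(cp.trace(P)), [lmi >> 0])
prob.solve()  # at the optimum the inequality is tight and P solves the Riccati equation

K = np.linalg.solve(R, B.T @ P.value)  # optimal state feedback gain, u = -K x
print("P =\n", P.value)
print("K =", K)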


Author(s):  
Andrea Pesare ◽  
Michele Palladino ◽  
Maurizio Falcone

In this paper, we will deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we will suppose that the knowledge that an agent has of the current system is represented by a probability distribution π on the space of matrices. Furthermore, we will assume that such a probability measure is suitably updated to take into account the increased experience that the agent obtains while exploring the environment, approximating the underlying dynamics with increasing accuracy. Under these assumptions, we will show that the optimal control obtained by solving the "average" linear quadratic optimal control problem with respect to a certain π converges to the optimal control of the linear quadratic optimal control problem governed by the actual, underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, where prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we will show a numerical test that confirms the theoretical results.
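
A hedged numerical sketch of the convergence intuition is given below; it only illustrates that the LQ feedback gain depends continuously on the dynamics matrix, so that gains computed from progressively better estimates approach the gain of the true system, and it does not reproduce the averaged-problem construction of the paper. All matrices, the noise model, and the sample sizes are illustrative assumptions.

import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    # Steady-state LQ gain for x_dot = A x + B u
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# True (unknown to the agent) dynamics and illustrative weights
A_true = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K_true = lqr_gain(A_true, B, Q, R)

rng = np.random.default_rng(0)
for n_samples in [1, 10, 100, 1000]:
    # Crude stand-in for an updated belief: the mean of noisy observations of A_true
    A_hat = A_true + rng.normal(scale=0.5, size=(n_samples, 2, 2)).mean(axis=0)
    K_hat = lqr_gain(A_hat, B, Q, R)
    print(n_samples, "samples -> gain error", np.linalg.norm(K_hat - K_true))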


Axioms, 2021, Vol. 10 (3), pp. 137
Author(s):  
Vladimir Turetsky

Two inverse ill-posed problems are considered. The first problem is the restoration of the input of a linear system. The second is the restoration of time-dependent coefficients of a linear ordinary differential equation. Both problems are reformulated as auxiliary optimal control problems with a regularizing cost functional. For the coefficient restoration problem, two control models are proposed. In the first model, the control coefficients are approximated by the output and estimates of its derivatives. This model yields an approximating linear-quadratic optimal control problem with a known explicit solution; the derivatives are also obtained as auxiliary linear-quadratic tracking controls. The second control model is exact and leads to a bilinear-quadratic optimal control problem, which is tackled in two ways: by an iterative procedure and by feedback linearization. Simulation results show that the bilinear model provides more accurate coefficient estimates.
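
The regularization idea behind the first (input restoration) problem can be illustrated with a discretized least-squares sketch using a Tikhonov penalty on the input; the scalar system, noise level, and regularization weight below are illustrative assumptions, and the auxiliary optimal control formulation of the paper is not reproduced here.

import numpy as np

# Discretized scalar system x_{k+1} = a x_k + b u_k, with measured output y_k = x_{k+1} (illustrative)
dt, N = 0.01, 500
a, b = 1.0 - 0.5 * dt, dt
t = dt * np.arange(N)

# "True" input to be restored and the resulting noisy output
u_true = np.sin(2 * np.pi * t)
x = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = a * x[k] + b * u_true[k]
rng = np.random.default_rng(1)
y_meas = x[1:] + 0.01 * rng.normal(size=N)

# Linear map G from the input sequence to the output sequence: y = G u
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = a ** (k - j) * b

# Tikhonov-regularized restoration: minimize ||G u - y||^2 + alpha ||u||^2
alpha = 1e-4
u_hat = np.linalg.solve(G.T @ G + alpha * np.eye(N), G.T @ y_meas)
print("relative restoration error:", np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))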


Author(s):  
Nacira Agram ◽  
Bernt Øksendal

The classical maximum principle for optimal stochastic control states that if a control u* is optimal, then the corresponding Hamiltonian attains its maximum at u = u*. The first proofs of this result assumed that the control did not enter the diffusion coefficient, and it was also assumed that there were no jumps in the system. Subsequently, it was discovered by Shige Peng (still assuming no jumps) that one could also allow the diffusion coefficient to depend on the control, provided that the corresponding adjoint backward stochastic differential equation (BSDE) for the first-order derivative was extended to include an extra BSDE for the second-order derivatives. In this paper, we present an alternative approach based on Hida–Malliavin calculus and white noise theory. This enables us to handle the general case with jumps, allowing both the diffusion coefficient and the jump coefficient to depend on the control, without the extra BSDE for the second-order derivatives. The result is illustrated by an example of a constrained linear-quadratic optimal control problem.
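
For reference, the Hamiltonian and the maximum condition take the following standard textbook form in the jump-free setting with a control-independent diffusion coefficient (a sketch only, not the Hida–Malliavin formulation of the paper):

% Jump-free setting with diffusion independent of the control (illustration only):
%   dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t)\,dB_t
% Hamiltonian with adjoint pair (p, q):
H(t, x, u, p, q) = f(t, x, u) + b(t, x, u)\,p + \sigma(t, x)\,q
% Maximum condition along the optimal pair (\hat{X}, \hat{u}) with adjoints (\hat{p}, \hat{q}):
H(t, \hat{X}_t, \hat{u}_t, \hat{p}_t, \hat{q}_t) = \max_{u \in U} H(t, \hat{X}_t, u, \hat{p}_t, \hat{q}_t)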

