Linear Quadratic Optimal Control Design: A Novel Approach Based on Krotov Conditions

2019, Vol 2019, pp. 1-17
Author(s): Avinash Kumar, Tushar Jain

This paper revisits the problem of synthesizing the optimal control law for linear systems with a quadratic cost. Traditionally, the state feedback gain matrix of the optimal controller is computed by solving the Riccati equation, which is obtained primarily via calculus of variations (CoV)-based and Hamilton–Jacobi–Bellman (HJB) equation-based approaches. Both approaches require certain assumptions in the solution procedure: the former requires the notion of costates, whose relationship with the states is then exploited to obtain the closed-form expression of the optimal control law, while the latter requires a priori knowledge of the optimal cost function. In this paper, we propose a novel method for computing linear quadratic optimal control laws using the global optimal control framework introduced by V. F. Krotov. As illustrated in this article, this framework requires neither the notion of costates nor any a priori information about the optimal cost function. However, under this framework, the optimal control problem is translated into a nonconvex optimization problem. The novelty of the proposed method lies in transforming this nonconvex problem into a convex one. The convexity imposition yields a linear matrix inequality (LMI), whose analysis is reported in this work; this LMI reduces to the Riccati equation when optimality requirements are imposed. Insights and future research directions are presented at appropriate points in the article. Finally, numerical results demonstrate the proposed methodology.
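For orientation, the sketch below shows the classical Riccati-based computation that the paper revisits: solving the algebraic Riccati equation and forming the state-feedback gain. The double-integrator plant and the weights Q and R are illustrative assumptions, not taken from the paper; the paper's Krotov-based route instead arrives at an LMI that reduces to this Riccati equation once optimality is imposed.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant (assumed for the example): x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost weights: J = integral of (x'Qx + u'Ru) dt
Q = np.diag([1.0, 0.1])
R = np.array([[1.0]])

# Classical route: solve the algebraic Riccati equation
#   A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain, u = -K x
K = np.linalg.solve(R, B.T @ P)
print("P =\n", P)
print("K =", K)
```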

Author(s): Verica Radisavljevic-Gajic

This paper is an overview of fundamental linear–quadratic optimal control techniques for linear dynamic systems. The presentation is suitable for undergraduate and graduate students and practicing engineers, and class instructors can use the paper as supplemental material for undergraduate and graduate control system courses. The paper shows how to solve a dynamic optimization problem, namely optimizing an integral quadratic performance criterion along the trajectories of a linear dynamic system over an infinite time horizon (the steady-state linear–quadratic optimal control problem), by solving a static optimization problem. All derivations in the paper require only elementary knowledge of linear algebra and state-space linear system analysis. Results are also presented for the observer-driven linear–quadratic steady-state optimal controller, the output-feedback linear–quadratic optimal controller, and the Kalman filter-driven linear–quadratic stochastic optimal controller. With a full understanding of these derivations, students and engineers can confidently use these controllers in numerous engineering and scientific applications. Several linear–quadratic optimal control case studies involving models of real physical systems, with the corresponding Simulink block diagrams and MATLAB code, are included in the paper.
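A minimal sketch of the steady-state regulator and the Kalman filter-driven (observer-based) variant discussed above, using SciPy's Riccati solver; the plant matrices and noise covariances are assumed for illustration and are not the paper's case studies.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant (values assumed): x_dot = A x + B u + w,  y = C x + v
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[1.0]])      # control weighting
W = 0.01 * np.eye(2)       # process-noise covariance
V = np.array([[0.1]])      # measurement-noise covariance

# Steady-state LQ regulator gain from the control Riccati equation
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # u = -K x_hat

# Steady-state Kalman filter gain from the dual (filter) Riccati equation
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)           # observer gain

# Observer-driven controller: x_hat_dot = A x_hat + B u + L (y - C x_hat)
print("LQR gain K =", K)
print("Kalman gain L =\n", L)
```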


2015, Vol 2015, pp. 1-11
Author(s): Xikui Liu, Guiling Li, Yan Li

The Karush-Kuhn-Tucker (KKT) theorem is used to study stochastic linear quadratic optimal control with a terminal constraint for discrete-time systems, allowing the control weighting matrices in the cost to be indefinite. A generalized difference Riccati equation is derived, which differs from that of the unconstrained case. It is proved that well-posedness and attainability of the stochastic linear quadratic optimal control problem are equivalent. Moreover, an optimal control can be expressed in terms of the solution of the generalized difference Riccati equation.
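As a baseline for the generalized equation mentioned above, the sketch below runs the standard backward difference Riccati recursion for a deterministic, unconstrained discrete-time LQ problem; the system matrices are assumed for illustration, and the paper's generalized equation for the stochastic, terminally constrained, possibly indefinite case contains additional terms.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward difference Riccati recursion for the standard deterministic
    discrete-time LQ problem (baseline case only)."""
    P = Qf
    gains = []
    for _ in range(N):
        # P_k = Q + A'P A - A'P B (R + B'P B)^{-1} B'P A, with u_k = -K_k x_k
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return P, gains[::-1]

# Illustrative system and weights (assumed for the example)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2); R = np.array([[0.1]]); Qf = 10 * np.eye(2)
P0, Ks = finite_horizon_lqr(A, B, Q, R, Qf, N=50)
print("P_0 =\n", P0)
```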


2020, Vol 2020, pp. 1-10
Author(s): Yan Chen, Jie Xu

In this paper, the delayed doubly stochastic linear quadratic optimal control problem is discussed. The expression of the optimal control is derived for the general delayed doubly stochastic control system, which contains time delays in both the state variable and the control variable, and its uniqueness is proved using the classical parallelogram rule. The paper then focuses on the generalized matrix-valued Riccati equation for a special delayed doubly stochastic linear quadratic control system and gives the expression of the optimal control and the value function in terms of the solution of this Riccati equation.
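A schematic of the parallelogram-rule uniqueness argument referred to above, stated for a generic quadratic cost J with strictly positive control weighting; the functional and controls here are placeholders, not the paper's delayed doubly stochastic setting.

```latex
% Sketch of the parallelogram-rule uniqueness argument (schematic J, u_1, u_2):
% for a quadratic cost J, the parallelogram identity gives
\[
  J(u_1) + J(u_2)
  = 2\,J\!\Big(\tfrac{u_1+u_2}{2}\Big) + 2\,\widetilde J\!\Big(\tfrac{u_1-u_2}{2}\Big),
  \qquad \widetilde J \ge 0 \ \text{the purely quadratic part of } J.
\]
% If u_1 and u_2 are both optimal with value J^*, admissibility of (u_1+u_2)/2
% gives J((u_1+u_2)/2) >= J^*, hence \widetilde J((u_1-u_2)/2) = 0, and strict
% positivity of the control weighting forces u_1 = u_2.
```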

