A Reference Governor for Nonlinear Systems Based on Quadratic Programming

Author(s):
Nan I. Li,
Ilya Kolmanovsky,
Anouck Girard

The reference governor modifies set-point commands to a closed-loop system in order to enforce state and control constraints. In this paper, we describe an approach to reference governor implementation for nonlinear systems, which is based on bounding (covering) the response of a nonlinear system by the response of a linear model with a set-bounded disturbance input. Such a design strategy is of interest as it reduces the online optimization problem to a convex quadratic programming (QP) problem with linear inequality constraints, thereby permitting standard QP solvers to be used. A numerical example is reported.
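Because the covering linear model makes the command enter the predicted constraints linearly, the online QP for a scalar set-point reduces to projecting the reference onto a feasible interval. The following is a minimal sketch of one such reference-governor step under assumed matrices `A`, `B`, `C` and a disturbance bound `w_max` (all names are illustrative, not from the paper):

```python
import numpy as np

def rg_step(A, B, C, x, r, y_max, w_max, N=50):
    """One reference-governor step: return the admissible constant
    command v closest to the set-point r, for the covering linear model
        x[k+1] = A x[k] + B v,   y[k] = C x[k] + w[k],  |w| <= w_max,
    with the constraint y <= y_max imposed over an N-step horizon."""
    lb, ub = -np.inf, np.inf
    Ak = np.eye(A.shape[0])          # A^k, starting at k = 0
    S = np.zeros_like(B)             # sum_{j<k} A^j B
    for _ in range(N):
        S = S + Ak @ B               # forced-response gain at step k+1
        Ak = Ak @ A
        f = float(C @ Ak @ x)        # free response at step k+1
        g = float(C @ S)             # command-to-output gain
        rhs = y_max - w_max - f      # tighten by the disturbance bound
        if g > 1e-12:
            ub = min(ub, rhs / g)    # constraint g*v <= rhs
        elif g < -1e-12:
            lb = max(lb, rhs / g)
        elif rhs < 0:
            return None              # infeasible at this state
    # the 1-D QP min (v - r)^2 over [lb, ub] is solved by clipping
    return float(np.clip(r, lb, ub))
```

For a stable first-order example with unit DC gain, a large set-point is clipped to the tightened steady-state limit, while an already-admissible set-point passes through unchanged.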

1995, Vol. 117 (2), pp. 126-133
Author(s):  
Suhada Jayasuriya

The problem of explicitly determining the worst persistent input disturbance that a closed-loop system can tolerate under prespecified state and control constraints is studied. Verifying designs aimed at maximizing the size of persistent bounded disturbances while satisfying system constraints typically requires extensive simulations, because the exact nature of the worst input is not known. In this paper the worst input is completely characterized for both SISO and MIMO cases: a finite number of specific impulse responses of the closed-loop system determines the worst persistent input disturbance. For a SISO system with n state constraints (|x_i| ≤ β_i), a control constraint (|u| ≤ β_u), and an output constraint (|y| ≤ β_o), n + 2 impulse responses are generally needed. With this new result, the large number of simulations typically needed for design verification can be significantly reduced. Two examples illustrate how the new characterization can be utilized.
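The construction behind this kind of characterization is the classical worst-case result for l∞-bounded disturbances on a stable discrete-time SISO loop: the peak output equals the l1 norm of the relevant impulse response times the disturbance bound, and is attained by an input matching the sign of the time-reversed impulse response. A small illustrative sketch (the system below is a made-up example, not one from the paper):

```python
import numpy as np

def worst_persistent_input(h, w_max):
    """Worst l_inf-bounded input for impulse response h, aligned so
    that the peak output occurs at the final step: w matches the sign
    of the time-reversed impulse response."""
    return w_max * np.sign(h[::-1])

# impulse response of an illustrative stable, oscillatory closed loop
k = np.arange(40)
h = 0.8 ** k * np.cos(0.5 * k)

w = worst_persistent_input(h, w_max=1.0)
# output at the final step is the convolution sum h[::-1] . w,
# which this input drives to the l1 norm of h
peak = np.dot(h[::-1], w)
```

This is why only a handful of impulse responses (one per constrained signal) need to be examined: each constraint's worst case is read off from its own impulse response.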


Author(s):
Kaiwen Liu,
Nan Li,
Ilya Kolmanovsky,
Denise Rizzo,
Anouck Girard

Abstract This paper proposes a learning reference governor (LRG) approach to enforce state and control constraints in systems for which an accurate model is unavailable. The approach enables the reference governor to gradually improve command-tracking performance through learning, while enforcing the constraints both during learning and after learning is completed. The learning can be performed either on a black-box model of the system or directly on the hardware. After introducing the LRG algorithm and outlining its theoretical properties, the paper investigates its application to fuel truck (tank truck) rollover avoidance. Through simulations based on a fuel truck model that accounts for liquid-fuel sloshing effects, we show that the proposed LRG can effectively protect fuel trucks from rollover accidents under various operating conditions.
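The core idea, learning which commands are constraint-admissible from black-box trials and then restricting command updates to the learned safe set, can be sketched as a toy example. Everything below (the stand-in plant `rollout_peak`, the set-membership learning rule, the step logic) is a hypothetical illustration of the general principle, not the paper's algorithm:

```python
import numpy as np

def rollout_peak(v, steps=100):
    """Stand-in black-box closed loop (could be hardware): peak output
    for a constant command v, modeled as a first-order lag with DC
    gain 1.2 and no overshoot."""
    y, peak = 0.0, 0.0
    for _ in range(steps):
        y += 0.3 * (1.2 * v - y)
        peak = max(peak, y)
    return peak

def learn_safe_commands(candidates, y_max):
    """Label candidate commands safe/unsafe by trial rollout, keeping
    only those verified not to violate the constraint y <= y_max."""
    return {float(v) for v in candidates if rollout_peak(v) <= y_max}

def lrg_step(v_now, r, safe_set):
    """Move the applied command toward the set-point r, but only
    through commands already verified safe; hold otherwise."""
    closer = [v for v in safe_set if abs(v - r) < abs(v_now - r)]
    return min(closer, key=lambda v: abs(v - r), default=v_now)
```

Since every applied command has been verified by rollout, the constraint holds during learning as well as afterward; tracking improves as the safe set fills in.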

