Safety-Critical Optimal Control for Autonomous Systems

2021 ◽ Vol 34 (5) ◽ pp. 1723-1742 ◽ Author(s): Wei Xiao, Christos G. Cassandras, Calin Belta

2019 ◽ Vol 50 (6) ◽ pp. 1275-1289 ◽ Author(s): Chi Zhang, Minggang Gan, Jingang Zhao

2021 ◽ Vol 8 ◽ Author(s): S. M. Nahid Mahmud, Scott A. Nivison, Zachary I. Bell, Rushikesh Kamalapurkar

Reinforcement learning has been established over the past decade as an effective tool to find optimal control policies for dynamical systems, with recent focus on approaches that guarantee safety during the learning and/or execution phases. In general, safety guarantees are critical in reinforcement learning when the system is safety-critical and/or task restarts are not practically feasible. In optimal control theory, safety requirements are often expressed in terms of state and/or control constraints. In recent years, reinforcement learning approaches that rely on persistent excitation have been combined with a barrier transformation to learn optimal control policies under state constraints. To soften the excitation requirements, model-based reinforcement learning methods that rely on exact model knowledge have also been integrated with the barrier transformation framework. The objective of this paper is to develop a safe reinforcement learning method for deterministic nonlinear systems with parametric uncertainties in the model that learns approximate constrained optimal policies without relying on stringent excitation conditions. To that end, a model-based reinforcement learning technique that utilizes a novel filtered concurrent learning method, along with a barrier transformation, is developed to realize simultaneous learning of unknown model parameters and approximate optimal state-constrained control policies for safety-critical systems.
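
For intuition, the sketch below shows one common form of the barrier transformation used in this line of work (a minimal, hypothetical scalar example in Python; the function names and constraint bounds are illustrative and not taken from the paper). The map takes a constrained interval (a, A), with a < 0 < A, onto the entire real line, so that a state-constrained learning problem can be recast as an unconstrained one in the transformed coordinates, where the model parameters and the approximate optimal policy are then learned.

```python
# Minimal sketch, assuming the log-ratio form of the barrier transformation
# often used in barrier-based reinforcement learning; constants are illustrative.
import numpy as np

def barrier(x, a, A):
    """Map a state x in (a, A), with a < 0 < A, onto the real line."""
    return np.log((A * (a - x)) / (a * (A - x)))

def barrier_inv(s, a, A):
    """Inverse map: recover x in (a, A) from the transformed state s."""
    return a * A * (np.exp(s) - 1.0) / (a * np.exp(s) - A)

a, A = -1.0, 2.0                              # hypothetical state constraint: -1 < x < 2
x = np.linspace(a + 1e-3, A - 1e-3, 7)        # states approaching both bounds
s = barrier(x, a, A)

print(np.round(s, 3))                         # |s| grows without bound near the constraints
print(np.allclose(barrier_inv(s, a, A), x))   # True: the map is invertible on (a, A)
```

Because the transformed state diverges as the original state approaches a constraint boundary, any policy that keeps the transformed dynamics bounded keeps the original state strictly inside its constraint set.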


2016 ◽ Vol 232 ◽ pp. 79-90 ◽ Author(s): Adina Aniculaesei, Daniel Arnsberger, Falk Howar, Andreas Rausch

Author(s): Mo Chen, Claire J. Tomlin

Autonomous systems are becoming pervasive in everyday life, and many of these systems are complex and safety-critical. Formal verification is important for providing performance and safety guarantees for these systems. In particular, Hamilton–Jacobi (HJ) reachability is a formal verification tool for nonlinear and hybrid systems; however, it is computationally intractable for complex, high-dimensional systems, and computational burden is a challenge for formal verification in general. In this review, we begin by briefly presenting background on reachability analysis with an emphasis on the HJ formulation. We then present recent work showing how high-dimensional reachability verification can be made more tractable by focusing on two areas of development: system decomposition for general nonlinear systems, and traffic protocols for unmanned airspace management. By tackling the curse of dimensionality, these developments are making tractable verification of practical systems a reality, paving the way for more pervasive and safer automation.
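
As a rough illustration of the HJ formulation only (not the level-set or decomposition solvers discussed in the review), the sketch below approximates a backward reachable tube for a toy double integrator with a semi-Lagrangian dynamic-programming iteration on a coarse grid. The system, target set, grid resolution, and step sizes are all hypothetical choices made for the example.

```python
# Minimal sketch, assuming a toy double integrator (p_dot = v, v_dot = u, |u| <= 1)
# and a semi-Lagrangian approximation of the HJ value function on a grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

p = np.linspace(-2.0, 2.0, 101)                   # position grid
v = np.linspace(-2.0, 2.0, 101)                   # velocity grid
P, V = np.meshgrid(p, v, indexing="ij")

# l(x) <= 0 inside the target set (a small disk around the origin)
l = np.sqrt(P**2 + V**2) - 0.25

dt = 0.05
controls = [-1.0, 0.0, 1.0]                       # discretized input set
Vk = l.copy()                                     # value function, initialized to l

for _ in range(400):
    interp = RegularGridInterpolator((p, v), Vk, bounds_error=False,
                                     fill_value=float(l.max()))
    best = np.full_like(Vk, np.inf)
    for u in controls:                            # one explicit Euler step per control
        pts = np.stack([(P + dt * V).ravel(), (V + dt * u).ravel()], axis=1)
        best = np.minimum(best, interp(pts).reshape(Vk.shape))
    Vnew = np.minimum(l, best)                    # tube: stop once the target is reached
    diff = float(np.max(np.abs(Vnew - Vk)))
    Vk = Vnew
    if diff < 1e-4:
        break

brt = Vk <= 0                                     # approximate backward reachable tube
print(f"Fraction of the grid that can reach the target: {brt.mean():.2f}")
```

Even this two-dimensional toy requires sweeping a full grid for every control sample at every iteration; since grid size grows exponentially with state dimension, this is exactly the curse of dimensionality that the decomposition techniques surveyed in the review aim to mitigate.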

