Sufficient conditions for optimal control with state and control constraints

1971 ◽  
Vol 7 (2) ◽  
pp. 118-135 ◽  
Author(s):  
Harold Stalford


2006 ◽
Vol 15 (03) ◽  
pp. 373-387
Author(s):  
M. VASSILAKI ◽  
G. BITSORIS

In this paper the regulation problem of linear discrete-time systems with uncertain parameters under state and control constraints is studied. In the first part of the paper, two theorems giving necessary and sufficient conditions for the existence of a solution to this problem are presented. Because the proof of these theorems is constructive, the results can be used to develop techniques for deriving a control law that transfers to the origin any state belonging to a given set of initial states while respecting the state and control constraints.
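As a rough illustration of the kind of admissibility check such constructive conditions lead to (not the authors' algorithm), the sketch below tests whether a candidate linear feedback u = Kx keeps a polytopic state set invariant and the control within bounds for every vertex of a polytopic model uncertainty; by linearity and convexity, checking the vertices suffices. All names and the toy system are assumptions made for the example.

```python
# Hypothetical sketch: vertex-based admissibility check for u = K x under
# polytopic uncertainty x+ = A x + B u, state set S = {x : G x <= w}.
import numpy as np

def is_admissible(K, state_vertices, A_vertices, B, G, w, u_max):
    """True if every vertex of S maps back into S under every uncertain A,
    with the control satisfying |u| <= u_max (linear constraints, so the
    vertex check is sufficient)."""
    for x in state_vertices:
        u = K @ x
        if np.any(np.abs(u) > u_max + 1e-9):      # control constraint
            return False
        for A in A_vertices:
            x_next = A @ x + B @ u
            if np.any(G @ x_next > w + 1e-9):     # state constraint / invariance
                return False
    return True

# Toy usage: a 2-state system with an uncertain second diagonal entry.
A_vertices = [np.array([[0.9, 0.1], [0.0, 0.9]]),
              np.array([[0.9, 0.1], [0.0, 1.0]])]
B = np.array([[0.0], [0.1]])
K = np.array([[0.0, -1.0]])
G = np.vstack([np.eye(2), -np.eye(2)])            # box |x_i| <= 1
w = np.ones(4)
state_vertices = [np.array([sx, sv]) for sx in (-1, 1) for sv in (-1, 1)]
print(is_admissible(K, state_vertices, A_vertices, B, G, w, u_max=1.0))  # True
```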


2021 ◽  
Author(s):  
Xinglong Zhang ◽  
Yaoqian Peng ◽  
Biao Luo ◽  
Wei Pan ◽  
Xin Xu ◽  
...  

Recently, barrier function-based safe reinforcement learning (RL) with the actor-critic structure for continuous control tasks has received increasing attention. It remains challenging, however, to learn a near-optimal control policy with safety and convergence guarantees, and few works have addressed safe RL algorithm design under time-varying safety constraints. This paper proposes a model-based safe RL algorithm for optimal control of nonlinear systems with time-varying state and control constraints. In the proposed approach, we construct a novel barrier-based control policy structure that guarantees control safety. A multi-step policy evaluation mechanism is proposed to predict the policy's safety risk under time-varying safety constraints and to guide the policy to update safely. Theoretical results on stability and robustness are proven, and the convergence of the actor-critic learning algorithm is analyzed. The proposed algorithm outperforms several state-of-the-art RL algorithms in the simulated Safety Gym environment. Furthermore, the approach is applied to the integrated path following and collision avoidance problem for two real-world intelligent vehicles: a differential-drive vehicle and an Ackermann-drive one are used to verify the offline deployment performance and the online learning performance, respectively. Our approach shows an impressive sim-to-real transfer capability and satisfactory online control performance in the experiments.
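To give a flavor of a barrier-shaped policy (this is a loose illustration, not the paper's policy structure or guarantees), the sketch below augments a nominal actor output with a log-barrier-style correction for a time-varying constraint h(x, t) >= 0 and clips the result to the control bounds. The functions h, grad_h_u, and the toy system are hypothetical.

```python
# Illustrative sketch only: nominal action plus a barrier-gradient correction
# that grows as the state approaches the boundary of {h(x, t) >= 0}.
import numpy as np

def barrier_policy(x, t, nominal_action, h, grad_h_u, u_min, u_max, eta=0.1, eps=1e-3):
    """nominal_action: action proposed by the actor network (assumed given).
    h(x, t): constraint value, safe when positive.
    grad_h_u(x, t): model-based sensitivity of h to the control."""
    u = np.asarray(nominal_action, dtype=float)
    margin = max(h(x, t), eps)                 # avoid blow-up at the boundary
    u = u + eta * grad_h_u(x, t) / margin      # push toward increasing h
    return np.clip(u, u_min, u_max)            # respect control constraints

# Toy usage: 1-D integrator x+ = x + 0.1 u, time-varying bound x <= 1 - 0.1 t.
h = lambda x, t: (1.0 - 0.1 * t) - x
grad_h_u = lambda x, t: np.array([-0.1])       # dh/du through x+ = x + 0.1 u
print(barrier_policy(x=0.95, t=0.0, nominal_action=[0.5],
                     h=h, grad_h_u=grad_h_u, u_min=-1.0, u_max=1.0))
```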


2018 ◽  
Vol 24 (3) ◽  
pp. 1181-1206 ◽  
Author(s):  
Susanne C. Brenner ◽  
Thirupathi Gudi ◽  
Kamana Porwal ◽  
Li-yeng Sung

We design and analyze a Morley finite element method for an elliptic distributed optimal control problem with pointwise state and control constraints on convex polygonal domains. It is based on the formulation of the optimal control problem as a fourth order variational inequality. Numerical results that illustrate the performance of the method are also presented.
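For orientation, a common way to reach such a fourth-order formulation (our notation, under standard assumptions; not necessarily the paper's exact setting) is to eliminate the control u = -Δy from the state equation and carry the pointwise bounds on the state and control over to y and -Δy:

```latex
\[
\min_{y \in K}\; \tfrac12 \|y - y_d\|_{L^2(\Omega)}^2
  + \tfrac{\beta}{2} \|\Delta y\|_{L^2(\Omega)}^2,
\qquad
K = \bigl\{\, y \in H^2(\Omega)\cap H^1_0(\Omega) :\;
  \psi_1 \le y \le \psi_2,\;
  \phi_1 \le -\Delta y \le \phi_2 \ \text{in } \Omega \,\bigr\},
\]
\[
\text{optimality: find } \bar y \in K \text{ with }\;
(\bar y,\, y - \bar y)_{L^2}
  + \beta\,(\Delta \bar y,\, \Delta(y - \bar y))_{L^2}
  \;\ge\; (y_d,\, y - \bar y)_{L^2}
\quad \forall\, y \in K,
\]
```

a fourth-order obstacle-type variational inequality that a nonconforming Morley discretization can treat directly.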

