New methods for mode-independent robust control of Markov jump linear systems
2016 ◽ Vol 90 ◽ pp. 38-44
Author(s): Marcos G. Todorov ◽ Marcelo D. Fragoso

2020 ◽ Vol 42 (15) ◽ pp. 3043-3051
Author(s): Yaogang Chen ◽ Jiwei Wen ◽ Xiaoli Luan ◽ Fei Liu

In this paper, an online temporal-difference (TD) learning approach is proposed to solve the robust control problem for discrete-time Markov jump linear systems (MJLSs) subject to completely unknown transition probabilities (TPs). The TD learning algorithm consists of two parts: policy evaluation and policy improvement. In the policy-evaluation part, the value functions are updated by observing the mode-jumping trajectories, rather than by solving a set of coupled algebraic Riccati equations, thereby approximating the TP-dependent matrices. In the policy-improvement part, new robust controllers are computed once the value functions of the previous part have converged. Moreover, convergence of the value functions is proved under a feasible initial control policy. Finally, two examples illustrate the effectiveness of the proposed approach through comparison with existing results.
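The two-part scheme described in the abstract can be sketched as follows. This is a minimal, illustrative simplification, not the paper's TD recursion: the transition probabilities are estimated by frequency counts from the observed mode-jumping trajectory, and then a standard evaluation/improvement loop is run with the estimate (a certainty-equivalence variant). The two-mode system matrices, trajectory length, and iteration counts are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode MJLS x_{k+1} = A[m] x_k + B[m] u_k (illustrative data,
# not taken from the paper); the true TP matrix is unknown to the learner.
A = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.7, -0.1], [0.1, 0.85]])]
B = [np.array([[1.0], [0.5]]), np.array([[0.3], [1.0]])]
Q, R = np.eye(2), np.eye(1)
P_true = np.array([[0.7, 0.3], [0.4, 0.6]])  # used only to simulate jumps

# Observe a mode-jumping trajectory and estimate the TPs by frequency counts.
modes = [0]
for _ in range(5000):
    modes.append(rng.choice(2, p=P_true[modes[-1]]))
counts = np.zeros((2, 2))
for i, j in zip(modes[:-1], modes[1:]):
    counts[i, j] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Policy iteration with the estimated TPs, from a feasible initial policy
# (here F = 0, which is stabilizing because both modes are open-loop stable).
F = [np.zeros((1, 2)) for _ in range(2)]
for _ in range(50):
    # Policy evaluation: fixed-point iteration on the coupled Lyapunov maps
    # S_i = Q + F_i' R F_i + (A_i + B_i F_i)' E_i(S) (A_i + B_i F_i),
    # where E_i(S) = sum_j p_ij S_j uses the learned weights p_ij.
    S = [np.eye(2), np.eye(2)]
    for _ in range(200):
        E = [sum(P_hat[i, j] * S[j] for j in range(2)) for i in range(2)]
        S = [Q + F[i].T @ R @ F[i]
             + (A[i] + B[i] @ F[i]).T @ E[i] @ (A[i] + B[i] @ F[i])
             for i in range(2)]
    # Policy improvement: Riccati-like gain update with the estimated coupling.
    E = [sum(P_hat[i, j] * S[j] for j in range(2)) for i in range(2)]
    F = [-np.linalg.solve(R + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
         for i in range(2)]
```

After the loop, `F` holds one state-feedback gain per mode and `S` the corresponding per-mode value-function matrices; the key point mirrored from the abstract is that no coupled Riccati equations with the true TPs are ever solved, only quantities built from the observed jump trajectory.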

