Optimal tracking control for completely unknown nonlinear discrete-time Markov jump systems using data-based reinforcement learning method
2016, Vol 194, pp. 176-182
Author(s): He Jiang, Huaguang Zhang, Yanhong Luo, Junyi Wang
Automatica, 2014, Vol 50 (4), pp. 1167-1175
Author(s): Bahare Kiumarsi, Frank L. Lewis, Hamidreza Modares, Ali Karimpour, Mohammad-Bagher Naghibi-Sistani

2020, Vol 53 (5-6), pp. 778-787
Author(s): Jingren Zhang, Qingfeng Wang, Tao Wang

In this article, a novel continuous-time optimal tracking controller is proposed for single-input single-output linear systems with completely unknown dynamics. Unlike existing solutions to the optimal tracking control problem, the proposed controller introduces an integral compensation to reduce the steady-state error and regulates the feedforward part simultaneously with the feedback part. An augmented system composed of the integral compensation, error dynamics, and desired trajectory is established to formulate the optimal tracking control problem. The input energy and tracking error of the optimal controller are minimized according to an infinite-horizon objective function. Through the application of reinforcement learning techniques, the proposed controller requires no prior knowledge of the system drift or input dynamics. The integral reinforcement learning method is employed to approximate the Q-function and update the critic network online, while the actor network is updated with a deterministic learning method. Lyapunov stability is proved under the persistence-of-excitation condition. A case study on a hydraulic loading system demonstrates the effectiveness of the proposed controller in both simulation and experiment.
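The core idea of the abstract above can be illustrated with a minimal sketch. The following is not the article's continuous-time integral reinforcement learning algorithm; it is a simplified discrete-time Q-learning analogue of the same principle: the controller learns a quadratic Q-function from data via least squares and improves its feedback gain greedily, never using the plant matrices in the learning update. The plant parameters `a`, `b` and the weights `Qc`, `Rc` are hypothetical and exist only to generate data.

```python
import numpy as np

# Hypothetical SISO plant x+ = a*x + b*u; the learner never reads (a, b),
# it only observes (x, u, stage cost, x+).
a, b = 0.9, 0.5            # unknown to the learner
Qc, Rc = 1.0, 1.0          # state and input weights in the objective

rng = np.random.default_rng(0)
K = 0.0                    # initial stabilizing gain (|a| < 1, so K = 0 works)
for _ in range(20):        # policy iteration
    Phi, y = [], []
    x = 1.0
    for _ in range(200):
        # Exploration noise supplies the persistence-of-excitation condition.
        u = -K * x + 0.1 * rng.standard_normal()
        xn = a * x + b * u
        un = -K * xn                        # next action under current policy
        # Quadratic basis for Q(x, u) = Hxx*x^2 + 2*Hxu*x*u + Huu*u^2
        z = np.array([x * x, 2 * x * u, u * u])
        zn = np.array([xn * xn, 2 * xn * un, un * un])
        Phi.append(z - zn)                  # Bellman-equation regressor
        y.append(Qc * x * x + Rc * u * u)   # observed stage cost
        x = xn
    # Policy evaluation: least-squares fit of the Q-function parameters
    Hxx, Hxu, Huu = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
    K = Hxu / Huu                           # greedy policy improvement

print(K)  # converges toward the LQR gain for (a, b, Qc, Rc)
```

Because the Bellman equation of the current policy is linear in the three Q-function parameters, each policy-evaluation step reduces to an ordinary least-squares problem, which is the discrete-time counterpart of fitting the critic network from measured data.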
