Path Tracking Control Based on Deep Reinforcement Learning in Autonomous Driving

Author(s):  
Le Jiang ◽  
Yafei Wang ◽  
Lin Wang ◽  
Jingkai Wu
Symmetry ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 31
Author(s):  
Jichang Ma ◽  
Hui Xie ◽  
Kang Song ◽  
Hao Liu

The path tracking control system is a crucial component of autonomous vehicles; it is challenging to realize accurate tracking control across a wide range of uncertain situations and dynamic environments, particularly when such control must perform as well as, or better than, human drivers. While many methods provide state-of-the-art tracking performance, they tend to rely on constant PID control parameters, calibrated by human experience, to improve tracking accuracy. A detailed analysis shows that PID controllers reduce the lateral error inefficiently under varied conditions, such as complex trajectories and variable speed. In addition, intelligent driving vehicles are highly non-linear plants, and high-fidelity models are unavailable in most autonomous systems. As for model-based controllers (MPC or LQR), the complex modeling process may increase the computational burden. With that in mind, a self-optimizing path tracking controller structure based on reinforcement learning is proposed. For the lateral control of the vehicle, a steering method that fuses reinforcement learning with a traditional PID controller is designed to adapt to various tracking scenarios. According to the pre-defined path geometry and the real-time status of the vehicle, the interactive learning mechanism, based on an actor-critic RL framework (a symmetric network structure), realizes online optimization of the PID control parameters to better handle the tracking error under complex trajectories and dynamic changes of the vehicle model parameters. Adaptation to velocity changes during tracking was also considered. The proposed control approach was tested in different path tracking scenarios; both driving simulator platforms and on-site vehicle experiments verified the effectiveness of the proposed self-optimizing controller.
The results show that the approach can adaptively change the PID weights to maintain the tracking error within ±0.071 m in simulation and within ±0.272 m on a real vehicle, with steering wheel vibration standard deviations within 0.04° (simulation) and 80.69° (real vehicle); additionally, it can adapt to high-speed simulation scenarios (maximum speed above 100 km/h and average speed through curves of 63–76 km/h).
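The fusion described in the abstract, an actor-critic agent modulating conventional PID gains online, can be sketched as follows. This is a hypothetical, minimal illustration, not the authors' implementation: the trained actor network is stubbed as a fixed linear map (`actor_w`) from the tracking state to gain multipliers, and all gain values and the toy state layout are assumptions.

```python
import numpy as np

class AdaptivePID:
    """PID lateral controller whose gains are rescaled online by a policy.

    Sketch only: the 'actor' is a fixed linear map from the tracking state
    (lateral error, error rate) to gain multipliers, standing in for a
    trained actor-critic network.
    """
    def __init__(self, kp=1.0, ki=0.1, kd=0.5, dt=0.05):
        self.base = np.array([kp, ki, kd])  # hand-tuned baseline gains
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0
        # Stand-in for the trained actor: state -> gain-multiplier logits.
        self.actor_w = np.array([[0.2, 0.0], [0.0, 0.05], [0.1, 0.1]])

    def gains(self, state):
        # Multipliers lie in (0, 2), centred at 1, so the learned term only
        # modulates the baseline PID rather than replacing it.
        return self.base * (1.0 + np.tanh(self.actor_w @ state))

    def step(self, lateral_err):
        # Standard PID law, but with gains re-evaluated every step.
        self.integral += lateral_err * self.dt
        deriv = (lateral_err - self.prev_err) / self.dt
        self.prev_err = lateral_err
        kp, ki, kd = self.gains(np.array([lateral_err, deriv]))
        return kp * lateral_err + ki * self.integral + kd * deriv

ctrl = AdaptivePID()
print(ctrl.gains(np.array([0.0, 0.0])))  # zero state: exactly the baseline gains
print(ctrl.gains(np.array([0.5, 1.0])))  # under tracking error: gains scaled up
```

The design choice worth noting is the multiplicative structure: the RL output adjusts a safe baseline, so a poorly trained (or still-learning) policy degrades toward plain PID instead of producing arbitrary steering commands.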


2020 ◽  
Vol 69 (10) ◽  
pp. 10581-10595
Author(s):  
Yunxiao Shan ◽  
Boli Zheng ◽  
Longsheng Chen ◽  
Long Chen ◽  
De Chen

2021 ◽  
Vol 54 (10) ◽  
pp. 443-448
Author(s):  
Jiangfeng Nan ◽  
Bingxu Shang ◽  
Weiwen Deng ◽  
Bingtao Ren ◽  
Yang Liu

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Qing Gu ◽  
Guoxing Bai ◽  
Yu Meng ◽  
Guodong Wang ◽  
Jiazang Zhang ◽  
...  

This paper proposes a path tracking control algorithm for tracked mobile robots based on preview linear Model Predictive Control (MPC), used to achieve autonomous driving in unstructured environments under emergency rescue scenarios. Realizing communication and control of rescue equipment through 6G and edge-cloud cooperation is a future trend. In this framework, linear MPC (LMPC) is well suited to the path tracking control of rescue robots owing to its low computational demand and good real-time performance. However, in such scenarios the driving environment is complex and the path curvature changes greatly; since LMPC can only introduce linearized feedforward information, its tracking accuracy on paths with large curvature changes is low. To overcome this issue, a preview linear MPC is designed in this paper by combining LMPC with the idea of preview control. The controller is verified through MATLAB/Simulink simulation and prototype experiments. The results show that the proposed method improves tracking accuracy while ensuring real-time performance and achieves better tracking on paths with large curvature variation.
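The preview idea, reading the reference curvature some distance ahead of the robot instead of at its current position, can be illustrated without the full MPC machinery. The sketch below is an assumption-laden simplification: it pairs previewed-curvature feedforward with plain proportional feedback on lateral error (the paper embeds the preview inside an LMPC), and the path-point spacing, preview time, and gains are all hypothetical.

```python
import math

def preview_steering(pose, path, v, wheelbase=1.2, t_preview=0.6, k_e=0.8):
    """Preview-style steering: curvature feedforward taken ahead of the
    vehicle plus linear feedback on the current lateral error.

    pose: (x, y, yaw); path: list of (x, y, curvature) points,
    assumed here to be uniformly spaced 0.1 m apart.
    """
    x, y, yaw = pose
    # Index of the path point closest to the vehicle.
    i_near = min(range(len(path)),
                 key=lambda i: (path[i][0] - x) ** 2 + (path[i][1] - y) ** 2)
    # Preview point: roughly v * t_preview metres further along the path.
    i_prev = min(i_near + int(v * t_preview / 0.1), len(path) - 1)
    kappa = path[i_prev][2]  # previewed curvature, not the local one
    px, py = path[i_near][0], path[i_near][1]
    # Signed lateral error in the vehicle frame (left of heading positive).
    e_lat = -(px - x) * math.sin(yaw) + (py - y) * math.cos(yaw)
    # Kinematic curvature feedforward + proportional error feedback.
    return math.atan(wheelbase * kappa) + k_e * e_lat
```

The point of the preview term is that the steering command starts turning into a curve before the tracking error appears, which is exactly the information a purely linearized feedforward misses on paths with rapidly changing curvature.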


2019 ◽  
Vol 7 (12) ◽  
pp. 443 ◽  
Author(s):  
Yushan Sun ◽  
Chenming Zhang ◽  
Guocheng Zhang ◽  
Hao Xu ◽  
Xiangrui Ran

In this paper, three-dimensional (3D) path tracking control of an autonomous underwater vehicle (AUV) under the action of sea currents was studied. A novel reward function was proposed to improve learning ability, and a disturbance observer was developed to estimate the disturbance caused by currents. Based on existing models, the dynamic and kinematic models of the AUV were established. Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm, was employed to design the path tracking controller. Compared with a backstepping sliding mode controller, the proposed controller showed excellent performance, at least in the particular study developed in this article. The improved reward function and the disturbance observer were also found to improve path tracking performance.
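The abstract does not give the paper's reward function, but the kind of shaping it describes can be sketched generically. The version below is entirely hypothetical: it combines an exponential bonus for small cross-track error, a heading-alignment term, and a smoothness penalty on actuator changes, with made-up weights `w_e`, `w_h`, `w_a`.

```python
import math

def tracking_reward(cross_err, heading_err, action_delta,
                    w_e=1.0, w_h=0.3, w_a=0.05):
    """Shaped reward for path tracking (hypothetical, not the paper's).

    cross_err:    distance from the AUV to the reference path
    heading_err:  angle between AUV heading and path tangent
    action_delta: change in actuator command since the last step
    """
    r_track = math.exp(-w_e * abs(cross_err))  # 1.0 exactly on the path
    r_head = w_h * math.cos(heading_err)       # maximal when aligned
    r_smooth = -w_a * abs(action_delta)        # discourage chattering
    return r_track + r_head + r_smooth
```

A dense, smoothly decaying reward of this shape gives DDPG a usable gradient even far from the path, which is one common reason shaped rewards speed up learning relative to sparse "on path / off path" signals.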

