Multi-Kernel Online Reinforcement Learning for Path Tracking Control of Intelligent Vehicles

Author(s): Jiahang Liu, Zhenhua Huang, Xin Xu, Xinglong Zhang, Shiliang Sun, ...
2020, Vol 10 (24), pp. 9100

Author(s): Chenxu Li, Haobin Jiang, Shidian Ma, Shaokang Jiang, Yue Li

As a key technology for intelligent vehicles, automatic parking is an increasingly popular research area. Automatic parking technology enables safe and quick parking operations without a driver, improving driving comfort while greatly reducing the probability of parking accidents. An automatic parking path planning and tracking control method is proposed in this paper to resolve the following issues in existing automatic parking systems: a low degree of automation in vehicle control; a lack of conformity between segmented path planning and real vehicle motion models; and low parking success rates due to poor path tracking. To this end, this paper innovatively proposes a preview correction that can be applied to parking path planning: the preview algorithm detects curvature outliers in the parking path and corrects them in advance to optimize the parking path. Meanwhile, a dual sliding mode variable structure control algorithm is used to formulate path tracking control strategies that improve the path tracking control effect and the degree of vehicle control automation. Based on the above algorithms, an automatic parking system was developed and a real-vehicle test was completed, exploring a roadmap for highly intelligent automatic parking technology. This paper thus provides solutions for the two key aspects of an automatic parking system, i.e., parking path planning and path tracking control.
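The preview step above scans the planned path for curvature outliers before the vehicle reaches them. A minimal sketch of such a check is given below; the Menger-curvature formula over waypoint triples and the feasibility threshold k_max = 1/R_min are illustrative assumptions, not the paper's exact algorithm.

```python
import math

def discrete_curvature(p0, p1, p2):
    # Menger curvature of three consecutive waypoints:
    # k = 4 * triangle_area / (|p0 p1| * |p1 p2| * |p0 p2|)
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    area = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p0, p2)
    if a * b * c == 0.0:
        return 0.0  # degenerate triple (repeated waypoints)
    return 4.0 * area / (a * b * c)

def preview_outliers(path, k_max):
    """Flag waypoint indices whose local curvature exceeds the vehicle's
    feasible maximum k_max = 1 / R_min (minimum turning radius)."""
    flagged = []
    for i in range(1, len(path) - 1):
        if discrete_curvature(path[i - 1], path[i], path[i + 1]) > k_max:
            flagged.append(i)
    return flagged
```

The planner would then re-fit the segments around the flagged indices so the corrected path stays within the steering limits.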


Symmetry, 2021, Vol 14 (1), pp. 31
Author(s): Jichang Ma, Hui Xie, Kang Song, Hao Liu

The path tracking control system is a crucial component of autonomous vehicles; it is challenging to realize accurate tracking control across a wide range of uncertain situations and dynamic environments, particularly when such control must perform as well as, or better than, human drivers. While many methods provide state-of-the-art tracking performance, they tend to rely on constant PID control parameters, calibrated by human experience, to improve tracking accuracy. A detailed analysis shows that such PID controllers reduce the lateral error inefficiently under various conditions, such as complex trajectories and variable speed. In addition, intelligent driving vehicles are highly non-linear objects, and high-fidelity models are unavailable in most autonomous systems. As for model-based controllers (MPC or LQR), the complex modeling process may increase the computational burden. With that in mind, a self-optimizing path tracking controller structure, based on reinforcement learning, is proposed. For the lateral control of the vehicle, a steering method based on the fusion of reinforcement learning and a traditional PID controller is designed to adapt to various tracking scenarios. According to the pre-defined path geometry and the real-time status of the vehicle, the interactive learning mechanism, based on an RL framework (actor–critic, a symmetric network structure), realizes online optimization of the PID control parameters in order to better handle the tracking error under complex trajectories and dynamic changes of vehicle model parameters. Adaptive performance under velocity changes was also considered in the tracking process. The proposed control approach was tested in different path tracking scenarios; both driving-simulator platforms and on-site vehicle experiments verified the effectiveness of the proposed self-optimizing controller.
The results show that the approach can adaptively change the PID weights to maintain the tracking error (simulation: within ±0.071 m; real vehicle: within ±0.272 m) and the steering wheel vibration standard deviations (simulation: within ±0.04°; real vehicle: within ±80.69°); additionally, it adapts to high-speed simulation scenarios (maximum speed above 100 km/h and average speed through curves of 63–76 km/h).
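The fusion idea above, an actor network supplying PID gains online, can be sketched as follows. The `policy_stub` function and its gain schedule are hypothetical placeholders for a trained actor network; only the PID structure itself follows the abstract.

```python
class AdaptivePID:
    """Lateral PID controller whose gains are overwritten each control
    step by an external policy (standing in for the actor network)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, lateral_error, dt):
        # Standard discrete PID law on the lateral (cross-track) error.
        self.integral += lateral_error * dt
        derivative = (lateral_error - self.prev_error) / dt
        self.prev_error = lateral_error
        return (self.kp * lateral_error
                + self.ki * self.integral
                + self.kd * derivative)

def policy_stub(state):
    # Placeholder for the actor: maps (error, speed, curvature) to gains.
    # A trained network would output these; this schedule is illustrative.
    _error, _speed, curvature = state
    kp = 0.5 + 2.0 * abs(curvature)   # stiffer proportional gain on tight curves
    return kp, 0.01, 0.1
```

In the actual framework, the critic would score the resulting tracking error and the actor would be updated online, closing the self-optimizing loop.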


2021
Author(s): Haiqing Li, Yongfu Li, Taixiong Zheng, Jiufei Luo, Zonghuan Guo

The path tracking control strategy for emergency collision avoidance is a research hotspot for intelligent vehicles, and active four-wheel steering and integrated chassis control such as differential braking are the development trend for intelligent-vehicle control systems. Considering both driving performance and path tracking performance, an active obstacle avoidance controller integrating four-wheel steering (4WS), active rear steering (ARS) and differential braking control, based on adaptive model predictive control (AMPC), is proposed. The designed active obstacle avoidance control architecture is composed of two parts: a supervisor and an MPC controller. The supervisor is responsible for selecting the appropriate control mode based on driving vehicle information and thresholds on lateral and roll stability. In addition, a non-linear prediction model is employed to obtain the future states of the driving vehicle. The AMPC then calculates the desired steering angle and differential braking torque when the driving stability indexes exceed the safety threshold. Finally, the proposed collision avoidance path tracking control strategy was simulated under emergency conditions via CarSim-Simulink co-simulation. The results show that the AMPC-based controller can track the obstacle avoidance path and maintains good driving stability in emergencies.
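The supervisor layer described above maps the vehicle's stability state to a control mode. A minimal sketch of such mode selection is shown below; the threshold values, mode names, and the specific decision rule are illustrative assumptions, not the paper's calibrated logic.

```python
def select_control_mode(lateral_accel, roll_angle,
                        ay_threshold=4.0, roll_threshold=3.0):
    """Supervisor: choose a control mode from the vehicle's lateral
    acceleration (m/s^2) and roll angle (deg). Thresholds are
    placeholders, not the paper's values."""
    if lateral_accel <= ay_threshold and roll_angle <= roll_threshold:
        return "4WS"            # nominal: four-wheel steering only
    if lateral_accel > ay_threshold and roll_angle <= roll_threshold:
        return "4WS+ARS"        # add active rear steering
    return "4WS+ARS+DB"         # add differential braking near roll limit
```

The selected mode then determines which actuators the MPC layer is allowed to command when computing the desired steering angle and braking torque.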


2019, Vol 7 (12), pp. 443
Author(s): Yushan Sun, Chenming Zhang, Guocheng Zhang, Hao Xu, Xiangrui Ran

In this paper, the three-dimensional (3D) path tracking control of an autonomous underwater vehicle (AUV) under the action of sea currents was studied. A novel reward function was proposed to improve learning ability, and a disturbance observer was developed to observe the disturbance caused by currents. Based on existing models, the dynamic and kinematic models of the AUV were established. Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm, was employed to design the path tracking controller. Compared with a backstepping sliding mode controller, the proposed controller showed excellent performance, at least in the particular study developed in this article. The improved reward function and the disturbance observer were also found to improve path tracking performance.
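A dense reward for a DDPG path-tracking agent typically trades off position error, heading error, and actuator chattering. The sketch below illustrates this shaping idea only; the weights, the proximity bonus, and the function itself are assumptions, not the paper's novel reward.

```python
def tracking_reward(cross_track_err, heading_err, action_delta,
                    w_e=1.0, w_psi=0.5, w_u=0.1):
    """Illustrative shaped reward for 3D path tracking (not the paper's
    exact formulation): penalize cross-track error (m), heading error
    (rad), and the change in the control action between steps, and add
    a small bonus when the vehicle is close to the path."""
    cost = (w_e * abs(cross_track_err)
            + w_psi * abs(heading_err)
            + w_u * abs(action_delta))
    bonus = 1.0 if abs(cross_track_err) < 0.1 else 0.0
    return -cost + bonus
```

Penalizing `action_delta` discourages the chattering that sliding-mode baselines are prone to, while the proximity bonus gives the agent a clear signal once it converges onto the path.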

