Path Following Predictive Control for Autonomous Vehicles Subject to Uncertain Tire-ground Adhesion and Varied Road Curvature

2019, Vol. 17(1), pp. 193-202
Author(s): Lu Yang, Ming Yue, Teng Ma

2020, Vol. 14(14), pp. 2092-2101
Author(s): Yixiao Liang, Yinong Li, Amir Khajepour, Ling Zheng

2021, Vol. 18(3), pp. 172988142110195
Author(s): Sorin Grigorescu, Cosmin Ginerica, Mihai Zaha, Gigel Macesanu, Bogdan Trasnea

In this article, we introduce a learning-based vision dynamics approach to nonlinear model predictive control (NMPC) for autonomous vehicles, coined LVD-NMPC. LVD-NMPC combines an a priori process model with a learned vision dynamics model used to calculate the dynamics of the driving scene, the controlled system's desired state trajectory, and the weighting gains of the quadratic cost function optimized by a constrained predictive controller. The vision system is defined as a deep neural network that estimates the dynamics of the image scene from historic sequences of sensory observations and vehicle states, integrated by an augmented memory component. The network is trained with deep Q-learning and, once trained, is also used to calculate the desired trajectory of the vehicle. We evaluate LVD-NMPC against a baseline dynamic window approach (DWA) path planner executed by a standard NMPC and against the PilotNet neural network. Performance is measured in our simulation environment GridSim, on a real-world 1:8 scale model car, on a full-size autonomous test vehicle, and on the nuScenes computer vision dataset.
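To make the control structure described above concrete, the following is a minimal sketch of an LVD-NMPC-style control step: a learned vision dynamics component supplies the desired state trajectory and the quadratic cost weights, which an a priori process model and a constrained predictive optimizer then use in receding-horizon fashion. Everything here is an assumption for illustration, not the paper's implementation: the kinematic bicycle model, the stubbed vision_dynamics function standing in for the trained deep network, the SciPy optimizer, and all names and parameter values are hypothetical.

# Minimal LVD-NMPC-style control step (illustrative sketch, not the paper's code).
# Assumed: kinematic bicycle process model, stubbed vision dynamics network,
# generic bound-constrained optimizer from SciPy.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, WHEELBASE = 0.1, 10, 2.7  # step [s], horizon steps, wheelbase [m]

def process_model(state, control):
    """A priori process model (kinematic bicycle): state = [x, y, yaw, v]."""
    x, y, yaw, v = state
    accel, steer = control
    return np.array([
        x + v * np.cos(yaw) * DT,
        y + v * np.sin(yaw) * DT,
        yaw + v / WHEELBASE * np.tan(steer) * DT,
        v + accel * DT,
    ])

def vision_dynamics(observation_history):
    """Stand-in for the learned deep network: maps a history of sensory
    observations and vehicle states to a desired state trajectory and
    the quadratic cost weights (hypothetical fixed values here)."""
    x_des = np.column_stack([
        np.linspace(0.0, 5.0, HORIZON),   # x references
        np.zeros(HORIZON),                # y references (lane center)
        np.zeros(HORIZON),                # yaw references
        np.full(HORIZON, 5.0),            # speed references [m/s]
    ])
    Q = np.diag([1.0, 10.0, 5.0, 1.0])    # state-error weights
    R = np.diag([0.1, 0.5])               # control-effort weights
    return x_des, Q, R

def nmpc_cost(u_flat, state, x_des, Q, R):
    """Quadratic cost accumulated over the prediction horizon."""
    controls = u_flat.reshape(HORIZON, 2)
    cost, s = 0.0, state
    for k in range(HORIZON):
        s = process_model(s, controls[k])
        err = s - x_des[k]
        cost += err @ Q @ err + controls[k] @ R @ controls[k]
    return cost

def lvd_nmpc_step(state, observation_history):
    """One control step: query the vision dynamics model, solve the
    constrained predictive control problem, apply the first control."""
    x_des, Q, R = vision_dynamics(observation_history)
    bounds = [(-3.0, 3.0), (-0.5, 0.5)] * HORIZON  # accel / steering limits
    res = minimize(nmpc_cost, np.zeros(2 * HORIZON),
                   args=(state, x_des, Q, R), bounds=bounds, method="L-BFGS-B")
    return res.x[:2]  # receding horizon: only the first control is applied

if __name__ == "__main__":
    state = np.array([0.0, 0.5, 0.0, 3.0])  # start offset from lane center
    accel, steer = lvd_nmpc_step(state, observation_history=None)
    print(f"accel={accel:.2f} m/s^2, steer={steer:.3f} rad")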

