Assuring the Safety of End-to-End Learning-Based Autonomous Driving through Runtime Monitoring

Author(s):  
Jörg Grieser,
Meng Zhang,
Tim Warnecke,
Andreas Rausch

Author(s):
Baiyu Peng,
Qi Sun,
Shengbo Eben Li,
Dongsuk Kum,
Yuming Yin,
...  

Abstract Recent years have seen the rapid development of autonomous driving systems, which are typically designed with either a hierarchical architecture or an end-to-end architecture. The hierarchical architecture is complicated and hard to design, while the end-to-end architecture is more promising due to its simple structure. This paper puts forward an end-to-end autonomous driving method based on the deep reinforcement learning algorithm Dueling Double Deep Q-Network, which enables the vehicle to learn end-to-end driving by itself. First, the paper proposes an architecture for the end-to-end lane-keeping task; unlike the traditional image-only state space, the presented state space is composed of both camera images and vehicle motion information. Second, a corresponding dueling neural network structure is introduced, which reduces variance and improves sampling efficiency. Third, the proposed method is applied in The Open Racing Car Simulator (TORCS) to demonstrate its performance, where it surpasses human drivers. Finally, the saliency map of the neural network is visualized, indicating that the trained network drives by observing the lane lines. A video of the presented work is available online at https://youtu.be/76ciJmIHMD8 or https://v.youku.com/v_show/id_XNDM4ODc0MTM4NA==.html.
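The abstract above names the key ingredients: a state combining camera images with vehicle motion information, and a dueling network head. As a hedged illustration only, here is a minimal PyTorch sketch of such a dueling Q-network; it is not the authors' code, and the layer sizes, the 84x84 input resolution, the 3-dimensional motion vector, and the 9 discrete actions are all assumptions made for the example.

```python
# Minimal sketch of a dueling Q-network over (image, motion) states.
# NOT the paper's implementation; shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, n_actions=9, motion_dim=3):
        super().__init__()
        # Convolutional encoder for the camera image (assumed 3x84x84).
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        conv_out = 64 * 7 * 7  # flattened feature size for an 84x84 input
        # Fuse image features with the motion vector (e.g. speed, yaw rate).
        self.fuse = nn.Sequential(nn.Linear(conv_out + motion_dim, 512), nn.ReLU())
        # Dueling heads: scalar state value V(s) and per-action advantage A(s, a).
        self.value = nn.Linear(512, 1)
        self.advantage = nn.Linear(512, n_actions)

    def forward(self, image, motion):
        h = self.fuse(torch.cat([self.conv(image), motion], dim=1))
        v, a = self.value(h), self.advantage(h)
        # Standard dueling combination: Q = V + A - mean(A),
        # which keeps the value and advantage streams identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

# Example forward pass: one image plus one motion vector -> 9 Q-values.
q = DuelingQNet()(torch.randn(1, 3, 84, 84), torch.randn(1, 3))
```

In the Double DQN part of the algorithm, the online network would select the greedy action while a periodically synchronized target network evaluates it, which counteracts Q-value overestimation; the dueling split above is what the abstract credits with reducing variance and improving sampling efficiency.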


Author(s):  
Wenjie Song,
Shixian Liu,
Yujun Li,
Yi Yang,
Changle Xiang

Author(s):  
Yunpeng Pan,
Ching-An Cheng,
Kamil Saigol,
Keuntaek Lee,
Xinyan Yan,
...  

2021
Author(s):  
Keishi Ishihara,
Anssi Kanervisto,
Jun Miura,
Ville Hautamäki

Author(s):  
Nicholas Merrill,
Azim Eskandarian

Abstract The traditional approaches to autonomous, vision-based vehicle systems are limited by their dependence on robust algorithms, sensor fusion, detailed scene construction, and high-quality maps. End-to-end models offer a means of circumventing these limitations by directly mapping an image input to a steering angle output for lateral control. Existing end-to-end models, however, either fail to capture temporally dynamic information or rely on computationally expensive Recurrent Neural Networks (RNNs), which are prone to error accumulation via feedback. This paper proposes a Multi-Task Learning (MTL) network architecture that uses available dynamic sensor data as targets for auxiliary tasks. This method improves steering angle prediction by facilitating the extraction of temporal dependencies from sequentially stacked image inputs. Evaluations on the publicly available Comma.ai dataset show a 28.6% improvement in steering angle prediction over existing end-to-end methods.
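As a rough illustration of the multi-task idea described above, the sketch below (not the authors' model) stacks frames along the channel axis so a plain CNN can pick up temporal cues without recurrence, and attaches an auxiliary regression head for dynamic sensor targets alongside the steering head; the frame count, head sizes, auxiliary targets, and loss weight are assumptions.

```python
# Minimal multi-task learning sketch: shared encoder over stacked frames,
# a steering-angle head, and an auxiliary head for dynamic sensor signals.
# NOT the paper's architecture; all sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class MTLSteeringNet(nn.Module):
    def __init__(self, n_frames=4, aux_dim=2):
        super().__init__()
        # Sequentially stacked grayscale frames share the channel axis, so
        # ordinary 2D convolutions can extract temporal dependencies without an RNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(n_frames, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),  # input size inferred on first call
        )
        self.steering_head = nn.Linear(100, 1)   # main task: steering angle
        self.aux_head = nn.Linear(100, aux_dim)  # auxiliary task: e.g. speed, acceleration

    def forward(self, frames):
        h = self.encoder(frames)
        return self.steering_head(h), self.aux_head(h)

def mtl_loss(steer_pred, steer_true, aux_pred, aux_true, aux_weight=0.5):
    # Joint objective: the auxiliary regression regularizes the shared encoder
    # toward representations that capture the scene's temporal dynamics.
    mse = nn.functional.mse_loss
    return mse(steer_pred, steer_true) + aux_weight * mse(aux_pred, aux_true)
```

At inference time only the steering output would be read; the auxiliary head matters only during training, so the temporal information it injects into the shared encoder comes at no extra runtime cost.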

