Multi-modal Sensor Fusion-Based Deep Neural Network for End-to-end Autonomous Driving with Scene Understanding

2020, pp. 1-1
Author(s): Zhiyu Huang, Chen Lv, Yang Xing, Jingda Wu
Sensors, 2019, Vol 19 (9), pp. 2064
Author(s): Jelena Kocić, Nenad Jovičić, Vujo Drndarević

In this paper, a solution for an end-to-end deep neural network for autonomous driving is presented. The main objective of our work was to achieve autonomous driving with a lightweight deep neural network suitable for deployment on embedded automotive platforms. Several end-to-end deep neural networks have been used for autonomous driving, where the input to the learning algorithm is a camera image and the output is a steering angle prediction, but those convolutional neural networks are significantly more complex than the architecture we propose. The network architecture, computational complexity, and driving performance of our network are compared with two other convolutional neural networks that we re-implemented in order to evaluate the proposed network objectively. The trained model of the proposed network is four times smaller than the PilotNet model and about 250 times smaller than the AlexNet model. While the complexity and size of the novel network are reduced in comparison to the other models, which leads to lower latency and a higher frame rate during inference, our network maintains performance, achieving successful autonomous driving with efficiency similar to that of the other two models. Moreover, the proposed deep neural network reduces the requirements on real-time inference hardware in terms of computational power, cost, and size.
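As a rough illustration of the size comparison above, the sketch below counts trainable parameters for a PilotNet-style convolution/dense stack and a hypothetical lightweight alternative. The layer widths and the `conv_params`/`dense_params` helpers are assumptions for illustration only; they are not the exact architectures evaluated in the paper.

```python
# Hypothetical sketch: compare trainable-parameter counts of a PilotNet-style
# steering-prediction CNN against a smaller network in the same spirit.
# All layer specifications below are illustrative assumptions.

def conv_params(in_ch, out_ch, k):
    """Parameters of a k x k convolution with bias terms."""
    return out_ch * (in_ch * k * k + 1)

def dense_params(n_in, n_out):
    """Parameters of a fully connected layer with bias terms."""
    return n_out * (n_in + 1)

# PilotNet-style stack: five conv layers followed by a dense head.
pilotnet = (
    conv_params(3, 24, 5) + conv_params(24, 36, 5) + conv_params(36, 48, 5)
    + conv_params(48, 64, 3) + conv_params(64, 64, 3)
    + dense_params(1152, 100) + dense_params(100, 50)
    + dense_params(50, 10) + dense_params(10, 1)
)

# A lighter network: fewer filters per layer and a narrower dense head.
light = (
    conv_params(3, 8, 5) + conv_params(8, 12, 5) + conv_params(12, 16, 3)
    + dense_params(576, 64) + dense_params(64, 1)
)

print(f"PilotNet-style parameters: {pilotnet}")
print(f"Lightweight parameters:    {light}")
print(f"Size ratio: {pilotnet / light:.1f}x")
```

Smaller parameter counts translate directly into a smaller trained model file and less multiply-accumulate work per frame, which is what enables the lower latency and cheaper inference hardware described in the abstract.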


Author(s): Baiyu Peng, Qi Sun, Shengbo Eben Li, Dongsuk Kum, Yuming Yin, ...

Abstract: Recent years have seen the rapid development of autonomous driving systems, which are typically designed with either a hierarchical architecture or an end-to-end architecture. The hierarchical architecture is often complicated and hard to design, while the end-to-end architecture is more promising due to its simple structure. This paper puts forward an end-to-end autonomous driving method based on a deep reinforcement learning algorithm, Dueling Double Deep Q-Network, making it possible for the vehicle to learn end-to-end driving by itself. The paper first proposes an architecture for the end-to-end lane-keeping task. Unlike the traditional image-only state space, the presented state space is composed of both camera images and vehicle motion information. Second, a corresponding dueling neural network structure is introduced, which reduces variance and improves sampling efficiency. Third, the proposed method is applied in The Open Racing Car Simulator (TORCS), where it surpasses human drivers. Finally, the saliency map of the neural network is visualized, indicating that the trained network drives by observing the lane lines. A video of the presented work is available online: https://youtu.be/76ciJmIHMD8 or https://v.youku.com/v_show/id_XNDM4ODc0MTM4NA==.html.
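The two ingredients named in the abstract can be sketched in a few lines of plain Python: the dueling aggregation Q(s,a) = V(s) + A(s,a) - mean over a' of A(s,a'), and the Double DQN bootstrap target, which selects the next action with the online network but evaluates it with the target network. The functions and toy numbers below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the two ideas behind Dueling Double Deep Q-Network.
# Toy value/advantage numbers are made up for illustration.

def dueling_q(value, advantages):
    """Combine a state value and per-action advantages into Q-values."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target for a single transition."""
    if done:
        return reward
    # Online network picks the action; target network scores it.
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]

q = dueling_q(value=1.0, advantages=[0.5, -0.5, 0.0])
print(q)  # mean advantage is 0, so Q = [1.5, 0.5, 1.0]

y = double_dqn_target(reward=0.1, gamma=0.99,
                      q_online_next=[0.2, 0.8, 0.4],
                      q_target_next=[0.3, 0.6, 0.9],
                      done=False)
print(y)  # online argmax is action 1, so y = 0.1 + 0.99 * 0.6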


2020, Vol 174, pp. 505-517
Author(s): Qingqiao Hu, Siyang Yin, Huiyang Ni, Yisiyuan Huang

2021, Vol 6 (4), pp. 8647-8654
Author(s): Qi Wang, Jian Chen, Jianqiang Deng, Xinfang Zhang

2021
Author(s): Dennis J. Lee, John Mulcahy-Stanislawczyk, Edward Jimenez, Derek West, Ryan Goodner, ...
