Visual SLAM for Automated Driving: Exploring the Applications of Deep Learning

Author(s):  
Stefan Milz ◽  
Georg Arbeiter ◽  
Christian Witt ◽  
Bassam Abdallah ◽  
Senthil Yogamani

Author(s):  
Riichi Kudo ◽  
Kahoko Takahashi ◽  
Takeru Inoue ◽  
Kohei Mizuno

Abstract Various smart connected devices are emerging, such as automated driving cars, autonomous robots, and remote-controlled construction vehicles. These devices rely on vision systems to operate without collision. Thanks to great advances in deep learning, machine vision technology for perceiving self-position and/or the surrounding environment is becoming more accessible. The accurate perception information available on these smart connected devices makes it possible to predict wireless link quality (LQ). This paper proposes an LQ prediction scheme that applies machine learning to HD camera output to forecast the influence of surrounding mobile objects on LQ. The proposed scheme uses deep-learning-based object detection and learns the relationship between the positions of detected objects and the LQ. Outdoor experiments show that the proposed scheme can predict throughput roughly 1 s into the future on a 5.6-GHz wireless LAN channel.
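
The mapping from detections to LQ can be pictured as a standard supervised regression problem. Below is a minimal sketch, assuming detector bounding boxes are flattened into a fixed-length feature vector and a random forest regressor stands in for the paper's (unspecified) learner; the feature encoding, shapes, and data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: predict future link quality (LQ) from detected-object
# positions. All names, shapes, and data are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data:
#   X[i] = flattened (x, y, w, h) boxes of the K most relevant detected
#          objects in frame i (zero-padded when fewer are visible)
#   y[i] = measured throughput ~1 s after frame i (Mbit/s)
K = 5
rng = np.random.default_rng(0)
X = rng.random((1000, K * 4))          # stand-in for detector output
y = rng.random(1000) * 100.0           # stand-in for measured throughput

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# At run time: detect objects in the current camera frame, build the
# same feature vector, and forecast throughput ~1 s ahead.
current_boxes = rng.random((1, K * 4))
predicted_throughput = model.predict(current_boxes)[0]
print(f"predicted throughput in ~1 s: {predicted_throughput:.1f} Mbit/s")
```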


2019 ◽  
Vol 10 (1) ◽  
pp. 253 ◽  
Author(s):  
Donghoon Shin ◽  
Hyun-geun Kim ◽  
Kang-moon Park ◽  
Kyongsu Yi

This paper describes the development of a deep learning based, human-centered threat assessment for application to automated driving vehicles. To achieve a naturalistic driver model that feels natural yet safe to a human driver, manual driving characteristics are investigated through real-world driving test data. A probabilistic threat assessment with predicted collision time and collision probability is conducted to evaluate driving situations. On the basis of this collision risk analysis, two kinds of deep learning models have been implemented to reflect human driving characteristics in automated driving: a deep neural network (DNN) designed by neural architecture search (NAS), and a recurrent neural network (RNN) that learns from sequential data. The NAS automatically designs each individual driver's neural network, making the design process efficient and effortless while ensuring training performance. Sequential trends in the host vehicle's state are incorporated through the hand-crafted RNN. Human-centered risk assessment simulations show that the two deep learning driver models provide conservative and progressive driving behavior similar to a manual human driver in both acceleration and deceleration situations while preventing collisions.
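
For intuition, the RNN branch can be viewed as a sequence-to-command regressor. The following is a minimal sketch, assuming the input is a short history of host-vehicle states (e.g., speed, gap, relative speed) and the output is a longitudinal acceleration command; the LSTM architecture and feature set are assumptions, not the paper's exact hand-crafted design.

```python
# Minimal sketch of a recurrent driver model: map a short history of
# host-vehicle states to an acceleration command. Architecture and
# feature choice are illustrative, not the paper's design.
import torch
import torch.nn as nn

class RNNDriverModel(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # acceleration command [m/s^2]

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])       # regress from the last time step

model = RNNDriverModel()
history = torch.randn(8, 20, 3)            # 8 samples, 2 s of state at 10 Hz
accel_cmd = model(history)                 # (8, 1) acceleration commands
print(accel_cmd.shape)
```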


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection, and it is important that object detection be accurate overall, robust to weather and environmental conditions, and run in real time. These systems therefore require image processing algorithms that inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). In this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed based on parameters such as accuracy (with/without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful for understanding the pros and cons of standard deep learning-based algorithms operating under real-time deployment restrictions. We conclude that YOLOv4 outperforms the other algorithms in accurately detecting difficult road target objects under complex road scenarios and weather conditions in an identical testing environment.
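
The precision-recall comparison rests on matching predicted boxes to ground truth by intersection over union (IoU). The sketch below shows that matching step for a single image, assuming boxes in (x1, y1, x2, y2) form, confidence-sorted predictions, and a 0.5 IoU threshold; it illustrates the metric itself, not the article's evaluation code.

```python
# Minimal sketch: match predicted boxes to ground truth by IoU and count
# true/false positives, from which precision-recall points are accumulated.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    """preds: boxes sorted by descending confidence; gts: ground-truth boxes."""
    matched, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda i: iou(p, gts[i]), default=None)
        if best is not None and best not in matched and iou(p, gts[best]) >= iou_thr:
            matched.add(best)                    # each ground truth matches once
            tp += 1
    fp = len(preds) - tp
    precision = tp / (tp + fp) if preds else 1.0
    recall = tp / len(gts) if gts else 1.0
    return precision, recall

print(precision_recall([(0, 0, 10, 10)], [(1, 1, 11, 11)]))  # (1.0, 1.0)
```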


2019 ◽  
Vol 1 (3) ◽  
pp. 177-184
Author(s):  
Chao Duan ◽  
Steffen Junginger ◽  
Jiahao Huang ◽  
Kairong Jin ◽  
Kerstin Thurow

Abstract Visual SLAM (Simultaneous Localization and Mapping) is a solution for achieving localization and mapping of robots simultaneously. Significant achievements have been made during the past decades, and geometry-based methods have become more and more successful in dealing with static environments. However, they still cannot handle challenging environments. With the great achievements of deep learning methods in the field of computer vision, there is a trend of applying deep learning to visual SLAM. In this paper, the latest research progress in applying deep learning to visual SLAM is reviewed. Outstanding research results in deep learning visual odometry and deep learning loop closure detection are summarized. Finally, future development directions of visual SLAM based on deep learning are discussed.
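
As a concrete instance of deep learning visual odometry, a common formulation regresses the relative pose between consecutive frames with a CNN. The following is a minimal sketch of that idea; the layer sizes, stacked-frame input, and 6-DoF output parameterization are illustrative assumptions rather than any specific method surveyed.

```python
# Minimal sketch of deep-learning visual odometry: a CNN regresses the
# relative 6-DoF pose between two stacked consecutive frames.
import torch
import torch.nn as nn

class VONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),  # 2 RGB frames stacked
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.pose = nn.Linear(128, 6)      # (tx, ty, tz, roll, pitch, yaw)

    def forward(self, frame_pair):         # (batch, 6, H, W)
        f = self.features(frame_pair).flatten(1)
        return self.pose(f)

net = VONet()
pair = torch.randn(4, 6, 192, 640)          # KITTI-like resolution, illustrative
rel_pose = net(pair)                         # (4, 6) relative poses
print(rel_pose.shape)
```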


2018 ◽  
Vol 8 (12) ◽  
pp. 2590 ◽  
Author(s):  
Halil Beglerovic ◽  
Thomas Schloemicher ◽  
Steffen Metzner ◽  
Martin Horn

Test, verification, and development activities for vehicles with ADAS (Advanced Driver Assistance Systems) and ADF (Automated Driving Functions) generate large amounts of measurement data. To evaluate and use this data efficiently, a generic understanding and classification of the relevant driving scenarios is necessary. Currently, such understanding is obtained using heuristic algorithms or even manual inspection of sensor signals. In this paper, we apply deep learning to sensor time series data to automatically extract features relevant for classifying driving scenarios of a Lane-Keep-Assist system. We compare the performance of convolutional and recurrent neural networks and propose two classification models: an online model for scenario classification during driving, and an offline model for post-processing that provides higher accuracy.
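
As a rough sketch of the online model idea, a 1D convolutional network can label a sliding window of sensor signals with a scenario class. The snippet below assumes eight input signals, a 100-sample window, and four scenario classes; these numbers, and the architecture itself, are illustrative assumptions rather than the paper's models.

```python
# Minimal sketch of an online scenario classifier: a 1D CNN labels a
# sliding window of sensor time series with a driving-scenario class.
import torch
import torch.nn as nn

N_SIGNALS, WINDOW, N_SCENARIOS = 8, 100, 4   # e.g. cut-in, cut-out, follow, free

online_model = nn.Sequential(
    nn.Conv1d(N_SIGNALS, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, N_SCENARIOS),               # scenario logits
)

window = torch.randn(1, N_SIGNALS, WINDOW)    # one sliding window of signals
scenario_logits = online_model(window)
print(scenario_logits.argmax(dim=1))          # predicted scenario index
```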

