Possibilities of deep learning for automated driving with focus on environmental perception

Author(s): Heinrich Gotzig

Author(s): Riichi Kudo, Kahoko Takahashi, Takeru Inoue, Kohei Mizuno

Abstract: Various smart connected devices are emerging, such as automated driving cars, autonomous robots, and remote-controlled construction vehicles. These devices rely on vision systems to conduct their operations without collision. Thanks to great advances in deep learning technologies, machine vision is becoming more accessible for perceiving self-position and/or the surrounding environment. The accurate perception information of these smart connected devices makes it possible to predict wireless link quality (LQ). This paper proposes an LQ prediction scheme that applies machine learning to HD camera output to forecast the influence of surrounding mobile objects on LQ. The proposed scheme utilizes deep-learning-based object detection and learns the relationship between the detected object positions and the LQ. Outdoor experiments show that the proposed scheme can accurately predict throughput around 1 s into the future in a 5.6-GHz wireless LAN channel.
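As an illustration of the idea, the sketch below (not the authors' code) regresses future throughput from per-frame object-detection features. The feature layout, the random-forest regressor, and the synthetic data are all assumptions for demonstration.

```python
# Minimal sketch: predicting future link quality (LQ) from detected-object
# positions. Assumes each frame yields (x, y, w, h) of the dominant object;
# a short history of frames is flattened into one feature row.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

H = 5                                   # frames of history per sample (assumed)
n_samples = 2000
X = rng.uniform(0.0, 1.0, size=(n_samples, 4 * H))   # synthetic detector output

# Synthetic throughput label ~1 s ahead: larger (closer) objects degrade LQ.
obstruction = X[:, 2::4].mean(axis=1)                # mean box width over history
y = 100.0 * (1.0 - obstruction) + rng.normal(0.0, 5.0, n_samples)  # Mbit/s

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out frames: {model.score(X_test, y_test):.3f}")
```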


2019, Vol 10 (1), pp. 253
Author(s): Donghoon Shin, Hyun-geun Kim, Kang-moon Park, Kyongsu Yi

This paper describes the development of a deep-learning-based, human-centered threat assessment for application to automated driving vehicles. To obtain a naturalistic driver model that feels natural yet safe to a human driver, manual driving characteristics are investigated through real-world driving test data. A probabilistic threat assessment with predicted collision time and collision probability is conducted to evaluate driving situations. On the basis of the collision risk analysis, two kinds of deep learning models have been implemented to reflect human driving characteristics for automated driving: a deep neural network (DNN) designed by neural architecture search (NAS), and a recurrent neural network (RNN) that learns from sequential data. The NAS is used to automatically design the individual driver's neural network, giving an efficient and effortless design process while ensuring training performance. Sequential trends in the host vehicle's state are incorporated through the hand-designed RNN. Human-centered risk assessment simulations show that the two deep learning driver models can provide conservative and progressive driving behavior similar to a manual human driver in both acceleration and deceleration situations while preventing collisions.
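A minimal sketch of the sequential half of this approach follows, assuming an LSTM over host-vehicle states (speed, gap, relative speed) that outputs an acceleration command; the state variables and layer sizes are illustrative, not the paper's architecture.

```python
# Hedged sketch of the hand-designed-RNN idea: an LSTM maps a sequence of
# host-vehicle states to an acceleration command mimicking a human driver.
import torch
import torch.nn as nn

class DriverRNN(nn.Module):
    def __init__(self, state_dim=3, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)    # acceleration command [m/s^2]

    def forward(self, states):                  # states: (batch, time, state_dim)
        out, _ = self.lstm(states)
        return self.head(out[:, -1])            # command at the last time step

model = DriverRNN()
seq = torch.randn(8, 20, 3)                     # 8 samples, 20 time steps
accel_cmd = model(seq)
print(accel_cmd.shape)                          # torch.Size([8, 1])
```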


Electronics, 2021, Vol 10 (16), pp. 1932
Author(s): Malik Haris, Adam Glowacz

Automated driving and vehicle safety systems need object detection. It is important that object detection be accurate, robust to weather and environmental conditions, and able to run in real-time. These systems therefore require image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). For this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed based on parameters such as accuracy (with/without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful for understanding the pros and cons of standard deep-learning-based algorithms operating under real-time deployment restrictions. We conclude that YOLOv4 outperforms the other algorithms in accurately detecting difficult road targets under complex road scenarios and weather conditions in an identical testing environment.
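The precision-recall comparison rests on IoU-based matching of detections to ground truth. A small, self-contained sketch of that building block is given below; the greedy matching rule and the 0.5 IoU threshold are common defaults, not necessarily the exact protocol used in the article.

```python
# Illustrative sketch: IoU-based matching of detections to ground-truth boxes,
# the building block behind precision-recall curves for detectors like YOLOv4.
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(dets, gts, thr=0.5):
    """Greedy matching by confidence; returns (precision, recall)."""
    dets = sorted(dets, key=lambda d: d["score"], reverse=True)
    used, tp = set(), 0
    for d in dets:
        best = max(range(len(gts)), key=lambda i: iou(d["box"], gts[i]),
                   default=None)
        if best is not None and best not in used and iou(d["box"], gts[best]) >= thr:
            used.add(best)
            tp += 1
    precision = tp / len(dets) if dets else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

dets = [{"box": [10, 10, 50, 50], "score": 0.9},
        {"box": [60, 60, 90, 90], "score": 0.4}]
gts = [[12, 12, 48, 52]]
print(match_detections(dets, gts))              # (0.5, 1.0)
```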


2018, Vol 8 (12), pp. 2590
Author(s): Halil Beglerovic, Thomas Schloemicher, Steffen Metzner, Martin Horn

Test, verification, and development activities for vehicles with ADAS (Advanced Driver Assistance Systems) and ADF (Automated Driving Functions) generate large amounts of measurement data. To efficiently evaluate and use these data, a generic understanding and classification of the relevant driving scenarios is necessary. Currently, such understanding is obtained by using heuristic algorithms or even by manual inspection of the sensor signals. In this paper, we apply deep learning to sensor time-series data to automatically extract relevant features for classifying driving scenarios relevant to a Lane-Keep-Assist system. We compare the performance of convolutional and recurrent neural networks and propose two classification models. The first is an online model for scenario classification during driving; the second is an offline model for post-processing, which provides higher accuracy.
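A minimal sketch of the online idea follows, assuming a 1D CNN over windows of sensor signals (e.g., lateral offset, yaw rate, speed); the signal choice, window length, and layer sizes are illustrative, not the authors' models.

```python
# Hedged sketch: a 1D CNN classifying a window of sensor time-series signals
# into driving-scenario classes for a Lane-Keep-Assist system.
import torch
import torch.nn as nn

class ScenarioCNN(nn.Module):
    def __init__(self, n_signals=3, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_signals, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, n_signals, time)
        return self.classifier(self.features(x).squeeze(-1))

window = torch.randn(8, 3, 200)                 # 8 windows of 200 samples each
logits = ScenarioCNN()(window)
print(logits.shape)                             # torch.Size([8, 4])
```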


2018, Vol 2 (3), pp. 57
Author(s): Shehan Caldera, Alexander Rassau, Douglas Chai

For robots to attain more general-purpose utility, grasping is a necessary skill to master. Such general-purpose robots may use their perception abilities to visually identify grasps for a given object. A grasp describes how a robotic end-effector can be arranged to securely grab an object and successfully lift it without slippage. Traditionally, grasp detection has required expert human knowledge to analytically form a task-specific algorithm, but this is an arduous and time-consuming approach. During the last five years, deep learning methods have enabled significant advancements in robotic vision, natural language processing, and automated driving applications. The successful results of these methods have driven robotics researchers to explore their use in task-generalised robotic applications. This paper reviews the current state of the art in the application of deep learning methods to generalised robotic grasping and discusses how each element of the deep learning approach has improved overall robotic grasp detection performance. Several of the most promising approaches are evaluated, and the one-shot detection method is identified as the most suitable for real-time grasp detection. The availability of suitable volumes of appropriate training data is identified as a major obstacle to the effective utilisation of deep learning approaches, and transfer learning techniques are proposed as a potential mechanism to address this. Finally, current trends in the field and potential future research directions are discussed.
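To make the one-shot idea concrete, the sketch below regresses a single grasp rectangle directly from an image in one forward pass. The backbone and the (x, y, sin 2θ, cos 2θ, w, h) output parameterisation are illustrative assumptions rather than any specific reviewed model.

```python
# Hedged sketch of one-shot grasp detection: a CNN regresses one grasp
# rectangle from an RGB image in a single forward pass.
import torch
import torch.nn as nn

class OneShotGrasp(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)    # x, y, sin 2θ, cos 2θ, w, h

    def forward(self, img):             # img: (batch, 3, H, W)
        return self.head(self.backbone(img))

grasp = OneShotGrasp()(torch.randn(1, 3, 224, 224))
print(grasp.shape)                      # torch.Size([1, 6])
```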


Author(s): Rui Li, Weitian Wang, Yi Chen, Srivatsan Srinivasan, Venkat N. Krovi

Fully automatic parking (FAP) is a key step towards the age of autonomous vehicles. Motivated by the contribution of human vision to human parking, in this paper we propose a computer-vision-based FAP method for autonomous vehicles. Based on the input images from a rear camera on the vehicle, a convolutional neural network (CNN) is trained to automatically output the steering and velocity commands for vehicle control. The CNN is trained with the Caffe deep learning framework. A 1/10th-scale autonomous vehicle research platform (1/10-SAVRP), which is configured with a vehicle controller unit, an automated driving processor, and a rear camera, is used to demonstrate the parking maneuver. The experimental results suggest that the proposed approach enables the vehicle to park independently, without human input, in different driving settings.
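A minimal PyTorch sketch of the end-to-end mapping is shown below; it is not the paper's Caffe network, and the layer sizes are assumptions, but it illustrates the frame-to-command structure described above.

```python
# Hedged sketch: a CNN maps a rear-camera frame to steering and velocity
# commands, in the spirit of the end-to-end parking control described above.
import torch
import torch.nn as nn

class ParkingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(48, 2)      # [steering angle, velocity]

    def forward(self, frame):           # frame: (batch, 3, H, W)
        return self.fc(self.conv(frame))

cmd = ParkingNet()(torch.randn(1, 3, 120, 160))
steering, velocity = cmd[0]
print(float(steering), float(velocity))
```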

