Recognition Assistant Framework Based on Deep Learning for Autonomous Driving: Restoring Damaged Road Sign Information

2022 ◽  
Author(s):  
Jeongeun Park ◽  
Kisu Lee ◽  
Ha Young Kim

Author(s):
Yao Deng ◽  
Tiehua Zhang ◽  
Guannan Lou ◽  
Xi Zheng ◽  
Jiong Jin ◽  
...  

Author(s):  
Khan Muhammad ◽  
Amin Ullah ◽  
Jaime Lloret ◽  
Javier Del Ser ◽  
Victor Hugo C. de Albuquerque

2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy and are also more reliable in detecting minority classes. Faster R-CNN with a Res2Net-101 backbone achieves the best speed/accuracy trade-off but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
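A minimal sketch of the kind of transfer-learning setup the study describes (not the authors' code): a COCO-pretrained Faster R-CNN with a ResNet-50 FPN backbone from torchvision, with the box predictor swapped for the three-class problem and a rough latency check for real-time feasibility. The backbone choice, input resolution, and timing loop are illustrative assumptions; the paper itself compares several backbones and resolutions.

```python
# Hypothetical sketch: transfer learning for a two-stage detector and a rough FPS check.
import time
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # vehicle, pedestrian, cyclist + background (assumed class set)

# Start from COCO-pretrained weights (transfer learning) and replace the box predictor.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device).eval()

# Rough latency check on a dummy camera frame; resolution is the speed/accuracy knob.
image = torch.rand(3, 960, 1280, device=device)
with torch.no_grad():
    model([image])  # warm-up
    start = time.perf_counter()
    for _ in range(10):
        detections = model([image])
    fps = 10 / (time.perf_counter() - start)
print(f"{fps:.1f} FPS,", detections[0]["boxes"].shape[0], "boxes detected")
```

Fine-tuning on the target dataset would then proceed with the standard torchvision detection training loop before any accuracy comparison is meaningful.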


2020 ◽  
pp. 106617
Author(s):  
Guofa Li ◽  
Yifan Yang ◽  
Xingda Qu ◽  
Dongpu Cao ◽  
Keqiang Li

2022 ◽  
Author(s):  
Mesfer Al Duhayyim ◽  
Fahd N. Al-Wesabi ◽  
Anwer Mustafa Hilal ◽  
Manar Ahmed Hamza ◽  
Shalini Goel ◽  
...  

2021 ◽  
pp. 228-245
Author(s):  
Zheyi Chen ◽  
Pu Tian ◽  
Weixian Liao ◽  
Wei Yu

Author(s):  
Gaojian Huang ◽  
Clayton Steele ◽  
Xinrui Zhang ◽  
Brandon J. Pitts

The rapid growth of autonomous vehicles is expected to improve roadway safety. However, certain levels of vehicle automation will still require drivers to 'take over' during abnormal situations, which may lead to breakdowns in driver-vehicle interactions. To date, there is no agreement on how best to support drivers in accomplishing a takeover task. Therefore, the goal of this study was to investigate the effectiveness of multimodal alerts as a feasible approach. In particular, we examined the effects of uni-, bi-, and trimodal combinations of visual, auditory, and tactile cues on response times to takeover alerts. Sixteen participants were asked to detect seven alert signals (visual, auditory, tactile, visual-auditory, visual-tactile, auditory-tactile, and visual-auditory-tactile) while driving under two conditions: with SAE Level 3 automation alone, or with SAE Level 3 automation while also performing a road sign detection task. Performance on the signal and road sign detection tasks, pupil size, and perceived workload were measured. Findings indicate that trimodal combinations result in the shortest response times. Response times were also longer, and perceived workload higher, when participants were engaged in the secondary task. These findings may contribute to theory regarding the design of takeover request alert systems within (semi-)autonomous vehicles.


Author(s):  
Ying Li ◽  
Lingfei Ma ◽  
Zilong Zhong ◽  
Fei Liu ◽  
Michael A. Chapman ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4703
Author(s):  
Yookhyun Yoon ◽  
Taeyeon Kim ◽  
Ho Lee ◽  
Jahnghyon Park

Long-term trajectory prediction of surrounding vehicles is essential for autonomous vehicles to drive safely and comfortably. Deep-learning-based approaches have previously been proposed to handle the uncertain nature of trajectory prediction. An on-road vehicle must obey the road geometry, i.e., it should stay within the constraints of the road shape. Herein, we present a novel road-aware trajectory prediction method that combines high-definition maps with a deep learning network. We develop a data-efficient learning framework by formulating the trajectory prediction network in the curvilinear coordinate system of the road and assigning lanes to the surrounding vehicles. We then propose a novel output-constrained sequence-to-sequence trajectory prediction network that incorporates the structural constraints of the road. Our method uses these structural constraints as prior knowledge for the prediction network: they serve not only as an input to the trajectory prediction network but are also included in the constrained loss function of the maneuver recognition network. Accordingly, the proposed method can predict feasible and realistic driver intentions and trajectories. The method has been evaluated on a real traffic dataset, and the results show that it is data-efficient and can predict reasonable trajectories at merging sections.
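A minimal sketch of an output-constrained sequence-to-sequence predictor of the general kind described above (not the authors' implementation): a GRU encoder-decoder operating on road-aligned (curvilinear) coordinates, with a hinge-style penalty added to the regression loss whenever the predicted lateral offset leaves an assumed lane corridor. The lane half-width, loss weight, and network sizes are hypothetical choices for illustration.

```python
# Sketch of a road-constrained seq2seq trajectory predictor (illustrative, assumed design).
import torch
import torch.nn as nn

class Seq2SeqTrajectory(nn.Module):
    def __init__(self, hidden=64, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # (s, d): longitudinal / lateral offset in road frame

    def forward(self, history):                  # history: (B, T_obs, 2) in curvilinear coords
        _, h = self.encoder(history)
        step = history[:, -1:, :]                # decode starting from the last observed point
        outputs = []
        for _ in range(self.horizon):
            out, h = self.decoder(step, h)
            step = self.head(out)                # predict the next (s, d) point
            outputs.append(step)
        return torch.cat(outputs, dim=1)         # (B, horizon, 2)

def constrained_loss(pred, target, lane_half_width=1.75, weight=10.0):
    """L2 regression loss plus a penalty whenever the predicted lateral offset d
    leaves the lane corridor (a simple stand-in for the structural road constraint)."""
    reg = ((pred - target) ** 2).mean()
    violation = torch.relu(pred[..., 1].abs() - lane_half_width)
    return reg + weight * violation.mean()

# Toy usage on random data
model = Seq2SeqTrajectory()
history = torch.randn(8, 20, 2)
future = torch.randn(8, 30, 2)
loss = constrained_loss(model(history), future)
loss.backward()
```

In this toy form the constraint only enters through the loss; the paper additionally feeds the road structure to the network as an input and couples it with a maneuver recognition branch.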

