Road Sign Detection Using Variants of YOLO and R-CNN: An Analysis from the Perspective of Bangladesh

2021, pp. 555-565
Author(s): Aklima Akter Lima, Md. Mohsin Kabir, Sujoy Chandra Das, Md. Nahid Hasan, M. F. Mridha

2021, Vol. 13(5), pp. 879
Author(s): Zhu Mao, Fan Zhang, Xianfeng Huang, Xiangyang Jia, Yiping Gong, ...

Oblique photogrammetry-based three-dimensional (3D) urban models are widely used for smart cities. In 3D urban models, road signs are small but provide valuable information for navigation. However, because of their thin, sliced shapes, blurred textures, and steep incline angles, road signs cannot be fully reconstructed by oblique photogrammetry, even with state-of-the-art algorithms. The poor reconstruction of road signs commonly leads to less informative guidance and an unsatisfactory visual appearance. In this paper, we present a pipeline for embedding road sign models based on deep convolutional neural networks (CNNs). First, we present an end-to-end balanced-learning framework for small object detection that combines a region-based CNN with a data synthesis strategy. Second, under the geometric constraints provided by the bounding boxes, we use the scale-invariant feature transform (SIFT) to extract corresponding points on the road signs. Third, we obtain a coarse location for each road sign by triangulating the corresponding points and refine it via outlier removal. Least-squares fitting is then applied to the refined point cloud to fit a plane for orientation prediction. Finally, we replace the road signs in the 3D urban scene with computer-aided design models at the predicted locations and orientations. The experimental results show that the proposed method achieves a high mAP in road sign detection and produces visually plausible embedded results, demonstrating its effectiveness for road sign modeling in oblique photogrammetry-based 3D scene reconstruction.
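
The final geometric steps of this pipeline (triangulated sign points, outlier removal, then least-squares plane fitting for orientation) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical NumPy illustration, not the authors' implementation: the function names (`fit_plane_least_squares`, `remove_outliers`), the SVD-based plane fit, and the 2-sigma distance threshold are assumptions made for demonstration only.

```python
import numpy as np

def fit_plane_least_squares(points: np.ndarray):
    """Fit a plane to an (N, 3) point cloud in the least-squares sense.

    Returns the centroid and a unit normal; the normal serves as the
    road sign's orientation estimate.
    """
    centroid = points.mean(axis=0)
    # The last right-singular vector of the centered cloud is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def remove_outliers(points: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Drop points farther than k standard deviations from the fitted plane
    (an assumed, simple outlier-removal rule)."""
    centroid, normal = fit_plane_least_squares(points)
    dist = np.abs((points - centroid) @ normal)
    return points[dist < k * dist.std()]

if __name__ == "__main__":
    # Hypothetical stand-in for a triangulated road-sign point cloud:
    # a near-planar synthetic cloud with a few gross outliers.
    rng = np.random.default_rng(0)
    cloud = np.column_stack([
        rng.uniform(-1, 1, 200),
        rng.uniform(-1, 1, 200),
        rng.normal(0, 0.01, 200),
    ])
    cloud[:5, 2] += 1.0  # inject outliers
    refined = remove_outliers(cloud)
    centroid, normal = fit_plane_least_squares(refined)
    print("estimated sign normal:", normal)
```

In this sketch the plane normal recovered from the refined cloud would play the role of the predicted orientation used to place the CAD sign model in the 3D scene.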


Author(s): Murali Keshav, Amartya Anshuman, Shashank Gupta, Shivanshu Mahim, Parakram Singh, ...

2007, Vol. 8(2), pp. 264-278
Author(s): Saturnino Maldonado-Bascon, Sergio Lafuente-Arroyo, Pedro Gil-Jimenez, Hilario Gomez-Moreno, Francisco Lopez-Ferreras

Author(s): Gaojian Huang, Clayton Steele, Xinrui Zhang, Brandon J. Pitts

The rapid growth of autonomous vehicles is expected to improve roadway safety. However, certain levels of vehicle automation will still require drivers to take over during abnormal situations, which may lead to breakdowns in driver-vehicle interaction. To date, there is no agreement on how best to support drivers in accomplishing a takeover task. Therefore, the goal of this study was to investigate the effectiveness of multimodal alerts as a feasible approach. In particular, we examined the effects of uni-, bi-, and trimodal combinations of visual, auditory, and tactile cues on response times to takeover alerts. Sixteen participants were asked to detect seven alert signals (visual, auditory, tactile, visual-auditory, visual-tactile, auditory-tactile, and visual-auditory-tactile) while driving under two conditions: with SAE Level 3 automation only, or with SAE Level 3 automation while also performing a road sign detection task. Performance on the signal and road sign detection tasks, pupil size, and perceived workload were measured. Findings indicate that the trimodal combination resulted in the shortest response times. Also, response times were longer and perceived workload was higher when participants were engaged in the secondary task. These findings may contribute to the development of theory regarding the design of takeover request alert systems within (semi-)autonomous vehicles.


Author(s): Manuel Kehl, Markus Enzweiler, Bjoern Froehlich, Uwe Franke, Wolfgang Heiden
