Real-Time GPU Based Road Sign Detection and Classification

Author(s):  
Roberto Ugolotti ◽  
Youssef S. G. Nashed ◽  
Stefano Cagnoni
Author(s):  
Jianping Wu ◽  
Maoxin Si ◽  
Fangyong Tan ◽  
Caidong Gu

Real-time detection and monitoring of road signs is an active research topic in the autonomous car industry. The number of car users in Malaysia rises every year, as does the rate of car crashes. The different types, shapes, and colours of road signs lead drivers to neglect them, an attitude that contributes to a high accident rate. The purpose of this paper is to implement image-processing-based real-time video Road Sign Detection and Tracking (RSDT) on an autonomous car. Road signs are detected using video- and image-processing techniques controlled in Python, applying a deep learning process to detect objects in a video's motion. The features extracted from each video frame are then passed to a template-matching recognition stage backed by a database. The experiment at a fixed distance shows an accuracy of 99.9943%, while the experiments at varying distances show an inversely proportional relation between distance and accuracy. The system is also able to detect and recognize five types of road signs using a convolutional neural network. The experimental results prove the system's capability to detect and recognize road signs accurately.
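The template-matching recognition stage described in this abstract can be sketched as a normalized cross-correlation search over the frame. This is a minimal illustrative sketch in plain NumPy, not the paper's implementation; the function name and brute-force scan are assumptions (a production pipeline would typically use OpenCV's `cv2.matchTemplate` instead).

```python
import numpy as np

def match_template_ncc(frame: np.ndarray, template: np.ndarray) -> tuple:
    """Return the (row, col) of the best normalized cross-correlation match
    of `template` inside `frame` (both 2-D grayscale arrays)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    # Brute-force scan over every window position (clear, but slow).
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            window = frame[r:r + th, c:c + tw]
            wz = window - window.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined, skip
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

A matched sign template scores close to 1.0 at its true location, which is why the abstract's recognition step can threshold the correlation score against entries in the sign database.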


Author(s):  
Amal Bouti ◽  
Mohamed Adnane Mahraz ◽  
Jamal Riffi ◽  
Hamid Tairi

In this chapter, the authors report a system for the detection and classification of road signs. The system consists of two parts. The first part detects road signs in real time. The second part is trained on the German Traffic Sign Recognition Benchmark (GTSRB) dataset and classifies the road signs detected by the first part, to test the system's effectiveness. The authors use HOG features and an SVM in the detection part to detect the road signs captured by the camera. They then use a convolutional neural network based on the LeNet model, with some modifications, in the classification part. The system obtains an accuracy of 96.85% in the detection part and 96.23% in the classification part.
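The HOG-plus-SVM detection stage rests on histogram-of-oriented-gradients features: per-cell histograms of gradient orientation, weighted by gradient magnitude. The sketch below is a deliberately minimal NumPy version for illustration only; the function name, cell size, and bin count are assumptions, and real detectors (e.g. scikit-image's `hog` or OpenCV's `HOGDescriptor`) add block normalisation across neighbouring cells before feeding an SVM.

```python
import numpy as np

def hog_descriptor(img: np.ndarray, cell: int = 8, bins: int = 9) -> np.ndarray:
    """Minimal HOG sketch: one orientation histogram per non-overlapping cell."""
    gy, gx = np.gradient(img.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    feats = []
    h, w = img.shape
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            a = ang[r:r + cell, c:c + cell].ravel()
            m = mag[r:r + cell, c:c + cell].ravel()
            # Magnitude-weighted orientation histogram for this cell.
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            hist = hist / (np.linalg.norm(hist) + 1e-6)  # per-cell L2 normalisation
            feats.append(hist)
    return np.concatenate(feats)
```

The concatenated descriptor is what a linear SVM would score to decide whether a window contains a sign.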


Author(s):  
C. YOON ◽  
H. LEE ◽  
E. KIM ◽  
M. PARK

2021 ◽  
Vol 13 (5) ◽  
pp. 879
Author(s):  
Zhu Mao ◽  
Fan Zhang ◽  
Xianfeng Huang ◽  
Xiangyang Jia ◽  
Yiping Gong ◽  
...  

Oblique photogrammetry-based three-dimensional (3D) urban models are widely used for smart cities. In 3D urban models, road signs are small but provide valuable information for navigation. However, due to the problems of sliced shape features, blurred texture and high incline angles, road signs cannot be fully reconstructed in oblique photogrammetry, even with state-of-the-art algorithms. The poor reconstruction of road signs commonly leads to less informative guidance and unsatisfactory visual appearance. In this paper, we present a pipeline for embedding road sign models based on deep convolutional neural networks (CNNs). First, we present an end-to-end balanced-learning framework for small object detection that takes advantage of the region-based CNN and a data synthesis strategy. Second, under the geometric constraints placed by the bounding boxes, we use the scale-invariant feature transform (SIFT) to extract the corresponding points on the road signs. Third, we obtain the coarse location of a single road sign by triangulating the corresponding points and refine the location via outlier removal. Least-squares fitting is then applied to the refined point cloud to fit a plane for orientation prediction. Finally, we replace the road signs with computer-aided design models in the 3D urban scene with the predicted location and orientation. The experimental results show that the proposed method achieves a high mAP in road sign detection and produces visually plausible embedded results, which demonstrates its effectiveness for road sign modeling in oblique photogrammetry-based 3D scene reconstruction.
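The least-squares plane-fitting step used above for orientation prediction is commonly done as a total-least-squares fit via SVD: the plane normal is the right singular vector with the smallest singular value of the centred point cloud. The function below is an illustrative sketch of that standard technique, not the authors' code.

```python
import numpy as np

def fit_plane(points: np.ndarray) -> tuple:
    """Total-least-squares plane fit to an (N, 3) point cloud.
    Returns (centroid, unit_normal); the plane passes through the centroid."""
    centroid = points.mean(axis=0)
    # SVD of the centred points: the last right singular vector is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)
```

Given the refined road-sign point cloud, the fitted normal directly gives the sign's orientation for placing the CAD model in the 3D scene.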

