A Multiscale Recognition Method for the Optimization of Traffic Signs Using GMM and Category Quality Focal Loss

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4850
Author(s):  
Mingyu Gao ◽  
Chao Chen ◽  
Jie Shi ◽  
Chun Sing Lai ◽  
Yuxiang Yang ◽  
...  

Effective traffic sign recognition algorithms can assist drivers or automatic driving systems in detecting and recognizing traffic signs in real time. This paper proposes a multiscale recognition method for traffic signs based on the Gaussian Mixture Model (GMM) and Category Quality Focal Loss (CQFL) to enhance recognition speed and recognition accuracy. Specifically, GMM is utilized to cluster the prior anchors, which helps reduce the clustering error. Meanwhile, considering the most common issue in supervised learning (i.e., the imbalance of data set categories), a category proportion factor is introduced into Quality Focal Loss, which is referred to as CQFL. Furthermore, a five-scale recognition network with a prior anchor allocation strategy is designed for small target objects (i.e., traffic signs). By combining five existing tricks, the best speed and accuracy tradeoff on our data set (40.1% mAP and 15 FPS on a single 1080Ti GPU) can be achieved. The experimental results demonstrate that the proposed method is superior to the existing mainstream algorithms in terms of recognition accuracy and recognition speed.
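A minimal sketch of the GMM-based prior-anchor clustering described above, assuming scikit-learn and annotations reduced to box width/height pairs; the cluster count of 15 (3 anchors per scale for a five-scale network) and the synthetic box sizes are assumptions, not details from the paper.

```python
# Sketch: cluster ground-truth box sizes with a GMM and use the component
# means as prior anchors (preprocessing and anchor count are assumptions).
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_anchors(wh, n_anchors=15, seed=0):
    """wh: (N, 2) array of ground-truth box widths/heights, e.g. normalised to [0, 1]."""
    gmm = GaussianMixture(n_components=n_anchors,
                          covariance_type="full", random_state=seed)
    gmm.fit(wh)
    anchors = gmm.means_                               # each mean is one (w, h) prior anchor
    order = np.argsort(anchors[:, 0] * anchors[:, 1])  # sort by area, small to large
    return anchors[order]

# Illustrative usage with synthetic box sizes; real usage would load them from annotations.
rng = np.random.default_rng(0)
boxes = np.abs(rng.normal([0.05, 0.05], [0.02, 0.02], size=(1000, 2)))
print(gmm_anchors(boxes, n_anchors=15))
```

Sorting anchors by area makes it straightforward to allocate the smallest anchors to the finest of the five prediction scales.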

2010 ◽  
Vol 121-122 ◽  
pp. 596-599 ◽  
Author(s):  
Ni An Cai ◽  
Wen Zhao Liang ◽  
Shao Qiu Xu ◽  
Fang Zhen Li

A recognition method for traffic signs based on SIFT features is proposed to solve the problems of distortion and occlusion. SIFT features are first extracted from traffic signs and matched using the Euclidean distance. Recognition is then performed based on the resulting similarity. Experimental results show that the proposed method, superior to the traditional method, can reliably recognize traffic signs under scale change, rotation, and distortion, and is robust to noise and occlusion.
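A hedged sketch of the SIFT-plus-Euclidean-distance matching idea, using OpenCV (cv2.SIFT_create requires OpenCV >= 4.4). The Lowe ratio test and the similarity score defined here are common practice, not details stated in the abstract; the image paths are placeholders.

```python
# Sketch: match SIFT descriptors between a template sign and a query image
# with brute-force Euclidean (L2) distance, then score the similarity.
import cv2

def sift_similarity(template_path, query_path, ratio=0.75):
    tmpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, des_t = sift.detectAndCompute(tmpl, None)
    _, des_q = sift.detectAndCompute(query, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)          # Euclidean distance
    matches = matcher.knnMatch(des_t, des_q, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    # Similarity: fraction of template descriptors with a good match.
    return len(good) / max(len(des_t), 1)
```

Recognition would then pick the template class with the highest similarity score.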


2018 ◽  
Vol 7 (3.14) ◽  
pp. 233
Author(s):  
Mohd Safirin Karis ◽  
Nursabillilah Mohd Ali ◽  
Nur Aisyah Abdul Ghafor ◽  
Muhamad Aizuddin Akmal Che Jusoh ◽  
Nurasmiza Selamat ◽  
...  

In this paper, 19 cautionary traffic signs were selected as a database and three types of test conditions were proposed: five different image-capture times, hidden (occluded) regions, and anticlockwise rotation. These experiments measure the mean recognition error and the overall performance of traffic sign recognition. The initial hypothesis was that the error would grow as the interference became larger. For the five capture times, the lowest error occurred between 8 am and 12 noon, when brightness allows the signs to be recognized clearly. For the hidden-region condition, detection and recognition performance depends on how little of the sign is covered; when the hidden region is large, the system confuses database entries and takes longer to complete recognition. Finally, for anticlockwise rotation, a 90° rotation produced a large error and prevented the system from recognizing the sign correctly, whereas a 135° rotation did not. In summary, detection and recognition do not depend on a larger rotation angle but on the sample values of each traffic sign, and these errors directly affect the detection and recognition process. In conclusion, the SNN can perform detection and recognition for all tested objects, and the system is expected to become more stable with improved spiking models and further development in this field.
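A loose sketch of how the rotation and hidden-region test conditions could be generated with OpenCV; the angles come from the abstract, while the occluded-region placement and coverage fraction are assumptions for illustration only.

```python
# Sketch: produce rotated and partially hidden variants of a sign image
# to reproduce the experimental conditions described above (assumptions noted).
import cv2
import numpy as np

def rotate_anticlockwise(img, angle_deg):
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)  # positive = CCW
    return cv2.warpAffine(img, M, (w, h))

def hide_region(img, frac=0.25):
    """Black out a square covering roughly `frac` of the image area (placement assumed)."""
    out = img.copy()
    h, w = out.shape[:2]
    side = int(np.sqrt(frac * h * w))
    out[:side, :side] = 0
    return out

# Example: variants = [rotate_anticlockwise(sign, a) for a in (45, 90, 135)]
```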


Author(s):  
Manjiri Bichkar ◽  
Suyasha Bobhate ◽  
Prof. Sonal Chaudhari

This paper presents an effective solution for detecting traffic signs on the road by first classifying traffic sign images using a Convolutional Neural Network (CNN) on the German Traffic Sign Recognition Benchmark (GTSRB) [1], and then detecting Indian traffic signs using an Indian dataset, which serves as the test set while building the classification model. This system therefore helps electric cars or self-driving cars recognise traffic signs efficiently and correctly. The system involves two parts: detection of traffic signs from the environment, and classification based on the CNN, thereby recognising the traffic sign. The classification involves building CNN models with filters of dimensions 3 × 3, 5 × 5, 9 × 9, 13 × 13, 15 × 15, 19 × 19, 23 × 23, 25 × 25 and 31 × 31, from which the most efficient filter is chosen for classifying the detected image. The detection involves detecting the traffic sign using YOLO v3-v4 and BLOB detection. Transfer learning is used to apply the trained model to detecting Indian traffic sign images.
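A hedged sketch of the filter-size comparison idea: a small PyTorch CNN whose convolution kernel size is a constructor parameter, so one model per candidate size can be trained on GTSRB (43 classes) and the best validation accuracy kept. The layer count and channel widths are assumptions, not the paper's architecture.

```python
# Sketch: a classifier parameterised by kernel size, trained once per candidate
# filter dimension to pick the most efficient one (architecture details assumed).
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    def __init__(self, kernel_size=3, num_classes=43):
        super().__init__()
        pad = kernel_size // 2  # preserve spatial size before pooling
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size, padding=pad), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size, padding=pad), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

# One model per candidate kernel size from the abstract; train and compare each.
candidates = [3, 5, 9, 13, 15, 19, 23, 25, 31]
models = {k: SignClassifier(kernel_size=k) for k in candidates}
```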


2018 ◽  
Vol 35 (11) ◽  
pp. 1907 ◽  
Author(s):  
Ayoub Ellahyani ◽  
Mohamed El Ansari ◽  
Redouan Lahmyed ◽  
Alain Trémeau

2013 ◽  
Vol 644 ◽  
pp. 16-19
Author(s):  
Ming Xi Xiao ◽  
Han Ling Zhang

This paper presents a new method for traffic sign recognition in Intelligent Transport Systems based on low-rank approximation and a support vector machine (SVM). The method comprises traffic sign correction and SVM identification. First, the traffic sign region and its internal texture are extracted; according to the characteristics of the internal texture, and combined with sparse and low-rank approximation, the texture is corrected automatically. Next, feature vectors of the traffic sign texture are extracted, and finally identification is performed against the database. The experimental results show that the method based on low-rank approximation can correct deformed traffic signs effectively and accurately, improves the recognition rate of the SVM, and has good feasibility and real-time performance.
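A minimal sketch of the low-rank-then-SVM pipeline, assuming NumPy and scikit-learn. A truncated SVD stands in for the paper's sparse/low-rank correction step, and an RBF-kernel SVM classifies the flattened corrected texture; the rank, kernel choice, and feature layout are all assumptions.

```python
# Sketch: rank-r approximation of a sign's texture patch, then SVM classification
# of the flattened result (stand-in for the paper's correction step).
import numpy as np
from sklearn.svm import SVC

def low_rank_approx(patch, rank=5):
    """patch: 2-D grayscale texture; returns its best rank-`rank` approximation."""
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Illustrative training on hypothetical data (train_patches, y_train not defined here):
# X_train = np.stack([low_rank_approx(p).ravel() for p in train_patches])
# clf = SVC(kernel="rbf").fit(X_train, y_train)
```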


Author(s):  
Bhaumik Vaidya ◽  
Chirag Paunwala

Traffic sign recognition is a vital part of any driver assistance system, which can help in making complex driving decisions based on the detected traffic signs. Traffic sign detection (TSD) is essential in adverse weather conditions or when the vehicle is being driven on hilly roads. Traffic sign recognition is a complex computer vision problem, as the signs generally occupy a very small portion of the entire image. A lot of research is ongoing to solve this problem accurately, but performance is still not satisfactory. The goal of this paper is to propose a deep learning architecture which can be deployed on embedded platforms for driver assistance systems with limited memory and computing resources, without sacrificing detection accuracy. The architecture uses various architectural modifications to the well-known Convolutional Neural Network (CNN) architecture for object detection. It uses a trainable Color Transformer Network (CTN) with the existing CNN architecture to make the system invariant to illumination and lighting changes. The architecture uses a feature fusion module for detecting small traffic signs accurately. In the proposed work, receptive field calculation is used for choosing the number of convolutional layers for prediction and the right scales for default bounding boxes. The architecture is deployed on the Jetson Nano GPU embedded development board for performance evaluation at the edge, and it has been tested on the well-known German Traffic Sign Detection Benchmark (GTSDB) and the Tsinghua-Tencent 100k dataset. The architecture requires only 11 MB for storage, which is almost ten times better than previous architectures. The architecture has one-sixth the parameters of the best-performing architecture and 50 times fewer floating-point operations per second (FLOPs). The architecture achieves a running time of 220 ms on a desktop GPU and 578 ms on the Jetson Nano, which is also better than other similar implementations. It also achieves comparable accuracy in terms of mean average precision (mAP) on both datasets.
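A hedged sketch of the receptive-field bookkeeping mentioned above, using the standard recursion for a stack of convolution/pooling layers (receptive field grows by (k - 1) * jump, and the jump multiplies by the stride). The layer list in the example is illustrative only, not the paper's network.

```python
# Sketch: compute the receptive field after each (kernel, stride) layer, which can
# guide which layers to predict from and how to size default bounding boxes.
def receptive_fields(layers):
    rf, jump = 1, 1
    out = []
    for k, s in layers:
        rf += (k - 1) * jump   # growth contributed by this layer
        jump *= s              # effective stride seen by later layers
        out.append(rf)
    return out

# Example: pairs of 3x3 convs (stride 1) followed by 2x2 pooling (stride 2), repeated.
layers = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2)]
print(receptive_fields(layers))  # receptive field after each layer
```

A prediction layer whose receptive field roughly matches the target sign size is a natural place to attach default boxes of that scale.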

