Traffic sign recognition and tracking for a vision-based autonomous vehicle using optimally selected features

Author(s):  
Wahyono ◽  
Laksono Kurnianggoro ◽  
Kang-Hyun Jo

Author(s):  
Zhenhua Zhang ◽  
Leon Stenneth ◽  
Xiyuan Liu

State-of-the-art traffic sign recognition (TSR) algorithms are designed to recognize the textual information of a traffic sign at over 95% accuracy. Even so, they are still not ready for complex roadworks near ramps. In real-world applications, when vehicles are running on the freeway, they may misdetect traffic signs intended for the ramp, which provides inaccurate feedback to autonomous driving applications and results in unexpected speed reduction. These misdetection problems have drawn minimal attention in recent TSR studies. In this paper, it is proposed that existing TSR studies transform from point-based sign recognition to path-based sign learning. In the proposed pipeline, the confidence of TSR observations from ordinary vehicles is increased by clustering and location adjustment. A supervised learning model is then employed to classify the clustered learned signs and complement their path information. Test drives were conducted in 12 European countries to calibrate the models and validate the path information of the learned signs. After model implementation, the path accuracy over 1,000 learned signs increases from 75.04% to 89.80%. This study proves the necessity of path-based TSR studies near freeway ramps, and the proposed pipeline demonstrates good utility and broad applicability for sensor-based autonomous vehicle applications.
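The clustering and location-adjustment step described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `Observation` type, the degree-space distance threshold, and the greedy single-link grouping are all illustrative assumptions; a production system would use a geodesic metric and a proper clustering method such as DBSCAN. The idea shown is only that repeated same-type observations near one location collapse into a single "learned sign" whose confidence grows with the cluster size.

```python
# Hypothetical sketch of clustering GPS-tagged TSR observations from many
# vehicle drives and adjusting the sign location to the cluster mean.
# All names and thresholds are illustrative, not from the paper.

from dataclasses import dataclass


@dataclass
class Observation:
    lat: float
    lon: float
    sign_type: str  # e.g. "speed_limit_80"


def cluster_observations(obs, threshold=0.0005):
    """Greedy single-link grouping of same-type observations.

    `threshold` is a rough degree-space distance (~50 m at mid latitudes);
    a real system would use a geodesic distance and e.g. DBSCAN.
    """
    clusters = []
    for o in obs:
        for c in clusters:
            if c[0].sign_type == o.sign_type and all(
                abs(p.lat - o.lat) < threshold and abs(p.lon - o.lon) < threshold
                for p in c
            ):
                c.append(o)
                break
        else:
            clusters.append([o])
    return clusters


def learned_signs(clusters):
    """Collapse each cluster to an adjusted (mean) location; the number of
    supporting observations acts as a simple confidence score."""
    result = []
    for c in clusters:
        lat = sum(p.lat for p in c) / len(c)
        lon = sum(p.lon for p in c) / len(c)
        result.append({"sign_type": c[0].sign_type,
                       "lat": lat, "lon": lon, "confidence": len(c)})
    return result
```

The supervised classifier mentioned in the abstract would then consume these clustered signs (plus map context) to decide whether each learned sign belongs to the freeway path or the ramp path.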


Computer ◽  
2021 ◽  
Vol 54 (8) ◽  
pp. 66-76
Author(s):  
Koorosh Aslansefat ◽  
Sohag Kabir ◽  
Amr Abdullatif ◽  
Vinod Vasudevan ◽  
Yiannis Papadopoulos

Author(s):  
Di Zang ◽  
Zhihua Wei ◽  
Maomao Bao ◽  
Jiujun Cheng ◽  
Dongdong Zhang ◽  
...  

As one of the key techniques for unmanned autonomous vehicles, traffic sign recognition is applied to assist the autopilot. Colors are important clues for identifying traffic signs; however, color-based methods suffer performance degradation under light variation. The convolutional neural network, as one of the deep learning methods, can hierarchically learn high-level features from raw input, and convolutional neural network-based approaches have been proven to outperform color-based ones. At present, inputs to convolutional neural networks are processed either as gray images or as three independent color channels, so the learned color features are still insufficient to represent traffic signs. Apart from color, the temporal constraint is also crucial for recognizing video-based traffic signs, and the characteristics of traffic signs in the time domain require further exploration. Quaternion numbers can encode multi-dimensional information and have been employed to describe color images. Inspired by this, we present a quaternion convolutional neural network-based approach that recognizes traffic signs by fusing spatial and temporal features in a single framework. Experimental results illustrate that the proposed method yields correct recognition results and achieves better performance than state-of-the-art work.


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Christine Dewi ◽  
Rung-Ching Chen ◽  
Yan-Ting Liu ◽  
Xiaoyi Jiang ◽  
Kristoko Dwi Hartomo

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 686
Author(s):  
Ke Zhou ◽  
Yufei Zhan ◽  
Dongmei Fu

Traffic sign recognition in poor environments has always been a challenge for self-driving. Although a few works have achieved good results in traffic sign recognition, there is currently a lack of traffic sign benchmarks containing many complex factors, and of a robust network. In this paper, we first propose an ice-environment traffic sign recognition benchmark (ITSRB) and detection benchmark (ITSDB), annotated in the COCO2017 format. The benchmarks include 5,806 images with 43,290 traffic sign instances under different climate, light, time, and occlusion conditions. Second, we test the robustness of Libra-RCNN and HRNetV2p on the ITSDB against Faster-RCNN. Libra-RCNN performs well, confirming that our ITSDB dataset does increase the challenge of this task. Third, we propose a parallel fusion attention network (PFANet) for high-resolution traffic sign classification and conduct an ablation study on the designed parallel fusion attention module. Experiments show that our approach reaches 93.57% accuracy on the ITSRB and performs as well as the newest and most effective networks on the German Traffic Sign Recognition Benchmark (GTSRB).
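Because the abstract states that ITSRB/ITSDB are annotated in the COCO2017 format, a consumer of such a benchmark can rely on the standard COCO JSON keys. The sketch below shows only that generic layout (`categories`, `annotations`, `category_id`); the file name and category names are illustrative, and the actual ITSDB files are not assumed.

```python
# Hypothetical reader for a COCO2017-format annotation file, such as a
# detection benchmark like ITSDB would provide. Only standard COCO keys
# ("categories", "annotations", "category_id") are assumed.

import json


def load_coco(path):
    """Load a COCO-format annotation file (e.g. an instances_*.json)."""
    with open(path) as f:
        return json.load(f)


def instances_per_category(coco):
    """Count annotation instances per category name in a COCO-format dict."""
    names = {c["id"]: c["name"] for c in coco["categories"]}
    counts = {}
    for ann in coco["annotations"]:
        name = names[ann["category_id"]]
        counts[name] = counts.get(name, 0) + 1
    return counts
```

Per-category instance counts like these are what reveal the class imbalance and occlusion-heavy composition that make such a benchmark harder than GTSRB-style cropped-sign classification.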

