Pattern analysis for autonomous vehicles with the region- and feature-based neural network: global self-localization and traffic sign recognition

Author(s):  
J.A. Janet ◽  
M.W. White ◽  
T.A. Chase ◽  
R.C. Luo ◽  
J.C. Sutton
Author(s):  
Di Zang ◽  
Zhihua Wei ◽  
Maomao Bao ◽  
Jiujun Cheng ◽  
Dongdong Zhang ◽  
...  

As one of the key techniques for unmanned autonomous vehicles, traffic sign recognition is applied to assist autonomous driving. Colors are important clues for identifying traffic signs; however, color-based methods suffer performance degradation under light variation. The convolutional neural network, one of the deep learning methods, can hierarchically learn high-level features from raw input, and convolutional neural network–based approaches have been shown to outperform color-based ones. At present, inputs to convolutional neural networks are processed either as gray images or as three independent color channels, so the learned color features are still insufficient to represent traffic signs. Apart from color, temporal constraints are also crucial for recognizing video-based traffic signs, and the characteristics of traffic signs in the time domain require further exploration. Quaternion numbers can encode multi-dimensional information and have been employed to describe color images. Inspired by this, we present a quaternion convolutional neural network–based approach that recognizes traffic signs by fusing spatial and temporal features in a single framework. Experimental results show that the proposed method yields correct recognition results and achieves better performance than state-of-the-art work.
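The abstract's core idea is that a quaternion treats the three colour channels of a pixel as one algebraic entity instead of three independent planes. A minimal sketch of that encoding, plus the Hamilton product that quaternion convolutions are built on, is shown below; the helper names (`rgb_to_quaternion`, `hamilton_product`) are illustrative and not taken from the paper.

```python
import numpy as np

def rgb_to_quaternion(image):
    """Encode an RGB image (H, W, 3) as pure quaternions (H, W, 4).

    Each pixel (r, g, b) becomes the quaternion 0 + r*i + g*j + b*k,
    so the three colour channels travel together through any
    quaternion-valued operation instead of being processed separately.
    """
    h, w, _ = image.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = image  # real part stays 0; i, j, k carry R, G, B
    return q

def hamilton_product(p, q):
    """Hamilton product of two quaternion arrays of shape (..., 4).

    This non-commutative product is the basic multiplication used
    when quaternion filter weights are convolved with quaternion
    feature maps.
    """
    pw, px, py, pz = (p[..., i] for i in range(4))
    qw, qx, qy, qz = (q[..., i] for i in range(4))
    return np.stack([
        pw * qw - px * qx - py * qy - pz * qz,  # real part
        pw * qx + px * qw + py * qz - pz * qy,  # i part
        pw * qy - px * qz + py * qw + pz * qx,  # j part
        pw * qz + px * qy - py * qx + pz * qw,  # k part
    ], axis=-1)
```

For example, multiplying by the identity quaternion (1, 0, 0, 0) leaves an encoded pixel unchanged, while a unit quaternion rotates the (r, g, b) vector in colour space, which is one way such layers mix channels jointly.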


2018 ◽  
Vol 55 (12) ◽  
pp. 121009 ◽  
Author(s):  
Ma Yongjie (马永杰) ◽  
Li Xueyan (李雪燕) ◽  
Song Xiaofeng (宋晓凤)

2020 ◽  
Vol 100 ◽  
pp. 107160 ◽  
Author(s):  
Shichao Zhou ◽  
Chenwei Deng ◽  
Zhengquan Piao ◽  
Baojun Zhao

Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 105 ◽  
Author(s):  
Fanjie Meng ◽  
Xinqing Wang ◽  
Faming Shao ◽  
Dong Wang ◽  
Xia Hua

Deep-learning convolutional neural networks (CNNs) have proven successful in various cognitive applications thanks to their multilayer structure. However, their high computational energy and time requirements hinder practical application; hence, realizing a highly energy-efficient and fast-learning neural network has aroused interest. In this work, we address the computing-resource-saving problem by developing a deep model, termed the Gabor convolutional neural network (Gabor CNN), which incorporates highly expression-efficient Gabor kernels into CNNs. To effectively imitate the structural characteristics of traditional weight kernels, we improve upon the traditional Gabor filters, giving them stronger frequency and orientation representations. In addition, we propose a procedure for training Gabor CNNs, termed the fast training method (FTM). In FTM, the improved Gabor kernels are optimized with a new training method based on the multipopulation genetic algorithm (MPGA) and an evaluation structure, while the remaining Gabor CNN parameters are trained with back-propagation. Training the improved Gabor kernels with the MPGA is much more energy-efficient, requiring fewer samples and iterations. Simple tasks, such as character recognition on the Mixed National Institute of Standards and Technology database (MNIST), traffic sign recognition on the German Traffic Sign Recognition Benchmark (GTSRB), and face detection on the Olivetti Research Laboratory database (ORL), are implemented using the LeNet architecture. Experimental results for the Gabor CNN with MPGA training show a 17–19% reduction in computational energy and time and an 18–21% reduction in storage requirements, with less than a 1% decrease in accuracy. By incorporating highly expression-efficient Gabor kernels into CNNs, we eliminate a significant fraction of the computation-hungry components in the training process.
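The Gabor kernels referred to above are sinusoids modulated by a Gaussian envelope, parameterized by orientation and frequency. A minimal numpy sketch of the standard real-valued Gabor formulation follows; the function names and the small bank-construction loop are illustrative, and the paper's improved kernels and MPGA optimization are not reproduced here.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel: a cosine carrier under a Gaussian envelope.

    theta sets the orientation, lambd the wavelength (inverse frequency),
    sigma the envelope width, gamma the spatial aspect ratio, psi the phase.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotate coordinates so the carrier oscillates along direction theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def gabor_bank(size=5, n_orientations=4, n_scales=2):
    """A small bank of Gabor kernels spanning orientations and scales,
    usable as fixed (or separately optimized) convolutional weights."""
    kernels = []
    for s in range(n_scales):
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            kernels.append(
                gabor_kernel(size, sigma=1.5 + s, theta=theta,
                             lambd=2.0 + 2.0 * s))
    return np.stack(kernels)  # (n_scales * n_orientations, size, size)
```

Because each kernel is fully determined by a handful of parameters (sigma, theta, lambd, psi, gamma), an evolutionary search such as the MPGA only has to optimize those few values per kernel rather than every weight, which is the intuition behind the reported savings in samples and iterations.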

