Biologically Visual Perceptual Model and Discriminative Model for Road Markings Detection and Recognition

2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Huiqun Jia ◽  
Zhonghui Wei ◽  
Xin He ◽  
You Lv ◽  
Dinglong He ◽  
...  

The detection and recognition of arrow markings is a basic task of autonomous driving. To achieve all-day detection and recognition of arrow markings in complex environments, we propose a hybrid model that exploits the advantages of a biologically visual perceptual model and a discriminative model. First, the arrow markings are extracted from the complex background in the region of interest (ROI) by the biologically visual perceptual model using the frequency-tuned (FT) algorithm. Then, candidates for road markings are detected as maximally stable extremal regions (MSER). In the recognition stage, the biologically visual perceptual model calculates the sparse solution of the arrow markings using sparse learning theory. Finally, the discriminative model uses an Adaptive Boosting (AdaBoost) classifier trained on the sparse solutions to classify arrow markings. Experimental results show that the hybrid model achieves detection and recognition of arrow markings in complex road conditions, with precision, recall, and F-measure of 0.966, 0.88, and 0.92, respectively. The hybrid model is robust and compares favorably with other state-of-the-art methods, and it has theoretical significance and practical value for all-day detection and recognition in complex environments.
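The FT saliency step used to pull markings out of the background can be sketched as below. This is a minimal single-channel approximation: the original frequency-tuned algorithm (Achanta et al.) operates per Lab color channel with a Gaussian blur, and the box blur and toy ROI here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ft_saliency(img):
    """Frequency-tuned saliency (sketched): per-pixel squared distance
    between the global image mean and a smoothed version of the image.
    Applied here to one float channel; the full method works in Lab."""
    mean = img.mean()
    # 3x3 box blur as a simple stand-in for Gaussian smoothing
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return (blur - mean) ** 2

# toy ROI: a bright arrow-like blob on a dark road surface
roi = np.zeros((8, 8))
roi[2:6, 3:5] = 1.0
sal = ft_saliency(roi)
# the blob region should come out more salient than the background
```

In the paper's pipeline, a salient region like this would then be handed to MSER to extract stable marking candidates.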

2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Fan Zhang ◽  
Jiaxing Luan ◽  
Zhichao Xu ◽  
Wei Chen

Deep learning-based object detection methods have been applied in various fields, such as ITS (intelligent transportation systems) and ADS (autonomous driving systems). Meanwhile, text detection and recognition in different scenes have also attracted much attention and research effort. In this article, we propose a new object-text detection and recognition method termed “DetReco” that detects objects and texts and recognizes the text contents. The proposed method is composed of an object-text detection network and a text recognition network. YOLOv3 is used for the object-text detection task, and CRNN is employed for the text recognition task. We combine datasets of general objects and texts to train the networks. At test time, the detection network detects the various objects in an image; the text regions are then passed to the text recognition network to derive the text contents. The experiments show that the proposed method achieves 78.3 mAP (mean Average Precision) for general objects and 72.8 AP (Average Precision) for texts in terms of detection performance. Furthermore, the proposed method detects and recognizes affine-transformed or occluded texts robustly. In addition, for texts detected around general objects, the text contents can be used as identifiers to distinguish the objects.
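The two-stage flow described above (detect everything with one network, then recognize only the text crops) can be sketched roughly as follows; `detect` and `recognize` are hypothetical stand-ins for the trained YOLOv3 and CRNN models, not real APIs.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # class name produced by the detector
    box: tuple   # (x1, y1, x2, y2)

def detreco_pipeline(image, detect, recognize):
    """Detect objects and text regions in one pass, then run the
    recognizer only on the text crops (sketch of the DetReco flow)."""
    results = []
    for det in detect(image):
        if det.label == "text":
            x1, y1, x2, y2 = det.box
            crop = [row[x1:x2] for row in image[y1:y2]]
            results.append((det.label, det.box, recognize(crop)))
        else:
            results.append((det.label, det.box, None))
    return results

# toy demo with fake models standing in for YOLOv3 and CRNN
image = [["#"] * 10 for _ in range(10)]
fake_detect = lambda img: [Detection("car", (0, 0, 4, 4)),
                           Detection("text", (5, 5, 9, 9))]
fake_recognize = lambda crop: "STOP"
out = detreco_pipeline(image, fake_detect, fake_recognize)
print(out[1][2])  # → STOP
```

The recognized string can then serve as the identifier that distinguishes the nearby object, as the abstract suggests.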


2017 ◽  
Vol 5 ◽  
pp. 179-189 ◽  
Author(s):  
Ryo Fujii ◽  
Ryo Domoto ◽  
Daichi Mochihashi

This paper presents a novel hybrid generative/discriminative model of word segmentation based on nonparametric Bayesian methods. Unlike ordinary discriminative word segmentation, which relies only on labeled data, our semi-supervised model also leverages huge amounts of unlabeled text to automatically learn new “words”, and further constrains them with labeled data to segment non-standard texts such as those found in social networking services. Specifically, our hybrid model combines a discriminative classifier (CRF; Lafferty et al. (2001)) and unsupervised word segmentation (NPYLM; Mochihashi et al. (2009)), with a transparent exchange of information between the two model structures within a semi-supervised framework (JESS-CM; Suzuki and Isozaki (2008)). We confirmed that it can appropriately segment non-standard texts like those on Twitter and Weibo and has nearly state-of-the-art accuracy on standard datasets in Japanese, Chinese, and Thai.
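The core idea of scoring segmentations with both a discriminative model and a generative one can be illustrated on a toy example. The log-linear mixing below is a loose sketch in the spirit of JESS-CM, not the paper's exact objective; the toy lexicon, scores, and weight are all assumptions.

```python
import itertools

def segmentations(chars):
    """Enumerate all 2^(n-1) segmentations of a string."""
    n = len(chars)
    for bits in itertools.product([0, 1], repeat=n - 1):
        words, start = [], 0
        for i, cut in enumerate(bits, 1):
            if cut:
                words.append(chars[start:i])
                start = i
        words.append(chars[start:])
        yield words

def hybrid_score(words, disc_score, gen_logprob, lam=0.5):
    """Log-linear combination (sketched) of a discriminative score
    and a generative log-probability, as in semi-supervised hybrids."""
    return lam * disc_score(words) + (1 - lam) * gen_logprob(words)

# toy stand-ins: the "lexicon" learned from unlabeled text likes these words
lexicon = {"in", "new", "york"}
gen = lambda ws: sum(0.0 if w in lexicon else -5.0 for w in ws)
disc = lambda ws: -len(ws)  # weak preference for fewer segments
best = max(segmentations("innewyork"),
           key=lambda ws: hybrid_score(ws, disc, gen))
print(best)  # → ['in', 'new', 'york']
```

Real systems search this space with dynamic programming rather than enumeration; the sketch only shows how the two scores trade off.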


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 194228-194239 ◽  
Author(s):  
Yanfen Li ◽  
Hanxiang Wang ◽  
L. Minh Dang ◽  
Tan N. Nguyen ◽  
Dongil Han ◽  
...  

Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1373 ◽  
Author(s):  
Wahyu Rahmaniar ◽  
Wen-June Wang ◽  
Hsiang-Chieh Chen

Detection of moving objects by unmanned aerial vehicles (UAVs) is an important application in the aerial transportation system. However, many problems must be handled, such as high-frequency jitter from the UAV, small objects, low-quality images, computation time reduction, and detection correctness. This paper considers the problem of detecting and recognizing moving objects in a sequence of images captured from a UAV. A new and efficient technique is proposed to achieve this objective in real time and in real environments. First, feature points between two successive frames are matched to estimate the camera movement and stabilize the image sequence. Then, regions of interest (ROIs) of the objects are detected as moving object candidates (foreground). Furthermore, static and dynamic objects are classified based on the dominant motion vectors in the foreground and background. Based on the experimental results, the proposed method achieves a precision rate of 94% and a computation speed of 47.08 frames per second (fps). The performance of the proposed method surpasses that of existing methods.
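The stabilize-then-classify idea can be sketched like this, under the assumption (not stated in the abstract) that the dominant camera motion is a pure translation estimated as the median feature-point displacement; the threshold and toy points are likewise assumptions.

```python
import statistics

def estimate_camera_motion(prev_pts, curr_pts):
    """Global motion estimate (sketched): take the median displacement
    of matched feature points as the dominant camera translation.
    The median is robust to the minority of points on moving objects."""
    dx = statistics.median(c[0] - p[0] for p, c in zip(prev_pts, curr_pts))
    dy = statistics.median(c[1] - p[1] for p, c in zip(prev_pts, curr_pts))
    return dx, dy

def classify_points(prev_pts, curr_pts, cam, thresh=1.0):
    """Points whose residual motion (after removing the camera motion)
    exceeds the threshold are labeled as belonging to moving objects."""
    dx, dy = cam
    labels = []
    for p, c in zip(prev_pts, curr_pts):
        rx, ry = c[0] - p[0] - dx, c[1] - p[1] - dy
        labels.append("moving" if (rx * rx + ry * ry) ** 0.5 > thresh
                      else "static")
    return labels

# toy frame pair: camera pans right ~5 px; the last point is a moving car
prev = [(0, 0), (10, 0), (20, 5), (30, 5), (40, 10)]
curr = [(5, 0), (15, 0), (25, 5), (35, 5), (53, 10)]
cam = estimate_camera_motion(prev, curr)
print(classify_points(prev, curr, cam))
# → ['static', 'static', 'static', 'static', 'moving']
```

A production system would estimate a full homography from the matches rather than a translation, but the residual-motion test works the same way.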


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 109817-109832 ◽  
Author(s):  
Toan Minh Hoang ◽  
Se Hyun Nam ◽  
Kang Ryoung Park

2021 ◽  
Vol 2021 ◽  
pp. 1-25
Author(s):  
Yuanhang Chen ◽  
Guodong Feng ◽  
Shaofang Wu ◽  
Xiaojun Tan

Autonomous driving is an appealing research topic for integrating advanced intelligent algorithms to transform the automotive industry and human commuting. This paper focuses on a hybrid model predictive controller (MPC) design for adaptive cruise control (ACC). The driving modes are divided into following and cruising: an MPC algorithm based on a simplified dual neural network (SDNN) is applied to the following mode, and a proportional-integral-derivative (PID) controller based on a single neuron (SN) is applied to the cruising mode. The SDNN accelerates the solution of the quadratic programming (QP) problem of the proposed MPC algorithm to improve computational efficiency, while the SN-based PID performs well under the nonlinear and time-varying conditions of the ACC system. Moreover, lateral dynamics control is integrated into the designed system to enable cruise control on curved roads. Furthermore, an energy feedback strategy is proposed to improve the energy efficiency of the electric vehicle. The simulation results show that the proposed ACC system is effective on both straight and curved roads.
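The mode switch and the single-neuron PID increment can be sketched as below; the detection-range threshold, gain, and weights are assumed values for illustration, not taken from the paper.

```python
def select_mode(lead_distance, detect_range=120.0):
    """Mode switch described in the abstract (threshold assumed):
    a lead vehicle detected within range puts the ACC into following
    mode (SDNN-accelerated MPC); otherwise it cruises at the set
    speed (single-neuron PID)."""
    if lead_distance is not None and lead_distance < detect_range:
        return "following"
    return "cruising"

def sn_pid_step(err, prev_err, prev_prev_err, w, K=0.5):
    """One incremental step of a single-neuron PID (sketched): the
    control increment is a weighted sum of the P, I, D error terms."""
    x = (err - prev_err,                      # proportional increment
         err,                                 # integral term
         err - 2 * prev_err + prev_prev_err)  # derivative term
    return K * sum(wi * xi for wi, xi in zip(w, x))

print(select_mode(60.0))  # → following
print(select_mode(None))  # → cruising
```

In the actual SN-PID, the weights are additionally adapted online (e.g. by a supervised Hebb rule), which is what gives it robustness to time-varying conditions.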


2020 ◽  
pp. 002029402095247
Author(s):  
Houzhong Zhang ◽  
Jiasheng Liang ◽  
Haobin Jiang ◽  
Yingfeng Cai ◽  
Xing Xu

The visual guidance of AGVs (automated guided vehicles) has gradually become one of the most important perception methods. Because it is difficult to extract lane lines accurately when an AGV runs in a complex working environment (with uneven illumination, overexposure, faint lane lines, etc.), a scheme for lane line recognition in complex environments is proposed. First, variable-scale image correction is applied to unevenly illuminated areas in the ROI (region of interest), and the threshold of the Canny algorithm is adjusted adaptively according to the luminance of the ROI by a Fuzzy-Canny algorithm. Second, edge points matching the lane width feature are extracted from a bird's-eye view. Finally, a curve fitting method based on RANSAC (Random Sample Consensus) fits the curve with the lowest error rate to obtain the lane center curve. The experimental results show that the algorithm is feasible and effective, has strong robustness and fast computing performance, and meets the requirements of intelligent AGVs in various complex environments.
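The final RANSAC fitting step can be sketched in generic form; the polynomial degree, inlier threshold, and toy points below are assumptions, since the abstract does not specify the fitting model.

```python
import random
import numpy as np

def ransac_polyfit(xs, ys, degree=2, iters=200, thresh=1.0, seed=0):
    """RANSAC curve fit (sketched): repeatedly fit a polynomial to a
    random minimal sample, keep the model with the most inliers, then
    refit on all inliers. Outlier edge points are thereby rejected."""
    rng = random.Random(seed)
    pts = list(zip(xs, ys))
    best_inliers = []
    for _ in range(iters):
        sample = rng.sample(pts, degree + 1)
        coeffs = np.polyfit([p[0] for p in sample],
                            [p[1] for p in sample], degree)
        resid = np.abs(np.polyval(coeffs, xs) - ys)
        inliers = [i for i, r in enumerate(resid) if r < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    xi, yi = np.array(xs)[best_inliers], np.array(ys)[best_inliers]
    return np.polyfit(xi, yi, degree), best_inliers

# toy edge points on a curved lane line y = 0.1*x^2, plus two outliers
xs = np.array([0., 1., 2., 3., 4., 5., 2.5, 4.5])
ys = np.array([0., .1, .4, .9, 1.6, 2.5, 9.0, -7.0])
coeffs, inliers = ransac_polyfit(xs, ys)
# the six on-curve points survive as inliers; the outliers are rejected
```

The refit polynomial then serves as the lane center curve that the AGV controller tracks.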

