Achieving fast lane detection of autonomous driving by CNN based differentiation

2021 ◽  
Author(s):  
Xingzhi Zhou ◽  
Jinyu Zhan ◽  
Wei Jiang


2021 ◽  
Vol 18 (2) ◽  
pp. 172988142110087
Author(s):  
Qiao Huang ◽  
Jinlong Liu

The vision-based road lane detection technique plays a key role in driver assistance systems. While existing lane recognition algorithms demonstrate detection rates above 90%, validation tests are usually conducted on limited scenarios, and significant gaps remain when the algorithms are applied to real-life autonomous driving. The goal of this article was to identify these gaps and to suggest research directions that can bridge them. A straight-lane detection algorithm based on the linear Hough transform (HT) was used in this study as an example to evaluate possible perception issues under challenging scenarios, including various road types, different weather conditions and shading, changing lighting conditions, and so on. The study found that the HT-based algorithm achieved an acceptable detection rate against simple backgrounds, such as highway driving or scenes with distinguishable contrast between lane boundaries and their surroundings. However, it failed to recognize road dividing lines under varied lighting conditions; the failure was attributed to the binarization step failing to extract lane features before detection. In addition, the existing HT-based algorithm is easily misled by lane-like interference, such as guardrails, railways, bikeways, utility poles, pedestrian sidewalks, and buildings. Overall, these findings support the need for further improvement of current road lane detection algorithms so that they are robust against interference and illumination variations. Moreover, the widely used algorithm could raise the lane boundary detection rate if an appropriate search-range restriction and an illumination classification step were added.


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4389 ◽  
Author(s):  
Eun Jang ◽  
Jae Suhr ◽  
Ho Jung

Landmark-based vehicle localization is a key component of both autonomous driving and advanced driver assistance systems (ADAS). Landmarks previously used on highways, such as lane markings, lack longitudinal position information. To address this problem, lane endpoints can be used as landmarks. This paper proposes the two components essential to using lane endpoints as landmarks: lane endpoint detection and an evaluation of its accuracy. First, it proposes a method to efficiently detect lane endpoints using a monocular forward-looking camera, the most widely installed perception sensor. Lane endpoints are detected with a small amount of computation in three steps: lane detection, lane endpoint candidate generation, and lane endpoint candidate verification. Second, it proposes a method to reliably measure the position accuracy of lane endpoints detected from images taken while the camera moves at high speed. A camera is installed together with a mobile mapping system (MMS) in a vehicle, and the position accuracy of the lane endpoints detected by the camera is measured by comparing their positions against ground truth obtained from the MMS. In the experiments, the proposed methods were evaluated and compared with previous methods on a dataset acquired while driving 80 km of highway in both daytime and nighttime.
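The paper's candidate-generation step looks for the points where dashed markings start and stop along a detected lane. A toy stand-in for that step, assuming the lane has already been reduced to a binary profile along the scan direction (the verification stage, which the paper also proposes, is omitted here):

```python
import numpy as np

def endpoint_candidates(lane_profile):
    """Find indices where a dashed marking starts (0→1) or ends (1→0)
    along a 1-D binary lane profile."""
    d = np.diff(lane_profile.astype(np.int8))
    starts = np.nonzero(d == 1)[0] + 1   # first marked sample of a dash
    ends = np.nonzero(d == -1)[0]        # last marked sample of a dash
    return starts, ends

# One dash occupying samples 3..7 of a 12-sample scan line.
profile = np.zeros(12, dtype=np.uint8)
profile[3:8] = 1
starts, ends = endpoint_candidates(profile)
print(starts, ends)  # → [3] [7]
```

The cheapness of this transition scan is what lets endpoints be extracted "with a small amount of computation" once the lane itself has been detected; the expensive part is verifying candidates against noise, which the paper handles in its third stage.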


Author(s):  
YIMING NIE ◽  
BIN DAI ◽  
XIANGJING AN ◽  
ZHENPING SUN ◽  
TAO WU ◽  
...  

Lane information is essential to highway intelligent vehicle applications, and lane markings are its most direct description. Many vision methods have been proposed for lane marking detection, but in practice previous lane tracking systems still face problems such as shadows on the road, lighting changes, characters painted on the road, and discontinuous changes in road type. A direction kernel function is proposed for robust detection of the lanes. The method focuses on selecting points on the marking edges by classification; during classification, the vanishing point is selected, and the parts of the lane markings are assembled into lanes. The algorithm presented in this paper is shown to be both robust and fast by a large number of experiments in varied settings; moreover, it can extract the lanes even when parts of the lane markings are missing.
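The abstract does not spell out how the vanishing point is selected, but a common formulation is a least-squares intersection of the candidate lane border lines, which is how missing marking segments can still constrain the lanes. A minimal sketch under that assumption, with lines written as a*x + b*y + c = 0:

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given as (a, b, c)
    with a*x + b*y + c = 0."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    rhs = np.array([-c for _, _, c in lines], dtype=float)
    vp, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return vp

# Two lane borders converging at (4, 2).
left = (1.0, -1.0, -2.0)    # x - y - 2 = 0, passes through (4, 2)
right = (1.0, 1.0, -6.0)    # x + y - 6 = 0, passes through (4, 2)
print(vanishing_point([left, right]))  # → [4. 2.]
```

With more than two candidate borders the least-squares solution averages out noisy edges, so a dash gap in one marking does not move the estimate much.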


Author(s):  
Qianao Ju ◽  
Dana Forsthoefel ◽  
Shoaib Azmat ◽  
Linda Wills ◽  
Scott Wills ◽  
...  

2021 ◽  
pp. 379-389
Author(s):  
Boyi Li ◽  
Yi Zhao ◽  
Lu Lou

2019 ◽  
Vol 21 (2) ◽  
pp. 80-95 ◽  
Author(s):  
Hongzhe Liu ◽  
Xuewei Li

Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1102
Author(s):  
Malik Haris ◽  
Adam Glowacz

To meet the real-time requirements of autonomous driving systems, existing methods directly up-sample the encoder's output feature map to a pixel-wise prediction, neglecting the decoder's importance for predicting detail features. To solve this problem, this paper proposes a general lane detection framework based on object feature distillation. First, a decoder with strong feature prediction ability is added to a network that uses the direct up-sampling method. Then, during training, the decoder's predictions are treated as soft targets via knowledge distillation, so that the direct up-sampling branch learns more detailed lane information and acquires feature prediction ability close to that of the decoder. Finally, at inference time, only the direct up-sampling branch is used and the decoder's forward computation is skipped, so lane detection performance improves over the existing model at no additional cost. To verify the effectiveness of this framework, it is applied to several mainstream lane segmentation methods such as SCNN, DeepLabv1, and ResNet. Experimental results show that, with no additional complexity, the proposed method obtains a higher F1-measure on the CULane dataset.
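The soft-target training described above is typically implemented as a KL divergence between temperature-softened distributions of the two branches; the exact loss used in the paper is not given, so this NumPy sketch shows only the standard distillation term (temperature value is illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2 —
    the usual soft-target term in knowledge distillation."""
    p = softmax(teacher_logits, T)   # soft targets from the decoder branch
    q = softmax(student_logits, T)   # direct up-sampling branch
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

teacher = np.array([2.0, 0.5, -1.0])
student = np.array([2.0, 0.5, -1.0])
print(distill_loss(student, teacher))  # → 0.0  (identical logits, zero loss)
```

Because this term appears only in the training loss, dropping the decoder at inference time is free, which is exactly the "no additional cost" claim in the abstract.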


Author(s):  
Zezheng Lv ◽  
Xiaoci Huang ◽  
Yaozhong Liang ◽  
Wenguan Cao ◽  
Yuxiang Chong

As an important part of autonomous driving, lane detection algorithms require extremely low computational cost. Owing to heavy backbone networks, algorithms based on pixel-wise segmentation struggle with runtime consumption when recognizing lanes. In this paper, a novel and practical methodology based on a lightweight segmentation network is proposed, aiming at accurate and efficient lane detection. Unlike traditional convolutional layers, the proposed Shadow module reduces the computational cost of the backbone network by performing linear transformations on intrinsic feature maps; on this basis, a lightweight backbone network, Shadow-VGG-16, is built. A tailored pyramid parsing module is then introduced to collect features from different sub-domains; it is composed of a strip pooling module based on the Pyramid Scene Parsing Network (PSPNet) and a convolutional attention module. Finally, a lane structural loss is proposed to explicitly model the lane structure and reduce the influence of noise, so that pixels fit the lanes better. Extensive experimental results demonstrate that our method significantly outperforms state-of-the-art (SOTA) algorithms such as PointLaneNet and Line-CNN: 95.28% and 90.06% accuracy and an inference speed of 62.5 frames per second (fps) are achieved on the CULane and TuSimple test datasets. Compared with the recent ERFNet, Line-CNN, and SAD, F1 scores increase by 3.51%, 2.84%, and 3.82%, respectively. Meanwhile, the result on our dataset exceeds the best of the others by 8.6% with an F1 score of 87.09, which demonstrates the superiority of our method.
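The abstract describes the Shadow module only as "linear transformations on intrinsic feature maps", which resembles the Ghost-module pattern: compute a few feature maps with a full convolution, derive the rest with cheap per-map linear operations, and concatenate. A NumPy sketch under that assumption, with 1x1 convolutions standing in for the real kernels (all shapes and weights here are hypothetical):

```python
import numpy as np

def shadow_block(x, w_primary, w_cheap):
    """Ghost-style block: a few 'intrinsic' maps from a full 1x1 convolution,
    the rest from cheap per-map linear transforms (here, scalar scaling)."""
    # x: (C_in, H, W); w_primary: (C_mid, C_in) 1x1 conv weights
    intrinsic = np.einsum('oc,chw->ohw', w_primary, x)
    # Cheap linear transform: one multiplier per intrinsic map (depthwise 1x1)
    shadow = w_cheap[:, None, None] * intrinsic
    return np.concatenate([intrinsic, shadow], axis=0)

x = np.ones((3, 4, 4))          # 3 input channels, 4x4 spatial
w_primary = np.ones((2, 3))     # 2 intrinsic maps via 1x1 conv
w_cheap = np.array([0.5, 2.0])  # one multiplier per intrinsic map
out = shadow_block(x, w_primary, w_cheap)
print(out.shape)  # → (4, 4, 4)
```

Half of the output channels cost only one multiply per pixel instead of a full convolution, which is the source of the backbone savings the abstract claims.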

