Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving

Author(s):  
Younguk Yim ◽  
Se-Young Oh


2021 ◽ 
Vol 18 (2) ◽  
pp. 172988142110087
Author(s):  
Qiao Huang ◽  
Jinlong Liu

Vision-based road lane detection plays a key role in driver assistance systems. Although existing lane recognition algorithms have demonstrated detection rates above 90%, validation tests were usually conducted on a limited set of scenarios, and significant gaps remain when these algorithms are applied to real-life autonomous driving. The goal of this article was to identify those gaps and to suggest research directions that can bridge them. A straight lane detection algorithm based on the linear Hough transform (HT) was used as an example to evaluate possible perception issues under challenging scenarios, including various road types, different weather conditions and shadows, changing lighting conditions, and so on. The study found that the HT-based algorithm achieved an acceptable detection rate against simple backgrounds, such as highway driving or scenes with distinguishable contrast between lane boundaries and their surroundings. However, it failed to recognize road dividing lines under varied lighting conditions; this failure was attributed to the binarization process failing to extract lane features prior to detection. In addition, the existing HT-based algorithm was susceptible to interference from lane-like structures such as guardrails, railways, bikeways, utility poles, pedestrian sidewalks, and buildings. Overall, these findings support the need to further improve current road lane detection algorithms so that they are robust against interference and illumination variations. Moreover, this widely used algorithm could raise the lane boundary detection rate if an appropriate search range restriction and an illumination classification step were added.
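
As a point of reference for the HT-based pipeline evaluated above, the following is a minimal Python/OpenCV sketch of straight lane detection: global binarization, Canny edge extraction, a trapezoidal search-range restriction, and a probabilistic Hough transform. All thresholds and the search region are illustrative assumptions, not the parameters used in the article.

```python
# Minimal straight-lane detection sketch: grayscale -> binarize -> Canny -> Hough lines.
# All thresholds and the region-of-interest polygon are illustrative assumptions.
import cv2
import numpy as np

def detect_straight_lanes(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Global binarization; this is the step the article identifies as fragile
    # under varied lighting, so a fixed threshold is only a baseline.
    _, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(binary, 50, 150)

    # Restrict the search to a lower trapezoid to suppress lane-like clutter
    # (guardrails, poles, buildings) outside the roadway.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.6 * h)),
                         (int(0.4 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns line segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]
```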


2013 ◽  
Vol 433-435 ◽  
pp. 267-272
Author(s):  
Xing Ma ◽  
Chun Yang Mu ◽  
Chun Tao Zhang ◽  
Lu Ming Zhang

This paper proposed a lane detection algorithm for urban environments. The algorithm focuses on selecting an appropriate limited region of interest (ROI) by Otsu segmentation; candidate lane markers are then extracted with the Canny operator, and finally the lane boundaries are detected by the Hough transform. The limited ROI confines lane identification to an appropriate region, which increases processing speed. The proposed algorithm was simulated in MATLAB on test databases shared by Fondazione Bruno Kessler (FBK). The experiments show that lane boundaries can be detected correctly even when they are faded. Feature-based methods are usually affected by image intensity, and several road characteristics still need to be considered for more precise detection.
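
The pipeline described above (Otsu-based ROI limitation, Canny candidate extraction, Hough-based boundary detection) was implemented in MATLAB; the sketch below is a rough Python/OpenCV analogue with assumed parameters, shown only to make the processing order concrete.

```python
# Sketch of the ROI -> Canny -> Hough pipeline described above.
# Otsu's threshold separates the road surface, and the ROI is limited to
# the lower half of the image; all parameters are assumptions.
import cv2
import numpy as np

def detect_lane_boundaries(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)

    # Otsu segmentation to obtain a coarse road/non-road split.
    _, otsu_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Limit the ROI to the lower part of the frame where the road lies.
    h, w = gray.shape
    roi_mask = np.zeros_like(gray)
    roi_mask[h // 2:, :] = 255
    limited = cv2.bitwise_and(otsu_mask, roi_mask)

    # Candidate lane-marker edges from the limited region.
    edges = cv2.Canny(cv2.bitwise_and(gray, limited), 50, 150)

    # Standard Hough transform yields (rho, theta) pairs for lane boundaries.
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)
    return [] if lines is None else [l[0] for l in lines]
```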


Author(s):  
Balasriram Kodi ◽  
Manimozhi M

In the field of autonomous vehicles, lane detection and control play an important role: in autonomous driving, the vehicle has to follow its path while avoiding collisions. A deep learning technique is used to detect the curved path for autonomous vehicles. In this paper, a customized lane detection algorithm was implemented to detect the curvature of the lane, and a ground-truth labeling toolbox for deep learning is used to annotate the curved path. By point-to-point mapping in each frame, 80-90% computing efficiency and accuracy are achieved in path detection.
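
The abstract does not specify how curvature is computed from the labeled lane points; a common post-processing step is to fit a second-order polynomial to the lane pixels of each frame and evaluate its radius of curvature. The sketch below illustrates that assumed step only; the helper name and formula layout are not from the paper.

```python
# Hypothetical curvature estimate: fit x = a*y^2 + b*y + c to lane points
# detected in one frame and evaluate the radius of curvature near the bottom
# of the image. This is an assumed post-processing step, not the paper's code.
import numpy as np

def lane_curvature(lane_xs, lane_ys, y_eval=None):
    """lane_xs, lane_ys: pixel coordinates of detected lane points."""
    a, b, _ = np.polyfit(lane_ys, lane_xs, 2)        # x as a function of y
    if y_eval is None:
        y_eval = float(np.max(lane_ys))              # row closest to the vehicle
    # Radius of curvature for x(y): ((1 + (2*a*y + b)^2)^(3/2)) / |2*a|
    return (1.0 + (2.0 * a * y_eval + b) ** 2) ** 1.5 / max(abs(2.0 * a), 1e-9)
```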


2020 ◽  
Vol 14 (01) ◽  
pp. 153-168
Author(s):  
Dongfang Liu ◽  
Yaqin Wang ◽  
Tian Chen ◽  
Eric T. Matson

Lane detection is a crucial factor for self-driving cars to achieve a fully autonomous mode. Due to its importance, lane detection has drawn wide attention in recent years for autonomous driving. One challenge for accurate lane detection is dealing with noise appearing in the input image, such as object shadows, brake marks, and broken lane lines. To address this challenge, we propose an effective road detection algorithm. We leverage the strength of color filters to find a rough localization of the lane marks and employ a K-means clustering filter to screen out the embedded noise. Extensive experiments verify the effectiveness of our method. The results indicate that our approach is robust to noise appearing in the input image, which improves lane detection accuracy.
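
A hedged sketch of the two-stage idea described above: an HSV color filter gives a rough localization of white and yellow lane marks, and K-means clustering of the candidate pixel coordinates screens out small, scattered noise clusters. The color ranges, cluster count, and noise criterion are assumptions, not the authors' settings.

```python
# Sketch: color filtering for rough lane-mark localization, then K-means
# clustering of candidate pixels to screen out scattered noise (shadows,
# brake marks). Color ranges and cluster count are illustrative assumptions.
import cv2
import numpy as np

def lane_candidates(bgr_frame, k=2, min_cluster_frac=0.2):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Rough masks for white and yellow paint (assumed HSV ranges).
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
    mask = cv2.bitwise_or(white, yellow)

    ys, xs = np.nonzero(mask)
    if len(xs) < k:
        return np.empty((0, 2), dtype=np.float32)

    # K-means on (x, y) positions; clusters holding too small a share of the
    # candidate pixels are treated as noise and dropped.
    pts = np.column_stack((xs, ys)).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pts, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    keep = [c for c in range(k) if np.mean(labels == c) >= min_cluster_frac]
    return pts[np.isin(labels, keep)]
```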


2009 ◽  
Vol 29 (2) ◽  
pp. 440-443 ◽  
Author(s):  
Tao LEI ◽  
Yang-yu FAN ◽  
Xiao-peng WANG ◽  
Lü-cheng WANG

2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” (YOLO) networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy achieved by integrating PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved: detection accuracy is significantly enhanced even when the target objects are partially occluded when viewed from the front, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
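
Two ingredients of the described fusion scheme can be sketched compactly: rasterizing the LiDAR point cloud into a BEV height map, and merging the bounding boxes produced by the two parallel detectors with non-maximum suppression. Grid extents, resolution, and the IoU threshold below are assumptions, and the YOLO inference itself is omitted.

```python
# Sketch of two fusion ingredients: (1) rasterizing a LiDAR point cloud into a
# bird's-eye-view height map, and (2) merging detections from two parallel
# detectors with non-maximum suppression. All grid and threshold values are
# assumptions; YOLO inference is not shown.
import numpy as np

def bev_height_map(points, x_range=(0, 70), y_range=(-40, 40), res=0.1):
    """points: (N, 3) array of LiDAR (x, y, z) coordinates in meters."""
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / res).astype(int)
    ys = ((points[:, 1] - y_range[0]) / res).astype(int)
    valid = (xs >= 0) & (xs < h) & (ys >= 0) & (ys < w)
    # Keep the maximum height observed in each BEV cell.
    np.maximum.at(grid, (xs[valid], ys[valid]), points[valid, 2])
    return grid

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) [x1, y1, x2, y2]; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter + 1e-9)
        order = order[1:][iou < iou_thresh]
    return keep
```

To fuse the two streams, the boxes and scores from the RGB and height-map detectors would be concatenated and passed through a single `nms` call.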


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1665
Author(s):  
Jakub Suder ◽  
Kacper Podbucki ◽  
Tomasz Marciniak ◽  
Adam Dąbrowski

The aim of the paper was to analyze effective solutions for accurate lane detection on roads. We focused on effective detection of airport runways and taxiways in order to drive a light-measurement trailer correctly. Three techniques for video-based line extraction were used, targeted at specific environmental conditions: (i) line detection using edge detection with a Scharr mask and the Hough transform, (ii) finding the optimal path using a hyperbola-fitting line detection algorithm based on edge detection, and (iii) detection of horizontal markings using image segmentation in the HSV color space. The developed solutions were tuned and tested with the use of embedded devices such as the Raspberry Pi 4B and NVIDIA Jetson Nano.
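
As an illustration of technique (iii), the sketch below segments yellow horizontal markings in HSV color space and keeps sufficiently large blobs as marking candidates; the color band and area threshold are assumptions rather than the values tuned in the paper.

```python
# Sketch of HSV-based segmentation of horizontal runway/taxiway markings.
# The yellow range and minimum blob area are illustrative assumptions.
import cv2
import numpy as np

def segment_markings_hsv(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # assumed yellow band
    # Morphological opening removes small speckles before contour extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep sufficiently large blobs as marking candidates.
    return [c for c in contours if cv2.contourArea(c) > 200], mask
```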

