Real-Time Detection of Road Lane-Lines for Autonomous Driving

2020 ◽  
Vol 13 (2) ◽  
pp. 265-274 ◽  
Author(s):  
Wael Farag

Background: Fast and reliable lane-line detection and tracking are essential for advanced driving assistance systems and self-driving cars. Methods: The proposed technique is mainly a pipeline of computer vision algorithms that augment each other and take in raw RGB images to produce the lane-line segments that represent the road boundary for the car. The main emphasis of the proposed technique is on simplicity and fast computation so that it can be embedded in the affordable CPUs employed by ADAS systems. Results: Each algorithm used is described in detail, implemented, and its performance evaluated using actual road images and videos captured by the front-mounted camera of the car. The performance of the whole pipeline is also tested and evaluated on real videos. Conclusion: The evaluation of the proposed technique shows that it reliably detects and tracks road boundaries under various conditions.
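The abstract does not name the individual algorithms, but a classical lane-line pipeline of this kind is typically an edge-extraction stage followed by a line-fitting stage. A minimal numpy-only sketch under that assumption (gradient thresholding standing in for a Canny-style detector, a least-squares fit standing in for the Hough-transform stage; function names and the threshold are illustrative, not the paper's):

```python
import numpy as np

def edge_points(gray, thresh=0.5):
    # Simple gradient-magnitude edge detector (stand-in for Canny).
    gx = np.abs(np.diff(gray, axis=1))[:-1, :]
    gy = np.abs(np.diff(gray, axis=0))[:, :-1]
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh)
    return xs, ys

def fit_lane_line(xs, ys):
    # Least-squares line fit (stand-in for the Hough stage):
    # parameterised as x = m*y + b so near-vertical lanes stay stable.
    m, b = np.polyfit(ys, xs, 1)
    return m, b
```

A real pipeline would add colour filtering, a region-of-interest mask, and temporal smoothing between frames, but the two stages above capture the basic structure.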

2020 ◽  
Vol 39 (3) ◽  
pp. 2693-2710 ◽  
Author(s):  
Wael Farag

In this paper, an advanced and reliable vehicle detection-and-tracking technique is proposed and implemented. The Real-Time Vehicle Detection-and-Tracking (RT_VDT) technique is well suited for Advanced Driving Assistance Systems (ADAS) applications or Self-Driving Cars (SDC). The RT_VDT is mainly a pipeline of reliable computer vision and machine learning algorithms that augment each other and take in raw RGB images to produce the required bounding boxes of the vehicles that appear in the front driving space of the car. The main contribution of this paper is the careful fusion of the employed algorithms, some of which work in parallel to strengthen each other, in order to produce a precise and sophisticated real-time output. In addition, the RT_VDT is computationally fast enough to be embedded in the CPUs currently employed by ADAS systems. The particulars of the employed algorithms together with their implementation are described in detail. Additionally, these algorithms and their various integration combinations are tested and their performance evaluated using actual road images and videos captured by the front-mounted camera of the car, as well as on the KITTI benchmark, with 87% average precision achieved. The evaluation of the RT_VDT shows that it reliably detects and tracks vehicle boundaries under various conditions.
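When several detectors run in parallel, as the abstract describes, their overlapping bounding boxes must be fused into one output per vehicle. One common way to do this (an assumption here, not a detail given in the abstract) is greedy non-maximum suppression over intersection-over-union:

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def fuse_detections(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: keep the highest-scoring box,
    # drop overlapping duplicates reported by the parallel detectors.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]
```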



Author(s):  
Jun Liu ◽  
Rui Zhang ◽  
Shihao Hou

Perceiving the distance between vehicles is a crucial issue for advanced driving assistance systems. However, most vision-based distance estimation methods either do not consider the influence of changes in camera attitude angles during driving or use only the vanishing point detected from lane lines to correct the pitch angle. This paper proposes an improved pinhole distance estimation model based on the road vanishing point that requires no lane-line information. First, the road vanishing point is detected from the dominant texture orientation, and the yaw and pitch angles of the camera are estimated. Then, a distance estimation model incorporating attitude-angle compensation is established. Finally, experimental results show that the proposed method effectively corrects the influence of the camera attitude angle on the distance estimation results.
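The core of any pinhole ground-plane distance model is that a pixel row maps to a ray angle below the horizon, and pitch shifts that horizon. A minimal sketch of pitch compensation (variable names and the flat-road assumption are illustrative; the paper's full model also compensates yaw):

```python
import math

def ground_distance(y_pixel, f, cam_height, cy, pitch_rad):
    # Pinhole ground-plane distance for a pixel below the horizon:
    # theta is the ray angle below the optical axis; adding the pitch
    # angle gives the total angle below horizontal, so the flat-road
    # distance is camera height over the tangent of that angle.
    theta = math.atan2(y_pixel - cy, f)
    return cam_height / math.tan(theta + pitch_rad)
```

With zero pitch this reduces to the textbook `Z = h * f / (y - cy)` relation; a downward pitch maps the same pixel to a nearer ground point, which is exactly the error the paper's attitude compensation removes.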


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Taeryun Kim ◽  
Bongsob Song

Detection and tracking algorithms for road barriers, including tunnels and guardrails, are proposed to enhance the performance and reliability of driver assistance systems. Although the road barrier is one of the key features for determining a safe drivable area, it may be recognized incorrectly due to the performance degradation of commercial sensors such as radar and monocular cameras. Two frequent cases among many challenging problems with these commercial sensors are considered. The first is that the radar detects few tracks of the road barrier, owing to the barrier's material. The second is the inaccuracy of the radar's relative lateral position, which results in a large variance in the distance between the vehicle and the road barrier. To overcome these problems, algorithms that detect and estimate the tracks corresponding to the road barrier are proposed. Then, a tracking algorithm based on a probabilistic data association filter (PDAF) is used to reduce the variation in the lateral distance between the vehicle and the road barrier. Finally, the proposed algorithms are validated on field test data, and their performance is compared with that of the road barrier measured by lidar.


Author(s):  
Luis A. Curiel-Ramirez ◽  
Ricardo A. Ramirez-Mendoza ◽  
Gerardo Carrera ◽  
Javier Izquierdo-Reyes ◽  
M. Rogelio Bustamante-Bello

2021 ◽  
Vol 11 (16) ◽  
pp. 7296
Author(s):  
Toshinori Kojima ◽  
Pongsathorn Raksincharoensak

Various driving assistance systems have been developed to reduce the number of automobile accidents. However, the control laws of these assistance systems differ from situation to situation, and a discontinuous control command may be issued instantaneously. Therefore, a seamless and unified control law for driving assistance systems that can be used in multiple situations is necessary to realize more versatile autonomous driving. Although studies have been conducted on four-wheel steering, which steers the rear wheels, they considered the role of the rear wheels only in improving vehicle dynamics, not in contributing to autonomous driving. Therefore, in this study, we define the risk potential field as a unified control law and propose a rear-wheel steering control system that actively steers the rear wheels, depending on the level of perceived risk in the driving situation, to contribute to autonomous driving. The effectiveness of the proposed method is verified in a double lane change test, performed assuming emergency avoidance, in simulations and in subject experiments using a driving simulator. The results indicate that actively steering the rear wheels ensures a safer and smoother drive while simultaneously improving emergency avoidance performance.
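The abstract does not give the form of the risk potential field, but such fields are commonly built from Gaussian bumps around obstacles, with the steering command following the negative risk gradient. A toy one-dimensional sketch under that assumption (the shape, gain, and mapping to a rear-wheel angle are all illustrative, not the paper's control law):

```python
import math

def risk(y_ego, y_obs, sigma=1.0):
    # Gaussian risk potential of an obstacle at lateral offset y_obs.
    return math.exp(-(y_ego - y_obs) ** 2 / (2 * sigma ** 2))

def rear_steer_command(y_ego, y_obs, gain=0.5, eps=1e-4):
    # Steer down the risk gradient (central-difference derivative);
    # the command smoothly vanishes far from the obstacle, which is
    # what makes a potential-field law seamless across situations.
    grad = (risk(y_ego + eps, y_obs) - risk(y_ego - eps, y_obs)) / (2 * eps)
    return -gain * grad
```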


2016 ◽  
pp. 201-244 ◽  
Author(s):  
Alexandre Armand ◽  
Javier Ibanez-Guzman ◽  
Clément Zinoune

2020 ◽  
Vol 10 (6) ◽  
pp. 2046 ◽  
Author(s):  
Zhicheng Gu ◽  
Zhihao Li ◽  
Xuan Di ◽  
Rongye Shi

The Waymo Open Dataset has been released recently, providing a platform to crowdsource some fundamental challenges for automated vehicles (AVs), such as 3D detection and tracking. While the dataset provides a large amount of high-quality and multi-source driving information, people in academia are more interested in the underlying driving policy programmed in Waymo self-driving cars, which is inaccessible due to AV manufacturers’ proprietary protection. Accordingly, academic researchers have to make various assumptions to implement AV components in their models or simulations, which may not represent the realistic interactions in real-world traffic. Thus, this paper introduces an approach to learn a long short-term memory (LSTM)-based model for imitating the behavior of Waymo’s self-driving model. The proposed model has been evaluated based on Mean Absolute Error (MAE). The experimental results show that our model outperforms several baseline models in driving action prediction. In addition, a visualization tool is presented for verifying the performance of the model.
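The paper's LSTM architecture is not specified here, but the structure it implies (roll a recurrent cell over a sequence of observed kinematic states, then map the final hidden state to a driving action) can be sketched with a bare numpy LSTM cell. Weights below are random and untrained; shapes, the 8-unit hidden size, and the (steer, accel) output head are illustrative assumptions:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell step; gates stacked as [input, forget, output, candidate].
    n = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = (1 / (1 + np.exp(-z[k * n:(k + 1) * n])) for k in range(3))
    g = np.tanh(z[3 * n:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def predict_action(states, n_hidden=8, seed=0):
    # Roll the cell over a sequence of kinematic state vectors and map
    # the last hidden state to a (steer, accel) pair via a linear head.
    rng = np.random.default_rng(seed)          # untrained, random weights
    n_in = states.shape[1]
    W = rng.normal(0, 0.1, (4 * n_hidden, n_in))
    U = rng.normal(0, 0.1, (4 * n_hidden, n_hidden))
    b = np.zeros(4 * n_hidden)
    head = rng.normal(0, 0.1, (2, n_hidden))
    h = c = np.zeros(n_hidden)
    for x in states:
        h, c = lstm_step(x, h, c, W, U, b)
    return head @ h
```

Training such a model on the dataset's logged trajectories, with MAE between predicted and logged actions as the loss, matches the imitation setup the abstract describes.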


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1121
Author(s):  
Xiaowei Lu ◽  
Yunfeng Ai ◽  
Bin Tian

Road boundary detection is an important part of perception for autonomous driving. Detecting the boundaries of unstructured roads is difficult because there are no curbs; on mine roads, there are no clear markings to distinguish the area inside the road boundary from the area outside it. This paper proposes a real-time road boundary detection and tracking method using a 3D-LIDAR sensor. The road boundary points are extracted from the elevated point clouds detected above the ground point cloud, according to spatial distance characteristics and angular features. Road tracking predicts and updates the boundary-point information in real time to prevent false and missed detections. Experimental verification on mine road data shows the accuracy and robustness of the proposed algorithm.
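The extraction step described here (keep elevated points above the ground plane, then exploit angular ordering) can be sketched in a few lines. The height band and the azimuth ordering below are illustrative assumptions standing in for the paper's spatial-distance and angular-feature criteria:

```python
import math

def boundary_candidates(points, ground_z=0.0, min_h=0.3, max_h=1.5):
    # Keep elevated (x, y, z) points (e.g. berms or walls) whose height
    # above the estimated ground plane lies in the expected boundary band.
    return [p for p in points if min_h <= p[2] - ground_z <= max_h]

def sort_by_azimuth(points):
    # Order candidates by horizontal angle from the sensor so consecutive
    # points along the boundary can be linked and tracked frame to frame.
    return sorted(points, key=lambda p: math.atan2(p[1], p[0]))
```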


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3224 ◽  
Author(s):  
Pablo R. Palafox ◽  
Johannes Betz ◽  
Felix Nobis ◽  
Konstantin Riedl ◽  
Markus Lienkamp

Typically, lane departure warning systems rely on lane lines being present on the road. However, in many scenarios, e.g., secondary roads or some streets in cities, lane lines are either not present or not sufficiently well signaled. In this work, we present a vision-based method to locate a vehicle within the road when no lane lines are present using only RGB images as input. To this end, we propose to fuse together the outputs of a semantic segmentation and a monocular depth estimation architecture to reconstruct locally a semantic 3D point cloud of the viewed scene. We only retain points belonging to the road and, additionally, to any kind of fences or walls that might be present right at the sides of the road. We then compute the width of the road at a certain point on the planned trajectory and, additionally, what we denote as the fence-to-fence distance. Our system is suited to any kind of motoring scenario and is especially useful when lane lines are not present on the road or do not signal the path correctly. The additional fence-to-fence distance computation is complementary to the road’s width estimation. We quantitatively test our method on a set of images featuring streets of the city of Munich that contain a road-fence structure, so as to compare our two proposed variants, namely the road’s width and the fence-to-fence distance computation. In addition, we also validate our system qualitatively on the Stuttgart sequence of the publicly available Cityscapes dataset, where no fences or walls are present at the sides of the road, thus demonstrating that our system can be deployed in a standard city-like environment. For the benefit of the community, we make our software open source.
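Once segmentation labels and per-pixel depth are fused into a labelled 3D cloud, the width computation reduces to the lateral spread of road-labelled points in a thin slab at the queried forward distance. A minimal sketch under that assumption (the slab width, coordinate convention, and label strings are illustrative, not the paper's):

```python
def road_width_at(points, labels, x_query, slab=0.5, target="road"):
    # Width at forward distance x_query: lateral (y) spread of points
    # carrying the target semantic label inside a thin slab around x_query.
    ys = [p[1] for p, l in zip(points, labels)
          if l == target and abs(p[0] - x_query) <= slab]
    return max(ys) - min(ys) if ys else None
```

The same function with `target="fence"` yields the fence-to-fence distance the authors describe as a complementary measure.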

