Real-Time Road Lane Detection in Urban Areas Using LiDAR Data

Electronics, 2018, Vol 7 (11), pp. 276
Author(s): Jiyoung Jung, Sung-Ho Bae

The generation of digital maps with lane-level resolution is rapidly becoming a necessity, as semi- or fully-autonomous driving vehicles are now commercially available. In this paper, we present a practical real-time working prototype for road lane detection using LiDAR data, which can be further extended to automatic lane-level map generation. Conventional lane detection methods are limited to simple road conditions and are not suitable for complex urban roads with various road signs on the ground. Given a 3D point cloud scanned by a 3D LiDAR sensor, we categorized the points of the drivable region and distinguished the points of the road signs on the ground. Then, we developed an expectation-maximization method to detect parallel lines and update the 3D line parameters in real time, as the probe vehicle equipped with the LiDAR sensor moved forward. The detected and recorded line parameters were integrated to build a lane-level digital map with the help of a GPS/INS sensor. The proposed system was tested to generate accurate lane-level maps of two complex urban routes. The experimental results showed that the proposed system was fast and practical in terms of effectively detecting road lines and generating lane-level maps.
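
The expectation-maximization step can be pictured as alternating between softly assigning ground-marking points to candidate lines and re-estimating a shared line direction with per-line offsets. The sketch below is an illustrative 2D reconstruction of that idea, not the authors' implementation; the number of lines K, the noise scale sigma, and the initialization are assumptions.

```python
# Minimal EM sketch for fitting K parallel 2D lines (shared normal, per-line
# offsets) to lane-marking points projected onto the ground plane.
import numpy as np

def fit_parallel_lines(points, K=2, sigma=0.15, iters=20, seed=0):
    """points: (N, 2) array of ground-plane marking points.
    Returns (normal, offsets) with lines defined by normal @ p = offset_k."""
    rng = np.random.default_rng(seed)
    n = np.array([0.0, 1.0])                               # initial shared line normal (assumed)
    offsets = rng.choice(points @ n, size=K, replace=False)  # initial per-line offsets
    for _ in range(iters):
        # E-step: soft-assign each point to the nearest line (Gaussian residual).
        resid = (points @ n)[:, None] - offsets[None, :]          # (N, K)
        resp = np.exp(-0.5 * (resid / sigma) ** 2)
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: per-line weighted centroids, then a shared normal that
        # minimises the weighted perpendicular spread (smallest eigenvector).
        w = resp.sum(axis=0) + 1e-12                              # (K,)
        mu = (resp.T @ points) / w[:, None]                       # (K, 2)
        centered = points[:, None, :] - mu[None, :, :]            # (N, K, 2)
        S = np.einsum('nk,nki,nkj->ij', resp, centered, centered)
        eigvals, eigvecs = np.linalg.eigh(S)
        n = eigvecs[:, 0]                                         # smallest-eigenvalue direction
        offsets = mu @ n
    return n, offsets
```

In an online setting this would be run incrementally on the marking points of each new sweep, warm-started from the previous estimate.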

Sensors, 2018, Vol 18 (12), pp. 4274
Author(s): Qingquan Li, Jian Zhou, Bijun Li, Yuan Guo, Jinsheng Xiao

Vision-based lane-detection methods provide low-cost, dense information about roads for autonomous vehicles. In this paper, we propose a robust and efficient method to expand the application of these methods to cover low-speed environments. First, the reliable region near the vehicle is initialized and a series of rectangular detection regions are dynamically constructed along the road. Then, an improved symmetrical local threshold edge extraction is introduced to extract the edge points of the lane markings based on accurate marking width limitations. To meet real-time requirements, a novel Bresenham line voting space is proposed to improve the process of line segment detection. Combining straight lines, polylines, and curves, the proposed geometric fitting method can adapt to various road shapes. Finally, different state vectors and Kalman filter transfer matrices are used to track the key points of the linear and nonlinear parts of the lane. The proposed method was tested on a public database and on our autonomous platform. The experimental results show that the method is robust and efficient and can meet the real-time requirements of autonomous vehicles.
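
The tracking stage lends itself to a standard per-key-point Kalman filter. The following is a minimal sketch under an assumed state layout (lateral offset and velocity), time step, and noise levels; it is not the paper's exact filter configuration.

```python
# Illustrative constant-velocity Kalman filter for tracking one lane key point
# between frames. Noise covariances and dt are assumed example values.
import numpy as np

class KeyPointTracker:
    def __init__(self, x0, dt=0.05):
        # State: [lateral offset, lateral velocity]; measurement: offset only.
        self.x = np.array([x0, 0.0])
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transfer (transition) matrix
        self.H = np.array([[1.0, 0.0]])              # measurement matrix
        self.Q = np.diag([1e-4, 1e-3])               # process noise (assumed)
        self.R = np.array([[1e-2]])                  # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        # Standard Kalman gain / correction step with a scalar measurement z.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```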


Sensors, 2021, Vol 21 (2), pp. 657
Author(s): Aoki Takanose, Yoshiki Atsumi, Kanamu Takikawa, Junichi Meguro

Autonomous driving support systems and self-driving cars require the determination of reliable vehicle positions with high accuracy. The real-time kinematic (RTK) algorithm with the global navigation satellite system (GNSS) is generally employed to obtain highly accurate position information. Because RTK can estimate the fix solution, which is a centimeter-level positioning solution, it is also used as an indicator of position reliability. However, in urban areas, the degradation of the GNSS signal environment poses a challenge. Multipath noise caused by surrounding tall buildings degrades the positioning accuracy. This leads to large errors in the fix solution, which is used as a measure of reliability. We propose a novel position reliability estimation method based on two observations: first, GNSS errors are more likely to occur in the height direction than in the horizontal plane; second, the height variation along the actual vehicle travel path is small compared to the amount of horizontal movement. Based on these considerations, we propose a method to detect a reliable fix solution by estimating the height variation during driving. To verify the effectiveness of the proposed method, an evaluation test was conducted in an urban area of Tokyo. In this test, a reliability judgment rate of 99% and a planar accuracy of less than 0.3 m RMS were achieved in the urban environment. The results indicate that the accuracy of the proposed method is higher than that of the conventional fix solution, demonstrating its effectiveness.
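
A minimal sketch of this reliability test: sum the height change and the horizontal movement over a short window of fix epochs and reject epochs whose apparent slope is implausibly steep. The window length and the 5% grade threshold are assumed values for illustration, not the paper's parameters.

```python
# Judge RTK fix reliability by comparing vertical variation against horizontal
# movement over a sliding window of fix epochs.
import numpy as np

def reliable_fix_mask(enu, window=10, max_grade=0.05):
    """enu: (N, 3) array of east/north/up positions from consecutive fix epochs.
    Returns a boolean mask marking epochs judged reliable."""
    e, n, u = enu[:, 0], enu[:, 1], enu[:, 2]
    horizontal = np.hypot(np.diff(e), np.diff(n))   # per-epoch planar movement
    vertical = np.abs(np.diff(u))                   # per-epoch height change
    mask = np.ones(len(enu), dtype=bool)
    for i in range(len(horizontal)):
        lo = max(0, i - window + 1)
        h = horizontal[lo:i + 1].sum()
        v = vertical[lo:i + 1].sum()
        # A real road rarely climbs more than a few percent of the distance
        # travelled; a larger apparent slope suggests multipath-corrupted fixes.
        if h > 0 and v / h > max_grade:
            mask[i + 1] = False
    return mask
```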


Electronics, 2020, Vol 9 (12), pp. 2084
Author(s): Junwon Lee, Kieun Lee, Aelee Yoo, Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine Internet of Things (IoT) and automobile technologies, thus contributing to the development of society. However, processing the big data generated by AVs is a challenge due to overloading issues. Additionally, near-real-time and real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data and provides services for vehicle driving is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data generated from an AV. The driving vehicle edge node transmits the LiDAR point cloud information to the fog server through a wireless network. The fog server generates an HD map by applying the Normal Distribution Transform-Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from the multiple edge nodes. Subsequently, the coordinate information of the HD map generated in the sensor frame is converted to the coordinate information of the global frame and transmitted to the cloud server. Then, the cloud server creates an HD map by integrating the collected point clouds using the coordinate information.
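
The sensor-frame to global-frame conversion is, at its core, a rigid transform of the map points by the vehicle pose. Below is a small sketch assuming a yaw-only rotation and a known global origin for the sensor frame; the actual system presumably uses a full 6-DoF pose from its localization stack.

```python
# Move points built in the sensor (local map) frame into a global frame using
# a known pose. Yaw-only rotation is an assumption for this sketch.
import numpy as np

def sensor_to_global(points_sensor, yaw_rad, origin_global):
    """points_sensor: (N, 3) map points in the sensor frame.
    yaw_rad: heading of the sensor frame relative to the global frame.
    origin_global: (3,) position of the sensor-frame origin in the global frame."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])                 # rotation about the vertical axis
    return points_sensor @ R.T + np.asarray(origin_global)

# Example: shift a locally built map tile into the global frame before upload.
tile = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(sensor_to_global(tile, np.pi / 2, [100.0, 200.0, 30.0]))
```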


2020, Vol 2020, pp. 1-14
Author(s): Jun Liu, Rui Zhang

Vehicle detection is a crucial task for autonomous driving and demands high accuracy and real-time speed. Because current deep learning object detection models are too large to deploy on a vehicle, this paper introduces a lightweight network to modify the feature extraction layer of YOLOv3 and improves the remaining convolution structure; the resulting Lightweight YOLO network reduces the number of network parameters to a quarter. The license plate is then detected to calculate the actual vehicle width, and the distance between vehicles is estimated from this width. To solve the problem of difficult detection and low ranging accuracy when a distant license plate appears small in the image, this paper proposes a detection and ranging fusion method based on two cameras with different focal lengths. The experimental results show that the average precision and recall of Lightweight YOLO trained on the self-built dataset are 4.43% and 3.54% lower than those of YOLOv3, respectively, but the per-frame processing time decreases by 49 ms. Road experiments in different scenes also show that the fusion ranging method with long and short focal length cameras dramatically improves the accuracy and stability of ranging. The mean ranging error is less than 4%, and stable ranging can reach 100 m. The proposed method realizes real-time vehicle detection and ranging on the on-board embedded platform Jetson Xavier, which satisfies the requirements of autonomous driving environment perception.
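
The width-based ranging step reduces to the pinhole relation distance ≈ focal length × real width / image width. The snippet below illustrates why a long focal length helps for distant plates; the plate width and focal lengths are assumed example values, not the paper's calibration.

```python
# Back-of-the-envelope ranging from license-plate width with a pinhole camera model.
def plate_distance_m(pixel_width, focal_length_px, plate_width_m=0.44):
    """Distance ≈ f * W / w: focal length (px) times real width over image width (px)."""
    return focal_length_px * plate_width_m / pixel_width

# A long-focal-length camera keeps distant plates large enough to measure,
# which motivates fusing two cameras with different focal lengths.
print(plate_distance_m(pixel_width=20, focal_length_px=2400))   # ~52.8 m
print(plate_distance_m(pixel_width=20, focal_length_px=800))    # ~17.6 m
```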


Author(s): X. Wei, X. Yao

LiDAR has become an important data source in urban modelling. Traditional methods of LiDAR data processing for building detection require high-spatial-resolution data and sophisticated methods. Aerial photos, on the other hand, provide continuous spectral information about buildings, but segmentation of aerial photos alone cannot distinguish road surfaces from building roofs. This paper develops a geographically weighted regression (GWR)-based method to identify buildings. The method integrates characteristics derived from sparse LiDAR data and from aerial photos. In the GWR model, LiDAR data provide the height information of spatial objects, which is the dependent variable, while the brightness values from multiple bands of the aerial photo serve as the independent variables. The proposed method can thus estimate the height at each pixel from the values of its surrounding pixels, taking into account the distances between the pixels and the similarities between their brightness values. Clusters of contiguous pixels with higher estimated height values distinguish themselves from surrounding roads and other surfaces. A case study is conducted to evaluate the performance of the proposed method. The accuracy of the proposed hybrid method is found to be better than that of image classification of aerial photos alone or height extraction from LiDAR data alone. We argue that this simple and effective method can be very useful for the automatic detection of buildings in urban areas.
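
As a rough sketch of the GWR idea, the height at a target pixel can be estimated by a locally weighted least-squares fit in which nearby LiDAR samples are weighted by a Gaussian distance kernel and the aerial-photo band brightness values act as predictors. The bandwidth and the plain solver below are assumptions for illustration, not the paper's calibration.

```python
# Illustrative GWR estimate at a single pixel: LiDAR height is the dependent
# variable, band brightness values are the predictors, weights come from a
# Gaussian kernel over pixel distance.
import numpy as np

def gwr_height_at(target_xy, target_bands, sample_xy, sample_heights,
                  sample_bands, bandwidth=15.0):
    """sample_xy: (N, 2) pixel coordinates with LiDAR returns.
    sample_heights: (N,) LiDAR heights; sample_bands: (N, B) brightness values.
    target_bands: (B,) brightness at the pixel whose height we estimate."""
    d = np.linalg.norm(sample_xy - np.asarray(target_xy), axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)             # spatial Gaussian kernel
    X = np.column_stack([np.ones(len(sample_xy)), sample_bands])
    W = np.diag(w)
    # Locally weighted least squares: beta = (X' W X)^-1 X' W y
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ sample_heights)
    return np.concatenate([[1.0], np.asarray(target_bands)]) @ beta
```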


Author(s): Mustapha Kabrane, Salah-ddine Krit, Lahoucine El Maimouni

In large cities, the increasing number of vehicles (private cars, company vehicles, freight, and public transport) has led to traffic congestion, and road users spend much of their time stuck in it. Several solutions can be envisaged to address this problem. The interest here is focused on the traffic signal system: the use of road infrastructure is controlled by a traffic light controller, so the question is how to make the best use of the controls of this system (the traffic lights) to make traffic flow more smoothly. The command values computed by the controller are determined by an algorithm that ultimately solves a mathematical model representing the problem. The objective is to study and compare optimization techniques based on artificial intelligence for intelligently routing vehicle traffic. These techniques minimize a function expressing the congestion of the road network, such as the queue length at intersections, the average waiting time, or the total number of vehicles waiting at an intersection.
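
As a toy illustration of such a congestion function, the sketch below combines queue length and average waiting time with arbitrary placeholder weights; the text does not fix a specific formula.

```python
# A toy congestion objective combining queue length and average waiting time.
# The weights are arbitrary placeholders for illustration only.
def congestion_cost(queues, wait_times, w_queue=1.0, w_wait=0.5):
    """queues: list of queue lengths per approach; wait_times: per-vehicle waits (s)."""
    total_waiting = sum(queues)
    avg_wait = sum(wait_times) / len(wait_times) if wait_times else 0.0
    return w_queue * total_waiting + w_wait * avg_wait

print(congestion_cost(queues=[4, 7, 2], wait_times=[30, 45, 20, 60]))  # 32.375
```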


2021, Vol 13 (18), pp. 3640
Author(s): Hao Fu, Hanzhang Xue, Xiaochang Hu, Bokai Liu

In autonomous driving scenarios, the point cloud generated by LiDAR is usually considered an accurate but sparse representation. To enrich the LiDAR point cloud, this paper proposes a new technique that combines spatially adjacent frames and temporally adjacent frames. To eliminate the "ghost" artifacts caused by moving objects, a moving-point identification algorithm is introduced that compares range images between frames. Experiments are performed on the publicly available SemanticKITTI dataset. Experimental results show that the proposed method outperforms most previous approaches and is the only one among them that can run in real time for online usage.
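
The moving-point test can be sketched as a per-pixel comparison of aligned range images: a point whose range changes sharply between frames is treated as belonging to a moving object. The alignment step and both thresholds below are assumptions for illustration, not the paper's values.

```python
# Flag moving points by comparing two motion-compensated range images.
import numpy as np

def moving_point_mask(range_prev, range_curr, abs_thresh=0.5, rel_thresh=0.1):
    """range_prev, range_curr: (H, W) range images in metres, already aligned
    into a common viewpoint; 0 marks pixels with no return."""
    valid = (range_prev > 0) & (range_curr > 0)
    diff = np.abs(range_curr - range_prev)
    rel = diff / np.maximum(range_curr, 1e-6)
    # A point counts as moving when its range changes both in absolute terms
    # and relative to its distance from the sensor.
    return valid & (diff > abs_thresh) & (rel > rel_thresh)
```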


Author(s): Amal Bouti, Mohamed Adnane Mahraz, Jamal Riffi, Hamid Tairi

In this chapter, the authors present a system for the detection and classification of road signs. The system consists of two parts. The first part detects road signs in real time. The second part classifies signs from the German Traffic Sign Recognition Benchmark (GTSRB) dataset and makes predictions on the road signs detected in the first part to test the system's effectiveness. The authors use HOG features and an SVM in the detection part to detect the road signs captured by the camera. For the classification part, they use a convolutional neural network based on the LeNet model with some modifications. The system obtains an accuracy of 96.85% in the detection part and 96.23% in the classification part.
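
A compact sketch of an HOG-plus-SVM detection stage of this kind, using scikit-image and scikit-learn; the 64×64 window size, HOG parameters, and training-data handling are assumptions rather than the authors' exact setup.

```python
# Train a linear SVM on HOG features of sign / non-sign image windows; the
# resulting classifier is then slid over camera frames to detect signs.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(gray_patch):
    """gray_patch: (64, 64) grayscale window, values in [0, 1]."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_sign_detector(pos_patches, neg_patches):
    """pos/neg_patches: lists of (64, 64) grayscale windows with/without signs."""
    X = np.array([hog_features(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    return clf   # score each sliding window of a frame with clf.decision_function
```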


Sensors, 2020, Vol 20 (4), pp. 1121
Author(s): Xiaowei Lu, Yunfeng Ai, Bin Tian

Road boundary detection is an important part of perception for autonomous driving. It is difficult to detect the boundaries of unstructured roads because there are no curbs; on mine roads, in particular, there are no clear boundaries separating the area inside the road boundary line from the area outside it. This paper proposes a real-time road boundary detection and tracking method using a 3D LiDAR sensor. Road boundary points are extracted from the elevated points detected above the ground point cloud according to spatial distance characteristics and angular features. Road tracking predicts and updates the boundary point information in real time to prevent false and missed detections. Experimental verification on mine road data shows the accuracy and robustness of the proposed algorithm.
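
A minimal sketch of the boundary-point extraction idea: within each azimuth bin around the vehicle, keep the nearest point that rises sufficiently above the ground. The flat-ground assumption, height threshold, and bin width are placeholders for illustration, not the paper's parameters.

```python
# Pick one road-boundary candidate per azimuth bin from the non-ground points.
import numpy as np

def boundary_candidates(points, ground_z=0.0, min_height=0.3, bin_deg=1.0):
    """points: (N, 3) LiDAR points in the vehicle frame (x forward, y left, z up).
    Returns one boundary candidate per occupied azimuth bin."""
    elevated = points[points[:, 2] > ground_z + min_height]
    azimuth = np.degrees(np.arctan2(elevated[:, 1], elevated[:, 0]))
    radial = np.hypot(elevated[:, 0], elevated[:, 1])
    bins = np.floor((azimuth + 180.0) / bin_deg).astype(int)
    candidates = []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        candidates.append(elevated[idx[np.argmin(radial[idx])]])   # closest elevated point
    return np.array(candidates)
```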

