A 3D LiDAR Data-Based Dedicated Road Boundary Detection Algorithm for Autonomous Vehicles

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 29623-29638 ◽  
Author(s):  
Pengpeng Sun ◽  
Xiangmo Zhao ◽  
Zhigang Xu ◽  
Runmin Wang ◽  
Haigen Min

Author(s):
Guoqiang Chen ◽  
Zhuangzhuang Mao ◽  
Huailong Yi ◽  
Xiaofeng Li ◽  
Bingxin Bai ◽  
...  

Object detection is a crucial task in autonomous driving. This paper presents an effective algorithm for pedestrian detection on panoramic depth maps transformed from 3D-LiDAR data. First, the 3D point clouds are transformed into panoramic depth maps, and the panoramic depth maps are enhanced. Second, the ground points are removed from the 3D point clouds; the remaining points are clustered, filtered, and projected onto the previously generated panoramic depth maps to obtain new panoramic depth maps. Finally, the new panoramic depth maps are joined to generate depth maps of different sizes, which serve as input to an improved PVANET for pedestrian detection. The panoramic depth map used by the proposed algorithm is a 2D image transformed from the 3D point cloud; it captures the full surround view of the sensor and is therefore well suited to environment perception for autonomous driving. Unlike detection algorithms based on RGB images, the proposed algorithm is unaffected by lighting conditions and maintains normal average precision for pedestrian detection at night. To increase robustness in detecting small objects such as pedestrians, the network structure of the original PVANET is modified in this paper. A new dataset is built by processing the 3D-LiDAR data, and the model trained on this dataset performs well. The experimental results show that the proposed algorithm achieves high accuracy and robustness in pedestrian detection under different illumination conditions. Furthermore, when trained on the new dataset, the model exhibits average precision improvements of 2.8–5.1% over the original PVANET, making it more suitable for autonomous driving applications.
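The first step above, transforming a 3D point cloud into a panoramic depth map, amounts to a spherical projection of each point onto an azimuth-elevation grid. A minimal sketch, assuming Velodyne-like angular resolutions and vertical field of view (the abstract does not state the paper's exact projection parameters):

```python
import numpy as np

def point_cloud_to_panoramic_depth(points, h_res_deg=0.2, v_res_deg=0.4,
                                   v_fov_deg=(-24.9, 2.0)):
    """Spherically project 3D LiDAR points (N, 3) onto a panoramic depth map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                  # range (depth) per point
    azimuth = np.degrees(np.arctan2(y, x))           # horizontal angle in [-180, 180)
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))

    width = int(360.0 / h_res_deg)                   # one column per azimuth bin
    height = int((v_fov_deg[1] - v_fov_deg[0]) / v_res_deg)

    col = np.floor((azimuth + 180.0) / h_res_deg).astype(int) % width
    row = np.floor((v_fov_deg[1] - elevation) / v_res_deg).astype(int)
    valid = (row >= 0) & (row < height)              # drop points outside the vertical FOV

    depth_map = np.zeros((height, width), dtype=np.float32)
    depth_map[row[valid], col[valid]] = r[valid]     # last write wins per cell
    return depth_map
```

The enhancement, ground-removal, and clustering stages described in the abstract would then operate on (or alongside) this 2D representation.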


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of the LiDAR point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with the integrated PCD BEV representations is superior to that achieved with an RGB camera alone. In addition, robustness is improved: detection accuracy is significantly higher even when target objects are partially occluded in the front view, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
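The fusion step described above can be sketched as concatenating the boxes from the two parallel detector branches and applying standard non-maximum suppression; the box format (x1, y1, x2, y2, score) and the IoU threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_detections_nms(boxes_rgb, boxes_bev, iou_thresh=0.5):
    """Merge boxes from two parallel detectors, suppressing duplicate regions."""
    boxes = np.vstack([boxes_rgb, boxes_bev])
    order = boxes[:, 4].argsort()[::-1]       # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection-over-union of box i with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]       # drop overlapping lower-scored boxes
    return boxes[keep]
```

A box detected by both branches survives once (the higher-scored copy), while boxes seen by only one branch, for example an occluded object visible only in the BEV height map, are retained.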


2016 ◽  
Vol 33 (4) ◽  
pp. 697-712 ◽  
Author(s):  
R. Andrew Weekley ◽  
R. Kent Goodrich ◽  
Larry B. Cornman

An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.
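The lagged coordinate construction can be sketched as pairing each pixel's value in one backscatter field with its value in the delayed field. The diagonal-fraction statistic below is an illustrative stand-in for the paper's cluster classification, not the authors' actual method: a plume that persists between scans produces lag points clustered along the diagonal y = x, while uncorrelated noise scatters off it.

```python
import numpy as np

def lag_coordinates(scan_a, scan_b):
    """Build the lagged coordinate scatter from two backscatter fields.

    scan_b may be a spatially shifted subregion of scan_a (spatial delay)
    or the next scan in time (time delay). Returns an (N, 2) point set.
    """
    assert scan_a.shape == scan_b.shape
    return np.column_stack([scan_a.ravel(), scan_b.ravel()])

def diagonal_fraction(lag_pts, tol=0.1):
    """Fraction of lag points lying near the diagonal (assumed tolerance).

    A persistent plume echoes across scans, so its lag points hug y = x;
    noise does not. This is a toy global property for cluster screening.
    """
    spread = np.abs(lag_pts[:, 0] - lag_pts[:, 1])
    scale = np.maximum(np.abs(lag_pts).max(), 1e-9)
    return float(np.mean(spread <= tol * scale))
```

Identical fields give a diagonal fraction of 1.0; fields with no pointwise correspondence score near 0.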


Author(s):  
R. A. Loberternos ◽  
W. P. Porpetcho ◽  
J. C. A. Graciosa ◽  
R. R. Violanda ◽  
A. G. Diola ◽  
...  

Traditional remote sensing approaches for mapping aquaculture ponds typically involve the use of aerial photography and high-resolution images. The current study demonstrates the use of object-based image processing and analysis of LiDAR-data-generated derivative images with 1-meter resolution, namely: CHM (canopy height model) layer, DSM (digital surface model) layer, DTM (digital terrain model) layer, Hillshade layer, Intensity layer, NumRet (number of returns) layer, and Slope layer. A Canny edge detection algorithm was also applied to the Hillshade layer to create a new image (Canny layer) with more defined edges. These derivative images were then used as input layers to a multi-resolution segmentation algorithm best fitted to delineate the aquaculture ponds. To extract the aquaculture pond feature, three major classes were identified for classification: land, vegetation, and water. Classification was first performed using the assign-class algorithm to label segments with mean Slope values of 10 or lower as Flat Surfaces. The assign-class algorithm was then applied to these Flat Surfaces to identify the Water class using a threshold value of 63.5. The segments identified as Water were then merged to form larger water bodies comprising the aquaculture ponds. The present study shows that LiDAR data coupled with object-based classification can be an effective approach for mapping coastal aquaculture ponds. The workflow presented here can serve as a model for mapping other areas in the Philippines where aquaculture ponds exist.
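The two assign-class steps reduce to simple threshold rules. A minimal per-pixel sketch (the study classifies per segment, and the abstract does not name the layer to which the 63.5 threshold is applied, so using the Intensity layer with a below-threshold rule is an assumption here):

```python
import numpy as np

def classify_ponds(slope, intensity):
    """Rule-based sketch of the study's assign-class steps.

    slope, intensity: 2D arrays of the Slope and (assumed) Intensity layers.
    Returns an object array with labels "land", "flat", or "water".
    """
    flat = slope <= 10.0                      # mean Slope <= 10 -> Flat Surface
    water = flat & (intensity <= 63.5)        # assumed direction of the 63.5 rule
    labels = np.full(slope.shape, "land", dtype=object)
    labels[flat] = "flat"
    labels[water] = "water"
    return labels
```

Adjacent "water" pixels would then be merged (e.g. by connected-component labeling) into the larger water bodies that comprise the ponds.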


2009 ◽  
Vol 1 (2) ◽  
pp. 70-78 ◽  
Author(s):  
L. Xiang-Wei ◽  
L. Zhan-Ming ◽  
Z. Ming-Xin ◽  
Z. Ya-Lin ◽  
W. Wei-Yi

Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1551 ◽  
Author(s):  
Kai Li ◽  
Jinju Shao ◽  
Dong Guo

To improve the accuracy of structured road boundary detection and address the poor robustness of single-feature boundary extraction, this paper proposes a multi-feature road boundary detection algorithm based on the HDL-32E LiDAR. According to the road environment and sensor information, the point cloud data of the scene ahead of the vehicle is extracted, and primary and secondary search windows are set according to the road's geometric features and the spatial distribution of the point cloud. In the search process, we propose the concept of maximum and minimum cluster point sets and a two-way search method. Finally, a quadratic curve model is used to fit the road boundary. In actual road tests on campus roads, the accuracy of straight boundary detection is 97.54%, the accuracy of curved boundary detection is 92.56%, and the average detection cycle is 41.8 ms. In addition, the algorithm remains robust in typical complex road environments.
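The final stage, fitting the quadratic curve model to the boundary points produced by the search, can be sketched with ordinary least squares; the search-window extraction that yields the (x, y) boundary points is omitted here.

```python
import numpy as np

def fit_boundary_quadratic(xs, ys):
    """Least-squares fit of the quadratic road-boundary model y = a*x^2 + b*x + c.

    xs, ys: 1D arrays of boundary point coordinates extracted by the
    search windows (assumed already filtered and clustered).
    Returns the coefficients (a, b, c), highest power first.
    """
    a, b, c = np.polyfit(xs, ys, deg=2)
    return a, b, c
```

A straight boundary simply comes out with a near zero, so one model covers both the linear and curved cases reported in the abstract.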

