Fast Segmentation of 3-D Point Clouds Based on Ground Plane State Tracking

Author(s):  
Jin Chen ◽  
Yafei Wang ◽  
Kunpeng Dai ◽  
Taiyuan Ma


Author(s):  
T. Fiolka ◽  
F. Rouatbi ◽  
D. Bender

3D terrain models are an important instrument in areas like geology, agriculture and reconnaissance. An automated UAS equipped with a line-based LiDAR can create terrain models quickly and easily, even over large areas. However, the resulting point cloud may contain holes and therefore be incomplete. This can happen due to occlusions, deviations from the planned flight route caused by wind, or simply changes in ground height that alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then a check for vertical holes, mainly created by building walls, is done. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes so that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds that can be used for the generation of complete 3D terrain models.
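The grid-based search for ground-plane holes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, cell size, and toy data are assumptions. Points are binned into a 2D grid over their XY extent, and empty cells inside the surveyed bounding box are reported as hole candidates.

```python
import numpy as np

def find_ground_holes(points, cell_size=1.0):
    """Mark empty cells in a 2D occupancy grid built from the XY coordinates."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True
    # Cells inside the bounding box with no returns are hole candidates.
    return np.argwhere(~grid)

# Four corner returns leave five interior cells, including [1, 1], empty.
points = np.array([[0.2, 0.3, 1.0], [2.4, 0.1, 1.1],
                   [0.1, 2.2, 0.9], [2.3, 2.4, 1.0]])
holes = find_ground_holes(points)
```

A real system would additionally distinguish "outside the survey area" from "missed coverage", e.g. by masking the grid with the planned flight polygon.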


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5908
Author(s):  
Yiru Niu ◽  
Zhihua Xu ◽  
Ershuai Xu ◽  
Gongwei Li ◽  
Yuan Huo ◽  
...  

Social distancing protocols have been highly recommended by the World Health Organization (WHO) to curb the spread of COVID-19. However, one major challenge to enforcing social distancing in public areas is how to perceive people in three dimensions. This paper proposes an innovative pedestrian 3D localization method using monocular images combined with terrestrial point clouds. In the proposed approach, camera calibration is achieved based on the correspondences between 2D image points and 3D world points. The vertical coordinates of the ground plane where pedestrians stand are extracted from the point clouds. Then, using the assumption that the pedestrian is always perpendicular to the ground, the 3D coordinates of the pedestrian’s feet and head are calculated iteratively using collinearity equations. This allows the three-dimensional localization and height determination of pedestrians using monocular cameras, which are widely distributed in many major cities. The performance of the proposed method was evaluated using two different datasets. Experimental results show that the pedestrian localization error of the proposed approach was less than one meter at ranges of tens of meters, outperforming other localization techniques. The proposed approach uses simple and efficient calculations, obtains accurate locations, and can be used to implement social distancing rules. Moreover, since the proposed approach also generates accurate height values, exemptions to social distancing protocols, particularly the parent–child exemption, can be introduced in the framework.
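The geometric core of such a method, reduced to its simplest case, is intersecting the camera ray through a foot pixel with the known ground plane. The sketch below assumes calibrated intrinsics K and a world-to-camera pose (R, t); all names and the toy camera setup are hypothetical, and the paper's iterative collinearity solution is replaced here by the closed-form flat-ground case.

```python
import numpy as np

def locate_on_ground(pixel, K, R, t, ground_z):
    """Intersect the viewing ray through an image point with the plane z = ground_z.

    K: 3x3 intrinsics; R, t: world-to-camera pose (x_cam = R @ x_world + t).
    """
    d = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction (world)
    C = -R.T @ t                                                      # camera center (world)
    s = (ground_z - C[2]) / d[2]   # scale that brings the ray down to the ground plane
    return C + s * d

# Hypothetical setup: camera 5 m above the ground plane, looking straight down.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])        # 180-degree rotation about the x-axis
t = -R @ np.array([0.0, 0.0, 5.0])    # camera center at (0, 0, 5)
foot = locate_on_ground((400, 240), K, R, t, ground_z=0.0)   # -> [0.5, 0., 0.]
```

With a per-location ground height taken from the point cloud instead of a single plane, the same intersection becomes the iterative step the abstract describes.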


2012 ◽  
Vol 591-593 ◽  
pp. 1265-1268
Author(s):  
Zi Ming Xiong ◽  
Gang Wan ◽  
Xue Feng Cao

Recent progress in structure-from-motion (SfM) has led to robust techniques that can operate under extremely general conditions. However, a limitation of SfM is that the scene can only be recovered up to a similarity transformation. We address the problem of automatically aligning 3D point clouds from SfM reconstructions to orthographic images. We extract feature lines from the 3D point clouds and project them onto the ground plane to create 2D feature lines. This reduces the alignment problem to 2D line-to-2D line matching, and a novel technique for automatic feature line matching is presented in this paper.
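The reduction step can be sketched in a few lines. This is a simplified illustration with hypothetical names, not the paper's matching technique: 3D line segments are flattened by dropping the height coordinate, and a basic orientation feature is computed for 2D matching.

```python
import numpy as np

def project_to_ground(lines3d):
    """Project 3D feature line segments onto the ground plane by dropping z."""
    return [(np.asarray(p0)[:2], np.asarray(p1)[:2]) for p0, p1 in lines3d]

def line_angle(seg2d):
    """Orientation in [0, pi): a simple invariant usable for 2D line matching."""
    (x0, y0), (x1, y1) = seg2d
    return np.arctan2(y1 - y0, x1 - x0) % np.pi

# A sloped 3D roof line becomes a 45-degree 2D line, whatever its height profile.
segs = project_to_ground([(np.array([0.0, 0.0, 3.0]), np.array([1.0, 1.0, 5.0]))])
angle = line_angle(segs[0])
```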


Author(s):  
Jian Wu ◽  
Qingxiong Yang

In this paper, we study the semantic segmentation of 3D LiDAR point cloud data in urban environments for autonomous driving and propose a method utilizing the surface information of the ground plane. In practice, the resolution of a LiDAR sensor installed in a self-driving vehicle is relatively low, and thus the acquired point cloud is quite sparse. While recent work on dense point cloud segmentation has achieved promising results, the performance is relatively low when such methods are directly applied to sparse point clouds. This paper focuses on semantic segmentation of the sparse point clouds obtained from a 32-channel LiDAR sensor with deep neural networks. The main contribution is the integration of ground information, which is used to group ground points that are far away from each other. Qualitative and quantitative experiments on two large-scale point cloud datasets show that the proposed method outperforms the current state of the art.
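The idea of grouping ground points regardless of their mutual distance can be illustrated with a trivial sketch (assuming, for illustration only, a flat ground plane at a known height; the function name and tolerance are hypothetical):

```python
import numpy as np

def group_ground(points, ground_z=0.0, tol=0.2):
    """Assign every point near the ground plane to one shared 'ground' group,
    no matter how far apart those points are horizontally; the remaining
    points are left for the segmentation network to label."""
    is_ground = np.abs(points[:, 2] - ground_z) < tol
    return points[is_ground], points[~is_ground]

# Two ground returns 50+ m apart end up in the same group; the pedestrian does not.
points = np.array([[0.0, 0.0, 0.05], [50.0, 20.0, -0.1], [3.0, 1.0, 1.7]])
ground, rest = group_ground(points)
```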


Author(s):  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati ◽  
Andrea Tonoli

Abstract This paper presents a clustering technique for the detection of obstacles and lane boundaries on a road. The algorithm consists of two nested clustering stages: the first is based on hierarchical clustering, and the second on k-means clustering. The method exploits a preliminary ground-plane filtering algorithm, based on the semantic segmentation of point clouds, to process the raw LiDAR point cloud. The clustering algorithm estimates the positions of the obstacles that define the race track. Once the race track is sensed, the lane boundaries are detected. The method is validated experimentally on a four-wheel-drive electric vehicle participating in Formula SAE events. The validation environment is structured with traffic cones to define the race track.
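A two-stage nested clustering pipeline of this kind might look like the sketch below. Both stages are simplified stand-ins, not the paper's algorithm: stage 1 uses a single-linkage distance threshold in place of full hierarchical clustering, and stage 2 is a bare-bones Lloyd's k-means (no empty-cluster handling); all names and parameters are assumptions.

```python
import numpy as np

def link_clusters(points, thresh=1.0):
    """Stage 1 (single-linkage stand-in for hierarchical clustering): group
    points whose chained pairwise distance stays below `thresh`."""
    labels = -np.ones(len(points), dtype=int)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i], stack = cluster, [i]
        while stack:
            j = stack.pop()
            near = np.where((np.linalg.norm(points - points[j], axis=1) < thresh)
                            & (labels == -1))[0]
            labels[near] = cluster
            stack.extend(near.tolist())
        cluster += 1
    return labels

def kmeans(points, k=2, iters=20):
    """Stage 2: plain Lloyd's k-means refinement within a stage-1 cluster."""
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        assign = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2),
                           axis=1)
        centers = np.array([points[assign == c].mean(axis=0) for c in range(k)])
    return assign, centers

# Two well-separated cone pairs: stage 1 finds two coarse groups,
# stage 2 refines centroids for obstacle position estimates.
pts = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 10.0], [10.5, 10.0]])
coarse = link_clusters(pts)
assign, centers = kmeans(pts, k=2)
```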


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2651
Author(s):  
Hao Li ◽  
Sanyuan Zhao ◽  
Wenjun Zhao ◽  
Libin Zhang ◽  
Jianbing Shen

Recent one-stage 3D detection methods generate anchor boxes with various sizes and orientations in the ground plane, determine whether these anchor boxes contain any region of interest, and adjust their edges to obtain accurate object bounding boxes. Such anchor-based algorithms calculate classification and regression labels for each anchor box during training, which is inefficient and complicated. We propose a one-stage, anchor-free 3D vehicle detection algorithm based on LiDAR point clouds. The object position is encoded as a set of keypoints in the bird’s-eye view (BEV) of the point cloud. We apply a voxel/pillar feature extractor and convolutional blocks to map the unstructured point cloud to a single-channel 2D heatmap. The vehicle’s Z-axis position, dimensions, and orientation angle are regressed as additional attributes of the keypoints. Our method combines SmoothL1 loss and IoU (Intersection over Union) loss, and we apply (cosθ, sinθ) as angle regression labels, which achieves high average orientation similarity (AOS) without any direction classification tricks. During target assignment and bounding box decoding, our framework completely avoids any calculations related to anchor boxes. Our framework is trained end-to-end and stands at the same performance level as other one-stage anchor-based detectors.
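The (cosθ, sinθ) encoding mentioned above avoids the wrap-around discontinuity at ±π that makes a raw angle a poor regression target. A minimal sketch (function names are hypothetical; the similarity term follows the standard per-object AOS formula):

```python
import numpy as np

def encode_angle(theta):
    """Orientation as (cos, sin): continuous targets with no jump at +/- pi."""
    return np.cos(theta), np.sin(theta)

def decode_angle(c, s):
    """Recover the angle from (possibly unnormalized) regressed values."""
    return np.arctan2(s, c)

def orientation_similarity(pred, gt):
    """Per-object orientation similarity: 1 when aligned, 0 when opposite."""
    return (1.0 + np.cos(pred - gt)) / 2.0

theta = decode_angle(*encode_angle(2.5))   # round-trips to 2.5
```

Because arctan2 recovers the full quadrant from the sign of both components, no separate direction-classification head is needed.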


Author(s):  
B. Borgmann ◽  
M. Hebel ◽  
M. Arens ◽  
U. Stilla

Abstract. This paper presents and extends an approach for the detection of pedestrians in unstructured point clouds resulting from single MLS (mobile laser scanning) scans. The approach is based on a neural network and a subsequent voting process. The neural network processes point clouds subdivided into local point neighborhoods. The member points of these neighborhoods are directly processed by the network, hence a conversion into a structured representation of the data is not needed. The network also uses meta information about the neighborhoods themselves, such as their distance to the ground plane, to improve the results. It decides whether a neighborhood is part of an object of interest and estimates the center of that object. This information is then used in a voting process. By searching for maxima in the voting space, the approach discriminates between actual objects and incorrectly classified neighborhoods. Since a single labeled object can be subdivided into multiple local neighborhoods, we are able to train the neural network with comparatively low amounts of labeled data. Considerations are made to deal with the varying and sparse point density that is typical for single MLS scans. We supplement the detection with a 3D tracking which, although straightforward, allows us to deal with objects that are occluded for short periods of time and thus improves the quality of the results. Overall, our approach performs reasonably well for the detection and tracking of pedestrians in single MLS scans as long as the local point density is not too low. Given the LiDAR sensor we used, this is the case up to distances of 22 m.
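The center-voting step can be sketched as accumulating predicted object centers in a grid and keeping cells with enough agreeing votes. This is a crude illustration of the maxima search, with hypothetical names and parameters, not the paper's voting scheme:

```python
import numpy as np

def find_object_peaks(center_votes, cell=0.5, min_votes=3):
    """Accumulate predicted object centers in a 2D grid and return the centers
    of cells that collect at least `min_votes` votes."""
    idx = np.floor(center_votes[:, :2] / cell).astype(int)
    offset = idx.min(axis=0)
    idx = idx - offset
    grid = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)   # one vote per neighborhood
    peaks = np.argwhere(grid >= min_votes)
    return (peaks + offset + 0.5) * cell         # back to world coordinates

# Four neighborhoods agree on a pedestrian near (2.2, 2.1); one stray
# misclassified neighborhood votes elsewhere and is rejected.
votes = np.array([[2.1, 2.1], [2.2, 2.2], [2.15, 2.05], [2.05, 2.2],
                  [10.2, 10.2]])
peaks = find_object_peaks(votes)   # single peak at the cell center (2.25, 2.25)
```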

