SnapshotNet: Self-supervised feature learning for point cloud data segmentation using minimal labeled data

Author(s):  
Xingye Li ◽  
Ling Zhang ◽  
Zhigang Zhu
Author(s):  
Chunxiao Wang ◽  
Min Ji ◽  
Jian Wang ◽  
Wei Wen ◽  
Ting Li ◽  
...  

Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 172

Point cloud data segmentation, filtering, classification, and feature extraction are the main focus of point cloud data processing. DBSCAN (density-based spatial clustering of applications with noise) can detect clusters of arbitrary shape in spaces of any dimension, which makes it well suited to LiDAR (Light Detection and Ranging) data segmentation. DBSCAN requires at least two parameters: the minimum number of points, minPts, and the search radius, ε. The parameter ε, however, is often hard to determine, which hinders the application of DBSCAN to point cloud segmentation. Therefore, a DBSCAN-based segmentation algorithm is proposed with a novel automatic ε estimation method, based on the average of the k nearest neighbors' maximum distance, with which ε can be calculated from the intrinsic properties of the point cloud data. The method rests on the fitted curve of k against the mean maximum distance. It was evaluated on different types of point cloud data: airborne and mobile point clouds, with and without color information. The accuracy values obtained using the ε estimated by the proposed method are 75%, 74%, and 71%, higher than those obtained with ε values smaller or greater than the estimated one. These results demonstrate that the proposed algorithm can robustly segment different types of LiDAR point clouds with higher accuracy. The algorithm can be applied in airborne and mobile LiDAR point cloud data processing systems to reduce manual work and improve the automation of data processing.
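For illustration, the ε estimation step can be sketched in a few lines of Python. The sketch below, assuming scikit-learn, takes ε as the average over all points of the distance to the k-th nearest neighbor (the maximum distance within each k-neighborhood) and feeds it into DBSCAN; the curve fitting over k described in the abstract is omitted, and the value of k and the random cloud are illustrative placeholders.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def estimate_eps(points, k):
    """Average of each point's distance to its k-th nearest neighbor."""
    # k + 1 neighbors because each query point is its own nearest neighbor.
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
    distances, _ = nbrs.kneighbors(points)
    # Column -1 holds the largest of the k true neighbor distances.
    return float(distances[:, -1].mean())

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))        # placeholder for real LiDAR x/y/z data

k = 8                                # hypothetical neighborhood size
eps = estimate_eps(cloud, k)
labels = DBSCAN(eps=eps, min_samples=k).fit_predict(cloud)
print(f"eps = {eps:.4f}, clusters found = {labels.max() + 1}")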


Author(s):  
S. N. Mohd Isa ◽  
S. A. Abdul Shukor ◽  
N. A. Rahim ◽  
I. Maarof ◽  
Z. R. Yahya ◽  
...  

Author(s):  
Genping Zhao ◽  
Weiguang Zhang ◽  
Yeping Peng ◽  
Heng Wu ◽  
Zhuowei Wang ◽  
...  

2021 ◽  
Vol 13 (21) ◽  
pp. 4312

Point cloud classification plays a significant role in Light Detection and Ranging (LiDAR) applications. However, most available multi-scale feature learning networks for large-scale 3D LiDAR point cloud classification are time-consuming. In this paper, an efficient deep neural architecture, the Point Expanded Multi-scale Convolutional Network (PEMCNet), is developed to accurately classify 3D LiDAR point clouds. Unlike traditional networks for point cloud processing, PEMCNet stacks successive Point Expanded Grouping (PEG) units and Absolute and Relative Spatial Embedding (ARSE) units for representative point feature learning. The PEG unit progressively enlarges the receptive field of each observed point and aggregates point cloud features at different scales without increasing computation. The ARSE unit that follows encodes the spatial relationships between points, effectively preserving their geometric details. We evaluate the method on public datasets (the Urban Semantic 3D (US3D) dataset and the Semantic3D benchmark) and on newly collected Unmanned Aerial Vehicle (UAV)-based LiDAR point cloud data of the campus of Guangdong University of Technology. In comparison with four state-of-the-art methods, ours ranked first in both efficiency and accuracy. On the public datasets it achieved a 2% increase in classification accuracy together with an over 26% improvement in efficiency relative to the second most efficient method. Its practical value is also confirmed on the newly collected point cloud data, with over 91% classification accuracy and a processing time of 154 ms.
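As a rough illustration of the kind of encoding the ARSE unit describes, the Python sketch below concatenates each point's absolute coordinates with the relative offsets and distances to its k nearest neighbors. The function name and feature layout are assumptions for illustration, not the paper's implementation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def spatial_embedding(points, k):
    """Per-point (N, k, 7) features: absolute xyz, relative offset, distance."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, idx = nbrs.kneighbors(points)
    neighbors = points[idx[:, 1:]]               # (N, k, 3), self excluded
    offsets = neighbors - points[:, None, :]     # relative positions
    absolute = np.broadcast_to(points[:, None, :], neighbors.shape)
    # Concatenate absolute position, relative offset, and neighbor distance.
    return np.concatenate([absolute, offsets, dists[:, 1:, None]], axis=-1)

points = np.random.default_rng(0).random((1000, 3))   # placeholder cloud
features = spatial_embedding(points, k=16)             # shape (1000, 16, 7)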


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which lowers the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point cloud data and image data into the matching of feature points between two images. First, a point cloud is selected to produce an intensity image. Next, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved from a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fast fusion, demonstrating its effectiveness.
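The feature-matching step at the core of the method can be illustrated with OpenCV. The sketch below is a minimal example under the assumption that an intensity image has already been rendered from the point cloud; it matches ORB features between that intensity image and an optical image. The file names are placeholders, and the matched pixel pairs would feed the subsequent collinearity-equation solve.

import cv2

intensity_img = cv2.imread("intensity.png", cv2.IMREAD_GRAYSCALE)  # placeholder
optical_img = cv2.imread("optical.png", cv2.IMREAD_GRAYSCALE)      # placeholder

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(intensity_img, None)
kp2, des2 = orb.detectAndCompute(optical_img, None)

# Brute-force Hamming matching with cross-checking for binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Corresponding pixel coordinates for the exterior orientation solve.
pts_intensity = [kp1[m.queryIdx].pt for m in matches[:100]]
pts_optical = [kp2[m.trainIdx].pt for m in matches[:100]]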

