A Snapshot-based Approach for Self-supervised Feature Learning and Weakly-supervised Classification on Point Cloud Data

Author(s):  
Xingye Li ◽  
Zhigang Zhu

2018 ◽  
Vol 10 (8) ◽  
pp. 1222 ◽  
Author(s):  
Yanjun Wang ◽  
Qi Chen ◽  
Lin Liu ◽  
Xiong Li ◽  
Arun Kumar Sangaiah ◽  
...  

Power line classification is important for electric power management and for extracting geographical objects from LiDAR (light detection and ranging) point cloud data. Many supervised classification approaches have been introduced for the extraction of features such as ground, trees, and buildings, and several studies have evaluated the framework and performance of such supervised classification methods in power line applications. However, these studies did not systematically investigate all of the relevant factors affecting the classification results, including the segmentation scale, feature selection, classifier variety, and scene complexity. In this study, we examined these factors systematically using airborne laser scanning and mobile laser scanning point cloud data. Our results indicated that random forest and neural network classifiers were highly suitable for power line classification in forest, suburban, and urban areas in terms of the precision, recall, and quality of the classification results. In contrast to some previous studies, random forest yielded the best results, while Naïve Bayes was the worst classifier in most cases. Random forest was also the more robust classifier, with or without feature selection, across the various LiDAR point cloud datasets. Furthermore, the classification accuracies were directly related to the selection of the local neighborhood, classifier, and feature set. Finally, we suggest that random forest should be considered first in most power line classification cases.
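The study ties classification accuracy to the choice of local neighborhood and feature set. One common feature set in LiDAR classification pipelines (illustrative here, not necessarily the exact features used in the study) is the eigenvalue-based shape descriptors of each point's neighborhood, which separate wire-like power lines from planar roofs and scattered vegetation:

```python
import numpy as np

def local_geometric_features(neighbors):
    """Eigenvalue-based shape descriptors of a local point neighborhood."""
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    # eigvalsh returns ascending eigenvalues for a symmetric matrix;
    # reverse to descending and clip to avoid division by zero.
    lam = np.clip(np.linalg.eigvalsh(cov)[::-1], 1e-12, None)
    linearity = (lam[0] - lam[1]) / lam[0]   # wire-like structures -> ~1
    planarity = (lam[1] - lam[2]) / lam[0]   # roofs, ground -> ~1
    sphericity = lam[2] / lam[0]             # vegetation -> larger values
    return linearity, planarity, sphericity

# A wire-like neighborhood: points strung out along one axis.
wire = [(float(t), 0.01 * (t % 2), 0.0) for t in range(20)]
lin, pla, sph = local_geometric_features(wire)
```

A classifier such as random forest would then be trained on these descriptors (together with, e.g., height and intensity features) per point or per segment.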


2021 ◽  
Vol 13 (21) ◽  
pp. 4312
Author(s):  
Genping Zhao ◽  
Weiguang Zhang ◽  
Yeping Peng ◽  
Heng Wu ◽  
Zhuowei Wang ◽  
...  

Point cloud classification plays a significant role in Light Detection and Ranging (LiDAR) applications. However, most available multi-scale feature learning networks for large-scale 3D LiDAR point cloud classification tasks are time-consuming. In this paper, an efficient deep neural architecture termed the Point Expanded Multi-scale Convolutional Network (PEMCNet) is developed to accurately classify 3D LiDAR point clouds. Unlike traditional networks for point cloud processing, PEMCNet includes successive Point Expanded Grouping (PEG) units and Absolute and Relative Spatial Embedding (ARSE) units for representative point feature learning. The PEG unit progressively increases the receptive field of each observed point and aggregates point cloud features at different scales without increasing computation. The ARSE unit that follows then encodes the relationships between points, effectively preserving geometric details. We evaluated our method on public datasets (the Urban Semantic 3D (US3D) dataset and the Semantic3D benchmark) and on newly collected Unmanned Aerial Vehicle (UAV)-based LiDAR point cloud data of the campus of Guangdong University of Technology. In comparison with four state-of-the-art methods, ours ranked first in both efficiency and accuracy: on the public datasets it achieved a 2% increase in classification accuracy together with an efficiency improvement of over 26% relative to the second most efficient method. Its practical value was further demonstrated on the newly collected point cloud data, with over 91% classification accuracy and 154 ms of processing time.
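The PEG idea of widening each point's receptive field at no extra cost can be illustrated with dilated neighbor sampling: keeping a fixed group size while striding through the distance-sorted neighbor list. This is a hypothetical sketch of the sampling pattern only, not the learned PEMCNet unit itself:

```python
import numpy as np

def expanded_group(points, center, group_size, dilation):
    """Select a fixed-size neighbor group with a dilated (expanded) reach."""
    # Sort all points by distance to the query center.
    order = np.argsort(np.linalg.norm(points - center, axis=1))
    # Keeping every `dilation`-th neighbor holds the group size (and hence
    # the computation) constant while widening the covered radius.
    return order[::dilation][:group_size]

# Points spaced 1 m apart along a line; query at the first point.
pts = np.array([[float(i), 0.0, 0.0] for i in range(32)])
near = expanded_group(pts, pts[0], group_size=8, dilation=1)
wide = expanded_group(pts, pts[0], group_size=8, dilation=2)
```

Both groups contain eight points, but the dilated group reaches twice as far from the query point; an ARSE-style embedding would then encode each selected neighbor by its absolute position together with its relative offset from the query point.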


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, the limited payload capacity of a UAV means that the sensors integrated in the ULS must be small and lightweight, which reduces the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fast fusion, demonstrating its effectiveness.
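The first step of the pipeline, producing an intensity image from the selected point cloud, amounts to rasterizing the intensity channel onto a ground-plane grid. A minimal sketch, assuming a fixed cell size and per-cell averaging (the paper's exact projection parameters are not given here):

```python
import numpy as np

def intensity_image(points, intensities, cell=0.5):
    """Rasterize a point cloud's intensity channel onto a 2-D grid,
    averaging the intensities of points that fall in the same cell."""
    xy = np.asarray(points, dtype=float)[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    h, w = ij.max(axis=0) + 1
    total = np.zeros((h, w))
    count = np.zeros((h, w))
    for (i, j), v in zip(ij, intensities):
        total[i, j] += v
        count[i, j] += 1
    # Empty cells stay 0; occupied cells hold the mean intensity.
    return np.divide(total, count, out=np.zeros((h, w)), where=count > 0)

# Four points in a 2x2 arrangement, one per 0.5 m cell.
cloud = np.array([[0.0, 0.0, 5.0], [0.6, 0.0, 5.1],
                  [0.0, 0.6, 5.2], [0.6, 0.6, 5.3]])
img = intensity_image(cloud, [10.0, 20.0, 30.0, 40.0])
```

Standard 2-D feature detectors can then be run on this image and matched against the optical image to recover the exterior orientation.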


Author(s):  
Keisuke YOSHIDA ◽  
Shiro MAENO ◽  
Syuhei OGAWA ◽  
Sadayuki ISEKI ◽  
Ryosuke AKOH

2019 ◽  
Author(s):  
Byeongjun Oh ◽  
Minju Kim ◽  
Chanwoo Lee ◽  
Hunhee Cho ◽  
Kyung-In Kang
