ALS Point Cloud Classification by Integrating an Improved Fully Convolutional Network into Transfer Learning with Multi-Scale and Multi-View Deep Features

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6969
Author(s):  
Xiangda Lei ◽  
Hongtao Wang ◽  
Cheng Wang ◽  
Zongze Zhao ◽  
Jianqi Miao ◽  
...  

Airborne laser scanning (ALS) point clouds have been widely used in various fields, as they provide three-dimensional data with high accuracy on a large scale. However, because ALS data are discrete, irregularly distributed, and noisy, accurately identifying typical surface objects from a 3D point cloud remains a challenge. In recent years, many researchers have achieved better results in classifying 3D point clouds by using different deep learning methods. However, most of these methods require a large number of training samples and cannot be widely applied in complex scenarios. In this paper, we propose an ALS point cloud classification method that integrates an improved fully convolutional network into transfer learning with multi-scale and multi-view deep features. First, shallow features of the ALS point cloud, such as height, intensity, and change of curvature, are extracted to generate feature maps by multi-scale voxelization and multi-view projection. Second, these feature maps are fed into the pre-trained DenseNet201 model to derive deep features, which are used as input to a fully convolutional neural network with convolutional and pooling layers. This network integrates local and global features to classify the ALS point cloud. Finally, a graph-cuts algorithm that considers context information is used to refine the classification results. We tested our method on the semantic 3D labeling dataset of the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that the overall accuracy and the average F1 score obtained by the proposed method are 89.84% and 83.62%, respectively, when only 16,000 points of the original data are used for training.
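The feature-map generation step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the top-view projection, the z-buffer rule (keep the highest point per cell), and the grid size are all assumptions made for illustration.

```python
import numpy as np

def project_to_feature_map(points, features, grid_size=32):
    """Project per-point shallow features (e.g. height, intensity,
    change of curvature) onto a top-view 2D grid, keeping the feature
    vector of the highest point in each cell (a simple z-buffer)."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    span = np.maximum(maxs - mins, 1e-9)
    # Map each point's x/y position to an integer grid cell.
    cells = np.minimum(((xy - mins) / span * grid_size).astype(int),
                       grid_size - 1)
    fmap = np.zeros((grid_size, grid_size, features.shape[1]))
    best_z = np.full((grid_size, grid_size), -np.inf)
    for (cx, cy), z, f in zip(cells, points[:, 2], features):
        if z > best_z[cx, cy]:          # keep only the highest point per cell
            best_z[cx, cy] = z
            fmap[cx, cy] = f
    return fmap
```

In the paper's pipeline, maps like this (rendered at several voxel scales and from several views) would be stacked and passed to the pre-trained DenseNet201 backbone.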

2018 ◽  
Vol 10 (4) ◽  
pp. 612 ◽  
Author(s):  
Lei Wang ◽  
Yuchun Huang ◽  
Jie Shan ◽  
Liu He

2019 ◽  
Vol 11 (23) ◽  
pp. 2846 ◽  
Author(s):  
Tong ◽  
Li ◽  
Zhang ◽  
Chen ◽  
Zhang ◽  
...  

Accurate and effective classification of lidar point clouds with discriminative feature expression is a challenging task for scene understanding. To improve the accuracy and robustness of point cloud classification based on single-point features, we propose a novel multi-level aggregation method for extracting and fusing point set features, based on multi-scale max pooling and latent Dirichlet allocation (LDA). To this end, in the hierarchical point set feature extraction, point sets of different levels and sizes are first adaptively generated through multi-level clustering. Then, a more effective sparse representation is obtained by locality-constrained linear coding (LLC) of single-point features, which contributes to the extraction of discriminative individual point set features. Next, local point set features are extracted by combining max pooling with a multi-scale pyramid structure constructed from the point coordinates within each point set. The global and local features of the point sets are effectively expressed by fusing the multi-scale max pooling features with the global features constructed by the point set LLC-LDA model. The point clouds are then classified using these multi-level aggregation features. Our experiments on two airborne laser scanning (ALS) scenes, a mobile laser scanning (MLS) scene, and a terrestrial laser scanning (TLS) scene demonstrate the effectiveness of the proposed point set multi-level aggregation features for point cloud classification; the proposed method outperforms the related algorithms compared against.
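The multi-scale max pooling over a spatial pyramid can be sketched as follows. This is an illustrative simplification, not the paper's code: the pyramid is built along a single coordinate axis for brevity, and the function name and level counts are assumptions.

```python
import numpy as np

def multiscale_max_pool(coords, feats, levels=(1, 2, 4)):
    """Max-pool per-point features inside a spatial pyramid built over
    the point set's bounding box (split along x here for simplicity),
    then concatenate the pooled vectors across all pyramid levels."""
    x = coords[:, 0]
    lo, hi = x.min(), x.max()
    span = max(hi - lo, 1e-9)
    pooled = []
    for n_bins in levels:
        # Assign each point to one of n_bins equal-width bins.
        bins = np.minimum(((x - lo) / span * n_bins).astype(int), n_bins - 1)
        for b in range(n_bins):
            mask = bins == b
            if mask.any():
                pooled.append(feats[mask].max(axis=0))   # max pooling per bin
            else:
                pooled.append(np.zeros(feats.shape[1]))  # empty bin
    return np.concatenate(pooled)
```

The concatenated vector has length `sum(levels) * d` for `d`-dimensional point features; in the paper this local descriptor is fused with the global LLC-LDA features before classification.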


2020 ◽  
Vol 57 (18) ◽  
pp. 181019
Author(s):  
侯向丹 Hou Xiangdan ◽  
于习欣 Yu Xixin ◽  
刘洪普 Liu Hongpu

2021 ◽  
Vol 10 (7) ◽  
pp. 444
Author(s):  
Jianfeng Zhu ◽  
Lichun Sui ◽  
Yufu Zang ◽  
He Zheng ◽  
Wei Jiang ◽  
...  

In various applications of airborne laser scanning (ALS), classification of the point cloud is a basic and key step. It requires assigning a category label to each point, such as ground, building, or vegetation. Convolutional neural networks have achieved great success in image classification and semantic segmentation, but they cannot be directly applied to point cloud classification because of the disordered and unstructured nature of point clouds. In this paper, we design a novel convolution operator that extracts local features directly from unstructured points. Based on this operator, we define a convolution layer, construct a convolutional neural network to learn multi-level features from the point cloud, and obtain the category label of each point in an end-to-end manner. The proposed method is evaluated on two ALS datasets: the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen 3D Labeling benchmark and the 2019 IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest (DFC) 3D dataset. The results show that our method achieves state-of-the-art performance for ALS point cloud classification, especially on the larger DFC dataset: we obtain an overall accuracy of 97.74% and a mean intersection over union (mIoU) of 0.9202, ranking first on the contest website.
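A point convolution of this kind can be sketched in a few lines. The abstract does not specify the operator's exact form, so everything below is an assumption for illustration: features of each point's k nearest neighbours are weighted by a linear function of the relative offset (a stand-in for a learned kernel) and summed.

```python
import numpy as np

def point_conv(points, feats, kernel, k=4):
    """Minimal point-convolution sketch: for each point, gather its k
    nearest neighbours, weight each neighbour's features by a linear
    function of the relative offset, and sum the weighted features.
    kernel has shape (3, c_in, c_out)."""
    n = len(points)
    # Pairwise squared distances; fine for a small toy cloud.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]   # k nearest neighbours (incl. self)
    out = np.zeros((n, kernel.shape[2]))
    for i in range(n):
        rel = points[idx[i]] - points[i]                  # (k, 3) offsets
        w = np.tensordot(rel, kernel, axes=([1], [0]))    # (k, c_in, c_out)
        out[i] = np.einsum('kc,kco->o', feats[idx[i]], w)
    return out
```

A real implementation would learn `kernel` by backpropagation and typically pass the offsets through a small MLP rather than a single linear map; this sketch only shows the gather-weight-aggregate pattern that makes convolution applicable to unordered points.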


Author(s):  
W. Ao ◽  
L. Wang ◽  
J. Shan

Point cloud classification is quite a challenging task due to noise, occlusion, and the variety of object types and sizes. The commonly used statistics-based features cannot accurately characterize the geometric information of a point cloud. This limitation often leads to feature confusion and classification mistakes (e.g., points on building corners and vegetation often share similar statistical features in a local neighbourhood, such as curvature and sphericity). This study aims to solve this problem by leveraging the advantages of both supervoxel segmentation and multi-scale features. For each point, multi-scale features within different radii are extracted. Simultaneously, the point cloud is partitioned into simple supervoxel segments. After that, the class probability of each point is predicted by the proposed SegMSF approach, which combines the multi-scale features with the supervoxel segmentation results. Finally, the effect of data noise is suppressed by a global optimization that encourages spatial consistency of class labels. The proposed method is tested on both airborne laser scanning (ALS) and mobile laser scanning (MLS) point clouds. The experimental results demonstrate that the proposed method performs well in classifying objects of different scales and is robust to noise.
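The multi-scale geometric features mentioned above (curvature, sphericity, and similar eigenvalue-based descriptors) can be sketched as follows. This is a generic illustration, not the SegMSF implementation: the function name, the specific descriptors, and the radii are assumptions.

```python
import numpy as np

def geometric_features(points, query, radii=(0.5, 1.0, 2.0)):
    """Eigenvalue-based shape descriptors (change of curvature and
    sphericity) computed from the neighbourhood covariance at several
    radii around one query point."""
    out = []
    for r in radii:
        mask = np.linalg.norm(points - query, axis=1) <= r
        nbrs = points[mask]
        if len(nbrs) < 3:                  # too few points for a covariance
            out += [0.0, 0.0]
            continue
        cov = np.cov(nbrs.T)
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        s = ev.sum() + 1e-12
        out += [ev[2] / s,                 # change of curvature
                ev[2] / (ev[0] + 1e-12)]   # sphericity
    return np.array(out)
```

Computing such descriptors at several radii is what lets the classifier separate structures (e.g. building corners vs. vegetation) whose statistics coincide at any single neighbourhood size.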


2020 ◽  
Vol 17 (4) ◽  
pp. 721-725 ◽  
Author(s):  
Rong Huang ◽  
Danfeng Hong ◽  
Yusheng Xu ◽  
Wei Yao ◽  
Uwe Stilla
