Classification of ALS Point Clouds Using End-to-End Deep Learning

Author(s):
Lukas Winiwarter,
Gottfried Mandlburger,
Stefan Schmohl,
Norbert Pfeifer

Author(s):
D. Tosic,
S. Tuttas,
L. Hoegner,
U. Stilla

Abstract. This work proposes an approach for the semantic classification of an outdoor-scene point cloud acquired with a high-precision Mobile Mapping System (MMS), with the major goal of contributing to the automatic creation of High Definition (HD) Maps. The automatic point labeling is achieved by combining a feature-based approach for semantic classification of point clouds with a deep learning approach for semantic segmentation of images. Both the point cloud data and the data from a multi-camera system are used to gain spatial information about the urban scene. Two types of classification are applied for this task: 1) In the feature-based approach, the point cloud is organized into a supervoxel structure to capture the geometric characteristics of points. Several geometric features are extracted for an appropriate representation of the local geometry, the effect of local tendency is removed for each supervoxel to enhance the distinction between similar structures, and lastly the Random Forests (RF) algorithm is applied in the classification phase, assigning labels to supervoxels and therefore to the points within them. 2) The deep learning approach is employed for semantic segmentation of MMS images of the same scene, using an implementation of the Pyramid Scene Parsing Network. The resulting segmented images, in which each pixel carries a class label, are then projected onto the point cloud, enabling label assignment for each point. Finally, experimental results are presented for a complex urban scene, and the performance of the method is evaluated on a manually labeled dataset, for the deep learning and feature-based classifications individually as well as for the fusion of their labels. The fused output achieves an overall accuracy of 0.87 on the final test set, which significantly outperforms the results of the individual methods on the same point cloud. The labeled data is published on the TUM-PF Semantic-Labeling-Benchmark.
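A minimal sketch of the feature-based branch, under stated assumptions: eigenvalue-derived shape descriptors per supervoxel and a default scikit-learn Random Forest. The feature set, class count, and the synthetic supervoxels and labels are illustrative placeholders, not the authors' exact implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def supervoxel_features(points):
    """Eigenvalue-based shape descriptors for one supervoxel (N x 3 array)."""
    centered = points - points.mean(axis=0)          # remove local tendency
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    eps = 1e-12                                      # guard against division by zero
    return np.array([
        (l1 - l2) / (l1 + eps),                      # linearity
        (l2 - l3) / (l1 + eps),                      # planarity
        l3 / (l1 + eps),                             # sphericity
        l3 / (l1 + l2 + l3 + eps),                   # change of curvature
    ])

# Synthetic supervoxels and labels stand in for the real MMS data.
rng = np.random.default_rng(0)
supervoxels = [rng.normal(size=(50, 3)) for _ in range(200)]
labels = rng.integers(0, 5, size=200)                # e.g. 5 urban classes

X = np.stack([supervoxel_features(sv) for sv in supervoxels])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
pred = rf.predict(X)      # one label per supervoxel, inherited by its points
```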



Author(s):
A. Nurunnabi,
F. N. Teferle,
J. Li,
R. C. Lindenbergh,
A. Hunegnaw

Abstract. Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing, used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, Deep Learning (DL) has become the dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non end-to-end approaches. One of the main challenges of supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non end-to-end approach is that it requires less training data. This paper introduces a novel local feature-based non end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models built from different combinations of features. The method is free from the limitations imposed by the irregular data structure and varying density of point clouds, which are the biggest challenges for applying convolutional neural networks, and it does not require transforming the data into regular 3D voxel grids or any rasterization. Its performance has been demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.
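As a rough illustration of such a non end-to-end pipeline, the sketch below computes simple local geometric features from k-nearest neighbourhoods and trains a small fully connected network as a binary ground / non-ground classifier. The feature set, network size, and synthetic data are assumptions for illustration, not the paper's three models.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier

def local_features(points, k=20):
    """Per-point features from the k-nearest neighbourhood of each point."""
    _, idx = cKDTree(points).query(points, k=k)
    feats = []
    for nb in points[idx]:                           # k neighbours, shape k x 3
        centered = nb - nb.mean(axis=0)
        lam = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
        lam = lam / (lam.sum() + 1e-12)              # normalised eigenvalues
        dz = nb[:, 2].max() - nb[:, 2].min()         # local height range
        feats.append([lam[0], lam[1], lam[2], dz])
    return np.asarray(feats)

# Synthetic stand-in for an ALS tile with ground below 2 m.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(1000, 3))
is_ground = (pts[:, 2] < 2).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(local_features(pts), is_ground)              # binary ground filter
```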





Author(s):
D. Laupheimer,
P. Tutzauer,
N. Haala,
M. Spicker

Within this paper we propose an end-to-end approach for classifying terrestrial images of building facades into five different utility classes (commercial, hybrid, residential, specialUse, underConstruction) by using Convolutional Neural Networks (CNNs). For our examples we use images provided by Google Street View. These images are automatically linked to a coarse city model that includes the outlines of the buildings as well as their respective use classes; by these means an extensive dataset is available for training and evaluation of our Deep Learning pipeline. The paper describes the implemented end-to-end approach for classifying street-level images of building facades and discusses our experiments with various CNNs. In addition to the classification results, so-called Class Activation Maps (CAMs) are evaluated. These maps give further insight into the decisive facade parts that are learned as features during the training process. Furthermore, they can be used to generate abstract representations that facilitate the comprehension of semantic image content; these representations are produced with the stippling method, an importance-based image rendering technique.
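The CAM evaluation mentioned above follows the standard global-average-pooling formulation; the tiny network below is a hypothetical stand-in for the CNNs actually trained, sketching only how a map is derived from the last convolutional features and the classifier weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FacadeCNN(nn.Module):
    """Toy CNN with global average pooling, the structure CAMs require."""
    def __init__(self, n_classes=5):                 # the five utility classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                      # B x 32 x H x W
        logits = self.fc(fmap.mean(dim=(2, 3)))      # GAP, then linear layer
        return logits, fmap

model = FacadeCNN()
img = torch.rand(1, 3, 224, 224)                     # placeholder street-level image
logits, fmap = model(img)
cls = logits.argmax(dim=1)                           # predicted utility class
w = model.fc.weight[cls]                             # 1 x 32 weights of that class
cam = torch.einsum('bchw,bc->bhw', fmap, w)          # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=img.shape[2:],
                    mode='bilinear', align_corners=False)  # upsample to image size
```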



Author(s):
M. Soilán,
R. Lindenbergh,
B. Riveiro,
A. Sánchez-Rodríguez

Abstract. During the last couple of years, there has been an increased interest in developing new deep learning networks specifically for processing 3D point cloud data. In that context, this work intends to expand the applicability of one of these networks, PointNet, from the semantic segmentation of indoor scenes to outdoor point clouds acquired with Airborne Laser Scanning (ALS) systems. Our goal is to assist the classification of future iterations of a nationwide dataset such as the Actueel Hoogtebestand Nederland (AHN), using a classification model trained on a previous iteration. First, a simple application, ground classification, is proposed in order to prove the capability of the deep learning architecture to perform an efficient point-wise classification of aerial point clouds. Then, two different models based on PointNet are defined to classify the most relevant elements in the case study data: ground, vegetation, and buildings. While the model for ground classification performs with an F-score above 96%, motivating the second part of the work, the overall accuracy of the remaining models is around 87%, showing consistency across different versions of AHN but with improvable false positive and false negative rates. Therefore, this work concludes that the proposed classification of future AHN iterations is feasible but needs more experimentation.
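A simplified PointNet-style segmentation head for such point-wise classification is sketched below. Layer sizes are illustrative, and the input and feature transform networks of the original architecture are omitted; the essential idea, a shared per-point MLP whose max-pooled global feature is concatenated back to every point, is preserved.

```python
import torch
import torch.nn as nn

class PointNetSeg(nn.Module):
    """Minimal PointNet-like point-wise classifier (ground/vegetation/building)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.local_mlp = nn.Sequential(              # shared per-point MLP
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(                   # per-point classification head
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, n_classes, 1),
        )

    def forward(self, xyz):                          # xyz: B x 3 x N
        local = self.local_mlp(xyz)                  # B x 128 x N
        glob = local.max(dim=2, keepdim=True).values # B x 128 x 1 global feature
        glob = glob.expand(-1, -1, xyz.shape[2])     # broadcast to every point
        return self.head(torch.cat([local, glob], dim=1))  # B x n_classes x N

logits = PointNetSeg()(torch.rand(2, 3, 4096))       # toy batch of two tiles
```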



2020, Vol. 12 (22), pp. 3757
Author(s):
Hyunsoo Kim,
Changwan Kim

Conventional bridge maintenance requires significant time and effort because it relies on manual inspection, and two-dimensional drawings are used to record any damage. For this reason, a process that identifies the location of damage in three-dimensional space and classifies the bridge components involved is required. In this study, three deep-learning models, PointNet, PointCNN, and Dynamic Graph Convolutional Neural Network (DGCNN), were compared for classifying the components of bridges. Point cloud data were acquired from three types of bridge (Rahmen, girder, and gravity bridges) to determine the optimal model for use across all three types. Three-fold cross-validation was employed, with overall accuracy and intersection over union used as the performance measures. The mean intersection over union of DGCNN is 86.85%, higher than that of PointNet (84.29%) and PointCNN (74.68%). The accurate classification of a bridge component based on its relationship with the surrounding components may assist in identifying whether damage to a bridge affects a structurally important main component.
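Both reported measures can be computed from a confusion matrix as sketched below; the four-class setup and random labels are illustrative placeholders for the bridge-component classes.

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    """Confusion matrix with rows = reference labels, columns = predictions."""
    m = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(m, (y_true, y_pred), 1)
    return m

def metrics(m):
    overall_acc = np.trace(m) / m.sum()
    tp = np.diag(m)
    iou = tp / (m.sum(axis=0) + m.sum(axis=1) - tp)  # per-class IoU
    return overall_acc, iou.mean()                   # overall accuracy, mIoU

rng = np.random.default_rng(2)
y_true = rng.integers(0, 4, size=10_000)             # placeholder reference labels
y_pred = rng.integers(0, 4, size=10_000)             # placeholder predictions
print(metrics(confusion(y_true, y_pred, n_classes=4)))
```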



2020, Vol. 335, pp. 108506
Author(s):
Atif Riaz,
Muhammad Asad,
Eduardo Alonso,
Greg Slabaugh


Author(s):
E. Özdemir,
F. Remondino

Abstract. Due to their usefulness in various applications, such as energy evaluation, visibility analysis, emergency response, 3D cadastre, urban planning, change detection, navigation, etc., 3D city models have gained importance over the last decades. Point clouds are one of the primary data sources for the generation of realistic city models. Besides model-driven approaches, 3D building models can be produced directly from classified aerial point clouds. This paper presents ongoing research on 3D building reconstruction based on the classification of aerial point clouds without given ancillary data (e.g. footprints). The work includes a deep learning approach based on specific geometric features extracted from the point cloud. The methodology was tested on the ISPRS 3D Semantic Labeling Contest (Vaihingen and Toronto point clouds), showing promising results, although partly affected by the low density and the lack of points on building facades in the available clouds.
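The paper does not list its exact feature set, so the snippet below only illustrates one geometric feature commonly used for building classification from aerial point clouds: the height of each point above a coarse per-cell ground estimate. The grid size is an assumption.

```python
import numpy as np

def height_above_ground(points, cell=5.0):
    """Height of each point above the lowest point of its 2D grid cell."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                             # shift indices to >= 0
    key = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]  # flatten 2D cell index
    zmin = np.full(key.max() + 1, np.inf)
    np.minimum.at(zmin, key, points[:, 2])           # per-cell minimum height
    return points[:, 2] - zmin[key]                  # normalised height feature

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(5000, 3))            # placeholder aerial points
n_height = height_above_ground(pts)                  # input feature for a classifier
```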



Author(s):
Cyprian Mataczynski,
Agnieszka Kazimierska,
Agnieszka Uryga,
Malgorzata Burzynska,
Andrzej Rusiecki,
...

Author(s):
E. Grilli,
E. Özdemir,
F. Remondino

Abstract. The use of heritage point clouds for documentation and dissemination purposes is nowadays increasing. Associating semantic information with 3D data by means of automated classification methods can help to characterize, describe, and better interpret the object under study. In recent decades, machine learning methods have brought significant progress to classification procedures; however, the topic of cultural heritage has not been fully explored yet. This paper presents research on the classification of heritage point clouds using different supervised learning approaches (machine and deep learning). The classification aims at automatically recognizing architectural components such as columns, facades, or windows in large datasets. For each case study and employed classification method, different accuracy metrics are calculated and compared.
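A sketch of such a comparison set-up, under the assumption of precomputed per-point features: the same labelled data is given to a machine learning and a (shallow) deep learning classifier, and both are scored with per-class metrics. Features, labels, and class names are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 8))                       # placeholder per-point features
y = rng.integers(0, 3, size=2000)                    # placeholder class labels
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0)):
    pred = clf.fit(Xtr, ytr).predict(Xte)            # train, then predict held-out
    print(type(clf).__name__)
    print(classification_report(yte, pred,
                                target_names=['column', 'facade', 'window']))
```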


