CLASSIFICATION OF AERIAL POINT CLOUDS WITH DEEP LEARNING

Author(s):  
E. Özdemir ◽  
F. Remondino

<p><strong>Abstract.</strong> Due to their usefulness in various applications, such as energy evaluation, visibility analysis, emergency response, 3D cadastre, urban planning, change detection, navigation, etc., 3D city models have gained importance over the last decades. Point clouds are one of the primary data sources for the generation of realistic city models. Besides model-driven approaches, 3D building models can be directly produced from classified aerial point clouds. This paper presents ongoing research on 3D building reconstruction based on the classification of aerial point clouds without ancillary data (e.g. footprints). The work includes a deep learning approach based on specific geometric features extracted from the point cloud. The methodology was tested on the ISPRS 3D Semantic Labeling Contest (Vaihingen and Toronto point clouds), showing promising results, although partly affected by the low density of the available clouds and the lack of points on building facades.</p>
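The "geometric features extracted from the point cloud" mentioned above are typically covariance-based shape descriptors computed per local neighbourhood. A minimal sketch, assuming the common linearity/planarity/sphericity triple derived from neighbourhood eigenvalues (the exact feature set of the paper is not specified here):

```python
import numpy as np

def geometric_features(neighborhood):
    """Covariance-based shape features for one point's neighborhood.

    Returns (linearity, planarity, sphericity) derived from the sorted
    eigenvalues l1 >= l2 >= l3 of the 3D covariance matrix -- a common
    hand-crafted descriptor set for aerial point cloud classification.
    """
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)                            # 3x3 covariance of the neighborhood
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))  # ascending -> relabel descending
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

# A flat (roof-like) patch: z is constant, so planarity dominates
# and sphericity vanishes.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(0, 1, 50), rng.uniform(0, 1, 50), np.zeros(50)]
lin, pla, sph = geometric_features(patch)
```

For a perfectly planar patch the smallest eigenvalue is zero, so sphericity is zero and linearity plus planarity sum to one; this is what makes such features discriminative between roofs, facades and vegetation.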

Author(s):  
E. Özdemir ◽  
F. Remondino

<p><strong>Abstract.</strong> 3D city modelling has become important over the last decades as these models are used in different studies, including energy evaluation, visibility analysis, 3D cadastre, urban planning, change detection, disaster management, etc. Segmentation and classification of photogrammetric or LiDAR data is important for 3D city models, as these are the main data sources, and these tasks are challenging due to the complexity of the data. This study presents research in progress that focuses on the segmentation and classification of 3D point clouds and orthoimages to generate 3D urban models. The aim is to classify photogrammetry-based point clouds (&gt;30 pts/sqm) in combination with aerial RGB orthoimages (~10 cm GSD) in order to label buildings, ground level objects (GLOs), trees, grass areas, and other regions. On the one hand, classifying aerial orthoimages is expected to be a fast way to obtain classes and transfer them from image space to the point cloud; on the other hand, segmenting the point cloud directly is expected to be much more time consuming but to provide meaningful segments of the analysed scene. For this reason, the proposed method combines segmentation methods on the two data sources in order to achieve better results.</p>
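The "transfer from image space to the point cloud" step can be sketched as a raster lookup: each point's XY coordinate is mapped to the cell of the classified orthoimage it falls in. This is a hypothetical illustration (the raster origin, the tiny arrays and the class codes are made up; the 10 cm cell size follows the abstract):

```python
import numpy as np

GSD = 0.10           # ground sampling distance of the orthoimage, in metres
ORIGIN = (0.0, 2.0)  # assumed upper-left corner (x_min, y_max) of the raster

def labels_from_orthoimage(points_xy, class_raster):
    """Assign each point the class of the orthoimage cell it falls in."""
    rows = ((ORIGIN[1] - points_xy[:, 1]) / GSD).astype(int)
    cols = ((points_xy[:, 0] - ORIGIN[0]) / GSD).astype(int)
    return class_raster[rows, cols]

# 20x20 toy raster: left half class 1 ('building'), right half class 2 ('grass')
raster = np.ones((20, 20), dtype=int)
raster[:, 10:] = 2
pts = np.array([[0.25, 1.5], [1.55, 0.3]])  # one point over each half
labels = labels_from_orthoimage(pts, raster)
```

In a real pipeline the raster georeferencing would come from the orthoimage metadata rather than hard-coded constants, but the cell-lookup idea is the same.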


Author(s):  
D. Tosic ◽  
S. Tuttas ◽  
L. Hoegner ◽  
U. Stilla

<p><strong>Abstract.</strong> This work proposes an approach for semantic classification of an outdoor-scene point cloud acquired with a high precision Mobile Mapping System (MMS), with the major goal of contributing to the automatic creation of High Definition (HD) maps. The automatic point labeling is achieved by combining a feature-based approach for semantic classification of point clouds with a deep learning approach for semantic segmentation of images. Both the point cloud data and the data from a multi-camera system are used for gaining spatial information about an urban scene. Two types of classification are applied: 1) A feature-based approach, in which the point cloud is organized into a supervoxel structure to capture the geometric characteristics of points. Several geometric features are extracted for an appropriate representation of the local geometry, followed by removing the effect of local tendency for each supervoxel to enhance the distinction between similar structures; lastly, the Random Forests (RF) algorithm assigns labels to supervoxels and therefore to the points within them. 2) A deep learning approach, employed for semantic segmentation of MMS images of the same scene using an implementation of the Pyramid Scene Parsing Network (PSPNet). The resulting segmented images, with a class label per pixel, are projected onto the point cloud, enabling label assignment for each point. Finally, experimental results from a complex urban scene are presented and the performance of the method is evaluated on a manually labeled dataset, for the deep learning and feature-based classification individually as well as for the fused labels. The overall accuracy achieved with the fused output is 0.87 on the final test set, which significantly outperforms the results of the individual methods on the same point cloud. The labeled data is published on the TUM-PF Semantic-Labeling-Benchmark.</p>
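The label fusion step can be sketched per point as a choice between the two sources. The simple rule below (trust the feature-based label when its class probability is high, otherwise fall back to the image-derived label) is an assumed illustration, not the paper's exact fusion scheme:

```python
# Hypothetical fusion rule: combine a feature-based (RF) label with its
# class probability and a label projected from the segmented images.
def fuse_labels(rf_label, rf_prob, img_label, threshold=0.7):
    if rf_label == img_label:
        return rf_label   # both sources agree -> keep the common label
    # disagreement: keep the RF label only if the RF is confident
    return rf_label if rf_prob >= threshold else img_label

fused = [fuse_labels(*p) for p in [
    (1, 0.9, 2),  # RF confident -> keep RF label 1
    (1, 0.4, 2),  # RF unsure    -> take image label 2
    (3, 0.5, 3),  # agreement    -> class 3 either way
]]
```

Whatever the exact rule, fusing the two sources is what lifts the overall accuracy above either classifier alone, as the abstract reports.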


Author(s):  
Lukas Winiwarter ◽  
Gottfried Mandlburger ◽  
Stefan Schmohl ◽  
Norbert Pfeifer

Author(s):  
B. Dukai ◽  
H. Ledoux ◽  
J. E. Stoter

<p><strong>Abstract.</strong> The 3D representation of buildings with roof shapes (also called LoD2) is popular in the 3D city modelling domain since it provides a realistic view of 3D city models. However, for many applications block models of buildings are sufficient or even more suitable. These so-called LoD1 models can be reconstructed relatively easily from building footprints and point clouds. But LoD1 representations of the same building can differ considerably because of differences in the height references used to reconstruct the block models and in the underlying statistical calculation methods. Users are often not aware of these differences, although they may affect the outcome of spatial analyses. To standardise the possible variants of LoD1 models and let users choose the best one for their application, we have developed a LoD1 reconstruction service that generates several heights per building (both for the ground surface and for the extrusion height). The building models are generated for all ~10 million buildings in The Netherlands based on building footprints and LiDAR point clouds. The 3D dataset is updated automatically every month. In addition, quality parameters are calculated and made available for each building. This article describes the development of the LoD1 building service and reports on the spatial analysis we performed on the generated height values.</p>
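The "several heights per building" idea can be sketched as deriving a set of candidate extrusion heights, plus a ground height, from the z values of the points inside a footprint. The particular percentiles below are illustrative assumptions, not the service's actual reference set:

```python
import numpy as np

def lod1_heights(ground_z, roof_z):
    """Candidate LoD1 heights from the LiDAR points of one building.

    Returns a ground reference and several roof-height statistics so
    that a user can pick the reference suited to their analysis.
    """
    roof_z = np.asarray(roof_z, dtype=float)
    return {
        "ground": float(np.median(ground_z)),      # ground surface reference
        "h50": float(np.percentile(roof_z, 50)),   # median roof height
        "h70": float(np.percentile(roof_z, 70)),   # higher percentile variant
        "h_max": float(roof_z.max()),              # highest point
    }

# Toy building: three ground points around 1 m, four roof points 7-9 m.
h = lod1_heights(ground_z=[1.0, 1.1, 0.9], roof_z=[7.0, 7.5, 8.0, 9.0])
```

Publishing all variants, rather than one extrusion height, is what lets different applications (e.g. volume estimation vs. visibility analysis) pick a consistent reference.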


2018 ◽  
Vol 8 (2) ◽  
pp. 59-64
Author(s):  
Iuliana Maria Pârvu ◽  
F. Remondino ◽  
E. Ozdemir

Abstract The VOLTA project is a RISE Marie-Curie action designed to realize Research & Innovation (R&I) among intersectoral partners to exchange knowledge, methods and workflows in the geospatial field. To accomplish its objectives, the main R&I activities of VOLTA are divided into four interlinked Work Packages, with two transversal ones responsible for knowledge transfer and training as well as dissemination of the project results. The research activities and knowledge transfer are performed through a series of secondments between partners. The consortium is composed of 13 partners from academic and research institutions, industrial partners, and national mapping agencies. The Romanian National Center of Cartography is part of this research project, and this article presents the achievements of the secondment at the Bruno Kessler Foundation in Trento (Italy). The main goal of the exchange was to generate Level of Detail 2 (LoD2) building models in an automated manner from photogrammetric point clouds without any ancillary data. To benchmark existing commercial solutions for the realization of LoD2 building models, we tested the Building Reconstruction software. This program generates LoD2 models starting from building footprints, a digital terrain model (DTM) and a digital surface model (DSM). The presented work examined a research-based and a commercial approach to reconstruct LoD2 building models from point clouds. The full paper will report all technical details of the work with in-depth analyses and comparisons.
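A first derived quantity in footprint/DTM/DSM pipelines like the one described is the normalised DSM (nDSM = DSM - DTM), which isolates above-ground heights before roof modelling. A minimal sketch with made-up 3x3 rasters (the 2 m threshold is an assumed illustrative value):

```python
import numpy as np

# Toy rasters: the DSM carries one building block over flat terrain at 10 m.
dsm = np.array([[10.0, 10.0, 18.0],
                [10.0, 17.0, 18.0],
                [10.0, 10.0, 10.0]])
dtm = np.full((3, 3), 10.0)

ndsm = dsm - dtm            # above-ground height per cell
building_mask = ndsm > 2.0  # crude height threshold separating buildings
```

Commercial tools then intersect such above-ground cells with the footprints and fit roof primitives; the nDSM step is the common starting point.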


Author(s):  
M. Soilán ◽  
R. Lindenbergh ◽  
B. Riveiro ◽  
A. Sánchez-Rodríguez

<p><strong>Abstract.</strong> During the last couple of years, there has been increased interest in developing new deep learning networks specifically for processing 3D point cloud data. In that context, this work intends to expand the applicability of one of these networks, PointNet, from the semantic segmentation of indoor scenes to outdoor point clouds acquired with Airborne Laser Scanning (ALS) systems. Our goal is to assist the classification of future iterations of a nationwide dataset such as the <i>Actueel Hoogtebestand Nederland</i> (AHN), using a classification model trained on a previous iteration. First, a simple application, ground classification, is proposed in order to prove the capability of the proposed deep learning architecture to perform an efficient point-wise classification of aerial point clouds. Then, two different models based on PointNet are defined to classify the most relevant elements in the case study data: ground, vegetation and buildings. While the model for ground classification achieves an F-score above 96%, motivating the second part of the work, the overall accuracy of the remaining models is around 87%, showing consistency across different versions of AHN but with improvable false positive and false negative rates. Therefore, this work concludes that the proposed classification of future AHN iterations is feasible but needs more experimentation.</p>
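The F-score quoted for the ground classifier is the harmonic mean of point-wise precision and recall. A self-contained sketch with toy labels (1 = ground, 0 = non-ground; the label lists are made up for illustration):

```python
def f_score(y_true, y_pred, positive=1):
    """F1 score for one positive class over point-wise labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy evaluation: 6 points, one false negative and one false positive.
score = f_score([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1])
```

Reporting the F-score rather than plain accuracy matters here because ground points dominate ALS scenes, so accuracy alone would overstate performance.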


Author(s):  
E. Özdemir ◽  
F. Remondino ◽  
A. Golkar

Abstract. With recent advances in technology, 3D point clouds are increasingly requested and used, not only for visualization needs but also, e.g., by public administrations for urban planning and management. 3D point clouds are also a very frequent source for generating 3D city models, which have recently become available for many applications, such as urban development plans, energy evaluation, navigation, visibility analysis and numerous other GIS studies. While the main data sources have remained the same (namely aerial photogrammetry and LiDAR), the way these city models are generated has been evolving towards automation, with different approaches. As most of these approaches rely on point clouds with proper semantic classes, our aim is to classify aerial point clouds into meaningful semantic classes, e.g. ground level objects (GLOs, including roads and pavements), vegetation, building facades and building roofs. In this study we tested and evaluated various algorithms for classification: three deep learning methods and one classical machine learning method. In the experiments, several hand-crafted geometric features, chosen depending on the dataset, are used and, unconventionally, these geometric features are also fed to the deep learning methods.
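Feeding hand-crafted features to the deep networks means each point is described by more than raw XYZ: extra per-point channels are stacked onto the coordinates. A sketch, where the chosen channels (height above ground, planarity, verticality) and their values are assumptions for illustration:

```python
import numpy as np

# Three toy points: a roof point, a ground point, a facade point.
xyz = np.array([[0.0, 0.0, 12.0],
                [1.0, 0.0, 0.2],
                [2.0, 1.0, 5.0]])

# Per-point feature channels (precomputed in a real pipeline from
# local neighbourhoods; here made up for illustration).
height_above_ground = xyz[:, 2] - xyz[:, 2].min()
planarity = np.array([0.9, 0.8, 0.1])
verticality = np.array([0.1, 0.0, 0.8])

# N x 6 input matrix: coordinates plus hand-crafted channels, which a
# network (or a classical classifier) consumes instead of XYZ alone.
features = np.column_stack([xyz, height_above_ground, planarity, verticality])
```

The same matrix can be fed to both the classical method and the deep networks, which is what makes the comparison in the study like-for-like.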


Author(s):  
S. N. Perera ◽  
N. Hetti Arachchige ◽  
D. Schneider

Geometrically and topologically correct 3D building models are required to satisfy new demands such as 3D cadastre, map updating, and decision making. Increasing attention has been paid to building reconstruction from Airborne Laser Scanning (ALS) point cloud data. However, the planimetric accuracy of roof outlines, including step-edges, is questionable in building models derived from point clouds alone. This paper presents a new approach for the detection of accurate building boundaries by merging point clouds acquired by ALS with aerial photographs. It comprises two major parts: reconstruction of initial roof models from point clouds only, and refinement of their boundaries. A shortest closed circle (graph) analysis method is employed to generate building models in the first step. Offering high reliability, this method provides reconstruction without prior knowledge of primitive building types, even when complex height jumps and various types of building roof are present. The accurate position of the boundaries of the initial models is then determined by integrating the edges extracted from the aerial photographs. In this process, scene constraints defined on the basis of the initial roof models are introduced, as the initial roof models represent explicit, unambiguous geometries of the scene. Experiments were conducted using the ISPRS benchmark test data. Based on the test results, we show that the proposed approach can reconstruct 3D building models with higher geometrical (planimetric and vertical) and topological accuracy.
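The boundary refinement can be pictured as snapping an initial roof-outline vertex (from the point cloud) onto the nearest edge line extracted from the aerial image. The core geometric operation is a 2D point-to-line projection; a sketch with made-up coordinates, not the paper's full constrained adjustment:

```python
def snap_to_edge(vertex, line_a, line_b):
    """Project a 2D outline vertex onto the image-extracted edge line
    through line_a and line_b (the refined, planimetrically accurate
    position of the vertex)."""
    (px, py), (ax, ay), (bx, by) = vertex, line_a, line_b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

# Initial ALS-derived vertex 0.3 m off a horizontal image edge at y = 1.
snapped = snap_to_edge((2.0, 1.3), (0.0, 1.0), (5.0, 1.0))
```

The scene constraints mentioned in the abstract would restrict which image edges are valid snap targets for each roof segment, preventing a vertex from jumping to an unrelated edge.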


2020 ◽  
Vol 12 (22) ◽  
pp. 3757
Author(s):  
Hyunsoo Kim ◽  
Changwan Kim

Conventional bridge maintenance requires significant time and effort because it involves manual inspection, and two-dimensional drawings are used to record any damage. For this reason, a process that identifies the location of damage in three-dimensional space and classifies the bridge components involved is required. In this study, three deep-learning models, PointNet, PointCNN, and the Dynamic Graph Convolutional Neural Network (DGCNN), were compared for classifying the components of bridges. Point cloud data were acquired from three types of bridge (Rahmen, girder, and gravity bridges) to determine the optimal model for use across all three types. Three-fold cross-validation was employed, with overall accuracy and intersection over union used as the performance measures. The mean intersection over union of DGCNN is 86.85%, higher than that of PointNet (84.29%) and PointCNN (74.68%). The accurate classification of a bridge component based on its relationship with the surrounding components may assist in identifying whether damage to a bridge affects a structurally important main component.
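The mean intersection-over-union metric used above is computed per class as the overlap between predicted and true point sets divided by their union, then averaged over classes. A sketch with toy component labels (the label lists are made up for illustration):

```python
def mean_iou(y_true, y_pred, classes):
    """Mean intersection over union across the given classes."""
    ious = []
    for c in classes:
        inter = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        union = sum(t == c or p == c for t, p in zip(y_true, y_pred))
        ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 4-point evaluation with two component classes, one misclassified point.
miou = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], classes=[0, 1])
```

Because every class contributes equally to the mean, IoU penalises poor performance on small components (e.g. bearings) that overall accuracy would hide.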

