SEMANTIC LABELING AND REFINEMENT OF LIDAR POINT CLOUDS USING DEEP NEURAL NETWORK IN URBAN AREAS

Author(s):  
R. Huang ◽  
Z. Ye ◽  
D. Hong ◽  
Y. Xu ◽  
U. Stilla

Abstract. In this paper, we propose a framework for obtaining semantic labels of LiDAR point clouds and refining the classification results by combining a deep neural network with a graph-structured smoothing technique. In general, the goal of semantic scene analysis is to assign a semantic label to each point in the point cloud. Although various related studies have been reported, the semantic labeling of point clouds in urban areas remains a challenging task due to the complexity of such scenes. In this paper, we address the issues of how to effectively extract features from each point and its local surroundings and how to refine the initial soft labels by considering contextual information in the spatial domain. Specifically, we improve the effectiveness of point cloud classification in two aspects. Firstly, instead of utilizing handcrafted features as input for classification and refinement, the local context of a point is embedded into a high-dimensional feature space and classified via a deep neural network (PointNet++), with soft labels obtained simultaneously as initial results for the subsequent refinement. Secondly, the initial label probability set is improved by taking the context in the spatial domain into consideration through a graph structure, and the final labels are optimized by a graph cuts algorithm. To evaluate the performance of our proposed framework, experiments are conducted on a mobile laser scanning (MLS) point cloud dataset. We demonstrate that our approach achieves higher accuracy than several commonly-used state-of-the-art baselines. The overall accuracy of our proposed method on the TUM dataset reaches 85.38% for labeling eight semantic classes.
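
A minimal sketch of the refinement stage described above, assuming per-point class probabilities from the network (e.g. PointNet++) are already available. The graph cuts optimization is replaced here by a simple iterated-conditional-modes pass over the same Potts-style energy on a k-NN graph, purely for illustration; all parameters are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_labels(xyz, probs, k=8, pairwise_weight=0.5, n_iter=5):
    """xyz: (N, 3) coordinates, probs: (N, C) per-point softmax scores."""
    n, c = probs.shape
    unary = -np.log(np.clip(probs, 1e-8, 1.0))     # data term: negative log-likelihood
    labels = probs.argmax(axis=1)                  # initial hard labels
    _, knn = cKDTree(xyz).query(xyz, k=k + 1)      # k nearest neighbours (first hit is the point itself)
    knn = knn[:, 1:]
    for _ in range(n_iter):
        for i in range(n):
            # Potts smoothness term: count neighbours disagreeing with each candidate label
            disagree = (labels[knn[i], None] != np.arange(c)).sum(axis=0)
            cost = unary[i] + pairwise_weight * disagree
            labels[i] = cost.argmin()
    return labels
```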

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2161 ◽  
Author(s):  
Arnadi Murtiyoso ◽  
Pierre Grussenmeyer

3D heritage documentation has seen a surge in the past decade due to developments in reality-based 3D recording techniques. Several methods such as photogrammetry and laser scanning are becoming ubiquitous amongst architects, archaeologists, surveyors, and conservators. The main result of these methods is a 3D representation of the object in the form of point clouds. However, a solely geometric point cloud is often insufficient for further analysis, monitoring, and model prediction of the heritage object. The semantic annotation of point clouds remains an interesting research topic since it traditionally requires manual labeling and therefore a lot of time and resources. This paper proposes an automated pipeline to segment and classify multi-scalar point clouds in the case of heritage objects. This is done in order to perform multi-level segmentation, from the scale of a historical neighborhood down to that of architectural elements, specifically pillars and beams. The proposed workflow involves an algorithmic approach in the form of a toolbox which includes various functions covering the semantic segmentation of large point clouds into smaller, more manageable, and semantically labeled clusters. The first part of the workflow explains the segmentation and semantic labeling of heritage complexes into individual buildings, while the second part discusses the use of the same toolbox to segment the resulting buildings further into architectural elements. The toolbox was tested on several historical buildings and showed promising results. The ultimate intention of the project is to assist manual point cloud labeling, especially when confronted with the large training data requirements of machine learning-based algorithms.
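
The following is a rough sketch, not the authors' toolbox, of the two-level idea: separate a heritage complex into building candidates by clustering the ground plan, then compute a simple verticality cue that could help distinguish pillar and wall points from beam and slab points within one building. Function names, neighbourhood sizes, and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def split_into_buildings(xyz, eps=1.5, min_samples=50):
    """Cluster points on the XY plane; each cluster is one building candidate (-1 = noise)."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xyz[:, :2])

def verticality(xyz, k=20):
    """Rough per-point verticality from the Z component of the local surface normal."""
    _, knn = cKDTree(xyz).query(xyz, k=k)
    vert = np.empty(len(xyz))
    for i, idx in enumerate(knn):
        normal = PCA(n_components=3).fit(xyz[idx]).components_[-1]  # smallest-variance axis ~ normal
        vert[i] = abs(normal[2])
    return vert  # near 0 for walls and pillar shafts, near 1 for floors and slab/beam tops
```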


2020 ◽  
Vol 12 (11) ◽  
pp. 1875 ◽  
Author(s):  
Jingwei Zhu ◽  
Joachim Gehrung ◽  
Rong Huang ◽  
Björn Borgmann ◽  
Zhenghao Sun ◽  
...  

In the past decade, a vast number of strategies, methods, and algorithms have been developed to explore the semantic interpretation of 3D point clouds for extracting desirable information. To assess the performance of the developed algorithms or methods, public standard benchmark datasets should be introduced and used, serving as a common reference for evaluation and comparison. In this work, we introduce and present large-scale Mobile LiDAR point clouds acquired at the city campus of the Technical University of Munich, which have been manually annotated and can be used for the evaluation of related algorithms and methods for semantic point cloud interpretation. We created three datasets from a measurement campaign conducted in April 2016, including a benchmark dataset for semantic labeling, test data for instance segmentation, and test data of annotated single 360° laser scans. These datasets cover an urban area with approximately 1 km of roadways and include more than 40 million annotated points with eight classes of objects labeled. Moreover, experiments were carried out with results from several baseline methods compared and analyzed, revealing the quality of this dataset and its effectiveness for performance evaluation.
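
Purely as an illustration of working with such an annotated benchmark, the snippet below tabulates per-class point counts. The file name, column layout (x, y, z, label), and class names are assumptions, not the dataset's actual distribution format.

```python
import numpy as np

CLASS_NAMES = [f"class_{i}" for i in range(8)]        # eight labeled classes (names assumed)

points = np.loadtxt("tum_mls_labeled.txt")            # hypothetical "x y z label" text file
labels = points[:, 3].astype(int)
counts = np.bincount(labels, minlength=len(CLASS_NAMES))
for name, cnt in zip(CLASS_NAMES, counts):
    print(f"{name}: {cnt} points ({100.0 * cnt / len(labels):.2f}%)")
```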


Author(s):  
Z. Lari ◽  
K. Al-Durgham ◽  
A. Habib

Terrestrial laser scanning (TLS) systems have been established as a leading tool for the acquisition of high-density three-dimensional point clouds from physical objects. The point clouds collected by these systems can be utilized for a wide spectrum of object extraction, modelling, and monitoring applications. Pole-like features are among the most important objects that can be extracted from TLS data, especially data acquired in urban areas and industrial sites. However, these features cannot be completely extracted and modelled using a single TLS scan due to significant local point density variations and occlusions caused by other objects. Therefore, multiple TLS scans from different perspectives should be integrated through a registration procedure to provide complete coverage of the pole-like features in a scene. To date, different segmentation approaches have been proposed for the extraction of pole-like features from either single or multiple-registered TLS scans. These approaches do not consider the internal characteristics of a TLS point cloud (local point density variations and noise level in the data) and usually suffer from computational inefficiency. To overcome these problems, two recently-developed PCA-based parameter-domain and spatial-domain approaches for the segmentation of pole-like features are introduced in this paper. Moreover, the performance of the proposed segmentation approaches for the extraction of pole-like features from single or multiple-registered TLS scans is investigated. The alignment of the utilized TLS scans is implemented using an Iterative Closest Projected Point (ICPP) registration procedure. Qualitative and quantitative evaluation of the pole-like features extracted from single and multiple-registered TLS scans, using both of the proposed segmentation approaches, is conducted to verify that more complete pole-like features are extracted from multiple-registered TLS scans.
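
As a sketch of the PCA-based cue underlying pole-like feature extraction, the snippet below computes a per-point linearity measure from the eigenvalues of the local covariance matrix; neighbourhood size and any candidate threshold are illustrative assumptions rather than the parameters used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def linearity(xyz, k=30):
    """Dimensionality feature (lam1 - lam2) / lam1: close to 1 along linear (pole-like) structures."""
    _, knn = cKDTree(xyz).query(xyz, k=k)
    lin = np.empty(len(xyz))
    for i, idx in enumerate(knn):
        nbrs = xyz[idx] - xyz[idx].mean(axis=0)
        evals = np.linalg.eigvalsh(nbrs.T @ nbrs / k)[::-1]   # eigenvalues sorted lam1 >= lam2 >= lam3
        lin[i] = (evals[0] - evals[1]) / max(evals[0], 1e-12)
    return lin

# Candidate pole points: strongly linear neighbourhoods (e.g. lin > 0.8, assumed threshold)
# combined with a near-vertical principal axis; the verticality check is omitted for brevity.
```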


2020 ◽  
Vol 9 (7) ◽  
pp. 450
Author(s):  
Zhen Ye ◽  
Yusheng Xu ◽  
Rong Huang ◽  
Xiaohua Tong ◽  
Xin Li ◽  
...  

The semantic labeling of the urban area is an essential but challenging task for a wide variety of applications such as mapping, navigation, and monitoring. The rapid advance in Light Detection and Ranging (LiDAR) systems provides this task with a possible solution using 3D point clouds, which are accessible, affordable, accurate, and applicable. Among all types of platforms, the airborne platform with LiDAR can serve as an efficient and effective tool for large-scale 3D mapping in the urban area. Against this background, a large number of algorithms and methods have been developed to fully explore the potential of 3D point clouds. However, the creation of publicly accessible large-scale annotated datasets, which are critical for assessing the performance of the developed algorithms and methods, is still at an early stage. In this work, we present a large-scale aerial LiDAR point cloud dataset acquired in a highly-dense and complex urban area for the evaluation of semantic labeling methods. This dataset covers an urban area with highly-dense buildings of approximately 1 km² and includes more than three million points with five classes of objects labeled. Moreover, experiments are carried out with the results from several baseline methods, demonstrating the feasibility and capability of the dataset serving as a benchmark for assessing semantic labeling methods.
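
A small sketch of the kind of evaluation such a benchmark enables: overall accuracy and per-class recall derived from a confusion matrix of predicted versus reference labels. Array names and the class count are placeholders.

```python
import numpy as np

def evaluate(pred, gt, num_classes=5):
    """pred, gt: integer label arrays of equal length."""
    confusion = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(confusion, (gt, pred), 1)                     # rows: reference, columns: prediction
    overall_acc = np.trace(confusion) / confusion.sum()
    per_class_recall = np.diag(confusion) / np.maximum(confusion.sum(axis=1), 1)
    return overall_acc, per_class_recall, confusion
```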


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2731
Author(s):  
Yunbo Rao ◽  
Menghan Zhang ◽  
Zhanglin Cheng ◽  
Junmin Xue ◽  
Jiansu Pu ◽  
...  

Accurate segmentation of entity categories is the critical step for 3D scene understanding. This paper presents a fast deep neural network model with a Dense Conditional Random Field (DCRF) as a post-processing method, which can perform accurate semantic segmentation of 3D point cloud scenes. On this basis, a compact but flexible framework is introduced for segmenting the semantics of point clouds concurrently, contributing to more precise segmentation. Moreover, based on the semantic labels, a novel DCRF model is elaborated to refine the segmentation result. In addition, without any sacrifice in accuracy, we optimize the original point cloud data, allowing the network to handle less data. In the experiments, our proposed method is evaluated comprehensively using four evaluation indicators, demonstrating its superiority.
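
The snippet below is a simplified stand-in for the DCRF post-processing step, not the authors' model: a few mean-field-style updates with a Gaussian kernel, restricted to a k-NN graph instead of the fully connected pairwise term of a dense CRF so that the sketch stays short. All parameters are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def meanfield_refine(xyz, probs, k=10, sigma=0.5, w=1.0, n_iter=3):
    """xyz: (N, 3) points, probs: (N, C) network softmax output; returns refined hard labels."""
    unary = -np.log(np.clip(probs, 1e-8, 1.0))
    dist, knn = cKDTree(xyz).query(xyz, k=k + 1)
    dist, knn = dist[:, 1:], knn[:, 1:]                       # drop the point itself
    kernel = np.exp(-(dist ** 2) / (2 * sigma ** 2))          # (N, k) Gaussian weights on coordinates
    q = probs.copy()
    for _ in range(n_iter):
        agree = np.einsum('nk,nkc->nc', kernel, q[knn])       # sum_j k(i,j) * Q_j(label)
        pairwise = w * (kernel.sum(axis=1, keepdims=True) - agree)  # Potts penalty for disagreeing neighbours
        q = np.exp(-(unary + pairwise))
        q /= q.sum(axis=1, keepdims=True)
    return q.argmax(axis=1)
```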


Author(s):  
G. Gabara ◽  
P. Sawicki

Abstract. The term “3D building models” is used in relation to CityGML models and building information modelling. Reconstruction and modelling of 3D building objects in urban areas is becoming a common trend and finds a wide spectrum of utilitarian applications. The paper presents the quality assessment of two multifaceted 3D building models, which were obtained from two open-access databases: the Polish national Geoportal (accuracy in the LOD 2 standard) and Trimble SketchUp Warehouse (accuracy in the LOD 2 standard with information about architectural details of façades). The Geoportal 3D models were primarily created based on airborne laser scanning data (density 12 pts/sq. m, elevation accuracy up to 0.10 m) collected during the Informatic System for Country Protection against extraordinary hazards project. The testing was performed using different validation low-altitude photogrammetric datasets: a RIEGL LMS-Q680i airborne laser scanning point cloud (min. density 25 pts/sq. m and height accuracy 0.03 m), and an image-based Phase One iXU-RS 1000 point cloud (average horizontal and vertical accuracy of 0.015 m and 0.030 m, respectively). Visual comparison, heat maps based on the signed distance, and histograms in predefined ranges were used to evaluate the quality and accuracy of the 3D building models. The sources of error that occurred during the modelling process were also discussed.
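
In the spirit of the heat maps and histograms mentioned above, the following sketch reduces the comparison to unsigned nearest-neighbour distances between a validation point cloud and points sampled from the 3D building model; file names and bin edges are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

model_pts = np.loadtxt("building_model_sampled.xyz")   # points sampled on the model surface (hypothetical file)
check_pts = np.loadtxt("validation_point_cloud.xyz")   # ALS or image-based validation cloud (hypothetical file)

dist, _ = cKDTree(model_pts).query(check_pts)           # distance of each check point to the nearest model point
bins = [0.0, 0.05, 0.10, 0.20, 0.50, 1.0, np.inf]       # predefined ranges in metres (assumed)
hist, _ = np.histogram(dist, bins=bins)
for lo, hi, n in zip(bins[:-1], bins[1:], hist):
    print(f"{lo:.2f}-{hi} m: {n} points")
```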


Author(s):  
R. A. Kuçak ◽  
E. Özdemir ◽  
S. Erol

Segmentation of point clouds has recently been used in many Geomatics Engineering applications such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is the process of dividing point clouds into layers according to their specific characteristics. The present paper discusses K-means and the self-organizing map (SOM), a type of Artificial Neural Network (ANN), as segmentation algorithms for point clouds. The point clouds generated with the photogrammetric method and a Terrestrial LiDAR System (TLS) were segmented according to surface normal, intensity, and curvature, and the results were evaluated. LiDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications. With either the photogrammetric or the LiDAR method, it is possible to obtain point clouds from terrestrial or airborne systems. In this study, the LiDAR measurements were made with a Leica C10 laser scanner. For the photogrammetric method, the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
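
A minimal sketch of the K-means part of this comparison, assuming per-point normals, intensity, and curvature have already been computed; the feature scaling and the number of clusters are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def kmeans_segment(normals, intensity, curvature, n_segments=5):
    """Cluster points by (|n_z|, intensity, curvature); inputs are per-point arrays of equal length."""
    feats = np.column_stack([np.abs(normals[:, 2]), intensity, curvature])
    feats = StandardScaler().fit_transform(feats)          # put the heterogeneous features on one scale
    return KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(feats)
```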


2021 ◽  
Vol 13 (5) ◽  
pp. 859
Author(s):  
Elyta Widyaningrum ◽  
Qian Bai ◽  
Marda K. Fajari ◽  
Roderik C. Lindenbergh

Classification of aerial point clouds with high accuracy is significant for many geographical applications, but not trivial as the data are massive and unstructured. In recent years, deep learning for 3D point cloud classification has been actively developed and applied, but notably for indoor scenes. In this study, we implement the point-wise deep learning method Dynamic Graph Convolutional Neural Network (DGCNN) and extend its classification application from indoor scenes to airborne point clouds. This study proposes an approach to provide cheap training samples for point-wise deep learning using an existing 2D base map. Furthermore, essential features and spatial contexts to effectively classify airborne point clouds colored by an orthophoto are also investigated, in particular to deal with class imbalance and relief displacement in urban areas. Two airborne point cloud datasets of different areas are used: Area-1 (city of Surabaya, Indonesia) and Area-2 (cities of Utrecht and Delft, the Netherlands). Area-1 is used to investigate different input feature combinations and loss functions. The point-wise classification for four classes achieves a remarkable result with 91.8% overall accuracy when using the full combination of spectral color and LiDAR features. For Area-2, different block size settings (30, 50, and 70 m) are investigated. It is found that using an appropriate block size of, in this case, 50 m helps to improve the classification up to 93% overall accuracy, but does not necessarily ensure better classification results for each class. Based on the experiments on both areas, we conclude that using DGCNN with proper settings is able to provide results close to production.
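
The block-size preparation step discussed above could look roughly like the sketch below, which tiles an airborne point cloud into square horizontal blocks (e.g. 50 m) for point-wise processing; this is an illustration, not the authors' preprocessing code, and the minimum point count is an assumed filter.

```python
import numpy as np

def tile_into_blocks(xyz, block_size=50.0, min_points=100):
    """Yield index arrays of points falling into each block_size x block_size horizontal tile."""
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / block_size).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]        # unique id per (row, column) tile
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        if len(idx) >= min_points:                           # skip nearly empty tiles
            yield idx
```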


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods have significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of a large amount of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory registration results efficiently. Moreover, this method can be extended to more point-cloud sources.
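
Not the paper's algorithm, but a sketch of a basic building block such a 2D strategy can rely on: a closed-form (Umeyama-style) least-squares estimate of the similarity transform, i.e. scale, rotation, and translation, between already matched 2D features.

```python
import numpy as np

def estimate_similarity_2d(src, dst):
    """src, dst: (N, 2) matched 2D points. Returns (s, R, t) such that dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)                          # cross-covariance of the matched sets
    u, d, vt = np.linalg.svd(cov)
    s_fix = np.diag([1.0, np.sign(np.linalg.det(u @ vt))])    # guard against reflections
    r = u @ s_fix @ vt
    scale = np.trace(np.diag(d) @ s_fix) / src_c.var(axis=0).sum()
    t = mu_d - scale * r @ mu_s
    return scale, r, t
```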

