Deep Learning for LoD1 Building Reconstruction from Airborne LiDAR Data

Author(s):  
Tee-Ann Teo


Author(s):  
Shenman Zhang ◽  
Jie Shan ◽  
Zhichao Zhang ◽  
Jixing Yan ◽  
Yaolin Hou

A complete building model reconstruction needs data collected from both the air and the ground. The former often has sparse coverage on building façades, while the latter usually cannot observe the building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds corresponding to the buildings in the images. In the next step, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and from the ground image point cloud. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete, full-resolution building model. The developed approach overcomes the sparsity of points on building façades in airborne LiDAR and the lack of rooftops in ground images, such that the merits of both datasets are utilized.
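To make the first step of this pipeline concrete, the sketch below illustrates one way the smartphone GPS metadata could be used to pull out the airborne LiDAR points around a photographed building. It is only a minimal, assumed implementation: the Pillow-based EXIF reading, the 50 m search radius and the function names are illustrative choices, not details taken from the paper.

```python
# Hypothetical sketch: use smartphone EXIF GPS tags to select the airborne
# LiDAR points near the photographed building. Radius and names are
# illustrative assumptions, not values from the paper.
import numpy as np
from PIL import Image
from PIL.ExifTags import GPSTAGS

def read_gps(image_path):
    """Return (lat, lon) in decimal degrees from the image EXIF, if present."""
    exif = Image.open(image_path)._getexif() or {}
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get(34853, {}).items()}  # 34853 = GPSInfo tag

    def to_deg(dms, ref):
        d, m, s = (float(x) for x in dms)            # degrees, minutes, seconds
        sign = -1.0 if ref in ("S", "W") else 1.0
        return sign * (d + m / 60.0 + s / 3600.0)

    return (to_deg(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_deg(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

def crop_lidar_around(points_xy, camera_xy, radius=50.0):
    """Keep LiDAR points within `radius` metres of the camera position
    (both assumed to be expressed in the same projected coordinate system)."""
    dist = np.linalg.norm(points_xy - np.asarray(camera_xy), axis=1)
    return points_xy[dist < radius]
```

In practice the compass heading stored in the EXIF could further narrow the selection to the building actually facing the camera; the sketch only shows the positional filtering.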


Author(s):  
E. Janssens-Coron ◽  
E. Guilbert

Abstract. Airborne lidar data is commonly used to generate point clouds over large areas. These points can be classified into different categories such as ground, building, vegetation, etc. The first step is to separate ground points from non-ground points. Existing methods rely mainly on TIN densification, but their performance varies with the type of terrain and depends on the experience of the user, who adjusts the parameters accordingly. An alternative may be a deep learning approach that would limit user intervention. Hence, in this paper, we assess a deep learning architecture, PointNet, that applies directly to point clouds. Our preliminary results show mixed classification rates and point to issues with the choices made during preprocessing; further investigation is required to properly train the system and improve its robustness. Nonetheless, our analysis suggests that it is necessary to enrich the architecture of the network to integrate the notion of neighbourhood at different scales in order to increase the accuracy and robustness of the processing as well as its capacity to handle data from different geographical contexts.
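For readers unfamiliar with the architecture, the sketch below outlines a PointNet-style per-point classifier for the ground/non-ground labelling task described above. The layer sizes, the two-class output and the PyTorch framing are assumptions for illustration; the paper's exact configuration, preprocessing and training setup are not reproduced here.

```python
# Minimal sketch (an assumption, not the authors' exact setup) of a
# PointNet-style per-point classifier for ground / non-ground labelling.
# Input: a batch of point patches of shape (B, N, 3) holding x, y, z only.
import torch
import torch.nn as nn

class PointNetSeg(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Shared per-point MLPs implemented with 1x1 convolutions.
        self.mlp1 = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU())
        self.mlp2 = nn.Sequential(
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU())
        # Per-point head combining local features with the global descriptor.
        self.head = nn.Sequential(
            nn.Conv1d(1024 + 64, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, num_classes, 1))

    def forward(self, pts):                                  # pts: (B, N, 3)
        x = pts.transpose(1, 2)                              # (B, 3, N)
        local = self.mlp1(x)                                 # per-point 64-d features
        feat = self.mlp2(local)                              # per-point 1024-d features
        global_feat = feat.max(dim=2, keepdim=True).values   # symmetric max pooling
        global_feat = global_feat.expand(-1, -1, pts.shape[1])
        return self.head(torch.cat([local, global_feat], dim=1))  # (B, classes, N)

# Example: logits = PointNetSeg()(torch.randn(4, 4096, 3))  # one score pair per point
```

The max-pooling step is what makes the network order-invariant over the points; the abstract's suggestion of multi-scale neighbourhoods would extend this design with local grouping, which the sketch deliberately omits.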


Author(s):  
S. C. L. Ribeiro ◽  
M. Jarzabek-Rychard ◽  
J. P. Cintra ◽  
H.-G. Maas

Abstract. Cadastral mapping of a favela's agglomerated buildings in informal settlements at Level of Detail 1 (LoD1) usually requires specific surveys and extensive manual data processing. There is therefore a demand for including favelas in city map production on the basis of Lidar surveys, as well as for detecting their vertical growth. However, the algorithms currently developed for automatically extracting buildings from airborne Lidar data have mainly been tested on regular building reconstruction. This study aims to develop a Lidar data processing pipeline that computes metrics for intra-urban informal settlements. To do so, we present a procedure to generate favela building delineations, heights, floor counts and built areas, and apply it to six case studies covering different favela typo-morphologies. We conducted an exploratory analysis to obtain adequate parameters for the processing pipeline and its evaluation, using open-source, free-license and self-developed software. The results are compared to reference data from manual stereo plotting, achieving a building reconstruction quality index of about 70%. We also calculated the growth density, measured by the gross Floor Area Ratio inside each settlement, revealing values from 29% to 74% over different time periods.
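As a small illustration of the derived metrics, the sketch below estimates a building's floor count from its LiDAR-derived height and aggregates a gross Floor Area Ratio (FAR) over a settlement. The 2.5 m assumed floor height and the example numbers are illustrative and not taken from the study.

```python
# Illustrative sketch of two derived metrics mentioned above: floor count from
# building height, and gross Floor Area Ratio (FAR) for a settlement.
# The 2.5 m floor height and the example values are assumptions for demonstration.
def floor_count(building_height_m, floor_height_m=2.5):
    """Estimate the number of floors from a LiDAR-derived building height in metres."""
    return max(1, round(building_height_m / floor_height_m))

def gross_far(footprint_areas_m2, heights_m, settlement_area_m2, floor_height_m=2.5):
    """Gross FAR = total built floor area / settlement area."""
    gross_floor_area = sum(
        area * floor_count(h, floor_height_m)
        for area, h in zip(footprint_areas_m2, heights_m))
    return gross_floor_area / settlement_area_m2

# Example with made-up numbers: three buildings inside a 1000 m2 settlement.
print(gross_far([60.0, 45.0, 80.0], [3.0, 5.5, 8.0], 1000.0))  # -> 0.39, i.e. 39%
```

Comparing the FAR computed from surveys at different dates is what yields the vertical-growth figures reported in the abstract.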

