GROUND POINT FILTERING FROM AIRBORNE LIDAR POINT CLOUDS USING DEEP LEARNING: A PRELIMINARY STUDY

Author(s):  
E. Janssens-Coron
E. Guilbert

Abstract. Airborne lidar data is commonly used to generate point clouds over large areas. These points can be classified into different categories such as ground, building, vegetation, etc. The first step in this process is to separate ground points from non-ground points. Existing methods rely mainly on TIN densification, but their performance varies with the type of terrain and depends on the experience of the user, who adjusts parameters accordingly. An alternative is a deep learning approach, which would limit user intervention. Hence, in this paper, we assess a deep learning architecture, PointNet, that applies directly to point clouds. Our preliminary results show mixed classification rates, and further investigation is required to properly train the system and improve its robustness, revealing issues with the choices made in preprocessing. Nonetheless, our analysis suggests that the network architecture must be enriched to integrate the notion of neighbourhood at different scales in order to increase the accuracy and robustness of the processing, as well as its capacity to handle data from different geographical contexts.
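The preprocessing the abstract alludes to can be sketched minimally: PointNet-style networks consume fixed-size, centred point sets, so each lidar tile is typically subsampled to a fixed count and translated to its centroid before classification. The sketch below is illustrative only; the function name and block size are hypothetical, not taken from the paper.

```python
import random

def make_pointnet_block(points, n_points=4096, seed=0):
    """Prepare one lidar tile for a PointNet-style classifier:
    sample a fixed-size subset and centre it on its centroid.
    Illustrative preprocessing only; names are hypothetical."""
    rng = random.Random(seed)
    # Sample with replacement so small tiles still yield n_points.
    sample = [rng.choice(points) for _ in range(n_points)]
    cx = sum(p[0] for p in sample) / n_points
    cy = sum(p[1] for p in sample) / n_points
    cz = sum(p[2] for p in sample) / n_points
    return [(x - cx, y - cy, z - cz) for x, y, z in sample]
```

Centring removes the absolute-coordinate offset of each tile, which is one of the preprocessing choices the abstract suggests can affect training.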

2020
Vol 7 (1)
Author(s):  
Wuming Zhang
Shangshu Cai
Xinlian Liang
Jie Shao
Ronghai Hu
...  

Abstract Background The universal occurrence of randomly distributed dark holes (i.e., data pits appearing within the tree crown) in LiDAR-derived canopy height models (CHMs) negatively affects the accuracy of extracted forest inventory parameters. Methods We develop an algorithm based on cloth simulation for constructing a pit-free CHM. Results The proposed algorithm effectively fills data pits of various sizes whilst preserving canopy details. Our pit-free CHMs derived from point clouds with different proportions of data pits are markedly better than those constructed using other algorithms, as evidenced by the lowest average root mean square error (0.4981 m) between the reference CHMs and the constructed pit-free CHMs. Moreover, our pit-free CHMs show the best overall performance in maximum tree height estimation (average bias = 0.9674 m). Conclusion The proposed algorithm can be adopted when working with LiDAR data of different quality and shows high potential in forestry applications.
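To make the notion of a "data pit" concrete: in a gridded CHM, a pit is a cell that drops far below its local canopy surface. The sketch below detects and fills such cells with a naive neighbourhood-median rule; it is purely illustrative and is not the cloth-simulation algorithm the paper proposes.

```python
def fill_pits(chm, depth=2.0):
    """Naively fill data pits in a gridded CHM: an interior cell sitting
    more than `depth` metres below the median of its 8 neighbours is
    treated as a pit and replaced by that median.  Illustrative only;
    the paper's method uses cloth simulation instead."""
    rows, cols = len(chm), len(chm[0])
    out = [row[:] for row in chm]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            nbrs = sorted(chm[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0))
            med = (nbrs[3] + nbrs[4]) / 2.0  # median of the 8 neighbours
            if med - chm[i][j] > depth:
                out[i][j] = med
    return out
```

A median rule like this tends to flatten fine canopy detail, which is exactly the weakness the cloth-simulation approach is designed to avoid.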


2021
Vol 13 (18)
pp. 3766
Author(s):  
Zhenyang Hui
Zhuoxuan Li
Penggen Cheng
Yao Yevenyo Ziggah
JunLin Fan

Building extraction from airborne Light Detection and Ranging (LiDAR) point clouds is a significant step in the process of digital urban construction. Although existing building extraction methods perform well in simple urban environments, they cannot achieve satisfactory results in complicated city environments with irregular building shapes or varying building sizes. To address these challenges, this paper proposes a building extraction method for airborne LiDAR data based on multi-constraints graph segmentation. The proposed method converts point-based building extraction into object-based building extraction through multi-constraints graph segmentation. Initial building points are then extracted according to the spatial geometric features of the different object primitives. Finally, a multi-scale progressive growth optimization method is applied to recover omitted building points and improve the completeness of the extraction. The proposed method was tested and validated on three datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that, in terms of both average quality and average F1 score, the proposed method outperformed the ten other building extraction methods investigated.
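The core idea of converting point-based extraction into object-based extraction is to group points into primitives first and classify the groups. A toy stand-in for that grouping step is shown below, using a single distance constraint and union-find over 2D points; the paper's actual segmentation combines multiple constraints, so this sketch is only a simplified illustration.

```python
def segment_points(points, max_dist=1.0):
    """Group 2D points into object primitives under a single distance
    constraint: points closer than max_dist end up in the same segment.
    A toy stand-in for the paper's multi-constraints graph segmentation."""
    parent = list(range(len(points)))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= max_dist ** 2:
                parent[find(i)] = find(j)  # merge the two segments
    return [find(i) for i in range(len(points))]
```

Once points carry segment labels, per-segment geometric features (planarity, area, height) can be computed and whole segments accepted or rejected as buildings, which is the object-based decision the abstract describes.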


Author(s):  
Shenman Zhang ◽  
Jie Shan ◽  
Zhichao Zhang ◽  
Jixing Yan ◽  
Yaolin Hou

A complete building model reconstruction requires data collected from both the air and the ground. The former often has sparse coverage of building façades, while the latter usually cannot observe building rooftops. To address the missing-data issue in building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we find the airborne LiDAR point clouds corresponding to the buildings in the images. Next, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground image point cloud, respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately yields a complete, full-resolution building model. The developed approach overcomes the sparsity of points on building façades in airborne LiDAR and the absence of rooftops in ground images, so that the merits of both datasets are utilized.
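The registration step, given corresponding outline vertices from the two point clouds, amounts to estimating a rigid transform between them. In 2D this has a closed-form least-squares solution, sketched below; this is an illustrative version of the idea, not the authors' implementation.

```python
import math

def align_outlines(src, dst):
    """Estimate the 2D rotation + translation mapping corresponding
    outline vertices `src` onto `dst` (closed-form least squares).
    Illustrative of the registration step; not the authors' code."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - sx, ay - sy, bx - dx, by - dy
        num += ax * by - ay * bx      # cross terms -> sine
        den += ax * bx + ay * by      # dot terms   -> cosine
    theta = math.atan2(num, den)      # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, (tx, ty)
```

Applying the recovered rotation and translation to the ground-image point cloud brings it into the airborne LiDAR frame, after which the two clouds can simply be merged.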



