Parameter-Free Half-Spaces Based 3D Building Reconstruction Using Ground and Segmented Building Points from Airborne LiDAR Data with 2D Outlines

2021, Vol. 13 (21), pp. 4430
Author(s):
Marko Bizjak, Borut Žalik, Niko Lukač

This paper aims to automatically reconstruct 3D building models on a large scale using a new approach based on half-spaces, while making no assumptions about the building layout and keeping the number of input parameters to a minimum. The proposed algorithm is performed in two stages. First, the airborne LiDAR data and buildings' outlines are preprocessed to generate the buildings' base models and the corresponding half-spaces. In the second stage, the half-spaces are analysed and used for shaping the final 3D building model using 3D Boolean operations. In experiments, the proposed algorithm was applied on a large scale, and its performance was inspected at the city level and at the single-building level. Accurate reconstruction of buildings with various layouts was demonstrated, and limitations were identified for large-scale applications. Finally, the proposed algorithm was validated on an ISPRS benchmark dataset, where an RMSE of 1.31 m and a completeness of 98.9% were obtained.
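As an illustration of the half-space idea, the following minimal Python sketch (not the authors' implementation; all geometry and thresholds are invented for the example) represents a building base model as an intersection of half-spaces and shows how an additional roof plane clips the model, which is the effect the 3D Boolean operations achieve:

import numpy as np

def halfspace(normal, point_on_plane):
    """Return (n, d) for the half-space {x : n . x <= d}."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = float(n @ np.asarray(point_on_plane, dtype=float))
    return n, d

def inside(halfspaces, x, eps=1e-9):
    """True if point x lies inside the intersection of all half-spaces."""
    x = np.asarray(x, dtype=float)
    return all(n @ x <= d + eps for n, d in halfspaces)

# Axis-aligned 10 m x 8 m footprint extruded to 6 m, written as six half-spaces.
box = [
    halfspace([-1, 0, 0], [0, 0, 0]), halfspace([1, 0, 0], [10, 0, 0]),
    halfspace([0, -1, 0], [0, 0, 0]), halfspace([0, 1, 0], [0, 8, 0]),
    halfspace([0, 0, -1], [0, 0, 0]), halfspace([0, 0, 1], [0, 0, 6]),
]
# A sloped plane derived from segmented roof points enters as one more
# half-space; subtracting volume corresponds to negating a half-space.
roof = halfspace([0, 0.5, 1], [0, 0, 5])
print(inside(box + [roof], [5, 2, 3]))    # True: inside the clipped model
print(inside(box + [roof], [5, 4, 5.9]))  # False: cut away by the roof plane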

Author(s):
N. Yastikli, Z. Cetin

LiDAR systems have become increasingly popular because of their potential for obtaining point clouds of vegetation and man-made objects on the Earth's surface accurately and quickly. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, map revision, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for many studies that involve building modelling. In this study, the automatic generation of 3D building models from airborne LiDAR data is targeted. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results using different test areas identified in the study area. The proposed approach was tested in a study area in Zekeriyakoy, Istanbul, which contains partly open areas, forest areas and many types of buildings, using the TerraScan module of TerraSolid. The 3D building models were generated automatically using the results of the automatic point-based classification. The results obtained for the study area verify that 3D building models can be generated automatically and successfully from raw LiDAR point cloud data.
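The hierarchical rules themselves are not listed in the abstract; the following Python sketch is only an assumed, simplified example of point-based classification by sequential rules on height above ground and local planarity (all thresholds are illustrative, and the actual study uses the TerraScan module of TerraSolid):

import numpy as np

def classify(points, ground_z, planarity, low_thr=0.5, veg_thr=2.0, plan_thr=0.8):
    """points: (N, 3) xyz array; ground_z: per-point ground elevation;
    planarity: per-point local planarity in [0, 1]; thresholds are illustrative."""
    height = points[:, 2] - ground_z
    labels = np.full(len(points), "unclassified", dtype=object)
    labels[height <= low_thr] = "ground"                                 # rule 1: near the ground
    labels[(height > low_thr) & (planarity < plan_thr)] = "vegetation"   # rule 2: rough surfaces
    labels[(height > veg_thr) & (planarity >= plan_thr)] = "building"    # rule 3: high and planar
    return labels

pts = np.array([[0, 0, 100.2], [1, 0, 103.5], [2, 0, 108.0]])
print(classify(pts,
               ground_z=np.array([100.0, 100.0, 100.0]),
               planarity=np.array([0.9, 0.3, 0.95])))
# -> ['ground' 'vegetation' 'building']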


Author(s):
Z. Li, W. Zhang, J. Shan

Abstract. Building models are conventionally reconstructed by segmenting building roof points into planes and then grouping the planes with a topology graph. Roof edges and vertices are then mathematically represented by intersecting the segmented planes. Technically, such a solution is based on sequential local fitting, i.e., the entire data of one building do not simultaneously participate in determining the building model. As a consequence, the solution lacks topological integrity and geometric rigor. Fundamentally different from this traditional approach, we propose a holistic parametric reconstruction method, which takes the entire point cloud of one building into consideration simultaneously. In our work, building models are reconstructed from predefined parametric (roof) primitives. We first use a well-designed deep neural network to segment and identify primitives in the given building point clouds. A holistic optimization strategy is then introduced to simultaneously determine the parameters of each segmented primitive. In the last step, the optimal parameters are used to generate a watertight building model in CityGML format. The airborne LiDAR dataset RoofN3D, with predefined roof types, is used for our test. It is shown that PointNet++ applied to the entire dataset can achieve an accuracy of 83% for primitive classification. For a subset of 910 buildings in RoofN3D, the holistic approach is then used to determine the parameters of the primitives and reconstruct the buildings. The achieved overall quality of reconstruction is 0.08 m in terms of point-to-surface distance, or 0.7 times the RMSE of the input LiDAR points. This study demonstrates the efficiency and capability of the proposed approach and its potential to handle large-scale urban point clouds.
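The following Python sketch illustrates the holistic idea under simplifying assumptions (a synthetic symmetric gable primitive and a SciPy least-squares solver, not the paper's optimizer): all roof points of one building jointly constrain the primitive parameters, rather than each plane being fitted separately:

import numpy as np
from scipy.optimize import least_squares

def gable_z(params, xy, width=10.0):
    """Roof height at (x, y) for a symmetric gable with the ridge along x;
    params = (eave_height, ridge_height); the footprint width is fixed here."""
    eave, ridge = params
    dist = np.abs(xy[:, 1] - width / 2.0)           # distance from the ridge line
    return ridge - (ridge - eave) * dist / (width / 2.0)

def residuals(params, xyz):
    return gable_z(params, xyz[:, :2]) - xyz[:, 2]  # signed point-to-surface error

rng = np.random.default_rng(0)
xy = rng.uniform([0.0, 0.0], [20.0, 10.0], size=(500, 2))
z = gable_z((6.0, 9.0), xy) + rng.normal(0.0, 0.05, 500)   # noisy synthetic roof points
pts = np.column_stack([xy, z])

fit = least_squares(residuals, x0=(5.0, 8.0), args=(pts,))
print(np.round(fit.x, 2))              # recovered (eave, ridge), close to (6, 9)
print(np.sqrt(np.mean(fit.fun ** 2)))  # point-to-surface RMSE of the holistic fit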


Author(s):
Y. He, C. Zhang, C. S. Fraser

This paper presents an automated approach to the extraction of building footprints from airborne LiDAR data based on energy minimization. Automated 3D building reconstruction in complex urban scenes has been a long-standing challenge in photogrammetry and computer vision. Building footprints constitute a fundamental component of a 3D building model, and they are useful for a variety of applications. Airborne LiDAR provides a large-scale elevation representation of the urban scene and as such is an important data source for object reconstruction in spatial information systems. However, LiDAR points on building edges often exhibit a jagged pattern, due partially either to occlusion from neighbouring objects, such as overhanging trees, or to the nature of the data itself, including unavoidable noise and irregular point distributions. Explicit 3D reconstruction may thus result in irregular or incomplete building polygons. In the presented work, a vertex-driven Douglas-Peucker method is developed to generate polygonal hypotheses from the points forming initial building outlines. An energy function is adopted to examine and evaluate each hypothesis, and the optimal polygon is determined through energy minimization. The energy minimization also plays a key role in bridging gaps where the building outlines are ambiguous due to insufficient LiDAR points. In formulating the energy function, hard constraints such as parallelism and perpendicularity of building edges are imposed, and local and global adjustments are applied. The developed approach has been extensively tested and evaluated on datasets with varying point cloud density over different terrain types. Results are presented and analysed. The successful reconstruction of building footprints of varying structural complexity, along with a quantitative assessment employing accurate reference data, demonstrates the practical potential of the proposed approach.
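A minimal Python sketch of the two ingredients named in the abstract is given below; the Douglas-Peucker simplification is standard, whereas the energy used here (mean point-to-vertex distance plus a vertex-count penalty) is only an assumed stand-in for the paper's energy function with parallelism and perpendicularity constraints:

import numpy as np

def douglas_peucker(pts, tol):
    """Recursively simplify an (N, 2) polyline, keeping vertices farther than tol."""
    if len(pts) < 3:
        return pts
    v = pts[-1] - pts[0]
    w = pts - pts[0]
    d = np.abs(v[0] * w[:, 1] - v[1] * w[:, 0]) / (np.linalg.norm(v) + 1e-12)
    idx = int(np.argmax(d))
    if d[idx] > tol:
        left = douglas_peucker(pts[:idx + 1], tol)
        right = douglas_peucker(pts[idx:], tol)
        return np.vstack([left[:-1], right])
    return np.vstack([pts[0], pts[-1]])

def energy(outline_pts, polygon, lam=0.5):
    """Data term: mean distance of outline points to the nearest polygon vertex
    (a crude stand-in); complexity term: vertex count weighted by lam."""
    d = np.min(np.linalg.norm(outline_pts[:, None] - polygon[None], axis=2), axis=1)
    return d.mean() + lam * len(polygon)

outline = np.array([[0, 0], [2, 0.1], [4, -0.1], [6, 0], [6, 3], [6, 6], [0, 6]], float)
hypotheses = [douglas_peucker(outline, tol) for tol in (0.05, 0.2, 1.0)]
best = min(hypotheses, key=lambda p: energy(outline, p))
print(best)   # the minimum-energy polygon hypothesis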


Author(s):
Shenman Zhang, Jie Shan, Zhichao Zhang, Jixing Yan, Yaolin Hou

A complete building model reconstruction needs data collected from both the air and the ground. The former often has sparse coverage of building façades, while the latter usually cannot observe the building rooftops. To address the missing-data issues of building reconstruction from a single data source, we describe an approach for complete building reconstruction that integrates airborne LiDAR data and ground smartphone imagery. First, by taking advantage of the GPS and digital compass information embedded in the image metadata of smartphones, we are able to find the airborne LiDAR point clouds of the buildings corresponding to the images. In the next step, Structure-from-Motion and dense multi-view stereo algorithms are applied to generate a building point cloud from multiple ground images. The third step extracts building outlines from the LiDAR point cloud and the ground-image point cloud, respectively. An automated correspondence between these two sets of building outlines allows us to achieve a precise registration and combination of the two point clouds, which ultimately results in a complete, full-resolution building model. The developed approach overcomes the problem of sparse points on building façades in airborne LiDAR and the lack of rooftops in ground images, so that the merits of both datasets are utilized.
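The registration step can be pictured with the following Python sketch, which estimates a 2D similarity transform from matched outline corners via SVD (a generic Umeyama-style solution used here for illustration; the paper's automated correspondence and registration procedure may differ):

import numpy as np

def similarity_from_correspondences(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    such that s * R @ src_i + t ~= dst_i, for matched (N, 2) corner arrays."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.ones(2)
    if np.linalg.det(U @ Vt) < 0:      # guard against a reflection solution
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt
    s = (S * d).sum() / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Matched outline corners: src from the ground-image cloud, dst from the LiDAR cloud.
src = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], float)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = 2.0 * src @ R_true.T + np.array([10.0, -5.0])

s, R, t = similarity_from_correspondences(src, dst)
print(np.round(s, 3), np.round(t, 3))  # recovers scale 2.0 and translation [10, -5]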


2020, Vol. 12 (21), pp. 3506
Author(s):
Nuria Sanchez-Lopez, Luigi Boschetti, Andrew T. Hudak, Steven Hancock, Laura I. Duncanson

Stand-level maps of past forest disturbances (expressed as time since disturbance, TSD) are needed to model forest ecosystem processes, but the conventional approaches based on remotely sensed satellite data can only extend as far back as the first available satellite observations. Stand-level analysis of airborne LiDAR data has been demonstrated to accurately estimate long-term TSD (~100 years), but large-scale coverage of airborne LiDAR remains costly. NASA’s spaceborne LiDAR Global Ecosystem Dynamics Investigation (GEDI) instrument, launched in December 2018, is providing billions of measurements of tropical and temperate forest canopies around the globe. GEDI is a spatial sampling instrument and, as such, does not provide wall-to-wall data. GEDI’s lasers illuminate ground footprints, which are separated by ~600 m across-track and ~60 m along-track, so new approaches are needed to generate wall-to-wall maps from the discrete measurements. In this paper, we studied the feasibility of a data fusion approach between GEDI and Landsat for wall-to-wall mapping of TSD. We tested the methodology on a ~52,500-ha area located in central Idaho (USA), where an extensive record of stand-replacing disturbances is available, starting in 1870. GEDI data were simulated over the nominal two-year planned mission lifetime from airborne LiDAR data and used for TSD estimation using a random forest (RF) classifier. Image segmentation was performed on Landsat-8 data, obtaining image-objects representing forest stands needed for the spatial extrapolation of estimated TSD from the discrete GEDI locations. We quantified the influence of (1) the forest stand map delineation, (2) the sample size of the training dataset, and (3) the number of GEDI footprints per stand on the accuracy of estimated TSD. The results show that GEDI-Landsat data fusion would allow for TSD estimation in stands covering ~95% of the study area, having the potential to reconstruct the long-term disturbance history of temperate even-aged forests with accuracy (median root mean square deviation = 22.14 years, median BIAS = 1.70 years, 60.13% of stands classified within 10 years of the reference disturbance date) comparable to the results obtained in the same study area with airborne LiDAR.
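The fusion logic can be sketched as follows in Python; the footprint metrics, labels and stand assignments are synthetic and purely illustrative, but the two steps mirror the described workflow: a random forest predicts TSD at discrete GEDI footprints, and each Landsat-derived stand receives the majority vote of the footprints it contains:

import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000
tsd_bin = rng.integers(0, 10, n)                        # TSD labels in 10-year bins
# Hypothetical footprint metrics loosely tied to TSD: canopy height, cover, a noise band.
X = np.column_stack([
    5.0 + 2.5 * tsd_bin + rng.normal(0, 2, n),
    np.clip(0.1 * tsd_bin + rng.normal(0, 0.1, n), 0, 1),
    rng.normal(0, 1, n),
])

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X[:800], tsd_bin[:800])                          # train on reference footprints
footprint_pred = rf.predict(X[800:])                    # predict TSD at remaining footprints

# Spatial extrapolation: each stand (image-object) takes the majority vote of its footprints.
stand_id = rng.integers(0, 20, len(footprint_pred))
stand_tsd = {s: Counter(footprint_pred[stand_id == s]).most_common(1)[0][0]
             for s in np.unique(stand_id)}
print(dict(list(stand_tsd.items())[:5]))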


2021
Author(s):
Yipeng Yuan

Demand for three-dimensional (3D) urban models keeps growing in various civil and military applications. Topographic LiDAR systems are capable of acquiring elevation data directly over terrain features. However, the task of creating a large-scale virtual environment still remains time-consuming, manual work. In this thesis, a method for 3D building reconstruction directly from LiDAR point clouds is developed, consisting of building roof detection, roof outline extraction and regularization, and 3D building model generation. In the proposed approach, a new algorithm based on a Gaussian Markov Random Field (GMRF) and Markov Chain Monte Carlo (MCMC) is used to segment point clouds for building roof detection. The modified convex hull (MCH) algorithm is used for the extraction of roof outlines, followed by regularization of the extracted outlines using a modified hierarchical regularization algorithm. Finally, 3D building models are generated in an ArcGIS environment. The results obtained demonstrate the effectiveness and satisfactory accuracy of the developed method.
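The outline-extraction and regularization steps can be illustrated with the Python sketch below, where a plain convex hull and an axis-aligned bounding box in the principal-axis frame stand in for the thesis's modified convex hull and hierarchical regularization (data and simplifications are assumptions for the example):

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
roof_xy = rng.uniform([0, 0], [12, 8], size=(300, 2))   # (x, y) of a segmented roof cluster

hull = ConvexHull(roof_xy)
outline = roof_xy[hull.vertices]                         # ordered outline vertices
print(len(outline), "outline vertices")

# Simplified regularization: rotate the outline into its principal axes and
# take the axis-aligned bounding box as the regularized footprint.
centered = outline - outline.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
aligned = centered @ Vt.T
extent = aligned.max(axis=0) - aligned.min(axis=0)
print("regularized footprint extent:", np.round(extent, 2))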


2013, Vol. 30 (10), pp. 2452-2464
Author(s):
J. H. Middleton, C. G. Cooke, E. T. Kearney, P. J. Mumford, M. A. Mole, ...

Abstract Airborne scanning laser technology provides an effective method to systematically survey surface topography and changes in that topography with time. In this paper, the authors describe the capability of a rapid-response lidar system in which airborne observations are utilized to describe results from a set of surveys of Narrabeen–Collaroy Beach, Sydney, New South Wales, Australia, over a short period of time during which significant erosion and deposition of the subaerial beach occurred. The airborne lidar data were obtained using a Riegl Q240i lidar coupled with a NovAtel SPAN-CPT integrated Global Navigation Satellite System (GNSS) and inertial unit and flown at various altitudes. A set of the airborne lidar data is compared with ground-truth data acquired from the beach using a GNSS/real-time kinematic (RTK) system mounted on an all-terrain vehicle. The comparison shows consistency between systems, with the airborne lidar data being less than 0.02 m different from the ground-truth data when four surveys are undertaken, provided a method of removing outliers—developed here and designated as “weaving”—is used. The combination of airborne lidar data with ground-truth data provides an excellent method of obtaining high-quality topographic data. Using the results from this analysis, it is shown that airborne lidar data alone produce results that can be used for ongoing large-scale surveys of beaches with reliable accuracy, and that the enhanced accuracy resulting from multiple airborne surveys can be assessed quantitatively.
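A simplified Python sketch of the comparison step is given below: airborne lidar elevations are interpolated at the GNSS/RTK ground-truth positions, and bias/RMSE are reported after a generic robust (MAD-based) outlier filter, which merely stands in for the paper's "weaving" method; all data are synthetic:

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
lidar_xy = rng.uniform(0, 100, size=(5000, 2))
lidar_z = 2.0 + 0.01 * lidar_xy[:, 0] + rng.normal(0, 0.03, 5000)  # synthetic beach surface

gt_xy = rng.uniform(10, 90, size=(200, 2))                          # GNSS/RTK track positions
gt_z = 2.0 + 0.01 * gt_xy[:, 0] + rng.normal(0, 0.01, 200)

lidar_at_gt = griddata(lidar_xy, lidar_z, gt_xy, method="linear")   # lidar surface at GT points
diff = lidar_at_gt - gt_z

# Generic robust outlier rejection: keep differences within 3 scaled MADs of the median.
med = np.nanmedian(diff)
mad = 1.4826 * np.nanmedian(np.abs(diff - med))
keep = np.abs(diff - med) < 3 * mad
print("bias (m):", round(float(np.nanmean(diff[keep])), 3),
      " rmse (m):", round(float(np.sqrt(np.nanmean(diff[keep] ** 2))), 3))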

