Accurate Reconstruction of the LoD3 Building Model by Integrating Multi-Source Point Clouds and Oblique Remote Sensing Imagery

2019 · Vol 8 (3) · pp. 135
Author(s):
Xuedong Wen, Hong Xie, Hua Liu, Li Yan

3D urban building models, which provide 3D information services for urban planning, management and operational decision-making, are essential for constructing digital cities. Unfortunately, existing reconstruction approaches for LoD3 building models lack model detail and entail a heavy workload, and accordingly they cannot satisfy the urgent requirements of realistic applications. In this paper, we propose an accurate LoD3 building reconstruction method that integrates multi-source laser point clouds and oblique remote sensing imagery. By combining high-precision plane features extracted from point clouds with accurate boundary constraint features from oblique images, the building mainframe model, which provides an accurate reference for further editing, is quickly and automatically constructed. Experimental results show that the proposed reconstruction method outperforms existing manual and automatic reconstruction methods using both point clouds and oblique images in terms of reconstruction efficiency and spatial accuracy.
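The plane-feature extraction step this abstract mentions is typically done with a robust estimator such as RANSAC. The sketch below is an illustrative, stdlib-only version of that idea, not the authors' implementation; the synthetic facade data, thresholds, and iteration count are all assumptions.

```python
# Minimal RANSAC plane extraction from a point cloud (illustrative sketch).
import random

def fit_plane(p1, p2, p3):
    # Plane normal from the cross product of two edge vectors.
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    norm = sum(c*c for c in n) ** 0.5
    if norm == 0:
        return None  # degenerate (collinear) sample
    n = tuple(c / norm for c in n)
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, dist_thresh=0.05, iters=200, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i]*p[i] for i in range(3)) + d) < dist_thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Synthetic roof: 80 points near the plane z = 1, plus 20 scattered outliers.
rng = random.Random(1)
pts = [(rng.random(), rng.random(), 1 + rng.uniform(-0.01, 0.01)) for _ in range(80)]
pts += [(rng.random(), rng.random(), rng.uniform(0, 3)) for _ in range(20)]
(n, d), inliers = ransac_plane(pts)
```

In a full pipeline this would run repeatedly, removing each detected plane's inliers before searching for the next plane.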

Author(s):
Z. Li, W. Zhang, J. Shan

Abstract. Building models are conventionally reconstructed by segmenting building roof points into planes and then using a topology graph to group the planes together. Roof edges and vertices are then mathematically represented by intersecting the segmented planes. Technically, such a solution is based on sequential local fitting, i.e., the entire data of one building do not simultaneously participate in determining the building model. As a consequence, the solution lacks topological integrity and geometric rigor. Fundamentally different from this traditional approach, we propose a holistic parametric reconstruction method that considers the entire point cloud of one building simultaneously. In our work, building models are reconstructed from predefined parametric (roof) primitives. We first use a well-designed deep neural network to segment and identify primitives in the given building point clouds. A holistic optimization strategy is then introduced to simultaneously determine the parameters of a segmented primitive. In the last step, the optimal parameters are used to generate a watertight building model in CityGML format. The airborne LiDAR dataset RoofN3D with predefined roof types is used for our test. It is shown that PointNet++ applied to the entire dataset can achieve an accuracy of 83% for primitive classification. For a subset of 910 buildings in RoofN3D, the holistic approach is then used to determine the parameters of primitives and reconstruct the buildings. The achieved overall quality of reconstruction is 0.08 meters for point-surface distance, or 0.7 times the RMSE of the input LiDAR points. This study demonstrates the efficiency and capability of the proposed approach and its potential to handle large-scale urban point clouds.
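The contrast the abstract draws is between intersecting locally fitted planes and estimating a primitive's parameters from all of its points at once. A toy version of the latter, for the simplest possible primitive (a single sloped roof plane z = a*x + b*y + c fitted by least squares over the whole segment), is sketched below; a real gable or hip primitive would add parameters, but the one-global-fit-per-primitive idea is the same. The data and noise level are invented.

```python
# Holistic least-squares fit of a single-plane roof primitive (toy example).
import random

def fit_plane_lsq(points):
    # Normal equations A^T A x = A^T z for design-matrix rows (x, y, 1).
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x*x; sxy += x*y; sx += x
        syy += y*y; sy += y; n += 1
        sxz += x*z; syz += y*z; sz += z
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    # Solve the 3x3 system by Cramer's rule.
    D = det3(M)
    sol = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = rhs[r]
        sol.append(det3(Mc) / D)
    return sol  # [a, b, c]

rng = random.Random(0)
pts = [(x, y, 0.1*x + 0.2*y + 5 + rng.uniform(-0.02, 0.02))
       for x in range(10) for y in range(10)]
a, b, c = fit_plane_lsq(pts)
```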


2021 · Vol 13 (6) · pp. 1107
Author(s):
Linfu Xie, Han Hu, Qing Zhu, Xiaoming Li, Shengjun Tang, ...

Three-dimensional (3D) building models play an important role in digital cities and have numerous potential applications in environmental studies. In recent years, photogrammetric point clouds obtained from aerial oblique images have become a major source of data for 3D building reconstruction. Aiming at reconstructing a 3D building model at Level of Detail (LoD) 2, and even LoD3, with preferred geometric accuracy and affordable computational expense, in this paper we propose a novel method for the efficient reconstruction of building models from photogrammetric point clouds, which combines rule-based and hypothesis-based methods in a two-stage topological recovery process. Given the point cloud of a single building, planar primitives and their corresponding boundaries are extracted and regularized to obtain abstracted building contours. In the first stage, we take advantage of the regularity and adjacency of the building contours to recover part of the topological relationships between different primitives. Three constraints, namely the pairwise constraint, triplet constraint, and nearby constraint, are utilized to form an initial reconstruction with candidate faces in ambiguous areas. In the second stage, the topologies in ambiguous areas are removed and reconstructed by solving an integer linear optimization problem based on the initial constraints while considering the degree of data fitting. Experiments using real datasets reveal that, compared with state-of-the-art methods, the proposed method can efficiently reconstruct 3D building models in seconds with decimeter-level geometric accuracy.
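The second-stage selection step can be pictured as a binary optimization: each candidate face gets a 0/1 variable, the objective rewards data fitting and penalizes model complexity, and hard constraints rule out incompatible face pairs. The paper poses this as an integer linear program; the toy stand-in below brute-forces the same structure over a handful of faces. The scores, weight, and constraint are invented for illustration.

```python
# Toy candidate-face selection: maximize fit minus complexity under
# pairwise incompatibility constraints (brute force over 2^n assignments).
from itertools import product

def select_faces(fit_scores, incompatible, complexity_weight=0.5):
    n = len(fit_scores)
    best_val, best_sel = float("-inf"), None
    for sel in product([0, 1], repeat=n):
        # Hard constraint: two incompatible candidate faces cannot coexist.
        if any(sel[i] and sel[j] for i, j in incompatible):
            continue
        val = (sum(s * f for s, f in zip(sel, fit_scores))
               - complexity_weight * sum(sel))
        if val > best_val:
            best_val, best_sel = val, sel
    return best_sel, best_val

# Four candidate faces in an ambiguous roof area; faces 1 and 2 overlap,
# so at most one of them may be kept.
scores = [2.0, 1.5, 1.2, 0.3]
sel, val = select_faces(scores, incompatible=[(1, 2)])
```

For realistic face counts an ILP solver replaces the exhaustive loop, but the variables, objective, and constraints carry over unchanged.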


2021 · Vol 10 (5) · pp. 345
Author(s):
Konstantinos Chaidas, George Tataris, Nikolaos Soulakellis

In a post-earthquake scenario, the semantic enrichment of 3D building models with seismic damage is crucial from the perspective of disaster management. This paper presents the methodology and results for Level of Detail 3 (LOD3) building modelling (after an earthquake), enriched with semantics of the seismic damage based on the European Macroseismic Scale (EMS-98). The study area is the Vrisa traditional settlement on the island of Lesvos, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12 June 2017. The applied methodology consists of the following steps: (a) unmanned aircraft systems (UAS) nadir and oblique images are acquired and photogrammetrically processed for 3D point cloud generation, (b) 3D building models are created based on the 3D point clouds and (c) 3D building models are transformed into the LOD3 City Geography Markup Language (CityGML) standard with enriched semantics of the related seismic damage of every part of the building (walls, roof, etc.). The results show that, by following this methodology, CityGML LOD3 models can be generated and enriched with buildings' seismic damage. These models can assist in the decision-making process during the recovery phase of a settlement as well as serve as the basis for its monitoring over time. Finally, these models can contribute to the estimation of the reconstruction cost of the buildings.
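The enrichment step amounts to attaching an EMS-98 damage grade to each building element of a model. The sketch below shows that pairing in plain Python; the element names and assessed grades are invented example data, while the grade labels follow the EMS-98 scale (1 = negligible to slight damage up to 5 = destruction).

```python
# Attach EMS-98 damage grades to building elements (illustrative data).
EMS98 = {1: "negligible to slight", 2: "moderate", 3: "substantial to heavy",
         4: "very heavy", 5: "destruction"}

def enrich(elements, damage):
    # Pair every building element (wall, roof, ...) with its assessed grade.
    return {name: {"grade": damage[name], "label": EMS98[damage[name]]}
            for name in elements}

model = enrich(["wall_north", "wall_south", "roof"],
               {"wall_north": 3, "wall_south": 2, "roof": 4})
worst = max(part["grade"] for part in model.values())
```

In a CityGML LOD3 model these attributes would live on the corresponding boundary surfaces (bldg:WallSurface, bldg:RoofSurface) rather than in a plain dictionary.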


Author(s):
Y. Sun, M. Shahzad, X. Zhu

This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method (Dabov et al., 2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.
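The gradient-map step above can be illustrated on a tiny synthetic DSM: height jumps at building outlines produce large gradient magnitudes, which then seed the watershed segmentation. The grid, heights, and edge threshold below are assumed values, not the paper's data.

```python
# Gradient-magnitude map of a DSM via central differences (sketch).
def gradient_magnitude(dsm):
    rows, cols = len(dsm), len(dsm[0])
    grad = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = (dsm[r][c + 1] - dsm[r][c - 1]) / 2.0
            gy = (dsm[r + 1][c] - dsm[r - 1][c]) / 2.0
            grad[r][c] = (gx * gx + gy * gy) ** 0.5
    return grad

# 8x8 synthetic DSM: ground at 0 m, a flat-roofed building block at 10 m.
dsm = [[10.0 if 2 <= r <= 5 and 2 <= c <= 5 else 0.0 for c in range(8)]
       for r in range(8)]
grad = gradient_magnitude(dsm)
edges = {(r, c) for r in range(8) for c in range(8) if grad[r][c] > 1.0}
```

Cells well inside the roof or the ground have zero gradient; only the height jump along the building outline survives the threshold.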


2019 · Vol 8 (4) · pp. 193
Author(s):
Hossein Bagheri, Michael Schmitt, Xiaoxiang Zhu

So-called prismatic 3D building models, following level-of-detail (LOD) 1 of the OGC City Geography Markup Language (CityGML) standard, are usually generated automatically by combining building footprints with height values. Typically, high-resolution digital elevation models (DEMs) or dense LiDAR point clouds are used to generate these building models. However, high-resolution LiDAR data are usually not available with extensive coverage, whereas globally available DEM data are often not detailed and accurate enough to provide sufficient input for the modeling of individual buildings. Therefore, this paper investigates the possibility of generating LOD1 building models from both volunteered geographic information (VGI) in the form of OpenStreetMap data and remote sensing-derived geodata improved by multi-sensor and multi-modal DEM fusion techniques or produced by synthetic aperture radar (SAR)-optical stereogrammetry. The results of this study show several things. First, the height information resulting from data fusion is of higher quality than the original data sources. Secondly, the study confirms that simple, prismatic building models can be reconstructed by combining OpenStreetMap building footprints with easily accessible, remote sensing-derived geodata, indicating the potential for application over extensive areas. The building models were created under the assumption of flat terrain at a constant height, which is valid in the selected study area.
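The prismatic LOD1 construction is simple enough to write out directly: a footprint polygon plus one height value, extruded under the flat-terrain assumption the study makes. The footprint coordinates and height below are invented for illustration.

```python
# LOD1-style extrusion: footprint polygon + single height -> prism (sketch).
def shoelace_area(footprint):
    # Signed polygon area; positive for counter-clockwise vertex order.
    a = 0.0
    for (x1, y1), (x2, y2) in zip(footprint, footprint[1:] + footprint[:1]):
        a += x1 * y2 - x2 * y1
    return a / 2.0

def extrude_prism(footprint, ground_h, roof_h):
    base = [(x, y, ground_h) for x, y in footprint]
    top = [(x, y, roof_h) for x, y in footprint]
    volume = abs(shoelace_area(footprint)) * (roof_h - ground_h)
    return base, top, volume

# A 20 m x 10 m OpenStreetMap-like footprint with a 12 m fused height.
footprint = [(0, 0), (20, 0), (20, 10), (0, 10)]
base, top, volume = extrude_prism(footprint, ground_h=0.0, roof_h=12.0)
```

The quality of the result then hinges entirely on the height value, which is why the DEM fusion step matters.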


2021
Author(s):
Yipeng Yuan

Demand for three-dimensional (3D) urban models keeps growing in various civil and military applications. Topographic LiDAR systems are capable of acquiring elevation data directly over terrain features. However, the task of creating a large-scale virtual environment still remains time-consuming, manual work. In this thesis, a method for 3D building reconstruction directly from LiDAR point clouds is developed, consisting of building roof detection, roof outline extraction and regularization, and 3D building model generation. In the proposed approach, a new algorithm combining Gaussian Markov random fields (GMRF) and Markov chain Monte Carlo (MCMC) is used to segment point clouds for building roof detection. The modified convex hull (MCH) algorithm is used for the extraction of roof outlines, followed by regularization of the extracted outlines using a modified hierarchical regularization algorithm. Finally, 3D building models are generated in an ArcGIS environment. The results obtained demonstrate the effectiveness and satisfactory accuracy of the developed method.
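The outline-extraction step starts from a hull of the projected roof points. The sketch below is the plain Andrew monotone-chain convex hull, i.e., the unmodified starting point of the thesis's modified convex hull (the "modified" variant also follows concavities, which is omitted here). The sample points are invented.

```python
# Convex hull of projected roof points (Andrew's monotone chain).
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counter-clockwise outline

# Projected roof points: square corners plus interior returns.
roof_pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3), (3, 1)]
outline = convex_hull(roof_pts)
```

Regularization would then snap the outline edges to the building's dominant orientations.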



2020 · Vol 12 (10) · pp. 1643
Author(s):
Marek Kulawiak, Zbigniew Lubniewski

Due to the high requirements of a variety of 3D spatial data applications with respect to data amount and quality, automated, efficient and reliable data acquisition and preprocessing methods are needed. Photogrammetric techniques, as well as automatic light detection and ranging (LiDAR) scanners, are among the attractive solutions. However, the measurement data come in the form of unorganized point clouds, usually requiring transformation to higher-order 3D models based on polygons or polyhedral surfaces, which is not a trivial process. The study presents a newly developed algorithm for correcting 3D point cloud data from airborne LiDAR surveys of regular 3D buildings. The proposed approach applies a sequence of operations resulting in 3D rasterization, i.e., the creation and processing of a 3D regular grid representation of an object, prior to applying a regular Poisson surface reconstruction method. In order to verify the accuracy and quality of the reconstructed objects, high-quality ground truth models were used for quantitative comparison with the obtained 3D models, in the form of meshes constructed from photogrammetric measurements and made manually from the buildings' architectural plans. The presented results show that applying the proposed algorithm positively influences the quality of the results and that it can be used in combination with existing surface reconstruction methods to generate more detailed 3D models from LiDAR scanning.
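The 3D-rasterization idea is to bin the unorganized point cloud into a regular voxel grid, which can then be cleaned (gap filling, outlier removal) before surface reconstruction. A minimal binning sketch, with an assumed voxel size and a synthetic wall as input, looks like this:

```python
# Bin a point cloud into a sparse voxel grid keyed by integer indices.
def voxelize(points, voxel_size):
    grid = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key] = grid.get(key, 0) + 1  # count points per voxel
    return grid

# A 2 m x 2 m vertical wall sampled every 0.25 m, voxelized at 0.5 m.
wall = [(i * 0.25, 0.0, j * 0.25) for i in range(9) for j in range(9)]
grid = voxelize(wall, voxel_size=0.5)
occupied = set(grid)
```

The subsequent grid-processing and Poisson reconstruction stages of the paper's pipeline operate on this regular representation instead of the raw points.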


Author(s):
S. Becker, M. Peter, D. Fritsch

The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar, an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings such as schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process, providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated on a real-world example.
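The hypothesize-and-verify loop can be caricatured in a few lines: window positions from an LOD3 model seed room hypotheses (here a deliberately crude "one room per window bay" rule standing in for the initial grammar), and each hypothesis survives only if enough interior scan points support it. All geometry, the room dimensions, and the support threshold are invented for illustration.

```python
# Toy hypothesize-and-verify loop for grammar-based indoor reconstruction.
def hypothesize_rooms(window_xs, room_depth=4.0, room_width=3.0):
    # One room hypothesis centred behind each facade window.
    return [(x - room_width / 2, x + room_width / 2, 0.0, room_depth)
            for x in window_xs]

def verify(room, scan_points, min_support=3):
    # Keep a hypothesis only if enough interior points fall inside it.
    x0, x1, y0, y1 = room
    support = sum(1 for x, y in scan_points if x0 <= x <= x1 and y0 <= y <= y1)
    return support >= min_support

windows = [2.0, 6.0, 10.0]
# Interior scan covers only the first two window bays.
scan = [(2.0, 1.0), (2.5, 2.0), (1.5, 3.0),
        (6.0, 1.0), (5.5, 2.5), (6.5, 3.5)]
rooms = [r for r in hypothesize_rooms(windows) if verify(r, scan)]
```

In the paper, the verified geometries would additionally feed back into the grammar, so that unverified bays (like the third window above) can later be predicted from the learned rules rather than discarded.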


2014 · Vol 71 (4)
Author(s):
R. Akmaliaa, H. Setan, Z. Majid, D. Suwardhi

Nowadays, 3D city models are used by an increasing number of applications. Most applications require not only geometric but also semantic information. As a standard and tool for 3D city models, CityGML provides a method for storing and managing both geometric and semantic information. Moreover, it also provides a multi-scale representation of 3D building models for efficient visualization. In CityGML, building models are represented in five Levels of Detail (LOD), from LOD0 to LOD4. Each level has different accuracy and detail requirements for visualization. Usually, several data sources are integrated to obtain a multi-LOD 3D building model; for example, LiDAR data are used for generating LOD0, LOD1, and LOD2, while close-range photogrammetry data are used for generating the more detailed models in LOD3 and LOD4. However, using additional data sources increases cost and is time consuming. Since the development of terrestrial laser scanners (TLS), data collection for detailed models can be conducted in a relatively short time compared to photogrammetry, and the point clouds generated from TLS can be used for generating multi-LOD building models. This paper gives an overview of the representation of 3D building models in CityGML as well as a method for generating multi-LOD building models from TLS data. An experiment was conducted using TLS. Following the CityGML standard, point clouds from TLS were processed, resulting in 3D models of a building at different levels of detail. Afterward, the models at different LODs were converted into an XML schema to be used in CityGML. The final result of the experiment shows that TLS can be used for generating 3D building models in LOD1, LOD2, and LOD3.
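The final conversion step - wrapping a building's attributes in CityGML-flavoured XML - can be sketched with the standard library. Only a skeletal bldg:Building with a measured height is emitted here; a real CityGML export needs the full schema (gml:id, gml:Solid geometry, CRS, and so on), so treat this structure as a simplified assumption.

```python
# Emit a skeletal CityGML 2.0 building fragment (simplified sketch).
import xml.etree.ElementTree as ET

BLDG = "http://www.opengis.net/citygml/building/2.0"
ET.register_namespace("bldg", BLDG)

def building_xml(building_id, height):
    b = ET.Element("{%s}Building" % BLDG, {"id": building_id})
    h = ET.SubElement(b, "{%s}measuredHeight" % BLDG, {"uom": "m"})
    h.text = str(height)
    return ET.tostring(b, encoding="unicode")

xml = building_xml("bldg-001", 9.5)
```

Dedicated tools (e.g., FME or citygml4j) would normally handle this serialization, but the fragment shows where the geometric and semantic payloads end up.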

