JOINT SIMULTANEOUS RECONSTRUCTION OF REGULARIZED BUILDING SUPERSTRUCTURES FROM LOW-DENSITY LIDAR DATA USING ICP

Author(s):  
Andreas Wichmann ◽  
Martin Kada

There are many applications for 3D city models, e.g., in visualization, analysis, and simulation, each requiring a certain level of detail to be effective. The overall trend goes towards including various kinds of anthropogenic and natural objects with ever increasing geometric and semantic detail. A few years back, the featured 3D building models had only coarse roof geometry; nowadays, they are expected to include detailed roof superstructures like dormers and chimneys. Several methods have been proposed for the automatic reconstruction of 3D building models from airborne point clouds. However, they are usually unable to reliably recognize and reconstruct small roof superstructures, as these objects are often represented by only a few point measurements, especially in low-density point clouds. In this paper, we propose a recognition and reconstruction approach that overcomes this problem by identifying and simultaneously reconstructing regularized superstructures of similar shape. For this purpose, candidate areas for superstructures are detected by taking into account virtual sub-surface points that are assumed to lie on the main roof faces below the measured points. Areas with similar superstructures are detected, extracted, grouped together, and registered to one another with the Iterative Closest Point (ICP) algorithm. As an outcome, the joint point density of each detected group is increased, which helps to recognize the shape of the superstructure more reliably and in more detail. Finally, all instances of each group of superstructures are modeled at once and transformed back to their original positions. Because superstructures are reconstructed in groups, symmetries, alignments, and regularities can be enforced in a straightforward way. The validity of the approach is demonstrated on a number of example buildings from the Vaihingen test data set.
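The grouping-and-registration idea can be sketched with a translation-only 2D ICP on two instances of the same dormer profile. This is a minimal illustration, not the authors' implementation; the dormer coordinates and offsets are hypothetical.

```python
def icp_translation(source, target, iters=10):
    """Translation-only ICP: align source to target via nearest neighbours.
    Returns the accumulated (tx, ty) shift applied to source."""
    # coarse alignment by centroids, as is common before ICP refinement
    tx = sum(q[0] for q in target) / len(target) - sum(p[0] for p in source) / len(source)
    ty = sum(q[1] for q in target) / len(target) - sum(p[1] for p in source) / len(source)
    for _ in range(iters):
        dx = dy = 0.0
        for sx, sy in source:
            px, py = sx + tx, sy + ty
            # nearest target point to the currently shifted source point
            qx, qy = min(target, key=lambda q: (q[0] - px) ** 2 + (q[1] - py) ** 2)
            dx += qx - px
            dy += qy - py
        tx += dx / len(source)   # mean-residual update
        ty += dy / len(source)
    return tx, ty

# two instances of the same dormer cross-section at different roof positions
dormer = [(0.0, 0.0), (0.5, 0.6), (1.0, 0.0), (0.25, 0.3), (0.75, 0.3)]
instance_b = [(x + 3.0, y + 0.5) for x, y in dormer]

tx, ty = icp_translation(instance_b, dormer)
# merging the registered instances doubles the joint point density of the group
merged = dormer + [(x + tx, y + ty) for x, y in instance_b]
```

Once the instances are registered into one denser group, the superstructure shape can be modeled once and transformed back to every original position.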


Author(s):  
J. Meidow ◽  
H. Hammer ◽  
M. Pohl ◽  
D. Bulatov

Many buildings in 3D city models can be represented by generic models, e.g. boundary representations or polyhedra, without expressing building-specific knowledge explicitly. Without additional constraints, the bounding faces of these building reconstructions do not feature expected structures such as orthogonality or parallelism. The recognition and enforcement of man-made structures within model instances is one way to enhance 3D city models. Since the reconstructions are derived from uncertain and imprecise data, crisp relations such as orthogonality or parallelism are rarely satisfied exactly. Furthermore, the uncertainty of geometric entities is usually not specified in 3D city models. Therefore, we propose a point sampling that simulates the initial point cloud acquisition by airborne laser scanning and provides estimates for the uncertainties. We present a complete workflow for the recognition and enforcement of man-made structures in a given boundary representation. The recognition is performed by hypothesis testing, and the enforcement of the detected constraints by a global adjustment of all bounding faces. Since the adjustment changes not only the geometry but also the topology of the faces, we obtain improved building models that feature regular structures and potentially reduced complexity. The feasibility and usability of the approach are demonstrated with a real data set.
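The recognition step can be illustrated with a minimal hypothesis test: a z-test on the angle between two face normals, assuming a known angular standard deviation. This is a simplified stand-in; the paper's tests operate on the full uncertain geometric entities.

```python
import math

def is_orthogonal(n1, n2, sigma_angle, z_crit=1.96):
    """z-test: is the angle between two unit normals compatible with 90
    degrees, given the standard deviation of the angle (radians)?"""
    dot = sum(a * b for a, b in zip(n1, n2))
    angle = math.acos(max(-1.0, min(1.0, dot)))
    z = abs(angle - math.pi / 2) / sigma_angle
    return z < z_crit   # fail to reject orthogonality at the 5 % level

sigma = math.radians(0.5)   # assumed angular uncertainty of the estimated faces
near = (math.cos(math.radians(89.5)), math.sin(math.radians(89.5)), 0.0)
far = (math.cos(math.radians(80.0)), math.sin(math.radians(80.0)), 0.0)
```

A face pair at 89.5° (one sigma from 90°) is accepted as orthogonal and would then be snapped exactly by the global adjustment; a pair at 80° (twenty sigma) is rejected and left unconstrained.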


Author(s):  
B. Dukai ◽  
H. Ledoux ◽  
J. E. Stoter

Abstract. The 3D representation of buildings with roof shapes (also called LoD2) is popular in the 3D city modelling domain, since it provides a realistic view of 3D city models. However, for many applications, block models of buildings are sufficient or even more suitable. These so-called LoD1 models can be reconstructed relatively easily from building footprints and point clouds. But LoD1 representations of the same building can differ considerably because of differences in the height references used to reconstruct the block models and in the underlying statistical calculation methods. Users are often not aware of these differences, although they may have an impact on the outcome of spatial analyses. To standardise the possible variants of LoD1 models and let users choose the best one for their application, we have developed a LoD1 reconstruction service that generates several heights per building (both for the ground surface and the extrusion height). The building models are generated for all ~10 million buildings in The Netherlands based on building footprints and LiDAR point clouds. The 3D dataset is updated automatically every month. In addition, quality parameters are calculated and made available for each building. This article describes the development of the LoD1 building service and reports on the spatial analysis that we performed on the generated height values.
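The idea of several candidate heights per building can be sketched with nearest-rank percentiles over the roof points plus a median ground reference. The percentile set below is an illustrative assumption, not the service's exact configuration.

```python
import statistics

def lod1_heights(ground_z, roof_z, percentiles=(50, 70, 90)):
    """One ground reference plus several candidate extrusion heights
    (nearest-rank percentiles of the roof point heights)."""
    zs = sorted(roof_z)

    def pct(p):
        return zs[min(len(zs) - 1, int(round(p / 100 * (len(zs) - 1))))]

    return {"ground": statistics.median(ground_z),
            **{f"p{p}": pct(p) for p in percentiles}}

# hypothetical per-footprint LiDAR heights: ground points and roof points
heights = lod1_heights([0.1, 0.2, 0.3], list(range(1, 11)))
```

A user can then extrude the footprint with whichever height suits the analysis, e.g. a lower percentile for volume estimates or a higher one for visibility studies.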


Author(s):  
A. V. Vo ◽  
C. N. Lokugam Hewage ◽  
N. A. Le Khac ◽  
M. Bertolotto ◽  
D. Laefer

Abstract. Point density is an important property that dictates the usability of a point cloud data set. This paper introduces an efficient, scalable, parallel algorithm for computing the local point density index, a sophisticated point cloud density metric. Computing the local point density index is non-trivial because it involves a neighbour search for each individual point in the potentially large input point cloud. Most existing algorithms and software are incapable of computing point density at scale. The algorithm introduced in this paper therefore aims to provide both the computational efficiency and the scalability needed for large, modern point clouds such as those collected in national or regional scans. The proposed algorithm is composed of two stages. In stage 1, a point-level parallel processing step partitions an unstructured input point cloud into partially overlapping, buffered tiles. A buffer is provided around each tile so that the data partitioning does not introduce spatial discontinuities into the final results. In stage 2, the buffered tiles are distributed to different processors, which compute the local point density index in parallel. This tile-level parallel processing step is performed using a conventional algorithm with an R-tree data structure. While straightforward, the proposed algorithm is efficient and particularly suitable for processing large point clouds. Experiments conducted on a 1.4 billion point data set acquired over part of Dublin, Ireland demonstrated an efficiency factor of up to 14.8/16: the computational time was reduced by 14.8 times when the number of processes (i.e. executors) was increased by 16 times. Computing the local point density index for the 1.4 billion point data set took just over 5 minutes with 16 executors and 8 cores per executor, a nearly 70-fold reduction compared to the 6 hours required without parallelism.
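The two-stage scheme can be sketched serially in 2D. This minimal version assumes the search radius does not exceed the tile width, so one ring of neighbouring tiles serves as the buffer; the paper's implementation is 3D, distributed, and R-tree based, and its density index is more sophisticated than a raw neighbour count.

```python
import random

def local_density(points, r):
    """Brute-force reference: neighbours within radius r, per point."""
    r2 = r * r
    return {p: sum(1 for q in points if q != p and
                   (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r2)
            for p in points}

def tiled_density(points, r, tile=5.0):
    """Stage 1: partition into buffered tiles; stage 2: per-tile counting.
    Requires r <= tile so the one-ring buffer covers every neighbourhood.
    Assumes distinct points."""
    tiles = {}
    for p in points:
        tiles.setdefault((int(p[0] // tile), int(p[1] // tile)), []).append(p)
    out = {}
    for (txi, tyi), core in tiles.items():
        # core points plus buffered points from the 8 surrounding tiles
        buffered = [q for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for q in tiles.get((txi + dx, tyi + dy), [])]
        for p in core:
            out[p] = sum(1 for q in buffered if q != p and
                         (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r * r)
    return out

random.seed(0)
pts = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(300)]
dens_tiled = tiled_density(pts, 2.0, tile=5.0)
dens_brute = local_density(pts, 2.0)
```

Because each tile carries its buffer, the per-tile results agree exactly with the global brute-force answer, which is what makes the stage-2 work embarrassingly parallel.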


Author(s):  
Y. Dehbi ◽  
L. Lucks ◽  
J. Behmann ◽  
L. Klingbeil ◽  
L. Plümer

Abstract. Accurate and robust positioning of vehicles in urban environments is of high importance for many applications, e.g. autonomous driving or mobile mapping. Mobile mapping systems target a simultaneous mapping of the environment using laser scanning and an accurate positioning using GNSS. This requirement is often not met in shadowed cities, where GNSS signals are usually disturbed, weak, or even unavailable. Consequently, both the generated point clouds and the derived trajectory are imprecise. We propose a novel approach that incorporates prior knowledge, i.e. a 3D building model of the environment, and improves both the point cloud and the trajectory. The key idea is to benefit from the complementarity of GNSS and 3D building models. The point cloud is matched to the city model using a point-to-plane ICP. An informed sampling of appropriate matching points is enabled by a pre-classification step: support vector machines (SVMs) are used to discriminate between facade points and the remaining points. Local inconsistencies are tackled by a segment-wise partitioning of the point cloud, where an interpolation guarantees a seamless transition between the segments. The full processing chain is implemented, from the detection of facades in the point clouds, to the matching between them and the building models, to the update of the trajectory estimate. The general applicability of the implemented method is demonstrated on an inner-city data set recorded with a mobile mapping system.
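The trajectory correction can be illustrated with a single translation-only point-to-plane least-squares step in 2D; the plane parameters and drift values are hypothetical, and the paper's method uses full point-to-plane ICP in 3D together with the SVM pre-classification.

```python
def point_to_plane_shift(matches):
    """matches: list of (point (x, y), plane normal (nx, ny), offset d),
    with each model plane given as n . x = d. Returns the 2D shift that
    minimises the squared point-to-plane distances (a translation-only
    point-to-plane least-squares step)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (nx, ny), d in matches:
        r = nx * px + ny * py - d        # signed distance before the shift
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 -= nx * r;   b2 -= ny * r
    det = a11 * a22 - a12 * a12          # needs >= 2 non-parallel planes
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# drifted facade points matched to the model planes x = 0 and y = 0
matches = [((0.3, 4.8), (1.0, 0.0), 0.0),
           ((2.3, -0.2), (0.0, 1.0), 0.0)]
shift = point_to_plane_shift(matches)    # correction for the (0.3, -0.2) drift
```

Applying the recovered shift to the trajectory segment pulls the drifted facade points back onto the model planes; non-parallel facades are needed, since a single facade direction leaves the along-facade component unobservable.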


Author(s):  
D. Guo ◽  
D. Yu ◽  
Y. Liang ◽  
C. Feng

Abstract. Point cloud registration is an essential task for terrestrial laser scanning applications. Point clouds acquired at different positions exhibit significant variation in point density. Most registration methods implicitly assume dense and uniformly distributed point clouds, which is hardly the case in large-scale surveying. The accuracy and robustness of feature extraction are greatly influenced by the point density, which undermines feature-based registration methods. We show that the accuracy and robustness of target localization decline dramatically with decreasing point density. A methodology for the localization of artificial planar targets in low-density point clouds is presented. An orthographic image of the target is first generated, and a potential position of the target center is interactively selected. The 3D position of the target center is then estimated by a non-linear least squares adjustment. The presented methodology achieves millimeter-level accuracy of target localization in point clouds with a 30 mm sampling interval. The robustness and effectiveness of the methodology are demonstrated by the experimental results.
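The non-linear adjustment can be sketched for a circular target boundary, refined by Gauss-Newton from an interactively picked initial guess. The circular pattern and the synthetic points are assumptions for illustration, since the paper's exact target design and observation model are not given here.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[r][3] / M[r][r] for r in range(3)]

def refine_center(points, cx, cy, r, iters=10):
    """Gauss-Newton adjustment of a circular target's center and radius
    from boundary points, starting at a coarse (picked) guess."""
    for _ in range(iters):
        A = [[0.0] * 3 for _ in range(3)]   # normal matrix J^T J
        g = [0.0] * 3                       # right-hand side -J^T f
        for x, y in points:
            dx, dy = x - cx, y - cy
            dist = math.hypot(dx, dy)
            f = dist - r                        # signed residual
            J = (-dx / dist, -dy / dist, -1.0)  # df/d(cx, cy, r)
            for i in range(3):
                g[i] -= J[i] * f
                for j in range(3):
                    A[i][j] += J[i] * J[j]
        d = solve3(A, g)
        cx, cy, r = cx + d[0], cy + d[1], r + d[2]
    return cx, cy, r

# synthetic boundary points of a target centered at (2, 3) with radius 1
pts = [(2 + math.cos(k * math.pi / 4), 3 + math.sin(k * math.pi / 4))
       for k in range(8)]
cx, cy, r = refine_center(pts, 1.5, 2.5, 0.8)
```

Even a coarse interactive pick suffices as a starting value; the adjustment then drives the center estimate to sub-sample accuracy.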


Author(s):  
J. Li ◽  
B. Xiong ◽  
F. Biljecki ◽  
G. Schrotter

Abstract. Architectural building models (LoD3) consist of detailed wall and roof structures, including openings such as doors and windows. Openings are usually identified through corner and edge detection based on terrestrial LiDAR point clouds. However, singular boundary points are mostly detected by analysing their neighbourhoods within a small search area, which is highly sensitive to noise. In this paper, we present a global sliding-window method on a projected façade that reduces the influence of noise. We formulate the gradient of point density for the sliding window to inspect changes of façade elements. With symmetry information derived from statistical analysis, border lines of the changes are extracted and intersected, generating the corner points of openings. We demonstrate the performance of the proposed approach on static and mobile terrestrial LiDAR data with inhomogeneous point density. The algorithm detects the corners of repetitive and neatly arranged openings and also recovers corner points within areas of slightly missing data. In the future, we will extend the algorithm to detect disordered openings and to assist façade modelling, semantic labelling, and procedural modelling.
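The density-gradient idea can be sketched in 1D: count the points in a window sliding along one axis of the projected façade, and mark large jumps in the count as candidate border lines of openings. Window size, step, and threshold below are illustrative assumptions; the paper works on the full 2D façade projection.

```python
def density_profile(xs, x_min, x_max, window, step):
    """Point count of a window sliding along the projected facade axis."""
    profile, x = [], x_min
    while x + window <= x_max:
        profile.append(sum(1 for v in xs if x <= v < x + window))
        x += step
    return profile

def opening_borders(profile, thresh):
    """Window indices where the density gradient exceeds the threshold:
    candidate border lines of openings."""
    return [i for i in range(1, len(profile))
            if abs(profile[i] - profile[i - 1]) >= thresh]

# wall points every 0.1 m along the facade, with a 2 m gap (an opening)
# between x = 3 and x = 5
xs = [i / 10 for i in range(100) if not 30 <= i < 50]
profile = density_profile(xs, 0.0, 10.0, window=1.0, step=1.0)
borders = opening_borders(profile, thresh=5)
```

Because the window aggregates points over its whole extent, a few noisy or missing returns barely move the count, which is exactly the robustness argument made against purely local corner detectors.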


Author(s):  
E. Özdemir ◽  
F. Remondino

Abstract. Due to their usefulness in various applications, such as energy evaluation, visibility analysis, emergency response, 3D cadastre, urban planning, change detection, and navigation, 3D city models have gained importance over the last decades. Point clouds are one of the primary data sources for the generation of realistic city models. Besides model-driven approaches, 3D building models can be produced directly from classified aerial point clouds. This paper presents ongoing research on 3D building reconstruction based on the classification of aerial point clouds without ancillary data (e.g. footprints). The work includes a deep learning approach based on specific geometric features extracted from the point cloud. The methodology was tested on the ISPRS 3D Semantic Labeling Contest (Vaihingen and Toronto point clouds), showing promising results, although partly affected by the low density and the lack of points on the building facades in the available clouds.
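Per-point geometric features of the kind mentioned can be illustrated simply. The two features below (height above the lowest neighbour, spread of neighbour heights as a crude roughness measure) are generic examples, not the paper's actual feature set, and the deep learning stage itself is omitted.

```python
import math

def height_and_roughness(p, neighbours):
    """Two simple per-point geometric features of the kind often fed to a
    point-cloud classifier: height above the lowest neighbour, and the
    standard deviation of neighbour heights (a crude roughness measure)."""
    zs = [q[2] for q in neighbours]
    height = p[2] - min(zs)
    mean_z = sum(zs) / len(zs)
    roughness = math.sqrt(sum((z - mean_z) ** 2 for z in zs) / len(zs))
    return height, roughness

# a roof point high above locally flat neighbours vs. a ground point
roof = height_and_roughness((0, 0, 10.0), [(1, 0, 2.0), (0, 1, 2.0), (1, 1, 2.0)])
ground = height_and_roughness((0, 0, 0.1), [(1, 0, 0.0), (0, 1, 0.2), (1, 1, 0.1)])
```

Such hand-crafted features computed over local neighbourhoods form the per-point input vectors on which a classifier separates building points from vegetation and ground.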


Author(s):  
O. Wysocki ◽  
Y. Xu ◽  
U. Stilla

Abstract. Throughout the years, semantic 3D city models have been created to depict 3D spatial phenomena. Recently, an increasing number of mobile laser scanning (MLS) units have been yielding terrestrial point clouds at an unprecedented rate. Both dataset types often depict the same 3D spatial phenomena differently, so their fusion should increase the quality of the captured representation. Yet, each dataset has modality-dependent uncertainties that hinder their immediate fusion. Therefore, we present a method for fusing MLS point clouds with semantic 3D building models while considering these uncertainties. Specifically, we show the coregistration of MLS point clouds with semantic 3D building models based on expert confidence in the evaluated metadata, quantified by a confidence interval (CI). This step leads to a dynamic adjustment of the CI, which is used to delineate matching bounds for both datasets. Both the coregistration and matching steps serve as priors for a Bayesian network (BayNet) that performs application-dependent identity estimation. The BayNet propagates uncertainties and beliefs throughout the process to estimate end probabilities for confirmed, unmodeled, and other city objects. We conducted promising preliminary experiments on urban MLS and CityGML datasets. Our strategy sets up a framework for the fusion of MLS point clouds and semantic 3D building models. This framework aids the challenging parallel usage of such datasets in applications such as façade refinement or change detection. To further support this process, we have open-sourced our implementation.
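The belief propagation can be illustrated with a single-node Bayes update on whether a modelled object is confirmed by the MLS data. The probabilities below are hypothetical, and the full BayNet combines several such factors across coregistration and matching.

```python
def confirm_probability(prior, p_match_given_obj, p_match_given_clutter, matched):
    """One Bayes update: belief that a modelled city object is confirmed,
    given whether MLS points matched it within the CI-derived bounds."""
    if matched:
        pm, pc = p_match_given_obj, p_match_given_clutter
    else:
        pm, pc = 1 - p_match_given_obj, 1 - p_match_given_clutter
    return pm * prior / (pm * prior + pc * (1 - prior))

# hypothetical numbers: neutral prior, matches are much likelier for real
# modelled objects than for clutter
p = confirm_probability(0.5, 0.9, 0.2, matched=True)
```

Chaining such updates over several evidence sources is what lets the network sort objects into confirmed, unmodeled, and other classes with calibrated end probabilities.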

