INTEGRATION OF BUILDING KNOWLEDGE INTO BINARY SPACE PARTITIONING FOR THE RECONSTRUCTION OF REGULARIZED BUILDING MODELS

Author(s):  
A. Wichmann ◽  
J. Jung ◽  
G. Sohn ◽  
M. Kada ◽  
M. Ehlers

Recent approaches for the automatic reconstruction of 3D building models from airborne point cloud data integrate prior knowledge of roof shapes with the intention of improving the regularization of the resulting models without lessening the flexibility to generate all roof shapes that occur in the real world. In this paper, we present a method to integrate building knowledge into the data-driven approach that uses binary space partitioning (BSP) for modeling the 3D building geometry. A retrospective regularization of polygons that emerge from the BSP tree is not without difficulty because it has to deal with the 2D BSP subdivision itself and with the plane definitions of the resulting partition regions to ensure topological correctness. This is aggravated by the use of hyperplanes during the binary subdivision, which often splits planar roof regions into several parts that are stored in different subtrees of the BSP tree. We therefore introduce the use of hyperpolylines in the generation of the BSP tree to avoid unnecessary spatial subdivisions, so that the spatial integrity of planar roof regions is better maintained. The hyperpolylines are shown to result from basic building roof knowledge that is extracted from roof topology graphs. An adjustment of the underlying point segments ensures that the positions of the extracted hyperpolylines result in regularized 2D partitions as well as topologically correct 3D building models. The validity and limitations of the approach are demonstrated on real-world examples.
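As an illustration only (not the authors' implementation), the following Python sketch shows the kind of 2D BSP subdivision the abstract refers to: points of segmented roof regions are recursively split by splitter lines into partition cells. All names and the infinite-line splitter representation are assumptions; in the paper, the splitters would be the extracted hyperpolylines, which limit each split to the extent of the polyline so that planar roof regions are not scattered across subtrees.

```python
# Hypothetical sketch of a 2D BSP subdivision over segmented roof points.
# In the paper's approach, splitters would be derived from roof topology
# (ridge/step-edge polylines); here they are plain infinite lines.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Splitter:
    a: float  # line: a*x + b*y + c = 0
    b: float
    c: float

    def side(self, p: Point) -> float:
        return self.a * p[0] + self.b * p[1] + self.c

@dataclass
class BSPNode:
    splitter: Optional[Splitter] = None
    front: Optional["BSPNode"] = None
    back: Optional["BSPNode"] = None
    points: Optional[List[Point]] = None  # leaf: points of one partition cell

def build_bsp(points: List[Point], splitters: List[Splitter]) -> BSPNode:
    """Recursively partition the point set with the given splitter lines."""
    if not splitters:
        return BSPNode(points=points)
    s, rest = splitters[0], splitters[1:]
    front = [p for p in points if s.side(p) >= 0.0]
    back = [p for p in points if s.side(p) < 0.0]
    return BSPNode(splitter=s, front=build_bsp(front, rest), back=build_bsp(back, rest))
```

Restricting a splitter to a polyline segment instead of an unbounded line, as the paper proposes, is what keeps a planar roof region inside a single subtree.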

2018 ◽  
Vol 10 (10) ◽  
pp. 1512 ◽  
Author(s):  
Mohammad Awrangjeb ◽  
Syed Gilani ◽  
Fasahat Siddiqui

Three-dimensional (3-D) reconstruction of building roofs can be an essential prerequisite for 3-D building change detection, which is important for the detection of informal buildings or extensions and for updating 3-D map databases. However, automatic 3-D roof reconstruction from remote sensing data is still in its development stage for a number of reasons. For instance, there are difficulties in determining the neighbourhood relationships among the planes on a complex building roof; locating the step edges from point cloud data often requires additional information or may impose constraints; and missing roof planes require human interaction and often produce high reconstruction errors. This research introduces a new 3-D roof reconstruction technique that constructs an adjacency matrix to define the topological relationships among the roof planes. It identifies any missing planes through an analysis of the 3-D plane intersection lines between adjacent planes. Then, it generates new planes to fill the gaps left by missing planes. Finally, it obtains complete building models through the insertion of approximate wall planes and the building floor. The research reported in this paper then uses the generated building models to detect 3-D changes in buildings. Plane connections are first defined to establish relationships between neighbouring planes. Then, each building in the reference and test model sets is represented using a graph data structure. Finally, the height intensity images, and if required the graph representations, of the reference and test models are directly compared to find and categorise 3-D changes into five groups: new, unchanged, demolished, modified and partially-modified planes. Experimental results on two Australian datasets show high object- and pixel-based accuracy in terms of completeness, correctness, and quality for both the 3-D roof reconstruction and change detection techniques. The proposed change detection technique is robust to various changes, including the addition of a new veranda to or removal of an existing veranda from a building and an increase in the height of a building.
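The adjacency matrix and the 3-D plane intersection lines mentioned above can be illustrated with a short Python/NumPy sketch. This is a hedged approximation, not the authors' code: the proximity-based adjacency test, the distance threshold, and the function names are assumptions.

```python
# Hypothetical sketch: adjacency matrix for segmented roof planes and the
# 3-D intersection line of two adjacent planes (plane: n . x = d).
import numpy as np

def adjacency_matrix(segments, max_gap=0.5):
    """segments: list of (N_i, 3) point arrays, one per roof plane.
    Two planes are marked adjacent when their closest points lie within max_gap."""
    k = len(segments)
    adj = np.zeros((k, k), dtype=bool)
    for i in range(k):
        for j in range(i + 1, k):
            d = np.min(np.linalg.norm(segments[i][:, None, :] - segments[j][None, :, :], axis=2))
            adj[i, j] = adj[j, i] = d <= max_gap
    return adj

def plane_intersection_line(n1, d1, n2, d2):
    """Return (point_on_line, direction) for two non-parallel planes."""
    direction = np.cross(n1, n2)
    # Solve the two plane equations plus an anchoring constraint for one point.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.lstsq(A, b, rcond=None)[0]
    return point, direction / np.linalg.norm(direction)
```

A missing plane between two non-adjacent neighbours could then be hypothesised along such an intersection line, in the spirit of the gap-filling step described in the abstract.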


Author(s):  
Jixing Yan ◽  
Wanshou Jiang ◽  
Jie Shan

This paper presents a global solution to building roof topological reconstruction from LiDAR point clouds. Starting with segmented roof planes from building LiDAR points, a BSP (binary space partitioning) algorithm is used to partition the bounding box of the building into volumetric cells, whose geometric features and topology are determined simultaneously. To resolve the inside/outside labelling problem of the cells, a global energy function considering surface visibility and spatial regularization between adjacent cells is constructed and minimized via graph cuts. As a result, the cells are labelled as either inside or outside, where the planar surfaces between inside and outside cells form the reconstructed building model. Two LiDAR data sets of Yangjiang (China) and Wuhan University (China) are used in the study. Experimental results show that the completeness of the reconstructed roof planes is 87.5%. Compared with existing data-driven approaches, the proposed approach is global. Roof faces and edges as well as their topology can be determined at one time via the minimization of an energy function. In addition, this approach is robust to the partial absence of roof planes and tends to reconstruct roof models with visibility-consistent surfaces.
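The inside/outside labelling via graph cuts can be sketched as a minimum s-t cut over the cell adjacency graph. The sketch below uses networkx for the cut; the data and smoothness costs are illustrative stand-ins for the paper's visibility and regularization terms, and all names are assumptions.

```python
# Hypothetical sketch: label BSP cells as inside/outside via a minimum s-t cut.
# Data terms (visibility evidence) and smoothness terms (e.g. shared-face area)
# are stand-ins for the paper's energy; networkx computes the cut.
import networkx as nx

def label_cells(cells, data_cost, smooth_cost):
    """cells: iterable of cell ids.
    data_cost[c] = (cost_if_inside, cost_if_outside) from visibility evidence.
    smooth_cost[(c1, c2)] = penalty for giving adjacent cells different labels."""
    g = nx.Graph()
    for c in cells:
        inside_cost, outside_cost = data_cost[c]
        g.add_edge("source", c, capacity=outside_cost)  # cut => c labelled outside
        g.add_edge(c, "sink", capacity=inside_cost)     # cut => c labelled inside
    for (c1, c2), w in smooth_cost.items():
        g.add_edge(c1, c2, capacity=w)
    _, (source_side, _) = nx.minimum_cut(g, "source", "sink")
    return {c: ("inside" if c in source_side else "outside") for c in cells}
```

The planar faces separating an inside cell from an outside cell would then be collected as the reconstructed building surface.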


2017 ◽  
Vol 55 (1) ◽  
pp. 63-89 ◽  
Author(s):  
Syed Ali Naqi Gilani ◽  
Mohammad Awrangjeb ◽  
Guojun Lu

Author(s):  
S. N. Perera ◽  
N. Hetti Arachchige ◽  
D. Schneider

Geometrically and topologically correct 3D building models are required to satisfy new demands such as 3D cadastre, map updating, and decision making. Increasing attention has been paid to building reconstruction from Airborne Laser Scanning (ALS) point cloud data. However, the planimetric accuracy of roof outlines, including step-edges, is questionable in building models derived from point clouds alone. This paper presents a new approach for the detection of accurate building boundaries by merging point clouds acquired by ALS with aerial photographs. It comprises two major parts: reconstruction of initial roof models from point clouds only, and refinement of their boundaries. A shortest closed circle (graph) analysis method is employed to generate building models in the first step. Being highly reliable, this method provides reconstruction without prior knowledge of primitive building types, even when complex height jumps and various types of building roof are present. The accurate position of the boundaries of the initial models is then determined by the integration of edges extracted from aerial photographs. In this process, scene constraints defined on the basis of the initial roof models are introduced, as the initial roof models represent explicit, unambiguous geometries of the scene. Experiments were conducted using the ISPRS benchmark test data. Based on the test results, we show that the proposed approach can reconstruct 3D building models with higher geometrical (planimetric and vertical) and topological accuracy.
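As a rough illustration of the shortest closed circle (graph) analysis, the sketch below builds a roof topology graph and extracts minimal cycles with networkx. cycle_basis is used here merely as a stand-in for the paper's shortest-closed-circle search, and the data layout and names are assumptions.

```python
# Hypothetical sketch: closed-circle (cycle) analysis of a roof topology graph.
# Nodes are roof planes, edges are detected plane adjacencies; each minimal
# cycle suggests a closed roof structure from which a model part can be formed.
import networkx as nx

def closed_circles(plane_ids, adjacencies):
    """plane_ids: iterable of plane labels; adjacencies: iterable of (i, j) pairs."""
    g = nx.Graph()
    g.add_nodes_from(plane_ids)
    g.add_edges_from(adjacencies)
    # cycle_basis returns a minimal set of independent cycles of the graph,
    # standing in here for the paper's shortest closed circles.
    return sorted(nx.cycle_basis(g), key=len)

# Example: four planes of a hip roof forming one closed circle.
print(closed_circles([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))
```

The boundaries of the models produced from such circles would then be snapped to image edges extracted from the aerial photographs, as described above.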


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4761 ◽  
Author(s):  
Yue Qiu ◽  
Yutaka Satoh ◽  
Ryota Suzuki ◽  
Kenji Iwata ◽  
Hirokatsu Kataoka

This study proposes a framework for describing a scene change in natural language text based on indoor scene observations conducted before and after the change. The recognition of scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes, and most existing scene change captioning methods detect scene changes from single-view RGB images, neglecting the underlying three-dimensional structures. Previous three-dimensional scene change captioning methods use simulated scenes consisting of geometric primitives, making them unsuitable for real-world applications. To solve these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance using datasets with various levels of complexity. Experimental results show that models that combine RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust in change type understanding for datasets with high complexity. The developed datasets and models contribute to the study of indoor scene change understanding.
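A minimal PyTorch-style sketch of the multimodal fusion described above (RGB images combined with point cloud data) is given below. The layer sizes, the simple PointNet-style pooling, and the module name are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch: fusing RGB-image and point-cloud features for change
# captioning. Backbone choice and dimensions are illustrative only.
import torch
import torch.nn as nn

class RGBPointFusion(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Lightweight image encoder (stand-in for a pretrained CNN backbone).
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # PointNet-style per-point MLP followed by max pooling.
        self.pt_enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, rgb, points):
        # rgb: (B, 3, H, W); points: (B, N, 3)
        f_img = self.img_enc(rgb)
        f_pts = self.pt_enc(points).max(dim=1).values
        return self.fuse(torch.cat([f_img, f_pts], dim=1))

# Features of the before/after observations would be compared and fed to a
# caption decoder (e.g., an LSTM or transformer) to generate the change text.
fusion = RGBPointFusion()
feat = fusion(torch.randn(2, 3, 128, 128), torch.randn(2, 1024, 3))
print(feat.shape)  # torch.Size([2, 256])
```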

