Relation-Constrained 3D Reconstruction of Buildings in Metropolitan Areas from Photogrammetric Point Clouds

2021, Vol 13 (1), pp. 129
Author(s): Yuan Li, Bo Wu

The complexity and variety of buildings and the defects of point cloud data are the main challenges faced by 3D urban reconstruction from point clouds, especially in metropolitan areas. In this paper, we developed a method that embeds multiple relations into a procedural modelling process for the automatic 3D reconstruction of buildings from photogrammetric point clouds. First, a hybrid tree of constructive solid geometry and boundary representation (CSG-BRep) was built to decompose the building bounding space into multiple polyhedral cells based on geometric-relation constraints. The cells that approximate the shapes of buildings were then selected based on topological-relation constraints and geometric building models were generated using a reconstructing CSG-BRep tree. Finally, different parts of buildings were retrieved from the CSG-BRep trees, and specific surface types were recognized to convert the building models into the City Geography Markup Language (CityGML) format. The point clouds of 105 buildings in a metropolitan area in Hong Kong were used to evaluate the performance of the proposed method. Compared with two existing methods, the proposed method performed the best in terms of robustness, regularity, and topological correctness. The CityGML building models enriched with semantic information were also compared with the manually digitized ground truth, and the high level of consistency between the results suggested that the produced models will be useful in smart city applications.
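The cell-selection step described above can be pictured with a minimal sketch: score each candidate polyhedral cell by how many building points it contains and keep the well-supported ones. The sketch below uses axis-aligned boxes as stand-ins for the CSG-BRep cells and a simple occupancy threshold in place of the paper's geometric- and topological-relation constraints; the function name and threshold are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of cell selection: keep candidate cells with enough point support.
# Axis-aligned boxes stand in for the CSG-BRep polyhedral cells; the threshold rule
# is a simplification of the paper's relation constraints.
import numpy as np

def select_cells(points, cells, min_support=0.02):
    """points: (N, 3) array; cells: list of (min_corner, max_corner) boxes."""
    n = len(points)
    selected = []
    for lo, hi in cells:
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        if inside.sum() / n >= min_support:   # keep cells with enough point support
            selected.append((lo, hi))
    return selected

# Toy example: two candidate cells, all points concentrated in the first one.
pts = np.random.rand(1000, 3) * [10, 10, 5]
cells = [(np.array([0, 0, 0]), np.array([10, 10, 5])),
         (np.array([20, 20, 0]), np.array([30, 30, 5]))]
print(len(select_cells(pts, cells)))  # -> 1
```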

2021, Vol 10 (5), pp. 345
Author(s): Konstantinos Chaidas, George Tataris, Nikolaos Soulakellis

In a post-earthquake scenario, the semantic enrichment of 3D building models with seismic damage is crucial from the perspective of disaster management. This paper presents the methodology and results for post-earthquake Level of Detail 3 (LOD3) building modelling enriched with seismic-damage semantics based on the European Macroseismic Scale (EMS-98). The study area is the Vrisa traditional settlement on the island of Lesvos, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12 June 2017. The applied methodology consists of the following steps: (a) unmanned aircraft system (UAS) nadir and oblique images are acquired and photogrammetrically processed for 3D point cloud generation, (b) 3D building models are created from the 3D point clouds and (c) the 3D building models are transformed into the LOD3 City Geography Markup Language (CityGML) standard with enriched semantics describing the seismic damage of every part of the building (walls, roof, etc.). The results show that, by following this methodology, CityGML LOD3 models can be generated and enriched with buildings' seismic damage. These models can assist in the decision-making process during the recovery phase of a settlement as well as serve as the basis for its monitoring over time. Finally, these models can contribute to the estimation of the reconstruction cost of the buildings.
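One way to attach an EMS-98 damage grade to a CityGML building is through a generic attribute. The sketch below writes a generics stringAttribute with Python's xml.etree; the CityGML 2.0 namespace URIs, the attribute name "ems98_damage_grade", and the omission of proper gml:id handling are all simplifying assumptions for illustration and do not reflect the schema used in the paper.

```python
# Hedged sketch: attaching an EMS-98 damage grade to a CityGML building as a
# generic attribute. Namespace URIs are CityGML 2.0; the attribute name is an
# assumption, and gml:id handling is omitted for brevity.
import xml.etree.ElementTree as ET

NS = {
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gen": "http://www.opengis.net/citygml/generics/2.0",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def building_with_damage(building_id, damage_grade):
    """Return a <bldg:Building> element carrying an EMS-98 damage grade."""
    bldg = ET.Element(f"{{{NS['bldg']}}}Building", {"id": building_id})
    attr = ET.SubElement(bldg, f"{{{NS['gen']}}}stringAttribute",
                         {"name": "ems98_damage_grade"})
    ET.SubElement(attr, f"{{{NS['gen']}}}value").text = damage_grade
    return bldg

elem = building_with_damage("BLDG_0042", "Grade 3")
print(ET.tostring(elem, encoding="unicode"))
```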


Author(s): B. Xiong, S. Oude Elberink, G. Vosselman

Multi-View Stereo (MVS) technology has improved significantly in the last decade, providing much denser and more accurate point clouds than before. Such point clouds have become valuable data for modelling LOD2 buildings. However, they are still not accurate enough to replace lidar point clouds. Their relatively high level of noise prevents the accurate interpretation of roof faces; for example, a single planar roof face may have an uneven surface of points and therefore be segmented into many parts. The derived roof topology graphs are consequently erroneous and cannot be used to model buildings with current methods based on roof topology graphs. We propose a parameter-free algorithm to robustly and precisely derive roof structures and building models. The points connecting roof segments are searched and grouped as structure points and structure boundaries, representing the roof corners and boundaries. Their geometries are computed from the plane equations of their attached roof segments. If the data are available, the algorithm guarantees complete building structures in noisy point clouds while achieving globally optimized models. Experiments show that, compared with roof-topology-graph-based methods, the novel algorithm achieves consistent quality for both lidar and photogrammetric point clouds. Moreover, the new method is fully automatic and is a good alternative to model-driven methods when processing time is important.
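The remark that structure-point geometries are computed from the plane equations of the attached roof segments can be made concrete: three intersecting roof planes of the form n·x = d define a corner as the solution of a 3x3 linear system. A minimal numpy sketch of that idea (illustrative only, not the authors' implementation, which also handles more than three segments and degenerate configurations):

```python
# Hedged sketch: a roof corner (structure point) as the intersection of three
# roof-segment planes written as n . x = d.
import numpy as np

def structure_point(planes):
    """planes: list of (normal, d) with normal a length-3 array, plane n.x = d."""
    normals = np.array([n for n, _ in planes], dtype=float)
    ds = np.array([d for _, d in planes], dtype=float)
    if abs(np.linalg.det(normals)) < 1e-9:
        raise ValueError("planes are (nearly) parallel; no unique corner")
    return np.linalg.solve(normals, ds)

# Two sloped roof faces meeting a vertical gable wall at the ridge end:
p1 = (np.array([0.0, 0.5, 1.0]), 5.0)   # south-facing roof face
p2 = (np.array([0.0, -0.5, 1.0]), 3.0)  # north-facing roof face
p3 = (np.array([1.0, 0.0, 0.0]), 10.0)  # vertical gable wall x = 10
print(structure_point([p1, p2, p3]))    # -> [10.  2.  4.], the shared corner
```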


Author(s): K. Thoeni, A. Giacomini, R. Murtagh, E. Kniest

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured with a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, which was taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate their applicability to geotechnical problems.
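The per-camera accuracy assessment performed in CloudCompare essentially reduces to nearest-neighbour cloud-to-cloud distances against the TLS reference. The sketch below reproduces that simplest comparison mode with scipy; the toy data and noise level are assumptions, and CloudCompare's more refined local-model option is omitted.

```python
# Hedged sketch of a cloud-to-cloud comparison against a TLS reference:
# nearest-neighbour distances from each camera-derived point to the TLS cloud.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(camera_pts, tls_pts):
    """Both arguments are (N, 3) arrays of georeferenced coordinates."""
    tree = cKDTree(tls_pts)
    distances, _ = tree.query(camera_pts, k=1)
    return distances

# Toy data in place of the real georeferenced clouds.
tls = np.random.rand(5000, 3) * [20.0, 6.0, 1.0]          # ~20 m x 6 m wall
cam = tls[:2000] + np.random.normal(0, 0.01, (2000, 3))   # camera cloud, ~1 cm noise
d = cloud_to_cloud_distances(cam, tls)
print(f"mean = {d.mean():.3f} m, 95th percentile = {np.percentile(d, 95):.3f} m")
```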


2019, Vol 8 (4), pp. 193
Author(s): Hossein Bagheri, Michael Schmitt, Xiaoxiang Zhu

So-called prismatic 3D building models, following level of detail (LOD) 1 of the OGC City Geography Markup Language (CityGML) standard, are usually generated automatically by combining building footprints with height values. Typically, high-resolution digital elevation models (DEMs) or dense LiDAR point clouds are used to generate these building models. However, high-resolution LiDAR data are usually not available with extensive coverage, whereas globally available DEM data are often not detailed and accurate enough to provide sufficient input for the modeling of individual buildings. Therefore, this paper investigates the possibility of generating LOD1 building models from both volunteered geographic information (VGI) in the form of OpenStreetMap data and remote sensing-derived geodata improved by multi-sensor and multi-modal DEM fusion techniques or produced by synthetic aperture radar (SAR)-optical stereogrammetry. The results of this study show two things: first, the height information resulting from data fusion is of higher quality than the original data sources; secondly, simple, prismatic building models can be reconstructed by combining OpenStreetMap building footprints with easily accessible, remote sensing-derived geodata, indicating the potential for application over extensive areas. The building models were created under the assumption of flat terrain at a constant height, which is valid in the selected study area.
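Prismatic LOD1 modelling as described here is essentially footprint extrusion: an OpenStreetMap footprint polygon, a terrain height, and a building height yield a box-like solid. A minimal sketch under the paper's flat-terrain assumption (function name and toy numbers are illustrative):

```python
# Hedged sketch of LOD1 prismatic modelling: extrude a 2D footprint polygon by a
# building height above a constant terrain height. Output is a list of 3D faces:
# floor, roof and one quad per wall.
def extrude_footprint(footprint_xy, terrain_z, building_height):
    """footprint_xy: list of (x, y) vertices in order; returns list of 3D rings."""
    z0, z1 = terrain_z, terrain_z + building_height
    floor = [(x, y, z0) for x, y in footprint_xy]
    roof = [(x, y, z1) for x, y in footprint_xy]
    walls = []
    n = len(footprint_xy)
    for i in range(n):
        (xa, ya), (xb, yb) = footprint_xy[i], footprint_xy[(i + 1) % n]
        walls.append([(xa, ya, z0), (xb, yb, z0), (xb, yb, z1), (xa, ya, z1)])
    return [floor, roof] + walls

# Example: a 10 m x 6 m footprint, terrain at 52 m, DEM-derived height of 9 m.
solid = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 52.0, 9.0)
print(len(solid), "faces")  # 2 horizontal faces + 4 walls
```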


Author(s): O. Ennafii, A. Le Bris, F. Lafarge, C. Mallet

Abstract. City modeling consists in building a semantic, generalized model of the surface of urban objects, which can be seen as a special case of boundary representation surfaces. Most modeling methods focus on 3D buildings with Very High Resolution overhead data (images and/or 3D point clouds). The literature abundantly addresses 3D mesh processing but frequently ignores the analysis of such models. This requires an efficient representation of 3D buildings. In particular, for them to be used in supervised learning tasks, such a representation should be scalable and transferable to various environments, as only a few reference training instances would be available. In this paper, we propose two solutions that take into account the specificity of 3D urban models. They are based on graph kernels and a Scattering Network. They are evaluated here in the challenging framework of quality evaluation of building models, which is formulated as a supervised multilabel classification problem where error labels are predicted at the building level. The experiments show strong and complementary results for both feature extraction strategies (F-score > 74% for most labels). Transferability of the classification is also examined in order to assess the scalability of the evaluation process, yielding very encouraging scores (F-score > 86% for most labels).
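Graph kernels compare structured building models by counting shared substructures; the simplest instance is a vertex-label histogram kernel, i.e. the dot product of label-count vectors. The toy sketch below applies that idea to buildings described by their facet labels; the paper's kernels (and the Scattering Network features) are considerably richer, so treat this purely as an illustration of how a graph becomes a fixed-length, comparable feature.

```python
# Hedged sketch of the graph-kernel idea: a vertex-label histogram kernel over
# toy building graphs described by their facet labels.
from collections import Counter

def label_histogram(graph_labels, vocabulary):
    counts = Counter(graph_labels)
    return [counts.get(label, 0) for label in vocabulary]

def vertex_histogram_kernel(labels_a, labels_b):
    vocab = sorted(set(labels_a) | set(labels_b))
    ha, hb = label_histogram(labels_a, vocab), label_histogram(labels_b, vocab)
    return sum(x * y for x, y in zip(ha, hb))

# Toy building graphs described by their facet labels:
model_a = ["wall", "wall", "wall", "wall", "roof", "roof", "ground"]
model_b = ["wall", "wall", "roof", "ground", "ground"]
print(vertex_histogram_kernel(model_a, model_b))  # -> 12, a similarity score
```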


Author(s): Y. Li, B. Wu

Abstract. Automatic 3D building reconstruction from laser scanning or photogrammetric point clouds has gained increasing attention in the past two decades. Although many efforts have been made, the complexity of buildings and the incompleteness of point clouds, i.e., missing data, still make automatic 3D reconstruction of buildings a challenging task in large-scale urban scenes with various architectural styles. This paper presents an innovative approach for the automatic generation of 3D models of complex buildings from even incomplete point clouds. The approach first decomposes the 3D space into multiple space units, including 3D polyhedral cells, facets and edges, where the facets and edges are also encoded with topological-relation constraints. Then, the units and constraints are used together to approximate the buildings. On the one hand, by extracting facets from 3D cells and further extracting edges from facets, this approach simplifies complicated topological computations. On the other hand, because this approach models buildings on the basis of polyhedral cells, it guarantees that the models are manifold and watertight and avoids the need to correct topological errors. A challenging dataset containing 105 buildings acquired in Central, Hong Kong, was used to evaluate the performance of the proposed approach. The results were compared with two previous methods, and the comparisons suggest that the proposed approach outperforms the other methods in terms of robustness, regularity, and accuracy of the models, with an average root-mean-square error of less than 0.9 m. The proposed approach is of significance for the automatic 3D modelling of buildings for urban applications.
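The claim that cell-based modelling yields manifold, watertight models follows from a simple rule: once every polyhedral cell is labelled inside or outside the building, the model surface is exactly the set of facets shared by one inside and one outside cell. The toy bookkeeping below illustrates that rule; the cell and facet identifiers and the adjacency encoding are illustrative assumptions, not the paper's data model.

```python
# Hedged sketch of why cell-based selection yields watertight models: the
# building boundary is the set of facets separating an inside cell from an
# outside cell (internal facets are dropped automatically).
def boundary_facets(facet_adjacency, inside_cells):
    """facet_adjacency: dict facet_id -> (cell_a, cell_b); 'outside' marks the
    unbounded exterior. Returns facets separating inside from outside cells."""
    boundary = []
    for facet, (a, b) in facet_adjacency.items():
        if (a in inside_cells) != (b in inside_cells):  # exactly one side inside
            boundary.append(facet)
    return boundary

# Toy example: two stacked cells form the building, a third cell is empty space.
adjacency = {
    "f_ground": ("c1", "outside"),
    "f_mid":    ("c1", "c2"),      # internal facet, must not appear in the model
    "f_roof":   ("c2", "outside"),
    "f_side":   ("c2", "c3"),
    "f_top3":   ("c3", "outside"),
}
print(boundary_facets(adjacency, inside_cells={"c1", "c2"}))
# -> ['f_ground', 'f_roof', 'f_side']
```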


Author(s): S. Goebbels, R. Pohle-Fröhlich

The paper presents a new data-driven approach to generating CityGML building models from airborne laser scanning data. The approach is based on image processing methods applied to an interpolated height map and avoids shortcomings of established plane-detection methods such as the Hough transform or RANSAC algorithms on point clouds. The improvement originates in an interpolation algorithm that generates a height map from sparse point cloud data while preserving ridge lines and step edges of roofs. Roof planes are then detected by clustering the height map's gradient angles; plane parameterizations are estimated and used to filter out noise around ridge lines. On that basis, a raster representation of roof facets is generated. Roof polygons are then determined from region outlines, connected into a roof boundary graph, and simplified. Although the method is not limited to churches, its performance is primarily tested on church roofs of the German city of Krefeld because of their complexity. To eliminate inaccuracies of spires, contours of towers are detected additionally, and spires are rendered as solids of revolution. In our experiments, the new data-driven method led to significantly better building models than the previously applied model-driven approach.
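The core of the plane-detection step, clustering the height map's gradient angles, can be sketched in a few lines of numpy: take the gradient of the interpolated height raster, convert it to aspect angles, and bin the angles into coarse direction clusters that act as seeds for roof facets. The naive fixed binning and the flatness threshold below are assumptions standing in for the paper's clustering and subsequent plane fitting.

```python
# Hedged sketch of roof-plane seeding by clustering height-map gradient angles.
import numpy as np

def gradient_angle_clusters(height_map, n_bins=8, flat_threshold=0.05):
    gy, gx = np.gradient(height_map)            # per-pixel slope components
    slope = np.hypot(gx, gy)
    aspect = np.arctan2(gy, gx)                 # gradient direction in (-pi, pi]
    bins = np.floor((aspect + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return np.where(slope < flat_threshold, -1, bins)   # -1 marks flat areas

# Toy gable roof: height rises towards the ridge in the row direction, then falls.
rows = np.concatenate([np.arange(10), np.arange(10)[::-1]])
roof = np.tile(rows[:, None], (1, 30)).astype(float)
labels = gradient_angle_clusters(roof)
print(np.unique(labels))   # -> two opposing roof-aspect clusters
```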


Author(s): S. N. Perera, N. Hetti Arachchige, D. Schneider

Geometrically and topologically correct 3D building models are required to satisfy new demands such as 3D cadastre, map updating, and decision making. Increasing attention has been paid to building reconstruction using Airborne Laser Scanning (ALS) point cloud data. However, the planimetric accuracy of roof outlines, including step-edges, is questionable in building models derived from point clouds alone. This paper presents a new approach for the detection of accurate building boundaries by merging point clouds acquired by ALS with aerial photographs. It comprises two major parts: reconstruction of initial roof models from point clouds only, and refinement of their boundaries. A shortest closed circle (graph) analysis method is employed to generate building models in the first step. Owing to its high reliability, this method provides reconstruction without prior knowledge of primitive building types, even when complex height jumps and various types of building roof are present. The accurate position of the boundaries of the initial models is then determined by integrating edges extracted from the aerial photographs. In this process, scene constraints defined on the basis of the initial roof models are introduced, as these models represent explicit, unambiguous geometries of the scene. Experiments were conducted using the ISPRS benchmark test data. Based on the test results, we show that the proposed approach can reconstruct 3D building models with higher geometrical (planimetric and vertical) and topological accuracy.
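The boundary-refinement idea, pulling approximate ALS-derived roof outlines onto precisely located image edges, can be illustrated by projecting each initial outline vertex onto the nearest extracted edge segment, gated by a maximum shift that stands in for the paper's scene constraints. The sketch below is purely illustrative: the function names, the tolerance, and the per-vertex snapping scheme are assumptions, not the authors' algorithm.

```python
# Hedged sketch of boundary refinement: snap each vertex of an initial roof
# outline onto the nearest 2D image edge segment, if it lies within a tolerance.
import numpy as np

def project_onto_segment(p, a, b):
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def refine_outline(vertices, image_edges, max_shift=0.5):
    """vertices: (N, 2) outline; image_edges: list of (a, b) segment endpoints."""
    refined = []
    for p in vertices:
        candidates = [project_onto_segment(p, a, b) for a, b in image_edges]
        best = min(candidates, key=lambda q: np.linalg.norm(q - p))
        refined.append(best if np.linalg.norm(best - p) <= max_shift else p)
    return np.array(refined)

# Toy case: slightly misplaced outline corners snap to image edges at x = 10 and y = 0.
outline = np.array([[0.1, 0.0], [9.7, 0.1], [9.8, 6.0], [0.0, 6.1]])
edges = [(np.array([10.0, -1.0]), np.array([10.0, 7.0])),
         (np.array([-1.0, 0.0]), np.array([11.0, 0.0]))]
print(refine_outline(outline, edges))
```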


Sensors, 2019, Vol 19 (3), pp. 533
Author(s): Shengjun Tang, Yunjie Zhang, You Li, Zhilu Yuan, Yankun Wang, ...

Semantically rich indoor models are increasingly used throughout a facility’s life cycle for different applications. With the decreasing price of 3D sensors, it is convenient to acquire point cloud data from consumer-level scanners. However, most existing methods for 3D indoor reconstruction from point clouds involve a tedious manual or interactive process due to line-of-sight occlusions and complex space structures. Using the multiple types of data obtained by RGB-D devices, this paper proposes a fast and automatic method for reconstructing semantically rich indoor 3D building models from low-quality RGB-D sequences. Our method is capable of identifying and modelling the main structural components of indoor environments, such as spaces, walls, floors, ceilings, windows, and doors, from the RGB-D datasets. The method includes space division and extraction, opening extraction, and global optimization. For space division and extraction, rather than distinguishing room spaces based on the detected wall planes, we interactively define the start-stop position for each functional space (e.g., room, corridor, kitchen) during scanning. Then, an interior-elements filtering algorithm is proposed for wall component extraction and a boundary generation algorithm is used for space layout determination. For opening extraction, we propose a new noise-robust method based on the properties of convex hulls, octree structures, Euclidean clusters and the camera trajectory, designed to cope with the inevitable occlusions affecting data collected in indoor environments. A global optimization approach for planes is designed to eliminate inconsistencies among planes sharing the same global plane and to maintain plausible connectivity between the walls and between the walls and openings. The final model is stored according to the CityGML 3.0 standard. Our approach allows for the robust generation of semantically rich 3D indoor models and has strong applicability and reconstruction power for complex real-world datasets.
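The global plane optimization step can be pictured as grouping locally fitted wall planes that are nearly coplanar and replacing each group with a single consensus plane, so that segments observed in different frames stop drifting apart. The greedy grouping and the angle and offset thresholds in the sketch below are simplifying assumptions; the paper formulates this as a proper global optimization.

```python
# Hedged sketch of global plane regularization: greedily group locally fitted
# planes whose normals and offsets nearly agree, then average each group.
import numpy as np

def merge_coplanar(planes, angle_tol_deg=5.0, offset_tol=0.05):
    """planes: list of (unit_normal, d) with plane n.x = d. Returns merged planes."""
    groups = []
    for n, d in planes:
        for group in groups:
            n_ref, d_ref = group[0]
            angle = np.degrees(np.arccos(np.clip(abs(np.dot(n, n_ref)), -1, 1)))
            if angle < angle_tol_deg and abs(d - d_ref) < offset_tol:
                group.append((n, d))
                break
        else:
            groups.append([(n, d)])
    merged = []
    for group in groups:
        n_avg = np.mean([n for n, _ in group], axis=0)
        n_avg /= np.linalg.norm(n_avg)
        merged.append((n_avg, float(np.mean([d for _, d in group]))))
    return merged

# Two noisy observations of the same wall plus one distinct wall -> two planes.
walls = [(np.array([1.0, 0.0, 0.0]), 3.00),
         (np.array([0.999, 0.04, 0.0]) / np.linalg.norm([0.999, 0.04, 0.0]), 3.02),
         (np.array([0.0, 1.0, 0.0]), 8.00)]
print(len(merge_coplanar(walls)))  # -> 2
```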


2017
Author(s): Jérémie Voumard, Antonio Abellan, Pierrick Nicolet, Marie-Aurélie Chanut, Marc-Henri Derron, ...

Abstract. We discuss here the challenges and limitations of surveying rock slope failures using 3D reconstruction from images acquired from Street View Imagery (SVI) and processed with modern photogrammetric workflows. We show how the back-in-time function can be used for 3D reconstruction of two or more image sets from the same site acquired at different times, allowing rock slope surveying. Three sites in the French Alps were selected: (a) a cliff beside a road where a protective wall collapsed, consisting of two image sets (60 and 50 images, respectively) captured over a six-year timeframe; (b) a large-scale active landslide located on a slope 250 m from the road, using seven image sets (50 to 80 images per set) from five different time periods, with three image sets for one of the periods; (c) a cliff over a tunnel which has collapsed, using three image sets over a six-year timeframe. The analysis includes the use of different commercially available Structure from Motion (SfM) programs and a comparison between the extracted photogrammetric point clouds and a LiDAR-derived mesh used as ground truth. As a result, both the landslide deformation and estimates of the fallen volumes were clearly identified in the point clouds. Results are site- and software-dependent, as a function of the image set and the number of images, with model accuracies ranging between 0.1 and 3.1 m in the best and worst scenarios, respectively. Despite some clear limitations and challenges, this manuscript demonstrates that this original approach can provide preliminary 3D models of an area without on-field images. Furthermore, the pre-failure topography can be obtained for sites where it would not be available otherwise.
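The fallen-volume estimates mentioned above amount to differencing two epochs of the reconstructed surface. A common simplification is to rasterize both point clouds onto a common grid and integrate the height loss per cell; the sketch below follows that simplification, with the cell size, the per-cell maximum reduction, and the toy scene all being assumptions rather than the paper's cloud-to-mesh comparison.

```python
# Hedged sketch of fallen-volume estimation by multi-temporal differencing:
# rasterize pre- and post-failure clouds (max height per cell) and integrate
# the height loss over the cell area.
import numpy as np

def rasterize_max(points, origin, cell, shape):
    grid = np.full(shape, np.nan)
    ij = np.floor((points[:, :2] - origin) / cell).astype(int)
    for (i, j), z in zip(ij, points[:, 2]):
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            grid[i, j] = z if np.isnan(grid[i, j]) else max(grid[i, j], z)
    return grid

def fallen_volume(before_pts, after_pts, origin, cell, shape):
    before = rasterize_max(before_pts, origin, cell, shape)
    after = rasterize_max(after_pts, origin, cell, shape)
    loss = np.clip(before - after, 0, None)       # only material that was removed
    return float(np.nansum(loss) * cell * cell)

# Toy scene: a 1 m-high block of 2 m x 2 m disappears between the two epochs.
xy = np.stack(np.meshgrid(np.arange(0, 10, 0.1), np.arange(0, 10, 0.1)), -1).reshape(-1, 2)
before = np.column_stack([xy, np.where((xy[:, 0] < 2) & (xy[:, 1] < 2), 1.0, 0.0)])
after = np.column_stack([xy, np.zeros(len(xy))])
print(fallen_volume(before, after, origin=np.array([0.0, 0.0]), cell=0.5, shape=(20, 20)))
# -> ~4.0 cubic metres
```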

