3D CITY MODELS FOR URBAN MINING: POINT CLOUD BASED SEMANTIC ENRICHMENT FOR SPECTRAL VARIATION IDENTIFICATION IN HYPERSPECTRAL IMAGERY

Author(s):  
P. A. Ruben ◽  
R. Sileryte ◽  
G. Agugiaro

Abstract. Urban mining aims at reusing building materials enclosed in our cities. It therefore requires accurate information on the availability of these materials for each separate building. While recent publications have demonstrated that such information can be obtained using machine learning and data fusion techniques applied to hyperspectral imagery, challenges still persist. One of these is the so-called ‘salt-and-pepper noise’, i.e. the oversensitivity to the presence of several materials within one pixel (e.g. chimneys, roof windows). For the specific case of identifying roof materials, this research demonstrates the potential of 3D city models to identify and filter out such unreliable pixels beforehand. As, from a geometrical point of view, most available 3D city models are too generalized for this purpose (e.g. in CityGML Level of Detail 2), semantic enrichment using a point cloud is proposed to compensate for the missing details. So-called deviations are mapped onto a 3D building model by comparing it with a point cloud, using a seeded region growing approach based on distance and orientation features. Finally, the results of a validation carried out for parts of Rotterdam, yielding KHAT values as high as 0.7, are discussed.
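
To make the comparison step concrete, the following is a minimal sketch (not the authors' implementation) of seeded region growing over a roof point cloud using distance-to-plane and normal-orientation features; the thresholds, the synthetic data, and the function name `grow_roof_region` are illustrative assumptions.

```python
# Minimal sketch: seeded region growing on distance and orientation features
# to separate points that fit a modelled roof plane from "deviations"
# (e.g. chimneys, roof windows). Thresholds and data are illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def grow_roof_region(points, normals, plane_point, plane_normal,
                     dist_tol=0.15, angle_tol_deg=15.0, k=8):
    """Return a boolean mask of points accepted as belonging to the roof plane."""
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    # Per-point features: orthogonal distance to plane, angle to plane normal.
    dist = np.abs((points - plane_point) @ plane_normal)
    cos_ang = np.clip(np.abs(normals @ plane_normal), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_ang))
    candidate = (dist < dist_tol) & (angle < angle_tol_deg)

    accepted = np.zeros(len(points), dtype=bool)
    if not candidate.any():
        return accepted
    tree = cKDTree(points)
    # Seed: the candidate point closest to the plane, then grow via k nearest neighbours.
    seed = int(np.argmin(np.where(candidate, dist, np.inf)))
    stack = [seed]
    accepted[seed] = True
    while stack:
        idx = stack.pop()
        for nb in tree.query(points[idx], k=k)[1]:
            if candidate[nb] and not accepted[nb]:
                accepted[nb] = True
                stack.append(nb)
    return accepted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Flat roof at z = 10 m with a small "chimney" block sticking out.
    roof = np.column_stack([rng.uniform(0, 10, 500), rng.uniform(0, 10, 500),
                            10 + rng.normal(0, 0.02, 500)])
    chimney = np.column_stack([rng.uniform(4, 5, 50), rng.uniform(4, 5, 50),
                               rng.uniform(10, 11.5, 50)])
    pts = np.vstack([roof, chimney])
    nrm = np.tile([0.0, 0.0, 1.0], (len(pts), 1))
    nrm[len(roof):] = [1.0, 0.0, 0.0]          # chimney walls face sideways
    mask = grow_roof_region(pts, nrm, np.array([0, 0, 10.0]), np.array([0, 0, 1.0]))
    print(f"{(~mask).sum()} of {len(pts)} points flagged as deviations")
```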

2015 ◽  
Vol 4 (2) ◽  
pp. 68-76 ◽  
Author(s):  
Roland Billen ◽  
Anne-Françoise Cutting-Decelle ◽  
Claudine Métral ◽  
Gilles Falquet ◽  
Sisi Zlatanova ◽  
...  

This technical paper is a contribution to the identification of current challenges of semantic 3D city models. They are presented in four parts, namely: 3D-enriched city models and their connection with urban information models and smart cities; urban model integration; urban analyses; and data. This work is an output of the COST Action TU0801 “Semantic Enrichment of 3D city models for sustainable urban development”.


Author(s):  
O. Wysocki ◽  
B. Schwab ◽  
L. Hoegner ◽  
T. H. Kolbe ◽  
U. Stilla

Abstract. Nowadays, the number of connected devices providing unstructured data is rapidly rising. These devices acquire data with a temporal and spatial resolution at an unprecedented level, creating an influx of geoinformation which, however, lacks semantic information. Simultaneously, structured datasets like semantic 3D city models are widely available and assure rich semantics and high global accuracy, but are represented by rather coarse geometries. While these downsides curb the usability of both data types for today's applications, their fusion can maximize their potential. Since testing and developing automated driving functions stands at the forefront of these challenges, we propose a pipeline fusing structured (CityGML and HD Map) and unstructured datasets (MLS point clouds) to maximize their advantages in the domain of automatic 3D road space model reconstruction. The pipeline is a parameterized end-to-end solution that integrates segmentation, reconstruction, and modeling tasks while ensuring the geometric and semantic validity of the models. Firstly, the segmentation of point clouds is supported by the transfer of semantics from the structured to the unstructured dataset. The distinction between horizontal- and vertical-like point cloud subsets triggers either further segmentation or immediate refinement, and only models adequately depicted by the point clouds are retained. Then, based on the classified and filtered point clouds, the input 3D model geometries are refined. Building upon the refinement, the semantic enrichment of the 3D models is presented. The deployment of a simulation engine for automated driving research and a city model database tool underlines the versatility of possible application areas.
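
One ingredient of such a pipeline, the split of an MLS point cloud into horizontal- and vertical-like subsets, can be sketched with local PCA normals. This is a minimal illustration under an assumed neighbourhood size and angle threshold, not the paper's parameterized solution.

```python
# Minimal sketch: split a point cloud into horizontal-like (road-like) and
# vertical-like (facade-like) subsets using local PCA normals.
# Neighbourhood size and angle threshold are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=12):
    """Estimate unit normals from the smallest PCA eigenvector of each k-neighbourhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # Eigenvector of the smallest eigenvalue approximates the surface normal.
        _, vecs = np.linalg.eigh(nbrs.T @ nbrs)
        normals[i] = vecs[:, 0]
    return normals

def split_horizontal_vertical(points, angle_tol_deg=30.0, k=12):
    normals = estimate_normals(points, k=k)
    tilt = np.degrees(np.arccos(np.clip(np.abs(normals[:, 2]), 0.0, 1.0)))
    horizontal = tilt < angle_tol_deg              # normal close to the vertical axis
    vertical = tilt > (90.0 - angle_tol_deg)       # normal close to horizontal
    return horizontal, vertical

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    road = np.column_stack([rng.uniform(0, 20, 800), rng.uniform(0, 5, 800),
                            rng.normal(0, 0.02, 800)])
    facade = np.column_stack([rng.uniform(0, 20, 400),
                              np.full(400, 5.0) + rng.normal(0, 0.02, 400),
                              rng.uniform(0, 8, 400)])
    pts = np.vstack([road, facade])
    hor, ver = split_horizontal_vertical(pts)
    print(f"horizontal-like: {hor.sum()}, vertical-like: {ver.sum()}")
```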


Author(s):  
I. Apra ◽  
C. Bachert ◽  
C. Cáceres Tocora ◽  
Ö. Tufan ◽  
O. Veselý ◽  
...  

Abstract. In guiding the energy transition efforts towards renewable energy sources, 3D city models have been shown to be useful tools for assessing the annual solar energy generation potential of urban landscapes. However, the simplified roof geometry included in these 3D city models and the lack of additional semantic information about the buildings' roofs often yield less accurate solar potential evaluations than desirable. In this paper we propose three different methods to infer and store additional information in 3D city models, namely on physical obstacles present on the roof and on existing solar panels. Both can be used to increase the accuracy of roof solar panel retrofit potential estimates. These methods are developed and tested on the open datasets available in the Netherlands, specifically the AHN3 lidar point cloud and PDOK aerial photography. However, we believe they can be adapted to other environments as well, depending on the locally available datasets and their precision.
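
As a rough illustration of how a lidar point cloud can reveal roof details missing from a simplified 3D city model, the sketch below flags points protruding above a least-squares roof plane as candidate obstacles or panels; the threshold, data, and function names are assumptions, not the methods proposed in the paper.

```python
# Minimal sketch: flag candidate roof obstacles (chimneys, dormers, existing
# panels) by comparing lidar points over a roof with a least-squares roof plane.
# Dataset, threshold and names are illustrative; not the authors' implementation.
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through the given points."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs                                   # (a, b, c)

def flag_obstacles(roof_points, height_tol=0.25):
    """Return a mask of points protruding more than height_tol above the plane."""
    a, b, c = fit_plane(roof_points)
    predicted_z = a * roof_points[:, 0] + b * roof_points[:, 1] + c
    return (roof_points[:, 2] - predicted_z) > height_tol

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Gently sloped roof with a 1 m high obstacle block.
    x, y = rng.uniform(0, 8, 600), rng.uniform(0, 6, 600)
    z = 9.0 + 0.1 * x + rng.normal(0, 0.03, 600)
    pts = np.column_stack([x, y, z])
    pts[:30, 2] += 1.0                              # simulate the obstacle
    obstacles = flag_obstacles(pts)
    print(f"{obstacles.sum()} points flagged as roof obstacles")
```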


2020 ◽  
Vol 12 (12) ◽  
pp. 1972 ◽  
Author(s):  
Urška Drešček ◽  
Mojca Kosmatin Fras ◽  
Jernej Tekavec ◽  
Anka Lisec

This paper presents an innovative approach of using a spatial extract, transform, load (ETL) solution for 3D building modelling based on an unmanned aerial vehicle (UAV) photogrammetric point cloud. The main objective of the paper is to present the holistic workflow for 3D building modelling, emphasising the benefits of using spatial ETL solutions for this purpose. Namely, despite the increasing demand for 3D city models and their geospatial applications, the generation of 3D city models is still challenging in the geospatial domain. Advanced geospatial technologies provide various possibilities for the mass acquisition of geospatial data that is further used for 3D city modelling, but there is a huge difference in the cost and quality of the input data. While aerial photogrammetry and airborne laser scanning involve high costs, UAV photogrammetry has brought new opportunities, including for small and medium-sized companies, by providing a more flexible and low-cost source of spatial data for 3D modelling. In our data-driven approach, we use a spatial ETL solution to reconstruct a 3D building model from a dense image matching point cloud obtained beforehand from UAV imagery. The results are 3D building models in a semantic vector format consistent with the OGC CityGML standard, Level of Detail 2 (LOD2). The approach has been tested on selected buildings in a simple semi-urban area. We conclude that spatial ETL solutions can be used efficiently for 3D building modelling from UAV data, where the developed data process model allows the developer to easily control and manipulate each processing step.
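
The extract-transform-load pattern underlying such a workflow can be sketched as a chain of small functions. The following is a loose, illustrative analogy (not any specific ETL product or the authors' workbench), with all element names and the flat-roof simplification assumed for brevity.

```python
# Minimal sketch of the extract-transform-load pattern for point-cloud-based
# building modelling: extract points, transform them into simple block-model
# parameters, load the result as a CityGML-flavoured XML fragment.
# All element names and the flat-roof simplification are illustrative only.
import numpy as np
import xml.etree.ElementTree as ET

def extract(n=500, seed=3):
    """Stand-in for reading a dense image matching point cloud of one building."""
    rng = np.random.default_rng(seed)
    xy = rng.uniform([0, 0], [10, 8], size=(n, 2))
    z = np.full(n, 12.0) + rng.normal(0, 0.05, n)
    return np.column_stack([xy, z])

def transform(points):
    """Derive a footprint bounding box and an eave height from the points."""
    (xmin, ymin, _), (xmax, ymax, _) = points.min(axis=0), points.max(axis=0)
    height = float(np.percentile(points[:, 2], 95))
    return {"footprint": (xmin, ymin, xmax, ymax), "height": height}

def load(model, path="building.gml"):
    """Write the derived parameters into a small CityGML-flavoured fragment."""
    bldg = ET.Element("Building")
    ET.SubElement(bldg, "measuredHeight").text = f"{model['height']:.2f}"
    ET.SubElement(bldg, "footprint").text = " ".join(f"{v:.2f}" for v in model["footprint"])
    ET.ElementTree(bldg).write(path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    load(transform(extract()))
    print("building.gml written")
```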


Author(s):  
M. Rook ◽  
F. Biljecki ◽  
A. A. Diakité

The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost due to data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step towards an automatic workflow that labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score indicating whether a region represents the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds for a region growing algorithm that creates regions of triangles of the other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85 % and 99 % in the automatic semantic labelling on four different test datasets. The paper concludes by indicating remaining problems and difficulties, and the next steps in the research.
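
The first clustering step, grouping upward-facing adjacent triangles of a polygon soup, can be sketched as follows; the tiny mesh, the adjacency construction via shared edges, and the normal threshold are illustrative assumptions rather than the paper's exact algorithm.

```python
# Minimal sketch: cluster upward-facing, edge-adjacent triangles of a polygon
# soup via region growing, as a stand-in for the first labelling step.
# The tiny mesh and the normal threshold are illustrative assumptions.
import numpy as np
from collections import defaultdict

def triangle_normals(vertices, faces):
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def upward_regions(vertices, faces, min_up=0.9):
    """Group edge-adjacent triangles whose normals point (nearly) straight up."""
    normals = triangle_normals(vertices, faces)
    upward = normals[:, 2] > min_up
    # Adjacency via shared (sorted) edges.
    edge_to_tris = defaultdict(list)
    for t, f in enumerate(faces):
        for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_to_tris[tuple(sorted(e))].append(t)
    adjacency = defaultdict(set)
    for tris in edge_to_tris.values():
        for i in tris:
            adjacency[i].update(j for j in tris if j != i)
    # Flood fill over upward-facing triangles only.
    region = np.full(len(faces), -1)
    current = 0
    for seed in range(len(faces)):
        if not upward[seed] or region[seed] != -1:
            continue
        stack = [seed]
        region[seed] = current
        while stack:
            t = stack.pop()
            for nb in adjacency[t]:
                if upward[nb] and region[nb] == -1:
                    region[nb] = current
                    stack.append(nb)
        current += 1
    return region                                   # -1 = not upward-facing

if __name__ == "__main__":
    # Two horizontal triangles (a roof quad) plus one vertical triangle.
    verts = np.array([[0, 0, 3], [1, 0, 3], [1, 1, 3], [0, 1, 3],
                      [1, 0, 0], [1, 1, 0]], dtype=float)
    tris = np.array([[0, 1, 2], [0, 2, 3], [1, 4, 5]])
    print(upward_regions(verts, tris))              # -> [0 0 -1]
```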


Author(s):  
B. Willenborg ◽  
M. Pültz ◽  
T. H. Kolbe

Abstract. High-resolution 3D mesh models are an inexpensive and increasingly available data source for 3D models of cities and landscapes of high visual quality and rich geometric detail. However, because of their simple data structure, their analytic capabilities are limited. Semantic 3D city models contain rich thematic information and are well suited for analytics due to their deeply structured semantic data model. In this work, an approach for the integration of semantic 3D city models with 3D mesh models is presented. The method is based on geometric distance measures between mesh triangles and semantic surfaces and a region growing approach using plane fitting. The resulting semantic segmentation of mesh triangles is stored in a CityGML dataset to enrich the semantic model with an additional detailed geometric representation of its surfaces and a broad range of previously unrepresented features like technical building installations, balconies, dormers, chimneys, and vegetation. The potential of the approach is demonstrated on the example of a solar potential analysis, whose estimation quality is significantly improved by the mesh integration. The impact of the method is quantified in a case study using open data from the city of Helsinki.
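
The distance-based assignment of mesh triangles to semantic surfaces can be illustrated with a minimal sketch in which surfaces are simplified to planes and triangles are represented by their centroids; the tolerance and names are assumptions, not the paper's implementation.

```python
# Minimal sketch: assign mesh triangles to the nearest semantic surface of a
# city model using point-to-plane distance of the triangle centroids.
# Surfaces are simplified to infinite planes; threshold and names are
# illustrative assumptions.
import numpy as np

def assign_triangles(centroids, surfaces, max_dist=0.5):
    """surfaces: list of (name, point_on_plane, unit_normal).
    Returns the surface name per centroid, or None if nothing is close enough."""
    labels = []
    for c in centroids:
        best_name, best_d = None, max_dist
        for name, p, n in surfaces:
            d = abs(np.dot(c - p, n))
            if d < best_d:
                best_name, best_d = name, d
        labels.append(best_name)
    return labels

if __name__ == "__main__":
    surfaces = [
        ("RoofSurface",   np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, 1.0])),
        ("WallSurface_N", np.array([0.0, 5.0, 5.0]),  np.array([0.0, 1.0, 0.0])),
    ]
    centroids = np.array([[2.0, 2.0, 10.05],    # roof triangle
                          [3.0, 5.02, 4.0],     # facade triangle
                          [2.0, 2.0, 11.50]])   # dormer / chimney triangle
    print(assign_triangles(centroids, surfaces))
    # -> ['RoofSurface', 'WallSurface_N', None]
```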


Author(s):  
C. Beil ◽  
T. Kutzner ◽  
B. Schwab ◽  
B. Willenborg ◽  
A. Gawronski ◽  
...  

Abstract. A range of different and increasingly accessible acquisition methods, the possibility of frequent data updates for large areas, and a simple data structure are some of the reasons for the popularity of three-dimensional (3D) point cloud data. While there are multiple techniques for segmenting and classifying point clouds, the capabilities of common data formats such as LAS for providing semantic information are mostly limited to assigning points to a certain category (classification). However, several fields of application, such as digital urban twins used for simulations and analyses, require more detailed semantic knowledge. This can be provided by semantic 3D city models containing hierarchically structured semantic and spatial information. Although semantic models are often reconstructed from point clouds, they are usually geometrically less accurate due to generalization processes. First, point cloud data structures and formats are discussed with respect to their semantic capabilities. Then, a new approach for integrating point clouds with semantic 3D city models is presented, consequently combining the respective advantages of both data types. In addition to elaborate (and established) semantic concepts for several thematic areas, the new version 3.0 of the international Open Geospatial Consortium (OGC) standard CityGML also provides a PointCloud module. In this paper, a scheme is shown for how CityGML 3.0 can be used to provide semantic structures for point clouds (stored directly or in a separate LAS file). Methods and metrics to automatically assign points to corresponding Level of Detail 2 (LoD2) or LoD3 models are presented. Subsequently, dataset examples implementing these concepts are provided for download.
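
A minimal sketch of such an assignment metric is given below: each point receives the gml:id of the nearest bounded LoD2 surface, based on plane distance and an in-extent check. The surface geometry, identifiers, and tolerance are illustrative assumptions, not the paper's metrics.

```python
# Minimal sketch: attach semantics from a LoD2 model to individual points by
# assigning each point the gml:id of the nearest bounded building surface
# (plane distance plus an in-extent check). Geometry, ids and tolerance are
# illustrative assumptions.
import numpy as np

def make_rect(gml_id, origin, u, v):
    """Rectangular surface given by an origin and two spanning edge vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    n = np.cross(u, v)
    return {"id": gml_id, "o": np.asarray(origin, float),
            "u": u / np.linalg.norm(u), "v": v / np.linalg.norm(v),
            "lu": np.linalg.norm(u), "lv": np.linalg.norm(v),
            "n": n / np.linalg.norm(n)}

def assign_point(p, surfaces, tol=0.3):
    """Return the id of the closest surface within tol whose extent contains p."""
    best_id, best_d = None, tol
    for s in surfaces:
        rel = p - s["o"]
        d = abs(rel @ s["n"])
        cu, cv = rel @ s["u"], rel @ s["v"]
        inside = (0.0 <= cu <= s["lu"]) and (0.0 <= cv <= s["lv"])
        if inside and d < best_d:
            best_id, best_d = s["id"], d
    return best_id

if __name__ == "__main__":
    lod2 = [make_rect("roof_01", [0, 0, 10], [10, 0, 0], [0, 8, 0]),
            make_rect("wall_01", [0, 0, 0],  [10, 0, 0], [0, 0, 10])]
    pts = np.array([[5.0, 4.0, 10.1],   # on the roof
                    [5.0, 0.05, 3.0],   # on the south wall
                    [5.0, 4.0, 12.0]])  # unmodelled object above the roof
    print([assign_point(p, lod2) for p in pts])   # ['roof_01', 'wall_01', None]
```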


Author(s):  
O. Wysocki ◽  
Y. Xu ◽  
U. Stilla

Abstract. Throughout the years, semantic 3D city models have been created to depict 3D spatial phenomena. Recently, an increasing number of mobile laser scanning (MLS) units yield terrestrial point clouds at an unprecedented level of detail. Both dataset types often depict the same 3D spatial phenomenon differently, thus their fusion should increase the quality of the captured 3D spatial phenomena. Yet, each dataset has modality-dependent uncertainties that hinder their immediate fusion. Therefore, we present a method for fusing MLS point clouds with semantic 3D building models while considering uncertainty issues. Specifically, we show the coregistration of MLS point clouds with semantic 3D building models based on expert confidence in the evaluated metadata, quantified by a confidence interval (CI). This step leads to a dynamic adjustment of the CI, which is used to delineate matching bounds for both datasets. Both the coregistration and matching steps serve as priors for a Bayesian network (BayNet) that performs application-dependent identity estimation. The BayNet propagates uncertainties and beliefs throughout the process to estimate end probabilities for confirmed, unmodeled, and other city objects. We conducted promising preliminary experiments on urban MLS and CityGML datasets. Our strategy sets up a framework for the fusion of MLS point clouds and semantic 3D building models. This framework aids the challenging parallel usage of such datasets in applications such as façade refinement or change detection. To further support this process, we have open-sourced our implementation.
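
The idea of deriving matching bounds from metadata-based confidence intervals can be illustrated with a small sketch; the accuracy values and the simple decision rule below are assumptions for illustration and stand in for neither the dynamic CI adjustment nor the Bayesian network described in the paper.

```python
# Minimal sketch: derive a matching bound for plane-to-plane correspondence
# from the (assumed) positional accuracies of both datasets, quantified as a
# confidence interval. Accuracy values and the decision rule are illustrative.
from scipy.stats import norm

def matching_bound(sigma_model, sigma_cloud, confidence=0.95):
    """Half-width of the combined CI for the distance between two observations."""
    combined_sigma = (sigma_model ** 2 + sigma_cloud ** 2) ** 0.5
    z = norm.ppf(0.5 + confidence / 2.0)            # two-sided quantile
    return z * combined_sigma

def is_match(observed_distance, sigma_model, sigma_cloud, confidence=0.95):
    """True if a model facade and a point-cloud plane fall within the bound."""
    return abs(observed_distance) <= matching_bound(sigma_model, sigma_cloud, confidence)

if __name__ == "__main__":
    # E.g. a CityGML facade with 0.30 m stated accuracy vs. an MLS plane at 0.05 m.
    print(round(matching_bound(0.30, 0.05), 3))     # ~0.596 m
    print(is_match(0.45, 0.30, 0.05))               # True  -> candidate match
    print(is_match(0.90, 0.30, 0.05))               # False -> potential change / unmodelled
```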

