INTEGRATION OF SEMANTIC 3D CITY MODELS AND 3D MESH MODELS FOR ACCURACY IMPROVEMENTS OF SOLAR POTENTIAL ANALYSES

Author(s):  
B. Willenborg
M. Pültz
T. H. Kolbe

Abstract. High-resolution 3D mesh models are an inexpensive and increasingly available data source for 3D models of cities and landscapes of high visual quality and rich geometric detail. However, because of their simple data structure, their analytic capabilities are limited. Semantic 3D city models contain rich thematic information and are well suited for analytics due to their deeply structured semantic data model. In this work, an approach for the integration of semantic 3D city models with 3D mesh models is presented. The method is based on geometric distance measures between mesh triangles and semantic surfaces and a region growing approach using plane fitting. The resulting semantic segmentation of mesh triangles is stored in a CityGML data set to enrich the semantic model with an additional detailed geometric representation of its surfaces and a broad range of previously unrepresented features such as technical building installations, balconies, dormers, chimneys, and vegetation. The potential of the approach is demonstrated on the example of a solar potential analysis, whose estimation quality is significantly improved by the mesh integration. The impact of the method is quantified in a case study using open data from the city of Helsinki.
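
To illustrate the kind of step described here, the following sketch assigns each mesh triangle to the nearest semantic surface by a point-to-plane distance and leaves distant triangles unassigned as candidates for unmodelled features. It is a minimal sketch under illustrative assumptions (hypothetical data structures and a 0.5 m threshold), not the authors' implementation; the plane-fitting region growing step is omitted.

```python
# Minimal sketch: distance-based assignment of mesh triangles to semantic surfaces.
# Data structures and the 0.5 m threshold are illustrative assumptions.
import numpy as np

def point_to_plane_distance(p, plane_point, plane_normal):
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(p - plane_point, n))

def assign_triangles(mesh_triangles, semantic_surfaces, max_dist=0.5):
    """mesh_triangles: iterable of (3, 3) vertex arrays;
    semantic_surfaces: list of dicts with a 'point' and a 'normal' per planar surface.
    Returns one surface index per triangle, or None if no surface is close enough."""
    labels = []
    for tri in mesh_triangles:
        centroid = tri.mean(axis=0)
        dists = [point_to_plane_distance(centroid, s["point"], s["normal"])
                 for s in semantic_surfaces]
        best = int(np.argmin(dists))
        labels.append(best if dists[best] <= max_dist else None)
    return labels
```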

Author(s):  
G. S. Floros
C. Ellul
E. Dimopoulou

Abstract. Applications of 3D City Models range from assessing the potential output of solar panels across a city to determining the best location for 5G mobile phone masts. While in the past these models were not readily available, the rapid increase of available data from sources such as Open Data (e.g. OpenStreetMap), National Mapping and Cadastral Agencies and, increasingly, Building Information Models facilitates the implementation of increasingly detailed 3D Models. However, these sources also generate integration challenges relating to heterogeneity, storage, efficient management and visualization. CityGML and IFC (Industry Foundation Classes) are two standards that serve different application domains (GIS and BIM) and are commonly used to store and share 3D information. The ability to convert data from IFC to CityGML in a consistent manner could generate 3D City Models able to represent an entire city while also including detailed geometric and semantic information regarding its elements. However, CityGML and IFC present major differences in their schemas, rendering interoperability a challenging task, particularly when details of a building’s internal structure are considered (Level of Detail 4 in CityGML). The aim of this paper is to investigate interoperability options between the aforementioned standards by converting IFC models to CityGML LoD 4 models. The CityGML models are then semantically enriched, and the proposed methodology is assessed in terms of the models’ geometric validity and capability to preserve semantics.
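
As a rough illustration of what such a conversion involves, the sketch below groups the entities of an IFC file by the CityGML class they could be mapped to, using the ifcopenshell library. The mapping table is a hypothetical example, not the paper's actual mapping, and geometry conversion and semantic enrichment are omitted.

```python
# Minimal sketch of an IFC-to-CityGML class mapping step using ifcopenshell.
# The mapping table is illustrative only.
import ifcopenshell

IFC_TO_CITYGML = {                      # hypothetical correspondences
    "IfcWall":   "bldg:WallSurface",    # interior walls would map to bldg:InteriorWallSurface
    "IfcSlab":   "bldg:FloorSurface",
    "IfcRoof":   "bldg:RoofSurface",
    "IfcDoor":   "bldg:Door",
    "IfcWindow": "bldg:Window",
    "IfcSpace":  "bldg:Room",           # interior structure needed for LoD 4
}

def group_ifc_entities(ifc_path):
    """Group IFC entities by the CityGML feature class they would be converted to."""
    model = ifcopenshell.open(ifc_path)
    grouped = {}
    for ifc_class, citygml_class in IFC_TO_CITYGML.items():
        for entity in model.by_type(ifc_class):
            grouped.setdefault(citygml_class, []).append(entity.GlobalId)
    return grouped
```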


Author(s):  
E. Muñumer Herrero
C. Ellul
J. Morley

Abstract. Popularity and diverse use of 3D city models have increased exponentially in the past few years, providing a more realistic impression and understanding of cities. Often, 3D city models are created by elevating the buildings from a detailed 2D topographic base map and are subsequently used in studies such as solar panel allocation, infrastructure remodelling, antenna installation or even tourist guide applications. However, the large amount of resulting data slows down rendering and visualisation of the 3D models, and can also impact the performance of any analysis. Generalisation enables a reduction in the amount of data; however, the addition of the third dimension makes this process more complex, and the loss of detail resulting from the process will inevitably have an impact on the result of any subsequent analysis.

While a few 3D generalisation algorithms do exist in a research context, these are not available commercially. However, GIS users can create generalised 3D models by simplifying and aggregating the 2D dataset first and then extruding it to the third dimension. This approach offers a rapid generalisation process to create a dataset with which to assess the impact of using generalised data for analysis. Specifically, in this study, the line of sight from a tall building and the sun shadow that it creates are calculated and compared in both the original and the generalised datasets. The results obtained after the generalisation process are significant: both the number of polygons and the number of nodes are reduced by around 83%, and the volume of the 3D buildings is reduced by 14.87%. As expected, the spatial analysis processing times are also reduced. The study demonstrates the impact of generalisation on analytical results, which is particularly relevant in situations where detailed data is not available, and will help to guide the development of future 3D generalisation algorithms. It also highlights some issues with the overall maturity of 3D analysis tools, which could be one factor limiting uptake of 3D GIS.
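
A minimal sketch of this 2D-first workflow, assuming shapely and illustrative tolerance and height values rather than the study's settings, could look as follows.

```python
# Minimal sketch: aggregate touching footprints, simplify their outlines, then extrude.
# Tolerance and heights are illustrative assumptions.
from shapely.geometry import Polygon
from shapely.ops import unary_union

def generalise_footprints(footprints, tolerance=1.0):
    """Aggregate adjacent footprints and simplify the merged outlines (units in metres)."""
    merged = unary_union(footprints)
    polygons = [merged] if isinstance(merged, Polygon) else list(merged.geoms)
    return [p.simplify(tolerance, preserve_topology=True) for p in polygons]

def extrude(footprint, height):
    """Return wall quads and a roof ring for a flat-roofed prism (LoD1-style block)."""
    ring = list(footprint.exterior.coords)          # closed ring: first == last vertex
    walls = [[(x1, y1, 0.0), (x2, y2, 0.0), (x2, y2, height), (x1, y1, height)]
             for (x1, y1), (x2, y2) in zip(ring[:-1], ring[1:])]
    roof = [(x, y, height) for x, y in ring]
    return walls, roof
```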


Author(s):  
A. Uyar
N. N. Ulugtekin

In recent years, 3D models have been created for many cities around the world. Most of these 3D city models have been introduced as purely graphic or geometric models, while the semantic and topographic aspects of the models have been neglected. In order to use 3D city models beyond the task they were originally created for, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), an important concept in 3D modelling, can be defined as the degree of abstraction with which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years, and concludes with a summary and outlook on open problems and future work.


Author(s):  
V. Rautenbach
A. Çöltekin
S. Coetzee

In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants’ orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they ‘travelled’ in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.


Author(s):  
J. Meidow
H. Hammer
M. Pohl
D. Bulatov

Many buildings in 3D city models can be represented by generic models, e.g. boundary representations or polyhedrons, without expressing building-specific knowledge explicitly. Without additional constraints, the bounding faces of these building reconstructions do not feature expected structures such as orthogonality or parallelism. The recognition and enforcement of man-made structures within model instances is one way to enhance 3D city models. Since the reconstructions are derived from uncertain and imprecise data, crisp relations such as orthogonality or parallelism are rarely satisfied exactly. Furthermore, the uncertainty of geometric entities is usually not specified in 3D city models. Therefore, we propose a point sampling which simulates the initial point cloud acquisition by airborne laser scanning and provides estimates for the uncertainties. We present a complete workflow for recognition and enforcement of man-made structures in a given boundary representation. The recognition is performed by hypothesis testing and the enforcement of the detected constraints by a global adjustment of all bounding faces. Since the adjustment changes not only the geometry but also the topology of faces, we obtain improved building models which feature regular structures and a potentially reduced complexity. The feasibility and the usability of the approach are demonstrated with a real data set.
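
As an illustration of the hypothesis-testing idea, the sketch below tests the orthogonality of two estimated face normals using first-order error propagation; the covariance matrices are assumed to come from the simulated point sampling, and the formulation is a simplified stand-in for the authors' adjustment framework.

```python
# Minimal sketch: hypothesis test for orthogonality of two estimated face normals.
# Covariances are assumed given; the statistic uses simple error propagation.
import numpy as np
from scipy.stats import chi2

def orthogonality_not_rejected(n1, n2, cov1, cov2, alpha=0.05):
    """Test H0: n1 . n2 = 0 for two unit normals with covariance matrices cov1, cov2."""
    d = float(n1 @ n2)
    var = float(n2 @ cov1 @ n2 + n1 @ cov2 @ n1)   # variance of the dot product (normals independent)
    test_statistic = d * d / var                   # ~ chi-square with 1 dof under H0
    return test_statistic <= chi2.ppf(1.0 - alpha, df=1)
```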


Author(s):  
Juha Hyyppä
Lingli Zhu
Zhengjun Liu
Harri Kaartinen
Anttoni Jaakkola

Smartphones with larger screens, powerful processors, abundant memory, and an open operating system provide many possibilities for 3D city or photorealistic model applications. 3D city or photorealistic models can be used by users to locate themselves in the 3D world, or they can serve to visualize the surrounding environment once the smartphone has already been located by other means, e.g. using GNSS, and then provide an interface in the form of a 3D model for location-based services. In principle, 3D models can also be used for positioning purposes. For example, matching images exported from the smartphone and registering them in the existing 3D photorealistic world provides the position of the image capture. In that process, the central computer can perform a similar image matching task when users locate themselves interactively in the 3D world. As the benefits of 3D city models are obvious, this chapter demonstrates the technology used to provide photorealistic 3D city models and focuses on 3D data acquisition, the methods available for 3D city modeling, and the development of 3D display technology for smartphone applications. Currently, global geoinformatic data providers, such as Google, Nokia (NAVTEQ), and TomTom (Tele Atlas), are expanding their products from 2D to 3D. This chapter presents a case study of 3D data acquisition, modeling and mapping, and visualization for a smartphone, including an example based on data collected by mobile laser scanning from the Tapiola (Espoo, Finland) test field.
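
A minimal sketch of the image-matching idea, using OpenCV ORB features rather than any method named in the chapter, is given below; in a positioning pipeline the reference image would be a view rendered from the photorealistic 3D model, and the resulting correspondences would feed a pose estimation. Paths and parameters are illustrative.

```python
# Minimal sketch: feature matching between a query photo and a reference view.
import cv2

def match_query_to_reference(query_path, reference_path, keep=50):
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    kp_q, des_q = orb.detectAndCompute(query, None)
    kp_r, des_r = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_q, des_r), key=lambda m: m.distance)
    return matches[:keep]   # best correspondences for subsequent pose estimation
```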


Author(s):  
P. A. Ruben
R. Sileryte
G. Agugiaro

Abstract. Urban mining aims at reusing building materials enclosed in our cities. It therefore requires accurate information on the availability of these materials for each individual building. While recent publications have demonstrated that such information can be obtained using machine learning and data fusion techniques applied to hyperspectral imagery, challenges still persist. One of these is the so-called ‘salt-and-pepper noise’, i.e. the oversensitivity to the presence of several materials within one pixel (e.g. chimneys, roof windows). For the specific case of identifying roof materials, this research demonstrates the potential of 3D city models to identify and filter out such unreliable pixels beforehand. As, from a geometrical point of view, most available 3D city models are too generalized for this purpose (e.g. in CityGML Level of Detail 2), semantic enrichment using a point cloud is proposed to compensate for missing details. So-called deviations are mapped onto a 3D building model by comparing it with a point cloud. A seeded region growing approach based on distance and orientation features is used for the comparison. Finally, the results of a validation carried out for parts of Rotterdam, yielding KHAT values as high as 0.7, are discussed.
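
The sketch below illustrates one plausible form of such a seeded region growing step over a pre-computed point neighbourhood graph; the thresholds, feature arrays and growth criterion are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch: grow a region of points deviating from the associated model surface
# in distance or normal orientation. All inputs and thresholds are illustrative.
import numpy as np
from collections import deque

def grow_deviation(seed, neighbours, dist_to_model, normals, model_normal,
                   max_dist=0.2, max_angle_deg=20.0):
    """neighbours: adjacency list of point indices; dist_to_model, normals: per-point arrays."""
    cos_thresh = np.cos(np.radians(max_angle_deg))
    region, queue, visited = set(), deque([seed]), {seed}
    while queue:
        i = queue.popleft()
        deviates = (dist_to_model[i] > max_dist or
                    abs(float(np.dot(normals[i], model_normal))) < cos_thresh)
        if deviates:
            region.add(i)
            for j in neighbours[i]:
                if j not in visited:
                    visited.add(j)
                    queue.append(j)
    return region   # point indices marking a deviation to be mapped onto the model
```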


Author(s):  
O. Wysocki
B. Schwab
L. Hoegner
T. H. Kolbe
U. Stilla

Abstract. Nowadays, the number of connected devices providing unstructured data is rising rapidly. These devices acquire data with a temporal and spatial resolution at an unprecedented level, creating an influx of geoinformation which, however, lacks semantic information. Simultaneously, structured datasets like semantic 3D city models are widely available and assure rich semantics and high global accuracy, but are represented by rather coarse geometries. While the mentioned downsides curb the usability of these data types for today’s applications, the fusion of both should maximize their potential. Since testing and developing automated driving functions stands at the forefront of these challenges, we propose a pipeline fusing structured (CityGML and HD Map datasets) and unstructured datasets (MLS point clouds) to maximize their advantages in the domain of automatic 3D road space model reconstruction. The pipeline is a parameterized end-to-end solution that integrates segmentation, reconstruction, and modeling tasks while ensuring the geometric and semantic validity of the models. Firstly, the segmentation of point clouds is supported by the transfer of semantics from the structured to the unstructured dataset. The distinction between horizontal- and vertical-like point cloud subsets triggers either further segmentation or immediate refinement, while only models adequately depicted by the point clouds are retained. Then, based on the classified and filtered point clouds, the input 3D model geometries are refined. Building upon the refinement, the semantic enrichment of the 3D models is presented. The deployment of a simulation engine for automated driving research and a city model database tool underlines the versatility of possible application areas.
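
A minimal sketch of the horizontal-/vertical-like split, based on the angle between per-point normals and the vertical axis, is shown below; the tolerance value is an illustrative assumption, not a parameter from the paper.

```python
# Minimal sketch: split a point cloud subset into horizontal- and vertical-like points.
import numpy as np

def split_by_orientation(points, normals, tol_deg=30.0):
    """points, normals: (N, 3) arrays. Returns (horizontal_like, vertical_like) subsets."""
    up = np.array([0.0, 0.0, 1.0])
    cos_to_up = np.abs(normals @ up)                        # |cos| of angle to the vertical axis
    horizontal = cos_to_up >= np.cos(np.radians(tol_deg))   # near-horizontal surfaces (e.g. road)
    return points[horizontal], points[~horizontal]
```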


Author(s):  
Zhen Li

Applications of 3D mesh model coding are first presented in this chapter. We then survey typical existing algorithms in the area of compression of static and dynamic 3D meshes. In an introductory sub-section we cover basic concepts of 3D mesh models, including data representations, model formats, data acquisition and 3D display technologies. Furthermore, we introduce several typical 3D mesh formats and give an overview of the coding principles of mesh compression algorithms in general, followed by a description of the quantitative measures used for 3D mesh compression. Then we describe some typical and state-of-the-art algorithms in 3D mesh compression. Compression and streaming of gigantic 3D models receive special attention. Finally, the MPEG-4 3D mesh model coding standard is briefly described. We conclude this chapter with a discussion providing an overall picture of developments in the mesh coding area and pointing out directions for future research.
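
As a small illustration of a principle shared by many of the surveyed schemes, the sketch below performs uniform vertex quantization; the bit depth is an illustrative choice, not a value from the chapter.

```python
# Minimal sketch: uniform quantization of mesh vertex coordinates (lossy).
import numpy as np

def quantize_vertices(vertices, bits=12):
    """vertices: (N, 3) float array. Returns integer grid indices and the reconstruction."""
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    scale = (2 ** bits - 1) / np.maximum(vmax - vmin, 1e-12)
    quantized = np.round((vertices - vmin) * scale).astype(np.uint16)  # encoder side
    reconstructed = quantized / scale + vmin                           # decoder side
    return quantized, reconstructed
```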


2021
Vol 13 (11)
pp. 6028
Author(s):  
Carlos Beltran-Velamazan
Marta Monzón-Chavarrías
Belinda López-Mesa

3D city models are a useful tool to analyze the solar potential of neighborhoods and cities. These models are built from building footprints and elevation measurements. Footprints are widely available, but elevation datasets remain expensive and time-consuming to acquire. Our hypothesis is that GIS cadastral data can be used to build a 3D model automatically, so that complete 3D city models can be generated in a short time from already available data. We propose a method for the automatic construction of 3D models of cities and neighborhoods from 2D cadastral data and study their usefulness for solar analysis by comparing the results with those from a hand-built model. The results show that the accuracy of the automatic method in evaluating solar access in pedestrian areas and solar potential on rooftops is close to that of the hand-built model, with slight differences of 3.4% and 2.2%, respectively. On the other hand, the time saved with the automatic models is significant. A neighborhood of 400,000 m² can be built in 30 min, 50 times faster than by hand, and an entire city of 967 km² can be built in 8.5 h.
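
A minimal sketch of the core extrusion idea, assuming a hypothetical cadastral attribute name and an average storey height, is given below; the paper's actual attribute handling and roof modelling are not reproduced.

```python
# Minimal sketch: derive building heights from cadastral floor counts and pair them
# with 2D footprints to form a simple block model. Attribute name and storey height
# are illustrative assumptions.
ASSUMED_STOREY_HEIGHT_M = 3.0

def building_height(cadastral_record):
    """Estimate height from the number of floors above ground stored in the cadastre."""
    return cadastral_record["floors_above_ground"] * ASSUMED_STOREY_HEIGHT_M

def extrude_city(footprints, cadastral_records):
    """Return (footprint, height) pairs forming an LoD1-style block model."""
    return [(footprint, building_height(record))
            for footprint, record in zip(footprints, cadastral_records)]
```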

