THE IMPACT OF LEVEL OF DETAIL IN 3D CITY MODELS FOR CFD-BASED WIND FLOW SIMULATIONS

Author(s):  
C. García-Sánchez ◽  
S. Vitalis ◽  
I. Paden ◽  
J. Stoter

Abstract. Climate change and urbanization rates are transforming urban environments, making the use of 3D city models in computational fluid dynamics (CFD) a fundamental ingredient for evaluating urban layouts before construction. However, the geometries currently used in CFD simulations tend to be built by CFD experts to test specific cases, and are often oversimplified due to a lack of information or in order to reduce complexity. In this work we explore the effects of oversimplifying geometries by comparing wind simulations over geometries at different levels of detail. We use automatically built semantic 3D city models and adjust them for use in CFD. For the first test, we explore wind simulations within a troublesome section of the TU Delft campus, the passage next to the EWI building (the tallest building in our domain), where the 3D city model variants show how differences in geometry and surface properties affect local wind conditions. Finally, we analyze what these differences in velocity magnitude could mean for practitioners in terms of pedestrian wind comfort.
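The practical meaning of a velocity difference between model variants can be illustrated with a simple exceedance check. The threshold, samples, and criterion below are illustrative assumptions, not the paper's method; practice uses standards such as NEN 8100 or the Lawson criteria.

```python
def exceedance_fraction(pedestrian_speeds, threshold=5.0):
    """Fraction of sampled hours in which the pedestrian-level wind speed
    (m/s) exceeds a comfort threshold. The 5 m/s threshold is an
    illustrative assumption, not taken from the paper."""
    return sum(s > threshold for s in pedestrian_speeds) / len(pedestrian_speeds)

# Hypothetical samples (m/s) from two geometry variants of the same passage;
# modest speed differences can land on opposite sides of a comfort boundary:
lod1_speeds = [4.2, 5.1, 6.3, 3.8]
lod2_speeds = [3.6, 4.4, 5.4, 3.1]
```

A practitioner would map such exceedance fractions onto a comfort class, so geometry-induced velocity differences translate directly into different comfort verdicts.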

Author(s):  
C. Ellul ◽  
M. Adjrad ◽  
P. Groves

There is an increasing demand for highly accurate positioning information in urban areas, to support applications such as people and vehicle tracking, real-time air quality detection and navigation. However, systems such as GPS typically perform poorly in dense urban areas. A number of authors have made use of 3D city models to enhance accuracy, obtaining good results, but to date the influence of the quality of the 3D city model on these results has not been tested. This paper addresses the following question: how does the quality of a 3D dataset – in particular its variation in height, level of generalization, completeness and currency – impact the results obtained for the preliminary calculations in a process known as Shadow Matching, which takes into account not only where satellite signals are visible on the street but also where they are predicted to be absent? We describe initial simulations to address this issue, examining the variation in elevation angle – i.e. the angle above which the satellite is visible – for three 3D city models in a test area in London, and note that even within one dataset, using different available height values could cause a difference in elevation angle of up to 29°. Missing or extra buildings result in an elevation variation of around 85°. Variations such as these can significantly influence the predicted satellite visibility, which will then not correspond to that experienced on the ground, reducing the accuracy of the resulting Shadow Matching process.
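How a building-height error propagates into the elevation-angle cut-off can be sketched with basic trigonometry. This is a minimal stand-in, assuming a single vertical facade at a known distance; real shadow-matching implementations trace each satellite azimuth against the full 3D model.

```python
import math

def boundary_elevation_angle(building_height, receiver_height, distance):
    """Elevation angle (degrees) above which the sky is visible past a
    building edge, seen from a receiver near street level. Hypothetical
    simplification of the visibility test used in shadow matching."""
    return math.degrees(math.atan2(building_height - receiver_height, distance))

# A 30 m height error on a nearby facade shifts the cut-off angle markedly:
modelled = boundary_elevation_angle(50.0, 1.5, 10.0)   # modelled height
erroneous = boundary_elevation_angle(20.0, 1.5, 10.0)  # erroneous height
```

Even this toy geometry shows errors of tens of degrees for nearby facades, consistent with the abstract's observation that height and completeness errors can flip the predicted visibility of individual satellites.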


Author(s):  
S. Vitalis ◽  
K. Arroyo Ohori ◽  
J. Stoter

<p><strong>Abstract.</strong> 3D city models are being increasingly adopted by organisations in order to serve application needs related to urban areas. In order to fulfil the different requirements of various applications, the concept of Level of Detail (LoD) has been incorporated in 3D city model specifications, such as CityGML. Therefore, datasets of different LoDs are being created for the same areas by several organisations for their own use cases. Meanwhile, as time progresses, newer versions of existing 3D city models are being created by vendors. Nevertheless, the existing mechanisms for representing multi-LoD data have not been adopted by users, and there has been little effort on the implementation of a mechanism to store multiple revisions of a city model. This results in redundancy of information and the existence of multiple datasets that are inconsistent with each other. Alternatively, a representation of time or scale as an additional dimension to the three spatial ones has been proposed as a better way to store multiple versions of datasets while retaining information about the correspondences between features across datasets. In this paper, we propose a conceptual framework with initial considerations for the implementation of a 4D representation of two states of a 3D city model. This framework defines both the data structure of such an approach and the methodology according to which two existing 3D city models can be compared, associated and stored with their correspondences in 4D. The methodology is defined as six individual steps, each with its own requirements and goals that have to be addressed. We also provide some examples and considerations for the way those steps can be implemented.</p>
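The "compare and associate" steps of such a framework can be sketched as follows. The record layout, the assumption of stable feature identifiers, and the correspondence kinds are all illustrative; the paper defines a richer 4D data structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """A link between a feature in version A and version B of a city model
    (illustrative record; not the paper's actual schema)."""
    feature_a: str
    feature_b: str
    kind: str  # e.g. "unchanged", "modified", "split", "merged"

def match_by_id(ids_a, ids_b):
    """Naive association step: pair features sharing an identifier and
    report additions/removals, assuming stable IDs exist across versions."""
    shared = sorted(set(ids_a) & set(ids_b))
    added = sorted(set(ids_b) - set(ids_a))
    removed = sorted(set(ids_a) - set(ids_b))
    links = [Correspondence(i, i, "unchanged") for i in shared]
    return links, added, removed
```

In a real 4D store, the resulting links (not the duplicated geometry) become the extra-dimensional edges connecting the two model states, which is what avoids the redundancy the abstract describes.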


2019 ◽  
Vol 8 (11) ◽  
pp. 504
Author(s):  
Siyi Li ◽  
Wenjing Li ◽  
Zhiyong Lin ◽  
Shengjie Yi

A 3D city model is an intuitive tool for describing cities. Currently, level-of-detail (LOD) technology is used to meet different visual demands for 3D city models by weighing rendering efficiency against the details of the model. However, when the visual demands change, a "popping" phenomenon appears during transitions between different LOD models. We mitigated this popping phenomenon by improving the data structure for 3D city building models and combining it with a facet shift algorithm based on minimal features. Unlike generating a finite set of LOD models in advance, the proposed continuous LOD topology data structure is able to store the changes between different LOD models. By making use of this change information, continuous LOD transformation becomes possible. The experimental results showed that the continuous LOD transformation based on the proposed data structure worked well, and the improved data structure also performed well in terms of memory occupation.
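The idea of storing changes rather than discrete models can be sketched as a change log keyed by a detail parameter. The record layout and replay logic below are assumptions for illustration, not the paper's actual structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FacetChange:
    """One recorded mesh modification (hypothetical record layout; the
    paper's structure stores facet-shift operations based on minimal features)."""
    detail: float     # detail level at and below which the change applies
    facet_id: int
    operation: str    # e.g. "merge", "remove", "shift"

class ContinuousLodModel:
    """Keeps the finest model plus a change log, so any intermediate LOD is
    produced by replaying changes instead of loading a pre-built model."""
    def __init__(self):
        self.changes = []

    def record(self, change):
        self.changes.append(change)

    def changes_for(self, target_detail):
        """Changes to apply when simplifying from full detail (1.0) down to
        target_detail; replaying them one by one avoids the popping jump."""
        return sorted((c for c in self.changes if c.detail >= target_detail),
                      key=lambda c: -c.detail)
```

Because each replayed change touches only a few facets, the visual transition is a sequence of small local updates rather than one wholesale model swap.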


Author(s):  
V. Rautenbach ◽  
A. Çöltekin ◽  
S. Coetzee

In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants’ orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they ‘travelled’ in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.


2014 ◽  
Vol 3 (3) ◽  
pp. 35-49 ◽  
Author(s):  
Jan Klimke ◽  
Benjamin Hagedorn ◽  
Jürgen Döllner

Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. This paper introduces a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
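The thin-client side of such an approach reduces to requesting pre-rendered image tiles by key. The URL scheme, parameter names, and viewing directions below are illustrative assumptions, not the paper's actual service interface.

```python
def oblique_tile_url(base_url, direction, zoom, x, y):
    """Build the request URL for a pre-rendered oblique image tile.
    Scheme is a hypothetical example: the client never touches meshes or
    textures, only addressed raster tiles, which is what decouples model
    complexity from transfer complexity."""
    assert direction in ("north", "east", "south", "west")
    return f"{base_url}/{direction}/{zoom}/{x}/{y}.png"

url = oblique_tile_url("https://tiles.example", "north", 3, 5, 2)
```

A map widget then assembles the visible tiles into the interactive view, exactly as 2D slippy maps do, while all 3D rendering stays on the server.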


2018 ◽  
Vol 7 (9) ◽  
pp. 339 ◽  
Author(s):  
Mehmet Buyukdemircioglu ◽  
Sultan Kocaman ◽  
Umit Isikdag

3D city models have become crucial for better city management, and can be used for various purposes such as disaster management, navigation, solar potential computation and planning simulations. 3D city models are not only visual models; they can also be used for thematic queries and analyses with the help of semantic data. The models can be produced using different data sources and methods. In this study, vector basemaps and large-format aerial images, which are regularly produced in accordance with the large-scale map production regulations in Turkey, have been used to develop a workflow for semi-automatic 3D city model generation. The aim of this study is to propose a procedure for the production of 3D city models from existing aerial photogrammetric datasets without additional data acquisition efforts and/or costly manual editing. To demonstrate the methodology, a 3D city model has been generated with semi-automatic methods at LoD2 (Level of Detail 2) of CityGML (City Geography Markup Language) using the data of the study area over Cesme Town of Izmir Province, Turkey. The generated model is automatically textured, and additional developments have been performed for 3D visualization of the model on the web. The problems encountered throughout the study and approaches to solve them are presented here. Consequently, the approach introduced in this study yields promising results for low-cost 3D city model production with the data at hand.


2020 ◽  
Vol 9 (8) ◽  
pp. 476 ◽  
Author(s):  
Dušan Jovanović ◽  
Stevan Milovanov ◽  
Igor Ruskovski ◽  
Miro Govedarica ◽  
Dubravka Sladić ◽  
...  

Smart Cities data and applications need to replicate, as faithfully as possible, the state of the city and to simulate possible alternative futures. In order to do this, the modelling of the city should cover all aspects of the city that are relevant to the problems that require smart solutions. In this context, 2D and 3D spatial data play a key role, in particular 3D city models. One of the methods for collecting data that can be used for developing such 3D city models is Light Detection and Ranging (LiDAR), a technology that has provided opportunities to generate large-scale 3D city models at relatively low cost. The collected data is further processed to obtain fully developed photorealistic virtual 3D city models. The goal of this research is to develop a virtual 3D city model based on airborne LiDAR surveying and to analyze its applicability toward Smart Cities applications. In this paper, we present a workflow that goes from data collection by LiDAR, through extract, transform, load (ETL) transformations and data processing, to the development of a 3D virtual city model, and finally discuss its potential future usage scenarios in various fields of application such as modern ICT-based urban planning and 3D cadaster. The results are presented in a case study of the campus area of the University of Novi Sad.


Author(s):  
K. Kumar ◽  
A. Labetski ◽  
H. Ledoux ◽  
J. Stoter

<p><strong>Abstract.</strong> The Level of Detail (LOD) concept in CityGML 2.0 is meant to differentiate the multiple representations of semantic 3D city models. Despite the popularity and general acceptance of the concept by practitioners and stakeholders in 3D city modelling, there are still some limitations. While the CityGML LOD concept is well defined for buildings, bridges, tunnels, and to some extent for roads, there is no clear definition of LODs for terrain/relief, vegetation, land use, water bodies, and generic city objects in CityGML. In addition, extensive research has been done to refine the LOD concept of CityGML for buildings, but little is known about the requirements and possibilities for modelling city object types such as terrain at different LODs. To address this gap, we focus in this paper on the terrain of a 3D city model and propose a framework for modelling terrains at different LODs in CityGML. As a proof of concept of our framework, we implemented a software prototype to generate terrain models with other city features integrated (e.g. buildings) at different LODs in CityGML.</p>
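One elementary way to derive a coarser terrain LOD from a raster height grid is decimation. This is an assumed, minimal stand-in for what a terrain-generalisation prototype might do; TIN-based methods with error bounds are the more realistic choice.

```python
def downsample_grid(heights, factor):
    """Reduce a raster terrain to a coarser LOD by keeping every
    `factor`-th cell in each direction (nearest-neighbour decimation).
    Illustrative only; it ignores the vertical error introduced, which a
    proper LOD definition for terrain would have to bound."""
    return [row[::factor] for row in heights[::factor]]

fine = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
coarse = downsample_grid(fine, 2)
```

A framework like the one proposed would pin down exactly which such operations, error tolerances, and feature integrations (e.g. building footprints stamped into the terrain) characterise each terrain LOD.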


Author(s):  
R. Piepereit ◽  
A. Beuster ◽  
M. von der Gruen ◽  
U. Voß ◽  
M. Pries ◽  
...  

<p><strong>Abstract.</strong> Virtual reality (VR) technologies are used more and more in product development processes and are emerging in urban planning systems as well. They help to visualize large amounts of data in a self-explanatory way and improve people’s interpretation of results. In this paper we demonstrate the process of visualizing a city model together with wind simulation results in a collaborative VR system. In order to make this kind of visualization possible, a considerable amount of preliminary work is necessary: healing and simplification of building models, conversion of these data into an appropriate CAD format, and numerical simulation of the wind flow around the buildings. The data obtained from these procedures are visualized in a collaborative VR system. In our approach, CityGML models in LoD (Level of Detail) 1, 2 and 3 can be used as input. They are converted into the STEP format, commonly used in CAD for simulation and representation. For this publication we use an exemplary LoD1 model of the Stöckach district of Stuttgart. After preprocessing the model, the results are combined with those of an air flow simulation and afterwards depicted in a VR system with an HTC Vive as well as in a CAVE and on a Powerwall. This provides researchers, city planners and technicians with the means to flexibly and interactively exchange simulation results in a virtual environment.</p>


Author(s):  
E. Muñumer Herrero ◽  
C. Ellul ◽  
J. Morley

<p><strong>Abstract.</strong> The popularity and diverse use of 3D city models has increased exponentially in the past few years, providing a more realistic impression and understanding of cities. Often, 3D city models are created by elevating the buildings from a detailed 2D topographic base map and are subsequently used in studies such as solar panel allocation, infrastructure remodelling, antenna installations or even tourist guide applications. However, the large amount of resulting data slows down rendering and visualisation of the 3D models, and can also impact the performance of any analysis. Generalisation enables a reduction in the amount of data – however, the addition of the third dimension makes this process more complex, and the loss of detail resulting from the process will inevitably have an impact on the result of any subsequent analysis.</p><p>While a few 3D generalisation algorithms do exist in a research context, these are not available commercially. However, GIS users can create generalised 3D models by simplifying and aggregating the 2D dataset first and then extruding it to the third dimension. This approach offers a rapid generalisation process to create a dataset with which to investigate the impact of using generalised data for analysis. Specifically, in this study, the line of sight from a tall building and the sun shadow that it creates are calculated and compared, in both original and generalised datasets. The results obtained after the generalisation process are significant: both the number of polygons and the number of nodes are reduced by around 83%, and the volume of the 3D buildings is reduced by 14.87%. As expected, the spatial analysis processing times are also reduced. The study demonstrates the impact of generalisation on analytical results – which is particularly relevant in situations where detailed data is not available – and will help to guide the development of future 3D generalisation algorithms. It also highlights some issues with the overall maturity of 3D analysis tools, which could be one factor limiting the uptake of 3D GIS.</p>
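The simplify-then-extrude route can be sketched in a few lines. The collinear-vertex removal below is a minimal stand-in for the 2D generalisation step (production workflows use full simplification and aggregation operators), and the extrusion is a plain prism.

```python
def simplify_footprint(coords, eps=1e-9):
    """Drop vertices that are collinear with their neighbours; a minimal
    stand-in for 2D footprint simplification. `coords` is an open ring
    (no repeated closing vertex)."""
    out = []
    n = len(coords)
    for i in range(n):
        ax, ay = coords[i - 1]
        bx, by = coords[i]
        cx, cy = coords[(i + 1) % n]
        # Cross product of the two incident edges; zero means collinear.
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if abs(cross) > eps:
            out.append(coords[i])
    return out

def extruded_volume(coords, height):
    """Extrude the 2D footprint to a prism: shoelace area times height."""
    area = 0.0
    for i in range(len(coords)):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % len(coords)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0 * height

# A 2x2 square digitised with redundant edge midpoints (8 vertices):
footprint = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
simplified = simplify_footprint(footprint)
```

Comparing node counts and extruded volumes before and after simplification gives exactly the kind of reduction metrics the study reports; here the node count halves while the volume is unchanged, whereas real simplification (removing non-collinear detail) also changes the volume.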

