HIGH-LEVEL-OF-DETAIL SEMANTIC 3D GIS FOR RISK AND DAMAGE REPRESENTATION OF ARCHITECTURAL HERITAGE

Author(s): E. Colucci, F. Noardo, F. Matrone, A. Spanò, A. Lingua

<p><strong>Abstract.</strong> The need to share information about architectural heritage effectively after a disaster event, in order to foster its preservation, requires a common language between the actors and stakeholders involved. A database that connects the representation of architectural heritage with data useful for hazard and risk analysis can therefore be a powerful instrument. This paper outlines a methodology to represent 3D models of architectural heritage according to existing standard data models and to relate their geometric features to the damage mechanisms that could occur after an earthquake. Among the existing standards for representing cartographic, cultural-heritage and hazard/risk information, INSPIRE, CityGML, the UNESCO and CIDOC-CRM models, the CIDOC-CRM extension MONDIS, and the Getty Institute vocabularies (compliant with CIDOC-CRM) have been taken into account. An INSPIRE extension is proposed that increases the level of detail (LoD) of the representation and improves the description of heritage buildings by adding macro-element and element feature types connected with the damage mechanisms identified in structural studies. The suggested method allows 3D information about architectural heritage to be archived in a multi-scale database at a very high level of detail, and can help structural engineers and conservator-restorers prevent further damage by identifying targeted actions.</p>
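The link between feature types and damage mechanisms described above can be sketched as a small data model. This is a hypothetical illustration: the class names, attributes, and mechanism codes are illustrative stand-ins, not the actual INSPIRE extension schema.

```python
from dataclasses import dataclass, field

@dataclass
class DamageMechanism:
    code: str          # illustrative identifier, e.g. from a structural catalogue
    description: str

@dataclass
class MacroElement:
    name: str          # e.g. "facade", "bell tower" (illustrative)
    lod: int           # level of detail of its 3D geometric representation
    mechanisms: list = field(default_factory=list)

    def link(self, mech: DamageMechanism) -> None:
        """Relate this macro-element to a damage mechanism it may exhibit."""
        self.mechanisms.append(mech)

facade = MacroElement(name="facade", lod=3)
facade.link(DamageMechanism("M1", "simple overturning of the facade"))
print([m.code for m in facade.mechanisms])
```

Storing these relations alongside the 3D geometry is what lets a multi-scale database answer queries such as "which elements of this building are exposed to mechanism M1".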

2010, Vol 2 (5)
Author(s): Nuno Rodrigues, Luís Magalhães, João Paulo Moura, Alan Chalmers, Filipe Santos,
...  

The manual creation of virtual environments is a demanding and costly task. With the increasing demand for more complex models in different areas, such as the design of virtual worlds, video games and computer-animated movies, the need to generate them automatically has become more pressing than ever. This paper presents a framework for the automatic generation of houses based on architectural rules. The approach has several innovative features, including the implementation of architectural rules, and produces both 2D floor plans and complete 3D models with a high level of detail in just a few seconds. To evaluate the framework, two different applications were developed and the output models were tested for different fields of application (e.g. virtual worlds). The results provide evidence that the proposed framework may lead to the development of specific applications that produce accurate 3D models of houses representing different realities (e.g. civilizations, epochs, etc.).
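The core of rule-based floor-plan generation can be sketched as a recursive subdivision: a footprint rectangle is split until each region is small enough to be a room. The splitting rule below is a simplified stand-in for the architectural rules the framework implements.

```python
import random

def split_rooms(rect, max_area=20.0, rng=None):
    """Recursively split rect = (x, y, w, h) into rooms of at most max_area.

    A toy stand-in for rule-based floor-plan generation: always split the
    longer side, at a point between 30% and 70% of its length.
    """
    rng = rng or random.Random(42)
    x, y, w, h = rect
    if w * h <= max_area:
        return [rect]
    if w >= h:
        cut = w * rng.uniform(0.3, 0.7)
        return (split_rooms((x, y, cut, h), max_area, rng)
                + split_rooms((x + cut, y, w - cut, h), max_area, rng))
    cut = h * rng.uniform(0.3, 0.7)
    return (split_rooms((x, y, w, cut), max_area, rng)
            + split_rooms((x, y + cut, w, h - cut), max_area, rng))

rooms = split_rooms((0.0, 0.0, 10.0, 8.0))
print(len(rooms), "rooms")
```

A real generator would additionally label the regions (kitchen, bedroom, ...) and extrude walls to obtain the 3D model; the subdivision step is where the rules enter.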


2013, Vol 4 (9), pp. 70
Author(s): Iñaki Prieto, José Luis Izkara, Aitziber Egusquiza

<p>Georeferenced 3D models are an increasingly accepted solution for storing and displaying information at urban scale. CityGML, the standard data model for the representation, storage and exchange of 3D city models, is a very attractive solution because it combines 3D geometric and semantic information in a single data model. In this paper we present an approach to visualizing semantic and 3D information of historical centers using open standards. Three client applications are also presented, targeting different agents with different needs, with the characteristic that all the information is obtained from a single extended CityGML data model.</p>
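The pattern of pairing semantics with buildings in CityGML can be sketched with a minimal, hand-written XML fragment. This is an assumption-laden simplification: real CityGML files carry the full set of namespaces plus geometry, and only the building namespace and two standard properties are shown here.

```python
import xml.etree.ElementTree as ET

# Minimal CityGML-like fragment (illustrative, not a valid full document).
GML = """
<CityModel xmlns:bldg="http://www.opengis.net/citygml/building/2.0">
  <bldg:Building>
    <bldg:function>historical</bldg:function>
    <bldg:yearOfConstruction>1750</bldg:yearOfConstruction>
  </bldg:Building>
</CityModel>
"""

ns = {"bldg": "http://www.opengis.net/citygml/building/2.0"}
root = ET.fromstring(GML)
for b in root.findall("bldg:Building", ns):
    # Semantic attributes live alongside (not apart from) the 3D geometry.
    print(b.find("bldg:function", ns).text,
          b.find("bldg:yearOfConstruction", ns).text)
```

Because semantics and geometry share one model, each client application can query the same source and render only the attributes relevant to its users.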


2006, Vol 27 (4), pp. 218-228
Author(s): Paul Rodway, Karen Gillies, Astrid Schepman

This study examined whether individual differences in the vividness of visual imagery influenced performance on a novel long-term change detection task. Participants were presented with a sequence of pictures, with each picture and its title displayed for 17 s, and then presented with changed or unchanged versions of those pictures and asked to detect whether the picture had been changed. Cuing the retrieval of the picture's image, by presenting the picture's title before the arrival of the changed picture, facilitated change detection accuracy. This suggests that the retrieval of the picture's representation immunizes it against overwriting by the arrival of the changed picture. The high and low vividness participants did not differ in overall levels of change detection accuracy. However, in replication of Gur and Hilgard (1975), high vividness participants were significantly more accurate at detecting salient changes to pictures compared to low vividness participants. The results suggest that vivid images are not characterised by a high level of detail and that vivid imagery enhances memory for the salient aspects of a scene but not all of the details of a scene. Possible causes of this difference, and how they may lead to an understanding of individual differences in change detection, are considered.


Sensors, 2021, Vol 21 (4), pp. 1200
Author(s): Franziska Schollemann, Carina Barbosa Pereira, Stefanie Rosenhain, Andreas Follmann, Felix Gremse,

Even though animal trials are a controversial topic, they provide knowledge about diseases and the course of infections in a medical context. To refine the detection of abnormalities that can cause pain and stress to the animal as early as possible, new processes must be developed. Due to its noninvasive nature, thermal imaging is increasingly used for severity assessment in animal-based research. Within a multimodal approach, thermal images combined with anatomical information could be used to simulate the inner temperature profile, thereby allowing the detection of deep-seated infections. This paper presents the generation of anatomical thermal 3D models, forming the underlying multimodal model in this simulation. These models combine anatomical 3D information based on computed tomography (CT) data with a registered thermal shell measured with infrared thermography. The process of generating these models consists of data acquisition (both thermal images and CT), camera calibration, image processing methods, and structure from motion (SfM), among others. Anatomical thermal 3D models were successfully generated using three anesthetized mice. Owing to improvements in the image processing, the procedure was also successful in areas with few features, which increases its transferability. The result of this multimodal registration in 3D space can be viewed and analyzed within a visualization tool. Individual CT slices can be analyzed axially, sagittally, and coronally with the corresponding superficial skin temperature distribution. This is an important and successfully implemented milestone on the way to simulating the internal temperature profile. Using this temperature profile, deep-seated infections and inflammation can be detected in order to reduce animal suffering.
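The final mapping step of such a registration can be sketched as follows: once the thermal shell and the CT surface share a coordinate frame, each CT vertex takes the temperature of its nearest shell point. The point sets and temperatures below are synthetic stand-ins, and the brute-force nearest-neighbour search is a simplification of what a real pipeline would use.

```python
import numpy as np

def map_temperatures(ct_vertices, shell_points, shell_temps):
    """Assign each CT surface vertex the temperature of its nearest
    thermal-shell point (brute-force; a KD-tree would be used at scale)."""
    d = np.linalg.norm(
        ct_vertices[:, None, :] - shell_points[None, :, :], axis=2)
    return shell_temps[np.argmin(d, axis=1)]

# Synthetic example: two CT vertices, two measured shell points.
ct = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
shell = np.array([[0.1, 0.0, 0.0], [0.9, 0.1, 0.0]])
temps = np.array([36.5, 34.2])     # degrees Celsius
print(map_temperatures(ct, shell, temps))  # each vertex gets its nearest temp
```

In the paper's setting this per-vertex temperature field is what the visualization tool overlays on axial, sagittal, and coronal CT slices.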


2021, Vol 7 (1), pp. 540-555
Author(s): Hayley L. Mickleburgh, Liv Nilsson Stutz, Harry Fokkens

Abstract The reconstruction of past mortuary rituals and practices increasingly incorporates analysis of the taphonomic history of the grave and buried body, using the framework provided by archaeothanatology. Archaeothanatological analysis relies on interpretation of the three-dimensional (3D) relationship of bones within the grave and traditionally depends on elaborate written descriptions and two-dimensional (2D) images of the remains during excavation to capture this spatial information. With the rapid development of inexpensive 3D tools, digital replicas (3D models) are now commonly available to preserve 3D information on human burials during excavation. We describe a procedure, developed using a test case, to enhance archaeothanatological analysis and improve post-excavation analysis of human burials. Beyond preservation of static spatial information, 3D visualization techniques can be used in archaeothanatology to reconstruct the spatial displacement of bones over time, from deposition of the body to excavation of the skeletonized remains. The purpose of the procedure is to produce 3D simulations to visualize and test archaeothanatological hypotheses, thereby augmenting traditional archaeothanatological analysis. We illustrate our approach with the reconstruction of mortuary practices and burial taphonomy of a Bell Beaker burial from the site of Oostwoud-Tuithoorn, West-Frisia, the Netherlands. This case study was selected as the test case because of its relatively complete context information. The test case shows the potential for application of the procedure to older 2D field documentation, even when the amount and detail of documentation is less than ideal.


2018, Vol 2018, pp. 1-11
Author(s): Yea Som Lee, Bong-Soo Sohn

3D maps such as Google Earth and Apple Maps (3D mode), in which users can see and navigate in 3D models of real worlds, are widely available in current mobile and desktop environments. Users usually use a monitor for display and a keyboard/mouse for interaction. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces decrease the level of immersion because the manipulation method does not resemble actual actions in reality, which often makes the traditional interface method inappropriate for the navigation of 3D maps in virtual environments. Motivated by this, we design immersive gesture interfaces for the navigation of 3D maps which are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for obtaining a high level of immersion and fun in HMD-based virtual environments.
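A simple depth-camera gesture recognizer of the kind described can be sketched as threshold rules on tracked joint positions. The joint names, thresholds, and gesture labels below are illustrative assumptions, not the paper's actual algorithm.

```python
def classify_gesture(joints, threshold=0.25):
    """Classify a navigation gesture from 3D joint positions (x, y, z),
    as a Kinect-style skeleton tracker would supply them.

    Toy rules: hand pushed toward the sensor moves forward; hand displaced
    sideways from the shoulder turns. All thresholds are illustrative.
    """
    dx = joints["right_hand"][0] - joints["right_shoulder"][0]
    dz = joints["right_hand"][2] - joints["right_shoulder"][2]
    if dz < -threshold:      # hand extended toward the camera
        return "move_forward"
    if dx > threshold:
        return "turn_right"
    if dx < -threshold:
        return "turn_left"
    return "idle"

# One frame of (synthetic) tracked joints, coordinates in meters.
frame = {"right_shoulder": (0.0, 1.4, 2.0), "right_hand": (0.1, 1.3, 1.6)}
print(classify_gesture(frame))
```

Per-frame rules like these are cheap enough to run in real time; a production recognizer would additionally smooth over frames to suppress jitter.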


Author(s): G. S. Floros, C. Ellul, E. Dimopoulou

<p><strong>Abstract.</strong> Applications of 3D City Models range from assessing the potential output of solar panels across a city to determining the best location for 5G mobile phone masts. While in the past these models were not readily available, the rapid increase of available data from sources such as Open Data (e.g. OpenStreetMap), National Mapping and Cadastral Agencies and increasingly Building Information Models facilitates the implementation of increasingly detailed 3D Models. However, these sources also generate integration challenges relating to heterogeneity, storage and efficient management and visualization. CityGML and IFC (Industry Foundation Classes) are two standards that serve different application domains (GIS and BIM) and are commonly used to store and share 3D information. The ability to convert data from IFC to CityGML in a consistent manner could generate 3D City Models able to represent an entire city, but that also include detailed geometric and semantic information regarding its elements. However, CityGML and IFC present major differences in their schemas, rendering interoperability a challenging task, particularly when details of a building’s internal structure are considered (Level of Detail 4 in CityGML). The aim of this paper is to investigate interoperability options between the aforementioned standards, by converting IFC models to CityGML LoD 4 Models. The CityGML Models are then semantically enriched and the proposed methodology is assessed in terms of the models’ geometric validity and capability to preserve semantics.</p>
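At its core, the schema-mapping step of such a conversion pairs IFC entity classes with the CityGML feature types they most closely correspond to. The table below is a simplified illustration, not a complete or authoritative mapping, and the helper function is hypothetical.

```python
# Simplified IFC -> CityGML LoD4 class correspondences (illustrative only).
IFC_TO_CITYGML = {
    "IfcWall":   "WallSurface",
    "IfcSlab":   "FloorSurface",
    "IfcRoof":   "RoofSurface",
    "IfcDoor":   "Door",
    "IfcWindow": "Window",
    "IfcSpace":  "Room",   # interior structure only appears at LoD4
}

def convert(ifc_entities):
    """Map each IFC entity class to its CityGML target; collect the
    classes with no defined correspondence for manual handling."""
    converted, unmapped = [], []
    for name in ifc_entities:
        target = IFC_TO_CITYGML.get(name)
        (converted if target else unmapped).append(target or name)
    return converted, unmapped

print(convert(["IfcWall", "IfcSpace", "IfcBeam"]))
```

The unmapped bucket is where the schema differences noted in the abstract surface in practice: many IFC entities (structural members, services) have no direct CityGML counterpart and need project-specific rules.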


Author(s): J. Tian, T. Krauß, P. d’Angelo

Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM) and, in turn, the normalized digital surface model (nDSM) are generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing low-level pixels and high-level pixels with a higher probability of being trees or shadows. This boundary then serves as the initial level-set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, an edge-based active contour model is adopted, implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
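The height-based masking step is simple to illustrate: the nDSM is the DSM minus the DTM, and pixels above a building-height threshold form the initial rooftop mask. The tiny arrays and the 2.5 m threshold below are synthetic stand-ins; real inputs are full stereo-derived rasters.

```python
import numpy as np

# Synthetic 2x2 rasters: surface model (DSM) and terrain model (DTM), meters.
dsm = np.array([[520.0, 531.0],
                [521.0, 533.5]])
dtm = np.array([[519.5, 520.0],
                [520.5, 520.5]])

# nDSM = above-ground height; threshold it to get the initial building mask.
ndsm = dsm - dtm
mask = ndsm > 2.5          # keep pixels taller than 2.5 m (assumed threshold)
print(mask)
```

In the full pipeline this mask is further cleaned of likely trees and shadows before its boundary seeds the level-set evolution.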

