Different Levels of Complexity for Integrating Textured Extra-terrestrial Elevation Data in Game Engines for Educational Augmented and Virtual Reality Applications

Author(s):  
Claudia Lindner ◽  
Annette Ortwein ◽  
Kilian Staar ◽  
Andreas Rienow

Abstract. Elevation and visual data from Chang’E-2, Mars Viking, and MOLA were transformed into 3D models and environments using Unity and Unreal Engine to be implemented in augmented reality (AR) and virtual reality (VR) applications, respectively. The workflows for the two game development engines and the two purposes overlap, but differ significantly because of their intended usage: both are used in educational settings, but while the AR app has to run on the basic smartphones that students from all socio-economic backgrounds might have, the VR app requires high-end PCs and can therefore exploit the potential of such devices. Hence, the models for the AR app are reduced to the necessary components and sizes of the highest mountains on Luna and Mars, whereas the VR app contains several models of probe landing sites on Mars, a landscape containing the entire planet at multiple levels of detail, and a complex environment. Both applications are enhanced for educational use with annotations and interactive elements. This study focuses on the transfer of scientific data into game development engines for use in educational settings, using the example of scales in extra-terrestrial environments.
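As an illustration of the kind of data transfer described above, the following minimal Python sketch converts a georeferenced elevation raster into a heavily downsampled OBJ mesh of the sort that Unity or Unreal Engine can import. It assumes the rasterio library is available; the file names, downsampling factor and vertical scale are hypothetical choices, not values taken from the paper.

```python
# Minimal sketch: turn an elevation raster (e.g. a DEM tile of a Martian mountain)
# into a lightweight OBJ mesh for import into a game engine.
# File names, STEP and Z_SCALE are illustrative assumptions.
import rasterio

DEM_PATH = "olympus_mons_dem.tif"   # hypothetical input tile
STEP = 8                            # downsampling factor for the lightweight AR model
Z_SCALE = 0.001                     # assumed conversion of the elevation unit for the engine

with rasterio.open(DEM_PATH) as src:
    elevation = src.read(1)[::STEP, ::STEP].astype(float)
    xres, yres = src.res            # ground resolution of one raster cell

rows, cols = elevation.shape
with open("olympus_mons_ar.obj", "w") as obj:
    # One vertex per (downsampled) DEM cell; the engine's Y axis is used as "up".
    for r in range(rows):
        for c in range(cols):
            obj.write(f"v {c * xres * STEP} {elevation[r, c] * Z_SCALE} {r * yres * STEP}\n")
    # Two triangles per grid cell; OBJ vertex indices are 1-based.
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c + 1
            obj.write(f"f {i} {i + 1} {i + cols}\n")
            obj.write(f"f {i + 1} {i + cols + 1} {i + cols}\n")
```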

Author(s):  
Thomas Kersten ◽  
Daniel Drenkhan ◽  
Simon Deggim

Abstract. Technological advancements in the area of Virtual Reality (VR) in the past years have the potential to fundamentally impact our everyday lives. VR makes it possible to explore a digital world with a Head-Mounted Display (HMD) in an immersive, embodied way. In combination with current tools for 3D documentation and modelling and software for creating interactive virtual worlds, VR has the means to play an important role in the conservation and visualisation of cultural heritage (CH) for museums, educational institutions and other cultural areas. Corresponding game engines offer tools for interactive 3D visualisation of CH objects, which makes a new form of knowledge transfer possible with the direct participation of users in the virtual world. However, to ensure smooth and optimal real-time visualisation of the data in the HMD, VR applications should run at 90 frames per second. This frame rate depends on several criteria, including the amount of data and the number of dynamic objects. In this contribution, the performance of a VR application has been investigated using different digital 3D models of the fortress Al Zubarah in Qatar at various resolutions. We demonstrate how the amount of data and the hardware equipment influence real-time performance, and that developers of VR applications should find a compromise between the amount of data and the available computer hardware to guarantee a smooth real-time visualisation at approx. 90 fps. Consequently, CAD models offer better performance for real-time VR visualisation than meshed models due to their significantly reduced data volume.
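To make the trade-off between data volume and frame rate concrete, the sketch below reduces a dense mesh to a fixed triangle budget before game-engine import, which is one common way to stay near the 90 fps target. It assumes the open-source Open3D library; the file names and the triangle budget are illustrative, not values from the study.

```python
# Minimal sketch: decimate a dense scan-derived mesh to a triangle budget so the
# VR application can maintain its target frame rate. Budget and paths are assumptions.
import open3d as o3d

INPUT_MESH = "al_zubarah_dense.ply"   # hypothetical dense mesh from the scan pipeline
TRIANGLE_BUDGET = 500_000             # assumed budget for the available hardware

mesh = o3d.io.read_triangle_mesh(INPUT_MESH)
print(f"input triangles:  {len(mesh.triangles)}")

# Quadric edge-collapse decimation preserves the overall shape while cutting data volume.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=TRIANGLE_BUDGET)
simplified.compute_vertex_normals()
print(f"output triangles: {len(simplified.triangles)}")

o3d.io.write_triangle_mesh("al_zubarah_vr.ply", simplified)
```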


Author(s):  
K. Pavelka jr. ◽  
B. Michalík

Abstract. Virtual Reality (VR) is a highly topical subject in many branches of science and industry. Thanks to the rapid development and advancement of computer technology in recent years, it can now be used on a large scale, shows more detail, and is more affordable than before. Virtual reality is currently applied in many disciplines, and its popularity can be expected to grow progressively over the next few years. The Laboratory of Photogrammetry at the Czech Technical University in Prague is also interested in VR and focuses mainly on the documentation and visualization of historical buildings and objects. In our opinion, virtual reality offers great potential and extensive possibilities. 3D models of historical objects, primarily created by photogrammetric image-based modelling and rendering (IBMR) or by laser scanning, gain a completely different perspective in VR. In general, most newly designed buildings are now implemented in BIM. For certain projects, historical buildings or constructions should also be implemented in BIM. As a basic input into BIM, an accurate 3D spatial documentation of the current condition is needed, with special attention to additional information such as engineering networks, materials, etc. Creating BIM is one thing, visualizing a model is another. A historical object is irregular, and it is difficult to create a simplified form of it as a CAD model; this is much easier with modern buildings. The question is always the appropriate type of visualization, where virtual reality can be a very useful technology. So-called game engines such as Unreal Engine or Unity are used to create a virtual world. These are highly sophisticated tools that make it possible to create a suitable environment in which the created models can be placed and then viewed and analysed with the help of VR glasses. In our contribution, we show an example of a technology line that allows an object documented by laser scanning to be converted into virtual reality. An older industrial building prepared for future reconstruction was chosen as a case study. This object was scanned by a laser scanner, a 3D model was created, and material types and engineering networks were added to the model.
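One step of such a technology line, turning a registered laser-scan point cloud into a closed mesh that can be textured and brought into a game engine, could look like the following minimal sketch. It assumes the Open3D library; the file names and the reconstruction depth are illustrative only.

```python
# Minimal sketch: reconstruct a surface mesh from a registered laser-scan point cloud
# as one intermediate step on the way to a VR-ready model. Paths and depth are assumptions.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("industrial_building_scan.ply")   # hypothetical scan
pcd.estimate_normals()   # Poisson reconstruction requires oriented normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

# Drop poorly supported vertices, which are typically noisy artefacts at scan borders.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))

o3d.io.write_triangle_mesh("industrial_building_mesh.ply", mesh)
```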


Author(s):  
J.-P. Virtanen ◽  
A. Julin ◽  
H. Handolin ◽  
T. Rantanen ◽  
M. Maksimainen ◽  
...  

Abstract. Visualization applications are an increasingly significant component in the field of 3D geo-information. In them, the utilization of consumer-grade virtual reality (VR) head-mounted displays (HMD) has become a topical research question. It is notable that, in most presented implementations, the VR visualization is accomplished by a game engine. As game engines rely on textured mesh models as their conventional 3D asset format, the challenge in applying photogrammetric or laser scanning data is in producing models that are suitable for game engine use. We present an example of leveraging immersive visualization in geo-information, including the acquisition of data from the intended environment, processing it to a game engine compatible form, developing the required functions in the game engine and finally utilizing VR HMDs to deploy the application. The presented application combines 3D indoor models obtained via a commercial indoor mapping system, a 3D city model segment obtained by processing airborne laser scanning data, and a set of manually created 3D models. The performance of the application is evaluated on two different VR systems. The observed capabilities of interactive VR applications include: 1) intuitive and free exploration of 3D data, 2) the ability to operate in different scales, and with different scales of data, 3) the integration of different data types (such as 2D imaging and 3D models) in interactive scenes and 4) the possibility to leverage the rich interaction functions offered by the game engine platform. These capabilities could support several use cases in geo-information.
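One detail implied by processing geo-data "to a game engine compatible form" is that projected map coordinates (for example UTM metres) are far too large for the single-precision transforms used inside game engines. The sketch below recentres such coordinates around a local origin and reorders the axes for a Y-up engine; it is an assumed, typical pre-processing step rather than the authors' documented workflow, and the coordinate values are invented for illustration.

```python
# Minimal sketch: shift large projected coordinates to a local origin so a game
# engine's single-precision floats keep centimetre-level accuracy. Values are invented.
import numpy as np

# Vertices of a city-model segment in projected map coordinates (easting, northing, height).
vertices_map = np.array([
    [385412.73, 6671234.12, 14.2],
    [385420.11, 6671240.87, 14.5],
    [385415.02, 6671250.30, 21.8],
])

local_origin = vertices_map[:, :2].mean(axis=0)   # or a fixed project-wide origin

local = vertices_map.copy()
local[:, :2] -= local_origin                      # recentre easting/northing

# Many engines expect Y-up coordinates: map (E, N, H) -> (E, H, N).
engine_coords = local[:, [0, 2, 1]]
print(engine_coords)
```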


2020 ◽  
Author(s):  
Laura Daricello ◽  
Laura Leonardi ◽  
Antonio Maggio ◽  
Salvatore Orlando ◽  
Ignazio Pillitteri ◽  
...  

Virtual reality (VR) devices allow the exploration of 3D data in a fully immersive fashion and make it possible to create a powerful engagement experience and a direct interaction with current scientific data to learn more about astronomy in Education and Public Outreach (E&PO) activities. In 2019 the INAF Osservatorio Astronomico di Palermo (INAF-OAPa) launched 3DMAP-VR (3-Dimensional Modeling of Astrophysical Phenomena in Virtual Reality; Orlando et al. 2019, RNAAS 3, ID.176), a project for visualizing 3D results of astrophysical (magneto)-hydrodynamic (MHD) simulations through VR equipment. The models, uploaded to the Sketchfab portal (a platform widely used to publish and share 3D models and VR content), received very positive feedback from the scientific community and the general public. Here we show some of the scenes produced in the framework of 3DMAP-VR to describe astrophysical phenomena. More specifically, we focus our attention on MHD simulations describing the interaction of exoplanets (https://skfb.ly/6QYtC) with their host stars, and on artistic views of exoplanets based on information extracted from multi-wavelength observations, such as in the case of the exoplanets 55 Cancri (https://skfb.ly/6R6Pt) and Wasp-76b (https://skfb.ly/6QZHF). Moreover, the 3DMAP-VR project team used augmented reality to produce informative videos exploring the characteristics of some of these models, published on media.inaf.it and edu.inaf.it. These E&PO products not only allowed the public to understand the astrophysical phenomena, but also stimulated great synergy between the outreach team and the astronomers, and between researchers and the public.
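A plausible way to turn a simulation output cube into a mesh that can be uploaded to a platform such as Sketchfab is isosurface extraction. The sketch below uses scikit-image's marching cubes with a hypothetical data file and threshold; it illustrates the general technique, not the project's actual pipeline.

```python
# Minimal sketch: extract an isosurface from a 3D simulation cube and write it as OBJ,
# a format that 3D sharing platforms accept. Data file and iso-level are assumptions.
import numpy as np
from skimage import measure

density = np.load("mhd_density_cube.npy")   # hypothetical (nx, ny, nz) array

# Extract the isosurface at an assumed threshold (here, the mean density).
verts, faces, _normals, _values = measure.marching_cubes(density, level=float(density.mean()))

with open("mhd_isosurface.obj", "w") as obj:
    for v in verts:
        obj.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for f in faces:                          # OBJ vertex indices are 1-based
        obj.write(f"f {f[0] + 1} {f[1] + 1} {f[2] + 1}\n")
```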


Author(s):  
Denys Gorkovchuk ◽  
Julia Gorkovchuk ◽  
Thomas Luhmann

Recently, virtual reality technologies have been increasingly introduced into our lives. The focus of their use is shifting from the entertainment industry to design, healthcare, tourism, architecture, education and more. The advantages of virtual reality technology are especially noticeable in the field of archaeology, as many historical objects have not survived to our time, and their appearance can be reproduced only on the basis of historical sources and archaeological excavations. Most platforms for implementing virtual reality programs are based on game engines that can provide the required level of performance for VR. Such platforms show very good results for architectural objects, which often have many similar elements of simple shapes. Integrating complex objects with unique shapes, however, is usually a problem. In this article, we consider the use of photogrammetric methods to create 3D models of historical objects and the aspects of their integration into a virtual environment based on a game engine. Specifically, aspects such as object resolution and suitable level of detail are discussed. As a case study, such a virtual environment was created for the ancient Trypillia settlement in the territory of Ukraine.
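As one concrete handle on the resolution aspect mentioned above, a photogrammetric texture can be exported at several resolutions so that the game engine can trade visual detail against performance. The sketch below assumes the Pillow imaging library; the file names and resolution levels are illustrative choices, not those of the case study.

```python
# Minimal sketch: generate several texture resolutions for a photogrammetric model so
# lower-detail versions can be used at a distance. Names and sizes are assumptions.
from PIL import Image

SOURCE_TEXTURE = "trypillia_dwelling_8k.png"   # hypothetical photogrammetric texture
LEVELS = [8192, 4096, 2048, 1024]              # assumed level-of-detail resolutions

texture = Image.open(SOURCE_TEXTURE)
for size in LEVELS:
    lod = texture.resize((size, size), Image.Resampling.LANCZOS)
    lod.save(f"trypillia_dwelling_{size}.png")
```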


Author(s):  
F. Tschirschwitz ◽  
G. Büyüksalih ◽  
T. P. Kersten ◽  
T. Kan ◽  
G. Enc ◽  
...  

Abstract. “A picture is worth a thousand words”: a famous quote about knowledge dissemination, but also literally true. The documentation of cultural heritage (CH) monuments is carried out by measurements and photos and stored in 3D models – not by textual information alone. So what could be a more straightforward way to inform the public about CH than visual information? This approach can be extended not only by providing static images or videos from predefined angles, but by giving users the opportunity to interactively explore the virtual representation and interact with the scene. Recent advances in contemporary Virtual Reality (VR) have made it available to more people as prices have dropped. New devices have entered the market, so that VR is not limited to VR labs but is available even at home. With modern head-mounted displays, users can immerse themselves in the virtual CH monument to explore and interact with it. Game engines offer tools for the rapid development of interactions and help to produce visually appealing worlds.

This paper presents the generation of a virtual 3D model of Rumeli Hisarı, an Ottoman fortress at the Bosporus in Istanbul, Turkey (Fig. 1), and its processing for data integration into the game engine Unity. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, with the aim of a VR application for an immersive and interactive visualisation of the fortress using the VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use (including simultaneous multi-user environments) of such a VR visualisation for a CH monument is discussed.


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract. Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows the loading of 6 different layers at the same time, with the possibility to modulate opacity and threshold in real time. The 3D VR system was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, in the setting of case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.


2021 ◽  
Author(s):  
Haowen Jiang ◽  
Sunitha Vimalesvaran ◽  
Jeremy King Wang ◽  
Kee Boon Lim ◽  
Sreenivasulu Reddy Mogali ◽  
...  

BACKGROUND Virtual reality (VR) is a digital education modality that produces a virtual manifestation of the real world, and it has been increasingly used in medical education. As VR encompasses different modalities, tools and applications, there is a need to explore how VR has been employed in medical education. OBJECTIVE The objective of this scoping review is to map existing research on the use of VR in undergraduate medical education and to identify areas for future research. METHODS We performed a search of 4 bibliographic databases in December 2020, with data extracted using a standardized data extraction form. The data were narratively synthesized and reported in line with the PRISMA-ScR guidelines. RESULTS Of 114 included studies, 69 (61%) reported the use of commercially available surgical VR simulators. Other VR modalities included 3D models (15 [14%]) and virtual worlds (20 [18%]), mainly used for anatomy education. Most of the VR modalities included were semi-immersive (68 [60%]) and of high interactivity (79 [70%]). There is limited evidence on the use of more novel VR modalities such as mobile VR and virtual dissection tables (8 [7%]), as well as on the use of VR for the training of non-surgical and non-psychomotor skills (20 [18%]) or in group settings (16 [14%]). Only 3 studies reported the use of conceptual frameworks or theories in the design of VR. CONCLUSIONS Despite the extensive research available on VR in medical education, important gaps in the evidence remain. Future studies should explore the use of VR for the development of non-psychomotor skills and in areas other than surgery and anatomy.

