Virtual Reality Application of the Fortress Al Zubarah in Qatar Including Performance Analysis of Real-Time Visualisation

Author(s):  
Thomas Kersten ◽  
Daniel Drenkhan ◽  
Simon Deggim

Abstract: Technological advancements in the area of Virtual Reality (VR) in the past years have the potential to fundamentally impact our everyday lives. VR makes it possible to explore a digital world with a Head-Mounted Display (HMD) in an immersive, embodied way. In combination with current tools for 3D documentation and modelling and software for creating interactive virtual worlds, VR has the means to play an important role in the conservation and visualisation of cultural heritage (CH) for museums, educational institutions and other cultural areas. Corresponding game engines offer tools for interactive 3D visualisation of CH objects, which makes a new form of knowledge transfer possible with the direct participation of users in the virtual world. However, to ensure smooth and optimal real-time visualisation of the data in the HMD, VR applications should run at 90 frames per second (fps). This frame rate depends on several criteria, including the amount of data and the number of dynamic objects. In this contribution, the performance of a VR application has been investigated using different digital 3D models of the fortress Al Zubarah in Qatar at various resolutions. We demonstrate the influence of the amount of data and the hardware equipment on real-time performance, and show that developers of VR applications should find a compromise between the amount of data and the available computer hardware to guarantee smooth real-time visualisation at approximately 90 fps. CAD models therefore offer better performance for real-time VR visualisation than meshed models, due to their significantly reduced data volume.
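The 90 fps target discussed in the abstract translates directly into a per-frame rendering budget. The sketch below illustrates that arithmetic; the function names and sample frame times are illustrative, not taken from the paper.

```python
# Per-frame time budget implied by a target frame rate (illustrative only;
# the sample frame times below are not measurements from the study).
def frame_budget_ms(target_fps: float) -> float:
    """Milliseconds available to render one frame at the target rate."""
    return 1000.0 / target_fps

def meets_target(frame_time_ms: float, target_fps: float = 90.0) -> bool:
    """True if a measured frame time stays within the VR budget."""
    return frame_time_ms <= frame_budget_ms(target_fps)

budget = frame_budget_ms(90.0)  # ~11.11 ms per frame at 90 fps
ok_fast = meets_target(9.5)     # a 9.5 ms frame fits the budget
ok_slow = meets_target(14.0)    # a 14 ms frame would drop below 90 fps
```

This is why reducing data volume (e.g. using CAD models instead of dense meshes) matters: every extra millisecond of geometry processing eats into an 11 ms budget.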

2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract: Three-dimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows the loading of six different layers at the same time, with the possibility to modulate opacity and threshold in real time. 3D VR was used during preoperative planning, allowing a better definition of surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, for case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.


2021 ◽  
Vol 1 (1) ◽  
pp. 48-67
Author(s):  
Dylan Yamada-Rice

This article reports on one stage of a project that considered twenty 8–12-year-olds' use of Virtual Reality (VR) for entertainment. The entire project considered this in relation to interaction and engagement, health and safety, and how VR play fitted into children's everyday home lives. The specific focus of this article is solely on children's interaction and engagement with a range of VR content on both a low-end and a high-end head-mounted display (HMD). The data were analysed using novel multimodal methods that included stop-motion animation and graphic narratives to develop multimodal means of analysis within the context of VR. The data highlighted core design elements in VR content that promoted or inhibited children's storytelling in virtual worlds. These are visual style, movement and sound, which are described in relation to three core points of the user's journey through the virtual story: (1) entering the virtual environment, (2) being in the virtual story world, and (3) affecting the story through interactive objects. The findings offer research-based design implications for the improvement of virtual content for children, specifically in relation to creating content that promotes creativity and storytelling, thereby extending the benefits that have previously been highlighted in the field of interactive storytelling with other digital media.


2021 ◽  
Author(s):  
Haowen Jiang ◽  
Sunitha Vimalesvaran ◽  
Jeremy King Wang ◽  
Kee Boon Lim ◽  
Sreenivasulu Reddy Mogali ◽  
...  

BACKGROUND: Virtual reality (VR) is a digital education modality that produces a virtual manifestation of the real world, and it has been increasingly used in medical education. As VR encompasses different modalities, tools and applications, there is a need to explore how VR has been employed in medical education.
OBJECTIVE: The objective of this scoping review is to map existing research on the use of VR in undergraduate medical education and to identify areas for future research.
METHODS: We performed a search of 4 bibliographic databases in December 2020, with data extracted using a standardized data extraction form. The data were narratively synthesized and reported in line with the PRISMA-ScR guidelines.
RESULTS: Of 114 included studies, 69 (61%) reported the use of commercially available surgical VR simulators. Other VR modalities included 3D models (15 [14%]) and virtual worlds (20 [18%]), mainly used for anatomy education. Most of the VR modalities included were semi-immersive (68 [60%]) and of high interactivity (79 [70%]). There is limited evidence on the use of more novel VR modalities such as mobile VR and virtual dissection tables (8 [7%]), as well as on the use of VR for training of non-surgical and non-psychomotor skills (20 [18%]) or in group settings (16 [14%]). Only 3 studies reported the use of conceptual frameworks or theories in the design of VR.
CONCLUSIONS: Despite extensive research available on VR in medical education, important gaps in the evidence remain. Future studies should explore the use of VR for the development of non-psychomotor skills and in areas other than surgery and anatomy.


Author(s):  
Ratnadeep Paul ◽  
Sam Anand

Product Life-cycle Management (PLM) has been one of the single most important techniques to have been developed in the manufacturing industry. The increasing capabilities of the internet and the ever-increasing dependence of business entities on it have led to the development of metaverses (internet-based 3D virtual worlds) which act as business platforms where companies display and showcase their latest products and services. This in turn has led to a demand for methods for the easy transfer of data from standalone PLM systems to internet-based virtual worlds. This paper presents the development of a translator that transfers product data of 3D models created in CAD systems to an internet-based virtual world. The translator uses a faceted-surface approach to transfer the product information. In this work, CAD models were converted to a CAD-neutral data format, the JT file format, and finally recreated in the metaverse Second Life (SL). Examples of models translated from JT to SL are presented. A technique known as prim optimization, which increases the efficiency of the translation, was also incorporated into the translator's algorithm, and examples of prim optimization are provided in the paper.


2013 ◽  
Vol 380-384 ◽  
pp. 1847-1850
Author(s):  
Yan Jun Chang

Virtual reality technology uses computer hardware and software resources to create and experience integrated virtual worlds. It can realize dynamic simulation of the real world, and the dynamic environment can respond in real time to the user's posture and spoken commands, establishing a real-time interactive relationship between the user and the simulated environment. Based on the acquisition of key parameters from sports technique and the quantification of technical actions, we propose application methods for virtual reality technology in diagnosis, comprising the following steps: calibrating the system, attaching markers to the tester, capturing motion tracks, and analysing the collected data. The effect and composition of virtual reality technology in sports are also discussed.


Author(s):  
Claudia Lindner ◽  
Annette Ortwein ◽  
Kilian Staar ◽  
Andreas Rienow

Abstract: Elevation and visual data from Chang'E-2, Mars Viking, and MOLA were transformed into 3D models and environments using Unity and Unreal Engine, to be implemented in augmented reality (AR) and virtual reality (VR) applications, respectively. The workflows for the two game development engines and the two purposes overlap, but have significant differences stemming from their intended usage: both are used in educational settings, but while the AR app has to run on basic smartphones that students from all socio-economic backgrounds might have, the VR app requires high-end PCs and can therefore make full use of such devices' potential. Hence, the models for the AR app are reduced to the necessary components and sizes of the highest mountains on the Moon and Mars, whereas the VR app contains several models of probe landing sites on Mars, a landscape containing the entire planet at multiple levels of detail, and a complex environment. Both applications are enhanced for educational use with annotations and interactive elements. This study focuses on the transfer of scientific data into game development engines for use in educational settings, using the example of scales in extra-terrestrial environments.
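A common first step when bringing gridded elevation data into a game engine is converting the height grid into mesh geometry. The sketch below shows the standard vertex/triangle construction; the function name, parameters, and scale factors are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch: turn a gridded elevation array (DEM-style) into mesh
# vertices and triangle indices, as needed when importing terrain data
# into Unity or Unreal Engine. Names and scales are illustrative.
def heightmap_to_mesh(heights, cell_size=1.0, z_scale=1.0):
    rows, cols = len(heights), len(heights[0])
    # One vertex per grid cell: (x, y) from the grid, z from the elevation.
    vertices = [(x * cell_size, y * cell_size, heights[y][x] * z_scale)
                for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x
            triangles.append((i, i + cols, i + 1))             # upper-left tri
            triangles.append((i + 1, i + cols, i + cols + 1))  # lower-right tri
    return vertices, triangles

# A 2x2 grid yields 4 vertices and 2 triangles (one quad).
verts, tris = heightmap_to_mesh([[0, 1], [2, 3]], cell_size=10.0)
```

In practice, level-of-detail variants (as the VR app's multi-resolution planet model requires) are generated by downsampling the height grid before this step.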


2021 ◽  
Vol 10 (5) ◽  
pp. 3546-3551
Author(s):  
Tamanna Nurai

Cybersickness remains a negative consequence that degrades the interface for users of virtual worlds created for Virtual Reality (VR). Various abnormalities can cause quantifiable changes in body awareness when donning a Head-Mounted Display (HMD) in a Virtual Environment (VE). VR headsets provide a VE that matches the actual world and allows users to have a range of experiences. Motion sickness and simulator sickness measures provide self-report assessments of cybersickness in VEs. In this study, a simulator sickness questionnaire is used to measure the after-effects of the virtual environment. This research aims to answer whether immersive VR induces cybersickness and impacts equilibrium coordination. The present research is designed as a cross-sectional observational analysis. According to the selection criteria, a total of 40 subjects will be recruited from AVBRH, Sawangi Meghe. With the intervention, the experiment lasted 6 months. The simulator sickness questionnaire is used to evaluate the after-effects of the virtual environment; motion sickness is measured at a single time point, while the equilibrium tests are evaluated twice, at exit and after 10 minutes. The use of virtual reality in video games is still in its development. Integrating gameplay action into the VR experience will necessitate a significant amount of research and development. The study evaluates whether immersive VR induces cybersickness and impacts equilibrium coordination. Numerous scales have been developed to measure cybersickness, and the essence of cybersickness has been revealed owing to work on motion sickness in simulated systems.
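The simulator sickness questionnaire mentioned above is typically scored by weighting raw subscale sums. The sketch below uses the conversion weights commonly attributed to Kennedy et al.'s Simulator Sickness Questionnaire (SSQ); it is an illustrative scoring sketch under that assumption, not the study's own analysis code, and the item-to-subscale mapping is omitted.

```python
# Illustrative SSQ scoring from raw subscale sums, using the weights
# commonly attributed to Kennedy et al. (1993). Assumed, not from the study.
def ssq_scores(nausea_raw: int, oculomotor_raw: int, disorientation_raw: int):
    return {
        "nausea": nausea_raw * 9.54,
        "oculomotor": oculomotor_raw * 7.58,
        "disorientation": disorientation_raw * 13.92,
        "total": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }

# Example: raw subscale sums of 3, 2 and 1 from the 0-3 symptom ratings.
scores = ssq_scores(3, 2, 1)
```

Administering the questionnaire at exit and again after 10 minutes, as in this protocol, yields paired scores whose difference indicates how quickly after-effects subside.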


2011 ◽  
Vol 2 (4) ◽  
pp. 89 ◽  
Author(s):  
Donald H. Sanders

<p>This paper focuses on a system that can ensure that excavations are indeed fully documented and that the record is accurate. REVEAL is a single piece of software that coordinates all data types used at excavations with semi-automated tools that, in turn, can ease the process of documenting sites, trenches and objects, of recording excavation progress, of researching and analyzing the collected evidence, and even of creating 3D models and virtual worlds. Search and retrieval, and thus the testing of hypotheses against the excavated material, happens in real time as the excavation proceeds. That is the important advance.</p>


2020 ◽  
Vol 10 (7) ◽  
pp. 2248
Author(s):  
Syed Hammad Hussain Shah ◽  
Kyungjin Han ◽  
Jong Weon Lee

We propose a novel authoring and viewing system for generating multiple experiences with a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within the 360° environment than normal videos, and there can be multiple interesting areas within a 360° frame at the same time. Due to the narrow field of view in virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system is aimed at generating multiple experiences based on interesting information in different regions of a 360° video and at efficiently transferring these experiences to prospective users. The proposed system generates experiences using two approaches: (1) recording the user's experience as the user watches a panoramic video with a virtual reality head-mounted display, and (2) tracking an arbitrary interesting object in a 360° video selected by the user. For tracking of an arbitrary interesting object, we developed a pipeline around an existing simple object tracker to adapt it for 360° videos. This tracking algorithm runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, there is no existing system that can generate a variety of different experiences from a single 360° video and enable the viewer to watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system along with a detailed user study was performed to assess the system's application. Findings from the evaluation showed that a single piece of 360° multimedia content can generate multiple experiences and transfer them among users. Moreover, sharing of the 360° experiences enabled viewers to watch multiple interesting contents with less effort.
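Adapting a conventional object tracker to 360° video requires converting between equirectangular pixel coordinates and viewing angles (yaw/pitch), since objects wrap around the frame edges. The sketch below shows that standard conversion; the angle conventions (yaw in [-180°, 180°], pitch in [-90°, 90°]) are assumptions, not the paper's code.

```python
# Sketch: map equirectangular pixel coordinates to yaw/pitch viewing angles
# and back - a conversion any 360-degree tracking or focus-assistance
# pipeline needs. Conventions assumed: yaw in [-180, 180], pitch in [-90, 90].
def pixel_to_angles(x, y, width, height):
    yaw = (x / width) * 360.0 - 180.0    # horizontal angle (longitude)
    pitch = 90.0 - (y / height) * 180.0  # vertical angle (latitude)
    return yaw, pitch

def angles_to_pixel(yaw, pitch, width, height):
    x = (yaw + 180.0) / 360.0 * width
    y = (90.0 - pitch) / 180.0 * height
    return x, y

# The centre pixel of a 3840x1080 frame maps to yaw 0, pitch 0, and back.
yaw, pitch = pixel_to_angles(1920, 540, 3840, 1080)
x, y = angles_to_pixel(yaw, pitch, 3840, 1080)
```

A focus-assistance technique like the one described can steer the viewer by computing the angular offset between the current head pose and the tracked object's yaw/pitch.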


2020 ◽  
Vol 9 (2) ◽  
pp. 118
Author(s):  
Robert Olszewski ◽  
Mateusz Cegiełka ◽  
Urszula Szczepankowska ◽  
Jacek Wesołowski

Game engines are not only capable of creating virtual worlds or providing entertainment, but also of modelling actual geographical space and producing solutions that support the process of social participation. This article presents an authorial concept of using the environment of Cities: Skylines and the C# programming language to automate the process of importing official topographic data into the game engine and developing a prototype of a serious game that supports solving social and ecological problems. The model—developed using digital topographic data, digital terrain models, and CityGML 3D models—enabled the creation of a prototype of a serious game, later endorsed by the residents of the municipality, local authorities, as well as the Ministry of Investment and Economic Development.

