Real-time sound propagation hardware accelerator for immersive virtual reality 3D audio

Author(s):  
Dukki Hong ◽  
Tae-Hyoung Lee ◽  
Yejong Joo ◽  
Woo-Chan Park

2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows six different layers to be loaded at the same time, with the possibility of modulating opacity and threshold in real time. 3D VR was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, for case discussion within the neurosurgical team and during multidisciplinary team (MDT) discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.
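The real-time opacity and threshold modulation described above can be illustrated with a minimal front-to-back compositing sketch. This is not the Surgical Theatre's actual rendering pipeline; the function name, the per-layer threshold/opacity parameters, and the toy two-layer input are illustrative assumptions.

```python
import numpy as np

def composite_layers(layers, thresholds, opacities):
    """Blend stacked rendered layers front-to-back.

    layers: list of 2D float arrays (layer intensities in [0, 1])
    thresholds: per-layer cutoff; pixels below it become fully transparent
    opacities: per-layer alpha in [0, 1]
    """
    out = np.zeros_like(layers[0])
    remaining = np.ones_like(layers[0])  # transmittance still available
    for layer, thr, alpha in zip(layers, thresholds, opacities):
        visible = (layer >= thr).astype(layer.dtype)
        a = alpha * visible
        out += remaining * a * layer   # accumulate this layer's contribution
        remaining *= (1.0 - a)         # attenuate what deeper layers can add
    return out

# Raising a layer's threshold hides its fainter structures in real time:
frame = composite_layers(
    [np.full((2, 2), 0.8), np.full((2, 2), 0.4)],
    thresholds=[0.5, 0.5],
    opacities=[0.5, 1.0],
)
# → 0.4 everywhere: the second layer falls below its threshold and vanishes
```

Lowering the second threshold below 0.4 would make that layer reappear, which is the interaction the abstract credits with exposing small vessels intraoperatively.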


2018 ◽  
Vol 9 (6) ◽  
pp. 2825 ◽  
Author(s):  
Mark Draelos ◽  
Brenton Keller ◽  
Christian Viehland ◽  
Oscar M. Carrasco-Zevallos ◽  
Anthony Kuo ◽  
...  

Author(s):  
Gabriele Montecchiari ◽  
Gabriele Bulian ◽  
Paolo Gallina

The analysis of a ship's layout from the point of view of safe and orderly evacuation is an important step in ship design, and it can be carried out with agent-based evacuation simulation tools, typically run in batch mode. Allowing humans to interactively participate in a simulated evacuation process together with computer-controlled agents opens a series of interesting possibilities for design, research and development. To this end, this article presents the development of a validated agent-based evacuation simulation tool that allows real-time human participation through immersive virtual reality. The main characteristics of the underlying social-force-based modelling technique are described. The tool is verified and validated against International Maritime Organization test cases, experimental data and FDS+Evac simulations. A first approach for supporting real-time human participation is then presented. An initial experiment embedding immersive virtual reality human participation is described, together with outcomes from comparisons between human-controlled avatars and computer-controlled agents. Results from this initial testing are encouraging in pursuing the use of virtual reality as a tool to obtain information on human behaviour during evacuation.
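The social-force approach the abstract refers to can be sketched as follows: each agent feels a driving force toward its goal plus exponential repulsion from other agents. This is a minimal Helbing-style step, not the paper's validated model; the parameter values (`v0`, `tau`, `A`, `B`) and time step are illustrative assumptions.

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.05, v0=1.34, tau=0.5,
                      A=2.0, B=0.3):
    """One explicit-Euler step of a simplified social force model.

    pos, vel, goals: (N, 2) arrays of positions, velocities, targets.
    v0: desired walking speed, tau: relaxation time,
    A/B: strength and range of agent-agent repulsion (illustrative values).
    """
    force = np.zeros_like(pos)
    # Driving force: relax toward the desired velocity aimed at the goal.
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    desired = v0 * to_goal / np.maximum(dist, 1e-9)
    force += (desired - vel) / tau
    # Pairwise repulsion: decays exponentially with inter-agent distance.
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            if r > 1e-9:
                force[i] += A * np.exp(-r / B) * d / r
    vel = vel + force * dt
    pos = pos + vel * dt
    return pos, vel

# Two agents walking toward each other's starting points:
pos = np.array([[0.0, 0.0], [5.0, 0.0]])
vel = np.zeros((2, 2))
goals = np.array([[5.0, 0.0], [0.0, 0.0]])
for _ in range(20):
    pos, vel = social_force_step(pos, vel, goals)
```

In the tool described here, a human-controlled avatar would simply bypass this force update, taking its velocity from the VR tracking input while still exerting repulsion on the computer-controlled agents.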


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of “presence” and “immersion” typically not experienced with traditional CAD-based platforms. However, one of the fundamental challenges of creating a high-quality immersive virtual reality experience is creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real-world artifacts. While standard 3D models can be imported into these immersive virtual reality environments, such models are typically generic in nature and do not represent the designer’s intent. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems such as the Microsoft Kinect has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
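The first step of any such RGB-D reconstruction pipeline is back-projecting the depth image into a 3D point cloud using the pinhole camera model. The sketch below shows that step only; it is not the authors' mesh reconstruction algorithm, and the intrinsics (`fx`, `fy`, `cx`, `cy`) are sensor-specific values that a Kinect-style device would report.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D points in the camera
    frame via the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    Zero-depth pixels (no sensor return) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy 2x2 depth image with one invalid (zero) pixel and unit intrinsics:
depth = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
# → 3 valid points; the zero-depth pixel is discarded
```

A mesh reconstruction stage would then triangulate neighboring points before the result is rendered in the VR environment.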


2021 ◽  
Author(s):  
F. J. Rodal Martínez

Virtual Reality is defined as an interactive, multisensory computer system in which an environment is simulated in real time, and it falls into two categories: Immersive Virtual Reality and Non-Immersive Virtual Reality. To date, Virtual Reality has been used in areas such as education, entertainment and rehabilitation. The WHO estimates that around 15% of the world's population lives with a disabling condition; in conjunction with the ISPO, it determined that about 0.5% of the world's population requires an orthotic or prosthetic system. In Mexico, the National Survey of Demographic Dynamics estimates that 10.9% of the population has difficulty walking or moving. The objective of this project is to design a Virtual Reality system for training transhumeral amputees in the use of the prosthesis. Two virtual environments and eight 3D characters were created so that the subjects to be trained can choose among them for the training sessions. The subjects control these 3D characters in real time through a motion capture system, which also generates a biomechanical analysis of shoulder movement during execution of the movements.
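A shoulder measure of the kind such a biomechanical analysis might report can be derived from three motion-capture marker positions. This is a simplified planar angle between the upper-arm and trunk vectors, not the project's actual analysis; the marker names and the single-angle output are illustrative assumptions.

```python
import numpy as np

def shoulder_angle_deg(shoulder, elbow, hip):
    """Angle (degrees) between the upper-arm vector (shoulder -> elbow)
    and the trunk vector (shoulder -> hip), from 3D marker positions.
    A full biomechanical model would resolve flexion, abduction and
    rotation separately; this collapses them into one included angle."""
    arm = np.asarray(elbow, dtype=float) - np.asarray(shoulder, dtype=float)
    trunk = np.asarray(hip, dtype=float) - np.asarray(shoulder, dtype=float)
    cosang = np.dot(arm, trunk) / (np.linalg.norm(arm) * np.linalg.norm(trunk))
    # Clip guards against floating-point drift outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Arm held horizontally, trunk vertical: the included angle is 90 degrees.
angle = shoulder_angle_deg([0, 1, 0], [1, 1, 0], [0, 0, 0])
# → 90.0
```

Evaluated per frame of the motion-capture stream, such angles give the trainer a quantitative trace of shoulder use across a session.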


Author(s):  
Kevin Lesniak ◽  
Janis Terpenny ◽  
Conrad S. Tucker ◽  
Chimay Anumba ◽  
Sven G. Bilén

With design teams becoming more distributed, the sharing and interpreting of complex data about design concepts/prototypes and environments have become increasingly challenging. The size and quality of data that can be captured and shared directly affects the ability of receivers of that data to collaborate and provide meaningful feedback. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available red, green, blue, and depth (RGB-D) sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid three-dimensional (3D) reconstruction of physical environments. The authors present a method that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. Providing these features allows distributed design teams to share and interpret complex 3D data in a natural manner. The method reduces the processing requirements of the data capture system while enabling it to be portable. The method also provides an immersive environment in which designers can view and interpret the data remotely. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed method.
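Streaming reconstruction data between the capture machine and remote viewers over TCP requires framing, since TCP is a byte stream with no message boundaries. A common approach, sketched below, is to length-prefix each mesh frame; this is an assumed wire format for illustration, not the protocol the authors describe.

```python
import socket
import struct

def send_frame(sock, payload):
    """Send one mesh frame (serialized vertex/index bytes), prefixed with
    its 4-byte big-endian length so the receiver knows where it ends."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock, n):
    """Read exactly n bytes, looping because recv() may return fewer."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock):
    """Read one length-prefixed frame from the socket."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

# Loopback demonstration with a connected socket pair:
a, b = socket.socketpair()
send_frame(a, b"vertex-buffer-bytes")
frame = recv_frame(b)
a.close()
b.close()
```

Keeping serialization this simple on the capture side is consistent with the stated goal of a low-overhead, portable capture system, pushing rendering work to the viewers' machines.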

