Networked, Real Time Translation of 3D Mesh Data to Immersive Virtual Reality Environments

Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of “presence” and “immersion” typically not experienced with traditional CAD-based platforms. However, one of the fundamental challenges of creating a high-quality immersive virtual reality experience is creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real-world artifacts. While it is possible to import standard 3D models into these immersive virtual reality environments, such models are typically generic in nature and do not represent the designer’s intent. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems such as the Microsoft Kinect has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
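The abstract above describes streaming reconstructed mesh data between machines over standard TCP connections. Because TCP delivers a byte stream with no message boundaries, such data is commonly framed before sending. The sketch below is illustrative only, not the authors' implementation: the message layout (a 4-byte length prefix followed by packed 32-bit vertex coordinates) is an assumption for demonstration.

```python
# Minimal sketch of framing mesh vertex data for a TCP byte stream.
# Layout (assumed for illustration): 4-byte little-endian length prefix,
# then consecutive (x, y, z) vertices as 32-bit floats (12 bytes each).
import struct

def pack_mesh(vertices):
    """Serialize a list of (x, y, z) vertex tuples into one framed message."""
    payload = b"".join(struct.pack("<3f", *v) for v in vertices)
    # The length prefix tells the receiver exactly how many bytes to read.
    return struct.pack("<I", len(payload)) + payload

def unpack_mesh(message):
    """Parse a framed message back into a list of (x, y, z) vertex tuples."""
    (length,) = struct.unpack_from("<I", message, 0)
    payload = message[4:4 + length]
    return [struct.unpack_from("<3f", payload, i) for i in range(0, length, 12)]

# Round trip: one triangle serialized on the sensor side, decoded on the
# rendering side.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
assert unpack_mesh(pack_mesh(tri)) == tri
```

In a real pipeline the framed bytes would be written to a TCP socket on the capture machine and reassembled on the rendering machine; the framing logic is the same in both directions.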

Author(s):  
Kevin Lesniak ◽  
Janis Terpenny ◽  
Conrad S. Tucker ◽  
Chimay Anumba ◽  
Sven G. Bilén

With design teams becoming more distributed, the sharing and interpreting of complex data about design concepts/prototypes and environments have become increasingly challenging. The size and quality of data that can be captured and shared directly affects the ability of receivers of that data to collaborate and provide meaningful feedback. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available red, green, blue, and depth (RGB-D) sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid three-dimensional (3D) reconstruction of physical environments. The authors present a method that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. Providing these features allows distributed design teams to share and interpret complex 3D data in a natural manner. The method reduces the processing requirements of the data capture system while enabling it to be portable. The method also provides an immersive environment in which designers can view and interpret the data remotely. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed method.


2020 ◽  
Author(s):  
Paola Araiza-Alba ◽  
Therese Keane ◽  
Jennifer L Beaudry ◽  
Jordy Kaufman

In recent years, immersive virtual reality technology (IVR) has seen a substantial improvement in its quality, affordability, and ability to simulate the real world. Virtual reality in psychology can be used for three basic purposes: immersion, simulation, and a combination of both. While the psychological implementations of IVR have been predominantly used with adults, this review seeks to update our knowledge about the uses and effectiveness of IVR with children, specifically its use as a tool for pain distraction, neuropsychological assessment, and skills training. Results showed that IVR is a useful tool when it is used either for immersive or simulative purposes (e.g., pain distraction, neuropsychological assessment), but when its use requires both simulation (of the real world) and immersion (e.g., a vivid environment), it is trickier to implement effectively.


2021 ◽  
Author(s):  
Ezgi Pelin Yildiz

Augmented reality is defined as a technology in which virtual objects are blended with the real world and interact with each other. Although augmented reality applications are used in many areas, one of the most important of these is education. AR technology allows the combination of real objects and virtual information in order to increase students’ interaction with physical environments and facilitate their learning. Developing technology enables students to learn complex topics in a fun and easy way through virtual reality devices. Students interact with objects in the virtual environment and can learn more about them. For example, by organizing digital tours of a museum or zoo in a completely different country, lessons can be taught in the company of a teacher as if the students were there at that moment. In light of all this, this study is a compilation study. In this context, augmented reality technologies were introduced and attention was drawn to their use, with examples, in different fields of education. As a suggestion at the end of the study, it was emphasized that educators should carefully read the prepared sections and put them into practice in their lessons. It was also pointed out that AR should be preferred as a way to communicate effectively with students through real-time interaction, especially during the pandemic.


2020 ◽  
Vol 20 (2) ◽  
Author(s):  
Paola Araiza ◽  
Therese Keane ◽  
Jennifer L. Beaudry ◽  
Jordy Kaufman

In recent years, immersive virtual reality technology (IVR) has seen a substantial improvement in its quality, affordability, and ability to simulate the real world. Virtual reality in psychology can be used for three basic purposes: immersion, simulation, and a combination of both. While the psychological implementations of IVR have been predominantly used with adults, this review seeks to update our knowledge about the uses and effectiveness of IVR with children, specifically its use as a tool for pain distraction, neuropsychological assessment, and skills training. Results showed that IVR is a useful tool when it is used either for immersive or simulative purposes (e.g., pain distraction, neuropsychological assessment), but when its use requires both simulation (of the real world) and immersion (e.g., a vivid environment), it is trickier to implement effectively.


Author(s):  
Shiguang Qiu ◽  
Xu Jing ◽  
Xiumin Fan ◽  
Qichang He ◽  
Dianliang Wu

When a real operator drives a virtual human in real time using motion capture to perform complex product assembly and disassembly simulations, very high driven accuracy is needed to meet the quality requirements of interactivity and of the simulation results. In order to improve the driven accuracy in a virtual reality environment, a method is put forward that analyzes the factors influencing the virtual human's real-time driven accuracy and optimizes them. A systematic analysis of the factors affecting accuracy is given; they can be sorted into hardware factors and software factors. We find that the software factors are the main ones affecting accuracy, and that it is very hard to analyze their influence separately. Therefore, we treat the virtual human kinematic system as a fuzzy system and improve the real-time driven accuracy using an optimization method. First, a real-time driven model is built on dynamic constraints and body-joint rotation information, and it supports personalized human driving. Second, a function is established to describe the driven error during interactive operations in the virtual environment. Then, based on the principle of minimum cumulative error, we establish an optimization model with a specified optimization zone and constraints set according to standard Chinese adult dimensions. Next, the model is solved using a genetic algorithm to find the virtual human segment dimensions that best match the real operator. Lastly, the method is verified with an example of auto engine virtual assembly. The result shows that the method can improve the driven accuracy effectively.
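The optimization step described above (minimizing cumulative driven error over constrained segment dimensions via a genetic algorithm) can be sketched in miniature. This is an illustrative toy, not the paper's model: the error function, the bounds, and the GA operators below are stand-ins for the paper's driven-error function and the standard Chinese adult dimensions used as constraints.

```python
# Toy genetic algorithm: search virtual-human segment lengths within
# anthropometric bounds to minimize a stand-in cumulative-error function.
import random

# Assumed illustrative bounds (meters): upper-arm and forearm lengths.
BOUNDS = [(0.25, 0.40), (0.20, 0.35)]

def cumulative_error(segments, target=(0.31, 0.26)):
    # Stand-in for the driven-error function: squared distance to the
    # operator's true segment lengths (unknown in practice).
    return sum((s - t) ** 2 for s, t in zip(segments, target))

def evolve(pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    # Initial population sampled uniformly inside the constraint box.
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cumulative_error)
        survivors = pop[: pop_size // 2]            # selection: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # crossover: average
            for i, (lo, hi) in enumerate(BOUNDS):        # mutate, then clamp
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.005)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cumulative_error)

best = evolve()
```

Because the best individuals survive unchanged each generation, the best error is non-increasing, and the population converges toward segment dimensions matching the (assumed) operator target while respecting the bounds.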


Author(s):  
Thomas C. Edwards ◽  
Arjun Patel ◽  
Bartosz Szyszka ◽  
Alexander W. Coombs ◽  
Alexander D. Liddle ◽  
...  

Abstract Introduction Immersive virtual reality (iVR) is a novel technology that can enhance surgical training in a virtual environment without supervision. However, it is untested for training in the selection, assembly, and delivery of instrumentation in orthopaedic surgery, tasks typically performed by scrub nurses. This study investigates the impact of an iVR curriculum on this facet of the technically demanding revision total knee arthroplasty (rTKA). Materials and methods Ten scrub nurses completed training in four iVR sessions over a 4-week period. Initially, nurses completed a baseline real-world assessment, performing their role with real equipment in a simulated operation. Each subsequent iVR session involved a guided mode, in which the software taught participants the procedural choreography and assembly of instrumentation in a simulated operating room. In the latter three sessions, nurses also undertook an assessment in iVR. Outcome measures related to procedural sequence, duration of surgery, and efficiency of movement. Transfer of skills from iVR to the real world was assessed in a post-training simulated operation. A pre- and post-training questionnaire assessed the participants' knowledge, confidence, and anxiety. Results Operative time reduced by an average of 47% across the three unguided sessions (mean 55.5 ± 17.6 min to 29.3 ± 12.1 min, p < 0.001). Assistive prompts reduced by 75% (34.1 ± 16.8 to 8.6 ± 8.8, p < 0.001), dominant-hand motion by 28% (881.3 ± 178.5 m to 643.3 ± 119.8 m, p < 0.001), and head motion by 36% (459.9 ± 99.7 m to 292.6 ± 85.3 m, p < 0.001). Real-world skill improved from 11% correct prior to iVR training to 84% correct post-training. Participants reported increased confidence and reduced anxiety in scrubbing for rTKA procedures (p < 0.001). Conclusions For scrub nurses, unfamiliarity with complex surgical procedures or equipment is common. Immersive VR training improved their understanding, technical skills, and efficiency, and these iVR-learnt skills transferred into the real world.


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1069
Author(s):  
Deyby Huamanchahua ◽  
Adriana Vargas-Martinez ◽  
Ricardo Ramirez-Mendoza

Exoskeletons are external structural mechanisms with joints and links that work in tandem with the user to increase, reinforce, or restore human performance. Virtual reality can be used to produce environments in which the intensity of practice and the feedback on performance can be manipulated to provide tailored motor training. Will it be possible to combine both technologies and have them synchronized to reach better performance? This paper presents the kinematic analysis for the position and orientation synchronization between the pose of an n-DoF upper-limb exoskeleton and a projected object in an immersive virtual reality environment using a VR headset. To achieve this goal, the exoskeletal mechanism is analyzed using Euler angles and the Pieper technique to obtain the equations that lead to its orientation, forward, and inverse kinematic models. This paper extends the authors' previous work by using an early-stage upper-limb exoskeleton prototype for the synchronization process.
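The kinematic models mentioned above map joint angles to an end-effector pose that can be synchronized with the rendered object in VR. As a minimal sketch, far simpler than an n-DoF exoskeleton and not the authors' model, the forward kinematics of a planar two-link arm illustrates the idea; the link lengths and joint angles here are illustrative values.

```python
# Forward kinematics of a planar 2-DoF arm: given joint angles, compute the
# end-effector position that a VR object would be synchronized to.
import math

def forward_kinematics(theta1, theta2, l1=0.30, l2=0.25):
    """End-effector (x, y) of a planar two-link arm; angles in radians,
    link lengths in meters (illustrative values)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joints at zero the arm lies stretched along the x-axis:
x, y = forward_kinematics(0.0, 0.0)   # x ≈ 0.55, y ≈ 0.0
```

A real exoskeleton would chain one homogeneous transform per joint (which is where Euler-angle conventions and the Pieper technique come in), but each transform composes position and orientation in the same way this planar case composes angles.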


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Tridimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows the loading of six different layers at the same time, with the possibility to modulate opacity and threshold in real time. 3D VR was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, in the setting of case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgeons to tailor surgical strategies to the single patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.

