An Immersive Telepresence System Using a Real-Time Omnidirectional Camera and a Virtual Reality Head-Mounted Display

Author(s):  
Luis Gaemperle ◽  
Kerem Seyid ◽  
Vladan Popovic ◽  
Yusuf Leblebici


2020 ◽
Vol 10 (7) ◽  
pp. 2248
Author(s):  
Syed Hammad Hussain Shah ◽  
Kyungjin Han ◽  
Jong Weon Lee

We propose a novel authoring and viewing system for generating multiple experiences from a single 360° video and efficiently transferring these experiences to the user. An immersive video contains far more information of interest within its 360° environment than a conventional video, and several areas of interest can appear in a 360° frame at the same time. Because of the narrow field of view of virtual reality head-mounted displays, a user can view only a limited area of a 360° video at once. Our system therefore generates multiple experiences based on interesting content in different regions of a 360° video and transfers these experiences efficiently to prospective viewers. The proposed system generates experiences in two ways: (1) by recording the user's experience as the user watches a panoramic video through a virtual reality head-mounted display, and (2) by tracking an arbitrary object of interest in the 360° video selected by the user. For object tracking, we developed a pipeline around an existing simple object tracker to adapt it to 360° videos; the tracking runs in real time on a CPU with high precision. To the best of our knowledge, no existing system can generate a variety of experiences from a single 360° video and let the viewer watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus-assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system and a detailed user study were performed to assess its applicability. The findings show that a single piece of 360° multimedia content can yield multiple experiences that transfer among users.
Moreover, sharing 360° experiences enabled viewers to watch multiple interesting contents with less effort.
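The abstract does not detail the 360° adaptation pipeline; one common way to let an ordinary planar tracker cope with equirectangular video is to roll each frame horizontally so the target stays away from the wrap-around seam. A minimal sketch of that idea (function names and the box convention are illustrative, not the authors' code):

```python
import numpy as np

def recenter_equirectangular(frame, box):
    """Roll an equirectangular frame horizontally so the tracked box
    sits at the image centre, away from the left/right wrap-around seam.

    frame: H x W x C image array; box: (x, y, w, h) in pixels.
    Returns the rolled frame and the horizontal shift applied.
    """
    h, w = frame.shape[:2]
    x, y, bw, bh = box
    box_cx = (x + bw / 2) % w
    shift = int(w / 2 - box_cx)          # move box centre toward w/2
    return np.roll(frame, shift, axis=1), shift

def unshift_box(box, shift, width):
    """Map a box tracked in the rolled frame back to original coordinates."""
    x, y, bw, bh = box
    return ((x - shift) % width, y, bw, bh)
```

Per frame, one would recenter around the previous box, run the planar tracker on the rolled image, then map the result back with `unshift_box`.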


Author(s):  
Thomas Kersten ◽  
Daniel Drenkhan ◽  
Simon Deggim

Abstract Technological advancements in the area of Virtual Reality (VR) in the past years have the potential to fundamentally impact our everyday lives. VR makes it possible to explore a digital world with a Head-Mounted Display (HMD) in an immersive, embodied way. In combination with current tools for 3D documentation and modelling, and software for creating interactive virtual worlds, VR can play an important role in the conservation and visualisation of cultural heritage (CH) for museums, educational institutions and other cultural areas. Game engines offer tools for interactive 3D visualisation of CH objects, which makes a new form of knowledge transfer possible with the direct participation of users in the virtual world. However, to ensure smooth and optimal real-time visualisation of the data in the HMD, VR applications should run at 90 frames per second. This frame rate depends on several criteria, including the amount of data and the number of dynamic objects. In this contribution, the performance of a VR application has been investigated using different digital 3D models of the fortress Al Zubarah in Qatar at various resolutions. We demonstrate how the amount of data and the hardware equipment influence real-time performance, and that developers of VR applications should find a compromise between the amount of data and the available computer hardware to guarantee smooth real-time visualisation at approx. 90 fps (frames per second). CAD models therefore offer better performance for real-time VR visualisation than meshed models, owing to their significantly reduced data volume.
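The 90 fps requirement amounts to a fixed per-frame rendering budget, which is worth making explicit. A minimal sketch of that arithmetic (names are illustrative; real engines report frame timings through their own profilers):

```python
def frame_budget_ms(target_fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / target_fps

def meets_target(frame_time_ms: float, target_fps: float = 90.0) -> bool:
    """True if a measured frame time fits the budget for target_fps."""
    return frame_time_ms <= frame_budget_ms(target_fps)
```

At 90 fps the budget is about 11.1 ms per frame; a scene whose draw time exceeds that (e.g. an unsimplified meshed model) will drop frames in the HMD.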


1999 ◽  
Vol 8 (4) ◽  
pp. 462-468 ◽  
Author(s):  
Giuseppe Riva

Virtual reality (VR) is usually described by the media as a particular collection of technological hardware: a computer capable of 3-D real-time animation, a head-mounted display, and data gloves equipped with one or more position trackers. However, this focus on technology is somewhat disappointing for communication researchers and VR designers. To overcome this limitation, this paper describes VR as a communication tool: a communication medium in the case of multiuser VR and a communication interface in single-user VR. The consequences of this approach for the design and the development of VR systems are presented, together with the methodological and technical implications for the study of interactive communication via computers.


Author(s):  
Daniel Lanzoni ◽  
Andrea Vitali ◽  
Daniele Regazzoni ◽  
Caterina Rizzi

Abstract The research work presents a preliminary study to create a virtual reality platform for the medical assessment of extrapersonal spatial neglect, a syndrome, possibly caused by cerebral lesions, that affects a person's awareness of one hemi-space. Nowadays, extrapersonal neglect is assessed using real objects positioned in the space around the patient, with poor repeatability and limited data gathering. The aim of this research work is therefore to introduce a virtual reality solution based on consumer technology for the assessment of extrapersonal neglect. Starting from the needs of the medical personnel involved, an online serious-game platform has been developed that permits a test and a real-time evaluation based on objective data tracked by the exploited technologies, i.e. an HTC Vive Pro head-mounted display and ad hoc IT solutions. The test is based on a virtual environment composed of a table on which twenty objects are placed, ten on the right side and ten on the left. The whole 3D virtual environment has been developed with low-cost and free development tools such as Unity and Blender. Interaction with the virtual environment is based on voice-recognition technology: the patient interacts with the application by pronouncing the name of each object aloud. The VR application has been developed according to an online-gaming software architecture, which permits sharing the 3D scene over a Wi-Fi hotspot network and allows data to be sent and received between the doctor's laptop and the VR system used by the patient on another laptop. Through his/her personal computer, the therapist sees a faithful real-time replica of the test performed by the patient, giving fast feedback on the patient's field-of-view orientation during the evaluation of the 3D objects.
A preliminary test has been carried out to evaluate the ease of use of the developed VR platform for medical personnel. The large amount of recorded data, and the possibility of managing the selection of objects when voice commands are not correctly interpreted, were greatly appreciated. Reviewing a performed test gives doctors the possibility of objectively reconstructing a patient's improvement over the whole rehabilitation process. Medical feedback highlighted that the developed prototype can already be tested with patients, and a procedure for enrolling a group of patients has therefore been planned. Finally, future tests have been planned to compare the developed solution with the Catherine Bergego Scale with a view to future standardisation.
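With ten objects per side of the virtual table, comparing how many objects the patient names on each side gives an objective asymmetry measure. A minimal sketch of such a laterality score (an illustrative metric, not the authors' scoring procedure):

```python
def neglect_scores(named_sides, per_side=10):
    """Proportion of objects named on each side of the virtual table.

    named_sides: iterable of 'left' / 'right' labels, one per object the
    patient named aloud. Returns (left_score, right_score) in [0, 1];
    a marked left-right asymmetry is the pattern of interest in neglect.
    """
    left = sum(1 for s in named_sides if s == 'left')
    right = sum(1 for s in named_sides if s == 'right')
    return left / per_side, right / per_side
```

Because every selection is logged, such scores can be recomputed over the whole rehabilitation period to reconstruct a patient's improvement objectively.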


2020 ◽  
Author(s):  
V. Gaveau ◽  
A. Coudert ◽  
R. Salemme ◽  
E. Koun ◽  
C. Desoche ◽  
...  

Abstract In everyday life, localizing a sound source in the free field entails more than the sole extraction of monaural and binaural auditory cues to define its location in three dimensions (azimuth, elevation and distance). In spatial hearing, we also take into account all the available visual information (e.g., cues to sound position, cues to the structure of the environment), and we resolve perceptual ambiguities through active listening behavior, exploring the auditory environment with head and/or body movements. Here we introduce a novel approach to sound localization in 3D named SPHERE (European patent no. WO2017203028A1), which exploits a commercially available virtual reality head-mounted display system with real-time kinematic tracking to combine all of these elements (controlled positioning of a real sound source and recording of participants' responses in 3D, controlled visual stimulation, and active listening behavior). We show that SPHERE allows accurate sampling of the 3D spatial hearing abilities of normal-hearing adults, and that it can detect and quantify the contribution of active listening. Specifically, comparing static and free head motion during sound emission, we found improved sound-localization accuracy and precision with head motion. By combining visual virtual reality, real-time kinematic tracking and real-sound delivery, we have achieved a novel approach to the study of spatial hearing, with the potential to capture real-life behaviors under laboratory conditions. Our approach also paves the way for clinical and industrial applications that will leverage the full potential of the active listening and multisensory stimulation intrinsic to the SPHERE approach for rehabilitation and product assessment.
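Sampling 3D spatial hearing requires expressing the tracked source and response positions in the azimuth/elevation/distance frame. A minimal conversion sketch, assuming one common listener-centred convention (x forward, y left, z up; the paper's exact convention may differ):

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert a listener-centred 3-D position (metres) to
    (azimuth in degrees, elevation in degrees, distance).

    Azimuth 0 is straight ahead, positive to the left;
    elevation 0 is the horizontal plane, positive upward.
    """
    dist = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.asin(z / dist))
    return azimuth, elevation, dist
```

Localization error can then be reported per dimension, e.g. as the angular difference in azimuth and elevation plus the distance error between the real source and the participant's response.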


2020 ◽  
Vol 6 (3) ◽  
pp. 127-130
Author(s):  
Max B. Schäfer ◽  
Kent W. Stewart ◽  
Nico Lösch ◽  
Peter P. Pott

Abstract Access to systems for robot-assisted surgery is limited due to high costs. To enable widespread use, numerous issues have to be addressed to improve and/or simplify their components. Current systems commonly use universal linkage-based input devices, and only a few application-oriented and specialized designs are in use. A versatile virtual reality controller is proposed as an alternative input device for the control of a seven-degree-of-freedom articulated robotic arm. The real-time capabilities of the setup, which replicates a system for robot-assisted teleoperated surgery, are investigated to assess its suitability. Image-based assessment showed a considerable system latency of 81.7 ± 27.7 ms. However, due to its versatility, the virtual reality controller is a promising alternative to current input devices for research on medical telemanipulation systems.
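An image-based latency assessment typically compares a recorded input motion trace against the robot's replicated output motion, and the lag can be estimated by cross-correlation. A generic sketch of that estimation (not the authors' exact measurement procedure):

```python
import numpy as np

def estimate_latency_ms(input_pos, output_pos, dt_ms):
    """Estimate input-to-output latency as the lag (in ms) that best
    aligns two uniformly sampled motion traces, via cross-correlation.

    input_pos / output_pos: 1-D arrays, e.g. a marker position per video
    frame; dt_ms: sampling interval in milliseconds.
    """
    a = input_pos - np.mean(input_pos)
    b = output_pos - np.mean(output_pos)
    corr = np.correlate(b, a, mode='full')
    lag = np.argmax(corr) - (len(a) - 1)   # samples by which b lags a
    return lag * dt_ms
```

With footage captured at a known frame rate, `dt_ms` is simply 1000 divided by that rate, so the resolution of the estimate is one frame interval.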


2021 ◽  
Vol 11 (7) ◽  
pp. 3090
Author(s):  
Sangwook Yoo ◽  
Cheongho Lee ◽  
Seongah Chin

To experience a real soap bubble show, materials and tools are required, as are skilled performers to produce the show. In a virtual space, however, where spatial and temporal constraints do not exist, bubble art can be performed without real materials and tools while still giving a sense of immersion. The realistic rendering of soap bubbles is therefore an interesting topic for virtual reality (VR), yet current VR soap bubbles fall short of users' high expectations. In this study, we propose a physically based approach that reproduces the shape of a bubble by calculating the measured parameters required for bubble modeling together with the physical motion of bubbles. In addition, we applied changes in the flow of the soap-bubble surface, measured in practice, to the VR rendering. To improve the user experience, the bubble show is presented in a VR HMD (Head-Mounted Display) environment.
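A physically based bubble appearance ultimately rests on thin-film interference: the reflectance at each wavelength depends on the film's thickness. A minimal sketch of the classic two-beam approximation (an assumption for illustration; the abstract does not specify the authors' measured-parameter model):

```python
import math

def film_reflectance(thickness_nm, wavelength_nm, n_film=1.33, cos_t=1.0):
    """Relative reflectance of a thin soap film for one wavelength,
    from two-beam interference with the half-wave phase shift at the
    first surface. Returns a value in [0, 1]; it tends to 0 for a
    vanishing film, matching the 'black film' seen before a bubble pops.
    """
    phase = 2.0 * math.pi * n_film * thickness_nm * cos_t / wavelength_nm
    return math.sin(phase) ** 2
```

Evaluating this per colour channel as thickness varies over the bubble surface yields the familiar swirling iridescence; reflectance peaks when the optical thickness equals a quarter wavelength.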


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4663
Author(s):  
Janaina Cavalcanti ◽  
Victor Valls ◽  
Manuel Contero ◽  
David Fonseca

An effective warning attracts attention, elicits knowledge, and enables compliance behavior. Game mechanics, which are directly linked to human desires, stand out as training, evaluation, and improvement tools. Immersive virtual reality (VR) facilitates training without risk to participants, makes it possible to evaluate the impact of an incorrect action or decision, and creates a smart training environment. The present study analyzes the user experience in a gamified virtual risk environment using the HTC Vive head-mounted display. The game, developed in the Unreal game engine, consisted of a walk-through maze containing evident dangers and different signaling variables while user action data were recorded. To determine which aspects provide better interaction, experience, perception and memory, three warning configurations (dynamic, static and smart) and two levels of danger (low and high) were presented. To properly assess the impact of the experience, we conducted a survey on personality and knowledge before and after the game, followed by a qualitative bipolar laddering assessment that was compared with the data recorded during the game. The findings indicate that when users are engaged in VR, they tend to test the consequences of their actions rather than maintaining safety. The results also reveal that textual signal variables are not accessed when users face the stress factor of time. Progress is needed in implementing new technologies for warnings and advance notifications to improve the evaluation of human behavior in high-risk virtual environments.


2021 ◽  
pp. 104687812110082
Author(s):  
Omamah Almousa ◽  
Ruby Zhang ◽  
Meghan Dimma ◽  
Jieming Yao ◽  
Arden Allen ◽  
...  

Objective. Although simulation-based medical education is fundamental for the acquisition and maintenance of knowledge and skills, simulators are often located in urban centers and are not easily accessible due to cost, time, and geographic constraints. Our objective is to develop a proof-of-concept prototype using virtual reality (VR) technology for clinical telesimulation training, to facilitate access and global academic collaboration. Methodology. Our project is a VR-based system that uses the Oculus Quest as a standalone, portable, and wireless head-mounted device, along with a digital platform to deliver immersive clinical simulation sessions. An instructor's control panel (ICP) application is designed to create VR clinical scenarios remotely, live-stream sessions, communicate with learners, and control VR clinical training in real time. Results. The Virtual Clinical Simulation (VCS) system offers realistic clinical training in a virtual space that mimics hospital environments. The VR clinical scenarios are customizable to training needs, with high-fidelity, lifelike characters designed to deliver an interactive and immersive learning experience. The real-time connection and live stream between the ICP and the VR training system enable interactive academic learning and facilitate access to telesimulation training. Conclusions. The VCS system provides innovative solutions to major challenges associated with conventional simulation training, such as access, cost, personnel, and curriculum. It facilitates the delivery of academic and interactive clinical training similar to real-life settings. Tele-clinical simulation systems like VCS support the necessary academic-community partnerships, as well as a global education network between resource-rich and low-income countries.

