Time Sequential Motion-to-Photon Latency Measurement System for Virtual Reality Head-Mounted Displays

Electronics ◽  
2018 ◽  
Vol 7 (9) ◽  
pp. 171 ◽  
Author(s):  
Song-Woo Choi ◽  
Siyeong Lee ◽  
Min-Woo Seo ◽  
Suk-Ju Kang

Because interest in virtual reality (VR) has increased recently, studies on head-mounted displays (HMDs) have been actively conducted. However, HMDs cause motion sickness and dizziness, and the factor that most affects the user is motion-to-photon latency. Equipment for measuring and quantifying this latency is therefore essential. This paper proposes a novel system to measure and visualize time-sequential motion-to-photon latency in real time for HMDs. Conventional measurement methods can capture the latency only at the beginning of a physical motion. In contrast, the proposed method measures the latency in real time at every input time. Specifically, it encodes the rotation data in the intensity levels of pixels in the measurement area, so motion-to-photon latency data can be obtained over the entire temporal range. Concurrently, encoders measure the actual motion from a motion generator designed to control the physical posture of the HMD device. The proposed system compares the two motions, from the encoders and from the output image on the display, and calculates the motion-to-photon latency at every time point. Experiments show that the latency ranges from a minimum of 46.55 ms to a maximum of 154.63 ms depending on the workload level.

2020 ◽  
Vol 10 (7) ◽  
pp. 2248
Author(s):  
Syed Hammad Hussain Shah ◽  
Kyungjin Han ◽  
Jong Weon Lee

We propose a novel authoring and viewing system for generating multiple experiences from a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within the 360° environment than a normal video, and there can be multiple interesting areas within a 360° frame at the same time. Because of the narrow field of view of virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system aims to generate multiple experiences based on interesting information in different regions of a 360° video and to transfer these experiences efficiently to prospective users. The proposed system generates experiences using two approaches: (1) recording the user's experience while the user watches a panoramic video through a virtual reality head-mounted display, and (2) tracking an arbitrary interesting object in a 360° video selected by the user. For tracking an arbitrary object, we developed a pipeline around an existing simple object tracker to adapt it to 360° videos. This tracking algorithm runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, no existing system can generate a variety of experiences from a single 360° video and enable the viewer to watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus-assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system and a detailed user study were performed to assess its applicability. The evaluation showed that a single piece of 360° multimedia content can generate multiple experiences that transfer among users. Moreover, sharing the 360° experiences enabled viewers to watch multiple interesting pieces of content with less effort.
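A common way to adapt a planar tracker to 360° footage, and one plausible reading of the pipeline described above, is to reproject a small perspective patch centered on the tracked object out of each equirectangular frame before handing it to the tracker. The helper below is a hedged sketch of that reprojection; the function name, patch size, and nearest-neighbor sampling are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def equirect_patch(frame, yaw, pitch, fov_deg=60.0, size=128):
    """Reproject a perspective patch centered at (yaw, pitch) radians from an
    equirectangular frame, giving a planar tracker an undistorted view of the
    object (nearest-neighbor sampling for brevity)."""
    h, w = frame.shape[:2]
    f = (size / 2) / np.tan(np.radians(fov_deg) / 2)  # pinhole focal length
    xs, ys = np.meshgrid(np.arange(size) - size / 2, np.arange(size) - size / 2)
    # Unit ray directions in camera space for every patch pixel.
    dirs = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    x, y, z = dirs[..., 0], dirs[..., 1], dirs[..., 2]
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    y, z = cp * y - sp * z, sp * y + cp * z    # rotate by pitch (x-axis)
    x, z = cy * x + sy * z, -sy * x + cy * z   # rotate by yaw (y-axis)
    # Convert rays to longitude/latitude, then to source pixel coordinates.
    lon, lat = np.arctan2(x, z), np.arcsin(np.clip(y, -1.0, 1.0))
    u = ((lon / (2 * np.pi) + 0.5) * w).astype(int) % w
    v = ((lat / np.pi + 0.5) * h).astype(int) % h
    return frame[v, u]
```

Running the off-the-shelf tracker on such patches, then converting the tracked center back to yaw/pitch for the next frame, keeps the tracker away from the heavy distortion near the poles of the equirectangular projection.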


Author(s):  
Alejandro Rosa-Pujazón ◽  
Isabel Barbancho ◽  
Lorenzo J. Tardón ◽  
Ana M. Barbancho

In this paper, an implementation of a virtual-reality-based application for drum-kit simulation is presented. The system tracks user motion with a Kinect camera sensor and recognizes and detects user-generated drum-hitting gestures in real time. To compensate for the effects of latency in the sensing stage and provide real-time interaction, the system uses a gesture-detection model to predict user movements. The paper discusses two machine-learning-based solutions to this problem: the first is based on the analysis of velocity and acceleration peaks; the second is based on Wiener filtering. The gesture detector was tested and integrated into a full implementation of a drum-kit simulator capable of discriminating among 3, 5, or 7 different drum sounds. An experiment with 14 participants was conducted to assess the system's viability and its impact on user experience and satisfaction.
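The Wiener-filtering approach mentioned above amounts to fitting a least-squares linear predictor over recent samples so that hand position can be extrapolated a few frames ahead, masking sensor latency. The sketch below shows the idea for a single coordinate; the filter order and prediction horizon are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fit_wiener_predictor(signal, order=5, horizon=3):
    """Fit least-squares (Wiener) FIR predictor coefficients w such that
    signal[t + horizon] ~= w . signal[t - order + 1 .. t]."""
    X = np.array([signal[i:i + order]
                  for i in range(len(signal) - order - horizon + 1)])
    y = signal[order + horizon - 1:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimum mean-square solution
    return w

def predict_ahead(recent, w):
    """Extrapolate `horizon` samples ahead from the last `order` samples."""
    return float(np.dot(recent, w))
```

In a tracking loop, the filter would be fit (or updated) on the recent trajectory of each joint and queried every frame, so that a drum hit is triggered when the *predicted* position crosses the drum surface rather than the latency-delayed measured one.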


Author(s):  
Michelle LaBrunda ◽  
Andrew LaBrunda

Virtual reality is a collection of technologies that enable people to use their senses to experience sensory input provided from a source other than the immediate environment. These events may occur in real time, can be a simulation, or can be completely fictional. Virtual reality (VR) has progressed beyond its military beginnings and is progressively making its way into people's daily lives. The most prevalent implementations of VR are found in modern entertainment such as computer games and IMAX (image maximum) theaters. VR has received little publicity but has enormous potential in the realm of medicine. Its utility is starting to be appreciated by the medical community, and it is slowly being adopted and implemented in the surgical, medical, and psychiatric specialties. Medical uses of VR are primarily directed toward the simulation of visual, audio, and tactile input. With the aid of VR, doctors will be able to perform specialized surgery on a patient from the other side of the world. Students are able to simulate and experience surgical procedures without compromising a patient's health. Finally, VR can heighten a doctor's senses and allow input that would otherwise be absent, such as relative bone positions and tissue temperature.


2021 ◽  
pp. 174702182110248
Author(s):  
Rémi Thériault ◽  
Jay A. Olson ◽  
Sonia A. Krol ◽  
Amir Raz

Perspective-taking, whether through imagination or virtual-reality interventions, seems to improve intergroup relations; however, which intervention leads to better outcomes remains unclear. This pre-registered study collected measures of empathy and race bias from 90 participants, split into one of three groups: embodied perspective-taking, mental perspective-taking, and a control group. We drew on virtual-reality technology alongside a Black confederate across all conditions. Only in the first group did participants exchange real-time viewpoints with the confederate and literally "see through the eyes of another." In the other two conditions, participants imagined a day in the life of the Black confederate or in their own life, respectively. Our findings show that, compared with the control group, the embodied perspective-taking group scored higher on empathy sub-components. However, neither perspective-taking intervention affected explicit or implicit race bias. Our study suggests that embodiment of an outgroup member can enhance empathy.


2011 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Roberto Cesar Cavalcante Vieira ◽  
Creto Vidal ◽  
Joaquim Bento Cavalcante-Neto

Three-dimensional virtual creatures are active actors in many types of applications today, such as virtual reality, games, and computer animation. The virtual actors encountered in those applications are very diverse but usually have humanlike behavior and facial expressions. This paper deals with the mapping of facial expressions between virtual characters, based on anthropometric proportions and geometric manipulations achieved by moving influence zones. The facial proportions of a base model are used to transfer expressions to any other model with similar global characteristics (if the base model is a human, for instance, the other models need to have two eyes, one nose, and one mouth). With this solution, it is possible to insert new virtual characters into real-time applications without going through the tedious process of customizing the characters' emotions.
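At its simplest, proportion-based expression transfer scales the per-vertex displacement of the base model's expression by the ratio of the target's facial proportions before applying it to the target's neutral face. The sketch below illustrates that core step with a single uniform scale; the function name, the single scalar proportion (e.g., inter-ocular distance), and the assumed landmark correspondence are simplifying assumptions, not the paper's full influence-zone method:

```python
import numpy as np

def transfer_expression(base_neutral, base_expr, target_neutral,
                        base_face_size, target_face_size):
    """Map an expression from a base model to a target model by scaling
    per-vertex displacements with the ratio of a facial proportion
    (e.g., inter-ocular distance). Arrays are (n_vertices, 3) with
    corresponding vertex order assumed."""
    scale = target_face_size / base_face_size
    displacement = (base_expr - base_neutral) * scale  # expression delta
    return target_neutral + displacement
```

The paper's influence zones would replace the single global `scale` with locally varying weights around facial features, but the transfer itself remains a scaled-displacement mapping of this shape.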


2020 ◽  
Vol 6 (3) ◽  
pp. 127-130
Author(s):  
Max B. Schäfer ◽  
Kent W. Stewart ◽  
Nico Lösch ◽  
Peter P. Pott

Abstract Access to systems for robot-assisted surgery is limited due to high costs. To enable widespread use, numerous issues have to be addressed to improve and/or simplify their components. Current systems commonly use universal linkage-based input devices, and only a few application-oriented and specialized designs are in use. A versatile virtual reality controller is proposed as an alternative input device for the control of a seven-degree-of-freedom articulated robotic arm. The real-time capabilities of the setup, which replicates a system for robot-assisted teleoperated surgery, are investigated to assess suitability. Image-based assessment showed a considerable system latency of 81.7 ± 27.7 ms. However, due to its versatility, the virtual reality controller is a promising alternative to current input devices for research on medical telemanipulation systems.
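Image-based latency assessment of this kind is typically done by timestamping each input command and finding the first video frame in which the robot visibly starts to move. The sketch below illustrates one way to do this with a simple frame-difference threshold; the function name, threshold value, and data layout are assumptions, not the authors' measurement protocol:

```python
import numpy as np

def estimate_latency_ms(command_times, frames, frame_times, thresh=5.0):
    """Image-based latency estimate: for each input command timestamp (s),
    find the first later video frame whose mean absolute difference from
    the previous frame exceeds `thresh`, and report mean/std of the delays."""
    frame_times = np.asarray(frame_times)
    diffs = np.array([np.abs(frames[i] - frames[i - 1]).mean()
                      for i in range(1, len(frames))])
    motion_times = frame_times[1:][diffs > thresh]  # frames where motion appears
    delays = []
    for t0 in command_times:
        later = motion_times[motion_times > t0]
        if later.size:
            delays.append((later[0] - t0) * 1000.0)  # seconds -> milliseconds
    return float(np.mean(delays)), float(np.std(delays))
```

Repeating this over many commanded motions yields a mean ± standard deviation figure of the same form as the 81.7 ± 27.7 ms reported above, with the camera's frame interval setting the resolution floor of the estimate.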


2021 ◽  
Author(s):  
Polona Caserman ◽  
Augusto Garcia-Agundez ◽  
Alvar Gámez Zerban ◽  
Stefan Göbel

Abstract Cybersickness (CS) is a term used to refer to symptoms, such as nausea, headache, and dizziness, that users experience during or after virtual reality immersion. Initially discovered in flight simulators, commercial virtual reality (VR) head-mounted displays (HMDs) of the current generation also seem to cause CS, albeit in a different manner and severity. The goal of this work is to summarize recent literature on CS with modern HMDs, to determine the specificities and profile of immersive-VR-caused CS, and to provide an outlook for future research areas. A systematic review was performed on the databases IEEE Xplore, PubMed, ACM, and Scopus from 2013 to 2019, and 49 publications were selected. The summary describes how different VR HMDs impact CS, how the nature of movement in VR HMDs contributes to CS, and how biosensors can be used to detect CS. The results of the meta-analysis show that although current-generation VR HMDs cause significantly less CS (p < 0.001), some symptoms remain as intense. Further results show that the nature of movement and, in particular, sensory mismatch as well as perceived motion have been the leading causes of CS. We suggest an outlook on future research, including the use of galvanic skin response to evaluate CS in combination with the gold standard (the Simulator Sickness Questionnaire, SSQ), as well as an update of the subjective evaluation scores of the SSQ.


2021 ◽  
Vol 11 (7) ◽  
pp. 3090
Author(s):  
Sangwook Yoo ◽  
Cheongho Lee ◽  
Seongah Chin

To experience a real soap-bubble show, materials and tools are required, as are skilled performers who produce the show. In a virtual space, however, where spatial and temporal constraints do not exist, bubble art can be performed without real materials and tools while still giving a sense of immersion. The realistic expression of soap bubbles is therefore an interesting topic for virtual reality (VR). However, the current rendering of VR soap bubbles does not satisfy users' high expectations. In this study, we therefore propose a physically based approach that reproduces the shape of a bubble by calculating the measured parameters required for bubble modeling and the physical motion of bubbles. In addition, we applied the change in the flow of the soap-bubble surface, measured in practice, to the VR rendering. To improve users' VR experience, we propose that they experience the bubble show in a VR HMD (head-mounted display) environment.
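The characteristic iridescence of a soap film comes from thin-film interference, a standard ingredient of physically based bubble rendering. The sketch below evaluates the classic two-beam reflectance formula at normal incidence and samples it at three wavelengths for an RGB tint; the wavelength choices and the two-beam approximation are illustrative textbook physics, not the authors' specific model:

```python
import math

def soap_film_reflectance(thickness_nm, wavelength_nm, n=1.33):
    """Two-beam thin-film interference at normal incidence: the wave
    reflected at the front surface interferes with the one reflected at
    the back surface (whose Fresnel coefficient is r2 = -r1), giving
    R = 4 r1^2 sin^2(2 pi n d / lambda)."""
    r1 = (1.0 - n) / (1.0 + n)  # air -> film Fresnel amplitude coefficient
    delta = 2.0 * math.pi * n * thickness_nm / wavelength_nm
    return 4.0 * r1 * r1 * math.sin(delta) ** 2

def film_rgb(thickness_nm):
    """Approximate RGB tint by sampling reflectance at three wavelengths."""
    return tuple(soap_film_reflectance(thickness_nm, w) for w in (610, 550, 465))
```

Because film thickness varies over the bubble surface as the liquid drains and flows, feeding a per-point thickness field into `film_rgb` reproduces the swirling color bands seen on real bubbles.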


2021 ◽  
pp. 104687812110082
Author(s):  
Omamah Almousa ◽  
Ruby Zhang ◽  
Meghan Dimma ◽  
Jieming Yao ◽  
Arden Allen ◽  
...  

Objective. Although simulation-based medical education is fundamental to the acquisition and maintenance of knowledge and skills, simulators are often located in urban centers and are not easily accessible due to cost, time, and geographic constraints. Our objective is to develop a proof-of-concept prototype using virtual reality (VR) technology for clinical telesimulation training, to facilitate access and global academic collaboration. Methodology. Our project is a VR-based system using the Oculus Quest as a standalone, portable, and wireless head-mounted device, along with a digital platform to deliver immersive clinical simulation sessions. The instructor's control panel (ICP) application is designed to create VR clinical scenarios remotely, live-stream sessions, communicate with learners, and control VR clinical training in real time. Results. The Virtual Clinical Simulation (VCS) system offers realistic clinical training in a virtual space that mimics hospital environments. The VR clinical scenarios are customizable to suit specific needs, with high-fidelity, lifelike characters designed to deliver an interactive and immersive learning experience. The real-time connection and live-streaming between the ICP and the VR training system enable interactive academic learning and facilitate access to telesimulation training. Conclusions. The VCS system provides innovative solutions to major challenges associated with conventional simulation training, such as access, cost, personnel, and curriculum. VCS facilitates the delivery of academic and interactive clinical training that is similar to real-life settings. Tele-clinical simulation systems like VCS foster necessary academic-community partnerships, as well as a global education network between resource-rich and low-income countries.


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii461-iii461
Author(s):  
Andrea Carai ◽  
Angela Mastronuzzi ◽  
Giovanna Stefania Colafati ◽  
Paul Voicu ◽  
Nicola Onorini ◽  
...  

Abstract Three-dimensional (3D) rendering of volumetric neuroimaging is increasingly being used to assist the surgical management of brain tumors. New technologies allowing immersive virtual reality (VR) visualization of the obtained models offer the opportunity to appreciate neuroanatomical details and the spatial relationship between the tumor and normal neuroanatomical structures to a level never seen before. We present our preliminary experience with the Surgical Theatre, a commercially available 3D VR system, in 60 consecutive neurosurgical oncology cases. 3D models were developed from volumetric CT scans and standard and advanced MR sequences. The system allows the loading of six different layers at the same time, with the possibility of modulating opacity and threshold in real time. The 3D VR system was used during preoperative planning, allowing a better definition of the surgical strategy. A tailored craniotomy and brain dissection can be simulated in advance and precisely performed in the OR by connecting the system to intraoperative neuronavigation. Smaller blood vessels are generally not included in the 3D rendering; however, real-time intraoperative threshold modulation of the 3D model assisted in their identification, improving surgical confidence and safety during the procedure. VR was also used offline, both before and after surgery, for case discussion within the neurosurgical team and during MDT discussion. Finally, 3D VR was used during informed consent, improving communication with families and young patients. 3D VR allows surgical strategies to be tailored to the individual patient, contributing to procedural safety and efficacy and to the global improvement of neurosurgical oncology care.

