Head-Mounted Display
Recently Published Documents





2022 · Vol 19 (1) · pp. 1-18
Björn Blissing, Fredrik Bruzelius, Olle Eriksson

Driving simulators are established tools in automotive development and research. Most simulators use either monitors or projectors as their primary display system, but the emergence of a new generation of head-mounted displays has triggered interest in using these as the primary display type. The general benefits and drawbacks of head-mounted displays are well researched, but their effect on driving behavior in a simulator has not been sufficiently quantified. This article presents a study of differences in driving behavior between projector-based graphics and a head-mounted display in a large dynamic driving simulator. The study selected five specific driving maneuvers suspected of affecting driving behavior differently depending on the choice of display technology. Some of these maneuvers were chosen to reveal changes in lateral and longitudinal driving behavior; others were picked for their ability to highlight the benefits and drawbacks of head-mounted displays in a driving context. The results show only minor differences in lateral and longitudinal driver behavior between projectors and a head-mounted display. The most noticeable difference, in favor of projectors, appeared when display resolution was critical to the driving task. The choice of display type affected neither simulator sickness nor the realism rated by the subjects.

2022
Fiona Zisch, Coco Newton, Antoine Coutrot, Maria Murcia-Lopez, Anisa Motala

Boundaries define regions of space and are integral to episodic memories. The impact of boundaries on spatial memory and neural representations of space has been extensively studied in freely moving rodents, but less is known in humans, and many prior studies have employed desktop virtual reality (VR), which lacks the body-based self-motion cues of the physical world, diminishing the potentially strong input from path integration to spatial memory. We replicated a desktop-VR study testing the impact of boundaries on spatial memory (Hartley et al., 2004) in a physical room (2.4 m × 2.4 m, 2 m tall) by having participants (N = 27) learn the location of a circular stool and then, after a short delay, replace it where they thought they had found it. During the delay, the wall boundaries were either expanded or contracted. We compared performance to groups of participants undergoing the same procedure in a laser-scanned replica in both desktop VR (N = 44) and freely walking head-mounted display (HMD) VR (N = 39) environments. Performance was measured as goodness of fit between the spatial distributions of group responses and seven modelled distributions that prioritised different metrics based on boundary geometry or walking paths to estimate the stool location. The best-fitting model was a weighted linear combination of all the geometric spatial models, but an individual model derived from place-cell firing in Hartley et al. (2004) also fit well. High levels of disorientation in all three environments prevented detailed analysis of the contribution of path integration. We found identical model fits across the three environments, though desktop VR and HMD-VR appeared more consistent in the spatial distributions of group responses than the physical environment and displayed known variations in virtual depth perception. Thus, while human spatial representation appears differentially influenced by environmental boundaries, the influence is similar across virtual and physical environments. Despite differences in body-based cue availability, desktop and HMD-VR offer a good and interchangeable approximation for examining human spatial memory in small-scale physical environments.
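The weighted-combination fit described above can be sketched with ordinary least squares; the candidate model maps and weights below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 3 candidate model distributions over a 10x10 spatial grid,
# each a normalized probability map predicting where responses should fall.
models = rng.random((3, 100))
models /= models.sum(axis=1, keepdims=True)

# Synthetic "observed" group response distribution: a known mix of the models.
true_w = np.array([0.5, 0.3, 0.2])
observed = true_w @ models

# Fit combination weights by least squares, then renormalize to sum to 1.
w, *_ = np.linalg.lstsq(models.T, observed, rcond=None)
w = np.clip(w, 0, None)
w /= w.sum()
```

Because the synthetic observation is an exact mixture, the recovered weights match `true_w`; on real response distributions the residual would quantify goodness of fit.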

Cigdem Uz-Bilgin, Meredith Thompson, Eric Klopfer

Abstract
A key affordance of virtual reality is the capability of immersive VR to prompt spatial presence resulting from the stereoscopic lenses in the head-mounted display (HMD). We investigated the effect of a stereoscopic view of a game, Cellverse, on users' perceived spatial presence, knowledge of cells, and learning across three levels of spatial knowledge: route, landmark, and survey knowledge. Fifty-one participants played the game using the same game controllers but with different views; 28 had a stereoscopic view (HMD) and 23 had a non-stereoscopic view (computer monitor). Participants explored a diseased cell for clues to diagnose the disease type and recommend a therapy. We gathered surveys, drawings, and spatial tasks conducted in the game environment to gauge learning. Participants' spatial knowledge of the cell environment and knowledge of cell concepts improved after gameplay in both conditions. Spatial presence scores in the stereoscopic condition were higher than in the non-stereoscopic condition with a large effect size; however, there was no significant difference in levels of spatial knowledge between the two groups. Almost all drawings showed a change in cell knowledge, yet some participants changed only in spatial knowledge of the cell, and some changed in both cell knowledge and spatial knowledge. The evidence suggests that a stereoscopic view has a significant effect on users' experience of spatial presence, but that increased presence does not directly translate into spatial learning.

Denis Bienroth, Hieu T. Nim, Dimitar Garkov, Karsten Klein, Sabrina Jaeger-Honz

Abstract
Spatially resolved transcriptomics is an emerging class of high-throughput technologies that enable biologists to systematically investigate the expression of genes along with spatial information. Upon data acquisition, one major hurdle is the subsequent interpretation and visualization of the datasets. To address this challenge, VR-Cardiomics is presented, a novel data visualization system with interactive functionalities designed to help biologists interpret spatially resolved transcriptomic datasets. By implementing the system in two separate immersive environments, fish-tank virtual reality (FTVR) and head-mounted display virtual reality (HMD-VR), biologists can interact with the data in ways not previously possible, such as visually exploring the gene expression patterns of an organ and comparing genes based on their 3D expression profiles. Further, a biologist-driven use case is presented, in which the immersive environments help biologists explore and compare the heart expression profiles of different genes.

2022 · pp. 155335062110689
Shotaro Okachi, Takayasu Ito, Kazuhide Sato, Shingo Iwano, Yuka Shinohara

Background/need. The increase in reference images and information during bronchoscopy using virtual bronchoscopic navigation (VBN) and fluoroscopy has created a potential need for support using a head-mounted display (HMD), because bronchoscopists find it difficult to view displays that are at a distance from them while turning their head and body in various directions. Methodology and device description. The binocular see-through Moverio BT-35E Smart Glasses can be connected via a high-definition multimedia interface and have a 720p high-definition display. We developed a system that converts fluoroscopic (live and reference), VBN, and bronchoscopic image signals through a converter and displays them on the Moverio BT-35E. Preliminary results. We performed a virtual bronchoscopy-guided transbronchial biopsy simulation using the system. Four experienced pulmonologists each performed a simulated bronchoscopy of 5 cases with the Moverio BT-35E glasses, using a bronchoscopy training model. For all procedures, the bronchoscope was advanced successfully into the target bronchus according to the VBN image. None of the operators reported eye or body fatigue during or after the procedure. Current status. This small-scale simulation study suggests the feasibility of using an HMD during bronchoscopy. For clinical use, the safety and usefulness of the system must be evaluated in larger clinical trials.

Mara Kaufeld, Katharina De Coninck, Jennifer Schmidt, Heiko Hecht

Abstract
Visually induced motion sickness (VIMS) is a common side effect of exposure to virtual reality (VR). Its unpleasant symptoms may limit the acceptance of VR technologies for training or clinical purposes. Mechanical stimulation of the mastoid and diverting attention to pleasant stimuli, such as odors or music, have been found to ameliorate VIMS. Chewing gum combines both in an easy-to-administer fashion and should thus be an effective countermeasure against VIMS. Our study investigated whether gustatory-motor stimulation by chewing gum leads to a reduction of VIMS symptoms. Seventy-seven subjects were assigned to three experimental groups (control, peppermint gum, and ginger gum) and completed a 15-min virtual helicopter flight using a VR head-mounted display. Before and after VR exposure, we assessed VIMS with the Simulator Sickness Questionnaire (SSQ), and during the virtual flight once every minute with the Fast Motion Sickness Scale (FMS). Chewing gum (peppermint gum: M = 2.44, SD = 2.67; ginger gum: M = 2.57, SD = 3.30) reduced the peak FMS scores by 2.05 (SE = 0.76) points compared with the control group (M = 4.56, SD = 3.52), p < 0.01, d = 0.65. Additionally, taste ratings correlated slightly negatively with both the SSQ and the peak FMS scores, suggesting that a pleasant taste of the chewing gum is associated with less VIMS. Thus, chewing gum may be useful as an affordable, accepted, and easy-to-access way to mitigate VIMS in numerous applications such as education or training. Possible mechanisms behind the effect are discussed.
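The reported effect size can be roughly reproduced from the summary statistics above, assuming equal group sizes when pooling (the per-group n is not given in the abstract):

```python
import math

def cohens_d(mean_a, sd_a, mean_b, sd_b):
    """Cohen's d for two groups, assuming equal group sizes (pooled SD)."""
    pooled_sd = math.sqrt((sd_a**2 + sd_b**2) / 2)
    return (mean_a - mean_b) / pooled_sd

# Combined gum groups vs. control, using the reported peak FMS statistics.
gum_mean = (2.44 + 2.57) / 2
gum_sd = math.sqrt((2.67**2 + 3.30**2) / 2)   # rough pooled SD of the two gum groups
d = cohens_d(4.56, 3.52, gum_mean, gum_sd)
```

With these numbers the estimate lands near the reported d = 0.65; the small discrepancy would come from the actual (unequal) group sizes.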

2022
Jonathan Kelly, Taylor Doty, Morgan Ambourn, Lucia Cherep

Distances in virtual environments (VEs) viewed on a head-mounted display (HMD) are typically underperceived relative to the intended distance. This paper presents an experiment comparing perceived egocentric distance in a real environment with that in a matched VE presented in the Oculus Quest and Oculus Quest 2. Participants made verbal judgments and blind walking judgments to an object on the ground. Both the Quest and Quest 2 produced underperception compared to the real environment. Verbal judgments in the VE were 86% and 79% of real world judgments in the Quest and Quest 2, respectively. Blind walking judgments were 78% and 79% of real world judgments in the Quest and Quest 2, respectively. This project shows that significant underperception of distance persists even in modern HMDs.
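The percentages above are VE judgments expressed as a fraction of the matched real-world judgments; a trivial sketch with hypothetical distances (in meters, not the study's data):

```python
def underperception(ve_judgment, real_judgment):
    """Perceived VE distance as a fraction of the matched real-world judgment."""
    return ve_judgment / real_judgment

# Hypothetical means illustrating the 78% blind-walking ratio reported for the Quest.
ratio = underperception(3.9, 5.0)   # 3.9 m walked in VE vs. 5.0 m in the real room
```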

2022 · Vol 12
Jianlan Wen, Yuming Piao

African literature has long played a major role in changing and shaping perceptions about African people and their way of life. Unlike Western cultures that are associated with advanced forms of writing, African literature is oral in nature, meaning it has to be recited and even performed. Although Africa has an old tribal culture, African philosophy remains a new and unfamiliar idea to many. The problem of the “universality” of African philosophy actually refers to the question of whether Africa has a philosophy in the Western sense; regardless, the philosophy bred by Africa’s native culture must be acknowledged. Therefore, a human–computer interaction-oriented (HCI-oriented) method is proposed to appreciate African literature and African philosophy. To begin with, a physical tablet-aid object is designed, and a depth camera is used to track the user’s hand and the tablet-aid and map them to the virtual scene. Then, a tactile redirection method is proposed to meet the user’s requirement of tactile consistency in a head-mounted display virtual reality environment. Finally, electroencephalogram (EEG) emotion recognition, based on convolutional neural networks with multiscale convolution kernels, is proposed to appreciate the reflection of African philosophy in African literature. The experimental results show that the proposed method provides strong immersion and a good interactive experience in navigation, selection, and manipulation. The proposed HCI method is not only easy to use but also improves interaction efficiency and accuracy during appreciation. In addition, the EEG emotion recognition experiments show that the classification accuracy with 33 channels is 90.63%, close to the accuracy obtained with all channels, and the proposed algorithm outperforms three baselines in classification accuracy.
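The multiscale-kernel idea named above applies convolutions of several widths in parallel to the same signal and stacks the resulting feature maps; a minimal NumPy sketch with fixed averaging kernels standing in for the learned ones:

```python
import numpy as np

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Convolve one channel with several kernel widths and stack the feature
    maps -- the multiscale structure, minus the learned weights and nonlinearity."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k           # placeholder for a learned kernel
        feats.append(np.convolve(signal, kernel, mode="same"))
    return np.stack(feats)                # shape: (num_scales, len(signal))

x = np.sin(np.linspace(0, 10, 256))       # stand-in for one EEG channel
f = multiscale_features(x)
```

In a real network each scale would carry many learned kernels and the stacked maps would feed subsequent convolutional and classification layers.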

2022 · Vol 8 (1) · pp. 7
Leah Groves, Natalie Li, Terry M. Peters, Elvis C. S. Chen

While ultrasound (US) guidance has been used during central venous catheterization to reduce complications, including the puncturing of arteries, the rate of such problems remains non-negligible. To further reduce complication rates, mixed-reality systems have been proposed as part of the user interface for such procedures. We demonstrate the use of a surgical navigation system that renders a calibrated US image, together with the needle and its trajectory, in a common frame of reference. We compare the effectiveness of this system, with images rendered either on a planar monitor or within a head-mounted display (HMD), to the standard-of-care US-only approach, via a phantom-based user study that recruited 31 expert clinicians and 20 medical students. These users performed needle insertions into a phantom under the three modes of visualization. Success rates were significantly improved under HMD guidance compared to US guidance, for both expert clinicians (94% vs. 70%) and medical students (70% vs. 25%). Users positioned their needle more consistently close to the center of the vessel’s lumen under HMD guidance than under US guidance. The clinicians’ performance with the monitor system was comparable to US-only guidance, with no significant difference observed on any metric. The results suggest that using an HMD to align the clinician’s visual and motor fields promotes successful needle guidance, highlighting the importance of continued HMD-guidance research.
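The significance of success-rate differences like these is commonly checked with a two-proportion z-test; the counts below are hypothetical integers consistent with the students' reported 70% vs. 25% rates, not the study's raw data:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z statistic using the pooled success rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 14/20 successes under HMD guidance vs. 5/20 under US guidance.
z = two_proportion_z(14, 20, 5, 20)       # z above 1.96 implies p < 0.05 (two-sided)
```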

2022 · Vol 27 · pp. 48-69
Sahar Y. Ghanem

As the industry transitions towards incorporating Building Information Modeling (BIM) in construction projects, adequately qualified students and specialists are essential to this transition, and it has become apparent that construction management programs need to integrate BIM into the curriculum. By bringing Virtual Reality (VR) technology to BIM, VR-BIM can transform the architectural, engineering, and construction (AEC) industry, and three-dimensional (3D) immersive learning can be a valuable platform to enhance students' ability to recognize a variety of building principles. This study presents a methodology for implementing VR-BIM in a construction management undergraduate program. Based on a literature review, an in-depth analysis of the program, and accreditation requirements, VR-BIM will be implemented throughout the curriculum by combining a stand-alone class with integration into existing courses. The challenges that may face a program planning to implement VR-BIM are discussed, and a few solutions are proposed. A lab classroom layout appropriate for the applications is designed so that it can be adjusted into several configurations to accommodate all learning styles and objectives. A comparison between different Head-Mounted Display (HMD) headsets is carried out to choose the appropriate equipment for the lab.
