A study on the impact of spatial frames of reference on human performance in virtual reality user interfaces

Author(s):
Marc Bernatchez
Jean-Marc Robert


Author(s):
Missie Smith
Kiran Bagalkotkar
Joseph L. Gabbard
David R. Large
Gary Burnett

Objective: We controlled participants’ glance behavior while they used head-down displays (HDDs) and head-up displays (HUDs) to isolate driving behavioral changes due to the use of different display types across different driving environments.
Background: Recently, HUD technology has been incorporated into vehicles, allowing drivers to, in theory, gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs with traditional displays on human performance show differences in both drivers’ visual attention and driving performance. Yet no studies have isolated glance from driving behaviors, which limits our ability to understand the cause of these differences and the resulting impact on display design.
Method: We developed a novel method to control visual attention in a driving simulator. Twenty experienced drivers sustained visual attention to in-vehicle HDDs and HUDs while driving in both a simple, straight, and empty roadway environment and a more realistic driving environment that included traffic and turns.
Results: In the realistic environment, but not the simpler one, we found evidence of differing driving behaviors between display conditions, even though participants’ glance behavior was similar.
Conclusion: Thus, the assumption that visual attention can be evaluated in the same way for different types of vehicle displays may be inaccurate. Differences between driving environments call into question the validity of testing HUDs in simplistic driving environments.
Application: As we move toward the integration of HUD user interfaces into vehicles, it is important that we develop new, sensitive assessment methods to ensure HUD interfaces are indeed safe for driving.


Author(s):  
Boon Yih Mah ◽  
Suzana Ab Rahim

The use of the internet for teaching and learning has become a global trend among education practitioners over recent decades. The integration of technology and media into Malaysian English as a Second Language (ESL) classrooms has altered the methods of English Language Teaching (ELT). In response to the impact of technology on ELT, the need for a supplementary instructional platform, and the limitations of the learning management system (LMS) in fostering second language (L2) writing skills, a web-based instructional tool was designed and developed based on a theoretical-and-pedagogical framework named Web-based Cognitive Writing Instruction (WeCWI). To determine the key concepts and identify the research gap, this study conducted a literature review through online searches on specific keywords, including “blog”, “Blogger”, “widget”, and “hyperlink”, in scholarly articles. Based on the review of the literature, Blogger was chosen for its on-screen customisable layout editing features, which can be embedded with web widgets and hypertext that share identical features. By looking into the relationship between perceptual learning preferences for perceived information and visual representations in iconic and symbolic views, the blogs can offer two different user interfaces, embedded with either web widgets or hypertext. The blog with web widgets appears in the graphical form of an iconic view, while the blog with hypertext displays only the textual form of a symbolic view, without visual references. By injecting web widgets and hypertext into the blogs, WeCWI attempts to offer a technologically enhanced ELT solution that addresses poor writing skills through better engagement while learning online, via learners’ preferred perceptual learning preferences.


Sensors
2021
Vol 21 (14)
pp. 4663
Author(s):
Janaina Cavalcanti
Victor Valls
Manuel Contero
David Fonseca

An effective warning attracts attention, elicits knowledge, and enables compliance behavior. Game mechanics, which are directly linked to human desires, stand out as tools for training, evaluation, and improvement. Immersive virtual reality (VR) facilitates training without risk to participants, evaluates the impact of an incorrect action or decision, and creates a smart training environment. The present study analyzes the user experience in a gamified virtual risk environment using the HTC Vive head-mounted display. The game was developed in the Unreal game engine and consisted of a walk-through maze composed of evident dangers and different signaling variables, while user action data were recorded. To determine which aspects provide better interaction, experience, perception, and memory, three different warning configurations (dynamic, static, and smart) and two different levels of danger (low and high) were presented. To properly assess the impact of the experience, we conducted a survey about personality and knowledge before and after using the game. We then took a qualitative approach, using questions in a bipolar laddering assessment whose answers were compared with the data recorded during the game. The findings indicate that when users are engaged in VR, they tend to test the consequences of their actions rather than maintain safety. The results also reveal that textual signal variables are not attended to when users face time pressure. Progress is needed in implementing new technologies for warnings and advance notifications to improve the evaluation of human behavior in high-risk virtual environments.


Electronics
2021
Vol 10 (9)
pp. 1069
Author(s):
Deyby Huamanchahua
Adriana Vargas-Martinez
Ricardo Ramirez-Mendoza

Exoskeletons are external structural mechanisms with joints and links that work in tandem with the user to increase, reinforce, or restore human performance. Virtual reality can be used to produce environments in which the intensity of practice and feedback on performance can be manipulated to provide tailored motor training. Can both technologies be combined and synchronized to reach better performance? This paper presents the kinematic analysis for synchronizing the pose (position and orientation) of an n-DoF upper-limb exoskeleton with a projected object in an immersive virtual reality environment using a VR headset. To achieve this goal, the exoskeletal mechanism is analyzed using Euler angles and the Pieper technique to obtain the equations that yield its orientation, forward, and inverse kinematic models. This paper extends the authors’ previous work by using an early-stage upper-limb exoskeleton prototype for the synchronization process.
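To make the kind of model involved concrete, below is a minimal sketch, not the paper’s implementation, of forward kinematics computed by chaining homogeneous transforms, which is what a Pieper-style analysis ultimately produces. The joint layout, link lengths, and 2-DoF planar arm here are illustrative assumptions; the prototype’s actual parameters are not given in the abstract.

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous transform: rotation about the z-axis by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def trans_x(a):
    """4x4 homogeneous transform: translation by a along the x-axis."""
    T = np.eye(4)
    T[0, 3] = a
    return T

def forward_kinematics(joint_angles, link_lengths):
    """Chain one rotation and one translation per joint.

    Returns the 4x4 end-effector pose in the base frame: the upper-left 3x3
    block is the orientation and the last column is the position, i.e. the
    quantities to be synchronized with the projected VR object.
    """
    T = np.eye(4)
    for theta, a in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans_x(a)
    return T

# Hypothetical example: two 0.3 m links, joints at 30 and 45 degrees.
pose = forward_kinematics(np.radians([30.0, 45.0]), [0.3, 0.3])
print(pose[:3, 3])  # end-effector position to send to the VR environment
```

The inverse problem, recovering joint angles from a desired pose, is what the Pieper technique addresses for mechanisms whose last three joint axes intersect at a point.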


2021
Vol 5 (EICS)
pp. 1-26
Author(s):
Carlos Bermejo
Lik Hang Lee
Paul Chojecki
David Przewozny
Pan Hui

The continued advancement of user interfaces has reached the era of virtual reality, which requires a better understanding of how users will interact with 3D buttons in mid-air. Although virtual reality offers high levels of expressiveness and can simulate everyday objects from the physical environment, the most fundamental issue in designing virtual buttons has been surprisingly overlooked. To this end, this paper presents four variants of virtual buttons along two design dimensions: key representation and multi-modal cues (audio, visual, haptic). We conduct two multi-metric assessments to evaluate the four virtual variants against baselines of physical variants. Our results indicate that 3D-lookalike buttons help users make more refined and subtle mid-air interactions (i.e., smaller press depth) when haptic cues are available, while users with 2D-lookalike buttons counterintuitively achieve better keystroke performance than with the 3D counterparts. We summarize the findings and, accordingly, suggest design choices for virtual reality buttons along the two proposed design dimensions.


2021
Vol 11 (1)
Author(s):
Géraldine Fauville
Anna C. M. Queiroz
Erika S. Woolsey
Jonathan W. Kelly
Jeremy N. Bailenson

Research about vection (illusory self-motion) has investigated a wide range of sensory cues and employed various methods and equipment, including the use of virtual reality (VR). However, there is currently no research in the field of vection on the impact of floating in water while experiencing VR. Aquatic immersion presents a new and interesting way to potentially enhance vection by reducing the conflicting sensory information that is usually experienced when standing or sitting on a stable surface. This study compares vection, visually induced motion sickness, and presence among participants experiencing VR while standing on the ground or floating in water. Results show that vection was significantly enhanced for participants in the Water condition, whose judgments of self-displacement were larger than those of participants in the Ground condition. No differences in visually induced motion sickness or presence were found between conditions. We discuss the implications of this new type of VR experience for the fields of VR and vection, along with future research questions that emerge from our findings.


Author(s):
Randall Spain
Jason Saville
Barry Lui
Donia Slack
Edward Hill
...

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design head-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters’ feedback on and reactions to the VR scenario and the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


Vision
2021
Vol 5 (2)
pp. 18
Author(s):
Olga Lukashova-Sanz
Siegfried Wahl
Thomas S. A. Wallis
Katharina Rifai

With rapidly developing technology, visual cues have become a powerful tool for deliberately guiding attention and affecting human performance. Using cues to manipulate attention introduces a trade-off between increased performance at cued locations and decreased performance at uncued locations. For visual cues designed to purposely direct users’ attention to be effective, it is important to know how manipulating cue properties affects attention. In this verification study, we addressed how varying cue complexity impacts the allocation of spatial endogenous covert attention in space and time. To vary cue complexity gradually, the discriminability of the cue was systematically modulated using a shape-based design. Performance was compared between attended and unattended locations in an orientation-discrimination task. We evaluated the additional temporal costs of processing a more complex cue by comparing performance at two different inter-stimulus intervals. In preliminary data, attention scaled with cue discriminability, even at supra-threshold levels of discriminability. Furthermore, individual cue-processing times partly impacted performance for the most complex, but not the simpler, cues. We conclude that, first, cue complexity as expressed by discriminability modulates endogenous covert attention at supra-threshold discriminability levels, with increasing benefits and decreasing costs; and second, that it is important to consider the temporal processing costs of complex visual cues.
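For reference, the cueing trade-off described above is commonly formalized in the attention literature as a benefit and a cost relative to a neutral baseline. This is a standard framing rather than notation from the paper, and it assumes a neutral (uncued) condition that the abstract does not mention:

```latex
\mathrm{benefit} = P(\text{correct} \mid \text{attended}) - P(\text{correct} \mid \text{neutral}), \qquad
\mathrm{cost} = P(\text{correct} \mid \text{neutral}) - P(\text{correct} \mid \text{unattended})
```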

