Indicators of Training Success in Virtual Reality Using Head and Eye Movements

Author(s): Joy Gisler, Johannes Schneider, Joshua Handali, Valentin Holzwarth, Christian Hirt, ...
2020
Author(s): David Harris, Mark Wilson, Tim Holmes, Toby de Burgh, Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


2017, Vol. 50 (3), pp. 1102-1115
Author(s): Nicole Eichert, David Peeters, Peter Hagoort

Author(s): Eunhee Chang, Hyun Taek Kim, Byounghyun Yoo

Abstract Cybersickness refers to a group of uncomfortable symptoms experienced in virtual reality (VR). Among several theories of cybersickness, the subjective vertical mismatch (SVM) theory focuses on an individual’s internal model, which is created and updated through past experiences. Although previous studies have attempted to provide experimental evidence for the theory, most approaches are limited to subjective measures or body sway. In this study, we aimed to demonstrate the SVM theory on the basis of participants’ eye movements and to investigate whether the subjective level of cybersickness can be predicted using eye-related measures. Twenty-six participants experienced a roller coaster VR scene while wearing a head-mounted display with eye tracking. We designed four experimental conditions by changing the orientation of the VR scene (upright vs. inverted) or the controllability of the participant’s body (unrestrained vs. restrained). The results indicated that participants reported more severe cybersickness when experiencing the upright VR content without controllability. Moreover, distinctive eye movements (e.g., fixation duration and the distance between the eye gaze and the object position sequence) were observed across the experimental conditions. On the basis of these results, we developed a regression model using eye-movement features and found that it can explain 34.8% of the total variance in cybersickness, a substantial improvement over previous work (4.2%). This study provides empirical data for the SVM theory using both subjective and eye-related measures. In particular, the results suggest that participants’ eye movements can serve as a significant index for predicting cybersickness when considering natural gaze behaviors during a VR experience.
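As a hedged illustration (not the authors' actual pipeline), a regression model of this kind can be fit with ordinary least squares and summarized by its explained variance (R²). The two features below stand in for eye-movement measures such as mean fixation duration and gaze-object distance; the data are synthetic:

```python
import numpy as np

def fit_eye_feature_model(X, y):
    """Ordinary least-squares fit of cybersickness scores (y) on
    eye-movement features (X); returns coefficients and R^2."""
    # Prepend an intercept column.
    Xb = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    pred = Xb @ coef
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return coef, r2

# Synthetic stand-in data: 26 "participants", two hypothetical
# eye-movement features (values are made up for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(26, 2))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=26)
coef, r2 = fit_eye_feature_model(X, y)
print(round(r2, 3))
```

With an intercept included, OLS can never explain less variance than the mean-only model, so R² stays between 0 and 1 here.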


PLoS ONE, 2018, Vol. 13 (12), pp. e0207990
Author(s): Alexia Bourgeois, Emmanuel Badier, Naem Baron, Fabien Carruzzo, Patrik Vuilleumier

Author(s): Jessica Reitmaier, Anika Schiller, Andreas Mühlberger, Michael Pfaller, Marie Meyer, ...

2020
Author(s): Eleanor Huizeling, David Peeters, Peter Hagoort

Traditional experiments indicate that prediction is important for the efficient processing of incoming speech. In three virtual reality (VR) visual world paradigm experiments, we here tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs and hesitations) inform one’s predictions in rich environments (Experiments 2-3). In all three experiments, participants’ eye movements were recorded while they listened to sentences spoken by a virtual agent during a virtual tour of eight scenes. Experiment 1 showed that listeners predict upcoming speech in naturalistic environments, with a higher proportion of anticipatory target fixations in Restrictive (predictable) than in Unrestrictive (unpredictable) trials. Experiments 2-3 provided novel evidence that disfluencies reduce anticipatory fixations towards a predicted referent in naturalistic environments, compared to Conjunction sentences (Experiment 2) and Fluent sentences (Experiment 3). Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb – there was no increase in the proportion of fixations towards objects compatible with the repaired verb – thereby supporting an attentional rather than a predictive account of the effects of repair disfluencies on sentence processing. Experiment 3 provided novel evidence that the proportion of fixations to the speaker increased upon hearing a hesitation, supporting current theories of the effects of hesitations on sentence processing. Together, these findings contribute to a better understanding of how listeners make use of visual (objects, speaker) and auditory (speech, including disfluencies) information to predict upcoming words.
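The dependent measure in such visual world studies, the proportion of anticipatory fixations on a target object before its name is heard, can be sketched as follows. The fixation records, area-of-interest names, and time windows here are illustrative assumptions, not the authors' analysis code:

```python
def anticipatory_fixation_proportion(fixations, target_aoi, verb_onset, noun_onset):
    """Proportion of fixation time spent on the target object during the
    anticipatory window (after verb onset, before the noun is heard).

    Each fixation is a dict with 'start' (s), 'duration' (s), and 'aoi'
    (which area of interest it landed on)."""
    window = [f for f in fixations if verb_onset <= f["start"] < noun_onset]
    if not window:
        return 0.0
    on_target = sum(f["duration"] for f in window if f["aoi"] == target_aoi)
    total = sum(f["duration"] for f in window)
    return on_target / total

# Hypothetical trial: the verb is predictive of "cake".
fixations = [
    {"start": 1.10, "duration": 0.25, "aoi": "cake"},
    {"start": 1.40, "duration": 0.30, "aoi": "speaker"},
    {"start": 1.75, "duration": 0.20, "aoi": "cake"},
]
p = anticipatory_fixation_proportion(fixations, "cake", verb_onset=1.0, noun_onset=2.0)
print(round(p, 2))  # 0.6
```

Averaging this proportion per condition (e.g. Restrictive vs. Unrestrictive) yields the comparison the abstract describes.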


2020
Author(s): Sandra Chiquet, Corinna S. Martarelli, Fred W. Mast

Abstract The role of eye movements in mental imagery and visual memory is typically investigated by presenting stimuli or scenes on a two-dimensional (2D) computer screen. When questioned about objects that had previously been presented on-screen, people gaze back to the location of the stimuli, even though those regions are blank during retrieval. It remains unclear whether this behavior is limited to highly controlled experimental settings using 2D screens or whether it also occurs in more naturalistic settings. The present study aims to overcome this shortcoming. Three-dimensional (3D) objects were presented along a circular path in an immersive virtual room. During retrieval, participants were given two tasks: to visualize the objects they had encoded before, and to evaluate a statement about visual details of each object. We observed longer fixation durations in the area where the object had previously been displayed, compared to other possible target locations. However, 89% of the time participants fixated none of the predefined areas. On the one hand, this shows that looking at nothing may be overestimated in 2D screen-based paradigms; on the other hand, the looking-at-nothing effect was still present in the 3D immersive virtual reality setting, which extends the external validity of previous findings. Eye movements during retrieval thus reinstate spatial information of previously inspected stimuli.
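A minimal sketch of the dwell-time analysis described above, assuming rectangular areas of interest and 2D-projected gaze samples (the AOI names, coordinates, and sampling rate are hypothetical, not taken from the study):

```python
def dwell_times(samples, aois, sample_dt=1.0 / 90):
    """Total dwell time per predefined area of interest (AOI), given
    gaze samples as (x, y) points and AOIs as (x0, y0, x1, y1) rects.
    Samples that land in no AOI are tallied under 'none'."""
    totals = {name: 0.0 for name in aois}
    totals["none"] = 0.0
    for x, y in samples:
        hit = "none"
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        totals[hit] += sample_dt
    return totals

# Hypothetical AOIs: where the object used to be vs. another location.
aois = {"target": (0.0, 0.0, 1.0, 1.0), "other": (2.0, 0.0, 3.0, 1.0)}
samples = [(0.5, 0.5), (0.6, 0.4), (2.5, 0.5), (5.0, 5.0)]
totals = dwell_times(samples, aois, sample_dt=1.0)
print(totals)  # {'target': 2.0, 'other': 1.0, 'none': 1.0}
```

The large share of samples falling under "none" in such an analysis is exactly the 89%-outside-all-AOIs pattern the abstract reports.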


2020, Vol. 10 (11), pp. 841
Author(s): Erwan David, Julia Beitner, Melissa Le-Hoa Võ

Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we analyzed eye, head and overall gaze movements separately. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the respective roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades that serve exploration, and the verification phase by long fixations and short saccades that serve analysis. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings clarify how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
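Gaze-contingent masking of this kind can be sketched as an angular test against the current gaze direction, hiding content inside (central mask) or outside (peripheral mask) the masked region. This is a minimal illustration, not the authors' implementation; only the 6° radius is taken from the abstract:

```python
import math

def angular_distance_deg(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for acos
    return math.degrees(math.acos(cos))

def is_masked(gaze_dir, pixel_dir, radius_deg=6.0, mode="central"):
    """Whether a viewing direction falls under the gaze-contingent mask:
    'central' hides everything within radius_deg of the gaze direction,
    'peripheral' hides everything outside it."""
    d = angular_distance_deg(gaze_dir, pixel_dir)
    return d <= radius_deg if mode == "central" else d > radius_deg

print(is_masked((0, 0, 1), (0, 0, 1), mode="central"))     # True
print(is_masked((0, 0, 1), (1, 0, 0), mode="central"))     # False
print(is_masked((0, 0, 1), (1, 0, 0), mode="peripheral"))  # True
```

In a real engine the test would run per pixel in a shader fed with the eye tracker's gaze ray, but the geometry is the same angular comparison.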


2022, pp. 233-248
Author(s): Scott E. Lee, Deborah Chen, Nikita Chigullapally, Suzy Chung, Allan Lu Lee, ...

The visual field (VF) examination is a useful clinical tool for monitoring a variety of ocular diseases. Despite its wide utility in eye clinics, the test as currently conducted is subject to an array of issues that interfere with obtaining accurate results. Visual field exams of patients suffering from additional ocular conditions are often unreliable due to interference between the comorbid diseases. To improve upon these shortcomings, virtual reality (VR) and deep learning are being explored as potential solutions. Virtual reality has been incorporated into novel visual field exams to provide a portable, 3D exam experience. Deep learning, a specialization of machine learning, has been used in conjunction with VR, such as in the iGlaucoma application, to limit subjective bias arising from patients' eye movements. This chapter seeks to analyze and critique how VR and deep learning can augment the visual field experience by improving accuracy, reducing subjective bias, and ultimately providing clinicians with a greater capacity to enhance patient outcomes.

