Scaffolding in immersive virtual reality environments for learning English: an eye tracking study

Author(s):  
Jorge Bacca-Acosta ◽  
Julian Tejada ◽  
Ramon Fabregat ◽  
Kinshuk ◽  
Juan Guevara
Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4956
Author(s):  
Jose Llanes-Jurado ◽  
Javier Marín-Morales ◽  
Jaime Guixeres ◽  
Mariano Alcañiz

Fixation identification is an essential task in the extraction of relevant information from gaze patterns, and various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject’s head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points that belong to a fixation. The results show that distance-dispersion thresholds between 1° and 1.6° and time windows between 0.25 and 0.4 s are the acceptable parameter ranges, with 1° and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye-tracking integrated into head-mounted displays, as well as guidelines for calibrating fixation identification algorithms.
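The abstract names the dispersion-threshold (I-DT) family of algorithms and its two parameters (distance dispersion, time window) but not an implementation. The following is a minimal sketch of a standard I-DT procedure, assuming gaze samples arrive as (time in seconds, x, y) tuples with coordinates already expressed in degrees of visual angle; the function name and data layout are illustrative, not taken from the paper.

```python
def idt_fixations(samples, dispersion_deg=1.0, window_s=0.25):
    """Dispersion-threshold (I-DT) fixation identification.

    samples: list of (t, x, y) gaze points, t in seconds,
             x/y in degrees of visual angle.
    Defaults use the optimum thresholds reported in the abstract
    (1 degree dispersion, 0.25 s minimum window).
    Returns a list of (start_t, end_t, centroid_x, centroid_y).
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window covering at least window_s seconds.
        j = i
        while j < n and samples[j][0] - samples[i][0] < window_s:
            j += 1
        if j >= n:
            break  # remaining tail is shorter than the time window
        win = samples[i:j + 1]
        xs = [p[1] for p in win]
        ys = [p[2] for p in win]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= dispersion_deg:
            # Extend the window while dispersion stays under threshold.
            while j + 1 < n:
                xs.append(samples[j + 1][1])
                ys.append(samples[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_deg:
                    xs.pop()
                    ys.pop()
                    break
                j += 1
            win = samples[i:j + 1]
            cx = sum(p[1] for p in win) / len(win)
            cy = sum(p[2] for p in win) / len(win)
            fixations.append((win[0][0], win[-1][0], cx, cy))
            i = j + 1  # resume after the fixation
        else:
            i += 1  # slide the window forward by one sample
    return fixations
```

Dispersion here is the sum of the horizontal and vertical extents of the window, one common choice among several (maximum pairwise distance is another); the paper's exact dispersion metric is not specified in the abstract.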


2016 ◽  
Vol 17 (1) ◽  
pp. 23-31 ◽  
Author(s):  
Pedro Rosa ◽  
Pedro Gamito ◽  
Jorge Oliveira ◽  
Diogo Morais ◽  
Matthew Pavlovic ◽  
...  

Author(s):  
Maria Mikhailenko ◽  
Mikhail Kurushkin

The concept of using eye-tracking in virtual reality for education has been researched in various fields in recent years. With this review, we aim to discuss the recent advancements and applications in this area, explain the technological aspects, highlight the advantages of this approach, and inspire interest in the field. Eye-tracking has been used in science for many decades and has now been substantially reinforced by the addition of virtual and augmented reality technologies. The first part of the review is a general overview of eye-tracking concepts and their applications. In the second part, the focus shifts towards the application of eye-tracking in virtual reality. The third part describes the recently emerged concept of eye-tracking in virtual reality as applied to education and studying, which has not been thoroughly described before. We describe the main findings, technological aspects, and advantages of this approach.


2020 ◽  
Author(s):  
Eleanor Huizeling ◽  
David Peeters ◽  
Peter Hagoort

Traditional experiments indicate that prediction is important for the efficient processing of incoming speech. In three virtual reality (VR) visual world paradigm experiments, we here tested whether such findings hold in naturalistic settings (Experiment 1) and provided novel insights into whether disfluencies in speech (repairs/hesitations) inform one’s predictions in rich environments (Experiments 2-3). In all three experiments, participants’ eye movements were recorded while they listened to sentences spoken by a virtual agent during a virtual tour of eight scenes. Experiment 1 showed that listeners predict upcoming speech in naturalistic environments, with a higher proportion of anticipatory target fixations in Restrictive (predictable) compared to Unrestrictive (unpredictable) trials. Experiments 2-3 provided novel findings that disfluencies reduce anticipatory fixations towards a predicted referent in naturalistic environments, compared to Conjunction sentences (Experiment 2) and Fluent sentences (Experiment 3). Unexpectedly, Experiment 2 provided no evidence that participants made new predictions from a repaired verb – there was no increase in the proportion of fixations towards objects compatible with the repaired verb – thereby supporting an attention rather than a predictive account of effects of repair disfluencies on sentence processing. Experiment 3 provided novel evidence that the proportion of fixations to the speaker increased upon hearing a hesitation, supporting current theories of the effects of hesitations on sentence processing. Together, these findings contribute to a better understanding of how listeners make use of visual (objects, speaker) and auditory (speech, including disfluencies) information to predict upcoming words.
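The key dependent measure above is the proportion of anticipatory fixations on the target referent, i.e. fixations landing on the target before the target noun is spoken. A minimal sketch of that computation, assuming fixations as (start time, end time, region label) tuples; the function name, region labels, and onset variable are hypothetical, not from the paper.

```python
def anticipatory_fixation_proportion(fixations, target_region, noun_onset_s):
    """Proportion of pre-noun fixations that land on the target referent.

    fixations: list of (start_t, end_t, region) tuples, times in seconds.
    target_region: label of the predicted referent's interest area.
    noun_onset_s: acoustic onset of the target noun in the utterance.
    """
    # Keep only fixations that begin before the noun is heard.
    pre_noun = [f for f in fixations if f[0] < noun_onset_s]
    if not pre_noun:
        return 0.0
    on_target = [f for f in pre_noun if f[2] == target_region]
    return len(on_target) / len(pre_noun)
```

Comparing this proportion between Restrictive and Unrestrictive trials (or Disfluent and Fluent sentences) is what licenses the prediction claims in the abstract.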


Author(s):  
Amanda J. Haskins ◽  
Jeff Mentch ◽  
Thomas L. Botch ◽  
Caroline E. Robertson

Vision is an active process. Humans actively sample their sensory environment via saccades, head turns, and body movements. Yet, little is known about active visual processing in real-world environments. Here, we exploited recent advances in immersive virtual reality (VR) and in-headset eye-tracking to show that active viewing conditions impact how humans process complex, real-world scenes. Specifically, we used quantitative, model-based analyses to compare which visual features participants prioritize over others while encoding a novel environment in two experimental conditions: active and passive. In the active condition, participants used head-mounted VR displays to explore 360° scenes from a first-person perspective via self-directed motion (saccades and head turns). In the passive condition, 360° scenes were passively displayed to participants within the VR headset while they were head-restricted. Our results show that signatures of top-down attentional guidance increase in active viewing conditions: active viewers disproportionately allocate their attention to semantically relevant scene features, as compared with passive viewers. We also observed increased signatures of exploratory behavior in eye movements, such as quicker, more entropic fixations during active as compared with passive viewing conditions. These results have broad implications for studies of visual cognition, suggesting that active viewing influences every aspect of gaze behavior, from the way we move our eyes to what we choose to attend to, as we construct a sense of place in a real-world environment.

Significance Statement: Eye-tracking in immersive virtual reality offers an unprecedented opportunity to study human gaze behavior under naturalistic viewing conditions without sacrificing experimental control. Here, we advanced this new technique to show how humans deploy attention as they encode a diverse set of 360°, real-world scenes, actively explored from a first-person perspective using head turns and saccades. Our results build on classic studies in psychology, showing that active, as compared with passive, viewing conditions fundamentally alter perceptual processing. Specifically, active viewing conditions increase information-seeking behavior in humans, producing faster, more entropic fixations, which are disproportionately deployed to scene areas that are rich in semantic meaning. In addition, our results offer key benchmark measurements of gaze behavior in 360°, naturalistic environments.
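"More entropic fixations" is typically quantified as the Shannon entropy of the distribution of fixations over scene regions: spreading gaze over many regions yields high entropy, dwelling on one region yields low entropy. A minimal sketch of that measure, assuming each fixation has already been assigned a region label; the function name and labels are illustrative, not the paper's own pipeline.

```python
import math
from collections import Counter

def fixation_entropy(fixation_regions):
    """Shannon entropy (in bits) of the distribution of fixations
    over scene regions. Higher values indicate more exploratory,
    spatially dispersed gaze; 0 means all fixations hit one region.

    fixation_regions: iterable of region labels, one per fixation.
    """
    counts = Counter(fixation_regions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

Under this measure, the abstract's claim amounts to active viewers' fixation-region distributions having higher entropy than passive viewers' distributions for the same scenes.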


2019 ◽  
Vol 12 (4) ◽  
pp. 376-384 ◽  
Author(s):  
Jeong Hye Park ◽  
Han Jae Jeon ◽  
Eun-Cheon Lim ◽  
Ja-Won Koo ◽  
Hyo-Jeong Lee ◽  
...  
