Learning to Follow Directions in English Through a Virtual Reality Environment

Author(s):  
Jorge Bacca-Acosta ◽  
Julian Tejada ◽  
Carlos Ospino-Ibañez

Learning how to give and follow directions in English is one of the key topics in regular English as a Foreign Language (EFL) courses. However, this topic is commonly taught in the classroom through pencil-and-paper exercises. This chapter introduces a scaffolded virtual reality (VR) environment for learning to follow directions in English. An eye-tracking study was conducted to determine how students perceive the scaffolds while completing the learning task, and an evaluation of acceptance and usability was conducted to capture the students' perceptions of the environment. The results show that both textual and image-based scaffolds are effective for increasing the students' learning performance: gaze frequency is higher for the textual scaffold, whereas the duration of gaze fixations is shorter for the image-based scaffolds. The acceptance and usability of the VR environment were rated positively.
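As a concrete illustration of the two gaze measures compared here (how often a scaffold is looked at versus how long each fixation on it lasts), the sketch below computes per-scaffold fixation counts and mean fixation durations from an exported fixation log. The record layout and the AOI labels `text_scaffold` and `image_scaffold` are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: per-scaffold gaze metrics from an exported fixation log.
# The AOI labels and record layout are assumptions for illustration only.
from collections import defaultdict
from statistics import mean

# Each record: (participant_id, AOI label, fixation duration in milliseconds)
fixations = [
    ("p01", "text_scaffold", 210), ("p01", "text_scaffold", 180),
    ("p01", "image_scaffold", 320), ("p02", "text_scaffold", 195),
    ("p02", "image_scaffold", 280), ("p02", "image_scaffold", 240),
]

durations_by_aoi = defaultdict(list)
for _, aoi, duration_ms in fixations:
    durations_by_aoi[aoi].append(duration_ms)

for aoi, durations in durations_by_aoi.items():
    # Gaze frequency = how often the scaffold was fixated;
    # mean fixation duration = how long each visit lasted on average.
    print(f"{aoi}: {len(durations)} fixations, "
          f"mean duration {mean(durations):.0f} ms")
```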

2020 ◽  
pp. 112067212097604
Author(s):  
Ignacio Martínez-Almeida Nistal ◽  
Paula Lampreave Acebes ◽  
José María Martínez-de-la-Casa ◽  
Patricia Sánchez-González

Aim: The aim was to develop and implement a virtual reality tool based on eye-tracking technologies that allows the gaze patterns of glaucoma patients to be characterised, in order to better understand the limitations these patients experience in their daily life. Setting: The study took place in the Ophthalmology Department of Hospital Clínico San Carlos, Madrid, Spain. Methods: In total, 56 participants collaborated in the study. They were divided into two groups: a group of 33 glaucoma patients selected by the Ophthalmology Department and a control group of 23 healthy individuals. Both groups completed two virtual tasks while their gaze was monitored. The first task, defined as “static,” consisted of two exercises based on the observation of images. The second task, defined as “dynamic,” consisted of a virtual driving simulator. Number of fixations, fixation duration, saccade amplitude and velocity, fixation/saccade ratio, total execution time, and two task-specific metrics were measured: the total search time for the second exercise of the static task and the number of collisions for the dynamic task. In addition, the dispersion of fixations was analysed. Results: For the two exercises of the static task, patients exhibited significant differences in the number of fixations (p = 0.012 in the free-observation exercise), mean saccadic velocity (p = 0.023 and 0.017), fixation/saccade ratio (p = 0.035 and 0.04), and the search and total execution times of the visual-search exercise (p = 0.004 and 0.027, respectively). For the dynamic task, significant differences were found in average saccade amplitude (p = 0.02), average saccade velocity (p = 0.03), and the number of collisions (p = 0.02). Conclusion: The results show that eye-tracking technologies can be used to evaluate the gaze patterns of glaucoma patients and to differentiate them from healthy individuals. However, further studies with a larger cohort of participants and additional tasks are needed.
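To make the metrics above more tangible, the following sketch shows how number of fixations, mean fixation duration, saccade amplitude and velocity, and the fixation/saccade ratio can be derived from raw gaze samples with a simple velocity-threshold (I-VT) classifier. The sampling rate, velocity threshold, and synthetic gaze trace are illustrative assumptions and are not parameters from the study.

```python
# Hypothetical sketch: deriving fixation/saccade metrics from raw gaze samples
# with a velocity-threshold (I-VT) classifier. All constants and the synthetic
# trace below are assumptions for illustration, not values from the study.
import numpy as np

SAMPLE_RATE_HZ = 120          # assumed tracker sampling rate
VELOCITY_THRESHOLD = 30.0     # deg/s; samples above this count as saccades

# Synthetic gaze trace in degrees of visual angle: (n_samples, 2) = (x, y)
rng = np.random.default_rng(0)
gaze = np.cumsum(rng.normal(0, 0.05, size=(600, 2)), axis=0)
gaze[200:210] += np.linspace(0, 8, 10)[:, None]   # inject one saccade-like jump

# Angular velocity per sample (deg/s)
step = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
velocity = step * SAMPLE_RATE_HZ
is_saccade = velocity > VELOCITY_THRESHOLD

# Group consecutive samples with the same label into fixation/saccade events
events, start = [], 0
for i in range(1, len(is_saccade)):
    if is_saccade[i] != is_saccade[start]:
        events.append((is_saccade[start], start, i))
        start = i
events.append((is_saccade[start], start, len(is_saccade)))

fix_durations = [(e - s) / SAMPLE_RATE_HZ for sac, s, e in events if not sac]
sac_amplitudes = [step[s:e].sum() for sac, s, e in events if sac]
sac_velocities = [velocity[s:e].mean() for sac, s, e in events if sac]

print(f"fixations: {len(fix_durations)}, "
      f"mean duration {np.mean(fix_durations):.2f} s")
if sac_amplitudes:
    print(f"saccades: {len(sac_amplitudes)}, "
          f"mean amplitude {np.mean(sac_amplitudes):.1f} deg, "
          f"mean velocity {np.mean(sac_velocities):.0f} deg/s")
print(f"fixation/saccade ratio: "
      f"{len(fix_durations) / max(len(sac_amplitudes), 1):.1f}")
```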


2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line length illusion was examined by means of an ambiguous room with dual visual verticals. In one of the test conditions, the subjects were cued to one of the two verticals and were instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared and the subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the direction of the vertical currently perceived by the subject. This study demonstrates that top-down processing influences lower-level visual processing mechanisms. In another test condition, in which the subjects had all perceptual cues available, the influence was even stronger.


2017 ◽  
Vol 5 (3) ◽  
pp. 15
Author(s):  
Gandotra Sandeep ◽  
Pungotra Harish ◽  
Moudgil Prince Kumar ◽  
...  

2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.
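One of the practical data-analysis considerations raised above is turning the raw gaze rays reported by a VR headset into object-level measures such as dwell time on areas of interest. The sketch below illustrates this with a simple ray-sphere intersection; the frame format, scene objects, and 90 Hz frame rate are illustrative assumptions rather than details from the paper, and real headsets expose gaze origin and direction through their own SDKs.

```python
# Hypothetical sketch: converting per-frame gaze rays from a VR eye tracker
# into object-level dwell times via ray-sphere intersection. All names,
# coordinates, and the frame rate are assumptions for illustration.
import numpy as np

FRAME_RATE_HZ = 90.0  # assumed headset frame rate

# Scene objects approximated as spheres: name -> (centre, radius) in metres
objects = {
    "ball":   (np.array([0.0, 1.5, 3.0]), 0.15),
    "target": (np.array([1.0, 1.5, 4.0]), 0.30),
}

def gazed_object(origin, direction):
    """Return the name of the nearest sphere hit by the gaze ray, or None."""
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for name, (centre, radius) in objects.items():
        oc = centre - origin
        t = oc @ direction                    # distance along ray to closest point
        if t <= 0:
            continue                          # object is behind the viewer
        closest = origin + t * direction
        if np.linalg.norm(closest - centre) <= radius and t < best_t:
            best, best_t = name, t
    return best

# Per-frame samples: (eye position, gaze direction) in world coordinates
frames = [(np.array([0.0, 1.6, 0.0]), np.array([0.00, -0.02, 1.0]))] * 45 \
       + [(np.array([0.0, 1.6, 0.0]), np.array([0.25, -0.02, 1.0]))] * 45

dwell = {}
for origin, direction in frames:
    hit = gazed_object(origin, direction)
    if hit is not None:
        dwell[hit] = dwell.get(hit, 0.0) + 1.0 / FRAME_RATE_HZ

print({name: round(seconds, 2) for name, seconds in dwell.items()})
```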

