Assessing expertise using eye tracking in a Virtual Reality flight simulation

2022 ◽  
Author(s):  
David Harris ◽  
Tom Arthur ◽  
Toby de Burgh ◽  
Mike Duxbury ◽  
Ross Lockett-Kirk ◽  
...  

Objective: The aim of this work was to examine the fidelity and validity of an aviation simulation using eye tracking. Background: Commercial head-mounted virtual reality (VR) systems offer a convenient and cost-effective alternative to existing aviation simulation (e.g., for refresher exercises). We performed pre-implementation testing of a novel aviation simulation, designed for head-mounted VR, to determine its fidelity and validity as a training device. Method: Eighteen airline pilots, with varying levels of flight experience, completed a sequence of training ‘flows’. Self-reported measures of presence and workload and users’ perceptions of fidelity were taken. Pilots’ eye movements and performance were recorded to determine whether more experienced pilots showed distinct performance and eye gaze profiles in the simulation, as they would in the real world. Results: Real-world expertise correlated with eye gaze patterns characterised by fewer, but longer, fixations and a scan path that was more structured and less random. Multidimensional scaling analyses also indicated differential clustering of strategies in more versus less experienced pilots. Subjective ratings of performance, however, showed little relationship with real-world expertise or eye movements. Conclusion: We adopted an evidence-based approach to assessing the fidelity and validity of a VR flight training tool. Pilot reports indicated the simulation was realistic and potentially useful for training, while direct measurement of eye movements was useful for establishing construct validity and psychological fidelity of the simulation.
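The expertise markers reported here (fewer but longer fixations, and a more structured, less random scan path) can be illustrated with a minimal sketch. The AOI labels, durations, and the two scan paths below are hypothetical; first-order transition entropy is one common way to quantify scan-path randomness, and is lower for more structured scanning.

```python
from collections import Counter
from math import log2

def gaze_metrics(fixations):
    """Summarise a scan path of (aoi_label, duration_s) fixations.

    Returns fixation count, mean fixation duration, and first-order
    transition entropy (lower entropy = more structured scanning).
    """
    count = len(fixations)
    mean_dur = sum(d for _, d in fixations) / count
    labels = [a for a, _ in fixations]
    transitions = Counter(zip(labels[:-1], labels[1:]))
    total = sum(transitions.values())
    entropy = -sum((n / total) * log2(n / total)
                   for n in transitions.values())
    return count, mean_dur, entropy

# Hypothetical scan paths over cockpit AOIs (label, duration in seconds)
novice = [("PFD", 0.2), ("ND", 0.2), ("PFD", 0.2),
          ("EICAS", 0.2), ("ND", 0.2), ("PFD", 0.2)]
expert = [("PFD", 0.6), ("ND", 0.5), ("PFD", 0.6), ("ND", 0.5)]

n_count, n_dur, n_ent = gaze_metrics(novice)
e_count, e_dur, e_ent = gaze_metrics(expert)
```

On this toy data the "expert" path shows the reported pattern: fewer fixations, longer mean duration, lower transition entropy.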

2020 ◽  
Vol 6 (1) ◽  
pp. 83-107
Author(s):  
Alison T. Miller Singley ◽  
Jeffrey Lynn Crawford ◽  
Silvia A. Bunge

Learning fractions is notoriously difficult, yet critically important to mathematical and general academic achievement. Eye-tracking studies are beginning to characterize the strategies that adults use when comparing fractions, but we know relatively little about the strategies used by children. We used eye-tracking to analyze how novice children and mathematically-proficient adults approached a well-studied fraction comparison paradigm. Specifically, eye-tracking can provide insights into the nature of differences: whether they are quantitative—reflecting differences in efficiency—or qualitative—reflecting a fundamentally different approach. We found that children who had acquired the basic fraction rules made more eye movements than did either adults or less proficient children, suggesting a thorough but inefficient problem solving approach. Additionally, correct responses were associated with normative gaze patterns, regardless of age or proficiency levels. However, children paid more attention to irrelevant numerical relationships on conditions that were conceptually difficult. An exploratory analysis points to the possibility that children on the verge of making a conceptual leap attend to the relevant relationships even when they respond incorrectly. These findings indicate the potential of eye-tracking methodology to better characterize the behavior associated with different levels of fraction proficiency, as well as to provide insights for educators regarding how to best support novices at different levels of conceptual development.


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs to extract eye movement data, they may require users to manually draw boundaries of AOIs on eye tracking stimuli or use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while object instance segmentation models capture 30% of eye movements.
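The filtering step described, keeping only gaze samples that land inside the polygonal boundary of a detected dynamic AOI, can be sketched with a standard ray-casting point-in-polygon test. The frame-indexed samples and polygon below are hypothetical; in a real pipeline the polygons would come from the segmentation model's per-frame output.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon (vertex list)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_gaze(samples, aoi_per_frame):
    """Keep gaze samples falling inside that frame's detected AOI polygon."""
    return [(f, x, y) for f, x, y in samples
            if f in aoi_per_frame and point_in_polygon(x, y, aoi_per_frame[f])]

# Hypothetical data: (frame, x, y) gaze samples; AOI detected only in frame 0
samples = [(0, 50, 50), (0, 200, 200), (1, 50, 50)]
aois = {0: [(0, 0), (100, 0), (100, 100), (0, 100)]}
hits = filter_gaze(samples, aois)  # only the first sample survives
```

Samples in frames with no detected AOI are dropped, which matches the offline-detection workflow the paper describes.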


2015 ◽  
Vol 9 (4) ◽  
Author(s):  
Songpo Li ◽  
Xiaoli Zhang ◽  
Fernando J. Kim ◽  
Rodrigo Donalisio da Silva ◽  
Diedra Gustafson ◽  
...  

Laparoscopic robots have been widely adopted in modern medical practice. However, explicitly interacting with these robots may increase the physical and cognitive load on the surgeon. An attention-aware robotic laparoscope system has been developed to free the surgeon from the technical limitations of visualization through the laparoscope. This system can implicitly recognize the surgeon's visual attention by interpreting the surgeon's natural eye movements using fuzzy logic and then automatically steer the laparoscope to focus on that viewing target. Experimental results show that this system can make the surgeon–robot interaction more effective and intuitive, and has the potential to make the execution of the surgery smoother and faster.


Author(s):  
Chandni Parikh

Eye movements and gaze direction have been utilized to make inferences about perception and cognition since the 1800s. The driving factor behind recording overt eye movements stems from the fundamental idea that one's gaze provides tremendous insight into the information processing that takes place early on during development. One of the key deficits seen in individuals diagnosed with Autism Spectrum Disorders (ASD) involves eye gaze and social attention processing. The current chapter focuses on the use of eye-tracking technology with high-risk infants who are siblings of children diagnosed with ASD in order to highlight potential bio-behavioral markers that can inform the ascertainment of red flags and atypical behaviors associated with ASD within the first few years of development.


2020 ◽  
Vol 13 (4) ◽  
pp. 31-43
Author(s):  
Seiko Goto ◽  
Yuki Morota ◽  
Congcong Liu ◽  
Minkai Sun ◽  
Bertram Emil Shi ◽  
...  

Aim: To explore people’s visual attention and psychological and physiological responses to viewing a Japanese garden (an asymmetrically designed garden) and an herb garden (a symmetrically designed garden). Background: There are few studies of eye movements when observing different style gardens, and how they are connected to the interpretation of the space and to physiological and psychological responses. Method: Thirty subjects were recruited and their physiological and psychological responses to viewing the garden types were assessed using a heart-rate monitor and questionnaire. Eye movements while viewing projected slide images of the gardens were tracked using an eye-tracking monitor. Results: A significant decrease in heart rate was observed when subjects were viewing the Japanese garden as opposed to viewing the herb garden. Mood was significantly improved in both gardens, but eye-gaze patterns differed. The Japanese garden elicited far more comments about expectations for the coming season; unlike the herb garden, it also induced memories of viewing other landscapes. Conclusion: The physiological and psychological responses to viewing gardens differ based on the quality of landscape design and the prior experience of viewers.


10.2196/20797 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e20797
Author(s):  
Nathan Moore ◽  
Soojeong Yoo ◽  
Philip Poronnik ◽  
Martin Brown ◽  
Naseem Ahmadpour

Background Traditional methods of delivering Advanced Life Support (ALS) training and reaccreditation are resource-intensive and costly. Interactive simulations and gameplay using virtual reality (VR) technology can complement traditional training processes as a cost-effective, engaging, and flexible training tool. Objective This exploratory study aimed to determine the specific user needs of clinicians engaging with a new interactive VR ALS simulation (ALS-SimVR) application to inform the ongoing development of such training platforms. Methods Semistructured interviews were conducted with experienced clinicians (n=10, median age=40.9 years) following a single playthrough of the application. All clinicians had been directly involved in the delivery of ALS training in both clinical and educational settings (median years of ALS experience=12.4; all had minimal or no VR experience). Interviews were supplemented with an assessment of usability (using heuristic evaluation) and presence. Results The ALS-SimVR training app was well received. Thematic analysis of the interviews revealed five main areas of user needs that can inform future design efforts for creating engaging VR training apps: affordances, agency, diverse input modalities, mental models, and advanced roles. Conclusions This study was conducted to identify the needs of clinicians engaging with ALS-SimVR. However, our findings revealed broader design considerations that will be crucial in guiding future work in this area. Although aligning the training scenarios with accepted teaching algorithms is important, our findings reveal that improving user experience and engagement requires careful attention to technology-specific issues such as input modalities.


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4956
Author(s):  
Jose Llanes-Jurado ◽  
Javier Marín-Morales ◽  
Jaime Guixeres ◽  
Mariano Alcañiz

Fixation identification is an essential task in the extraction of relevant information from gaze patterns; various algorithms are used in the identification process. However, the thresholds used in the algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject’s head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points which belong to a fixation. The results show that distance-dispersion thresholds between 1° and 1.6° and time windows between 0.25 s and 0.4 s are the acceptable parameter ranges, with 1° and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye-tracking integrated into head-mounted displays, and guidelines for calibrating fixation identification algorithms.
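The dispersion-threshold identification described can be sketched with the classic I-DT window-growing procedure, using the optimal parameters the study reports (1° dispersion, 0.25 s window). The 90 Hz sampling rate and the synthetic gaze trace below are hypothetical, chosen only to exercise the algorithm.

```python
def dispersion(window):
    """Dispersion of a gaze window: (max x - min x) + (max y - min y)."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(gaze, max_dispersion_deg=1.0, min_duration_s=0.25, hz=90.0):
    """Dispersion-threshold (I-DT) fixation identification.

    gaze: (x_deg, y_deg) samples at a fixed rate `hz`. A window is a
    fixation when its dispersion stays under the threshold for at least
    the minimum duration; the window then grows until dispersion breaks.
    """
    min_len = int(min_duration_s * hz)
    fixations, i = [], 0
    while i + min_len <= len(gaze):
        j = i + min_len
        if dispersion(gaze[i:j]) <= max_dispersion_deg:
            while j < len(gaze) and dispersion(gaze[i:j + 1]) <= max_dispersion_deg:
                j += 1
            fixations.append((i, j))  # half-open sample-index span
            i = j                     # resume after the fixation
        else:
            i += 1                    # slide the window forward
    return fixations

# Hypothetical 90 Hz trace: a stable period, a saccade, a second stable period
gaze = [(0.0, 0.0)] * 30 + [(5.0, 5.0)] * 30
fixes = idt_fixations(gaze)  # two fixations, split at the saccade
```

In head-mounted-display data, the angular coordinates would first need to be computed from the combined head and eye pose, which is precisely the adaptation issue the study addresses.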


Author(s):  
Nathan T. Dorris ◽  
R. Brian Valimont ◽  
Eric J. Boelhouwer

This investigation tested whether heavily degraded warnings affected gaze patterns and resulted in longer viewing times than lightly degraded warnings. The study included sixteen participants who viewed six matched pairs of lightly and heavily degraded warnings. Eye movements were recorded using an eye tracking system while the total time on task for each warning was collected. Fixation times were also collected as participants viewed the various panels of each warning. In the second part of the experiment, legibility and participant comprehension of each warning were tested. Paired t-tests showed that total time on task, total fixation time, and message panel fixation time were significantly different for three of the six pairs of warnings, such that each of the three aforementioned times increased significantly when participants were viewing a highly degraded warning label. Additionally, participants were able to comprehend all warnings presented. This study also provides evidence that eye tracking can be a useful tool in warnings research.


Author(s):  
Eunhee Chang ◽  
Hyun Taek Kim ◽  
Byounghyun Yoo

Cybersickness refers to a group of uncomfortable symptoms experienced in virtual reality (VR). Among several theories of cybersickness, the subjective vertical mismatch (SVM) theory focuses on an individual’s internal model, which is created and updated through past experiences. Although previous studies have attempted to provide experimental evidence for the theory, most approaches are limited to subjective measures or body sway. In this study, we aimed to demonstrate the SVM theory on the basis of the participant’s eye movements and investigate whether the subjective level of cybersickness can be predicted using eye-related measures. Twenty-six participants experienced a roller coaster VR scene while wearing a head-mounted display with eye tracking. We designed four experimental conditions by changing the orientation of the VR scene (upright vs. inverted) or the controllability of the participant’s body (unrestrained vs. restrained body). The results indicated that participants reported more severe cybersickness when experiencing the upright VR content without controllability. Moreover, distinctive eye movements (e.g. fixation duration and distance between the eye gaze and the object position sequence) were observed according to the experimental conditions. On the basis of these results, we developed a regression model using eye-movement features and found that our model can explain 34.8% of the total variance of cybersickness, indicating a substantial improvement compared to the previous work (4.2%). This study provides empirical data for the SVM theory using both subjective and eye-related measures. In particular, the results suggest that participants’ eye movements can serve as a significant index for predicting cybersickness when considering natural gaze behaviors during a VR experience.
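The kind of regression described, predicting a cybersickness score from eye-movement features and reporting the proportion of variance explained (R²), can be sketched with ordinary least squares. The feature set, sample size of 26, coefficients, and simulated data below are illustrative assumptions only; they do not reproduce the study's actual model or its 34.8% figure.

```python
import numpy as np

def fit_and_r2(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    ss_res = float(residuals @ residuals)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot

# Hypothetical per-participant features: mean fixation duration (s) and mean
# gaze-to-object distance (deg); outcome: a cybersickness questionnaire score.
rng = np.random.default_rng(0)
X = rng.normal(size=(26, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=26)
beta, r2 = fit_and_r2(X, y)  # r2 is the explained-variance proportion
```

With this synthetic signal-to-noise ratio the fit recovers most of the variance; the study's point is that real eye-movement features carry enough signal to raise R² well above earlier subjective-only models.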

