Collecting and Analyzing Eye-Tracking Data in Outdoor Environments

2012 ◽  
Vol 5 (2) ◽  
Author(s):  
Karen M. Evans ◽  
Robert A. Jacobs ◽  
John A. Tarduno ◽  
Jeff B. Pelz

Natural outdoor conditions pose unique obstacles for researchers, above and beyond those inherent to all mobile eye-tracking research. During analyses of a large set of eye-tracking data collected on geologists examining outdoor scenes, we have found that the nature of calibration, pupil identification, fixation detection, and gaze analysis all require procedures different from those typically used for indoor studies. Here, we discuss each of these challenges and present solutions, which together define a general method useful for investigations relying on outdoor eye-tracking data. We also discuss recommendations for improving the tools that are available, to further increase the accuracy and utility of outdoor eye-tracking data.
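As one point of reference for the fixation-detection step the abstract mentions, a minimal sketch of dispersion-threshold (I-DT) fixation detection follows, a common indoor baseline of the kind the authors argue must be adapted for outdoor data. The thresholds and the gaze-sample format are illustrative assumptions, not the authors' procedure.

```python
def detect_fixations(samples, max_dispersion=1.0, min_duration=3):
    """Group consecutive gaze samples (x, y) into fixations.

    A fixation is a run of at least `min_duration` samples whose
    bounding-box dispersion (width + height) stays under `max_dispersion`
    (in the same units as the samples, e.g. degrees of visual angle).
    Returns a list of (start_index, end_index) pairs, end exclusive.
    """
    fixations = []
    start = 0
    while start + min_duration <= len(samples):
        end = start + min_duration
        if _dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while end < len(samples) and _dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations


def _dispersion(window):
    """Bounding-box dispersion of a window of (x, y) samples."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))
```

Outdoor data typically forces looser thresholds (or velocity-based alternatives) because head motion and long viewing distances inflate apparent dispersion.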

Author(s):  
Chiara Jongerius ◽  
T. Callemein ◽  
T. Goedemé ◽  
K. Van Beeck ◽  
J. A. Romijn ◽  
...  

The assessment of gaze behaviour is essential for understanding the psychology of communication. Mobile eye-tracking glasses are useful to measure gaze behaviour during dynamic interactions. Eye-tracking data can be analysed by using manually annotated areas-of-interest. Computer vision algorithms may alternatively be used to reduce not only the manual effort but also the subjectivity and complexity of these analyses. Using additional re-identification (Re-ID) algorithms, different participants in the interaction can be distinguished. The aim of this study was to compare the results of manual annotation of mobile eye-tracking data with the results of a computer vision algorithm. We selected the first minute of seven randomly selected eye-tracking videos of consultations between physicians and patients in a Dutch Internal Medicine out-patient clinic. Three human annotators and a computer vision algorithm annotated mobile eye-tracking data, after which interrater reliability was assessed between the areas-of-interest annotated by the annotators and the computer vision algorithm. Additionally, we explored interrater reliability when using lengthy videos and different area-of-interest shapes. In total, we analysed more than 65 min of eye-tracking videos manually and with the algorithm. Overall, the absolute normalized difference between the manual and the algorithm annotations of face-gaze was less than 2%. Our results show high interrater agreement between human annotators and the algorithm, with Cohen’s kappa ranging from 0.85 to 0.98. We conclude that computer vision algorithms produce comparable results to those of human annotators. Analyses by the algorithm are not subject to annotator fatigue or subjectivity and can therefore advance eye-tracking analyses.
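The interrater-agreement statistic the study reports (Cohen's kappa of 0.85 to 0.98) can be computed directly from two label sequences. A minimal sketch, with illustrative per-frame area-of-interest labels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of category labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[label] * counts_b.get(label, 0)
                   for label in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A kappa near 1 indicates agreement well beyond chance; values above roughly 0.8 are conventionally read as almost perfect agreement, which is the range the study reports.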


Author(s):  
James Simpson

The mobilization of eye-tracking for use outside of the laboratory provides new opportunities for assessing pedestrians' visual engagement with their surroundings. However, the development of data representation techniques that visualize the dynamics of pedestrian gaze distribution across the surrounding environment remains limited. The current study addresses this by highlighting how mobile eye-tracking data, which capture where pedestrian gaze is focused on buildings along urban street edges, can be mapped as three-dimensional gaze projection heat-maps. This data processing and visualization technique is assessed in the current study, and future opportunities and associated challenges are discussed.


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7668
Author(s):  
Niharika Kumari ◽  
Verena Ruf ◽  
Sergey Mukhametov ◽  
Albrecht Schmidt ◽  
Jochen Kuhn ◽  
...  

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities, in comparison to stationary eye trackers, to real settings such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the use of object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three convolutional neural networks (CNNs), a Faster Region-based CNN (Faster R-CNN), You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user's gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
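The assignment step the abstract describes, matching each gaze point to a detected object, reduces to a point-in-box test over the detector's output. A minimal sketch, where the (label, x_min, y_min, x_max, y_max) box format and the smallest-box tie-break are illustrative assumptions rather than the authors' pipeline:

```python
def assign_gaze(gaze, detections):
    """Return the label of the detection box containing the gaze point.

    gaze: (x, y) in scene-camera pixel coordinates.
    detections: iterable of (label, x_min, y_min, x_max, y_max) boxes,
    e.g. from a YOLO-style object detector. When boxes overlap, the
    smallest containing box wins, favouring the more specific (usually
    foreground) object. Returns None if no box contains the point.
    """
    gx, gy = gaze
    hits = [((x2 - x1) * (y2 - y1), label)
            for label, x1, y1, x2, y2 in detections
            if x1 <= gx <= x2 and y1 <= gy <= y2]
    return min(hits)[1] if hits else None
```

Running this per frame converts a raw gaze stream into a sequence of looked-at objects, which is what makes the otherwise manual annotation automatic.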


2019 ◽  
Vol 31 (3) ◽  
pp. 971-988 ◽  
Author(s):  
Xiaoxue Fu ◽  
Eric E. Nelson ◽  
Marcela Borge ◽  
Kristin A. Buss ◽  
Koraly Pérez-Edgar

Behavioral Inhibition (BI) is a temperament type that predicts social withdrawal in childhood and anxiety disorders later in life. However, not all BI children develop anxiety. Attention bias (AB) may enhance the vulnerability for anxiety in BI children, and interfere with their development of effective emotion regulation. In order to fully probe attention patterns, we used traditional measures of reaction time (RT), stationary eye-tracking, and recently emerging mobile eye-tracking measures of attention in a sample of 5- to 7-year-olds characterized as BI (N = 23) or non-BI (N = 58) using parent reports. There were no BI-related differences in RT or stationary eye-tracking indices of AB in a dot-probe task. However, findings in a subsample from whom eye-tracking data were collected during a live social interaction indicated that BI children (N = 12) directed fewer gaze shifts to the stranger than non-BI children (N = 25). Moreover, the frequency of gazes toward the stranger was positively associated with stationary AB in BI, but not non-BI, children. Hence, BI was characterized by a consistent pattern of attention across stationary and ambulatory measures. We demonstrate the utility of mobile eye-tracking as an effective tool to extend the assessment of attention and regulation to social interactive contexts.


2016 ◽  
Vol 9 (6) ◽  
Author(s):  
Flora Ioannidou ◽  
Frouke Hermens ◽  
Timothy L Hodgson

Eye tracking studies have suggested that, when viewing images centrally presented on a computer screen, observers tend to fixate the middle of the image. This so-called 'central bias' was later also observed in mobile eye tracking during outdoor navigation, where observers were found to fixate the middle of the head-centered video image. It is unclear, however, whether the extension of the central bias to mobile eye tracking in outdoor navigation may have been due to the relatively long viewing distances towards objects in this task and the constant turning of the body in the direction of motion, both of which may have reduced the need for large amplitude eye movements. To examine whether the central bias in day-to-day viewing is related to the viewing distances involved, here we compare eye movements in three tasks (indoor navigation, tea making, and card sorting), each associated with interactions with objects at different viewing distances. Analysis of gaze positions showed a central bias for all three tasks that was independent of the task performed. These results confirm earlier observations of the central bias in mobile eye-tracking data, and suggest that differences in the typical viewing distance during different tasks have little effect on the bias. The results could have interesting technological applications, in which the bias is used to estimate the direction of gaze from head-centered video images, such as those obtained from wearable technology.
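The central bias the study analyses can be quantified as the mean offset of gaze from the centre of the head-centered video frame, normalised by frame size. A minimal sketch under that assumption (the study's own metric may differ):

```python
def central_bias(gaze_points, frame_w, frame_h):
    """Mean normalised distance of gaze from the frame centre.

    gaze_points: iterable of (x, y) pixel positions in the scene video.
    Returns 0.0 when gaze always sits at the centre and about 0.71 when
    it always sits in a corner; smaller values mean a stronger central bias.
    """
    cx, cy = frame_w / 2, frame_h / 2
    dists = [(((x - cx) / frame_w) ** 2 + ((y - cy) / frame_h) ** 2) ** 0.5
             for x, y in gaze_points]
    return sum(dists) / len(dists)
```

Comparing this statistic across tasks (navigation, tea making, card sorting) is one way to test whether viewing distance modulates the bias, as the study does.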


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
C. Jongerius ◽  
H. G. van den Boorn ◽  
T. Callemein ◽  
N. T. Boeske ◽  
J. A. Romijn ◽  
...  

Face gaze is a fundamental non-verbal behaviour and can be assessed using eye-tracking glasses. Methodological guidelines are lacking on which measure to use to determine face gaze. To evaluate face gaze patterns, we compared three measures: duration, frequency and dwell time. Furthermore, state-of-the-art face gaze analysis requires time and manual effort. We tested whether face gaze patterns in the first 30, 60 and 120 s predict face gaze patterns in the remaining interaction. We performed secondary analyses of mobile eye-tracking data of 16 internal medicine physicians in consultation with 100 of their patients. Duration and frequency of face gaze were unrelated. The lack of association between duration and frequency suggests that research may yield different results depending on which measure of face gaze is used. Dwell time correlated with both duration and frequency. Face gaze during the first seconds of the consultations predicted face gaze patterns of the remaining consultation time (R2 0.26 to 0.73). Therefore, face gaze during the first minutes of the consultations can be used to predict face gaze patterns over the complete interaction. Researchers interested in studying face gaze may use these findings to make optimal methodological choices.
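The three measures the study compares can be computed from a labelled gaze stream. A minimal sketch, where the input format, a list of (target, duration_ms) gaze episodes, and the definition of dwell time as the share of total consultation time are illustrative assumptions:

```python
def face_gaze_measures(episodes, target="face"):
    """Compute duration, frequency, and dwell time of gaze on a target.

    episodes: list of (label, duration_ms) gaze episodes covering the
    whole interaction, e.g. [("face", 1000), ("chart", 500), ...].
    """
    on_target = [d for label, d in episodes if label == target]
    total = sum(d for _, d in episodes)
    return {
        "duration_ms": sum(on_target),    # total time gazing at the face
        "frequency": len(on_target),      # number of face-gaze episodes
        "dwell": sum(on_target) / total,  # proportion of the interaction
    }
```

That duration and frequency can dissociate is easy to see from this formulation: many short episodes and a few long ones can yield the same duration with very different frequencies, which is why the choice of measure matters.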


Author(s):  
Allan Fong ◽  
Daniel Hoffman ◽  
Raj M. Ratwani

Stationary eye-tracking technology has been used extensively in human-computer interaction to both understand how humans interact with computers and as an interaction mechanism. Mobile eye-tracking technology is becoming more prevalent, yet the analysis and annotation of mobile eye-tracking data remains challenging. We present a novel human-in-the-loop approach for mobile eye-tracking data analysis that dramatically reduces resource requirements. This method incorporates human insight in a semi-automatic decision making process, leveraging both computational power and human decision making abilities. We demonstrate the accuracy of this approach with eye movement data from two real-world use cases. Average accuracy across the two environments is 82.3%. Our approach holds tremendous promise and has the potential to open the door to more robust eye movement studies in the real-world.
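A human-in-the-loop annotation loop in the spirit described can be sketched as routing only low-confidence frames to the human annotator, so computational power handles the bulk of the data and human judgment resolves the ambiguous cases. The confidence threshold and the classifier/annotator interfaces are assumptions, not the authors' method.

```python
def annotate(frames, auto_classifier, ask_human, threshold=0.8):
    """Label frames automatically, deferring to a human when uncertain.

    auto_classifier(frame) -> (label, confidence); frames whose automatic
    confidence falls below `threshold` are sent to ask_human(frame).
    Returns (labels, number_of_human_calls).
    """
    labels, human_calls = [], 0
    for frame in frames:
        label, confidence = auto_classifier(frame)
        if confidence < threshold:
            label = ask_human(frame)
            human_calls += 1
        labels.append(label)
    return labels, human_calls
```

Lowering the threshold trades annotation accuracy for reduced human effort, which is the resource saving such semi-automatic approaches target.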

