Inferring user tasks in pedestrian navigation from eye movement data in real-world environments

2018
Vol 33 (4)
pp. 739-763
Author(s):
Hua Liao
Weihua Dong
Haosheng Huang
Georg Gartner
Huiping Liu
Author(s):
Kim R. Hammel
Donald L. Fisher
Anuj K. Pradhan

Driving simulators and eye tracking technology are increasingly being used to evaluate advanced telematics. Such evaluations generalize only if drivers' scanning in the virtual environment is similar to their scanning behavior in real-world environments. In this study we developed a virtual driving environment designed to replicate the environmental conditions of a previous real-world experiment (Recarte & Nunes, 2000). Our motive was to compare data collected under three different cognitive loading conditions in an advanced, fixed-base driving simulator with data collected in the real world. In the study that we report, a head-mounted eye tracker recorded eye movement data while participants drove the virtual highway in half-mile segments. There were three loading conditions: no loading, verbal loading, and spatial loading. Each of the 24 subjects drove in all three conditions. We found that the patterns characterizing eye movement data collected in the simulator were virtually identical to those characterizing eye movement data collected in the real world. In particular, the number of speedometer checks and the functional field of view decreased significantly in the verbal loading conditions, with even greater effects in the spatial loading conditions.
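A minimal sketch of how the two reported measures could be summarized from a fixation log, assuming hypothetical columns (subject, condition, x_deg, y_deg, aoi) and using fixation dispersion as a stand-in for the functional field of view; the study's exact definitions may differ.

```python
# Sketch only, not the authors' code: per-condition speedometer checks and a
# dispersion-based proxy for the functional field of view.
# Column names (subject, condition, x_deg, y_deg, aoi) are hypothetical.
import numpy as np
import pandas as pd

def summarize_scanning(fixations: pd.DataFrame) -> pd.DataFrame:
    def per_driver(g: pd.DataFrame) -> pd.Series:
        # Count fixations that landed on the speedometer area of interest.
        speedo_checks = int((g["aoi"] == "speedometer").sum())
        # Crude proxy for functional field of view: spread of fixation locations (deg).
        ffov = float(np.hypot(g["x_deg"].std(), g["y_deg"].std()))
        return pd.Series({"speedometer_checks": speedo_checks, "ffov_deg": ffov})

    per_subject = fixations.groupby(["condition", "subject"]).apply(per_driver)
    return per_subject.groupby("condition").mean()  # e.g. none / verbal / spatial
```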


2021
Author(s):
Candace Elise Peacock
Elizabeth Hall
John M. Henderson

Although the physical salience of objects has previously been demonstrated to guide attention in real-world scene perception, it is unknown whether objects are also prioritized based on their meaning. To answer this question, we computed the average meaning and the average physical salience of objects in scenes. Using eye movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than on low-meaning objects while controlling for object salience. The results demonstrated that fixations are more likely to be directed to high-meaning objects than to low-meaning objects regardless of object salience. Furthermore, the influence of object salience was progressively reduced as object meaning increased and was eliminated at the highest levels of meaning. Overall, these findings provide the first evidence that objects are prioritized by meaning for attentional selection during active scene viewing.
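One way to examine the reported relationship, sketched below as an assumption rather than the authors' analysis: a logistic regression predicting whether an object was fixated from its meaning and salience ratings (column names are hypothetical).

```python
# Sketch only, assuming object-level data with hypothetical columns:
# fixated (0/1), meaning and salience (object-level ratings, ideally z-scored).
import pandas as pd
import statsmodels.formula.api as smf

def meaning_vs_salience(objects: pd.DataFrame):
    # Does meaning predict fixation when salience is included in the model?
    model = smf.logit("fixated ~ meaning + salience + meaning:salience",
                      data=objects).fit()
    return model.summary()
```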


2021
Vol 2021
pp. 1-14
Author(s):
Jianping Gao
Sijie Zhang
Yunyong He
Qi Zhang
Lu Sun
...  

A real-world driving experiment was performed in the Wen-Ma section of the G4217 Rong-Chang Freeway in Sichuan Province to investigate how tunnel groups on a mountainous freeway affect drivers' pupil diameter. The drivers' eye movement data were collected, and the percentage of pupil diameter variable (PPDV) was used as the visual characteristic index. Analysis of the overall change in drivers' PPDV across the experimental sections demonstrated that the PPDV in tunnel groups differed significantly from that in non-tunnel sections and single-tunnel sections. Subsequently, a model relating drivers' PPDV to the length of the connecting zone between tunnels was established, its reliability evaluated, and the smooth mutation value obtained on the basis of mutation (catastrophe) theory. A tunnel group definition standard based on the visual response of drivers was then developed. A six-zone approach was devised for the analysis of tunnel groups, and the results revealed that the different zones of a tunnel group have different impacts on drivers' PPDV. Furthermore, lighting transition facilities should be installed in the exit section of each tunnel. Drivers' PPDV was negatively correlated with the length of the connecting zone of tunnel groups, and 100 m is the recommended safety length threshold for the connecting zone of tunnel groups.
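A minimal sketch of the PPDV index, assuming it is defined as the percentage change of pupil diameter relative to a baseline value; the paper's exact formulation may differ.

```python
# Sketch only: one common definition of PPDV as percentage change from baseline.
import numpy as np

def ppdv(pupil_diameter_mm: np.ndarray, baseline_mm: float) -> np.ndarray:
    """Percentage of pupil diameter variable for each sample."""
    return (pupil_diameter_mm - baseline_mm) / baseline_mm * 100.0

# Example: ppdv(np.array([4.2, 5.0, 5.8]), baseline_mm=4.0) -> array([ 5., 25., 45.])
```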


2019
Vol 24 (4)
pp. 297-311
Author(s):
José David Moreno
José A. León
Lorena A. M. Arnal
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing eye movement data from young (Mage = 21 years) and old (Mage = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates were obtained of the standardized mean difference, d, between the age groups for all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 across measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis results statistically confirm the most common patterns observed in previous research; therefore, eye movements seem to be a useful tool for measuring behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses proposed to explain the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
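For reference, a small sketch of the standard effect size computations implied here: a pooled-SD standardized mean difference for one measure and an inverse-variance (fixed-effect) combination across experiments. This follows textbook meta-analysis practice and is not the authors' code; their exact estimator may differ.

```python
# Sketch of standard meta-analysis formulas, not the authors' code.
import numpy as np

def cohens_d(mean_young, mean_old, sd_young, sd_old, n_young, n_old):
    # Pooled standard deviation across the two age groups.
    pooled_sd = np.sqrt(((n_young - 1) * sd_young**2 + (n_old - 1) * sd_old**2)
                        / (n_young + n_old - 2))
    # Positive values mean larger values (e.g., longer durations) for the old group.
    return (mean_old - mean_young) / pooled_sd

def combine_fixed_effect(effect_sizes, variances):
    # Inverse-variance weighted mean of per-experiment effect sizes.
    w = 1.0 / np.asarray(variances, dtype=float)
    d = np.asarray(effect_sizes, dtype=float)
    return float(np.sum(w * d) / np.sum(w))
```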


2014
Author(s):
Bernhard Angele
Elizabeth R. Schotter
Timothy Slattery
Tara L. Chaloukian
Klinton Bicknell
...  

Author(s):  
Ayush Kumar
Prantik Howlader
Rafael Garcia
Daniel Weiskopf
Klaus Mueller

Sensors
2021
Vol 21 (15)
pp. 5178
Author(s):
Sangbong Yoo
Seongmin Jeong
Seokyeon Kim
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, eye movement data and the saliency map are presented to the analysts either as separate views or as merged views. However, analysts become frustrated when they need to memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
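A minimal illustration (not the authors' system) of the basic idea of reading gaze data against saliency features: overlaying a scanpath on a precomputed saliency map so both can be inspected in one view. The saliency map is assumed to be supplied by any saliency model.

```python
# Illustration only: overlay a scanpath on a precomputed saliency map.
# `stimulus` is an H x W x 3 image, `saliency` an H x W array in [0, 1],
# fix_x / fix_y are fixation coordinates in pixels.
import numpy as np
import matplotlib.pyplot as plt

def plot_gaze_over_saliency(stimulus: np.ndarray, saliency: np.ndarray,
                            fix_x: np.ndarray, fix_y: np.ndarray) -> None:
    fig, ax = plt.subplots()
    ax.imshow(stimulus)                                      # the visual stimulus
    ax.imshow(saliency, cmap="inferno", alpha=0.4)           # translucent saliency overlay
    ax.plot(fix_x, fix_y, "-o", color="cyan", markersize=4)  # scanpath (fixations in order)
    ax.set_axis_off()
    plt.show()
```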


1972
Vol 35 (1)
pp. 103-110
Author(s):
Phillip Kleespies
Morton Wiener

This study explored (1) whether there is evidence of visual input at so-called "subliminal" exposure durations, and (2) whether the response, if any, was a function of the thematic content of the stimulus. Thematic content (threatening versus non-threatening) and stimulus structure (angular versus curved) were varied independently under "subliminal," "part-cue," and "identification" exposure conditions. With Ss' reports and the frequency and latency of first eye movements (the "orienting reflex") as input indicators, there was no evidence of input differences as a function of thematic content at any exposure duration, and the "report" data were consistent with the eye-movement data.


Array
2021
pp. 100087
Author(s):
Peter Raatikainen
Jarkko Hautala
Otto Loberg
Tommi Kärkkäinen
Paavo Leppänen
...  
