The role of eye movements in perceiving vehicle speed and time-to-arrival at the roadside

2021 · Vol 11 (1)
Author(s): Jennifer Sudkamp, Mateusz Bocian, David Souto

Abstract: To avoid collisions, pedestrians depend on their ability to perceive and interpret the visual motion of other road users. Eye movements influence motion perception, yet pedestrians’ gaze behavior has been little investigated. In the present study, we ask whether observers sample visual information differently when making two types of judgements based on the same virtual road-crossing scenario, and to what extent spontaneous gaze behavior affects those judgements. Participants performed, in succession, a speed and a time-to-arrival two-interval discrimination task on the same simple traffic scenario: a car approaching at a constant speed (varying from 10 to 90 km/h) on a single-lane road. On average, observers were able to discriminate vehicle speeds of around 18 km/h and times-to-arrival of 0.7 s. In both tasks, observers placed their gaze close to the center of the vehicle’s front plane while pursuing the vehicle; other areas of the visual scene were sampled infrequently. No differences were found in average gaze behavior between the two tasks, and a pattern classifier (a Support Vector Machine) trained on trial-level gaze patterns failed to reliably classify the task from the spontaneous eye movements it elicited. Saccadic gaze behavior could, however, predict time-to-arrival discrimination performance, demonstrating the relevance of gaze behavior for perceptual sensitivity in road crossing.
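The task-classification analysis described in this abstract can be sketched in a few lines. The snippet below is only an illustration of the general approach (an SVM cross-validated on trial-level gaze features); the feature names and data are hypothetical, not the study's actual features or pipeline. Both tasks are drawn from the same distribution here, mimicking the reported case where gaze patterns do not distinguish the tasks:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical trial-level gaze features (one row per trial), e.g. mean
# horizontal/vertical gaze position relative to the vehicle and pursuit gain.
n_trials = 200
X = rng.normal(size=(n_trials, 3))
y = rng.integers(0, 2, size=n_trials)  # 0 = speed task, 1 = time-to-arrival task

# RBF-kernel SVM with feature standardization, evaluated by 5-fold
# cross-validation; accuracy near 0.5 means the classifier cannot
# recover which task the observer was performing from gaze alone.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validating, rather than scoring on the training trials, is what makes a chance-level result interpretable as "the task is not decodable from gaze".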

2018 · Vol 120 (4) · pp. 1602-1615
Author(s): Anouk J. de Brouwer, Mohammed Albaghdadi, J. Randall Flanagan, Jason P. Gallivan

Successful motor performance relies on our ability to adapt to changes in the environment by learning novel mappings between motor commands and sensory outcomes. Such adaptation is thought to involve two distinct mechanisms: an implicit, error-based component linked to slow learning and an explicit, strategic component linked to fast learning and savings (i.e., faster relearning). Because behavior, at any given moment, is the combined output of these two processes, it has remained a challenge to parcellate their relative contributions to performance. The explicit component of visuomotor rotation (VMR) learning has recently been measured by having participants verbally report the aiming strategy they use to counteract the rotation. However, this procedure has been shown to magnify the explicit component. Here we tested whether task-specific eye movements, a natural component of reach planning but poorly studied in motor learning tasks, can provide a direct readout of the state of the explicit component during VMR learning. We show, by placing targets on a visible ring and including a delay between target presentation and reach onset, that individual differences in gaze patterns during sensorimotor learning are linked to participants’ rates of learning and their expression of savings. Specifically, we find that participants who, during reach planning, naturally fixate an aimpoint rotated away from the target location show faster initial adaptation and readaptation 24 h later. Our results demonstrate that gaze behavior can not only identify individuals who implement cognitive strategies during learning, but also reveal how their implementation is linked to differences in learning. NEW & NOTEWORTHY Although it is increasingly well appreciated that sensorimotor learning is driven by two separate components, an error-based process and a strategic process, it has remained a challenge to identify their relative contributions to performance. Here we demonstrate that task-specific eye movements provide a direct readout of explicit strategies during sensorimotor learning in the presence of visual landmarks. We further show that individual differences in gaze behavior are linked to learning rate and savings.
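The two-process decomposition this abstract builds on can be sketched numerically. In the illustrative snippet below (all values are hypothetical), the explicit component is taken as the gaze-derived aiming angle and the implicit component as the remainder of total hand compensation:

```python
import numpy as np

# Hypothetical per-trial angles in degrees for a visuomotor rotation:
# total hand compensation, and the explicit aiming angle, here assumed
# to be read out from the fixated aimpoint on the target ring.
hand_angle = np.array([5.0, 12.0, 20.0, 28.0])  # total adaptation
aim_angle = np.array([0.0, 5.0, 10.0, 15.0])    # explicit (gaze-derived) strategy

# Standard decomposition: implicit adaptation is whatever compensation
# remains after subtracting the explicit aiming component.
implicit = hand_angle - aim_angle
print(implicit)  # [ 5.  7. 10. 13.]
```

The appeal of a gaze-based readout is that, unlike verbal aiming reports, it yields the explicit term without prompting participants and thereby inflating it.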


2021
Author(s): Jennifer Sudkamp, David Souto

To navigate safely, pedestrians need to accurately perceive and predict other road users’ motion trajectories. Previous research has shown that the way visual information is sampled affects motion perception. Here we asked how overt attention affects time-to-arrival prediction of oncoming vehicles when viewed from a pedestrian’s point of view in a virtual road-crossing scenario. In three online experiments, we tested time-to-arrival prediction accuracy when observers pursued an approaching vehicle, fixated towards the road-crossing area, fixated towards the road close to the vehicle’s trajectory, or were free to view the scene. When the observer-vehicle distance was large, participants displayed a central tendency in their predicted arrival times, indicating that vehicle speed was insufficiently taken into account when estimating its time-to-arrival. This was especially the case when participants fixated towards the road-crossing area, resulting in time-to-arrival overestimation for slow-moving vehicles and underestimation for fast-moving vehicles. The central tendency bias decreased when participants pursued the vehicle or when the eccentricity between the fixation location and the vehicle trajectory was reduced. Our results identify an unfavorable visual sampling strategy as a potential risk factor for pedestrians and suggest that overt attention is best directed towards the approaching traffic to derive accurate time-to-arrival estimates. To support pedestrian safety, we conclude that the promotion of adequate visual sampling strategies should be considered in both traffic planning and safety training measures.


2020 · pp. 1464-1482
Author(s): Kristian Lukander, Miika Toivanen, Kai Puolamäki

We constantly move our gaze to gather high-acuity visual information from our environment. Conversely, as originally shown by Yarbus in his seminal work, the elicited gaze patterns carry information about our changing attentional focus while performing a task. Recently, the proliferation of machine learning algorithms has allowed the research community to test the idea of inferring, or even predicting, action and intent from gaze behavior. The ongoing miniaturization of gaze tracking technologies toward pervasive wearable solutions also makes it possible to study such inference in everyday activities outside research laboratories. This paper scopes the emerging field and reviews studies focusing on the inference of intent and action in naturalistic behavior. While the task-specific nature of gaze behavior and the variability of naturalistic setups present challenges, gaze-based inference holds clear promise for machine-based understanding of human intent and for future interactive solutions.


2019
Author(s): Lara Rösler, Stefan Göhring, Michael Strunz, Matthias Gamer

Much of our current understanding of social anxiety rests on the use of simplistic stimulus material in laboratory settings. Recent technological developments now allow the investigation of eye movements and physiological measures during real interactions with adequate recording quality. Considering the wealth of conflicting findings on gaze behavior in social anxiety, the current study aimed at unraveling the mechanisms contributing to differential gaze patterns in a naturalistic setting, both in the general population and in social anxiety. We introduced participants with differing social anxiety symptoms to a waiting room situation while recording heart rate and electrodermal activity using mobile sensors, and eye movements using mobile eye-tracking glasses. Fixations on the head of the confederate were rare during the initial waiting phase of the experiment, increased when the confederate was involved in a phone call, and were most pronounced during the actual conversation. Contrary to gaze-avoidance models of social anxiety, we did not observe any correlations between social anxiety and visual attention. Social anxiety was, however, associated with elevated heart rate throughout the entire experiment, suggesting that physiological hyperactivity constitutes a cardinal feature of the disorder.




2020
Author(s): David Harris, Mark Wilson, Tim Holmes, Toby de Burgh, Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.

