Using gaze behavior to parcellate the explicit and implicit contributions to visuomotor learning

2018 ◽  
Vol 120 (4) ◽  
pp. 1602-1615 ◽  
Author(s):  
Anouk J. de Brouwer ◽  
Mohammed Albaghdadi ◽  
J. Randall Flanagan ◽  
Jason P. Gallivan

Successful motor performance relies on our ability to adapt to changes in the environment by learning novel mappings between motor commands and sensory outcomes. Such adaptation is thought to involve two distinct mechanisms: an implicit, error-based component linked to slow learning and an explicit, strategic component linked to fast learning and savings (i.e., faster relearning). Because behavior, at any given moment, is the resultant combination of these two processes, it has remained a challenge to parcellate their relative contributions to performance. The explicit component of visuomotor rotation (VMR) learning has recently been measured by having participants verbally report the aiming strategy they use to counteract the rotation. However, this procedure has been shown to magnify the explicit component. Here we tested whether task-specific eye movements, a natural component of reach planning but one poorly studied in motor learning tasks, can provide a direct readout of the state of the explicit component during VMR learning. We show, by placing targets on a visible ring and including a delay between target presentation and reach onset, that individual differences in gaze patterns during sensorimotor learning are linked to participants’ rates of learning and their expression of savings. Specifically, we find that participants who, during reach planning, naturally fixate an aimpoint rotated away from the target location show faster initial adaptation and readaptation 24 h later. Our results demonstrate that gaze behavior can not only uniquely identify individuals who implement cognitive strategies during learning but also reveal how their implementation is linked to differences in learning.

NEW & NOTEWORTHY Although it is increasingly well appreciated that sensorimotor learning is driven by two separate components, an error-based process and a strategic process, it has remained a challenge to identify their relative contributions to performance. Here we demonstrate that task-specific eye movements provide a direct readout of explicit strategies during sensorimotor learning in the presence of visual landmarks. We further show that individual differences in gaze behavior are linked to learning rate and savings.
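The aimpoint-based parcellation described above can be illustrated with a minimal sketch. Assuming per-trial reach directions and fixated aimpoint directions have already been extracted (the variable names and numbers below are hypothetical, not the authors' data or pipeline), the explicit component can be read out as the rotation of the fixated aimpoint away from the target, and the implicit component as the remainder of the reach rotation:

```python
import numpy as np

# Hypothetical per-trial data (angles in degrees, relative to the target):
# reach_angle: direction of the hand at movement onset
# aim_angle:   direction of the fixated aimpoint during reach planning
reach_angle = np.array([2.0, 10.0, 21.0, 28.0, 30.0])
aim_angle   = np.array([0.0,  8.0, 15.0, 18.0, 18.0])

# Explicit component: how far the fixated aimpoint is rotated away
# from the target (the strategic re-aiming the abstract describes).
explicit = aim_angle

# Implicit component: the part of the reach rotation not explained
# by the strategy, i.e., reach direction minus aim direction.
implicit = reach_angle - aim_angle

print("explicit:", explicit)
print("implicit:", implicit)
```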


2019 ◽  
Author(s):  
Lara Rösler ◽  
Stefan Göhring ◽  
Michael Strunz ◽  
Matthias Gamer

Much of our current understanding of social anxiety rests on the use of simplistic stimulus material in laboratory settings. Recent technological developments now allow the investigation of eye movements and physiological measures during real interactions with adequate recording quality. Considering the wealth of conflicting findings on gaze behavior in social anxiety, the current study aimed to unravel the mechanisms contributing to differential gaze patterns in a naturalistic setting, both in the general population and in social anxiety. We introduced participants with differing social anxiety symptoms to a waiting-room situation while recording heart rate and electrodermal activity using mobile sensors, and eye movements using mobile eye-tracking glasses. Fixations on the head of the confederate were least frequent in the initial waiting phase of the experiment, increased when the confederate was involved in a phone call, and were most pronounced during the actual conversation. Contrary to gaze-avoidance models of social anxiety, we did not observe any correlations between social anxiety and visual attention. Social anxiety was, however, associated with elevated heart rate throughout the entire experiment, suggesting that physiological hyperactivity constitutes a cardinal feature of the disorder.
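As a rough illustration of the correlational analyses reported above (all variable names and data below are hypothetical stand-ins, not the study's measures), one might compute:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-participant measures
anxiety    = rng.normal(50, 10, 40)                       # social anxiety score
head_fix   = rng.uniform(0.1, 0.6, 40)                    # proportion of head fixations
heart_rate = 70 + 0.3 * anxiety + rng.normal(0, 3, 40)    # mean heart rate (bpm)

# Gaze-avoidance models predict a negative anxiety-gaze correlation;
# the study found none, but did find an anxiety-heart-rate association.
r_gaze, p_gaze = pearsonr(anxiety, head_fix)
r_hr,   p_hr   = pearsonr(anxiety, heart_rate)
print(f"anxiety x head fixations: r={r_gaze:.2f}, p={p_gaze:.3f}")
print(f"anxiety x heart rate:     r={r_hr:.2f}, p={p_hr:.3f}")
```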


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jennifer Sudkamp ◽  
Mateusz Bocian ◽  
David Souto

To avoid collisions, pedestrians depend on their ability to perceive and interpret the visual motion of other road users. Eye movements influence motion perception, yet pedestrians’ gaze behavior has been little investigated. In the present study, we ask whether observers sample visual information differently when making two types of judgements based on the same virtual road-crossing scenario, and to what extent spontaneous gaze behavior affects those judgements. Participants performed, in succession, a speed and a time-to-arrival two-interval discrimination task on the same simple traffic scenario: a car approaching at a constant speed (varying from 10 to 90 km/h) on a single-lane road. On average, observers were able to discriminate vehicle speeds of around 18 km/h and times-to-arrival of 0.7 s. In both tasks, observers placed their gaze close to the center of the vehicle’s front plane while pursuing the vehicle; other areas of the visual scene were sampled infrequently. No differences were found in the average gaze behavior between the two tasks, and a pattern classifier (Support Vector Machine) trained on trial-level gaze patterns failed to reliably classify the task from the spontaneous eye movements it elicited. Saccadic gaze behavior could, however, predict time-to-arrival discrimination performance, demonstrating the relevance of gaze behavior for perceptual sensitivity in road-crossing situations.
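A minimal sketch of the pattern-classification step, assuming trial-level gaze features have already been extracted (the feature set, array shapes, and data below are hypothetical, not the study's):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical trial-level gaze features (e.g., mean gaze offset from
# the vehicle's front plane, pursuit gain, saccade count) and task labels
# (0 = speed judgement, 1 = time-to-arrival judgement).
X = rng.normal(size=(400, 3))
y = rng.integers(0, 2, size=400)

# Cross-validated accuracy near chance (~0.5), as reported in the study,
# would indicate the two tasks elicited indistinguishable gaze patterns.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```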


Cortex ◽  
2016 ◽  
Vol 85 ◽  
pp. 182-193 ◽  
Author(s):  
Rosanna K. Olsen ◽  
Vinoja Sebanayagam ◽  
Yunjo Lee ◽  
Morris Moscovitch ◽  
Cheryl L. Grady ◽  
...  

2021 ◽  
Vol 11 (7) ◽  
pp. 915
Author(s):  
Marianna Stella ◽  
Paul E. Engelhardt

In this study, we examined eye movements and comprehension in sentences containing a relative clause. To date, few studies have focused on syntactic processing in dyslexia, so one goal of the study is to help fill this gap in the experimental literature. A second goal is to contribute to the theoretical psycholinguistic debate concerning the cause and the location of the processing difficulty associated with object-relative clauses. We compared dyslexic readers (n = 50) to a group of non-dyslexic controls (n = 50). We also assessed two key individual-differences variables (working memory and verbal intelligence), which have been theorised to affect reading times and comprehension of subject- and object-relative clauses. The results showed that dyslexics and controls had similar comprehension accuracy. However, participants with dyslexia spent significantly longer reading the sentences than controls (i.e., a main effect of dyslexia). In general, sentence type did not interact with dyslexia status. With respect to individual differences and the theoretical debate, we found that the processing-difficulty difference between subject- and object-relative clauses was no longer significant once individual differences in working memory were controlled. Thus, our findings support theories that assume working memory demands are responsible for the processing difficulty incurred by (1) individuals with dyslexia and (2) object-relative clauses as compared with subject-relative clauses.
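The statistical logic of controlling for working memory can be sketched as follows, assuming trial-level reading times in a long-format table (the variable names, effect sizes, and simulated data are hypothetical, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_items = 20, 10

# Hypothetical trial-level reading times with a working-memory covariate
df = pd.DataFrame({
    "subject":     np.repeat(np.arange(n_subj), n_items),
    "clause_type": np.tile(["subject_rel", "object_rel"], n_subj * n_items // 2),
    "wm_score":    np.repeat(rng.normal(0, 1, n_subj), n_items),
})
df["reading_time"] = (
    3000 + 200 * (df["clause_type"] == "object_rel")
    - 150 * df["wm_score"] + rng.normal(0, 300, len(df))
)

# If the clause-type effect weakens once wm_score enters the model,
# that pattern mirrors the working-memory account described above.
model = smf.mixedlm("reading_time ~ clause_type + wm_score",
                    df, groups=df["subject"]).fit()
print(model.summary())
```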


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movements and visual stimuli have been used to analyze human visual attention intuitively. Gaze behavior studies have mainly presented statistical analyses of eye movements and human visual attention, with the eye movement data and the saliency map shown to analysts either as separate views or as merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore difficult to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to show that embedding saliency features within the visualization helps analysts understand the visual attention of an observer.
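One plausible building block of such a technique, sketched here under the assumption that a generic bottom-up saliency model stands in for the paper's saliency features (the file name and fixation coordinates are hypothetical; requires opencv-contrib-python):

```python
import cv2
import numpy as np

# Load a stimulus image and compute a bottom-up saliency map
# (spectral residual method from OpenCV's contrib saliency module).
image = cv2.imread("stimulus.png")               # hypothetical file
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = saliency.computeSaliency(image)    # float map in [0, 1]

# Hypothetical fixation coordinates (x, y) from an eye tracker
fixations = np.array([[120, 80], [300, 210], [260, 150]])

# Sample the saliency feature at each fixation: a simple visual clue
# linking where the observer looked to how salient that location was.
values = [sal_map[y, x] for x, y in fixations]
print("saliency at fixations:", np.round(values, 3))
```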


2018 ◽  
Vol 71 (9) ◽  
pp. 1860-1872 ◽  
Author(s):  
Stephen RH Langton ◽  
Alex H McIntyre ◽  
Peter JB Hancock ◽  
Helmut Leder

Research has established that a perceived eye gaze produces a concomitant shift in a viewer’s spatial attention in the direction of that gaze. The two experiments reported here investigate the extent to which the nature of the eye movement made by the gazer contributes to this orienting effect. On each trial in these experiments, participants were asked to make a speeded response to a target that could appear in a location toward which a centrally presented face had just gazed (a cued target) or in a location that was not the recipient of a gaze (an uncued target). The gaze cues consisted of either fast saccadic eye movements or slower smooth pursuit movements. Cued targets were responded to faster than uncued targets, and this gaze-cued orienting effect was found to be equivalent for each type of gaze shift both when the gazes were unpredictive of target location (Experiment 1) and counterpredictive of target location (Experiment 2). The results offer no support for the hypothesis that motion speed modulates gaze-cued orienting. However, they do suggest that motion of the eyes per se, regardless of the type of movement, may be sufficient to trigger an orienting effect.
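The gaze-cued orienting effect described above is simply the uncued-minus-cued reaction-time difference, computed per cue type. A toy computation (the RT values below are invented for illustration):

```python
import pandas as pd

# Hypothetical trial data: reaction times (ms) by cue validity and
# the type of eye movement shown by the central face.
trials = pd.DataFrame({
    "gaze_type": ["saccade"] * 4 + ["pursuit"] * 4,
    "cued":      [True, False, True, False] * 2,
    "rt":        [312, 335, 305, 331, 310, 334, 308, 330],
})

# The cueing effect is the uncued-minus-cued RT difference; the study
# found it to be equivalent for saccadic and smooth pursuit cues.
means = trials.groupby(["gaze_type", "cued"])["rt"].mean().unstack()
cueing_effect = means[False] - means[True]
print(cueing_effect)
```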


2021 ◽  
Vol 4 (1) ◽  
pp. 71-95
Author(s):  
Juha Lång ◽  
Hana Vrzakova ◽  
Lauri Mehtätalo

One of the main rules of subtitling states that subtitles should be formatted and timed so that viewers have enough time to read and understand the text while also following the picture. In this paper we examine the factors that influence the time viewers spend looking at subtitles, concentrating on the lexical and structural properties of the subtitles. The participant group (N = 14) watched a television documentary with Russian narration and Finnish subtitles (the participants’ native language) while their eye movements were tracked. Using a linear mixed-effects model, we identified significant effects of subtitle duration and character count on the time participants spent looking at the subtitles. The model also revealed significant inter-individual differences, despite the fact that the participant group was seemingly homogeneous. The findings underline the complexity of subtitled audiovisual material as a stimulus of cognitive processing. We provide a starting point for more comprehensive modelling of the factors involved in gaze behaviour when watching subtitled content.

Lay summary: Subtitles have become a popular method for watching foreign series and films, even in countries that have traditionally used dubbing. Because subtitles are visible to the viewer for only a short, limited time, they should be composed so that they are easy to read and so that the viewer also has time to follow the image. Nevertheless, the factors that affect the time it takes to read a subtitle are not well known. We wanted to find out what makes people who are watching subtitled television shows spend more time gazing at the subtitles. To answer this question, we recorded the eye movements of 14 participants while they watched a short, subtitled television documentary. We created a statistical model of gaze behaviour from the eye movement data and found that both the length of the subtitle and the time the subtitle is visible are separate contributing factors. We also discovered large differences between individual viewers. Our conclusion is that people process subtitled content in very different ways, but there are some common tendencies. Our model is a solid starting point for more comprehensive modelling of the gaze behaviour of people watching subtitled audiovisual material.
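A minimal sketch of the kind of linear mixed-effects model described above, with random intercepts per participant (the variable names, effect sizes, and simulated data are hypothetical, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_subs = 14, 30

# Hypothetical subtitle-level data: how long each participant spent
# looking at each subtitle, with duration and character count predictors.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_subs),
    "duration":    np.tile(rng.uniform(1.0, 6.0, n_subs), n_subj),  # s on screen
    "char_count":  np.tile(rng.integers(10, 80, n_subs), n_subj),
})
df["dwell_time"] = (0.3 * df["duration"] + 0.02 * df["char_count"]
                    + np.repeat(rng.normal(0, 0.2, n_subj), n_subs)  # person offsets
                    + rng.normal(0, 0.3, len(df)))

# Random intercepts per participant capture the inter-individual
# differences the study reports alongside the fixed effects.
model = smf.mixedlm("dwell_time ~ duration + char_count",
                    df, groups=df["participant"]).fit()
print(model.summary())
```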


2015 ◽  
Vol 9 (4) ◽  
Author(s):  
Songpo Li ◽  
Xiaoli Zhang ◽  
Fernando J. Kim ◽  
Rodrigo Donalisio da Silva ◽  
Diedra Gustafson ◽  
...  

Laparoscopic robots have been widely adopted in modern medical practice. However, explicitly interacting with these robots may increase the physical and cognitive load on the surgeon. An attention-aware robotic laparoscope system has been developed to free the surgeon from the technical limitations of visualization through the laparoscope. This system can implicitly recognize the surgeon's visual attention by interpreting the surgeon's natural eye movements using fuzzy logic, and then automatically steer the laparoscope to focus on that viewing target. Experimental results show that this system makes surgeon–robot interaction more effective and intuitive, and that it has the potential to make the execution of surgery smoother and faster.
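A toy sketch of fuzzy-logic attention recognition in the spirit described above (the membership functions, input features, and rule base are illustrative guesses, not the actual system's):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def attention_level(fix_duration_s, gaze_dispersion_deg):
    """Toy Mamdani-style inference: long, stable fixations -> high attention.

    The membership ranges and the two-rule base are illustrative, not
    the parameters used in the actual attention-aware laparoscope system.
    """
    long_fix  = tri(fix_duration_s, 0.3, 1.0, 2.0)
    stable    = tri(gaze_dispersion_deg, 0.0, 0.5, 1.5)
    short_fix = 1.0 - long_fix

    high = min(long_fix, stable)   # rule 1: long AND stable -> high attention
    low  = short_fix               # rule 2: short fixation  -> low attention

    # Defuzzify by a weighted average of the rule outputs (high=1, low=0)
    return (1.0 * high + 0.0 * low) / (high + low + 1e-9)

# A 1.2 s fixation with little scatter scores as attentive viewing
print(f"attention: {attention_level(1.2, 0.4):.2f}")
```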

