Is there predictive remapping of visual attention across eye movements?

2011 ◽  
Vol 11 (11) ◽  
pp. 242-242
Author(s):  
W. Harrison ◽  
R. Remington ◽  
J. Mattingley
Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movements and visual stimuli have been used to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are shown to analysts either as separate views or as a single merged view. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged view. It is therefore difficult to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express an observer's visual attention. These visual clues are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand an observer's visual attention.
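The core idea above (interpreting gaze against saliency features rather than overdrawing gaze traces on the saliency map) can be illustrated with a minimal sketch. The function names, the toy 3×3 saliency map, and the fixation coordinates are all illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: sample the saliency map under each fixation so gaze
# behavior can be summarized as saliency "visual clues" instead of raw traces.

def saliency_at_fixations(saliency_map, fixations):
    """Return the saliency value under each (x, y) fixation point.

    saliency_map: 2D list (rows of floats in [0, 1]).
    fixations: list of (x, y) integer pixel coordinates.
    """
    return [saliency_map[y][x] for x, y in fixations]

def mean_attended_saliency(saliency_map, fixations):
    """Average saliency of attended locations: a simple attention score."""
    values = saliency_at_fixations(saliency_map, fixations)
    return sum(values) / len(values) if values else 0.0

# Example: a 3x3 saliency map with a bright center and two fixations.
smap = [[0.1, 0.2, 0.1],
        [0.2, 0.9, 0.2],
        [0.1, 0.2, 0.1]]
fix = [(1, 1), (0, 0)]
print(mean_attended_saliency(smap, fix))  # (0.9 + 0.1) / 2 = 0.5
```

A high mean score would indicate that the observer's attention tracked the salient regions of the stimulus; a low score would flag gaze behavior that saliency alone does not explain.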


Autism ◽  
2019 ◽  
Vol 24 (3) ◽  
pp. 730-743 ◽  
Author(s):  
Emma Gowen ◽  
Andrius Vabalas ◽  
Alexander J Casson ◽  
Ellen Poliakoff

This study investigated whether reduced visual attention to an observed action might account for altered imitation in autistic adults. A total of 22 autistic and 22 non-autistic adults observed and then imitated videos of a hand producing sequences of movements that differed in vertical elevation while their hand and eye movements were recorded. Participants first performed a block of imitation trials with general instructions to imitate the action. They then performed a second block with explicit instructions to attend closely to the characteristics of the movement. Imitation was quantified according to how much participants modulated their movement between the different heights of the observed movements. In the general instruction condition, the autistic group modulated their movements significantly less compared to the non-autistic group. However, following instructions to attend to the movement, the autistic group showed equivalent imitation modulation to the non-autistic group. Eye movement recording showed that the autistic group spent significantly less time looking at the hand movement for both instruction conditions. These findings show that visual attention contributes to altered voluntary imitation in autistic individuals and have implications for therapies involving imitation as well as for autistic people’s ability to understand the actions of others.


2015 ◽  
Vol 9 (4) ◽  
Author(s):  
Songpo Li ◽  
Xiaoli Zhang ◽  
Fernando J. Kim ◽  
Rodrigo Donalisio da Silva ◽  
Diedra Gustafson ◽  
...  

Laparoscopic robots have been widely adopted in modern medical practice. However, explicitly interacting with these robots may increase the physical and cognitive load on the surgeon. An attention-aware robotic laparoscope system has been developed to free the surgeon from the technical limitations of visualization through the laparoscope. This system can implicitly recognize the surgeon's visual attention by interpreting the surgeon's natural eye movements using fuzzy logic, and then automatically steer the laparoscope to focus on that viewing target. Experimental results show that this system makes the surgeon–robot interaction more effective and intuitive, and that it has the potential to make the execution of surgery smoother and faster.
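The fuzzy-logic step described above can be sketched in miniature. The paper's actual rule base and inputs are not given here, so the membership shapes, thresholds, and units below are assumptions for illustration only:

```python
# Toy fuzzy-logic gaze interpretation: combine "long fixation" and
# "stable gaze" memberships with a fuzzy AND (min) into an attention score.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def attention_score(fixation_ms, dispersion_px):
    """Degree to which the surgeon is attending to the fixated target."""
    long_fix = tri(fixation_ms, 100, 400, 700)  # assumed duration range (ms)
    stable = tri(dispersion_px, -1, 0, 50)      # small dispersion = stable
    return min(long_fix, stable)                # fuzzy AND

# A long, stable fixation scores highly; a brief, jittery glance does not.
print(attention_score(400, 0))   # 1.0
print(attention_score(120, 40))  # low
```

In a system like the one described, a score above some threshold would trigger the laparoscope to re-center on the fixated target, so only deliberate fixations (not passing glances) steer the camera.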


Author(s):  
Kai Essig ◽  
Oleg Strogan ◽  
Helge Ritter ◽  
Thomas Schack

Various computational models of visual attention rely on the extraction of salient points or proto-objects, i.e., discrete units of attention, computed from bottom-up image features. In recent years, different solutions integrating top-down mechanisms have been implemented, as research has shown that although eye movements are initially influenced solely by bottom-up information, after some time goal-driven (high-level) processes dominate the guidance of visual attention towards regions of interest (Hwang, Higgins & Pomplun, 2009). However, even these improved modeling approaches are unlikely to generalize to a broader range of application contexts, because basic principles of visual attention, such as cognitive control, learning, and expertise, have thus far not been sufficiently taken into account (Tatler, Hayhoe, Land & Ballard, 2011). In some recent work, the authors showed the functional role and representational nature of long-term memory structures for human perceptual skills and motor control. Based on these findings, the chapter extends a widely applied saliency-based model of visual attention (Walther & Koch, 2006) in two ways: first, it computes the saliency map using the cognitive visual attention approach (CVA), which shows a correspondence between regions of high saliency values and regions of visual interest indicated by participants’ eye movements (Oyekoya & Stentiford, 2004). Second, it adds an expertise-based component (Schack, 2012) to represent the influence of the quality of mental representation structures in long-term memory (LTM) and the role of learning in the visual perception of objects, events, and motor actions.
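To make the bottom-up starting point concrete, here is a minimal center–surround sketch in the spirit of saliency-map models. This is a toy stand-in, not the Walther & Koch or CVA implementation: saliency is taken as the absolute difference between an intensity map and a local average (the "surround"):

```python
# Toy center-surround saliency: high where intensity differs from its
# neighborhood, as with a lone bright pixel on a dark background.

def box_blur(img):
    """3x3 mean filter with edge clamping; img is a 2D list of floats."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def center_surround_saliency(img):
    """|center - surround| for every pixel."""
    blurred = box_blur(img)
    return [[abs(c, ) if False else abs(c - s) for c, s in zip(row, brow)]
            for row, brow in zip(img, blurred)]

# A lone bright pixel pops out as the most salient location.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
sal = center_surround_saliency(img)
peak = max((v, (x, y)) for y, row in enumerate(sal) for x, v in enumerate(row))
print(peak[1])  # (2, 2)
```

The chapter's extensions would act on top of a map like this: CVA replaces the bottom-up map itself, and the expertise component re-weights regions according to long-term memory structures.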


2015 ◽  
Vol 69 ◽  
pp. 9-21 ◽  
Author(s):  
Christopher L. Striemer ◽  
Philippe A. Chouinard ◽  
Melvyn A. Goodale ◽  
Sandrine de Ribaupierre

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Afsheen Khan ◽  
Sally A. McFadden ◽  
Mark Harwood ◽  
Josh Wallman

When saccadic eye movements consistently fail to land on their intended target, saccade accuracy is maintained by gradually adapting the movement size of successive saccades. The proposed error signal for saccade adaptation has been based on the distance between where the eye lands and the visual target (retinal error). We studied whether the error signal could alternatively be based on the distance between the predicted and actual locus of attention after the saccade. Unlike conventional adaptation experiments, which surreptitiously displace the target once a saccade is initiated towards it, we instead attempted to draw attention away from the target by briefly presenting salient distractor images on one side of the target after the saccade. To test whether less salient, more predictable distractors would induce less adaptation, we separately used fixed random noise distractors. We found that both types of distractor induced a small degree of downward saccade adaptation, with significantly more adaptation for the more salient distractors. As in conventional adaptation experiments, upward adaptation was less effective, and salient distractors did not significantly increase amplitudes. We conclude that the locus of attention after the saccade can act as an error signal for saccade adaptation.
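The gradual adaptation mechanism described above is often modeled as a simple gain update driven by the post-saccadic error. The sketch below is an illustrative toy (the learning rate, target amplitude, and shifted reference are assumed values, not the study's parameters); the error term could equally represent retinal error or the attentional error proposed here:

```python
# Toy saccade-gain adaptation: after each saccade, nudge the gain in
# proportion to the signed post-saccadic error (negative = overshoot of
# the post-saccadic reference, so the gain decreases).

def adapt_gain(gain, target_amp, error_amp, rate=0.05):
    """One adaptation step; error_amp shares units with target_amp."""
    return gain + rate * (error_amp / target_amp)

gain = 1.0
for _ in range(20):
    landing = gain * 10.0   # saccade toward a 10-degree target
    error = 8.0 - landing   # reference consistently appears at 8 degrees
    gain = adapt_gain(gain, 10.0, error)

print(round(gain, 2))  # -> 0.87, adapting downward toward the 0.8 fixed point
```

Because the update is proportional to the error, the gain converges geometrically toward the ratio of the shifted reference to the target (here 8/10), mirroring the gradual amplitude changes measured in adaptation experiments.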

