Abnormal temporal dynamics of visual attention in spatial neglect patients

Nature ◽  
1997 ◽  
Vol 385 (6612) ◽  
pp. 154-156 ◽  
Author(s):  
Masud Husain ◽  
Kimron Shapiro ◽  
Jesse Martin ◽  
Christopher Kennard

2011 ◽  
pp. 944-962
Author(s):  
Florian Schmidt-Weigand

This chapter introduces eye tracking as a method for observing how the split of visual attention is managed in multimedia learning. It reviews the eye tracking literature on multirepresentational material, with special emphasis on recent studies of viewing behavior in learning from dynamic versus static visualizations and on the pacing of presentation. The argument presented is that learners’ viewing behavior is affected by design characteristics of the learning material: characteristics such as the dynamics of the visualization or the pace of presentation influence learners’ visual strategy only slightly, whereas user interaction (i.e., a learner-controlled pace of presentation) leads to a different visual strategy than system-paced presentation. Taking viewing behavior as an indicator of how split attention is managed, the drawbacks of a split-source format in multimedia learning can be overcome by implementing user interaction that allows learners to adapt the material to their perceptual and individual characteristics.
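As a concrete illustration of how viewing behavior can serve as such an indicator, the sketch below computes dwell time per area of interest (AOI) and the number of gaze transitions between a text region and a picture region from fixation data. The AOI layout, fixation format, and function names are assumptions for illustration only, not taken from the chapter.

```python
# Hedged sketch: quantifying how attention is split between a text region and a
# picture region from eye-tracking fixations, via dwell time per area of
# interest (AOI) and the number of gaze transitions between AOIs.
def classify_aoi(x, y, aois):
    """aois: {name: (x0, y0, x1, y1)}; returns the AOI containing (x, y) or None."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def split_attention_summary(fixations, aois):
    """fixations: list of (x, y, duration_ms) tuples."""
    dwell = {name: 0 for name in aois}
    transitions, previous = 0, None
    for x, y, dur in fixations:
        aoi = classify_aoi(x, y, aois)
        if aoi is not None:
            dwell[aoi] += dur
            if previous is not None and aoi != previous:
                transitions += 1
            previous = aoi
    return {"dwell_ms": dwell, "aoi_transitions": transitions}

# Example layout: text on the left, animation/picture on the right (assumed screen of 1280x768)
aois = {"text": (0, 0, 640, 768), "picture": (641, 0, 1280, 768)}
```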


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Jongmin Moon ◽  
Seonggyu Choe ◽  
Seul Lee ◽  
Oh-Sang Kwon

Neurocase ◽  
1996 ◽  
Vol 2 (5) ◽  
pp. 441a-447
Author(s):  
A. Worthington

2012 ◽  
Vol 33 (5) ◽  
pp. 1012.e1-1012.e10 ◽  
Author(s):  
Frédéric Peters ◽  
Anne-Marie Ergis ◽  
Serge Gauthier ◽  
Bénédicte Dieudonné ◽  
Marc Verny ◽  
...  

2017 ◽  
Vol 29 (5) ◽  
pp. 911-918 ◽  
Author(s):  
Dongyun Li ◽  
Christopher Rorden ◽  
Hans-Otto Karnath

A widely debated question concerns whether or not spatial and nonspatial components of visual attention interact in attentional performance. Spatial neglect is a common consequence of brain injury in which individuals fail to respond to stimuli presented on their contralesional side. It has been argued that, beyond the spatial bias, these individuals also tend to exhibit nonspatial perceptual deficits. Here we demonstrate that the “nonspatial” deficits affecting the temporal dynamics of attentional deployment are in fact modulated by spatial position. Specifically, we observed that the pathological attentional blink of chronic neglect is enhanced when stimuli are presented on the contralesional side of the trunk while retinal and head-centered coordinates are kept constant. We did not find this pattern in right brain-damaged patients without neglect or in patients who had recovered from neglect. Our work suggests that the nonspatial attentional deficits observed in neglect are heavily modulated by egocentric spatial position. This provides strong evidence against models that posit independent modules for spatial and nonspatial attentional functions, while also demonstrating that trunk position plays an important role in neglect.
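For concreteness, the sketch below illustrates the kind of design this abstract describes, under stated assumptions rather than the study's actual parameters: an RSVP attentional-blink task in which the T1-T2 lag is varied while the trunk is rotated to the left or right of a fixed head and eye midline, so retinal and head-centered coordinates stay constant and only trunk-centered position changes. Lag values, repeat counts, and function names are hypothetical.

```python
# Hedged sketch of a trunk-rotation attentional-blink design: T1-T2 lag is
# crossed with trunk rotation (stimuli on the contra- vs. ipsilesional side of
# the trunk) while fixation stays on the head/eye midline. Illustrative only.
from itertools import product
import random

LAGS_MS = [100, 200, 300, 500, 700]      # T1-T2 stimulus onset asynchronies (assumed)
TRUNK_ROTATIONS = ["left", "right"]      # trunk rotated relative to the fixed midline
N_REPEATS = 20                           # trials per cell (assumed)

def build_trials(seed=0):
    trials = [{"lag_ms": lag, "trunk_rotation": rot}
              for lag, rot in product(LAGS_MS, TRUNK_ROTATIONS)
              for _ in range(N_REPEATS)]
    random.Random(seed).shuffle(trials)
    return trials

def blink_curve(trials, t2_correct):
    """T2|T1 accuracy per (trunk rotation, lag); t2_correct is a list of bools."""
    acc = {}
    for trial, ok in zip(trials, t2_correct):
        key = (trial["trunk_rotation"], trial["lag_ms"])
        hits, n = acc.get(key, (0, 0))
        acc[key] = (hits + int(ok), n + 1)
    return {key: hits / n for key, (hits, n) in acc.items()}
```

A deeper attentional blink (lower T2|T1 accuracy at short lags) in the contralesional trunk condition, with retinal input held constant, is the pattern the abstract reports for chronic neglect patients.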


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Dario Zanca ◽  
Marco Gori ◽  
Stefano Melacci ◽  
Alessandra Rufa

Visual attention refers to the human brain’s ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged to select a single location to be attended for further, more complex computation and reasoning. Its computational description is challenging, especially if the temporal dynamics of the process are taken into account. Numerous methods for estimating saliency have been proposed over the last three decades. They achieve near-perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry, which the biological hardware implements to select the location of maximum saliency towards which overt attention is directed. In this paper we propose a gravitational model of attentional shifts: every feature acts as an attractor, and shifts result from the joint effect of all attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all.
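To make the attractor idea concrete, here is a minimal sketch (not the authors' implementation) of a gravitational scanpath model: every location in a feature map pulls the gaze point with a force proportional to its salience and inversely proportional to squared distance, and a simple inhibition-of-return term keeps attention shifting rather than settling on a single peak. Function and parameter names (e.g. `gravitational_scanpath`, `ior_decay`) are illustrative assumptions.

```python
# Minimal sketch of a gravitational model of attentional shifts: salient
# locations act as attractors, and the gaze point moves under their joint pull
# plus inhibition of return. Illustrative only.
import numpy as np

def gravitational_scanpath(feature_map, n_steps=200, dt=0.1,
                           ior_decay=0.99, ior_strength=2.0, eps=1e-3):
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]                      # attractor coordinates
    mass = feature_map / (feature_map.sum() + eps)   # normalized "masses"
    ior = np.zeros_like(feature_map)                 # inhibition-of-return map

    pos = np.array([h / 2.0, w / 2.0])               # start at the center
    vel = np.zeros(2)
    fixations = [pos.copy()]

    for _ in range(n_steps):
        dy, dx = ys - pos[0], xs - pos[1]
        dist2 = dy**2 + dx**2 + eps
        # Effective mass: salience minus accumulated inhibition
        m = np.clip(mass - ior_strength * ior, 0.0, None)
        # Inverse-square attraction summed over all attractors
        force = np.array([(m * dy / dist2**1.5).sum(),
                          (m * dx / dist2**1.5).sum()])
        vel = 0.9 * vel + dt * force                 # damped integration
        pos = np.clip(pos + dt * vel, [0, 0], [h - 1, w - 1])
        # Deposit inhibition around the current position, then let it decay
        ior += np.exp(-dist2 / 50.0) * 0.01
        ior *= ior_decay
        fixations.append(pos.copy())
    return np.array(fixations)

# Usage: a toy feature map with two salient blobs
fmap = np.zeros((64, 64))
fmap[15:20, 15:20] = 1.0
fmap[45:50, 40:45] = 0.8
path = gravitational_scanpath(fmap)
print(path[:5])
```

Note that no explicit WTA step ever selects a single maximum; shifts emerge from the joint pull of all attractors, which is the conceptual difference the abstract highlights.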


2007 ◽  
Vol 15 (1) ◽  
pp. 115-122 ◽  
Author(s):  
Muriel Boucart ◽  
Nawal Waucquier ◽  
George-Andrew Michael ◽  
Christian Libersa
