Sounds are remapped across saccades

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Martin Szinte ◽  
David Aagten-Murphy ◽  
Donatas Jonikaitis ◽  
Luca Wollenberg ◽  
Heiner Deubel

Abstract
To achieve visual space constancy, our brain remaps eye-centered projections of visual objects across saccades. Here, we measured saccade trajectory curvature following the presentation of visual, auditory, and audiovisual distractors in a double-step saccade task to investigate if this stability mechanism also accounts for localized sounds. We found that saccade trajectories systematically curved away from the position at which either a light or a sound was presented, suggesting that both modalities are represented in eye-centered oculomotor centers. Importantly, the same effect was observed when the distractor preceded the execution of the first saccade. These results suggest that oculomotor centers keep track of visual, auditory and audiovisual objects by remapping their eye-centered representations across saccades. Furthermore, they argue for the existence of a supra-modal map which keeps track of multi-sensory object locations across our movements to create an impression of space constancy.
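
The curvature measure at the heart of this paradigm can be made concrete with a minimal sketch. The code below is an illustration of one common way to quantify saccade curvature (the mean perpendicular deviation of the gaze samples from the straight start-to-end line, normalized by amplitude and signed relative to the distractor's side), not the authors' analysis code; the trajectory and distractor position are hypothetical.

```python
import numpy as np

def saccade_curvature(xy, distractor):
    """Signed curvature of a saccade path: mean perpendicular deviation of the
    gaze samples from the straight start-to-end line, normalized by saccade
    amplitude. Positive = toward the distractor, negative = away from it."""
    xy = np.asarray(xy, dtype=float)              # gaze samples, shape (n, 2), deg
    start, end = xy[0], xy[-1]
    direction = end - start
    amplitude = np.linalg.norm(direction)
    unit = direction / amplitude
    normal = np.array([-unit[1], unit[0]])        # unit normal of the saccade vector
    deviation = (xy - start) @ normal             # signed distance from the line
    side = np.sign((np.asarray(distractor, dtype=float) - start) @ normal)
    return side * deviation.mean() / amplitude

# Hypothetical trial: a 10-deg rightward saccade that bows slightly upward,
# with a distractor located below the saccade path.
t = np.linspace(0.0, 1.0, 50)
path = np.column_stack([10.0 * t, 0.4 * np.sin(np.pi * t)])
print(saccade_curvature(path, distractor=(5.0, -8.0)))   # negative: curves away
```

With this sign convention, the curvature away from lights and sounds reported in the abstract corresponds to negative values.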


2006 ◽  
Vol 17 (10) ◽  
pp. 2364-2374 ◽  
Author(s):  
W. P. Medendorp ◽  
G. F. I. Kramer ◽  
O. Jensen ◽  
R. Oostenveld ◽  
J.-M. Schoffelen ◽  
...  

2017 ◽  
Vol 32 (6) ◽  
pp. 1347-1354 ◽  
Author(s):  
Zhenlan Jin ◽  
Shulin Yue ◽  
Junjun Zhang ◽  
Ling Li

2005 ◽  
Vol 94 (5) ◽  
pp. 3228-3248 ◽  
Author(s):  
Rebecca A. Berman ◽  
Laura M. Heiser ◽  
Richard C. Saunders ◽  
Carol L. Colby

Internal representations of the sensory world must be constantly adjusted to take movements into account. In the visual system, spatial updating provides a mechanism for maintaining a coherent map of salient locations as the eyes move. Little is known, however, about the pathways that produce updated spatial representations. In the present study, we asked whether direct cortico-cortical links are required for spatial updating. We addressed this question by investigating whether the forebrain commissures—the direct path between the two cortical hemispheres—are necessary for updating visual representations from one hemifield to the other. We assessed spatial updating in two split-brain monkeys using the double-step task, which involves saccades to two sequentially appearing targets. Accurate performance requires that the representation of the second target be updated to take the first saccade into account. We made two central discoveries regarding the pathways that underlie spatial updating. First, we found that split-brain monkeys exhibited a selective initial impairment on double-step sequences that required updating across visual hemifields. Second, and most surprisingly, these impairments were neither universal nor permanent: the monkeys were ultimately able to perform the across-hemifield sequences and, in some cases, this ability emerged rapidly. These findings indicate that direct cortical links provide the main substrate for updating visual representations, but they are not the sole substrate. Rather, a unified and stable representation of visual space is supported by a redundant cortico-subcortical network with a striking capacity for reorganization.
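
The updating demanded by the double-step task reduces to simple vector arithmetic, sketched below with hypothetical target coordinates (degrees of visual angle, fixation at the origin): once the eyes have landed on the first target, the stored eye-centered vector to the second target is only correct if the first saccade vector has been subtracted from it.

```python
import numpy as np

# Minimal sketch of the updating demanded by the double-step task, using
# hypothetical coordinates in degrees of visual angle with fixation at the origin.
fixation = np.array([0.0, 0.0])
target1 = np.array([8.0, 0.0])   # first target, 8 deg to the right
target2 = np.array([4.0, 6.0])   # second target, flashed before the first saccade

# Eye-centered vector to the second target, encoded while still fixating.
retinal_t2 = target2 - fixation

# After the first saccade the eyes sit on target1, so the stored vector is only
# valid if the first saccade vector has been subtracted from it (spatial updating).
first_saccade = target1 - fixation
retinal_t2_updated = retinal_t2 - first_saccade

print(retinal_t2_updated)   # [-4.  6.]: the updated location now falls in the
                            # opposite (left) hemifield, i.e. the across-hemifield
                            # case that was initially impaired in the study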


2020 ◽  
pp. 174702182096069 ◽  
Author(s):  
Christina B Reimer ◽  
Luke Tudge ◽  
Torsten Schubert

In the target–distractor saccade task, a target and an irrelevant distractor are presented simultaneously and participants make a saccade to the target. Findings usually show that as saccade latency increases, saccade trajectory deviation towards the distractor decreases. We presented this saccade task in two dual-task experiments to address the open question of whether performance of an auditory–manual task merely delays the execution of a saccade or also interferes with the spatial planning of its trajectory. We measured saccade latency as an index of the delay in execution and saccade trajectory deviation as an index of spatial planning. In Experiment 1, the auditory–manual task was a two-choice reaction time (two-CRT) task; in Experiment 2, it was a go/no-go task. Performing the two tasks in close temporal succession briefly delayed saccade execution but did not influence the spatial planning of the saccade trajectory. This delay was most pronounced when the auditory–manual task required the selection and execution of one of two possible manual responses (Experiment 1), smaller when it required the decision to execute a button press (go condition, Experiment 2), and absent when it required the decision to withhold a button press (no-go condition, Experiment 2). Taken together, it was the manual response, rather than the response selection process, of the auditory–manual task that delayed saccade execution, without impairing the spatial planning of the saccade trajectory.
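
The two dependent measures named in the abstract, latency and trajectory deviation, can be separated per trial roughly as follows. This is an illustrative sketch with assumed sample data and a conventional velocity-threshold onset criterion, not the authors' analysis pipeline.

```python
import numpy as np

def latency_and_deviation(t, xy, target, distractor, vel_threshold=30.0):
    """Illustrative per-trial measures: saccade latency from a velocity-threshold
    onset criterion (deg/s) and the initial direction of the saccade relative to
    the straight line to the target, signed + toward / - away from the distractor."""
    t = np.asarray(t, dtype=float)                 # sample times in seconds
    xy = np.asarray(xy, dtype=float)               # gaze samples, shape (n, 2), deg
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.diff(t)
    onset = int(np.argmax(speed > vel_threshold))  # first suprathreshold sample
    latency = t[onset] - t[0]

    start = xy[onset]
    early = xy[min(onset + 5, len(xy) - 1)] - start       # early saccade direction
    to_target = np.asarray(target, dtype=float) - start
    to_distractor = np.asarray(distractor, dtype=float) - start

    def signed_angle(v, w):                        # angle from v to w, in degrees
        return np.degrees(np.arctan2(v[0] * w[1] - v[1] * w[0], v @ w))

    deviation = signed_angle(to_target, early)
    side = np.sign(signed_angle(to_target, to_distractor))
    return latency, side * deviation

# Hypothetical trial: 200 ms of fixation at the origin, then a rightward saccade
# whose initial direction is angled slightly toward a distractor above the target.
t = np.arange(0.0, 0.4, 0.002)
x = np.clip((t - 0.2) * 100.0, 0.0, 10.0)
y = np.clip((t - 0.2) * 15.0, 0.0, 1.0) * (x < 10.0)
lat, dev = latency_and_deviation(t, np.column_stack([x, y]),
                                 target=(10.0, 0.0), distractor=(5.0, 8.0))
print(round(lat, 3), round(dev, 1))   # latency in s, deviation toward distractor in deg
```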


Perception ◽  
10.1068/p3298 ◽  
2003 ◽  
Vol 32 (1) ◽  
pp. 41-52 ◽  
Author(s):  
Doug J K Barrett ◽  
Mark F Bradshaw ◽  
David Rose

The locations of visual objects and events in the world are represented in a number of different coordinate frameworks. For example, a visual transient is known to attract (exogenous) attention and facilitate performance within an egocentric framework. However, when attention is allocated voluntarily to a particular visual feature (ie endogenous attention), the location of that feature appears to be variously encoded either within an allocentric framework or in a spatially invariant manner. In three experiments we investigated the importance of location for the allocation of endogenous attention and whether egocentric and/or allocentric spatial frameworks are involved. Primes and targets were presented in four conditions designed to vary systematically their spatial relationships in egocentric and allocentric coordinates. A reliable effect of egocentric priming was found in all three experiments, which suggests that endogenous shifts of attention towards targets defined by a particular feature operate in an egocentric representation of visual space. In addition, allocentric priming was also found for targets primed by their colour or shape. This suggests that attending to targets primed by nonspatial attributes results in facilitation that is localised in more than one coordinate frame of spatial reference.
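
The distinction between the two reference frames can be made explicit with a small sketch (hypothetical coordinates; the landmark shift below stands in for the kinds of manipulations used to dissociate the conditions): an egocentric code takes positions relative to the observer's fixation, an allocentric code relative to a scene landmark, so moving the landmark between prime and target preserves one relation while breaking the other.

```python
import numpy as np

# Egocentric coding: position relative to the observer (here, fixation).
def egocentric(position, fixation):
    return np.asarray(position, dtype=float) - np.asarray(fixation, dtype=float)

# Allocentric coding: position relative to a scene landmark (e.g. a display frame).
def allocentric(position, landmark):
    return np.asarray(position, dtype=float) - np.asarray(landmark, dtype=float)

fixation = np.array([0.0, 0.0])
frame_before = np.array([0.0, 0.0])      # landmark position at prime onset
prime = np.array([4.0, 2.0])             # primed location

# Between prime and target the landmark shifts; the target keeps the prime's
# allocentric coordinates, so its egocentric coordinates necessarily change.
frame_after = frame_before + np.array([3.0, 0.0])
target = frame_after + allocentric(prime, frame_before)

print(egocentric(prime, fixation), egocentric(target, fixation))           # differ
print(allocentric(prime, frame_before), allocentric(target, frame_after))  # match
```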


2016 ◽  
Vol 16 (14) ◽  
pp. 12 ◽  
Author(s):  
Anouk J. de Brouwer ◽  
W. Pieter Medendorp ◽  
Jeroen B. J. Smeets

2020 ◽  
Vol 6 (1) ◽  
pp. 469-489 ◽  
Author(s):  
Larry N. Thibos

In this review, I develop an empirically based model of optical image formation by the human eye, followed by neural sampling by retinal ganglion cells, to demonstrate the perceptual effects of blur, aliasing, and distortion of visual space in the brain. The optical model takes account of ocular aberrations and their variation across the visual field, in addition to variations of defocus due to variation of target vergence in three-dimensional scenes. Neural sampling by retinal ganglion cells with receptive field size and spacing that increase with eccentricity is used to visualize the neural image carried by the optic nerve to the brain. Anatomical parameters are derived from psychophysical studies of sampling-limited visual resolution of sinusoidal interference fringes. Retinotopic projection of the neural image onto brainstem nuclei reveals features of the neural image in a perceptually uniform brain space where location and size of visual objects may be measured by counting neurons.
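
One quantitative consequence of eccentricity-dependent ganglion-cell spacing is a sampling (Nyquist) limit on resolution that falls with eccentricity. The sketch below illustrates the idea only; the linear spacing rule and the constants in it are placeholder assumptions, not the psychophysically derived parameters of the review's model.

```python
import numpy as np

# Illustrative sketch: if receptive-field spacing grows linearly with
# eccentricity, s(E) = s0 * (1 + E / E2), the Nyquist limit of a triangular
# sampling mosaic falls off as 1 / s(E). The constants below (s0 in degrees,
# E2 in degrees) are placeholder values chosen only to give plausible numbers.
s0, E2 = 0.008, 2.0            # foveal spacing and eccentricity scaling constant

def nyquist_limit(eccentricity_deg):
    spacing = s0 * (1.0 + eccentricity_deg / E2)   # deg per sample
    return 1.0 / (np.sqrt(3.0) * spacing)          # cycles/deg, triangular mosaic

for ecc in (0, 5, 10, 20, 40):
    print(f"{ecc:>2} deg eccentricity -> ~{nyquist_limit(ecc):.0f} c/deg")
```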

