Spatial updating of attention across eye movements: A neuro-computational approach

2018
Author(s): Julia Bergelt, Fred H. Hamker

While scanning our environment, the retinal image changes with every saccade. Nevertheless, the visual system anticipates where an attended target will be next, and attention is updated to the new location. Recently, two different types of perisaccadic attentional updating were discovered: predictive remapping of attention before saccade onset (Rolfs, Jonikaitis, Deubel, & Cavanagh, 2011) and lingering of attention after the saccade (Golomb, Chun, & Mazer, 2008; Golomb, Pulido, Albrecht, Chun, & Mazer, 2010). Here we propose a neuro-computational model of area LIP, based on a previous model of perisaccadic space perception (Ziesche & Hamker, 2011, 2014), that accounts for both types of attentional updating at a neural systems level. The lingering effect originates from the late updating of the proprioceptive eye position signal, and the remapping from the early corollary discharge signal. We relate these results to predictive remapping of receptive fields and show that both phenomena arise from the same simple, recurrent neural circuit. Thus, together with the previously published results, the model provides a comprehensive framework for discussing multiple experimental observations that occur around saccades.


2008, Vol. 100 (4), pp. 1848-1867
Author(s): Sigrid M. C. I. van Wetter, A. John van Opstal

Perisaccadic mislocalization of briefly flashed visual targets is maximal in the direction of the saccade and varies systematically with the target-saccade onset delay. We have recently shown that under head-fixed conditions perisaccadic errors do not follow the quantitative predictions of current visuomotor models that explain these mislocalizations in terms of spatial updating. These models all assume sluggish eye-movement feedback and therefore predict that errors should vary systematically with the amplitude and kinematics of the intervening saccade. Instead, we reported that errors depend only weakly on the saccade amplitude. An alternative explanation for the data is that around the saccade the perceived target location undergoes a uniform transient shift in the saccade direction, but that the oculomotor feedback is, on average, accurate. This “visual shift” hypothesis predicts that errors will also remain insensitive to kinematic variability within much larger head-free gaze shifts. Here we test this prediction by presenting a brief visual probe near the onset of gaze saccades of between 40 and 70° amplitude. According to models with inaccurate gaze-motor feedback, the expected perisaccadic errors for such gaze shifts should be as large as 30° and depend heavily on the kinematics of the gaze shift. In contrast, we found that the actual peak errors were similar to those reported for much smaller saccadic eye movements, i.e., on average about 10°, and that neither gaze-shift amplitude nor kinematics plays a systematic role. Our data further corroborate the visual origin of perisaccadic mislocalization under open-loop conditions and strengthen the idea that efferent feedback signals in the gaze-control system are fast and accurate.



2018
Author(s): Tao He, Matthias Fritsche, Floris P. de Lange

Visual stability is thought to be mediated by predictive remapping of relevant object information from its current, pre-saccadic location to its future, post-saccadic location on the retina. However, it is heavily debated whether, and which, feature information is predictively remapped during the pre-saccadic interval. Using an orientation adaptation paradigm, we investigated whether predictive remapping occurs for stimulus features and whether adaptation itself is remapped. We found strong evidence for predictive remapping of a stimulus presented shortly before saccade onset, but no remapping of adaptation. Furthermore, we establish that predictive remapping also occurs for stimuli that are not saccade targets, pointing toward a ‘forward remapping’ process operating across the whole visual field. Together, our findings suggest that predictive feature remapping of object information plays an important role in mediating visual stability.



2010, Vol. 104 (5), pp. 2624-2633
Author(s): Catherine A. Dunn, Carol L. Colby

Our eyes are constantly moving, allowing us to attend to different visual objects in the environment. With each eye movement, a given object activates an entirely new set of visual neurons, yet we perceive a stable scene. One neural mechanism that may contribute to visual stability is remapping. Neurons in several brain regions respond to visual stimuli presented outside the receptive field when an eye movement brings the stimulated location into the receptive field. The stored representation of a visual stimulus is remapped, or updated, in conjunction with the saccade. Remapping depends on neurons being able to receive visual information from outside the classic receptive field. In previous studies, we asked whether remapping across hemifields depends on the forebrain commissures. We found that, when the forebrain commissures are transected, behavior dependent on accurate spatial updating is initially impaired but recovers over time. Moreover, neurons in lateral intraparietal cortex (LIP) continue to remap information across hemifields in the absence of the forebrain commissures. One possible explanation for the preserved across-hemifield remapping in split-brain animals is that neurons in a single hemisphere could represent visual information from both visual fields. In the present study, we measured receptive fields of LIP neurons in split-brain monkeys and compared them with receptive fields in intact monkeys. We found a small number of neurons with bilateral receptive fields in the intact monkeys. In contrast, we found no such neurons in the split-brain animals. We conclude that bilateral representations in area LIP cannot account for across-hemifield remapping after transection of the forebrain commissures.



2018, Vol. 34 (1), pp. 471-493
Author(s): George Mountoufaris, Daniele Canzio, Chiamaka L. Nwakeze, Weisheng V. Chen, Tom Maniatis

The ability of neurites of individual neurons to distinguish between themselves and neurites from other neurons and to avoid self (self-avoidance) plays a key role in neural circuit assembly in both invertebrates and vertebrates. Similarly, when individual neurons of the same type project into receptive fields of the brain, they must avoid each other to maximize target coverage (tiling). Counterintuitively, these processes are driven by highly specific homophilic interactions between cell surface proteins that lead to neurite repulsion rather than adhesion. Among these proteins in vertebrates are the clustered protocadherins (Pcdhs), and key to their function is the generation of enormous cell surface structural diversity. Here we review recent advances in understanding how a Pcdh cell surface code is generated by stochastic promoter choice; how this code is amplified and read by homophilic interactions between Pcdh complexes at the surface of neurons; and, finally, how the Pcdh code is translated to cellular function, which mediates self-avoidance and tiling and thus plays a central role in the development of complex neural circuits. Not surprisingly, Pcdh mutations that diminish homophilic interactions lead to wiring defects and abnormal behavior in mice, and sequence variants in the Pcdh gene cluster are associated with autism spectrum disorders in family-based genetic studies in humans.



1979, Vol. 204 (1157), pp. 477-484

It is argued that those neural systems (such as that responsible for stereoscopic vision) that have the greatest precision of operation are the most likely, during their developmental construction, to take advantage of information supplied by their own input. There is evidence that binocularly driven neurons in the kitten’s visual cortex do indeed become modified in their synaptic organization during early visual experience, in a manner that enhances the specificity of binocular interaction and ensures that the ranges of positional and orientational disparities of the receptive fields become matched, within limits, to the nature of the actual stimulation encountered by the animal.



2004, Vol. 16 (8), pp. 1579-1600
Author(s): Eric K. C. Tsang, Bertram E. Shi

The relative depth of objects causes small shifts in the left and right retinal positions of these objects, called binocular disparity. This letter describes an electronic implementation of a single binocularly tuned complex cell based on the binocular energy model, which has been proposed to model disparity-tuned complex cells in the mammalian primary visual cortex. Our system consists of two silicon retinas representing the left and right eyes, two silicon chips containing retinotopic arrays of spiking neurons with monocular Gabor-type spatial receptive fields, and logic circuits that combine the spike outputs to compute a disparity-selective complex cell response. The tuned disparity can be adjusted electronically by introducing either position or phase shifts between the monocular receptive field profiles. Mismatch between the monocular receptive field profiles caused by transistor mismatch can degrade the relative responses of neurons tuned to different disparities. In our system, the relative responses between neurons tuned by phase encoding are better matched than those of neurons tuned by position encoding. Our numerical sensitivity analysis indicates that the relative responses of phase-encoded neurons are least sensitive to the receptive field parameters that vary the most in our system. We conjecture that this robustness may be one reason for the existence of phase-encoded disparity-tuned neurons in biological neural systems.
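As an illustrative sketch (a toy software model, not the authors' silicon implementation), the binocular energy model behind this circuit can be written in a few lines: a quadrature pair of monocular Gabor filters is applied per eye, the left and right simple-cell responses are summed, and the sums are squared and added. A phase offset between the left and right profiles sets the cell's preferred disparity (phase encoding).

```python
import numpy as np

def gabor(x, sigma=1.0, freq=1.0, phase=0.0):
    # 1-D Gabor receptive field profile: Gaussian envelope times a cosine carrier
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def binocular_energy(img_left, img_right, x, dphase=0.0):
    """Binocular energy model response of a single complex cell.

    A quadrature pair of monocular simple cells (cosine/sine phases) is
    summed across the two eyes, then squared and added. The phase offset
    `dphase` between the left and right profiles tunes the cell to a
    nonzero disparity (phase encoding).
    """
    resp = 0.0
    for q in (0.0, np.pi / 2):                     # quadrature pair
        s_left = np.dot(gabor(x, phase=q), img_left)
        s_right = np.dot(gabor(x, phase=q + dphase), img_right)
        resp += (s_left + s_right) ** 2            # squared binocular sum
    return resp
```

With identical left and right images, a cell with `dphase=0` (tuned to zero disparity) responds strongly, while an anti-phase cell (`dphase=pi`) is silenced, reproducing the basic disparity selectivity of the model.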



2019, Vol. 31 (6), pp. 1015-1047
Author(s): John A. Berkowitz, Tatyana O. Sharpee

Quantifying mutual information between inputs and outputs of a large neural circuit is an important open problem in both machine learning and neuroscience. However, evaluation of the mutual information is known to be generally intractable for large systems due to the exponential growth in the number of terms that need to be evaluated. Here we show how information contained in the responses of large neural populations can be effectively computed provided the input-output functions of individual neurons can be measured and approximated by a logistic function applied to a potentially nonlinear function of the stimulus. Neural responses in this model can remain sensitive to multiple stimulus components. We show that the mutual information in this model can be effectively approximated as a sum of lower-dimensional conditional mutual information terms. The approximations become exact in the limit of large neural populations and for certain conditions on the distribution of receptive fields across the neural population. We empirically find that these approximations continue to work well even when the conditions on the receptive field distributions are not fulfilled. The computing cost for the proposed methods grows linearly in the dimension of the input and compares favorably with other approximations.
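For intuition about why exact evaluation becomes intractable, here is a brute-force sketch (hypothetical toy code, not the authors' approximation) that computes I(S;R) exactly for a small population of conditionally independent logistic (Bernoulli) neurons by enumerating all 2^N response patterns; the loop over patterns is precisely the exponential cost the paper's lower-dimensional decomposition avoids.

```python
import itertools
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def exact_mutual_information(stimuli, weights, prior=None):
    """Exact I(S; R) in bits for a population of conditionally independent
    logistic neurons with p(r_i = 1 | s) = sigmoid(w_i . s).

    Enumerates all 2^N binary response patterns, which is why this
    brute-force route is infeasible for large populations.
    """
    n_neurons = weights.shape[0]
    n_stim = stimuli.shape[0]
    if prior is None:
        prior = np.full(n_stim, 1.0 / n_stim)
    # p(r_i = 1 | s) for every stimulus/neuron pair
    p_spike = sigmoid(stimuli @ weights.T)          # shape (n_stim, n_neurons)
    mi = 0.0
    for pattern in itertools.product((0, 1), repeat=n_neurons):
        r = np.array(pattern)
        # p(r | s) under conditional independence across neurons
        p_r_given_s = np.prod(np.where(r, p_spike, 1 - p_spike), axis=1)
        p_r = np.dot(prior, p_r_given_s)
        if p_r <= 0:
            continue
        mask = p_r_given_s > 0
        mi += np.sum(prior[mask] * p_r_given_s[mask]
                     * np.log2(p_r_given_s[mask] / p_r))
    return mi
```

For two equiprobable stimuli and reliable neurons, the result approaches the 1-bit entropy of the stimulus; with uninformative (zero-weight) neurons it is zero.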



2021
Author(s): Ifedayo-Emmanuel Adeyefa-Olasupo, Zixuan Xiao, Anirvan S. Nandy

Saccadic eye movements allow us to bring visual objects of interest to high-acuity central vision. Although saccades cause large displacements of retinal images, our percept of the visual world remains stable. Predictive remapping — the ability of cells in retinotopic brain areas to transiently exhibit spatio-temporal retinotopic shifts beyond the spatial extent of their classical receptive fields — has been proposed as a primary mechanism that mediates this seamless visual percept. Despite the well-documented effects of predictive remapping, no study to date has provided a mechanistic account of the neural computations and architecture that actively mediate this ubiquitous phenomenon. Motivated by the spatio-temporal dynamics of peri-saccadic sensitivity to probe stimuli in human subjects, we propose a novel neurobiologically inspired phenomenological model in which the underlying peri-saccadic attentional and oculomotor signals manifest as three temporally overlapping forces that act on retinotopic brain areas. These three forces – a compressive one toward the center of gaze, a convergent one toward the saccade target, and a translational one parallel to the saccade trajectory – act in an inverse force field and specify the spatio-temporal window of predictive remapping of population receptive fields.



2019, Vol. 31 (10), pp. 1964-1984
Author(s): Yuxiu Shao, Binxu Wang, Andrew T. Sornborger, Louis Tao

Cortical oscillations are central to information transfer in neural systems. Significant evidence supports the idea that coincident spike input can allow the neural threshold to be overcome and spikes to be propagated downstream in a circuit. An observation of oscillations in neural circuits is therefore an indication that repeated synchronous spiking may be enabling information transfer. However, for memory transfer, in which synaptic weights must be transferred from one neural circuit (region) to another, what is the mechanism? Here, we present a synaptic transfer mechanism whose structure provides some understanding of the phenomena implicated in memory transfer, including nested oscillations at various frequencies. The circuit is based on the principle of pulse-gated, graded information transfer between neural populations.



2012, Vol. 24 (11), pp. 2946-2963
Author(s): N. Andrew Browning

Time-to-contact (TTC) estimation is beneficial for visual navigation. It can be estimated from an image projection, either in a camera or on the retina, by measuring the rate of expansion of an object: when the expansion rate (E) is properly defined, TTC = 1/E. Primate dorsal MST cells have receptive field structures suited to the estimation of expansion and TTC. However, the role of MST cells in TTC estimation has been discounted because of their large receptive fields, because neither they nor earlier visual areas appear to decompose the motion field to estimate divergence, and because of a lack of experimental data. This letter demonstrates mathematically that template models of dorsal MST cells can be constructed such that the output of the template match provides an accurate and robust estimate of TTC. The template match extracts the relevant components of the motion field and scales them such that the output of each component of the template match is an estimate of expansion. It then combines these component estimates to provide a mean estimate of expansion across the object. The output of model MST provides a direct measure of TTC. The ViSTARS model of primate visual navigation was updated to incorporate the modified templates. In ViSTARS, as in primates, speed is represented as a population code in V1 and MT, and a population code for speed complicates TTC estimation from a template match. Results presented in this letter demonstrate that the updated template model of MST accurately codes TTC across a population of model MST cells. We conclude that the updated template model of dorsal MST simultaneously and accurately codes TTC and heading regardless of receptive field size, object size, or motion representation. It is possible that a subpopulation of MST cells in primates represents expansion in this way.
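The relation TTC = 1/E can be illustrated with a minimal sketch (an idealized flow-field computation, not the ViSTARS templates themselves): for pure expansion about the focus of expansion, the velocity at each image point is proportional to its position, v = E·x, so projecting each velocity onto its radial direction and normalizing by eccentricity recovers E at every point, and averaging over the object, in the spirit of a template match, gives a robust expansion estimate whose inverse is the time to contact.

```python
import numpy as np

def ttc_from_flow(positions, velocities):
    """Estimate time-to-contact from an expanding 2-D motion field.

    positions:  (n, 2) image coordinates relative to the focus of expansion
    velocities: (n, 2) image velocities at those points

    For pure expansion, v = E * x, so (v . x) / |x|^2 = E at each point.
    Averaging across the object and inverting gives TTC = 1 / E.
    """
    radial = np.sum(velocities * positions, axis=1)   # v . x per point
    r2 = np.sum(positions**2, axis=1)                 # |x|^2 per point
    expansion = np.mean(radial / r2)                  # mean expansion rate E
    return 1.0 / expansion
```

For a field expanding with rate E = 0.5 per unit time, the estimator returns a TTC of 2 time units regardless of where on the object the samples fall, mirroring the letter's point that the template output is insensitive to object size.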


