Spatial tuning shifts increase the discriminability and fidelity of population codes in visual cortex

2016 ◽  
Author(s):  
Vy A. Vo ◽  
Thomas C. Sprague ◽  
John T. Serences

ABSTRACT
Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex.

SIGNIFICANCE STATEMENT
While changes in the gain and size of RFs have dominated our view of how attention modulates information codes of visual space, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses.
Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in the precision of representations based on larger populations of voxels. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information in sensory cortex. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings.
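The abstract's central manipulation — recomputing a population-level information measure after applying or removing vRF position shifts — can be illustrated with a toy model. The sketch below is not the paper's analysis; every parameter (RF width, shift rule, noise model) is a hypothetical choice. It tiles 1D visual space with Gaussian voxel RFs, models attention as an attractive shift of RF centers toward the attended location, and compares linear Fisher information (a standard fine-discriminability measure, here assuming independent unit-variance noise) at the target before and after the shift.

```python
import numpy as np

def fisher_info(centers, x, sigma=2.0):
    """Linear Fisher information at stimulus location x for a population of
    Gaussian tuning curves with independent, unit-variance noise:
    I(x) = sum_i f_i'(x)**2."""
    deriv = -(x - centers) / sigma**2 * np.exp(-(x - centers)**2 / (2 * sigma**2))
    return np.sum(deriv**2)

centers = np.linspace(-10, 10, 41)   # vRF centers tiling 1D visual space (deg)
target = 0.0                         # attended location

# Attention as an attractive position shift: centers move toward the target,
# with the pull falling off with distance (all constants illustrative)
pull = 0.5 * np.exp(-(centers - target)**2 / (2 * 4.0**2))
shifted = centers + pull * (target - centers)

info_base = fisher_info(centers, target)
info_shift = fisher_info(shifted, target)
print(f"Fisher information at target: {info_base:.3f} -> {info_shift:.3f}")
```

In this toy version, pulling RF centers toward the target increases the information available for fine discriminations near it, consistent with the claim that position shifts, rather than size or gain changes, drive the population-level enhancement.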


1998 ◽  
Vol 78 (2) ◽  
pp. 467-485 ◽  
Author(s):  
CHARLES D. GILBERT

Gilbert, Charles D. Adult Cortical Dynamics. Physiol. Rev. 78: 467–485, 1998. — There are many influences on our perception of local features. What we see is not strictly a reflection of the physical characteristics of a scene but instead is highly dependent on the processes by which our brain attempts to interpret the scene. As a result, our percepts are shaped by the context within which local features are presented, by our previous visual experiences, operating over a wide range of time scales, and by our expectation of what is before us. The substrate for these influences is likely to be found in the lateral interactions operating within individual areas of the cerebral cortex and in the feedback from higher to lower order cortical areas. Even at early stages in the visual pathway, cells are far more flexible in their functional properties than previously thought. It had long been assumed that cells in primary visual cortex had fixed properties, passing along the product of a stereotyped operation to the next stage in the visual pathway. Any plasticity dependent on visual experience was thought to be restricted to a period early in the life of the animal, the critical period. Furthermore, the assembly of contours and surfaces into unified percepts was assumed to take place at high levels in the visual pathway, whereas the receptive fields of cells in primary visual cortex represented very small windows on the visual scene. These concepts of spatial integration and plasticity have been radically modified in the past few years. The emerging view is that even at the earliest stages in the cortical processing of visual information, cells are highly mutable in their functional properties and are capable of integrating information over a much larger part of visual space than originally believed.



eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Jen-Chun Hsiang ◽  
Keith P Johnson ◽  
Linda Madisen ◽  
Hongkui Zeng ◽  
Daniel Kerschensteiner

Neurons receive synaptic inputs on extensive neurite arbors. How information is organized across arbors and how local processing in neurites contributes to circuit function is mostly unknown. Here, we used two-photon Ca2+ imaging to study visual processing in VGluT3-expressing amacrine cells (VG3-ACs) in the mouse retina. Contrast preferences (ON vs. OFF) varied across VG3-AC arbors depending on the laminar position of neurites, with ON responses preferring larger stimuli than OFF responses. Although arbors of neighboring cells overlap extensively, imaging population activity revealed continuous topographic maps of visual space in the VG3-AC plexus. All VG3-AC neurites responded strongly to object motion, but remained silent during global image motion. Thus, VG3-AC arbors limit vertical and lateral integration of contrast and location information, respectively. We propose that this local processing enables the dense VG3-AC plexus to contribute precise object motion signals to diverse targets without distorting target-specific contrast preferences and spatial receptive fields.



2018 ◽  
Author(s):  
Adam P. Morris ◽  
Bart Krekelberg

Summary
Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina – and propagated throughout the visual cortical hierarchy – is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded “eye tracker” that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in V1 during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies 1–4, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of (stationary) gaze direction. This decoded signal not only tracked the eye accurately during fixation, but also during fast and slow eye movements, even though the decoder had not been exposed to data from these behavioural states. Moreover, this signal lagged the real eye by approximately the time it took for new visual information to travel from the retina to cortex. Using simulations, we show that this V1 eye position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable position in the world.
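The decoding step described above — training on fixation data, then reading out a metric gaze estimate — can be sketched with synthetic data. Everything below (Gaussian eye-position tuning, noise level, ridge penalty) is an assumption for illustration, not the paper's actual model; the point is only that a linear decoder fit on stationary fixations can generalize to a smooth, pursuit-like trajectory it never saw.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_train = 50, 400
pref = np.linspace(-20, 20, n_neurons)     # hypothetical preferred gaze directions (deg)

def population_response(gaze):
    """Toy V1 rates: Gaussian eye-position tuning plus additive noise."""
    rate = np.exp(-(np.asarray(gaze)[:, None] - pref[None, :])**2 / (2 * 8.0**2))
    return rate + 0.05 * rng.standard_normal(rate.shape)

gaze_train = rng.uniform(-15, 15, n_train)  # fixation positions (training set)
X = np.column_stack([population_response(gaze_train), np.ones(n_train)])

# Ridge-regularized linear decoder from population activity to gaze direction
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ gaze_train)

gaze_test = np.linspace(-10, 10, 60)        # a smooth, pursuit-like trajectory
Xt = np.column_stack([population_response(gaze_test), np.ones(60)])
rmse = np.sqrt(np.mean((Xt @ w - gaze_test)**2))
print(f"decoding RMSE on unseen trajectory: {rmse:.2f} deg")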



Author(s):  
Brian Rogers

‘The physiology and anatomy of the visual system’ describes what we have learned from neurophysiology and anatomy over the past eighty years and what this tells us about the meaning of the circuits involved in visual information processing. It explains how psychologists and physiologists use the terms ‘mechanism’ and ‘process’. For physiologists, a mechanism is linked to the actions of individual neurons, neural pathways, and the ways in which the neurons are connected up. For psychologists, the term is typically used to describe the processes the neural circuits may carry out. The human retina is described with explanations of lateral inhibition, receptive fields, and feature detectors as well as the visual cortex and different visual pathways.



2012 ◽  
Vol 29 (6) ◽  
pp. 283-299 ◽  
Author(s):  
VICTORIA R.A. HORE ◽  
JOHN B. TROY ◽  
STEPHEN J. EGLEN

Abstract
The receptive fields of on- and off-center parasol cell mosaics independently tile the retina to ensure efficient sampling of visual space. A recent theoretical model represented the on- and off-center mosaics by noisy hexagonal lattices of slightly different density. When the two lattices are overlaid, long-range Moiré interference patterns are generated. These Moiré interference patterns have been suggested to drive the formation of highly structured orientation maps in visual cortex. Here, we show that noisy hexagonal lattices do not capture the spatial statistics of parasol cell mosaics. An alternative model based upon local exclusion zones, termed the pairwise interaction point process (PIPP) model, generates patterns that are statistically indistinguishable from parasol cell mosaics. A key difference between the PIPP model and the hexagonal lattice model is that the PIPP model does not generate Moiré interference patterns, and hence simulated orientation maps do not show any hexagonal structure. Finally, we estimate the spatial extent of spatial correlations in parasol cell mosaics to be only 200–350 μm, far less than that required to generate Moiré interference. We conclude that parasol cell mosaics are too disordered to drive the formation of highly structured orientation maps in visual cortex.
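The contrast between the two point-process models can be illustrated with a toy simulation. Below, a hard-core (minimum-spacing) process stands in for the exclusion-zone idea behind the PIPP model — a simplification, since the actual PIPP fits pairwise interaction functions to data — and a regularity index (mean nearest-neighbour distance divided by its s.d.) serves as a crude spatial statistic. All densities and radii are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def regularity_index(pts):
    """Mean nearest-neighbour distance divided by its s.d."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :])**2).sum(-1))
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    return nn.mean() / nn.std()

# Noisy hexagonal lattice (the model argued against): 12 x 12 points,
# alternate rows offset by half a spacing, plus positional jitter
rows = [np.column_stack([np.arange(12) + 0.5 * (j % 2),
                         np.full(12, j * np.sqrt(3) / 2)]) for j in range(12)]
hexpts = np.vstack(rows) + 0.05 * rng.standard_normal((144, 2))

# Hard-core (exclusion-zone) process: accept uniform points only if they
# keep a minimum spacing from all accepted points
pts, r_min = np.empty((0, 2)), 0.55
while len(pts) < 144:
    p = rng.uniform(0, 12, 2)
    if len(pts) == 0 or np.min(np.hypot(*(p - pts).T)) > r_min:
        pts = np.vstack([pts, p])

ri_hex, ri_excl = regularity_index(hexpts), regularity_index(pts)
print(f"hex lattice RI: {ri_hex:.1f}, exclusion-zone RI: {ri_excl:.1f}")
```

The noisy lattice comes out far more regular than the exclusion-zone pattern, echoing the paper's point that lattice models impose more order than exclusion-zone processes do.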



Author(s):  
Haixin Zhong ◽  
Rubin Wang

Abstract
The information processing mechanisms of the visual nervous system remain unsolved scientific issues in neuroscience, owing to the lack of a unified and widely accepted theory. It has been well documented that approximately 80% of the rich and complicated perceptual information from the real world is transmitted to the visual cortex, yet only a small fraction of this visual information reaches the primary visual cortex (V1). This, nevertheless, does not affect our visual perception. Furthermore, how neurons in the secondary visual cortex (V2) encode such a small amount of visual information has yet to be addressed. To this end, the current paper established a visual network model of the retina-lateral geniculate nucleus (LGN)-V1–V2 pathway and quantitatively accounted for the scarcity of visual information and its encoding rules, based on the principle of neural mapping from V1 to V2. The results demonstrated that visual information undergoes only a small degree of dynamic degradation when it is mapped from V1 to V2, a mapping that amounts to a convolution calculation. Therefore, the dynamic degradation of visual information mainly occurs along the pathway from the retina to V1, rather than from V1 to V2. The slight changes in the visual information are attributable to the fact that the receptive fields (RFs) of V2 cannot further extract the image features. Meanwhile, despite the scarcity of visual information mapped from the retina, the RFs of V2 can still accurately respond to and encode “corner” information, due to the effects of synaptic plasticity, whereas a similar function does not exist in V1. This is, to our knowledge, a new finding. To sum up, the coding of the “contour” features (edges and corners) is achieved along the retina-LGN-V1–V2 pathway.
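As a toy illustration of how a convolution stage can make “corner” information explicit: combining the outputs of two orthogonal edge kernels responds only where both edge orientations meet. This is a generic image-processing sketch, not the paper's retina-LGN-V1–V2 model; kernels and image are made up.

```python
import numpy as np

def conv2_same(img, k):
    """Naive 'same' 2D convolution with zero padding."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    kr = k[::-1, ::-1]  # flip the kernel: convolution, not correlation
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kr)
    return out

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0                 # a bright square: four edges, four corners

kx = np.array([[-1.0, 0.0, 1.0]])     # responds to vertical edges
ky = kx.T                             # responds to horizontal edges
ex, ey = conv2_same(img, kx), conv2_same(img, ky)

# A "corner" unit: strong only where both edge orientations are present
corners = np.abs(ex) * np.abs(ey)
i, j = np.unravel_index(np.argmax(corners), corners.shape)
print(int(i), int(j))  # prints 5 5 -- the square's top-left corner
```

Edge responses alone are strong all along the square's sides; the product is nonzero only at the four corners, a minimal example of a feature that becomes explicit only after combining earlier convolution outputs.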



2017 ◽  
Vol 34 ◽  
Author(s):  
REECE MAZADE ◽  
JOSE MANUEL ALONSO

Abstract
Visual information reaches the cerebral cortex through a major thalamocortical pathway that connects the lateral geniculate nucleus (LGN) of the thalamus with the primary visual area of the cortex (area V1). In humans, ∼3.4 million afferents from the LGN are distributed within a V1 surface of ∼2400 mm2, an afferent number that is reduced by half in the macaque and by more than two orders of magnitude in the mouse. Thalamocortical afferents are sorted in visual cortex based on the spatial position of their receptive fields to form a map of visual space. The visual resolution within this map is strongly correlated with the total number of thalamic afferents that V1 receives and the area available to sort them. The ∼20,000 afferents of the mouse are sorted only by spatial position because they have to cover a large visual field (∼300 deg) within just 4 mm2 of V1 area. By contrast, the ∼500,000 afferents of the cat are also sorted by eye input and light/dark polarity because they cover a smaller visual field (∼200 deg) within a much larger V1 area (∼400 mm2), a sorting principle that is likely to apply also to macaques and humans. The increased precision of thalamic sorting allows building multiple copies of the V1 visual map for left/right eyes and light/dark polarities, which become interlaced to keep neurons representing the same visual point close together. In turn, this interlaced arrangement causes cortical neurons with different preferences for stimulus orientation to rotate around single cortical points, forming a pinwheel pattern that allows more efficient processing of objects and visual textures.
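The afferent counts, areas, and field sizes quoted above imply very different sampling budgets per species; a quick back-of-envelope calculation makes the comparison explicit. Only the cat and mouse field sizes appear in the abstract — the human field is assumed here to be comparable to the cat's ∼200 deg — and treating the visual field as a linear extent in degrees is a simplification.

```python
# Figures quoted in the abstract (human field size is an assumption)
species = {
    # name: (LGN afferents to V1, V1 area in mm^2, visual field in deg)
    "human": (3.4e6, 2400, 200),
    "cat":   (5.0e5,  400, 200),
    "mouse": (2.0e4,    4, 300),
}
for name, (aff, area, field) in species.items():
    print(f"{name:>5}: {aff / area:7.0f} afferents/mm^2, {aff / field:7.0f} per deg of field")
```

Despite packing the most afferents per mm2 of cortex, the mouse has by far the fewest per degree of visual field, consistent with the abstract's point that sorting precision tracks the afferent number relative to the field those afferents must cover.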



2019 ◽  
Author(s):  
Lukas F. Fischer ◽  
Raul Mojica Soto-Albors ◽  
Friederike Buck ◽  
Mark T. Harnett

Abstract
The process by which visual information is incorporated into the brain’s spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. Presenting the same environment but decoupled from mouse behavior degraded encoding fidelity. Analyzing visual and motor responses showed that landmark codes were the result of supralinear integration. Surprisingly, V1 axons recorded in RSC showed similar receptive fields. However, they were less modulated by task engagement, indicating that landmark representations in RSC are the result of local computations. Our data provide cellular- and network-level insight into how RSC represents landmarks.



2020 ◽  
Author(s):  
Mahmood S. Hoseini ◽  
Bryan Higashikubo ◽  
Frances S. Cho ◽  
Andrew H. Chang ◽  
Alexandra Clemente-Perez ◽  
...  

ABSTRACT
Visual perception in natural environments depends on the ability to focus on salient stimuli while ignoring distractions. This kind of selective visual attention is associated with gamma activity in the visual cortex. While the nucleus reticularis thalami (nRT) has been implicated in selective attention, its role in modulating visual perception remains unknown. Here we show that somatostatin- (SOM) but not parvalbumin-expressing (PV) neurons in the nRT preferentially project to visual thalamic nuclei. In freely behaving mice, single-unit and field recordings reveal powerful modulation of both visual information transmission and gamma activity in primary visual cortex (V1), as well as in the dorsal lateral geniculate nucleus (dLGN). These findings pinpoint the SOM neurons in the nRT as powerful modulators of the accuracy of visual information encoding in V1, and reveal a novel circuit through which the nRT can influence the representation of visual information.



Author(s):  
Andreas J Keller ◽  
Morgane M Roth ◽  
Massimo Scanziani

We sense our environment through pathways linking sensory organs to the brain. In the visual system, these feedforward pathways define the classical feedforward receptive field (ffRF), the area in space where visual stimuli excite a neuron1. The visual system also uses visual context, the visual scene surrounding a stimulus, to predict the content of the stimulus2, and accordingly, neurons have been found that are excited by stimuli outside their ffRF3–8. The mechanisms generating excitation to stimuli outside the ffRF are, however, unclear. Here we show that feedback projections onto excitatory neurons in mouse primary visual cortex (V1) generate a second receptive field driven by stimuli outside the ffRF. Stimulating this feedback receptive field (fbRF) elicits slow and delayed responses compared to ffRF stimulation. These responses are preferentially reduced by anesthesia and, importantly, by silencing higher visual areas (HVAs). Feedback inputs from HVAs have scattered receptive fields relative to their putative V1 targets, enabling the generation of the fbRF. Neurons with fbRFs are located in cortical layers receiving strong feedback projections and are absent in the main input layer, consistent with a laminar processing hierarchy. The fbRF and the ffRF are mutually antagonistic since large, uniform stimuli, covering both, suppress responses. While somatostatin-expressing inhibitory neurons are driven by these large stimuli, parvalbumin- and vasoactive-intestinal-peptide-expressing inhibitory neurons have antagonistic fbRF and ffRF, similar to excitatory neurons. Therefore, feedback projections may enable neurons to use context to predict information missing from the ffRF and to report differences in stimulus features across visual space, regardless of whether excitation occurs inside or outside the ffRF. We have identified an fbRF which, by complementing the ffRF, may contribute to predictive processing.


