The geometry of masking in neural populations

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Dario L. Ringach

The normalization model provides an elegant account of contextual modulation in individual neurons of primary visual cortex. Understanding the implications of normalization at the population level is hindered by the heterogeneity of cortical neurons, which differ in the composition of their normalization pools and in their semi-saturation constants. Here we introduce a geometric approach to investigate contextual modulation in neural populations and study how the representation of stimulus orientation is transformed by the presence of a mask. We find that population responses can be embedded in a low-dimensional space and that an affine transform can account for the effects of masking. The geometric analysis further reveals a link between the changes in discriminability and the bias induced by the mask. We propose that the geometric approach can yield new insights into the image-processing computations taking place in early visual cortex at the population level while coping with the heterogeneity of single-cell behavior.
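The two computations named above, divisive normalization and an affine map between unmasked and masked population responses, can be illustrated with a small numerical sketch. All tuning curves, names, and parameter values here are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

# Hypothetical divisive-normalization model for a population of
# orientation-tuned neurons; a mask adds drive to the normalization pool.
def population_response(theta, prefs, mask_drive=0.0, sigma=0.5):
    drive = np.exp(np.cos(2 * (theta - prefs)))        # tuned feedforward input
    pool = drive.mean() + mask_drive                   # normalization pool
    return drive / (sigma**2 + pool)                   # divisive normalization

prefs = np.linspace(0, np.pi, 32, endpoint=False)      # preferred orientations
thetas = np.linspace(0, np.pi, 60, endpoint=False)     # stimulus orientations
R_plain = np.stack([population_response(t, prefs) for t in thetas])
R_mask = np.stack([population_response(t, prefs, mask_drive=2.0) for t in thetas])

# Fit an affine transform R_mask ~= R_plain @ A + b by least squares,
# mirroring the claim that masking acts affinely on population responses.
X = np.hstack([R_plain, np.ones((len(thetas), 1))])
coef, *_ = np.linalg.lstsq(X, R_mask, rcond=None)
A, b = coef[:-1], coef[-1]
residual = np.linalg.norm(R_plain @ A + b - R_mask) / np.linalg.norm(R_mask)
```

In this toy model the affine fit is nearly exact (small `residual`) because the mask enters only through a shared normalization pool; real cortical populations are far more heterogeneous, which is the difficulty the geometric approach is meant to address.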

2019 ◽  
Author(s):  
Dario L. Ringach

We introduce a geometric approach to study the representation of orientation by populations of neurons in primary visual cortex in the presence and absence of an additive mask. Despite heterogeneous effects at the single cell level, a simple geometric model explains how population responses are transformed by the mask and reveals how changes in discriminability and bias relate to each other. We propose that studying the geometry of neural populations can yield insights into the role of contextual modulation in the processing of sensory signals.


2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.


2007 ◽  
Vol 24 (1) ◽  
pp. 65-77 ◽  
Author(s):  
YUNING SONG ◽  
CURTIS L. BAKER

Natural scenes contain a variety of visual cues that facilitate boundary perception (e.g., luminance, contrast, and texture). Here we explore whether single neurons in early visual cortex can process both contrast and texture cues. We recorded neural responses in cat A18 to both illusory contours formed by abutting gratings (ICs, texture-defined) and contrast-modulated gratings (CMs, contrast-defined). We found that if a neuron responded to one of the two stimuli, it also responded to the other. These neurons signaled similar contour orientation, spatial frequency, and movement direction of the two stimuli. A given neuron also exhibited similar selectivity for spatial frequency of the fine, stationary grating components (carriers) of the stimuli. These results suggest that the cue-invariance of early cortical neurons extends to different kinds of texture or contrast cues, and might arise from a common nonlinear mechanism.


2016 ◽  
Author(s):  
Andrew T Morgan ◽  
Lucy S Petro ◽  
Lars Muckli

Human behaviour is dependent on the ability of neuronal circuits to predict the outside world. Neuronal circuits make these predictions based on internal models. Despite our extensive knowledge of the sensory features that drive cortical neurons, we have a limited grasp on the structure of the brain's internal models. Substantial progress in neuroscience therefore depends on our ability to replicate the models that the brain creates internally. Here we record human fMRI data while presenting partially occluded visual scenes. Visual occlusion controls sensory input to subregions of visual cortex while internal models continue to influence activity in these regions. Since the observed activity is dependent on internal models, but not on sensory input, we have the opportunity to map the features of the brain's internal models. Our results show that internal models in early visual cortex are both categorical and scene-specific. We further demonstrate that behavioural line drawings provide a good description of internal model structure. These findings extend our understanding of internal models by showing that line drawings, which have been effectively used by humans to convey information about the world for thousands of years, provide a window into our brains' internal models of vision.


2021 ◽  
Author(s):  
Paolo Papale ◽  
Wietske Zuiderbaan ◽  
Rob R.M. Teeuwen ◽  
Amparo Gilhuis ◽  
Matthew W. Self ◽  
...  

Neurons in early visual cortex are not only sensitive to the image elements in their receptive field but also to the context that determines whether those elements are part of an object or of the background. Here we assessed the effect of objecthood in natural images on neuronal activity in early visual cortex, with fMRI in humans and electrophysiology in monkeys. We report that boundaries and interiors of objects elicit more activity than the background. Boundary effects occur remarkably early, implying that visual cortical neurons are tuned to the features that characterize object boundaries in natural images. When a new image is presented, the influence of object interiors on neuronal activity emerges during a late phase of the neuronal response; it appears earlier when eye movements shift the image representation, implying that object representations are remapped across eye movements. Our results reveal how object perception shapes the representation of natural images in early visual cortex.


2007 ◽  
Vol 98 (6) ◽  
pp. 3436-3449 ◽  
Author(s):  
Matthew A. Smith ◽  
Ryan C. Kelly ◽  
Tai Sing Lee

Contextual modulation due to feature contrast between the receptive field and surrounding region has been reported for numerous stimuli in primary visual cortex. One type of this modulation, iso-orientation surround suppression, has been studied extensively. The degree to which surround suppression is related to other forms of contextual modulation remains unknown. We used shape-from-shading stimuli in a field of distractors to test the latency and magnitude of contextual modulation in response to a stimulus that cannot be distinguished by an orientation-selective mechanism. This stimulus configuration readily elicits perceptual pop-out in human observers and induces a long-latency contextual modulation response in neurons in macaque early visual cortex. We found that animals trained to detect the location of a pop-out stimulus were better at finding a sphere that appeared to be lit from below in the presence of distractors that were lit from above. Furthermore, neuronal responses were stronger and had shorter latency in the condition where behavioral performance was best. This asymmetry is compatible with earlier psychophysical findings in human observers. In the population of V1 neurons, the latency of the contextual modulation response is 145 ms on average (ranging from 70 to 230 ms). This is much longer than the latency of iso-orientation surround suppression, indicating that the underlying circuitry is distinct. Our results support the idea that a feature-specific feedback signal generates the pop-out responses we observe, and suggest that V1 neurons actively participate in the computation of perceptual salience.


2015 ◽  
Vol 113 (5) ◽  
pp. 1453-1458 ◽  
Author(s):  
Edmund Chong ◽  
Ariana M. Familiar ◽  
Won Mok Shim

As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.
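Population-level, feature-selective tuning responses of the kind described above are often estimated with an inverted encoding model: fit a channel-to-voxel encoding model on training data, then invert it to recover channel responses on held-out trials. The following is a minimal sketch of that general technique on simulated data; the channel basis, weights, and noise levels are hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical channel basis: rectified cosines over orientation (period pi).
n_chan, n_voxels = 8, 60
centers = np.arange(n_chan) * np.pi / n_chan           # channel preferred orientations

def channel_basis(theta):
    return np.maximum(np.cos(2 * (theta - centers)), 0) ** 5

W = rng.normal(size=(n_chan, n_voxels))                # true channel-to-voxel weights

# Training: estimate the encoding weights from simulated voxel responses.
train_theta = rng.uniform(0, np.pi, size=100)
C_train = np.stack([channel_basis(t) for t in train_theta])
B_train = C_train @ W + 0.1 * rng.normal(size=(100, n_voxels))
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Inversion: recover the channel response for a held-out trial and read out
# the orientation as the preferred orientation of the peak channel.
theta_test = np.pi / 3
b_test = channel_basis(theta_test) @ W + 0.1 * rng.normal(size=n_voxels)
c_hat, *_ = np.linalg.lstsq(W_hat.T, b_test, rcond=None)
theta_hat = centers[c_hat.argmax()]
```

In the filling-in logic above, the interesting case is when a tuning response peaked at the "intermediate" orientation emerges in retinotopic regions that never received that orientation as input.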


2020 ◽  
Vol 6 (1) ◽  
pp. 287-311 ◽  
Author(s):  
Gregory D. Horwitz

Visual images can be described in terms of the illuminants and objects that are causal to the light reaching the eye, the retinal image, its neural representation, or how the image is perceived. Respecting the differences among these distinct levels of description can be challenging but is crucial for a clear understanding of color vision. This article approaches color by reviewing what is known about its neural representation in the early visual cortex, with a brief description of signals in the eye and the thalamus for context. The review focuses on the properties of single neurons and advances the general theme that experimental approaches based on knowledge of feedforward signals have promoted greater understanding of the neural code for color than approaches based on correlating single-unit responses with color perception. New data from area V1 illustrate the strength of the feedforward approach. Future directions for progress in color neurophysiology are discussed: techniques for improved single-neuron characterization, for investigations of neural populations and small circuits, and for the analysis of natural image statistics.


2021 ◽  
Vol 118 (43) ◽  
pp. e2105276118 ◽  
Author(s):  
Andreea Lazar ◽  
Christopher Lewis ◽  
Pascal Fries ◽  
Wolf Singer ◽  
Danko Nikolic

The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in postexposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, even in the absence of firing rate changes. Our findings provide evidence for an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
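The low-dimensional projection and cluster-segregation analysis described above can be sketched on toy data. The "exposure" manipulation below (boosting stimulus-specific gain against fixed noise) and all parameters are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population responses to 3 stimuli; "exposure" is modeled, purely for
# illustration, as increased stimulus-specific gain against fixed noise.
n_stim, n_trials, n_neurons = 3, 30, 80
templates = rng.normal(size=(n_stim, n_neurons))       # one response template per stimulus
labels = np.repeat(np.arange(n_stim), n_trials)
noise = rng.normal(size=(n_stim * n_trials, n_neurons))

def pca_project(X, k=2):
    """Project onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def separation(X):
    """Between-cluster over within-cluster spread in the projected space."""
    Z = pca_project(X)
    centroids = np.stack([Z[labels == c].mean(axis=0) for c in range(n_stim)])
    within = np.mean([np.linalg.norm(Z[labels == c] - centroids[c], axis=1).mean()
                      for c in range(n_stim)])
    between = np.linalg.norm(centroids - centroids.mean(axis=0), axis=1).mean()
    return between / within

sep_pre = separation(0.5 * templates[labels] + noise)   # before exposure
sep_post = separation(1.5 * templates[labels] + noise)  # after exposure
```

Higher `sep_post` than `sep_pre` is the toy analogue of the reported result: after exposure, population responses form more clearly segregated stimulus-specific clusters in the low-dimensional projection.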


2018 ◽  
Author(s):  
R. L. Rademaker ◽  
C. Chunharas ◽  
J. T. Serences

Traversing sensory environments requires keeping relevant information in mind while simultaneously processing new inputs. Visual information is kept in working memory via feature-selective responses in early visual cortex, but recent work has suggested that new sensory inputs wipe out this information. Here we show region-wide multiplexing abilities in classic sensory areas, with population-level response patterns in visual cortex representing the contents of working memory concurrently with new sensory inputs.

