Two distinct neural mechanisms in early visual cortex determine subsequent visual processing

Cortex ◽  
2014 ◽  
Vol 59 ◽  
pp. 1-11 ◽  
Author(s):  
Christianne Jacobs ◽  
Tom A. de Graaf ◽  
Alexander T. Sack


2018 ◽  
Author(s):  
Andreea Lazar ◽  
Chris Lewis ◽  
Pascal Fries ◽  
Wolf Singer ◽  
Danko Nikolić

Sensory exposure alters the response properties of individual neurons in primary sensory cortices. However, it remains unclear how these changes affect stimulus encoding by populations of sensory cells. Here, recording from populations of neurons in cat primary visual cortex, we demonstrate that visual exposure enhances stimulus encoding and discrimination. We find that repeated presentation of brief, high-contrast shapes results in a stereotyped, biphasic population response consisting of a short-latency transient, followed by a late and extended period of reverberatory activity. Visual exposure selectively improves the stimulus specificity of the reverberatory activity, by increasing the magnitude and decreasing the trial-to-trial variability of the neuronal response. Critically, this improved stimulus encoding is distributed across the population and depends on precise temporal coordination. Our findings provide evidence for the existence of an exposure-driven optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
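As an illustration of the kind of analysis this result rests on, below is a minimal population-decoding sketch. It assumes trial-by-trial spike-count matrices for the early transient and the late reverberatory window plus a stimulus label per trial; the variable names, the placeholder data, and the choice of a cross-validated linear readout are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: quantify stimulus discriminability and trial-to-trial variability
# in two response windows of a recorded population.
# Assumes counts_early, counts_late: arrays of shape (n_trials, n_neurons)
# and stim_labels: array of shape (n_trials,) giving the shape shown on each
# trial. All names and data below are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_stimuli = 200, 60, 4
stim_labels = rng.integers(0, n_stimuli, n_trials)
counts_early = rng.poisson(5.0, (n_trials, n_neurons))  # replace with real counts
counts_late = rng.poisson(3.0, (n_trials, n_neurons))   # replace with real counts

def decoding_accuracy(counts, labels):
    """Cross-validated accuracy of a linear readout of stimulus identity."""
    clf = LogisticRegression(max_iter=2000)
    return cross_val_score(clf, counts, labels, cv=5).mean()

def mean_fano_factor(counts, labels):
    """Trial-to-trial variability: variance/mean per neuron, per stimulus."""
    fanos = []
    for s in np.unique(labels):
        c = counts[labels == s]
        m, v = c.mean(axis=0), c.var(axis=0)
        fanos.append(np.mean(v[m > 0] / m[m > 0]))
    return float(np.mean(fanos))

for name, counts in [("early transient", counts_early),
                     ("late reverberatory", counts_late)]:
    print(f"{name}: accuracy={decoding_accuracy(counts, stim_labels):.2f}, "
          f"Fano={mean_fano_factor(counts, stim_labels):.2f}")
```

Under this scheme, comparing the late-window accuracy and Fano factor before versus after exposure would index the exposure-driven improvement described above.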


2010 ◽  
Vol 22 (11) ◽  
pp. 2417-2426 ◽  
Author(s):  
Stephanie A. McMains ◽  
Sabine Kastner

Multiple stimuli that are present simultaneously in the visual field compete for neural representation. At the same time, however, multiple stimuli in cluttered scenes also undergo perceptual organization according to certain rules, such as similarity or proximity, originally defined by the Gestalt psychologists, thereby segmenting scenes into candidate objects. How can these two seemingly orthogonal neural processes that occur early in the visual processing stream be reconciled? One possibility is that competition occurs among perceptual groups rather than at the level of elements within a group. We probed this idea using fMRI by assessing competitive interactions across visual cortex in displays containing varying degrees of perceptual organization or perceptual grouping (Grp). In strong Grp displays, elements were arranged such that either an illusory figure or a group of collinear elements was present, whereas in weak Grp displays the same elements were arranged randomly. Competitive interactions among stimuli were overcome throughout early visual cortex and V4 when elements were grouped, regardless of Grp type. Our findings suggest that context-dependent grouping mechanisms and competitive interactions are linked to provide a bottom-up bias toward candidate objects in cluttered scenes.
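As an illustration of how competitive interactions are typically quantified in this line of work, the sketch below computes a sensory suppression index from responses to simultaneous versus sequential presentation of display elements, separately per grouping condition. The presentation-mode contrast, condition names, and response values are assumptions for illustration and are not taken from the study.

```python
# Sketch: sensory suppression index per ROI and grouping condition.
# Suppression = (SEQ - SIM) / (SEQ + SIM); values near zero indicate that
# competition among elements has been largely overcome.
# All response values below are illustrative placeholders, not study data.

# Mean BOLD responses (arbitrary units) per ROI:
# each tuple = (sequential presentation, simultaneous presentation)
responses = {
    "V1": {"strong_grouping": (1.00, 0.95), "weak_grouping": (1.00, 0.70)},
    "V4": {"strong_grouping": (1.10, 1.05), "weak_grouping": (1.10, 0.75)},
}

def suppression_index(seq, sim):
    return (seq - sim) / (seq + sim)

for roi, conditions in responses.items():
    for grp, (seq, sim) in conditions.items():
        print(f"{roi} {grp}: suppression index = "
              f"{suppression_index(seq, sim):.3f}")
```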


2015 ◽  
Author(s):  
Claudia Lunghi

In this research, binocular rivalry is used as a tool to investigate different aspects of visual and multisensory perception. Several experiments presented here demonstrated that touch interacts specifically with vision during binocular rivalry and that the interaction likely occurs at early stages of visual processing, probably in V1 or V2. Another line of research presented here demonstrated that adult human visual cortex retains an unexpectedly high degree of experience-dependent plasticity, by showing that a brief period of monocular deprivation produced marked perceptual consequences for the dynamics of binocular rivalry, reflecting homeostatic plasticity. In summary, this work shows that binocular rivalry is a powerful tool for investigating different aspects of visual perception and can be used to reveal unexpected properties of early visual cortex.
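One simple way to express the deprivation effect described here is as a shift in the relative dominance of the two eyes during rivalry. The sketch below computes a deprived-eye dominance index from lists of rivalry phase durations; the data, variable names, and index definition are illustrative assumptions rather than the thesis's actual analysis.

```python
# Sketch: ocular dominance during binocular rivalry before and after
# short-term monocular deprivation. Phase durations (seconds) are
# illustrative placeholders, not data from the experiments described above.
import numpy as np

def dominance_index(deprived_phases, nondeprived_phases):
    """Fraction of total dominance time attributed to the deprived eye."""
    d = np.sum(deprived_phases)
    n = np.sum(nondeprived_phases)
    return d / (d + n)

before = {"deprived": [2.1, 1.8, 2.3, 1.9], "nondeprived": [2.0, 2.2, 1.9, 2.1]}
after = {"deprived": [3.4, 3.1, 2.9, 3.6], "nondeprived": [1.4, 1.6, 1.5, 1.3]}

for label, data in [("before deprivation", before), ("after deprivation", after)]:
    idx = dominance_index(np.array(data["deprived"]),
                          np.array(data["nondeprived"]))
    print(f"{label}: deprived-eye dominance = {idx:.2f}")
```

An increase in the index after deprivation would correspond to the counterintuitive, homeostatic boost of the deprived eye that the work reports.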


2017 ◽  
Vol 118 (6) ◽  
pp. 3194-3214 ◽  
Author(s):  
Rosemary A. Cowell ◽  
Krystal R. Leger ◽  
John T. Serences

Identifying an object and distinguishing it from similar items depends upon the ability to perceive its component parts as conjoined into a cohesive whole, but the brain mechanisms underlying this ability remain elusive. The ventral visual processing pathway in primates is organized hierarchically: Neuronal responses in early stages are sensitive to the manipulation of simple visual features, whereas neuronal responses in subsequent stages are tuned to increasingly complex stimulus attributes. It is widely assumed that feature-coding dominates in early visual cortex whereas later visual regions employ conjunction-coding in which object representations are different from the sum of their simple feature parts. However, no study in humans has demonstrated that putative object-level codes in higher visual cortex cannot be accounted for by feature-coding and that putative feature codes in regions prior to ventral temporal cortex are not equally well characterized as object-level codes. Thus the existence of a transition from feature- to conjunction-coding in human visual cortex remains unconfirmed, and if a transition does occur its location remains unknown. By employing multivariate analysis of functional imaging data, we measure both feature-coding and conjunction-coding directly, using the same set of visual stimuli, and pit them against each other to reveal the relative dominance of one vs. the other throughout cortex. Our results reveal a transition from feature-coding in early visual cortex to conjunction-coding in both inferior temporal and posterior parietal cortices. This novel method enables the use of experimentally controlled stimulus features to investigate population-level feature and conjunction codes throughout human cortex. NEW & NOTEWORTHY We use a novel analysis of neuroimaging data to assess representations throughout visual cortex, revealing a transition from feature-coding to conjunction-coding along both ventral and dorsal pathways. Occipital cortex contains more information about spatial frequency and contour than about conjunctions of those features, whereas inferotemporal and parietal cortices contain conjunction coding sites in which there is more information about the whole stimulus than its component parts.
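The feature-versus-conjunction comparison lends itself to a decoding sketch: decode each simple feature and, from the same response patterns, decode a conjunction term that no purely additive feature code can support. The sketch below uses a cross-validated linear classifier on placeholder multivoxel patterns; it illustrates the logic only and is not the authors' analysis.

```python
# Sketch: contrast feature-coding with conjunction-coding in a region's
# multivoxel response patterns. Stimuli cross two binary features
# (e.g., spatial frequency x contour). Decoding each feature indexes
# feature-coding; decoding their interaction (XOR), which a linear readout
# of a purely additive feature code cannot recover, indexes conjunction-coding.
# Patterns and labels below are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 160, 100
freq = rng.integers(0, 2, n_trials)      # spatial-frequency level per trial
contour = rng.integers(0, 2, n_trials)   # contour type per trial
conjunction = freq ^ contour             # interaction term (XOR)

# Placeholder patterns: replace with real single-trial ROI patterns.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[:, :20] += 0.8 * freq[:, None]           # feature signal
patterns[:, 20:40] += 0.8 * contour[:, None]      # feature signal
patterns[:, 40:60] += 0.8 * conjunction[:, None]  # conjunction signal

def accuracy(X, y):
    clf = LogisticRegression(max_iter=2000)
    return cross_val_score(clf, X, y, cv=5).mean()

print("spatial frequency:", round(accuracy(patterns, freq), 2))
print("contour:          ", round(accuracy(patterns, contour), 2))
print("conjunction (XOR):", round(accuracy(patterns, conjunction), 2))
```

In this scheme, feature-coding regions would show high feature accuracy but chance-level conjunction accuracy, whereas conjunction-coding regions would show the reverse emphasis.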


2020 ◽  
Vol 31 (1) ◽  
pp. 138-146 ◽  
Author(s):  
Dean Shmuel ◽  
Sebastian M Frank ◽  
Haggai Sharon ◽  
Yuka Sasaki ◽  
Takeo Watanabe ◽  
...  

Perception thresholds can improve through repeated practice with visual tasks. Can an already acquired and well-consolidated perceptual skill be noninvasively neuromodulated, revealing the neural mechanisms involved? Here, leveraging the susceptibility of reactivated memories, documented from synaptic to systems levels across learning and memory domains and animal models, we used noninvasive brain stimulation to neuromodulate well-consolidated, reactivated visual perceptual learning and to reveal the underlying neural mechanisms. Subjects first encoded and consolidated the visual skill memory by performing daily practice sessions with the task. On a separate day, the consolidated visual memory was briefly reactivated, followed by low-frequency, inhibitory 1 Hz repetitive transcranial magnetic stimulation over early visual cortex, which was individually localized using functional magnetic resonance imaging. Poststimulation perceptual thresholds were measured in the final session. The results show modulation of perceptual thresholds following early visual cortex stimulation, relative to control stimulation. Consistently, resting-state functional connectivity between trained and untrained parts of early visual cortex prior to training predicted the magnitude of perceptual threshold modulation. Together, these results indicate that even previously consolidated human perceptual memories are susceptible to neuromodulation, involving early visual cortical processing. Moreover, the opportunity to noninvasively neuromodulate reactivated perceptual learning may have important clinical implications.
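The brain-behavior relationship reported here, pre-training resting-state connectivity predicting the size of the stimulation effect, reduces to an across-subject correlation. The sketch below assumes one connectivity value per subject (correlation between trained and untrained early-visual-cortex ROI time courses) and one threshold-change value per subject; all numbers are placeholders, not the study's data.

```python
# Sketch: does resting-state functional connectivity between trained and
# untrained early-visual-cortex ROIs predict the TMS-induced change in
# perceptual thresholds across subjects? Values are illustrative placeholders.
import numpy as np
from scipy import stats

# One value per subject.
rest_fc = np.array([0.15, 0.32, 0.41, 0.22, 0.55, 0.37, 0.48, 0.28])
threshold_change = np.array([1.2, 3.5, 4.1, 2.0, 6.2, 3.9, 5.0, 2.8])

r, p = stats.pearsonr(rest_fc, threshold_change)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# The connectivity values themselves would come from resting-state
# time courses, e.g.:
# rest_fc_subject = np.corrcoef(trained_roi_ts, untrained_roi_ts)[0, 1]
```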


NeuroImage ◽  
2020 ◽  
Vol 218 ◽  
pp. 116981 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Jody C. Culham ◽  
Luigi Cattaneo ◽  
Luca Turella

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.
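One way to express the selectivity claimed here is as a context-specific TMS cost: for each stimulation site and time window, compare the accuracy drop for objects in scenes with the drop for isolated objects, relative to a baseline. The sketch below uses a hypothetical accuracy table and baseline; the condition names, the early time window, and all numbers are placeholders rather than the study's data or exact design.

```python
# Sketch: context-specific TMS cost per stimulation site and time window.
# cost = (baseline - TMS accuracy) for objects-in-scenes
#        minus (baseline - TMS accuracy) for isolated objects.
# Positive values indicate a selective disruption of context-based
# recognition. All accuracies below are illustrative placeholders.
baseline = {"scene": 0.80, "isolated": 0.82}

# accuracy[site][time_window] = (scene condition, isolated condition)
accuracy = {
    "OPA": {"early": (0.79, 0.81), "160-200 ms": (0.70, 0.81), "260-300 ms": (0.79, 0.82)},
    "LOC": {"early": (0.80, 0.82), "160-200 ms": (0.79, 0.80), "260-300 ms": (0.71, 0.80)},
    "EVC": {"early": (0.74, 0.76), "160-200 ms": (0.78, 0.80), "260-300 ms": (0.80, 0.81)},
}

for site, windows in accuracy.items():
    for window, (scene_acc, isolated_acc) in windows.items():
        cost = (baseline["scene"] - scene_acc) - (baseline["isolated"] - isolated_acc)
        print(f"{site} {window}: context-specific cost = {cost:+.2f}")
```

With the placeholder numbers above, the cost peaks for OPA in the 160-200 ms window and for LOC in the 260-300 ms window, mirroring the temporal order the abstract reports.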


2012 ◽  
Vol 24 (2) ◽  
pp. 367-377 ◽  
Author(s):  
Vincent van de Ven ◽  
Bert Jans ◽  
Rainer Goebel ◽  
Peter De Weerd

Visual scene perception owes greatly to surface features such as color and brightness. Yet, early visual cortical areas predominantly encode surface boundaries rather than surface interiors. Whether human early visual cortex may nevertheless carry a small signal relevant for surface perception is a topic of debate. We induced brightness changes in a physically constant surface by temporally modulating the luminance of surrounding surfaces in seven human participants. We found that fMRI activity in the V2 representation of the constant surface was in antiphase to the luminance changes of the surrounding surfaces (i.e., activity was in phase with perceived brightness changes). Moreover, the amplitude of the antiphase fMRI activity in V2 predicted the strength of illusory brightness perception. We interpret our findings as evidence for a surface-related signal in early visual cortex, and we discuss the neural mechanisms that may underlie this signal as well as its possible interaction with the properties of the fMRI signal.
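The antiphase relationship described here can be checked with a simple frequency-domain analysis: estimate the amplitude and phase of the ROI time course at the luminance-modulation frequency and compare that phase with the surround modulation. The sketch below does this with a single Fourier component on simulated data; the sampling rate, modulation frequency, and signals are assumptions for illustration.

```python
# Sketch: amplitude and phase of an ROI time course at the frequency of the
# surround-luminance modulation, to test for antiphase (surface-related)
# responses. All signals and parameters are illustrative placeholders.
import numpy as np

tr = 2.0                  # volume repetition time (s)
n_vols = 120
mod_freq = 1.0 / 24.0     # surround luminance modulation frequency (Hz)
t = np.arange(n_vols) * tr

surround_luminance = np.sin(2 * np.pi * mod_freq * t)
# Placeholder V2 time course: antiphase response plus noise.
noise = 0.3 * np.random.default_rng(2).normal(size=n_vols)
roi_timecourse = -0.6 * np.sin(2 * np.pi * mod_freq * t) + noise

def fourier_component(signal, freq, t):
    """Complex amplitude of `signal` at frequency `freq`."""
    return np.mean(signal * np.exp(-2j * np.pi * freq * t)) * 2

roi_c = fourier_component(roi_timecourse, mod_freq, t)
lum_c = fourier_component(surround_luminance, mod_freq, t)

phase_diff = np.angle(roi_c / lum_c)  # radians, relative to the surround
print(f"amplitude at modulation frequency: {abs(roi_c):.2f}")
print(f"phase difference: {np.degrees(phase_diff):.0f} deg "
      "(about 180 deg = antiphase to the surround luminance)")
```

In this framing, the amplitude term corresponds to the quantity that predicted illusory brightness strength across participants.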

