Equalization and decorrelation in primary visual cortex

2014 ◽  
Vol 112 (3) ◽  
pp. 501-503 ◽  
Author(s):  
Koen V. Haak ◽  
Elizabeth Fast ◽  
Yihwa Baek ◽  
Juraj Mesik

There are many theories on the purpose of neural adaptation, but evidence remains elusive. Here, we discuss the recent work by Benucci et al. (Nat Neurosci 16: 724–729, 2013), who measured for the first time the immediate effects of adaptation on the overall activity of a neuronal population. These measurements confirm two long-standing hypotheses about the purpose of adaptation, namely that adaptation counteracts biases in the statistics of the environment, and that it maintains decorrelation in neuronal stimulus selectivity.
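For a concrete feel for these two hypotheses, the following is a minimal numerical sketch, not the analysis of Benucci et al.: the tuning curves, the biased stimulus ensemble, and the separable-gain adaptation rule are all illustrative assumptions. It simulates an orientation-tuned population under a biased ensemble and shows how a simple adaptive gain can equalize time-averaged responses and keep the population correlation structure close to its value under uniform stimulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population of orientation-tuned units (von Mises tuning curves).
n_neurons, n_trials = 36, 20000
preferred = np.linspace(0.0, np.pi, n_neurons, endpoint=False)
kappa = 4.0

def responses(theta):
    """(n_trials, n_neurons) response matrix for stimulus orientations theta (radians)."""
    return np.exp(kappa * (np.cos(2.0 * (theta[:, None] - preferred[None, :])) - 1.0))

# Uniform ensemble (baseline) versus a biased ensemble in which one orientation
# (the "adaptor", 0 rad) appears on 30% of trials.
theta_uniform = rng.uniform(0.0, np.pi, n_trials)
theta_biased = np.where(rng.random(n_trials) < 0.3, 0.0,
                        rng.uniform(0.0, np.pi, n_trials))
R_uniform, R_biased = responses(theta_uniform), responses(theta_biased)

# Toy adaptation rule (separable gains, purely illustrative): a stimulus-specific gain
# that discounts trials near the over-represented orientation, and a neuron-specific
# gain that equalizes time-averaged firing across the population.
dist_to_adaptor = np.minimum(theta_biased, np.pi - theta_biased)   # circular distance
stim_gain = 1.0 - 0.5 * np.exp(-(dist_to_adaptor / 0.3) ** 2)
R_tmp = R_biased * stim_gain[:, None]
neuron_gain = R_tmp.mean() / R_tmp.mean(axis=0)
R_adapted = R_tmp * neuron_gain[None, :]

c_uniform = np.corrcoef(R_uniform.T)
for label, r in [("biased, no adaptation", R_biased),
                 ("biased, with adaptation", R_adapted)]:
    means = r.mean(axis=0)
    cv = means.std() / means.mean()                    # 0 = perfectly equalized
    dev = np.abs(np.corrcoef(r.T) - c_uniform).mean()  # distance from baseline structure
    print(f"{label}: mean-response CV = {cv:.3f}, "
          f"correlation-structure deviation from uniform baseline = {dev:.3f}")
# Adaptation should pull both numbers back toward their uniform-ensemble values,
# illustrating equalization of population activity and maintained decorrelation.
```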

2019 ◽  
Vol 5 (6) ◽  
pp. eaaw0807 ◽  
Author(s):  
Ming Li ◽  
Xue Mei Song ◽  
Tao Xu ◽  
Dewen Hu ◽  
Anna Wang Roe ◽  
...  

In the mammalian visual system, early stages of visual form processing begin with orientation-selective neurons in primary visual cortex (V1). In many species (including humans, monkeys, tree shrews, cats, and ferrets), these neurons are organized into beautifully arrayed, pinwheel-like orientation columns, whose orientation preference shifts progressively across V1. However, to date, the relationship of this orientation architecture to the encoding of multiple elemental aspects of visual contours remains unknown. Here, using a novel, highly accurate method of targeting electrode position, we report for the first time the presence of three subdomains within single orientation domains. We suggest that these zones subserve computation of distinct aspects of visual contours and propose a novel tripartite, pinwheel-centered view of an orientation hypercolumn.
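As an aside for intuition about the pinwheel layout described above, the short sketch below (purely illustrative, unrelated to the electrode-targeting method or imaging used in the study) renders an idealized single pinwheel, in which preferred orientation advances with angular position around the singularity.

```python
import numpy as np
import matplotlib.pyplot as plt

# Idealized single pinwheel: preferred orientation equals half the polar angle around
# the pinwheel center, so the full 0-180 degree cycle meets at the singularity.
n = 201
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
orientation_map = (np.degrees(np.arctan2(y, x)) / 2.0) % 180.0  # degrees, period 180

fig, ax = plt.subplots(figsize=(4, 4))
im = ax.imshow(orientation_map, cmap="hsv", extent=[-1, 1, -1, 1], origin="lower")
ax.set_title("Idealized orientation pinwheel")
fig.colorbar(im, ax=ax, label="preferred orientation (deg)")
plt.show()
```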


2021 ◽  
Vol 15 ◽  
Author(s):  
Max Garagnani ◽  
Evgeniya Kirilina ◽  
Friedemann Pulvermüller

Embodied theories of grounded semantics postulate that, when word meaning is first acquired, a link is established between symbol (word form) and corresponding semantic information present in modality-specific—including primary—sensorimotor cortices of the brain. Direct experimental evidence documenting the emergence of such a link (i.e., showing that presentation of a previously unknown, meaningless word sound induces, after learning, category-specific reactivation of relevant primary sensory or motor brain areas), however, is still missing. Here, we present new neuroimaging results that provide such evidence. We taught participants aspects of the referential meaning of previously unknown, senseless novel spoken words (such as “Shruba” or “Flipe”) by associating them with either a familiar action or a familiar object. After training, we used functional magnetic resonance imaging to analyze the participants’ brain responses to the new speech items. We found that hearing the newly learnt object-related word sounds selectively triggered activity in the primary visual cortex, as well as secondary and higher visual areas. These results for the first time directly document the formation of a link between the novel, previously meaningless spoken items and corresponding semantic information in primary sensory areas in a category-specific manner, providing experimental support for perceptual accounts of word-meaning acquisition in the brain.
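As a schematic of how such category-specific reactivation can be tested statistically, the sketch below runs a within-participant contrast of hypothetical V1 region-of-interest activation estimates for object-related versus action-related trained words; the values are synthetic placeholders and the analysis is a generic illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant V1 ROI activation estimates (e.g., mean GLM betas)
# for hearing newly learned object-related vs action-related words.
# These numbers are synthetic placeholders for illustration only.
rng = np.random.default_rng(1)
n_participants = 20
beta_object_words = 0.30 + 0.25 * rng.standard_normal(n_participants)
beta_action_words = 0.05 + 0.25 * rng.standard_normal(n_participants)

# Category-specific reactivation in V1 would appear as a reliable
# within-participant difference between the two word categories.
t, p = stats.ttest_rel(beta_object_words, beta_action_words)
print(f"paired t-test, object vs action words in V1 ROI: "
      f"t({n_participants - 1}) = {t:.2f}, p = {p:.4f}")
```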


2014 ◽  
Vol 17 (10) ◽  
pp. 1380-1387 ◽  
Author(s):  
Yin Yan ◽  
Malte J Rasch ◽  
Minggui Chen ◽  
Xiaoping Xiang ◽  
Min Huang ◽  
...  

2018 ◽  
Vol 120 (5) ◽  
pp. 2296-2310 ◽  
Author(s):  
Douglas A. Ruff ◽  
David H. Brainard ◽  
Marlene R. Cohen

The way that humans and animals perceive the lightness of an object depends on its physical luminance as well as its surrounding context. While neuronal responses throughout the visual pathway are modulated by context, the relationship between neuronal responses and lightness perception is poorly understood. We searched for a neuronal mechanism of lightness by recording responses of neuronal populations in monkey primary visual cortex (V1) and area V4 to stimuli that produce a lightness illusion in humans, in which the lightness of a disk depends on the context in which it is embedded. We found that the way individual units encode the luminance (or equivalently for our stimuli, contrast) of the disk and its context is extremely heterogeneous. This motivated us to ask whether the population representation in either V1 or V4 satisfies three criteria: 1) disk luminance is represented with high fidelity, 2) the context surrounding the disk is also represented, and 3) the representations of disk luminance and context interact to create a representation of lightness that depends on these factors in a manner consistent with human psychophysical judgments of disk lightness. We found that populations of units in both V1 and V4 fulfill the first two criteria, but that we cannot conclude that the two types of information in either area interact in a manner that clearly predicts human psychophysical measurements: the interpretation of our population measurements depends on how subsequent areas read out lightness from the population responses.

NEW & NOTEWORTHY A core question in visual neuroscience is how the brain extracts stable representations of object properties from the retinal image. We searched for a neuronal mechanism of lightness perception by determining whether the responses of neuronal populations in primary visual cortex and area V4 could account for a lightness illusion measured using human psychophysics. Our results suggest that comparing psychophysics with population recordings will yield insight into neuronal mechanisms underlying a variety of perceptual phenomena.
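Criterion 1 above (high-fidelity representation of disk luminance) is the kind of question typically addressed with population decoding. The sketch below shows one generic way to do this, a cross-validated linear readout of disk luminance from a trial-by-unit response matrix; it is an illustration with synthetic placeholder data, not the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def decode_luminance(spike_counts, disk_luminance):
    """Cross-validated linear readout of disk luminance from population responses.

    spike_counts   : (n_trials, n_units) array of responses (e.g., spike counts)
    disk_luminance : (n_trials,) array of the disk luminance shown on each trial
    Returns the cross-validated R^2 of the decoded luminance.
    """
    model = RidgeCV(alphas=np.logspace(-2, 3, 12))
    predicted = cross_val_predict(model, spike_counts, disk_luminance, cv=10)
    ss_res = np.sum((disk_luminance - predicted) ** 2)
    ss_tot = np.sum((disk_luminance - disk_luminance.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Example with synthetic placeholder data (illustration only): a heterogeneous
# population in which units carry mixed, signed luminance signals plus noise.
rng = np.random.default_rng(0)
n_trials, n_units = 600, 80
luminance = rng.uniform(0.0, 1.0, n_trials)
weights = rng.standard_normal(n_units)   # signed, heterogeneous tuning
counts = (luminance[:, None] * weights[None, :]
          + 0.5 * rng.standard_normal((n_trials, n_units)))
print(f"cross-validated R^2 for disk luminance: {decode_luminance(counts, luminance):.2f}")
```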


2013 ◽  
Vol 110 (23) ◽  
pp. 9517-9522 ◽  
Author(s):  
Douglas Zhou ◽  
Aaditya V. Rangan ◽  
David W. McLaughlin ◽  
David Cai

2018 ◽  
Author(s):  
Michele A. Cox ◽  
Kacie Dougherty ◽  
Jacob A. Westerberg ◽  
Michelle S. Schall ◽  
Alexander Maier

Research over the past decades has revealed that neurons in primate primary visual cortex (V1) rapidly integrate the two eyes’ separate signals into a combined binocular response. The exact mechanisms underlying this binocular integration remain elusive. One open question is whether binocular integration occurs at a single stage of sensory processing or in a sequence of computational steps. To address this question, we examined the temporal dynamics of binocular integration across V1’s laminar microcircuit in awake, behaving monkeys. We find that V1 processes binocular stimuli in a dynamic sequence that comprises at least two distinct phases: a transient phase, lasting 50–150 ms from stimulus onset, in which neuronal population responses are significantly enhanced for binocular stimulation compared to monocular stimulation, followed by a sustained phase characterized by widespread suppression in which feature-specific computations emerge. In the sustained phase, incongruent binocular stimulation resulted in response reduction relative to monocular stimulation across the V1 population. By contrast, sustained responses for binocular congruent stimulation were either reduced or enhanced relative to monocular responses depending on the neurons’ selectivity for one or both eyes (i.e., ocularity). These results suggest that binocular integration in V1 occurs in at least two sequential steps, with an initial additive combination of the two eyes’ signals followed by the establishment of interocular concordance and discordance.

Significance Statement: Our two eyes provide two separate streams of visual information that are merged in the primary visual cortex (V1). Previous work showed that stimulating both eyes rather than one eye may either increase or decrease activity in V1, depending on the nature of the stimuli. Here we show that V1 binocular responses change over time, with an early phase of general excitation followed by stimulus-dependent response suppression. These results provide important new insights into the neural machinery that supports the combination of the two eyes’ perspectives into a single coherent view.
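A simple way to quantify the two phases described above is to compare binocular and monocular population responses within an early and a sustained time window. The sketch below assumes generic trial-by-time firing-rate arrays and a (binocular − monocular)/(binocular + monocular) modulation index; the window boundaries are taken loosely from the abstract and are otherwise assumptions, not the authors' exact analysis.

```python
import numpy as np

def phase_modulation(binocular, monocular, time_ms, early=(50, 150), late=(150, 500)):
    """Compare binocular vs monocular population responses in two time windows.

    binocular, monocular : (n_trials, n_timepoints) population firing-rate arrays
    time_ms              : (n_timepoints,) time axis in ms from stimulus onset
    Returns a modulation index (bin - mon) / (bin + mon) for each window; positive
    values indicate binocular enhancement, negative values indicate suppression.
    """
    indices = {}
    for name, (t0, t1) in {"transient": early, "sustained": late}.items():
        window = (time_ms >= t0) & (time_ms < t1)
        b = binocular[:, window].mean()
        m = monocular[:, window].mean()
        indices[name] = (b - m) / (b + m)
    return indices
```

On the pattern reported above, this index would come out positive in the transient window and, for incongruent stimulation, negative in the sustained window.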


2019 ◽  
Author(s):  
John P McClure ◽  
Pierre-Olivier Polack

Multimodal sensory integration facilitates the generation of a unified and coherent perception of the environment. It is now well established that unimodal sensory perceptions, such as vision, are improved in multisensory contexts. While multimodal integration is primarily performed by dedicated multisensory brain regions such as the association cortices or the superior colliculus, recent studies have shown that multisensory interactions also occur in primary sensory cortices. In particular, sounds were shown to modulate the responses of neurons located in layers 2/3 (L2/3) of the mouse primary visual cortex (V1). Yet, the net effect of sound modulation at the V1 population level remained unclear. Here, we performed two-photon calcium imaging in awake mice to compare the representation of the orientation and the direction of drifting gratings by V1 L2/3 neurons in unimodal (visual only) or multimodal (audiovisual) conditions. We found that sound modulation depended on the tuning properties (orientation and direction selectivity) and response amplitudes of V1 L2/3 neurons. Sounds potentiated the responses of neurons that were highly tuned to the cue orientation and direction but weakly active in the unimodal context, following the principle of inverse effectiveness of multimodal integration. Moreover, sound suppressed the responses of neurons untuned for the orientation and/or the direction of the visual cue. Altogether, sound modulation improved the representation of the orientation and direction of the visual stimulus in V1 L2/3. That is, visual stimuli presented with auditory stimuli recruited a neuronal population better tuned to the visual stimulus orientation and direction than when presented alone.
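The dependence of sound modulation on tuning can be summarized per neuron with standard selectivity and modulation indices. The sketch below uses the conventional vector-averaging (circular-variance-based) definitions of orientation and direction selectivity and a simple (AV − V)/(AV + V) modulation index; these are generic choices, not necessarily the metrics used in the study.

```python
import numpy as np

def selectivity_indices(tuning):
    """Orientation- and direction-selectivity from a direction tuning curve.

    tuning : (n_directions,) mean responses to equally spaced drifting-grating
             directions covering 0-360 degrees.
    OSI is taken from the 2nd circular harmonic of the tuning curve, DSI from the 1st.
    """
    directions = np.deg2rad(np.linspace(0, 360, len(tuning), endpoint=False))
    total = tuning.sum()
    dsi = np.abs(np.sum(tuning * np.exp(1j * directions))) / total
    osi = np.abs(np.sum(tuning * np.exp(2j * directions))) / total
    return osi, dsi

def sound_modulation_index(visual, audiovisual):
    """Per-neuron modulation: (AV - V) / (AV + V); positive values = sound potentiation."""
    return (audiovisual - visual) / (audiovisual + visual)
```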


2020 ◽  
Vol 132 (6) ◽  
pp. 2000-2007 ◽  
Author(s):  
Soroush Niketeghad ◽  
Abirami Muralidharan ◽  
Uday Patel ◽  
Jessy D. Dorn ◽  
Laura Bonelli ◽  
...  

Stimulation of primary visual cortices has the potential to restore some degree of vision to blind individuals. Developing safe and reliable visual cortical prostheses requires assessment of the long-term stability, feasibility, and safety of generating stimulation-evoked perceptions.

A NeuroPace responsive neurostimulation system was implanted in a blind individual with an 8-year history of bare light perception, and stimulation-evoked phosphenes were evaluated over 19 months (41 test sessions). Electrical stimulation was delivered via two four-contact subdural electrode strips implanted over the right medial occipital cortex. Current and charge thresholds for eliciting visual perception (phosphenes) were measured, as were the shape, size, location, and intensity of the phosphenes. Adverse events were also assessed.

Stimulation of all contacts resulted in phosphene perception. Phosphenes appeared completely or partially in the left hemifield. Stimulation of the electrodes below the calcarine sulcus elicited phosphenes in the superior hemifield and vice versa. Changing the stimulation parameters of frequency, pulse width, and burst duration affected current thresholds for eliciting phosphenes, and increasing the amplitude or frequency of stimulation resulted in brighter perceptions. While stimulation thresholds decreased by an average of 5% to 12% after 19 months, spatial mapping of phosphenes remained consistent over time. Although no serious adverse events were observed, the subject experienced mild headaches and dizziness in three instances, symptoms that did not persist for more than a few hours and for which no clinical intervention was required.

Using an off-the-shelf neurostimulator, the authors were able to reliably generate phosphenes in different areas of the visual field over 19 months with no serious adverse events, providing preliminary proof of feasibility and safety to proceed with visual epicortical prosthetic clinical trials. Moreover, they systematically explored the relationship between stimulation parameters and phosphene thresholds and found that perception thresholds relate directly to the excitation thresholds of the underlying primary visual cortex (V1) neuronal populations.
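Since phosphene thresholds were characterized in terms of both current and charge, it may help to see how the stimulation parameters combine into delivered charge. The sketch below does this basic bookkeeping for a generic biphasic pulse train; the example parameter values are illustrative and are not the settings used with the NeuroPace device in this study.

```python
def charge_metrics(current_ma, pulse_width_us, frequency_hz, burst_duration_ms):
    """Basic charge bookkeeping for a biphasic pulse train (generic illustration).

    current_ma        : pulse amplitude in milliamps
    pulse_width_us    : duration of one phase in microseconds
    frequency_hz      : pulse rate within the burst
    burst_duration_ms : length of the stimulation burst in milliseconds
    Returns charge per phase and total charge per burst, both in microcoulombs.
    """
    # charge per phase = amplitude x phase duration, converted to microcoulombs
    charge_per_phase_uc = current_ma * 1e-3 * pulse_width_us * 1e-6 * 1e6
    n_pulses = frequency_hz * burst_duration_ms * 1e-3
    charge_per_burst_uc = charge_per_phase_uc * n_pulses
    return charge_per_phase_uc, charge_per_burst_uc

# Example: 2 mA, 200 us per phase, 100 Hz, 500 ms burst (illustrative values only).
per_phase, per_burst = charge_metrics(2.0, 200.0, 100.0, 500.0)
print(f"charge per phase: {per_phase:.2f} uC; charge per burst: {per_burst:.1f} uC")
```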

