The encoding of complex visual stimuli by a canonical model of the primary visual cortex: Temporal population code for face recognition on the iCub robot

Author(s):  
Andre Luvizotto ◽  
Cesar Renno-Costa ◽  
Ugo Pattacini ◽  
Paul Verschure


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jan C. Frankowski ◽  
Andrzej T. Foik ◽  
Alexa Tierno ◽  
Jiana R. Machhor ◽  
David C. Lyon ◽  
...  

Abstract: Primary sensory areas of the mammalian neocortex have a remarkable degree of plasticity, allowing neural circuits to adapt to dynamic environments. However, little is known about the effects of traumatic brain injury on visual circuit function. Here we used anatomy and in vivo electrophysiological recordings in adult mice to quantify neuronal responses to visual stimuli two weeks and three months after mild controlled cortical impact injury to primary visual cortex (V1). We found that, although V1 remained largely intact in brain-injured mice, there was a ~35% reduction in the number of neurons, a loss that affected inhibitory cells more broadly than excitatory neurons. V1 neurons showed dramatically reduced activity, impaired responses to visual stimuli, and weaker size selectivity and orientation tuning in vivo. Our results show that a single, mild contusion injury produces profound and long-lasting impairments in the way V1 neurons encode visual input. These findings provide initial insight into cortical circuit dysfunction following central visual system neurotrauma.
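To make the tuning comparison concrete, here is a minimal Python sketch of one common way to quantify orientation tuning, a circular-variance-based orientation selectivity index (OSI) computed from trial-averaged tuning curves. This is not the authors' analysis pipeline, and the response arrays below are synthetic placeholders standing in for recorded data.

```python
# Illustrative sketch (not the authors' code): a circular-variance-based OSI
# computed from trial-averaged tuning curves, the kind of metric used to
# compare orientation tuning between groups. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
orientations = np.deg2rad(np.arange(0, 180, 30))            # 6 grating orientations

def global_osi(tuning):
    """OSI in [0, 1]: 1 = responds to a single orientation, 0 = untuned."""
    tuning = np.clip(tuning, 0, None)
    resultant = np.sum(tuning * np.exp(2j * orientations))
    return np.abs(resultant) / np.sum(tuning)

# Synthetic trial-averaged responses (neurons x orientations); the "injured"
# population is given shallower tuning to mimic weaker orientation selectivity.
pref = rng.uniform(0, np.pi, size=(100, 1))
bump = 1 + np.cos(2 * (orientations[None, :] - pref))
sham    = 2.0 + 8.0 * bump + rng.normal(0, 0.5, (100, 6))
injured = 2.0 + 2.0 * bump + rng.normal(0, 0.5, (100, 6))

print("median OSI, sham   :", round(np.median([global_osi(r) for r in sham]), 2))
print("median OSI, injured:", round(np.median([global_osi(r) for r in injured]), 2))
```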


Stroke ◽  
2001 ◽  
Vol 32 (suppl_1) ◽  
pp. 334-334
Author(s):  
Gereon Nelles ◽  
Guido Widmann ◽  
Joachim Esser ◽  
Anette Meistrowitz ◽  
Johannes Weber ◽  
...  

Introduction: Restitution of unilateral visual field defects following occipital cortex lesions occurs rarely. Partial recovery, however, can be observed in patients with incomplete lesions of the visual cortex. Our objective was to study the neuroplastic changes in the visual system that underlie such recovery. Methods and Results: Six patients with a left PCA-territory cortical stroke and 6 healthy control subjects were studied during rest and during visual stimulation using a 1.5 T fMRI scanner with a 40 mT/m gradient system. Visual stimuli were projected with a laptop computer onto a 154 x 115 cm screen placed 90 cm in front of the gantry. Subjects were asked to fixate a red point in the center of the screen during both conditions. During stimulation, a black-and-white checkerboard pattern reversal was presented in each hemifield. For each side, 120 volumes of 48 contiguous axial fMRI images were obtained during rest and during hemifield stimulation in alternating order (60 volumes for each condition). Significant differences in rCBF between stimulation and rest were assessed as group analyses using statistical parametric mapping (SPM 99; p<0.01, corrected for multiple comparisons). In controls, strong increases in rCBF (Z=7.6) occurred in the contralateral primary visual cortex V1 (area 17) and in V3a (area 18) and V5 (area 19). No differences were found between the right and left sides in controls. During stimulation of the unaffected (left) visual field in hemianopic patients, activation occurred in contralateral V1, but the strongest increases in rCBF (Z>10) were seen in contralateral V3a (area 18) and V5 (area 19). During stimulation of the hemianopic (right) visual field, no activation was found in the primary visual cortex of either hemisphere. The most significant activation (Z=9.2) was seen in the ipsilateral V3a and V5 areas, and in contralateral (left) V3a. Conclusions: Partial recovery from hemianopia is associated with strong ipsilateral activation of the visual system. Processing of visual stimuli in the hemianopic side spares the primary visual cortex and may involve recruitment of neurons in ipsilateral (contralesional) areas V3a and V5.
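The stimulation-versus-rest contrast at the heart of this block design can be illustrated with a minimal general-linear-model sketch on a synthetic single-voxel time series. This is only an illustration of the design logic, not the SPM 99 group analysis used in the study; the block length, effect size, and noise level are assumptions.

```python
# Minimal sketch of a block-design stimulation-vs-rest contrast on one
# synthetic voxel (not the SPM 99 group analysis). Assumes 10-volume blocks.
import numpy as np

rng = np.random.default_rng(1)
n_vol = 120                                            # volumes per hemifield run
boxcar = (np.arange(n_vol) // 10) % 2                  # 0 = rest, 1 = stimulation
X = np.column_stack([np.ones(n_vol), boxcar])          # design: intercept + condition

# Synthetic voxel time series: baseline + activation during stimulation + noise
y = 100 + 3.0 * boxcar + rng.normal(0, 1.0, n_vol)

beta, ss_res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n_vol - X.shape[1]
sigma2 = ss_res[0] / dof
contrast = np.array([0.0, 1.0])                        # stimulation > rest
t = contrast @ beta / np.sqrt(sigma2 * (contrast @ np.linalg.inv(X.T @ X) @ contrast))
print(f"stimulation effect = {beta[1]:.2f}, t({dof}) = {t:.1f}")
```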


2019 ◽  
Vol 121 (6) ◽  
pp. 2202-2214 ◽  
Author(s):  
John P. McClure ◽  
Pierre-Olivier Polack

Multimodal sensory integration facilitates the generation of a unified and coherent perception of the environment. It is now well established that unimodal sensory perceptions, such as vision, are improved in multisensory contexts. Whereas multimodal integration is primarily performed by dedicated multisensory brain regions such as the association cortices or the superior colliculus, recent studies have shown that multisensory interactions also occur in primary sensory cortices. In particular, sounds were shown to modulate the responses of neurons located in layers 2/3 (L2/3) of the mouse primary visual cortex (V1). Yet, the net effect of sound modulation at the V1 population level remained unclear. In the present study, we performed two-photon calcium imaging in awake mice to compare the representation of the orientation and direction of drifting gratings by V1 L2/3 neurons in unimodal (visual only) and multimodal (audiovisual) conditions. We found that sound modulation depended on the tuning properties (orientation and direction selectivity) and response amplitudes of V1 L2/3 neurons. Sounds potentiated the responses of neurons that were highly tuned to the cue’s orientation and direction but weakly active in the unimodal context, following the principle of inverse effectiveness of multimodal integration. Moreover, sound suppressed the responses of neurons untuned for the orientation and/or direction of the visual cue. Altogether, sound modulation improved the representation of the orientation and direction of the visual stimulus in V1 L2/3. Namely, visual stimuli presented with auditory stimuli recruited a neuronal population better tuned to the visual stimulus orientation and direction than when presented alone. NEW & NOTEWORTHY: The primary visual cortex (V1) receives direct inputs from the primary auditory cortex. Yet, the impact of sounds on visual processing in V1 remains debated. We show that the modulation by pure tones of V1 visual responses depends on the orientation selectivity, direction selectivity, and response amplitudes of V1 neurons. Hence, audiovisual stimuli recruit a population of V1 neurons better tuned to the orientation and direction of the visual stimulus than unimodal visual stimuli.
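A simple way to express the inverse-effectiveness pattern described above is a per-neuron sound-modulation index compared between weakly and strongly responding cells. The sketch below uses synthetic response amplitudes, not the published two-photon data, and the additive sound effect is an assumption made only for illustration.

```python
# Illustrative sketch on synthetic data: sound-modulation index (AV - V)/(AV + V)
# compared between weakly and strongly responding neurons, showing that a
# roughly fixed sound-driven increment benefits weak responders the most.
import numpy as np

rng = np.random.default_rng(2)
v  = rng.gamma(2.0, 1.0, size=200)                     # visual-only response amplitudes
av = v + rng.normal(0.8, 0.2, size=200).clip(min=0)    # audiovisual responses (assumed additive)

mod_index = (av - v) / (av + v)
weak   = v <  np.median(v)
strong = v >= np.median(v)
print("mean modulation index, weakly active neurons  :", round(mod_index[weak].mean(), 2))
print("mean modulation index, strongly active neurons:", round(mod_index[strong].mean(), 2))
```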


2017 ◽  
Author(s):  
Maria C. Dadarlat ◽  
Michael P. Stryker

Abstract: Neurons in mouse primary visual cortex (V1) are selective for particular properties of visual stimuli. Locomotion causes a change in cortical state that leaves their selectivity unchanged but strengthens their responses. Both locomotion and the change in cortical state are initiated by projections from the mesencephalic locomotor region (MLR), the latter through a disinhibitory circuit in V1. The function served by this change in cortical state is unknown. By recording simultaneously from a large number of single neurons in alert mice viewing moving gratings, we investigated the relationship between locomotion and the information contained within the neural population. We found that locomotion improved encoding of visual stimuli in V1 by two mechanisms. First, locomotion-induced increases in firing rates enhanced the mutual information between visual stimuli and single-neuron responses over a fixed window of time. Second, stimulus discriminability was improved, even for fixed population firing rates, because of a decrease in noise correlations across the population during locomotion. These two mechanisms contributed differently to improvements in discriminability across cortical layers, with changes in firing rates most important in the upper layers and changes in noise correlations most important in layer V. Together, these changes resulted in a three- to five-fold reduction in the time needed to precisely encode grating direction and orientation. These results support the hypothesis that cortical state shifts during locomotion to accommodate an increased load on the visual system when mice are moving. Significance Statement: This paper contains three novel findings about the representation of information in neurons within the primary visual cortex of the mouse. First, we show that locomotion reduces by at least a factor of three the time needed for information that distinguishes different visual stimuli to accumulate in the visual cortex. Second, we show that the effect of locomotion is to increase information in cells of all layers of the visual cortex. Third, we show that the means by which information is enhanced by locomotion differs between the upper layers, where the major effect is the increase in firing rates, and layer V, where the major effect is the reduction in noise correlations.
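The two quantities central to this abstract, stimulus-response mutual information and pairwise noise correlations, can both be sketched on synthetic spike counts. The snippet below is a plug-in illustration of the definitions, not the authors' estimators or data.

```python
# Sketch of the two population metrics discussed above, on synthetic Poisson
# spike counts: (1) plug-in mutual information between stimulus identity and one
# neuron's spike count; (2) pairwise noise correlations (correlations of
# trial-to-trial fluctuations around each stimulus mean).
import numpy as np

rng = np.random.default_rng(3)
n_stim, n_trials, n_neurons = 8, 50, 40                       # 8 grating directions
rates = rng.uniform(2, 10, size=(n_stim, n_neurons))          # synthetic tuning
counts = rng.poisson(rates[:, None, :], size=(n_stim, n_trials, n_neurons))

def mutual_information(neuron_counts):
    """Plug-in MI (bits) between stimulus index and one neuron's spike count."""
    values = np.unique(neuron_counts)
    p_s = 1.0 / n_stim
    p_r = np.array([(neuron_counts == v).mean() for v in values])
    mi = 0.0
    for s in range(n_stim):
        for v, pr in zip(values, p_r):
            p_r_given_s = (neuron_counts[s] == v).mean()
            if p_r_given_s > 0:
                mi += p_s * p_r_given_s * np.log2(p_r_given_s / pr)
    return mi

print("MI, neuron 0:", round(mutual_information(counts[:, :, 0]), 3), "bits")

# Noise correlations: correlate trial-to-trial fluctuations around stimulus means
z = (counts - counts.mean(1, keepdims=True)) / (counts.std(1, keepdims=True) + 1e-9)
noise_corr = np.corrcoef(z.reshape(-1, n_neurons).T)
off_diag = noise_corr[~np.eye(n_neurons, dtype=bool)]
print("mean pairwise noise correlation:", round(off_diag.mean(), 3))
```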


Author(s):  
R. Oz ◽  
H. Edelman-Klapper ◽  
S. Nivinsky-Margalit ◽  
H. Slovin

Abstract: Intracortical microstimulation (ICMS) in the primary visual cortex (V1) can generate the visual perception of phosphenes and evoke saccades directed to the stimulated location in the retinotopic map. Although ICMS is widely used, little is known about the evoked spatio-temporal patterns of neural activity and their relation to neural responses evoked by visual stimuli or to saccade generation. To investigate this, we combined ICMS with voltage-sensitive dye imaging in V1 of behaving monkeys and measured neural activity at high spatial (meso-scale) and temporal resolution. Small visual stimuli and ICMS evoked population activity spreading over a few millimeters that propagated to extrastriate areas. The population responses evoked by ICMS showed faster dynamics and different spatial propagation patterns. Neural activity was higher in trials with saccades compared with trials without saccades. In conclusion, our results uncover the spatio-temporal patterns evoked by ICMS and their relation to visual processing and saccade generation.
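One simple way to summarize the spatial spread of an evoked activity map over time is to count pixels exceeding half of the frame's peak response. The toy sketch below operates on synthetic frames, not VSDI data, and is only meant to illustrate the kind of spread measure such a comparison could use.

```python
# Toy sketch (synthetic frames, not VSDI recordings): track the spatial spread of
# an evoked activity map over time as the number of pixels above half the peak.
import numpy as np

rng = np.random.default_rng(4)
n_frames, size = 20, 64
yy, xx = np.mgrid[0:size, 0:size]
center = size // 2

frames = []
for t in range(n_frames):
    sigma = 3 + 0.8 * t                          # activity spreads over time
    amp = np.exp(-t / 15.0)                      # while its amplitude decays
    g = amp * np.exp(-((xx - center)**2 + (yy - center)**2) / (2 * sigma**2))
    frames.append(g + rng.normal(0, 0.01, (size, size)))
frames = np.array(frames)

for t in (0, 5, 10, 15):
    frame = frames[t]
    spread_px = int(np.sum(frame > 0.5 * frame.max()))
    print(f"frame {t:2d}: pixels above half-max = {spread_px}")
```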


2021 ◽  
Author(s):  
Marton Albert Hajnal ◽  
Duy Tran ◽  
Michael Einstein ◽  
Mauricio Vallejo Martelo ◽  
Karen Safaryan ◽  
...  

Primary visual cortex (V1) neurons integrate motor and multisensory information with visual inputs during sensory processing. However, whether V1 neurons also integrate and encode higher-order cognitive variables is less well understood. We trained mice to perform a context-dependent cross-modal decision task in which the interpretation of identical audio-visual stimuli depends on task context. We performed silicon probe population recordings of neuronal activity in V1 during task performance and showed that task context (whether the animal should base its decision on visual or auditory stimuli) can be decoded during both intertrial intervals and stimulus presentations. Context and visual stimuli were represented in overlapping populations but were orthogonal in the population activity space. Context representation was not static but displayed distinctive dynamics upon stimulus onset and offset. Thus, activity patterns in V1 independently represent visual stimuli and cognitive variables relevant to task execution.
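The decoding and orthogonality claims translate naturally into a short linear-decoder sketch. The code below runs on synthetic population vectors (not the recorded data) in which context and stimulus are assumed, for illustration, to load on disjoint groups of neurons; it decodes context with a logistic regression and reports the cosine between the context and stimulus weight vectors as a crude orthogonality check.

```python
# Sketch of the decoding logic described above, on synthetic population activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_neurons = 400, 60
context  = rng.integers(0, 2, n_trials)                 # 0 = visual block, 1 = auditory block
stimulus = rng.integers(0, 2, n_trials)                 # e.g., two grating orientations

# Assumption: context and stimulus drive different (orthogonal) neural directions
context_axis  = np.zeros(n_neurons); context_axis[:20]   = 1.0
stimulus_axis = np.zeros(n_neurons); stimulus_axis[20:40] = 1.0
X = (np.outer(context, context_axis) + np.outer(stimulus, stimulus_axis)
     + rng.normal(0, 0.8, (n_trials, n_neurons)))

ctx_acc = cross_val_score(LogisticRegression(max_iter=1000), X, context, cv=5).mean()
print("context decoding accuracy:", round(ctx_acc, 2))

w_ctx  = LogisticRegression(max_iter=1000).fit(X, context).coef_.ravel()
w_stim = LogisticRegression(max_iter=1000).fit(X, stimulus).coef_.ravel()
cosine = w_ctx @ w_stim / (np.linalg.norm(w_ctx) * np.linalg.norm(w_stim))
print("cosine between context and stimulus axes:", round(cosine, 2))
```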


2016 ◽  
Author(s):  
Dylan R Muir ◽  
Patricia Molina-Luna ◽  
Morgane M Roth ◽  
Fritjof Helmchen ◽  
Björn M Kampa

Abstract: Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large-scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming ‘feature binding’ connectivity. Unlike under the ‘like-to-like’ scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. Author summary: The brain is a highly complex structure, with abundant connectivity between nearby neurons in the neocortex, the outermost and evolutionarily most recent part of the brain. Although the network architecture of the neocortex can appear disordered, connections between neurons seem to follow certain rules. These rules most likely determine how information flows through the neural circuits of the brain, but the relationship between particular connectivity rules and the function of the cortical network is not known. We built models of visual cortex in the mouse, assuming distinct rules for connectivity, and examined how the various rules changed the way the models responded to visual stimuli. We also recorded responses to visual stimuli of populations of neurons in anaesthetised mice, and compared these responses with our model predictions. We found that connections in neocortex probably follow a connectivity rule that groups together neurons that differ in simple visual properties, to build more complex representations of visual stimuli. This finding is surprising because primary visual cortex is assumed to support mainly simple visual representations. We show that including specific rules for non-random connectivity in cortical models, and precisely measuring those rules in cortical tissue, is essential to understanding how information is processed by the brain.
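The two connectivity rules contrasted above can be captured in a toy weight-matrix construction. The sketch below is not the paper's large-scale model; it simply builds a ‘like-to-like’ matrix whose weights decay with orientation-preference difference and a ‘feature binding’ matrix that wires neurons within randomly assigned subnetworks, then shows that partners differ much more in preferred orientation under the second rule.

```python
# Toy illustration of 'like-to-like' vs 'feature binding' local connectivity.
import numpy as np

rng = np.random.default_rng(6)
n = 200
pref = rng.uniform(0, np.pi, n)                         # preferred orientations

# Like-to-like: connection strength decays with circular orientation difference
dtheta = np.abs(np.angle(np.exp(2j * (pref[:, None] - pref[None, :])))) / 2
w_like = np.exp(-(dtheta / 0.3) ** 2)
np.fill_diagonal(w_like, 0)

# Feature binding: 10 subnetworks, strong connections only within a subnetwork
subnet = rng.integers(0, 10, n)
w_bind = (subnet[:, None] == subnet[None, :]).astype(float)
np.fill_diagonal(w_bind, 0)

def partner_pref_spread(w):
    """Circular spread of partners' preferred orientations (0 = all identical)."""
    resultant = np.abs((w * np.exp(2j * pref[None, :])).sum(axis=1)) / (w.sum(axis=1) + 1e-9)
    return 1 - resultant.mean()

print("like-to-like partner spread   :", round(partner_pref_spread(w_like), 2))
print("feature-binding partner spread:", round(partner_pref_spread(w_bind), 2))
```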


2021 ◽  
Author(s):  
Liming Tan ◽  
Dario L. Ringach ◽  
S. Lawrence Zipursky ◽  
Joshua T. Trachtenberg

Depth perception emerges from the development of binocular neurons in primary visual cortex. Vision is required for these neurons to acquire their mature responses to visual stimuli. A prevalent view is that vision does not influence binocular circuitry until the onset of the critical period, about a week after eye opening, and that this plasticity relies on inhibition. Here, we show that vision is required to form binocular neurons and to improve binocular tuning and matching from eye opening until critical period closure. Inhibition is not required for this process, but rather antagonizes it. Vision improves the tuning properties of binocular neurons by strengthening and sharpening ipsilateral eye cortical responses. This progressively changes the population of neurons in the binocular pool, and this plasticity is sensitive to interocular differences prior to the critical period. Thus, vision guides binocular plasticity from eye opening and prior to the classically defined critical period.
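Binocular matching of the kind discussed here is usually quantified as the interocular difference in preferred orientation. The sketch below illustrates that measure on synthetic contralateral- and ipsilateral-eye tuning curves; the jitter between the two eyes is an assumption made for the example, not the paper's data.

```python
# Minimal sketch (synthetic tuning curves): interocular mismatch in preferred
# orientation, a standard binocular-matching measure.
import numpy as np

rng = np.random.default_rng(7)
orientations = np.deg2rad(np.arange(0, 180, 15))        # 12 stimulus orientations

def preferred_orientation(tuning):
    """Vector-average preferred orientation in degrees (range 0-180)."""
    angle = np.angle(np.sum(tuning * np.exp(2j * orientations))) / 2
    return np.rad2deg(angle) % 180

n_cells = 100
pref_contra = rng.uniform(0, np.pi, n_cells)
# Assumption: the ipsilateral-eye preference is a jittered copy of the contra one
pref_ipsi = pref_contra + rng.normal(0, np.deg2rad(20), n_cells)

mismatches = []
for p_c, p_i in zip(pref_contra, pref_ipsi):
    contra = 1 + np.cos(2 * (orientations - p_c)) + rng.normal(0, 0.1, 12)
    ipsi   = 1 + np.cos(2 * (orientations - p_i)) + rng.normal(0, 0.1, 12)
    d = abs(preferred_orientation(contra) - preferred_orientation(ipsi))
    mismatches.append(min(d, 180 - d))                  # circular difference, 0-90 deg

print("median interocular mismatch (deg):", round(float(np.median(mismatches)), 1))
```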

