Unsupervised Models of Mouse Visual Cortex

2021 ◽  
Author(s):  
Aran Nayebi ◽  
Nathan C. L. Kong ◽  
Chengxu Zhuang ◽  
Justin L. Gardner ◽  
Anthony M. Norcia ◽  
...  

Task-optimized deep convolutional neural networks are the most quantitatively accurate models of the primate ventral visual stream. However, such networks are implausible as models of the mouse visual system: mouse visual cortex is known to have a shallower hierarchy, and the supervised objectives these networks are typically trained with are likely ethologically relevant neither in content nor in quantity. Here we develop shallow network architectures that are more consistent with anatomical and physiological studies of mouse visual cortex than current models. We demonstrate that hierarchically shallow architectures trained using contrastive objective functions applied to visual-acuity-adapted images achieve neural prediction performance that exceeds that of the same architectures trained in a supervised manner, yielding the most quantitatively accurate models of the mouse visual system. Moreover, these models' neural predictivity significantly surpasses that of supervised, deep architectures that are known to correspond well to the primate ventral visual stream. Finally, we derive a novel measure of inter-animal consistency and show that the best models closely match this quantity across visual areas. Taken together, our results suggest that contrastive objectives operating on shallow architectures with ethologically motivated image transformations may be a biologically plausible computational theory of visual coding in mice.
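The contrastive objectives in question come from the instance-discrimination family of self-supervised losses. As an illustration (a generic SimCLR-style NT-Xent loss, not necessarily the exact objective benchmarked in the paper), the core computation can be sketched in a few lines:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style normalized-temperature cross-entropy loss.

    z1, z2: (n, d) embeddings of two augmented views of n images;
    row i of z1 and row i of z2 form a positive pair, and all other
    rows in the batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # project onto unit sphere
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # a sample is not its own negative
    # positive for row i is row i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views embed close together relative to the negatives, the loss is low; misaligned pairs drive it up.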

2018 ◽  
Author(s):  
Miaomiao Jin ◽  
Jeffrey M. Beck ◽  
Lindsey L. Glickfeld

Abstract
Sensory information is encoded by populations of cortical neurons. Yet it is unknown how this information is used for even simple perceptual choices, such as discriminating orientation. To determine the computation underlying this perceptual choice, we took advantage of the robust adaptation in the mouse visual system. We find that adaptation increases animals' thresholds for orientation discrimination. This was unexpected, since optimal computations that take advantage of all available sensory information predict that the shift in tuning and the increase in signal-to-noise ratio in the adapted condition should improve discrimination. Instead, we find that the effects of adaptation on behavior can be explained by the appropriate reliance of the perceptual choice circuits on target-preferring neurons, combined with a failure to discount neurons that prefer the distractor. This suggests that, to solve this task, the circuit has adopted a suboptimal strategy that discards important task-related information to implement a feed-forward visual computation.


2018 ◽  
Author(s):  
Balaji Sriram ◽  
Alberto Cruz-Martin ◽  
Lillian Li ◽  
Pamela Reinagel ◽  
Anirvan Ghosh

Abstract
The cortical code that underlies perception must enable subjects to perceive the world at timescales relevant for behavior. We find that mice can integrate visual stimuli very quickly (<100 ms) to reach plateau performance in an orientation discrimination task. To define the features of cortical activity that underlie performance at these timescales, we measured single-unit responses in the mouse visual cortex at timescales relevant to this task. In contrast to high-contrast stimuli of longer duration, which elicit reliable activity in individual neurons, stimuli at the threshold of perception elicit extremely sparse and unreliable responses in V1, such that the activity of individual neurons does not reliably report orientation. Integrating information across neurons, however, quickly improves performance. Using a linear decoding model, we estimate that integrating information over 50-100 neurons is sufficient to account for behavioral performance. Thus, at the limits of perception, the visual system is able to integrate information across a relatively small number of highly unreliable single units to generate reliable behavior.
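The pooling intuition behind such linear decoding — many weakly informative, noisy neurons combining into a reliable readout — can be shown with a toy simulation (illustrative parameters, not the paper's fitted model):

```python
import numpy as np

def population_decode_accuracy(n_neurons, n_trials=2000, signal=0.15, seed=0):
    """Accuracy of a linear decoder discriminating two orientations from a
    population of weakly tuned, noisy model neurons.

    Each neuron's response is shifted by +/- `signal` depending on the
    stimulus; single neurons are unreliable, but summing across neurons
    averages out the independent noise (illustrative simulation only).
    """
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, n_trials)          # stimulus A=0 or B=1 per trial
    drive = signal * (2 * labels - 1)              # signed signal per trial
    responses = drive[:, None] + rng.normal(size=(n_trials, n_neurons))
    decision = responses.sum(axis=1) > 0           # linear (summation) readout
    return (decision == labels).mean()
```

With these toy parameters a single neuron barely beats chance, while pooling on the order of a hundred neurons yields reliable performance, mirroring the qualitative point of the abstract.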


2016 ◽  
Author(s):  
Inbal Ayzenshtat ◽  
Jesse Jackson ◽  
Rafael Yuste

Abstract
The response properties of neurons to sensory stimuli have been used to identify their receptive fields and functionally map sensory systems. In primary visual cortex, most neurons are selective for a particular orientation and spatial frequency of the visual stimulus. Using two-photon calcium imaging of neuronal populations from the primary visual cortex of mice, we characterized the response properties of neurons to various orientations and spatial frequencies. Surprisingly, we found that the orientation selectivity of neurons actually depends on the spatial frequency of the stimulus. This dependence can be easily explained if one assumes spatially asymmetric Gabor-type receptive fields. We propose that receptive fields of neurons in layer 2/3 of visual cortex are indeed spatially asymmetric, and that this asymmetry could be used effectively by the visual system to encode natural scenes.
Significance Statement
In this manuscript we demonstrate that the orientation selectivity of neurons in mouse primary visual cortex is highly dependent on the stimulus spatial frequency (SF). This dependence is realized quantitatively in a decrease in the selectivity strength of cells at non-optimal SFs and, more importantly, it is also evident qualitatively in a shift in the preferred orientation of cells at non-optimal SFs. We show that a receptive-field model of a 2D asymmetric Gabor, rather than a symmetric one, can explain this surprising observation. Therefore, we propose that the receptive fields of neurons in layer 2/3 of mouse visual cortex are spatially asymmetric and that this asymmetry could be used effectively by the visual system to encode natural scenes.
Highlights
– Orientation selectivity is dependent on spatial frequency.
– An asymmetric Gabor model can explain this dependence.
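For concreteness, a 2D Gabor receptive field can be sampled as below. Here the asymmetry is introduced as an elliptical envelope (sigma_x != sigma_y), which is one simple parameterization; the paper's asymmetric variant may differ in detail:

```python
import numpy as np

def gabor_rf(size=32, theta=0.0, sf=0.1, sigma_x=4.0, sigma_y=8.0, phase=0.0):
    """Sample a 2D Gabor receptive field on a size x size pixel grid.

    theta: preferred orientation (radians); sf: spatial frequency
    (cycles/pixel). An elliptical envelope (sigma_x != sigma_y) is one
    simple way to make the RF spatially asymmetric.
    """
    coords = np.arange(size) - (size - 1) / 2       # symmetric grid about 0
    x, y = np.meshgrid(coords, coords)
    # rotate coordinates into the filter's preferred orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 / (2 * sigma_x**2) + yr**2 / (2 * sigma_y**2)))
    carrier = np.cos(2 * np.pi * sf * xr + phase)
    return envelope * carrier
```

With sigma_x == sigma_y the filter is symmetric about its axes; elongating one sigma breaks that symmetry and couples the filter's orientation tuning to stimulus SF, which is the qualitative effect the abstract describes.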


2017 ◽  
Author(s):  
Jesse Gomez ◽  
Vaidehi Natu ◽  
Brianna Jeska ◽  
Michael Barnett ◽  
Kalanit Grill-Spector

Abstract
Receptive fields (RFs) that process information in restricted parts of the visual field are a key property of neurons in the visual system. However, how RFs develop in humans is unknown. Using fMRI and population receptive field (pRF) modeling in children and adults, we determined where and how pRFs develop across the ventral visual stream. We find that pRF properties in visual field maps V1 through VO1 are adult-like by age 5. However, pRF properties in face- and word-selective regions develop into adulthood, increasing the foveal representation and the visual field coverage for faces in the right hemisphere and words in the left hemisphere. Eye tracking indicates that pRF changes are related to changing fixation patterns on words and faces across development. These findings suggest a link between viewing behavior of faces and words and the differential development of pRFs across visual cortex, potentially due to competition for foveal coverage.
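pRF modeling of this kind is typically built on the standard linear pRF framework, in which the predicted response is the overlap of the binary stimulus aperture with a 2D Gaussian receptive field. A minimal sketch (isotropic Gaussian, no hemodynamic convolution, so a simplification of what such studies fit):

```python
import numpy as np

def prf_response(stim, x0, y0, sigma):
    """Predicted response time course of a pRF modeled as a 2D isotropic
    Gaussian centered at (x0, y0) with width sigma.

    stim: (T, H, W) binary stimulus-aperture array (1 where the stimulus
    covers the visual field at each time point). The response at each
    time point is the stimulus' overlap with the normalized Gaussian.
    """
    h, w = stim.shape[1:]
    ys, xs = np.mgrid[0:h, 0:w]
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma**2))
    rf /= rf.sum()                      # normalize so responses are comparable
    return (stim * rf).sum(axis=(1, 2))
```

Fitting (x0, y0, sigma) per voxel to maximize agreement with measured time courses is what yields the pRF center and size estimates compared across ages in the abstract.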


2020 ◽  
Author(s):  
Franziska Geiger ◽  
Martin Schrimpf ◽  
Tiago Marques ◽  
James J. DiCarlo

Abstract
After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are poor models of the development of the visual system because they posit millions of sequential, precisely coordinated synaptic updates, each based on a labeled image. While ongoing research is pursuing the use of unsupervised proxies for labels, we here explore a complementary strategy of reducing the required number of supervised synaptic updates to produce an adult-like ventral visual stream (as judged by the match to V1, V2, V4, IT, and behavior). Such models might require less precise machinery and energy expenditure to coordinate these updates and would thus move us closer to viable neuroscientific hypotheses about how the visual system wires itself up. Relative to the current leading model of the adult ventral stream, we here demonstrate that the total number of supervised weight updates can be substantially reduced using three complementary strategies: First, we find that only 2% of supervised updates (epochs and images) are needed to achieve ~80% of the match to adult ventral stream. Second, by improving the random distribution of synaptic connectivity, we find that 54% of the brain match can already be achieved “at birth” (i.e. no training at all). Third, we find that, by training only ~5% of model synapses, we can still achieve nearly 80% of the match to the ventral stream. When these three strategies are applied in combination, we find that these new models achieve ~80% of a fully trained model’s match to the brain, while using two orders of magnitude fewer supervised synaptic updates.
These results reflect first steps in modeling not just primate adult visual processing during inference, but also how the ventral visual stream might be “wired up” by evolution (a model’s “birth” state) and by developmental learning (a model’s updates based on visual experience).
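The third strategy, updating only a small fraction of synapses, can be caricatured with a toy linear model in which a gradient mask freezes most weights at their random "birth" values. This is an analogue of the idea only, not the paper's architecture or training procedure:

```python
import numpy as np

def train_masked(X, y, frac_trainable=0.05, lr=0.1, steps=200, seed=0):
    """Gradient descent on a linear model where only a small fraction of
    the weights ("synapses") is trainable; the rest stay frozen at their
    random initial ("birth") values.

    For simplicity the trainable subset is the first k weights; a real
    model would choose which synapses to train more carefully.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    w = rng.normal(scale=0.1, size=d)          # random "birth" state
    k = max(1, round(frac_trainable * d))
    mask = np.zeros(d, dtype=bool)
    mask[:k] = True                            # trainable subset
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)      # mean-squared-error gradient
        w -= lr * grad * mask                  # frozen weights receive no update
    return w, mask
```

Because the update is masked, the loss can only be reduced along the trainable coordinates; the interesting empirical claim in the abstract is that, for brain match, a small trainable fraction goes a long way.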


2017 ◽  
Author(s):  
David Richter ◽  
Matthias Ekman ◽  
Floris P. de Lange

Abstract
Prediction plays a crucial role in perception, as prominently suggested by predictive coding theories. However, the exact form and mechanism of predictive modulations of sensory processing remain unclear, with some studies reporting a downregulation of the sensory response for predictable input, while others observed an enhanced response. In a similar vein, downregulation of the sensory response for predictable input has been linked to either sharpening or dampening of the sensory representation, which are opposite in nature. In the present study we set out to investigate the neural consequences of perceptual expectation of object stimuli throughout the visual hierarchy, using fMRI in human volunteers. Participants (n=24) were exposed to pairs of sequentially presented object images in a statistical learning paradigm, in which the first object predicted the identity of the second object. Image transitions were not task relevant; thus all learning of statistical regularities was incidental. We found strong suppression of neural responses to expected compared to unexpected stimuli throughout the ventral visual stream, including primary visual cortex (V1), lateral occipital complex (LOC), and anterior ventral visual areas. Expectation suppression in LOC, but not V1, scaled positively with image preference, lending support to the dampening account of expectation suppression in object perception.
Significance Statement
Statistical regularities permeate our world and help us to perceive and understand our surroundings. It has been suggested that the brain fundamentally relies on predictions and constructs models of the world in order to make sense of sensory information. Previous research on the neural basis of prediction has documented expectation suppression, i.e. suppressed responses to expected compared to unexpected stimuli. In the present study we queried the presence and characteristics of expectation suppression throughout the ventral visual stream.
We demonstrate robust expectation suppression in the entire ventral visual pathway, and underlying this suppression a dampening of the sensory representation in object-selective visual cortex, but not in primary visual cortex. Taken together, our results provide novel evidence in support of theories conceptualizing perception as an active inference process, which selectively dampens cortical representations of predictable objects. This dampening may support our ability to automatically filter out irrelevant, predictable objects.


2015 ◽  
Vol 113 (5) ◽  
pp. 1656-1669 ◽  
Author(s):  
Jedediah M. Singer ◽  
Joseph R. Madsen ◽  
William S. Anderson ◽  
Gabriel Kreiman

Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.


2020 ◽  
Vol 10 (9) ◽  
pp. 602
Author(s):  
Yibo Cui ◽  
Chi Zhang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Bin Yan ◽  
...  

Representation invariance plays a significant role in the performance of deep convolutional neural networks (CNNs) and in human visual information processing across various complicated image-based tasks. However, there has been considerable confusion concerning the representation invariance mechanisms of these two sophisticated systems. To investigate their relationship under common conditions, we proposed a representation invariance analysis approach based on data augmentation. First, the original image library was expanded by data augmentation. The representation invariances of CNNs and the ventral visual stream were then studied by comparing the similarities of the corresponding layer features of CNNs, and the prediction performance of visual encoding models based on functional magnetic resonance imaging (fMRI), before and after data augmentation. Our experimental results suggest that the architecture of CNNs, i.e., the combination of convolutional and fully connected layers, gives rise to the representation invariance of CNNs. Remarkably, we found that representation invariance is present at all successive stages of the ventral visual stream. Hence, an internal correlation between CNNs and the human visual system with respect to representation invariance was revealed. Our study advances invariant representation in computer vision and deepens our comprehension of the representation invariance mechanisms of human visual information processing.
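As a toy illustration of how a standard CNN ingredient produces invariance under one augmentation, average pooling increases feature similarity under small translations. This is illustrative code in the spirit of the layer-wise similarity comparison, not the study's actual analysis pipeline:

```python
import numpy as np

def feature_similarity(feat_a, feat_b):
    """Pearson correlation between two flattened feature maps: a simple
    stand-in for a layer-wise feature-similarity measure."""
    return float(np.corrcoef(feat_a.ravel(), feat_b.ravel())[0, 1])

def avg_pool(img, k=4):
    """k x k average pooling, a minimal source of translation tolerance."""
    h, w = img.shape
    img = img[: h - h % k, : w - w % k]          # crop to a multiple of k
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

For a random image and its 2-pixel-shifted copy, raw pixel features are nearly uncorrelated, while the pooled features remain substantially correlated: the pooled representation is more invariant to the augmentation.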


2018 ◽  
Vol 120 (3) ◽  
pp. 926-941 ◽  
Author(s):  
Dzmitry A. Kaliukhovich ◽  
Hans Op de Beeck

Similar to primates, visual cortex in rodents appears to be organized into two distinct hierarchical streams. However, little is known about how visual information is processed along those streams in rodents. In this study, we examined how repetition suppression and the position and clutter tolerance of the neuronal representations evolve along the putative ventral visual stream in rats. To address this question, we recorded multiunit spiking activity in primary visual cortex (V1) and the more downstream visual laterointermediate (LI) area of head-restrained Long-Evans rats. We employed a paradigm reminiscent of the continuous carry-over design used in human neuroimaging. In both areas, stimulus repetition attenuated the early phase of the neuronal response to the repeated stimulus, with this response suppression being greater in area LI. Furthermore, stimulus preferences were more similar across positions (position tolerance) in area LI than in V1, even though the absolute responses in both areas were very sensitive to changes in position. In contrast, the neuronal representations in both areas were equally good at tolerating the presence of limited visual clutter, as modeled by the presentation of a single flank stimulus. When probing tolerance of the neuronal representations with stimulus-specific adaptation, we detected no position tolerance in either examined brain area, whereas, on the contrary, we revealed clutter tolerance in both areas. Overall, our data demonstrate similarities and discrepancies in the processing of visual information along the ventral visual stream of rodents and primates. Moreover, our results urge caution in using neuronal adaptation to probe tolerance of the neuronal representations.
NEW & NOTEWORTHY
Rodents are emerging as a popular animal model that complements primates for studying higher-level visual functions. Similar to findings in primates, we demonstrate greater repetition suppression and position tolerance of the neuronal representations in the downstream laterointermediate area of Long-Evans rats compared with primary visual cortex. However, we report no difference in the degree of clutter tolerance between the areas. These findings provide additional evidence for hierarchical processing of visual stimuli in rodents.


2020 ◽  
Author(s):  
Jianghong Shi ◽  
Michael A. Buice ◽  
Eric Shea-Brown ◽  
Stefan Mihalas ◽  
Bryan Tripp

Convolutional neural networks trained on object recognition derive some inspiration from the neuroscience of the visual system in primates, and have been used as models of the feedforward computation performed in the primate ventral stream. In contrast to the hierarchical organization of primates, the visual system of the mouse has a flatter hierarchy. Since mice are capable of visually guided behavior, this raises questions about the role of architecture in neural computation. In this work, we introduce a framework for building a biologically constrained convolutional neural network model of lateral areas of the mouse visual cortex. The structural parameters of the network are derived from experimental measurements, specifically estimates of the numbers of neurons in each area and cortical layer, the interareal connectome, and the statistics of connections between cortical layers. This network is constructed to support detailed task-optimized models of mouse visual cortex, with neural populations that can be compared to specific corresponding populations in the mouse brain. The code is freely available to support such research.
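As a sketch of one small piece of such a framework, per-area neuron-count estimates can be mapped onto channel counts for a model run at fixed spatial resolution. This is illustrative only (the released framework derives far more structure, e.g. layers and the interareal connectome), and the area names and counts in the usage example are hypothetical placeholders, not measured values:

```python
def channels_from_counts(neuron_counts, spatial_size):
    """Map per-area neuron-count estimates to convolutional channel
    counts, assuming each area is modeled on a fixed
    spatial_size x spatial_size grid.

    neuron_counts: dict of {area_name: estimated neuron count}.
    Returns {area_name: channel count}, with at least one channel.
    """
    return {area: max(1, round(n / spatial_size ** 2))
            for area, n in neuron_counts.items()}

# Hypothetical counts for illustration only:
example = channels_from_counts({"VISp": 80000, "VISl": 20000}, spatial_size=32)
```

The design choice here is simply that channels * height * width should approximate the neuron count, so structurally larger areas get proportionally wider model layers.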

