Population Codes Enable Learning from Few Examples by Shaping Inductive Bias

2021 ◽  
Author(s):  
Blake Bordelon ◽  
Cengiz Pehlevan

The brain can learn from a limited number of experiences, an ability that requires suitable built-in assumptions about the nature of the tasks to be learned, or inductive biases. While inductive biases are central components of intelligence, how they are reflected in and shaped by population codes is not well understood. To address this question, we consider a biologically plausible readout of an arbitrary stimulus-response pattern from an arbitrary population code, and develop an analytical theory that predicts the generalization error of the readout as a function of the number of samples. We find that learning performance is controlled by the eigenspectrum of the population code's inner-product kernel, which measures the similarity of neural responses to two different input stimuli. Many different codes can realize the same kernel; by analyzing recordings from the mouse primary visual cortex, we demonstrate that biological codes are metabolically more efficient than other codes with identical kernels. We show that the spectral properties of the kernel introduce an inductive bias toward explaining stimulus-response samples with simple functions and determine the compatibility of the population code with the learning task, and hence the sample efficiency of learning. While the tail of the spectrum is important for the large-sample behavior of learning, for small sample sizes the bulk of the spectrum governs generalization. We apply our theory to experimental recordings of mouse primary visual cortex neural responses, elucidating a bias toward sample-efficient learning of low-frequency orientation discrimination tasks. We demonstrate the emergence of this bias in a simple model of primary visual cortex, and further show how invariances in the code to stimulus variations affect learning performance. Finally, we demonstrate that our methods are applicable to time-dependent neural codes. Overall, our study suggests sample-efficient learning as a general normative coding principle.
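
A minimal numerical sketch of the central object in this theory, assuming a hypothetical trial-averaged response matrix R (stimuli x neurons): the inner-product kernel of the code, its eigenspectrum, and the projection of a task onto the kernel's eigenmodes. The array names, sizes, and synthetic data are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

# Hypothetical response matrix: rows are stimuli, columns are neurons.
# R[s, n] is the (trial-averaged) response of neuron n to stimulus s.
rng = np.random.default_rng(0)
n_stimuli, n_neurons = 200, 500
R = rng.poisson(lam=2.0, size=(n_stimuli, n_neurons)).astype(float)

# Inner-product kernel of the population code: K[s, s'] measures the
# similarity of the population responses to stimuli s and s'.
K = R @ R.T / n_neurons

# Eigendecomposition of the kernel; the theory ties the decay of
# generalization error with sample size to this spectrum and to how the
# target function projects onto the eigenvectors.
eigvals, eigvecs = np.linalg.eigh(K)
eigvals = eigvals[::-1]          # sort descending
eigvecs = eigvecs[:, ::-1]

# Decompose a target function y(s) (e.g. an orientation-discrimination
# label per stimulus) into kernel eigenmodes; power concentrated in the
# top modes indicates a task the code can learn sample-efficiently.
y = rng.standard_normal(n_stimuli)
mode_power = (eigvecs.T @ y) ** 2
cumulative_power = np.cumsum(mode_power) / mode_power.sum()
print(cumulative_power[:10])
```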

2003 ◽  
Vol 20 (1) ◽  
pp. 77-84 ◽  
Author(s):  
AN CAO ◽  
PETER H. SCHILLER

Relative motion information, especially relative speed between different input patterns, is required for solving many complex tasks of the visual system, such as depth perception by motion parallax and motion-induced figure/ground segmentation. However, little is known about the neural substrate for processing relative speed information. To explore the neural mechanisms for relative speed, we recorded single-unit responses to relative motion in the primary visual cortex (area V1) of rhesus monkeys while presenting sets of random-dot arrays moving at different speeds. We found that most V1 neurons were sensitive to the existence of a discontinuity in speed; that is, they showed higher responses when relative motion was presented compared to homogeneous field motion. Seventy percent of the neurons in our sample responded predominantly to relative rather than to absolute speed. Relative speed tuning curves were similar at different center–surround velocity combinations. These relative motion-sensitive neurons in macaque area V1 probably contribute to figure/ground segmentation and motion discontinuity detection.
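
As a rough illustration of the comparison described above, the hypothetical snippet below computes a simple relative-motion index contrasting a neuron's mean response to a speed discontinuity with its response to homogeneous field motion. The index definition and the firing rates are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

# Hypothetical trial-averaged firing rates (spikes/s) for one V1 neuron.
# resp_relative: center and surround dots move at different speeds.
# resp_uniform:  center and surround move at the same speed.
resp_relative = np.array([22.0, 30.5, 41.2, 38.7])  # several speed pairs
resp_uniform = np.array([15.1, 18.3, 20.0, 19.4])

# A simple relative-motion index: positive values mean stronger responses
# when a speed discontinuity is present than during homogeneous motion.
rmi = (resp_relative.mean() - resp_uniform.mean()) / (
    resp_relative.mean() + resp_uniform.mean()
)
print(f"relative-motion index: {rmi:.2f}")
```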


2017 ◽  
Author(s):  
Amelia J. Christensen ◽  
Jonathan W. Pillow

Running profoundly alters stimulus-response properties in mouse primary visual cortex (V1), but its effects in higher-order visual cortex remain unknown. Here we systematically investigated how locomotion modulates visual responses across six visual areas and three cortical layers using a massive dataset from the Allen Brain Institute. Although running has been shown to increase firing in V1, we found that it suppressed firing in higher-order visual areas. Despite this reduction in gain, visual responses during running could be decoded more accurately than visual responses during stationary periods. We show that this effect was not attributable to changes in noise correlations, and propose that it instead arises from increased reliability of single neuron responses during running.
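
The decoding comparison can be illustrated with a hedged sketch: a cross-validated linear decoder of stimulus identity applied separately to running and stationary trials. The synthetic data, noise levels, and use of scikit-learn's LogisticRegression are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def decode_accuracy(X, y):
    """Cross-validated accuracy of a linear decoder of stimulus identity."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()

# Hypothetical single-trial population responses (trials x neurons) and
# stimulus labels, split by behavioral state. The running condition is
# given slightly lower trial-to-trial noise to mimic more reliable
# single-neuron responses.
n_trials, n_neurons = 400, 80
stimulus = rng.integers(0, 8, size=n_trials)          # 8 stimulus classes
X_stationary = rng.standard_normal((n_trials, n_neurons)) + stimulus[:, None] * 0.05
X_running = 0.8 * rng.standard_normal((n_trials, n_neurons)) + stimulus[:, None] * 0.05

print("stationary:", decode_accuracy(X_stationary, stimulus))
print("running:   ", decode_accuracy(X_running, stimulus))
```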


2017 ◽  
Author(s):  
Santiago A. Cadena ◽  
George H. Denfield ◽  
Edgar Y. Walker ◽  
Leon A. Gatys ◽  
Andreas S. Tolias ◽  
...  

Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have been successfully applied to neural data: on the one hand, transfer learning from networks trained on object recognition worked remarkably well for predicting neural responses in higher areas of the primate ventral stream, but has not yet been used to model spiking activity in early stages such as V1. On the other hand, data-driven models have been used to predict neural responses in the early visual system (retina and V1) of mice, but not primates. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. Even though V1 is at an early to intermediate stage of the visual system, we found that the transfer learning approach performed similarly well to the data-driven approach, and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter-bank theories. This finding strengthens the case for V1 models that are multiple nonlinearities away from the image domain and supports the idea of explaining early visual cortex based on high-level functional goals.

Author summary: Predicting the responses of sensory neurons to arbitrary natural stimuli is of major importance for understanding their function. Arguably the most studied cortical area is primary visual cortex (V1), where many models have been developed to explain its function. However, the most successful models built on neurophysiologists' intuitions still fail to account for spiking responses to natural images. Here, we model spiking activity in primary visual cortex (V1) of monkeys using deep convolutional neural networks (CNNs), which have been successful in computer vision. We both trained CNNs directly to fit the data and used CNNs trained to solve a high-level task (object categorization). With these approaches, we are able to outperform previous models and improve the state of the art in predicting the responses of early visual neurons to natural images. Our results have two important implications. First, since V1 is the result of several nonlinear stages, it should be modeled as such. Second, functional models of entire visual pathways, of which V1 is an early stage, not only account for higher areas of such pathways but also provide useful representations for V1 predictions.
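
A hedged sketch of the transfer-learning idea, not the authors' exact pipeline: fixed features from a CNN pre-trained on object recognition (here VGG-16 from torchvision) feed a linear ridge readout that predicts each neuron's response. The layer choice, image sizes, and regression settings are illustrative assumptions.

```python
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge

# Load a CNN pre-trained on object recognition; its intermediate feature
# maps serve as a fixed, nonlinear representation of the image.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def cnn_features(images, layer_index=10):
    """Return flattened activations from an intermediate convolutional layer."""
    x = images
    with torch.no_grad():
        for i, module in enumerate(vgg):
            x = module(x)
            if i == layer_index:
                break
    return x.flatten(start_dim=1).numpy()

# Hypothetical data: natural-image patches and measured spike counts for a
# handful of V1 neurons (random placeholders here).
images = torch.rand(100, 3, 64, 64)
spike_counts = torch.rand(100, 5).numpy()

# Linear (ridge) readout from fixed CNN features to each neuron's response.
readout = Ridge(alpha=1.0).fit(cnn_features(images), spike_counts)
```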


2008 ◽  
Vol 28 (39) ◽  
pp. 9890-9894 ◽  
Author(s):  
M. A. Williams ◽  
T. A. W. Visser ◽  
R. Cunnington ◽  
J. B. Mattingley

2020 ◽  
Author(s):  
Stewart Heitmann ◽  
G. Bard Ermentrout

The majority of neurons in primary visual cortex respond selectively to bars of light that have a specific orientation and move in a specific direction. The spatial and temporal responses of such neurons are non-separable. How neurons accomplish that computational feat without resorting to explicit time delays is unknown. We propose a novel neural mechanism whereby visual cortex computes non-separable responses by generating endogenous traveling waves of neural activity that resonate with the space-time signature of the visual stimulus. The spatiotemporal characteristics of the response are defined by the local topology of excitatory and inhibitory lateral connections in the cortex. We simulated the interaction between endogenous traveling waves and the visual stimulus using spatially distributed populations of excitatory and inhibitory neurons with Wilson-Cowan dynamics and inhibitory-surround coupling. Our model reliably detected visual gratings that moved with a given speed and direction, provided that we incorporated neural competition to suppress false motion signals in the opposite direction. The findings suggest that endogenous traveling waves in visual cortex can impart direction selectivity on neural responses without explicit time delays. They also suggest a functional role for motion opponency in eliminating false motion signals.

Author summary: It is well established that the so-called 'simple cells' of the primary visual cortex respond preferentially to oriented bars of light that move across the visual field with a particular speed and direction. The spatiotemporal responses of such neurons are said to be non-separable because they cannot be constructed from independent spatial and temporal neural mechanisms. Contemporary theories of how neurons compute non-separable responses typically rely on finely tuned transmission delays between signals from disparate regions of the visual field. However, the existence of such delays is controversial. We propose an alternative neural mechanism for computing non-separable responses that does not require transmission delays. It instead relies on the predisposition of the cortical tissue to spontaneously generate spatiotemporal waves of neural activity that travel with a particular speed and direction. We propose that the endogenous wave activity resonates with the visual stimulus to elicit direction-selective neural responses to visual motion. We demonstrate the principle in computer models and show that competition between opposing neurons robustly enhances their ability to discriminate between visual gratings that move in opposite directions.
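
The following is a minimal sketch, under illustrative parameter choices, of the kind of model described above: a one-dimensional ring of Wilson-Cowan excitatory/inhibitory populations with narrow excitation and a broader inhibitory surround, driven by a drifting-grating input. It is not the paper's implementation; all constants are assumptions.

```python
import numpy as np

n = 128                                   # number of spatial positions on a ring
x = np.arange(n)
dist = np.minimum(np.abs(x[:, None] - x[None, :]),
                  n - np.abs(x[:, None] - x[None, :]))

def gaussian_kernel(sigma):
    """Row-normalized circular Gaussian coupling profile."""
    k = np.exp(-dist**2 / (2 * sigma**2))
    return k / k.sum(axis=1, keepdims=True)

W_e = gaussian_kernel(2.0)                # narrow excitatory spread
W_i = gaussian_kernel(6.0)                # broad inhibitory surround

def f(u):                                 # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-u))

E = 0.1 * np.random.rand(n)               # excitatory activity
I = 0.1 * np.random.rand(n)               # inhibitory activity
dt, tau_e, tau_i = 0.1, 1.0, 2.0

for t in range(2000):
    # Drifting-grating drive; in the proposed mechanism, resonance occurs
    # when the stimulus speed matches the endogenous wave speed.
    drive = 0.5 * np.cos(2 * np.pi * (x / 32.0 - 0.01 * t))
    dE = (-E + f(12 * W_e @ E - 10 * W_i @ I + drive)) / tau_e
    dI = (-I + f(10 * W_e @ E - 2 * W_i @ I)) / tau_i
    E, I = E + dt * dE, I + dt * dI
```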


2014 ◽  
Vol 17 (10) ◽  
pp. 1380-1387 ◽  
Author(s):  
Yin Yan ◽  
Malte J Rasch ◽  
Minggui Chen ◽  
Xiaoping Xiang ◽  
Min Huang ◽  
...  

2011 ◽  
Vol 122 ◽  
pp. S157
Author(s):  
T. Bocci ◽  
M. Caleo ◽  
E. Giorli ◽  
D. Barloscio ◽  
S. Tognazzi ◽  
...  

2018 ◽  
Author(s):  
Zvi N. Roth ◽  
David J. Heeger ◽  
Elisha P. Merriam

Neural selectivity to orientation is one of the simplest and most thoroughly studied cortical sensory features. Here, we show that a large body of research that purported to measure orientation tuning may in fact have been inadvertently measuring sensitivity to second-order changes in luminance, a phenomenon we term 'vignetting'. Using a computational model of neural responses in primary visual cortex (V1), we demonstrate the impact of vignetting on simulated V1 responses. We then used the model to generate a set of predictions, which we confirmed with functional MRI experiments in human observers. Our results demonstrate that stimulus vignetting can wholly determine the orientation selectivity of responses in visual cortex measured at a macroscopic scale, and suggest a reinterpretation of a well-established literature on orientation processing in visual cortex.
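
A rough sketch of the vignetting idea under assumed stimulus and filter parameters: a grating restricted to an annular aperture is passed through an oriented Gabor energy model, and the aperture edge contributes orientation energy that a coarse-scale measurement would pick up. None of the specific sizes or spatial frequencies come from the paper.

```python
import numpy as np

size = 128
yy, xx = np.mgrid[0:size, 0:size] - size / 2

def gabor(theta, sf=0.15, sigma=6.0):
    """Oriented Gabor filter at angle theta (radians)."""
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * sf * xr)

# Vertical grating restricted to an annular aperture (the vignette).
grating = np.cos(2 * np.pi * 0.15 * xx)
aperture = ((xx**2 + yy**2 > 20**2) & (xx**2 + yy**2 < 50**2)).astype(float)
stimulus = grating * aperture

def orientation_energy(image, theta):
    """Total squared response at one orientation (circular convolution via FFT)."""
    resp = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(gabor(theta))).real
    return np.sum(resp**2)

# Energy at orientations away from the grating's reflects the aperture edge.
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    print(f"{np.degrees(theta):5.1f} deg: {orientation_energy(stimulus, theta):.1f}")
```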


2020 ◽  
Author(s):  
Tyler D. Marks ◽  
Michael J. Goard

To produce consistent sensory perception, neurons must maintain stable representations of sensory input. However, neurons in many regions exhibit progressive drift across days. Longitudinal studies have found stable responses to artificial stimuli across sessions in primary sensory areas, but it is unclear whether this stability extends to naturalistic stimuli. We performed chronic two-photon imaging of mouse V1 populations to directly compare the representational stability of artificial versus naturalistic visual stimuli over weeks. Responses to gratings were highly stable across sessions, whereas neural responses to naturalistic movies exhibited progressive representational drift across sessions. This differential drift was present across cortical layers and in inhibitory interneurons, and could not be explained by differences in response magnitude or higher-order stimulus statistics. However, representational drift was accompanied by similar differential changes in local population correlation structure. These results suggest that representational stability in V1 is stimulus-dependent and related to differences in the preexisting circuit architecture of co-tuned neurons.
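
As an illustration of one way to quantify drift (not necessarily the authors' metric), the sketch below correlates each tracked neuron's response vector across two sessions and averages the result; the synthetic session data and the amount of simulated drift are assumptions.

```python
import numpy as np

def representational_drift(session_a, session_b):
    """Mean correlation between matched single-neuron response vectors
    recorded in two sessions (arrays of shape stimuli/frames x neurons)."""
    corrs = [
        np.corrcoef(session_a[:, n], session_b[:, n])[0, 1]
        for n in range(session_a.shape[1])
    ]
    return np.nanmean(corrs)

# Hypothetical trial-averaged responses of the same tracked neurons to the
# same ordered stimuli (grating conditions or movie frames) in two sessions
# a week apart. High values indicate stable representations; a decline with
# inter-session interval is the signature of representational drift.
rng = np.random.default_rng(2)
week1 = rng.standard_normal((90, 200))
week2 = 0.7 * week1 + 0.3 * rng.standard_normal((90, 200))  # partially drifted
print(f"across-session response correlation: {representational_drift(week1, week2):.2f}")
```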

