Population Codes
Recently Published Documents

TOTAL DOCUMENTS: 185 (36 in the last five years)
H-INDEX: 35 (2 in the last five years)

2022, pp. 1-24
Author(s): Kohei Ichikawa, Asaki Kataoka

Abstract: Animals make efficient probabilistic inferences based on uncertain and noisy information from the outside environment. Probabilistic population codes, which have been proposed as a neural basis for encoding probability distributions, are known to allow general neural networks (NNs) to perform near-optimal point estimation. However, the mechanism of sampling-based probabilistic inference has not been clarified. In this study, we trained two types of artificial NNs, a feedforward NN (FFNN) and a recurrent NN (RNN), to perform sampling-based probabilistic inference, and then analyzed and compared their sampling mechanisms. We found that, unlike the FFNN, the RNN performed sampling through a mechanism that efficiently exploits the properties of dynamical systems. In addition, we found that sampling in the RNN acted as an inductive bias, enabling more accurate estimation than maximum a posteriori (MAP) estimation. These results provide important arguments for discussing the relationship between dynamical systems and information processing in NNs.
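The gap between point estimation and sampling-based inference that this abstract highlights can be illustrated with a toy example. This is a minimal sketch, not the paper's trained networks; the skewed prior, Gaussian noise model, and all parameter values below are hypothetical. For an asymmetric posterior, an estimate averaged over posterior samples beats the MAP point estimate under squared-error loss.

import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-6.0, 6.0, 1201)

# Skewed prior over the latent stimulus s: a mixture of two Gaussians.
prior = 0.7 * np.exp(-0.5 * (grid / 0.5) ** 2) \
      + 0.3 * np.exp(-0.5 * (grid - 2.0) ** 2)
prior /= prior.sum()

noise_sd, n_trials = 1.0, 5000
mse_map = mse_sample = 0.0
for _ in range(n_trials):
    s = rng.choice(grid, p=prior)                 # latent stimulus
    x = s + rng.normal(0.0, noise_sd)             # noisy observation
    post = prior * np.exp(-0.5 * ((x - grid) / noise_sd) ** 2)
    post /= post.sum()
    map_est = grid[np.argmax(post)]               # point (MAP) estimate
    samples = rng.choice(grid, size=100, p=post)  # sampling-based inference
    mse_map += (map_est - s) ** 2
    mse_sample += (samples.mean() - s) ** 2       # estimate built from samples

print("MAP estimation MSE: ", mse_map / n_trials)
print("sampling-based MSE: ", mse_sample / n_trials)

Because the posterior mean minimizes expected squared error, the sample-based estimate wins whenever the posterior is asymmetric, which is one way sampling can act as a useful inductive bias over pure point estimation.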


2021
Author(s): Jeremy S. Biane, Max A. Ladow, Fabio Stefanini, Sayi P. Boddu, Austin Fan, et al.

Memories are multifaceted and layered, incorporating external stimuli and internal states at multiple levels of resolution. Although the hippocampus is essential for memory, it remains unclear whether distinct aspects of experience are encoded within different hippocampal subnetworks during learning. By tracking the same dCA1 or vCA1 neurons across cue-outcome learning, we find detailed, externally based (stimulus identity) representations in dCA1 and broad, internally based (stimulus relevance) signals in vCA1 that emerge with learning. These dorsoventral differences were observed regardless of cue modality or outcome valence, and representations within each region were largely stable for days after learning. These results identify how the hippocampus encodes associative memories and show that hippocampal ensembles not only link experiences but also imbue relationships with meaning and highlight behaviorally relevant information. Together, these complementary dynamics across hippocampal subnetworks allow for rich, diverse representation of experiences.


2021, Vol 12 (1)
Author(s): Qianli Yang, Edgar Walker, R. James Cotton, Andreas S. Tolias, Xaq Pitkow

Abstract: Sensory data about most natural task-relevant variables are entangled with task-irrelevant nuisance variables. The neurons that encode these relevant signals typically constitute a nonlinear population code. Here we present a theoretical framework for quantifying how the brain uses or decodes its nonlinear information. Our theory obeys fundamental mathematical limitations on information content inherited from the sensory periphery, describing redundant codes when there are many more cortical neurons than primary sensory neurons. The theory predicts that if the brain uses its nonlinear population codes optimally, then more informative patterns should be more correlated with choices. More specifically, the theory predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices that reveals the decoding efficiency. This relationship holds for optimal feedforward networks of modest complexity, when experiments are performed under natural nuisance variation. We analyze recordings from primary visual cortex of monkeys discriminating the distribution from which oriented stimuli were drawn, and find these data are consistent with the hypothesis of near-optimal nonlinear decoding.
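The central prediction, that more informative activity patterns should correlate more strongly with choices under optimal decoding, can be sketched in a few lines. This is a hypothetical simulation, not the paper's V1 analysis; the task loosely mirrors its variance-discrimination setup, where information about the stimulus distribution lives in quadratic features of the responses, and all parameters are invented.

import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 20, 20000

# Task: report whether stimuli came from a narrow or a wide distribution.
# Neurons respond linearly to the stimulus, so single-trial information
# about the stimulus variance lives in the quadratic features r_k**2.
labels = rng.integers(0, 2, n_trials)                 # 0 = narrow, 1 = wide
sd = np.where(labels == 0, 0.5, 1.0)
s = rng.normal(0.0, sd)                               # stimulus per trial
gains = rng.uniform(0.5, 1.5, n_neurons)              # heterogeneous tuning
r = gains[None, :] * s[:, None] + 0.3 * rng.normal(size=(n_trials, n_neurons))

q = r ** 2                                            # nonlinear (quadratic) code
# Near-optimal linear readout of the quadratic features drives the choice.
w = np.linalg.lstsq(q - q.mean(0), labels - labels.mean(), rcond=None)[0]
choice = (q @ w > np.median(q @ w)).astype(float)

def corr(a, b):
    a = a - a.mean(); b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Theory check: more informative features correlate more with choice.
dprime = np.array([(q[labels == 1, k].mean() - q[labels == 0, k].mean())
                   / q[:, k].std() for k in range(n_neurons)])
choice_corr = np.array([corr(q[:, k], choice) for k in range(n_neurons)])
print("corr(feature informativeness, choice correlation):",
      corr(dprime, choice_corr))

Under this near-optimal readout the per-feature choice correlations track per-feature sensitivity, which is the kind of activity-choice relationship the theory uses to read out decoding efficiency.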


2021
Author(s): Shanshan Qin, Shiva Farashahi, David Lipshutz, Anirvan M. Sengupta, Dmitri B. Chklovskii, et al.

Long-term memories and learned behavior are conventionally associated with stable neuronal representations. However, recent experiments have shown that neural population codes in many brain areas continuously change, even when animals have fully learned and stably perform their tasks. This representational "drift" naturally leads to questions about its causes, dynamics, and functions. Here, we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and that noisy synaptic updates drive the network to explore this (near-)optimal space, causing representational drift. We illustrate this idea in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning, which optimize similarity-matching objectives and, when neural outputs are constrained to be nonnegative, learn localized receptive fields (RFs) that tile the stimulus manifold. We find that the drifting RFs of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants that depend on various parameters such as learning rate, noise amplitude, and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates recent experimental observations in hippocampus and posterior parietal cortex, and makes testable predictions that can be probed in future experiments.
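The core mechanism, noisy updates diffusing through a degenerate space of equally good solutions, can be demonstrated with a toy objective that shares the rotational degeneracy of similarity matching. This is a deliberately simplified stand-in, not the paper's Hebbian/anti-Hebbian circuit: P below is the top-k principal projector of hypothetical data, and all parameters are invented.

import numpy as np

rng = np.random.default_rng(2)
d, k, n = 10, 3, 500
X = rng.normal(size=(d, n))                  # fixed stimulus set
C = X @ X.T / n
_, eigvecs = np.linalg.eigh(C)
P = eigvecs[:, -k:] @ eigvecs[:, -k:].T      # top-k principal projector

# L(W) = ||W.T W - P||^2 is minimized by W = R @ U_k.T for ANY k x k
# rotation R, so noise diffuses W along a solution manifold while the
# representational similarity Y.T Y stays put.
W = eigvecs[:, -k:].T.copy()                 # start at one optimum
W_ref = W.copy()
lr, noise_sd = 0.05, 0.02
for _ in range(20000):
    grad = 4 * W @ (W.T @ W - P)             # restoring force to the manifold
    W = W - lr * grad + noise_sd * rng.normal(size=W.shape)

Y, Y_ref = W @ X, W_ref @ X
print("weight drift:    ", np.linalg.norm(W - W_ref) / np.linalg.norm(W_ref))
print("similarity drift:", np.linalg.norm(Y.T @ Y - Y_ref.T @ Y_ref)
      / np.linalg.norm(Y_ref.T @ Y_ref))

Weights drift substantially because rotations within the optimal manifold are unconstrained, while the similarity matrix Y.T @ Y changes only by small noise-floor fluctuations off the manifold, mirroring the paper's stable-similarity-despite-drift picture.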


2021
Author(s): C. Daniel Greenidge, Benjamin Scholl, Jacob Yates, Jonathan W. Pillow

Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the Gaussian process multi-class decoder (GPMD), is well-suited to decoding a continuous low-dimensional variable from high-dimensional population activity, and provides a platform for assessing the importance of correlations in neural population codes. The GPMD is a multinomial logistic regression model with a Gaussian process prior over the decoding weights. The prior includes hyperparameters that govern the smoothness of each neuron's decoding weights, allowing automatic pruning of uninformative neurons during inference. We provide a variational inference method for fitting the GPMD to data, which scales to hundreds or thousands of neurons and performs well even in datasets with more neurons than trials. We apply the GPMD to recordings from primary visual cortex in three different species: monkey, ferret, and mouse. Our decoder achieves state-of-the-art accuracy on all three datasets, and substantially outperforms independent Bayesian decoding, showing that knowledge of the correlation structure is essential for optimal decoding in all three species.
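The model at the heart of the GPMD, multinomial logistic regression whose per-neuron weight profiles receive a smoothness prior across stimulus classes, is easy to sketch in simplified form. This is a MAP fit by plain gradient descent on synthetic data; the paper's actual method uses variational inference and per-neuron hyperparameters, both omitted here, and every value below is hypothetical.

import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_classes, n_trials = 30, 12, 600

# Synthetic data: discrete orientation classes decoded from noisy responses.
oris = np.linspace(0, np.pi, n_classes, endpoint=False)
pref = rng.uniform(0, np.pi, n_neurons)
labels = rng.integers(0, n_classes, n_trials)
tuning = np.exp(np.cos(2 * (oris[labels][:, None] - pref[None, :])))
R = tuning + rng.normal(0.0, 1.0, tuning.shape)      # trials x neurons

# Circular RBF kernel over classes: the smoothness prior on each neuron's
# weight profile across orientation.
d = oris[:, None] - oris[None, :]
K = np.exp(-2.0 * np.sin(d) ** 2 / 0.5 ** 2) + 1e-6 * np.eye(n_classes)
K_inv = np.linalg.inv(K)

W = np.zeros((n_neurons, n_classes))
Y = np.eye(n_classes)[labels]                        # one-hot labels
lr = 0.05
for _ in range(5000):
    logits = R @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    grad_ll = R.T @ (p - Y) / n_trials               # logistic-loss gradient
    grad_prior = 0.1 * W @ K_inv                     # GP smoothness penalty
    W -= lr * (grad_ll + grad_prior)

print("training accuracy:", (np.argmax(R @ W, 1) == labels).mean())

The W @ K_inv term penalizes rough weight profiles; in the full GPMD, learned per-neuron amplitude hyperparameters additionally allow uninformative neurons to be pruned automatically during inference.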


2021
Author(s): Jacob L. Yates, Benjamin Scholl

Abstract: The synaptic inputs to single cortical neurons exhibit substantial diversity in their sensory-driven activity. What this diversity reflects is unclear, and it appears counterproductive for generating selective somatic responses to specific stimuli. We propose that synaptic diversity arises because neurons decode information from upstream populations. Focusing on a single sensory variable, orientation, we construct a probabilistic decoder that estimates the stimulus orientation from the responses of a realistic, hypothetical input population of neurons. We provide a straightforward mapping from the decoder weights to real excitatory synapses, and find that optimal decoding requires diverse input weights. Analytically derived weights exhibit diversity whenever upstream input populations consist of noisy, correlated, and heterogeneous neurons, as is typically found in vivo. In fact, in silico weight diversity was necessary to accurately decode orientation and matched the functional heterogeneity of dendritic spines imaged in vivo. Our results indicate that synaptic diversity is a necessary component of information transmission, and they reframe studies of connectivity through the lens of probabilistic population codes. These results suggest that the mapping from synaptic inputs to somatic selectivity may not be directly interpretable without considering input covariance, and they highlight the importance of population codes in pursuit of the cortical connectome.
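One way to see why optimal decoding of a noisy, correlated, heterogeneous population forces diverse weights is the classic locally optimal linear readout, with weights proportional to the inverse noise covariance times the tuning-curve slope. That standard population-coding result is used here as a stand-in for the paper's decoder; the tuning curves, noise correlations, and all numbers below are hypothetical.

import numpy as np

rng = np.random.default_rng(4)
n = 50
theta0 = np.pi / 4                                   # decode near this orientation
pref = rng.uniform(0, np.pi, n)                      # heterogeneous preferences
amp = rng.uniform(0.5, 2.0, n)                       # heterogeneous amplitudes
kappa = rng.uniform(1.0, 3.0, n)                     # heterogeneous widths

def f(th):                                           # orientation tuning curves
    return amp * np.exp(kappa * (np.cos(2.0 * (th - pref)) - 1.0))

eps = 1e-4
fprime = (f(theta0 + eps) - f(theta0 - eps)) / (2 * eps)

# Limited-range noise correlations: stronger between similarly tuned neurons.
dpref = pref[:, None] - pref[None, :]
corr_mat = 0.3 * np.exp(-np.sin(dpref) ** 2 / 0.2) + 0.7 * np.eye(n)
sd = np.sqrt(f(theta0) + 0.1)                        # Poisson-like noise scale
Sigma = corr_mat * np.outer(sd, sd)

w_opt = np.linalg.solve(Sigma, fprime)               # optimal: Sigma^-1 f'
w_ind = fprime / sd ** 2                             # ignores correlations

def cv(w):
    return w.std() / np.abs(w).mean()

print("weight spread (CV): optimal %.2f, independent-noise %.2f"
      % (cv(w_opt), cv(w_ind)))
print("corr(optimal, independent-noise): %.2f"
      % np.corrcoef(w_opt, w_ind)[0, 1])

With correlated noise the optimal weights decorrelate from the tuning-curve slopes and take both signs, echoing the paper's point that synaptic weight diversity need not be interpretable neuron by neuron.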


2021, Vol 21 (8), pp. 15
Author(s): Connor J. Parde, Y. Ivette Colón, Matthew Q. Hill, Carlos D. Castillo, Prithviraj Dhar, et al.

2021
Author(s): David J. Maisson, Justin M. Fine, Seng Bum Michael Yoo, Tyler Daniel Cash-Padgett, Maya Zhe Wang, et al.

Our ability to effectively choose between dissimilar options implies that information about the options' values must be available, either explicitly or implicitly, in the brain. Explicit realizations of value involve single neurons whose responses depend on value and not on the specific features that determine it. Implicit realizations, by contrast, come from the coordinated action of neurons that encode specific features. One signature of implicit value coding is that population responses to offers with the same value but different features should occupy semi- or fully orthogonal neural subspaces that are nonetheless linked. Here, we examined responses of neurons in six core value-coding areas in a choice task with risky and safe options. Using stricter criteria than some past studies, we find, surprisingly, no evidence for abstract value neurons (i.e., neurons that respond identically to equally valued risky and safe options) in any of these regions. Moreover, population codes for value resided in orthogonal subspaces, and these subspaces were linked through a linear transform of their constituent responses. These results suggest that in all six regions, value is embedded implicitly in the activity of distributed neural populations.
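The signature described here, value axes that are orthogonal across conditions yet linked by a linear transform, can be made concrete with a hypothetical generative model. This is a sketch, not the recorded data; the offer values, coding axes, and noise levels are all invented.

import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_trials = 60, 400
values = rng.uniform(0.0, 1.0, n_trials)             # offer values

# Hypothetical model: value is encoded along different coding axes for safe
# vs risky offers; the two axes are orthogonal but related by a fixed map.
axes = np.linalg.qr(rng.normal(size=(n_neurons, 2)))[0]
R_safe = np.outer(values, axes[:, 0]) \
       + 0.02 * rng.normal(size=(n_trials, n_neurons))
R_risky = np.outer(values, axes[:, 1]) \
        + 0.02 * rng.normal(size=(n_trials, n_neurons))

def value_axis(R):                                   # regress responses on value
    coef = np.linalg.lstsq(np.c_[values, np.ones(n_trials)], R,
                           rcond=None)[0][0]
    return coef / np.linalg.norm(coef)

a_safe, a_risky = value_axis(R_safe), value_axis(R_risky)
print("value-axis alignment |cos|:", abs(a_safe @ a_risky))   # near zero

# The two codes are nonetheless linked by a linear transform.
M = np.linalg.lstsq(R_safe, R_risky, rcond=None)[0]
resid = R_risky - R_safe @ M
print("cross-condition R^2:", 1 - (resid ** 2).sum()
      / ((R_risky - R_risky.mean(0)) ** 2).sum())

The per-condition value axes come out nearly orthogonal, yet a single linear map predicts risky-offer responses from safe-offer responses, which is the pattern the abstract takes as evidence of implicit, distributed value coding.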


2021
Author(s): Samuel W. Failor, Matteo Carandini, Kenneth D. Harris

The response of a neuronal population to a stimulus can be summarized by a vector in a high-dimensional space. Learning theory suggests that the brain should be best able to produce distinct behavioral responses to two stimuli when the rate vectors they evoke are close to orthogonal. To investigate how learning modifies population codes, we measured the orientation tuning of 4,000-neuron populations in visual cortex before and after training on a visual discrimination task. Learning suppressed responses to the task-informative stimuli, most strongly among weakly tuned neurons. This suppression reflected a simple change at the population level: a sparsening of population responses to relevant stimuli, resulting in orthogonalization of their rate vectors. A model of F-I curve modulation, requiring no synaptic plasticity, quantitatively predicted the learning effect.
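The proposed mechanism, an F-I curve change that sparsens responses and thereby orthogonalizes rate vectors without any synaptic plasticity, is simple to illustrate. This is a toy threshold-linear population; the tuning model, stimulus pair, and threshold value are all hypothetical.

import numpy as np

rng = np.random.default_rng(6)
n = 4000
pref = rng.uniform(0, np.pi, n)                      # preferred orientations
stim_a, stim_b = np.pi / 3, np.pi / 3 + np.pi / 8    # two task stimuli

def rates(theta, threshold=0.0):
    drive = np.exp(1.5 * (np.cos(2.0 * (theta - pref)) - 1.0))
    return np.maximum(drive - threshold, 0.0)        # threshold-linear F-I curve

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

before = cos_sim(rates(stim_a), rates(stim_b))
after = cos_sim(rates(stim_a, 0.3), rates(stim_b, 0.3))
frac_active = (rates(stim_a, 0.3) > 0).mean()
print("rate-vector overlap, no threshold shift: %.3f" % before)
print("rate-vector overlap, raised threshold:   %.3f" % after)
print("fraction of neurons active after shift:  %.3f" % frac_active)

Raising the threshold silences the weakly driven neurons that the two stimuli share, so the surviving responses are sparser and their rate vectors closer to orthogonal, with no change to any synaptic weight.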

