Inference in the Brain: Statistics Flowing in Redundant Population Codes

Neuron ◽  
2017 ◽  
Vol 94 (5) ◽  
pp. 943-953 ◽  
Author(s):  
Xaq Pitkow ◽  
Dora E. Angelaki

2013 ◽  
Vol 25 (6) ◽  
pp. 1371-1407 ◽  
Author(s):  
Stefan Habenschuss ◽  
Helmut Puhr ◽  
Wolfgang Maass

The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if the weights of synaptic connections were set to certain values that depend on the tuning functions of the sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through spike-timing-dependent plasticity (STDP) in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models of the hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by the theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of the sensory neurons, and the statistics of external stimuli.
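The linear-readout claim above can be made concrete with a toy example (the tuning curves, population size, and rates below are illustrative choices, not the paper's model): for independent Poisson neurons with tuning curves f_i(s), the log-likelihood of stimulus s given spike counts k_i is sum_i [k_i log f_i(s) - f_i(s)] + const, so one linear readout neuron per candidate stimulus, with weights log f_i(s), implements maximum-likelihood decoding.

```python
import numpy as np

rng = np.random.default_rng(3)
prefs = np.linspace(-np.pi, np.pi, 32, endpoint=False)  # preferred directions

def tuning(s):
    """Toy bell-shaped tuning curves over a ring of 32 sensory neurons."""
    return 1.0 + 30.0 * np.exp(np.cos(s - prefs) - 1.0)

s_grid = np.linspace(-np.pi, np.pi, 8)  # candidate stimulus values
s_true = s_grid[5]

correct = 0
for _ in range(50):
    counts = rng.poisson(tuning(s_true))
    # One linear readout per candidate s: weights log f_i(s), bias -sum_i f_i(s).
    log_like = np.array([counts @ np.log(tuning(s)) - tuning(s).sum()
                         for s in s_grid])
    correct += int(log_like.argmax() == 5)
```

With this high-SNR toy population, the linear readouts recover the true stimulus on essentially every trial.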


2020 ◽  
Vol 43 (1) ◽  
pp. 277-295
Author(s):  
David H. Brann ◽  
Sandeep Robert Datta

Olfaction is fundamentally distinct from other sensory modalities. Natural odor stimuli are complex mixtures of volatile chemicals that interact in the nose with a receptor array that, in rodents, is built from more than 1,000 unique receptors. These interactions dictate a peripheral olfactory code, which in the brain is transformed and reformatted as it is broadcast across a set of highly interconnected olfactory regions. Here we discuss the problems of characterizing peripheral population codes for olfactory stimuli, of inferring the specific functions of different higher olfactory areas given their extensive recurrence, and of ultimately understanding how odor representations are linked to perception and action. We argue that, despite the differences between olfaction and other sensory modalities, addressing these specific questions will reveal general principles underlying brain function.


2021 ◽  
Author(s):  
Samuel W Failor ◽  
Matteo Carandini ◽  
Kenneth D Harris

The response of a neuronal population to a stimulus can be summarized by a vector in a high-dimensional space. Learning theory suggests that the brain should be most able to produce distinct behavioral responses to two stimuli when the rate vectors they evoke are close to orthogonal. To investigate how learning modifies population codes, we measured the orientation tuning of 4,000-neuron populations in visual cortex before and after training on a visual discrimination task. Learning suppressed responses to the task-informative stimuli, most strongly amongst weakly-tuned neurons. This suppression reflected a simple change at the population level: sparsening of population responses to relevant stimuli, resulting in orthogonalization of their rate vectors. A model of F-I curve modulation, requiring no synaptic plasticity, quantitatively predicted the learning effect.
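The population-level effect described above, sparsening orthogonalizing rate vectors, can be sketched with synthetic data (the population size, rate distributions, and thresholding rule are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Two toy rate vectors: a broad, weakly tuned component shared by both
# stimuli, plus strong responses in small stimulus-specific subsets.
shared = rng.gamma(2.0, 1.0, n)
r1, r2 = shared.copy(), shared.copy()
r1[rng.choice(n, 100, replace=False)] += rng.gamma(2.0, 3.0, 100)
r2[rng.choice(n, 100, replace=False)] += rng.gamma(2.0, 3.0, 100)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def sparsen(r, q=90):
    """Suppress weakly driven units, keeping only the top decile of rates."""
    out = r.copy()
    out[out < np.percentile(out, q)] = 0.0
    return out

before = cosine(r1, r2)
after = cosine(sparsen(r1), sparsen(r2))
# Sparsening suppresses the shared, weakly tuned response, so the two
# rate vectors become much closer to orthogonal (after << before).
```

The design choice mirrors the abstract's observation: suppression concentrated on weakly tuned units removes the overlap between the two population responses without synaptic plasticity.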


2015 ◽  
Author(s):  
Ian Charest ◽  
Nikolaus Kriegeskorte

In the early days of neuroimaging, brain function was investigated by averaging across voxels within a region, stimuli within a category, and individuals within a group. These three forms of averaging discard important neuroscientific information. Recent studies have explored analyses that combine the evidence in better-motivated ways. Multivariate pattern analyses enable researchers to reveal representations in distributed population codes, honouring the unique information contributed by different voxels (or neurons). Condition-rich designs more richly sample the stimulus space and can treat each stimulus as a unique entity. Finally, each individual’s brain is unique and recent studies have found ways to model and analyse the interindividual representational variability. Here we review our field’s journey towards more sophisticated analyses that honour these important idiosyncrasies of brain representations. We describe an emerging framework for investigating individually unique pattern representations of particular stimuli in the brain. The framework models stimuli, responses and individuals multivariately and relates representations by means of representational dissimilarity matrices. Important components are computational models and multivariate descriptions of brain and behavioural responses. These recent developments promise a new paradigm for studying the individually unique brain at unprecedented levels of representational detail.
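A minimal sketch of the representational dissimilarity matrix (RDM) at the heart of this framework, using 1 − Pearson correlation between response patterns as the dissimilarity measure (a common choice; the data here are random toy patterns):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli.
    `patterns` has shape (n_stimuli, n_channels), where channels are
    voxels or neurons."""
    return 1.0 - np.corrcoef(patterns)

rng = np.random.default_rng(1)
patterns = rng.standard_normal((4, 200))  # 4 stimuli x 200 channels (toy)
d = rdm(patterns)
```

Because the RDM abstracts away from the particular measurement channels, RDMs computed from different individuals, brain regions, or computational models can be compared directly, which is what makes the framework suitable for relating individually unique representations.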


2020 ◽  
Author(s):  
Matthew R. Nassar ◽  
Apoorva Bhandari

Distributed population codes are ubiquitous in the brain and pose a challenge to downstream neurons that must learn an appropriate readout. Here we explore the possibility that this learning problem is simplified through inductive biases implemented by stimulus-independent noise correlations that constrain learning to task-relevant dimensions. We test this idea in a set of neural networks that learn to perform a perceptual discrimination task. Correlations among similarly tuned units were manipulated independently of overall population signal-to-noise ratio in order to test how the format of stored information affects learning. Higher noise correlations among similarly tuned units led to faster and more robust learning, favoring homogeneous weights assigned to neurons within a functionally similar pool, and could emerge through Hebbian learning. When multiple discriminations were learned simultaneously, noise correlations across relevant feature dimensions sped learning, whereas those across irrelevant feature dimensions slowed it. Our results complement existing theory on noise correlations by demonstrating that when such correlations are produced without degradation of signal-to-noise ratio, they can improve readout learning by constraining it to the appropriate dimensions.


2017 ◽  
Vol 29 (3) ◽  
pp. 716-734 ◽  
Author(s):  
Yongseok Yoo ◽  
Woori Kim

Neural systems are inherently noisy. One well-studied example of a noise reduction mechanism in the brain is the population code, where representing a variable with multiple neurons allows the encoded variable to be recovered with fewer errors. Studies have assumed ideal observer models for decoding population codes, and the manner in which information in the neural population can be retrieved remains elusive. This letter addresses a mechanism by which realistic neural circuits can recover encoded variables. Specifically, the decoding problem of recovering a spatial location from populations of grid cells is studied using belief propagation. We extend the belief propagation decoding algorithm in two aspects. First, beliefs are approximated rather than being calculated exactly. Second, decoding noises are introduced into the decoding circuits. Numerical simulations demonstrate that beliefs can be effectively approximated by combining polynomial nonlinearities with divisive normalization. This approximate belief propagation algorithm is tolerant to decoding noises. Thus, this letter presents a realistic model for decoding neural population codes and investigates fault-tolerant information retrieval mechanisms in the brain.
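A rough sketch of the core approximation: exact belief normalization over candidate states is a softmax of log-likelihoods, which can be approximated by a low-order polynomial nonlinearity followed by divisive normalization. The quadratic form and the constant `sigma` below are illustrative stand-ins, not the letter's exact circuit:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def approx_belief(log_like, sigma=1e-3):
    """Approximate exact belief normalization with a quadratic polynomial
    (a stand-in for exp) followed by divisive normalization."""
    x = log_like - log_like.max()
    p = np.maximum(1.0 + x / 2.0, 0.0) ** 2  # polynomial approximation of exp(x)
    return p / (sigma + p.sum())             # divisive normalization

ll = np.array([0.0, -0.2, -0.5, -3.0])  # toy log-likelihoods over 4 states
b = approx_belief(ll)
exact = softmax(ll)
```

The small constant `sigma` in the denominator keeps the circuit well behaved when all inputs are weak, which is one way such a decoder tolerates decoding noise.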


2021 ◽  
Author(s):  
M. E. Rule ◽  
T. O’Leary

Neural representations change, even in the absence of overt learning. To preserve stable behavior and memories, the brain must track these changes. Here, we explore homeostatic mechanisms that could allow neural populations to track drift in continuous representations without external error feedback. We build on existing models of Hebbian homeostasis, which have been shown to stabilize representations against synaptic turnover and allow discrete neuronal assemblies to track representational drift. We show that a downstream readout can use its own activity to detect and correct drift, and that such a self-healing code could be implemented by plausible synaptic rules. Population response normalization and recurrent dynamics could stabilize codes further. Our model reproduces aspects of drift observed in experiments, and posits neurally plausible mechanisms for long-term stable readouts from drifting population codes.


2021 ◽  
Author(s):  
David J Maisson ◽  
Justin M Fine ◽  
Seng Bum Michael Yoo ◽  
Tyler Daniel Cash-Padgett ◽  
Maya Zhe Wang ◽  
...  

Our ability to effectively choose between dissimilar options implies that information regarding the options' values must be available, either explicitly or implicitly, in the brain. Explicit realizations of value involve single neurons whose responses depend on value and not on the specific features that determine it. Implicit realizations, by contrast, come from the coordinated action of neurons that encode specific features. One signature of implicit value coding is that population responses to offers with the same value but different features should occupy semi- or fully orthogonal neural subspaces that are nonetheless linked. Here, we examined responses of neurons in six core value-coding areas in a choice task with risky and safe options. Using stricter criteria than some past studies have used, we find, surprisingly, no evidence for abstract value neurons (i.e., neurons with the same response to equally valued risky and safe options) in any of these regions. Moreover, population codes for value resided in orthogonal subspaces; these subspaces were nonetheless linked by a linear transform. These results suggest that in all six regions, populations of neurons embed value implicitly in a distributed code.
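The geometric claim, orthogonal value subspaces linked by a linear transform, can be illustrated with a toy population (the coding axes and offer values below are hypothetical, chosen only to make the geometry explicit):

```python
import numpy as np

values = np.linspace(0.0, 1.0, 20)        # toy offer values
# Hypothetical coding axes for risky and safe offers, chosen orthogonal:
ax_risky = np.array([1.0, 0.0, 0.0, 0.0])
ax_safe = np.array([0.0, 1.0, 0.0, 0.0])
R = np.outer(values, ax_risky)            # population responses, risky offers
S = np.outer(values, ax_safe)             # responses to safe offers of equal value
# The two codes occupy orthogonal subspaces, yet a single fixed linear
# transform maps the risky value code exactly onto the safe one:
T, *_ = np.linalg.lstsq(R, S, rcond=None)
```

Because `R @ T` reproduces `S`, a downstream circuit could read out value from either offer type through one learned mapping, even though no single neuron responds identically to equally valued risky and safe offers.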


2019 ◽  
Vol 58 ◽  
pp. 30-36 ◽  
Author(s):  
J Andrew Pruszynski ◽  
Joel Zylberberg

2018 ◽  
Author(s):  
Qianli Yang ◽  
Edgar Walker ◽  
R. James Cotton ◽  
Andreas S. Tolias ◽  
Xaq Pitkow

Sensory data about most natural task-relevant variables are entangled with task-irrelevant nuisance variables. The neurons that encode these relevant signals typically constitute a nonlinear population code. Here we present a theoretical framework for quantifying how the brain uses or decodes its nonlinear information. Our theory obeys fundamental mathematical limitations on information content inherited from the sensory periphery, identifying redundant codes when there are many more cortical neurons than primary sensory neurons. The theory predicts that if the brain uses its nonlinear population codes optimally, then more informative patterns should be more correlated with choices. More specifically, the theory predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices that reveals the decoding efficiency. We analyze recordings from primary visual cortex of monkeys discriminating the distribution from which oriented stimuli were drawn, and find these data are consistent with the hypothesis of near-optimal nonlinear decoding.

