Learning probabilistic representations with randomly connected neural circuits

2020
Vol 117 (40)
pp. 25066-25073
Author(s):
Ori Maoz
Gašper Tkačik
Mohamad Saleh Esteki
Roozbeh Kiani
Elad Schneidman

The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.
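
As a rough illustration of the scheme described above, the sketch below scores binary spike patterns with a weighted sum of threshold units, each wired to a small, fixed, random subset of the population, and learns only the readout weights by stochastic gradient ascent with a single persistent sampling chain. The population size, number of projection units, in-degree, threshold choice, and sampler are illustrative assumptions, not the parameters or fitting procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # neurons in the recorded population (assumed size)
M = 300        # randomly connected threshold ("projection") units
K = 5          # each projection unit samples K random presynaptic neurons

# Sparse, fixed, random connectivity: each row has K entries of weight 1.
A = np.zeros((M, N))
for i in range(M):
    A[i, rng.choice(N, size=K, replace=False)] = 1.0
thresholds = rng.integers(1, K + 1, size=M)   # per-unit firing thresholds

def features(x):
    """Binary responses of the random threshold units to spike pattern x."""
    return (A @ x >= thresholds).astype(float)

def log_likelihood(x, lam, log_z):
    """Unnormalized log-likelihood (negative 'surprise') of pattern x."""
    return lam @ features(x) - log_z

def fit(patterns, n_steps=2000, lr=0.05):
    """Toy learning: match the mean activation of each projection unit to data
    by stochastic gradient ascent on the log-likelihood (maximum entropy form)."""
    lam = np.zeros(M)
    data_means = np.mean([features(x) for x in patterns], axis=0)
    state = patterns[0].copy()
    for _ in range(n_steps):
        # one Metropolis-style flip of a random neuron as a crude model sampler
        j = rng.integers(N)
        flipped = state.copy(); flipped[j] = 1 - flipped[j]
        if np.log(rng.random()) < lam @ (features(flipped) - features(state)):
            state = flipped
        lam += lr * (data_means - features(state))
    return lam

# Usage with synthetic spike patterns (firing probability ~0.1 per neuron):
data = [(rng.random(N) < 0.1).astype(float) for _ in range(200)]
lam = fit(data)
print(log_likelihood(data[0], lam, log_z=0.0))
```

Because the random connectivity stays fixed and only the readout weights change, each update is local to a projection unit, which is the property that makes a plausible neural implementation conceivable.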


eLife
2020
Vol 9
Author(s):
Kevin A Bolding
Shivathmihai Nagappan
Bao-Xia Han
Fan Wang
Kevin M Franks

Pattern completion, or the ability to retrieve stable neural activity patterns from noisy or partial cues, is a fundamental feature of memory. Theoretical studies indicate that recurrently connected auto-associative or discrete attractor networks can perform this process. Although pattern completion and attractor dynamics have been observed in various recurrent neural circuits, the role recurrent circuitry plays in implementing these processes remains unclear. In recordings from head-fixed mice, we found that odor responses in olfactory bulb degrade under ketamine/xylazine anesthesia while responses immediately downstream, in piriform cortex, remain robust. Recurrent connections are required to stabilize cortical odor representations across states. Moreover, piriform odor representations exhibit attractor dynamics, both within and across trials, and these are also abolished when recurrent circuitry is eliminated. Here, we present converging evidence that recurrently-connected piriform populations stabilize sensory representations in response to degraded inputs, consistent with an auto-associative function for piriform cortex supported by recurrent circuitry.
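
The auto-associative function invoked here can be sketched with a textbook Hopfield-style network: stored activity patterns become fixed points of the recurrent dynamics, which then restore a full pattern from a noisy or partial cue. The network size, number of patterns, and noise level below are arbitrary choices for illustration, not a model of piriform circuitry.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5                                 # neurons, stored odor-like patterns

patterns = rng.choice([-1, 1], size=(P, N))   # +/-1 activity patterns
W = (patterns.T @ patterns) / N               # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                      # no self-connections

def degrade(x, flip_prob=0.3):
    """Noisy/partial cue: flip a fraction of units, mimicking degraded input."""
    mask = rng.random(x.size) < flip_prob
    return np.where(mask, -x, x)

def recall(cue, n_iters=20):
    """Asynchronous recurrent updates drive the state toward a stored pattern."""
    state = cue.copy()
    for _ in range(n_iters):
        for j in rng.permutation(state.size):
            state[j] = 1 if W[j] @ state >= 0 else -1
    return state

cue = degrade(patterns[0])
out = recall(cue)
print("overlap before:", (cue @ patterns[0]) / N)
print("overlap after: ", (out @ patterns[0]) / N)
```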


Author(s):  
Samantha Hughes
Tansu Celikel

From single-cell organisms to complex neural networks, all have evolved to provide control solutions that generate context- and goal-specific actions. Neural circuits performing sensorimotor computation to drive navigation employ inhibitory control as a gating mechanism as they hierarchically transform (multi)sensory information into motor actions. Here, we review this literature to critically discuss the proposition that prominent inhibitory projections form sensorimotor circuits. After reviewing the neural circuits of navigation across various invertebrate species, we argue that, with increased neural circuit complexity and the emergence of parallel computations, inhibitory circuits acquire new functions. The contribution of inhibitory neurotransmission to navigation goes beyond shaping the communication that drives motor neurons; it also includes the encoding of emergent sensorimotor representations. A mechanistic understanding of the neural circuits performing sensorimotor computations in invertebrates will unravel the minimum circuit requirements driving adaptive navigation.


2016
Vol 113 (5)
pp. 1441-1446
Author(s):
Andrei S. Kozlov
Timothy Q. Gentner

High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, the European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in central olfactory neurons of mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
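
A minimal sketch of the kind of unsupervised rule mentioned above, combining a Hebbian update with rectification and divisive normalization across the population, is given below. The input dimensionality, number of units, Oja-style weight decay, and toy training data are assumptions for illustration; they are not the authors' network or training set (which used starling song spectrograms).

```python
import numpy as np

rng = np.random.default_rng(2)
D = 64      # dimensionality of an input patch (e.g., a small spectrogram window)
K = 32      # number of model neurons / receptive fields

W = rng.normal(scale=0.1, size=(K, D))        # initial receptive fields

def respond(x, sigma=0.1):
    """Rectified responses passed through divisive normalization across the population."""
    a = np.maximum(W @ x, 0.0)
    return a / (sigma + a.sum())              # normalization makes the code sparse and competitive

def hebbian_step(x, lr=0.05):
    """One Hebbian update with an Oja-style decay that keeps the weights bounded."""
    global W
    r = respond(x)
    W += lr * (np.outer(r, x) - (r ** 2)[:, None] * W)

# Training on toy nonnegative 'patches'; real use would substitute song spectrogram patches.
for _ in range(5000):
    x = np.abs(rng.normal(size=D))
    x /= np.linalg.norm(x) + 1e-9
    hebbian_step(x)

test = np.abs(rng.normal(size=D))
test /= np.linalg.norm(test)
print("active units for one test patch:", np.sum(respond(test) > 1e-3), "of", K)
```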


2017
Author(s):
Grace W. Lindsay
Mattia Rigotti
Melissa R. Warden
Earl K. Miller
Stefano Fusi

Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear 'mixed' selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which PFC exhibits computationally relevant properties such as mixed selectivity and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and allows the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of the two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results give intuition about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training.

Significance Statement: Prefrontal cortex (PFC) is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Despite the fact that models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
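
As a toy illustration of the modeling logic, the sketch below builds cells from random feedforward mixtures of two one-hot task variables, quantifies nonlinear mixed selectivity per cell as the share of response variance left over after additive stimulus and context effects, and then applies a simple normalized Hebbian rule to the random weights. The variable counts, nonlinearity, learning rate, and selectivity proxy are assumptions for illustration and are much cruder than the analyses in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_stim, n_ctx, n_cells = 4, 2, 200

def inputs(s, c):
    """Concatenated one-hot codes for stimulus s and context c."""
    x = np.zeros(n_stim + n_ctx)
    x[s] = 1.0
    x[n_stim + c] = 1.0
    return x

J = rng.normal(scale=1.0, size=(n_cells, n_stim + n_ctx))   # random feedforward weights

def responses(J):
    """Rectified population response to every stimulus-context combination."""
    return np.array([[np.maximum(J @ inputs(s, c), 0.0)
                      for c in range(n_ctx)] for s in range(n_stim)])  # (stim, ctx, cell)

def interaction_variance(R):
    """Per-cell share of variance not explained by additive stim + ctx effects,
    used here as a crude proxy for nonlinear mixed selectivity."""
    grand = R.mean(axis=(0, 1), keepdims=True)
    stim_eff = R.mean(axis=1, keepdims=True) - grand
    ctx_eff = R.mean(axis=0, keepdims=True) - grand
    resid = R - grand - stim_eff - ctx_eff
    total = R.var(axis=(0, 1)) + 1e-12
    return (resid ** 2).mean(axis=(0, 1)) / total

R0 = responses(J)

# Hebbian updates: strengthen weights between co-active inputs and cells, then renormalize rows.
for _ in range(200):
    s, c = rng.integers(n_stim), rng.integers(n_ctx)
    x = inputs(s, c)
    h = np.maximum(J @ x, 0.0)
    J += 0.05 * np.outer(h, x)
    J /= np.linalg.norm(J, axis=1, keepdims=True)

R1 = responses(J)
print("mean interaction variance, random vs. after Hebbian learning:",
      interaction_variance(R0).mean(), interaction_variance(R1).mean())
```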


2017
Author(s):  
Nima Dehghani

Success in the fine control of the nervous system depends on a deeper understanding of how neural circuits control behavior. There is, however, a wide gap between the components of neural circuits and behavior. We advance the idea that a suitable approach for narrowing this gap has to be based on a multiscale information-theoretic description of the system. We evaluate the possibility that brain-wide complex neural computations can be dissected into a hierarchy of computational motifs that rely on smaller circuit modules interacting at multiple scales. In doing so, we draw attention to the importance of formalizing the goals of stimulation in terms of neural computations so that the possible implementations are matched in scale to the underlying circuit modules.


e-Neuroforum
2013
Vol 19 (2)
Author(s):
F. Helmchen
M. Hübener

The brain's astounding achievements regarding movement control and sensory processing are based on complex spatiotemporal activity patterns in the relevant neuronal networks. Our understanding of neuronal network activity is, however, still poor, not least because of the experimental difficulties in directly observing neural circuits at work in the living brain (in vivo). Over the last decade, new opportunities have emerged, especially utilizing two-photon microscopy, to investigate neuronal networks in action. Central to this progress was the development of fluorescent proteins that change their emission depending on cell activity, enabling the visualization of dynamic activity patterns in local neuronal populations. Currently, genetically encoded calcium indicators, proteins that indicate neuronal activity based on action potential-evoked calcium influx, are being increasingly used. Long-term expression of these indicators allows repeated monitoring of the same neurons over weeks and months, such that the stability and plasticity of their functional properties can be characterized. Furthermore, permanent indicator expression facilitates the correlation of cellular activity patterns and behavior in awake animals. Using examples from recent studies of information processing in the mouse neocortex, we review in this article these fascinating new possibilities and discuss the great potential of the fluorescent proteins to elucidate the mysteries of neural circuits.


2021
Author(s):
Marilyn Gatica
Fernando E. Rosas
Pedro A.M. Mediano
Ibai Diez
Stephan P. Swinnen
...  

The human brain generates a rich repertoire of spatio-temporal activity patterns, which support a wide variety of motor and cognitive functions. These patterns of activity change with age in a multi-factorial manner. One of these factors is the variation in the brain's connectomics that occurs along the lifespan. However, the precise relationship between high-order functional interactions and connectomics, as well as their variation with age, remains largely unknown, in part due to the absence of mechanistic models that can efficiently map brain connectomics to functional connectivity in aging. To investigate this issue, we have built a neurobiologically realistic whole-brain computational model using both anatomical and functional MRI data from 161 participants ranging from 10 to 80 years old. We show that the age differences in high-order functional interactions can be largely explained by variations in the connectome. Based on this finding, we propose a simple neurodegeneration model that is representative of normal physiological aging. When applied to connectomes of young participants, it reproduces the age variations that occur in the high-order structure of the functional data. Overall, these results begin to disentangle the mechanisms by which structural changes in the connectome lead to functional differences in the aging brain. Our model can also serve as a starting point for modelling more complex forms of pathological aging or cognitive deficits.
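
A deliberately simplified sketch of the mapping from structural connectome to functional connectivity, plus a toy 'aging' rule, is shown below: regional activity follows linear stochastic dynamics coupled through the connectome, functional connectivity is the correlation matrix of the simulated signals, and degeneration is modeled as uniform weakening of structural weights. The random connectome, linear dynamics, and uniform-weakening rule are assumptions for illustration; the study itself used a neurobiologically realistic whole-brain model fitted to empirical MRI data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_regions = 90

# Toy structural connectome (replace with an empirical anatomical connectivity matrix).
C = np.abs(rng.normal(size=(n_regions, n_regions)))
C = (C + C.T) / 2
np.fill_diagonal(C, 0.0)
C /= np.linalg.eigvalsh(C).max()      # normalize so the linear dynamics below stay stable

def simulate_fc(C, coupling=0.5, noise=1.0, steps=20000, dt=0.01):
    """Functional connectivity from a linear stochastic model driven by the connectome."""
    x = np.zeros(n_regions)
    samples = []
    for t in range(steps):
        drift = -x + coupling * (C @ x)
        x = x + dt * drift + np.sqrt(dt) * noise * rng.normal(size=n_regions)
        if t % 10 == 0:
            samples.append(x.copy())
    return np.corrcoef(np.array(samples).T)

def degenerate(C, strength=0.3):
    """Toy 'aging' rule: uniform weakening of structural connections."""
    return (1.0 - strength) * C

fc_young = simulate_fc(C)
fc_aged = simulate_fc(degenerate(C))
print("mean functional coupling, young vs. aged connectome:",
      fc_young[np.triu_indices(n_regions, 1)].mean(),
      fc_aged[np.triu_indices(n_regions, 1)].mean())
```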


2018
Author(s):
Daniel Acker
Suzanne Paradis
Paul Miller

Our brains must maintain a representation of the world over a period of time much longer than the typical lifetime of the biological components producing that representation. For example, recent research suggests that dendritic spines in the adult mouse hippocampus are transient, with an average lifetime of approximately 10 days. If this is true, and if turnover is equally likely for all spines, approximately 95% of excitatory synapses onto a particular neuron will turn over within 30 days; however, a neuron's receptive field can be relatively stable over this period. Here, we use computational modeling to ask how memories can persist in neural circuits such as the hippocampus and visual cortex in the face of synapse turnover. We demonstrate that Hebbian learning during replay of pre-synaptic activity patterns can integrate newly formed synapses into pre-existing memories. Further, we find that Hebbian learning during replay is sufficient to stabilize the receptive fields of hippocampal place cells in a model of the grid-cell-to-place-cell transformation in CA1 and of orientation-selective cells in a model of the center-surround-to-simple-cell transformation in V1. We also ask how synapse turnover affects memory in Hopfield networks with CA3-like, auto-associative properties. We find that attractors of Hopfield networks are remarkably stable if learning occurs during network reactivations. Together, these data suggest that a simple learning rule, correlative Hebbian plasticity of synaptic strengths, is sufficient to preserve neural representations in the face of synapse turnover, even in the absence of Hebbian structural plasticity.
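
The Hopfield-network result summarized above can be sketched as follows: Hebbian weights are stored on a sparse connectivity mask, a fraction of synapses is repeatedly pruned and replaced by new zero-weight synapses (turnover), and Hebbian learning applied to states reached during noisy reactivations ('replay') re-integrates the new synapses into the attractor. The connectivity density, turnover and replay rates, and update scheme are illustrative assumptions, not the parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 3
patterns = rng.choice([-1, 1], size=(P, N))

# Sparse connectivity mask (~20% of possible synapses exist at any time).
mask = rng.random((N, N)) < 0.2
np.fill_diagonal(mask, False)
W = mask * (patterns.T @ patterns) / N        # Hebbian weights on existing synapses

def settle(cue, n_iters=10):
    """Recurrent updates drive the state toward a stored attractor."""
    state = cue.copy()
    for _ in range(n_iters):
        state = np.where(W @ state >= 0, 1, -1)
    return state

def overlap(a, b):
    return (a @ b) / len(a)

def turnover(frac=0.05):
    """Remove a random fraction of existing synapses and grow new silent ones."""
    global mask, W
    existing = np.argwhere(mask)
    drop = existing[rng.random(len(existing)) < frac]
    mask[drop[:, 0], drop[:, 1]] = False
    grow = (rng.random((N, N)) < frac * 0.2) & ~mask
    np.fill_diagonal(grow, False)
    mask |= grow
    W = np.where(mask, W, 0.0)                # new synapses start at zero weight

def replay(lr=0.1):
    """Hebbian learning on the settled (reactivated) state integrates new synapses."""
    global W
    for p in patterns:
        state = settle(np.where(rng.random(N) < 0.1, -p, p))  # reactivation from a noisy cue
        W = (W + lr * mask * np.outer(state, state) / N) * mask
        np.fill_diagonal(W, 0.0)

for day in range(30):
    turnover()
    replay()                                   # comment this line out to see attractors degrade

cue = np.where(rng.random(N) < 0.2, -patterns[0], patterns[0])
print("recall overlap after 30 turnover steps:", overlap(settle(cue), patterns[0]))
```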

