Stable task information from an unstable neural population

2019 ◽  
Author(s):  
Michael E. Rule ◽  
Adrianna R. Loback ◽  
Dhruva V. Raman ◽  
Laura Driscoll ◽  
Christopher D. Harvey ◽  
...  

Abstract: Over days and weeks, neural activity representing an animal’s position and movement in sensorimotor cortex has been found to continually reconfigure or ‘drift’ during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. We show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioural variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.

eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Michael E Rule ◽  
Adrianna R Loback ◽  
Dhruva V Raman ◽  
Laura N Driscoll ◽  
Christopher D Harvey ◽  
...  

Over days and weeks, neural activity representing an animal’s position and movement in sensorimotor cortex has been found to continually reconfigure or ‘drift’ during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. Analyzing long-term calcium imaging recordings from posterior parietal cortex in mice (Mus musculus), we show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioral variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.
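The compensation described here can be illustrated with a toy model (not the paper's decoder or data): a hypothetical population whose encoding weights drift slowly from day to day, while a downstream linear readout is updated by a delta rule, a standard example of a biologically plausible local learning rule. All sizes and rates below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_days, trials_per_day = 50, 10, 200
w_enc = rng.normal(size=n_neurons)   # hypothetical encoding weights of the population
readout = np.zeros(n_neurons)        # plastic downstream linear readout
lr = 0.002                           # learning rate of the local rule

sq_errs = []
for day in range(n_days):
    # Slow representational drift: the encoding reconfigures a little each day.
    w_enc += 0.1 * rng.normal(size=n_neurons)
    for _ in range(trials_per_day):
        behavior = rng.normal()      # stand-in for position/movement on a trial
        rates = behavior * w_enc + 0.1 * rng.normal(size=n_neurons)
        err = behavior - readout @ rates
        readout += lr * err * rates  # delta rule: error times presynaptic activity
        sq_errs.append(err ** 2)

early, late = np.mean(sq_errs[:200]), np.mean(sq_errs[-200:])
```

Early errors are large while the readout is first learned; late errors stay small because the delta rule continually re-aligns the readout with the drifting code.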


Cell Reports ◽  
2020 ◽  
Vol 32 (6) ◽  
pp. 108006 ◽  
Author(s):  
Xiyuan Jiang ◽  
Hemant Saggar ◽  
Stephen I. Ryu ◽  
Krishna V. Shenoy ◽  
Jonathan C. Kao

2019 ◽  
Vol 31 (10) ◽  
pp. 1985-2003 ◽  
Author(s):  
Chen Beer ◽  
Omri Barak

Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network—a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning—one trial at a time—has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
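The LMS-versus-FORCE contrast has a compact illustration in a toy linear readout (the study itself trains recurrent networks; the two correlated "trials" and all parameters below are stand-ins). LMS takes plain gradient steps, so correlated trials interfere, whereas the recursive-least-squares (RLS) update at the core of FORCE maintains an inverse-correlation matrix that decorrelates successive updates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
# Two strongly correlated "trials": overlapping inputs, conflicting targets.
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.45 * rng.normal(size=n)
trials = [(x1, 1.0), (x2, -1.0)]

# LMS: plain gradient steps, one trial at a time; later trials partially
# overwrite earlier ones because the inputs overlap.
w_lms = np.zeros(n)
for _ in range(10):
    for x, y in trials:
        w_lms += 0.01 * (y - w_lms @ x) * x

# RLS (the core of FORCE): P tracks the inverse input-correlation matrix,
# so each update is rotated away from directions used by previous trials.
w_rls = np.zeros(n)
P = np.eye(n)
for _ in range(10):
    for x, y in trials:
        Px = P @ x
        k = Px / (1.0 + x @ Px)
        w_rls += (y - w_rls @ x) * k
        P -= np.outer(k, Px)

lms_err = sum((y - w_lms @ x) ** 2 for x, y in trials)
rls_err = sum((y - w_rls @ x) ** 2 for x, y in trials)
```

With the same number of passes, the RLS readout fits both trials while LMS is still climbing out of the interference between them, mirroring the convergence gap described above.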


2017 ◽  
Vol 29 (12) ◽  
pp. 3119-3180 ◽  
Author(s):  
Adrianna Loback ◽  
Jason Prentice ◽  
Mark Ioffe ◽  
Michael Berry II

An appealing new principle for neural population codes is that correlations among neurons organize neural activity patterns into a discrete set of clusters, which can each be viewed as a noise-robust population codeword. Previous studies assumed that these codewords corresponded geometrically with local peaks in the probability landscape of neural population responses. Here, we analyze multiple data sets of the responses of approximately 150 retinal ganglion cells and show that local probability peaks are absent under broad, nonrepeated stimulus ensembles, which are characteristic of natural behavior. However, we find that neural activity still forms noise-robust clusters in this regime, albeit clusters with a different geometry. We start by defining a soft local maximum, which is a local probability maximum when constrained to a fixed spike count. Next, we show that soft local maxima are robustly present and can, moreover, be linked across different spike count levels in the probability landscape to form a ridge. We found that these ridges comprise combinations of spiking and silence in the neural population such that all of the spiking neurons are members of the same neuronal community, a notion from network theory. We argue that a neuronal community shares many of the properties of Donald Hebb's classic cell assembly and show that a simple, biologically plausible decoding algorithm can recognize the presence of a specific neuronal community.
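The notion of a soft local maximum can be made concrete on synthetic binary patterns (the paper analyzes roughly 150 retinal ganglion cells; the 6-neuron data and the `community` pattern below are invented for illustration): a pattern qualifies if it is more frequent than every pattern at the same spike count that differs by relocating one spike.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(2)
n = 6
# Synthetic recording: one recurring pattern amid uniform noise patterns.
community = (1, 1, 1, 0, 0, 0)
patterns = [community] * 300
patterns += [tuple(rng.integers(0, 2, size=n)) for _ in range(700)]
counts = Counter(patterns)

def soft_local_max(p):
    """True if p outnumbers every same-spike-count pattern reachable by
    moving a single spike (the fixed-spike-count neighborhood)."""
    on = [i for i in range(n) if p[i]]
    off = [i for i in range(n) if not p[i]]
    for i in on:
        for j in off:
            q = list(p)
            q[i], q[j] = 0, 1
            if counts[tuple(q)] >= counts[p]:
                return False
    return True
```

Linking such maxima across spike-count levels is what yields the ridges described above.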


2015 ◽  
Vol 5 (1) ◽  
Author(s):  
Hideaki Shimazaki ◽  
Kolia Sadeghi ◽  
Tomoe Ishikawa ◽  
Yuji Ikegaya ◽  
Taro Toyoizumi

Abstract: Activity patterns of neural populations are constrained by underlying biological mechanisms. These patterns are characterized not only by individual activity rates and pairwise correlations but also by statistical dependencies among groups of neurons larger than two, known as higher-order interactions (HOIs). While HOIs are ubiquitous in neural activity, primary characteristics of HOIs remain unknown. Here, we report that simultaneous silence (SS) of neurons concisely summarizes neural HOIs. Spontaneously active neurons in cultured hippocampal slices express SS that is more frequent than predicted by their individual activity rates and pairwise correlations. The SS explains structured HOIs seen in the data, namely, alternating signs at successive interaction orders. Inhibitory neurons are necessary to maintain significant SS. The structured HOIs predicted by SS were observed in a simple neural population model characterized by spiking nonlinearity and correlated input. These results suggest that SS is a ubiquitous feature of HOIs that constrain neural activity patterns and can influence information processing.
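A minimal version of the population model mentioned in the final step, a shared drive passed through a spiking threshold with invented weights, already produces silence in excess of the rate-based prediction:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_bins = 10, 20000
# Correlated input: a shared drive plus private noise, then a threshold
# nonlinearity (weights chosen so each neuron's drive is standard normal).
common = rng.normal(size=n_bins)
drive = 0.8 * common[:, None] + 0.6 * rng.normal(size=(n_bins, n_neurons))
spikes = (drive > 1.0).astype(int)

p_silent_obs = np.mean(spikes.sum(axis=1) == 0)   # observed simultaneous silence
rates = spikes.mean(axis=0)
p_silent_ind = np.prod(1 - rates)                 # prediction from rates alone
```

This comparison only shows that rates alone underpredict silence; separating the pairwise contribution from genuine HOIs, as the paper does, requires a pairwise maximum-entropy fit.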


2021 ◽  
Author(s):  
Maurizio De Pittà ◽  
Nicolas Brunel

Competing accounts propose that working memory (WM) is subserved either by persistent activity in single neurons, or by time-varying activity across a neural population, or by activity-silent mechanisms carried out by hidden internal states of the neural population. While WM is traditionally regarded to originate exclusively from neuronal interactions, cortical networks also include astrocytes that can modulate neural activity. We propose that different mechanisms of WM can be brought forth by astrocyte-mediated modulations of synaptic transmitter release. In this account, the emergence of different mechanisms depends on the network's spontaneous activity and the geometry of the connections between synapses and astrocytes.


2018 ◽  
Author(s):  
Adrianna R. Loback ◽  
Michael J. Berry

When correlations within a neural population are strong enough, neural activity in early visual areas is organized into a discrete set of clusters. Here, we show that a simple, biologically plausible circuit can learn and then readout in real-time the identity of experimentally measured clusters of retinal ganglion cell population activity. After learning, individual readout neurons develop cluster tuning, meaning that they respond strongly to any neural activity pattern in one cluster and weakly to all other inputs. Different readout neurons specialize for different clusters, and all input clusters can be learned, as long as the number of readout units is mildly larger than the number of input clusters. We argue that this operation can be repeated as signals flow up the cortical hierarchy.


eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Chethan Pandarinath ◽  
Vikash Gilja ◽  
Christine H Blabe ◽  
Paul Nuyujukian ◽  
Anish A Sarma ◽  
...  

The prevailing view of motor cortex holds that motor cortical neural activity represents muscle or movement parameters. However, recent studies in non-human primates have shown that neural activity does not simply represent muscle or movement parameters; instead, its temporal structure is well-described by a dynamical system where activity during movement evolves lawfully from an initial pre-movement state. In this study, we analyze neuronal ensemble activity in motor cortex in two clinical trial participants diagnosed with Amyotrophic Lateral Sclerosis (ALS). We find that activity in human motor cortex has similar dynamical structure to that of non-human primates, indicating that human motor cortex contains a similar underlying dynamical system for movement generation. Clinical trial registration: NCT00912041.
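The dynamical-systems description can be caricatured with a linear system fit by least squares (synthetic rotational dynamics, not the participants' recordings; the decay and rotation rates in `A_true` are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 8, 500
# Ground truth: a slow rotation in two dimensions plus overall decay,
# the kind of lawful evolution from an initial state described above.
theta = 0.1
A_true = np.eye(n)
A_true[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]
A_true *= 0.98

x = np.zeros((T, n))
x[0] = rng.normal(size=n)
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.05 * rng.normal(size=n)

# Least-squares fit of the one-step map x[t+1] ≈ A x[t].
M, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_fit = M.T
```

In real data one would fit such dynamics to trial-aligned population activity, or to latent factors extracted from it, rather than to a raw toy trajectory; the complex eigenvalues of the fitted matrix are what reveal rotational structure.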


2021 ◽  
Author(s):  
Kristopher T. Jensen ◽  
Ta-Chu Kao ◽  
Jasmine Talia Stone ◽  
Guillaume Hennequin

Latent variable models are ubiquitous in the exploratory analysis of neural population recordings, where they allow researchers to summarize the activity of large populations of neurons in lower-dimensional 'latent' spaces. Existing methods can generally be categorized into (i) Bayesian methods that facilitate flexible incorporation of prior knowledge and uncertainty estimation, but which typically do not scale to large datasets; and (ii) highly parameterized methods without explicit priors that scale better but often struggle in the low-data regime. Here, we bridge this gap by developing a fully Bayesian yet scalable version of Gaussian process factor analysis (bGPFA), which models neural data as arising from a set of inferred latent processes with a prior that encourages smoothness over time. Additionally, bGPFA uses automatic relevance determination to infer the dimensionality of neural activity directly from the training data during optimization. To enable the analysis of continuous recordings without trial structure, we introduce a novel variational inference strategy that scales near-linearly in time and also allows for non-Gaussian noise models more appropriate for electrophysiological recordings. We apply bGPFA to continuous recordings spanning 30 minutes with over 14 million data points from primate motor and somatosensory cortices during a self-paced reaching task. We show that neural activity progresses from an initial state at target onset to a reach-specific preparatory state well before movement onset. The distance between these initial and preparatory latent states is predictive of reaction times across reaches, suggesting that such preparatory dynamics have behavioral relevance despite the lack of externally imposed delay periods. Additionally, bGPFA discovers latent processes that evolve over slow timescales on the order of several seconds and contain complementary information about reaction time. These timescales are longer than those revealed by methods that focus on individual movement epochs and may reflect fluctuations in, for example, task engagement.
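The generative picture behind (b)GPFA, smooth Gaussian-process latents mapped linearly into neural space, can be sketched with invented dimensions and a plain SVD standing in for the full Bayesian inference and ARD machinery:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_latent, n_neurons = 400, 2, 30
# Smooth latent processes: samples from an RBF Gaussian-process prior over time.
t = np.arange(T)
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 20.0 ** 2)
latents = rng.multivariate_normal(np.zeros(T), K + 1e-6 * np.eye(T),
                                  size=n_latent).T        # shape (T, n_latent)

C = rng.normal(size=(n_latent, n_neurons))                # loading matrix
Y = latents @ C + 0.3 * rng.normal(size=(T, n_neurons))   # noisy "recording"

# The population activity is effectively low-dimensional: the top singular
# values carry the latent signal, the rest reflect noise.
S = np.linalg.svd(Y - Y.mean(axis=0), compute_uv=False)
```

bGPFA replaces the SVD with scalable variational inference, places the smoothness prior explicitly over time, and uses automatic relevance determination to prune unused latent dimensions rather than fixing `n_latent` in advance.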

