A deep learning framework for inference of single-trial neural population activity from calcium imaging with sub-frame temporal resolution

2021 ◽  
Author(s):  
Feng Zhu ◽  
Harrison A Grier ◽  
Raghav Tandon ◽  
Changjia Cai ◽  
Andrea Giovannucci ◽  
...  

In many brain areas, neural populations act as a coordinated network whose state is tied to behavior on a moment-by-moment basis and millisecond timescale. Two-photon (2p) calcium imaging is a powerful tool to probe network-scale computation, as it can measure the activity of many individual neurons, monitor multiple layers simultaneously, and sample from identified cell types. However, estimating network states and dynamics from 2p measurements has proven challenging because of noise, inherent nonlinearities, and limitations on temporal resolution. Here we describe RADICaL, a deep learning method to overcome these limitations at the population level. RADICaL extends methods that exploit dynamics in spiking activity for application to deconvolved calcium signals, whose statistics and temporal dynamics are quite distinct from electrophysiologically recorded spikes. It incorporates a novel network training strategy that exploits the timing of 2p sampling to recover network dynamics with high temporal precision. In synthetic tests, RADICaL infers network states more accurately than previous methods, particularly for high-frequency components. In real 2p recordings from sensorimotor areas in mice performing a "water grab" task, RADICaL infers network states with close correspondence to single-trial variations in behavior, and maintains high-quality inference even when neuronal populations are substantially reduced.
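The sub-frame timing that RADICaL exploits comes from the raster scan itself: a 2p microscope acquires each frame row by row, so a neuron's true sample time depends on its vertical position within the frame. A minimal numpy sketch of this idea, with hypothetical scan parameters (not the paper's configuration):

```python
import numpy as np

# Hypothetical scan parameters for illustration.
frame_rate_hz = 30.0              # frames per second
n_rows = 512                      # scan rows per frame
frame_period = 1.0 / frame_rate_hz
row_period = frame_period / n_rows

def sample_times(neuron_rows, n_frames):
    """Sub-frame sample times for neurons at the given scan rows.

    Returns an (n_neurons, n_frames) array: each neuron is offset within
    every frame by its row position, giving effective sampling jitter that
    can be exploited for sub-frame temporal resolution.
    """
    frame_starts = np.arange(n_frames) * frame_period
    offsets = np.asarray(neuron_rows) * row_period
    return frame_starts[None, :] + offsets[:, None]

t = sample_times([0, 256, 511], n_frames=3)
```

A neuron halfway down the frame is sampled half a frame period later than one at the top, even though both appear in the "same" frame.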

2018 ◽  
Author(s):  
Diogo Peixoto ◽  
Roozbeh Kiani ◽  
Chandramouli Chandrasekaran ◽  
Stephen I. Ryu ◽  
Krishna V. Shenoy ◽  
...  

Summary: Studies in multiple species have revealed the existence of neural signals that lawfully covary with different aspects of the decision-making process, including choice, sensory evidence that supports the choice, and reaction time. These signals, often interpreted as the representation of a decision variable (DV), have been identified in several motor preparation circuits and provide insight about mechanisms underlying the decision-making process. However, single-trial dynamics of this process or its representation at the neural population level remain poorly understood. Here, we examine the representation of the DV in simultaneously recorded neural populations of dorsal premotor (PMd) and primary motor (M1) cortices of monkeys performing a random dots direction discrimination task with arm movements as the behavioral report. We show that single-trial DVs covary with stimulus difficulty in both areas but are stronger and appear earlier in PMd compared to M1 when the stimulus duration is fixed and predictable. When temporal uncertainty is introduced by making the stimulus duration variable, single-trial DV dynamics are accelerated across the board and the two areas become largely indistinguishable throughout the entire trial. These effects are not trivially explained by the faster emergence of motor kinematic signals in PMd and M1. All key aspects of the data were replicated by a computational model that relies on progressive recruitment of units with stable choice-related modulation of neural population activity. In contrast with several recent results in rodents, decision signals in PMd and M1 are not carried by short sequences of activity in non-overlapping groups of neurons but are instead distributed across many neurons, which, once recruited, represent the decision stably during individual behavioral epochs of the trial.
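One common way to extract a single-trial DV of this kind (illustrative only; not necessarily the authors' exact procedure) is to project single-trial population activity onto a choice axis estimated from trial-averaged responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials, n_time = 40, 100, 50

# Synthetic data: each neuron has a fixed choice preference that ramps in time.
pref = rng.normal(size=n_neurons)
choice = rng.integers(0, 2, n_trials) * 2 - 1          # -1 or +1 per trial
ramp = np.linspace(0.0, 1.0, n_time)
rates = (pref[:, None, None] * choice[None, :, None] * ramp[None, None, :]
         + 0.5 * rng.normal(size=(n_neurons, n_trials, n_time)))

# Choice axis: difference of trial-averaged activity at the end of the trial.
axis = rates[:, choice == 1, -1].mean(1) - rates[:, choice == -1, -1].mean(1)
axis /= np.linalg.norm(axis)

# Single-trial DV: projection of population activity onto the choice axis.
dv = np.einsum('n,ntm->tm', axis, rates)               # (n_trials, n_time)

# The late-trial DV should separate the two choices.
sep = dv[choice == 1, -1].mean() - dv[choice == -1, -1].mean()
```

Because the signal is distributed across many neurons rather than carried by short sequences, the projection remains informative throughout the trial once the ramp has begun.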


2017 ◽  
Author(s):  
Chethan Pandarinath ◽  
Daniel J. O’Shea ◽  
Jasmine Collins ◽  
Rafal Jozefowicz ◽  
Sergey D. Stavisky ◽  
...  

Neuroscience is experiencing a data revolution in which simultaneous recording of many hundreds or thousands of neurons is revealing structure in population activity that is not apparent from single-neuron responses. This structure is typically extracted from trial-averaged data. Single-trial analyses are challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. Here we introduce Latent Factor Analysis via Dynamical Systems (LFADS), a deep learning method to infer latent dynamics from single-trial neural spiking data. LFADS uses a nonlinear dynamical system (a recurrent neural network) to infer the dynamics underlying observed population activity and to extract ‘de-noised’ single-trial firing rates from neural spiking data. We apply LFADS to a variety of monkey and human motor cortical datasets, demonstrating its ability to predict observed behavioral variables with unprecedented accuracy, extract precise estimates of neural dynamics on single trials, infer perturbations to those dynamics that correlate with behavioral choices, and combine data from non-overlapping recording sessions (spanning months) to improve inference of underlying dynamics. In summary, LFADS leverages all observations of a neural population’s activity to accurately model its dynamics on single trials, opening the door to a detailed understanding of the role of dynamics in performing computation and ultimately driving behavior.
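The generative model underlying LFADS treats observed spikes as Poisson samples of firing rates driven by low-dimensional latent dynamics. A sketch of that generative assumption on synthetic data; the latent system here is an arbitrary damped rotation chosen only for illustration, not a fitted LFADS model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_time, n_factors = 30, 200, 3

# Latent dynamical system: a damped rotation in the first two factors plus
# a decaying third factor (illustrative stand-in for learned RNN dynamics).
theta = 0.1
A = 0.98 * np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           0.95]])
z = np.zeros((n_time, n_factors))
z[0] = [1.0, 0.0, 1.0]
for t in range(1, n_time):
    z[t] = A @ z[t - 1]

# Readout: log firing rates are a linear function of the latent state,
# and observed spike counts are Poisson samples of those rates.
W = rng.normal(size=(n_neurons, n_factors))
rates = np.exp(z @ W.T - 1.0)          # single-trial firing rates per bin
spikes = rng.poisson(rates)            # observed noisy spike counts
```

LFADS runs this logic in reverse: given only `spikes`, it infers the latent trajectory and the de-noised `rates` on each single trial.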


2021 ◽  
Author(s):  
Gwendolin Schoenfeld ◽  
Stefano Carta ◽  
Peter Rupprecht ◽  
Aslı Ayaz ◽  
Fritjof Helmchen

Neuronal population activity in the hippocampal CA3 subfield is implicated in cognitive brain functions such as memory processing and spatial navigation. However, because of its deep location in the brain, the CA3 area has been difficult to target with modern calcium imaging approaches. Here, we achieved chronic two-photon calcium imaging of CA3 pyramidal neurons with the red fluorescent calcium indicator R-CaMP1.07 in anesthetized and awake mice. We characterize CA3 neuronal activity at both the single-cell and population level and assess its stability across multiple imaging days. During both anesthesia and wakefulness, nearly all CA3 pyramidal neurons displayed calcium transients. Most of the calcium transients were consistent with a high incidence of bursts of action potentials, based on calibration measurements using simultaneous juxtacellular recordings and calcium imaging. In awake mice, we found state-dependent differences with striking large and prolonged calcium transients during locomotion. We estimate that trains of >30 action potentials over 3 s underlie these salient events. Their abundance in particular subsets of neurons was relatively stable across days. At the population level, we found that coactivity within the CA3 network was above chance level and that co-active neuron pairs maintained their correlated activity over days. Our results corroborate the notion of state-dependent spatiotemporal activity patterns in the recurrent network of CA3 and demonstrate that at least some features of population activity, namely coactivity of cell pairs and likelihood to engage in prolonged high activity, are maintained over days.


2021 ◽  
Author(s):  
Angus Chadwick ◽  
Adil Khan ◽  
Jasper Poort ◽  
Antonin Blot ◽  
Sonja Hofer ◽  
...  

Adaptive sensory behavior is thought to depend on processing in recurrent cortical circuits, but how dynamics in these circuits shapes the integration and transmission of sensory information is not well understood. Here, we study neural coding in recurrently connected networks of neurons driven by sensory input. We show analytically how information available in the network output varies with the alignment between feedforward input and the integrating modes of the circuit dynamics. In light of this theory, we analyzed neural population activity in the visual cortex of mice that learned to discriminate visual features. We found that over learning, slow patterns of network dynamics realigned to better integrate input relevant to the discrimination task. This realignment of network dynamics could be explained by changes in excitatory-inhibitory connectivity amongst neurons tuned to relevant features. These results suggest that learning tunes the temporal dynamics of cortical circuits to optimally integrate relevant sensory input.
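The alignment effect can be seen in a toy linear recurrent network: input delivered along a slowly decaying (integrating) mode accumulates far more signal than the same input along a fast mode. A two-unit sketch, not the paper's model:

```python
import numpy as np

# Recurrent weights with one slow (integrating) and one fast mode.
slow, fast = 0.99, 0.5
W = np.diag([slow, fast])

def integrate(b, n_steps=200, s=1.0):
    """Run x_{t+1} = W x_t + b * s from rest and return the final state.

    b is the feedforward input direction; its alignment with the slow
    mode determines how much signal the network accumulates.
    """
    x = np.zeros(2)
    for _ in range(n_steps):
        x = W @ x + b * s
    return x

aligned = integrate(np.array([1.0, 0.0]))      # input along the slow mode
misaligned = integrate(np.array([0.0, 1.0]))   # input along the fast mode
```

The steady-state gain along a mode with eigenvalue λ is 1/(1-λ), so the aligned input is amplified roughly 50-fold more here; realigning slow dynamics toward task-relevant input, as observed over learning, exploits exactly this effect.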


2009 ◽  
Vol 102 (1) ◽  
pp. 614-635 ◽  
Author(s):  
Byron M. Yu ◽  
John P. Cunningham ◽  
Gopal Santhanam ◽  
Stephen I. Ryu ◽  
Krishna V. Shenoy ◽  
...  

We consider the problem of extracting smooth, low-dimensional neural trajectories that summarize the activity recorded simultaneously from many neurons on individual experimental trials. Beyond the benefit of visualizing the high-dimensional, noisy spiking activity in a compact form, such trajectories can offer insight into the dynamics of the neural circuitry underlying the recorded activity. Current methods for extracting neural trajectories involve a two-stage process: the spike trains are first smoothed over time, then a static dimensionality-reduction technique is applied. We first describe extensions of the two-stage methods that allow the degree of smoothing to be chosen in a principled way and that account for spiking variability, which may vary both across neurons and across time. We then present a novel method for extracting neural trajectories—Gaussian-process factor analysis (GPFA)—which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework. We applied these methods to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices during reach planning and execution. By adopting a goodness-of-fit metric that measures how well the activity of each neuron can be predicted by all other recorded neurons, we found that the proposed extensions improved the predictive ability of the two-stage methods. The predictive ability was further improved by going to GPFA. From the extracted trajectories, we directly observed a convergence in neural state during motor planning, an effect that was shown indirectly by previous studies. We then show how such methods can be a powerful tool for relating the spiking activity across a neural population to the subject's behavior on a single-trial basis. 
Finally, to assess how well the proposed methods characterize neural population activity when the underlying time course is known, we performed simulations that revealed that GPFA performed tens of percent better than the best two-stage method.
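The two-stage baseline described above can be sketched in a few lines: Gaussian-smooth each spike train over time, then apply a static dimensionality reduction such as PCA. GPFA replaces this with a joint probabilistic fit; the sketch below covers only the two-stage variant, on synthetic counts:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_time = 61, 300

# Synthetic counts driven by a smooth 3-D latent trajectory (random walk),
# mapped through loadings C and a softplus nonlinearity to Poisson rates.
latent = np.cumsum(rng.normal(size=(n_time, 3)), axis=0)
C = rng.normal(size=(n_neurons, 3))
spikes = rng.poisson(np.logaddexp(0.0, latent @ C.T))

def gaussian_smooth(x, sigma=5.0):
    """Stage 1: convolve each neuron's spike count series with a Gaussian."""
    k = np.exp(-0.5 * (np.arange(-20, 21) / sigma) ** 2)
    k /= k.sum()
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, x)

# Stage 2: static dimensionality reduction (PCA via SVD) on the smoothed data.
smoothed = gaussian_smooth(spikes.astype(float))
centered = smoothed - smoothed.mean(axis=0)
_, s, Vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ Vt[:3].T        # 3-D neural trajectory over time
```

The smoothing kernel width `sigma` is fixed by hand here, which is exactly the arbitrariness GPFA removes by learning the timescales and the low-dimensional mapping within one probabilistic model.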


2020 ◽  
Vol 16 (11) ◽  
pp. e1008330
Author(s):  
Marcus A. Triplett ◽  
Zac Pujic ◽  
Biao Sun ◽  
Lilach Avitan ◽  
Geoffrey J. Goodhill

The pattern of neural activity evoked by a stimulus can be substantially affected by ongoing spontaneous activity. Separating these two types of activity is particularly important for calcium imaging data given the slow temporal dynamics of calcium indicators. Here we present a statistical model that decouples stimulus-driven activity from low-dimensional spontaneous activity in such data. The model identifies hidden factors giving rise to spontaneous activity while jointly estimating stimulus tuning properties that account for the confounding effects that these factors introduce. By applying our model to data from zebrafish optic tectum and mouse visual cortex, we obtain quantitative measurements of the extent to which neurons in each case are driven by evoked activity, spontaneous activity, and their interaction. By not averaging away potentially important information encoded in spontaneous activity, this broadly applicable model brings new insight into population-level neural activity within single trials.
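The decoupling idea can be sketched with a linear generative model: each trial's population response is a sum of stimulus tuning and low-dimensional spontaneous factors. For simplicity the factors are treated as known below, whereas the actual model infers them jointly with the tuning; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_stim, n_factors, n_trials = 50, 8, 2, 400

W = rng.normal(size=(n_neurons, n_stim))       # true stimulus tuning
L = rng.normal(size=(n_neurons, n_factors))    # loadings of hidden factors
stim = rng.integers(0, n_stim, size=n_trials)
S = np.eye(n_stim)[stim]                       # one-hot stimulus design
z = rng.normal(size=(n_trials, n_factors))     # shared spontaneous factors

# Trial responses: evoked component + low-dimensional spontaneous component.
Y = S @ W.T + z @ L.T + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Joint estimation: regress responses on the stimulus design and the factors
# together, so the tuning estimate is not confounded by spontaneous activity.
X = np.hstack([S, z])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
W_hat = coef[:n_stim].T
```

Regressing on the stimulus alone would fold trial-to-trial spontaneous fluctuations into the tuning estimate; including the factors in the design matrix accounts for that confound.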


2021 ◽  
pp. 1-39
Author(s):  
Laurent Bonnasse-Gahot ◽  
Jean-Pierre Nadal

Abstract: Classification is one of the major tasks that deep learning is successfully tackling. Categorization is also a fundamental cognitive ability. A well-known perceptual consequence of categorization in humans and other animals, categorical perception, is notably characterized by a within-category compression and a between-category separation: two items, close in input space, are perceived closer if they belong to the same category than if they belong to different categories. Elaborating on experimental and theoretical results in cognitive science, here we study categorical effects in artificial neural networks. We combine a theoretical analysis that makes use of mutual and Fisher information quantities and a series of numerical simulations on networks of increasing complexity. These formal and numerical analyses provide insights into the geometry of the neural representation in deep layers, with expansion of space near category boundaries and contraction far from category boundaries. We investigate categorical representation by using two complementary approaches: one mimics experiments in psychophysics and cognitive neuroscience by means of morphed continua between stimuli of different categories, while the other introduces a categoricality index that, for each layer in the network, quantifies the separability of the categories at the neural population level. We show on both shallow and deep neural networks that category learning automatically induces categorical perception. We further show that the deeper a layer, the stronger the categorical effects. As an outcome of our study, we propose a coherent view of the efficacy of different heuristic practices of the dropout regularization technique.
More generally, our view, which finds echoes in the neuroscience literature, insists on the differential impact of noise in any given layer depending on the geometry of the neural representation that is being learned, that is, on how this geometry reflects the structure of the categories.
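One plausible form of a categoricality index (the paper's exact definition may differ) is the ratio of between-category to within-category scatter of a layer's activations:

```python
import numpy as np

def categoricality(acts, labels):
    """Between- vs within-category scatter of one layer's activations.

    Higher values mean the categories are more separable at the population
    level; comparing against shuffled labels gives a chance baseline.
    """
    acts = np.asarray(acts, dtype=float)
    labels = np.asarray(labels)
    mu = acts.mean(axis=0)
    between = within = 0.0
    for c in np.unique(labels):
        grp = acts[labels == c]
        between += len(grp) * np.sum((grp.mean(axis=0) - mu) ** 2)
        within += np.sum((grp - grp.mean(axis=0)) ** 2)
    return between / within

rng = np.random.default_rng(4)
# Two well-separated synthetic "categories" vs the same data, labels shuffled.
x = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(4.0, 1.0, (100, 5))])
y = np.repeat([0, 1], 100)
separable = categoricality(x, y)
shuffled = categoricality(x, rng.permutation(y))
```

Applied layer by layer, an index of this kind would be expected to grow with depth if, as the paper reports, deeper layers show stronger categorical effects.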


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Aneesha K Suresh ◽  
James M Goodman ◽  
Elizaveta V Okorokova ◽  
Matthew Kaufman ◽  
Nicholas G Hatsopoulos ◽  
...  

Low-dimensional linear dynamics are observed in neuronal population activity in primary motor cortex (M1) when monkeys make reaching movements. This population-level behavior is consistent with a role for M1 as an autonomous pattern generator that drives muscles to give rise to movement. In the present study, we examine whether similar dynamics are also observed during grasping movements, which involve fundamentally different patterns of kinematics and muscle activations. Using a variety of analytical approaches, we show that M1 does not exhibit such dynamics during grasping movements. Rather, the grasp-related neuronal dynamics in M1 are similar to their counterparts in somatosensory cortex, whose activity is driven primarily by afferent inputs rather than by intrinsic dynamics. The basic structure of the neuronal activity underlying hand control is thus fundamentally different from that underlying arm control.
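One simple test for autonomous linear dynamics (a reduced version of such analyses, not the paper's full battery) is to regress each population state on the previous one and ask how much variance a single linear flow field explains:

```python
import numpy as np

def linear_dynamics_fit(X):
    """Least-squares fit of x_{t+1} = A x_t; returns A and variance explained."""
    past, future = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(past, future, rcond=None)
    resid = future - past @ A
    r2 = 1.0 - resid.var() / future.var()
    return A.T, r2

rng = np.random.default_rng(5)

# Activity generated by an autonomous rotation (reach-like low-dimensional
# dynamics) is well captured by a single linear flow field.
theta = 0.2
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
x = np.zeros((200, 2))
x[0] = [1.0, 0.0]
for t in range(1, 200):
    x[t] = rot @ x[t - 1] + 0.01 * rng.normal(size=2)
A_hat, r2_dyn = linear_dynamics_fit(x)

# Purely input-driven (white-noise) activity, with no consistent flow field,
# is not.
y = rng.normal(size=(200, 2))
_, r2_input = linear_dynamics_fit(y)
```

Grasp-related M1 activity resembling the second regime rather than the first is the kind of evidence the paper uses to argue for input-driven, somatosensory-like dynamics during hand control.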


2019 ◽  
Author(s):  
Marcus A. Triplett ◽  
Zac Pujic ◽  
Biao Sun ◽  
Lilach Avitan ◽  
Geoffrey J. Goodhill

Abstract: The pattern of neural activity evoked by a stimulus can be substantially affected by ongoing spontaneous activity. Separating these two types of activity is particularly important for calcium imaging data given the slow temporal dynamics of calcium indicators. Here we present a statistical model that decouples stimulus-driven activity from low-dimensional spontaneous activity in such data. The model identifies hidden factors giving rise to spontaneous activity while jointly estimating stimulus tuning properties that account for the confounding effects that these factors introduce. By applying our model to data from zebrafish optic tectum and mouse visual cortex, we obtain quantitative measurements of the extent to which neurons in each case are driven by evoked activity, spontaneous activity, and their interaction. This broadly applicable model brings new insight into population-level neural activity in single trials without averaging away potentially important information encoded in spontaneous activity.

