Temporal alignment and latent Gaussian process factor inference in population spike trains

2018 · Author(s): Lea Duncker, Maneesh Sahani

Abstract: We introduce a novel scalable approach to identifying common latent structure in neural population spike trains, which allows for variability both in the trajectory and in the rate of progression of the underlying computation. Our approach is based on shared latent Gaussian processes (GPs) which are combined linearly, as in the Gaussian Process Factor Analysis (GPFA) algorithm. We extend GPFA to handle unbinned spike-train data by incorporating a continuous-time point-process likelihood model, achieving scalability with a sparse variational approximation. Shared variability is separated into terms that express condition dependence, as well as trial-to-trial variation in trajectories. Finally, we introduce a nested GP formulation to capture variability in the rate of evolution along the trajectory. We show that the new method learns to recover latent trajectories in synthetic data, and can accurately identify the trial-to-trial timing of movement-related parameters from motor cortical data without any supervision.
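
To make the linear GP-factor structure described above concrete, the following is a minimal generative sketch in NumPy: shared latent GPs are mixed linearly and drive spiking through an exponential link, with the continuous-time point process approximated on a fine grid. It omits the paper's sparse variational inference and nested time-warping GP entirely; all names and parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(t, lengthscale=0.2, variance=1.0):
    # Squared-exponential (RBF) covariance over a time grid.
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
T, n_latents, n_neurons = 200, 3, 30
t = np.linspace(0.0, 1.0, T)

# Shared latent GPs x_k(t) ~ GP(0, k_rbf), combined linearly as in GPFA.
K = rbf_kernel(t) + 1e-6 * np.eye(T)
X = rng.multivariate_normal(np.zeros(T), K, size=n_latents)   # (n_latents, T)

C = rng.normal(scale=0.5, size=(n_neurons, n_latents))        # loading matrix
d = rng.normal(loc=-2.0, scale=0.3, size=(n_neurons, 1))      # baseline log-rates

# Point-process intensity via an exponential link; spikes sampled on a fine grid
# as a discrete approximation to the continuous-time Poisson process.
dt = t[1] - t[0]
rates = np.exp(C @ X + d)                                     # (n_neurons, T)
spikes = rng.poisson(rates * dt)                              # counts per grid bin

print(spikes.shape, spikes.sum())
```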

2009 · Vol. 102 (1), pp. 614-635 · Author(s): Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, et al.

We consider the problem of extracting smooth, low-dimensional neural trajectories that summarize the activity recorded simultaneously from many neurons on individual experimental trials. Beyond the benefit of visualizing the high-dimensional, noisy spiking activity in a compact form, such trajectories can offer insight into the dynamics of the neural circuitry underlying the recorded activity. Current methods for extracting neural trajectories involve a two-stage process: the spike trains are first smoothed over time, then a static dimensionality-reduction technique is applied. We first describe extensions of the two-stage methods that allow the degree of smoothing to be chosen in a principled way and that account for spiking variability, which may vary both across neurons and across time. We then present a novel method for extracting neural trajectories—Gaussian-process factor analysis (GPFA)—which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework. We applied these methods to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices during reach planning and execution. By adopting a goodness-of-fit metric that measures how well the activity of each neuron can be predicted by all other recorded neurons, we found that the proposed extensions improved the predictive ability of the two-stage methods. The predictive ability was further improved by going to GPFA. From the extracted trajectories, we directly observed a convergence in neural state during motor planning, an effect that was shown indirectly by previous studies. We then show how such methods can be a powerful tool for relating the spiking activity across a neural population to the subject's behavior on a single-trial basis. Finally, to assess how well the proposed methods characterize neural population activity when the underlying time course is known, we performed simulations that revealed that GPFA performed tens of percent better than the best two-stage method.
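
As a point of reference for the two-stage procedure the abstract contrasts with GPFA, here is a minimal sketch (kernel smoothing followed by PCA) using NumPy and SciPy. GPFA itself, which unifies the two steps in one probabilistic model, is not implemented here; the smoothing width and dimensionality are illustrative choices rather than the principled ones discussed above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def two_stage_trajectories(spike_counts, smooth_sd_bins=3.0, n_dims=3):
    """Two-stage baseline: temporal smoothing followed by PCA.

    spike_counts: array of shape (n_neurons, n_bins) for a single trial.
    Returns the low-dimensional neural trajectory, shape (n_dims, n_bins).
    """
    # Stage 1: smooth each neuron's spike-count time series.
    smoothed = gaussian_filter1d(spike_counts.astype(float), smooth_sd_bins, axis=1)

    # Stage 2: static dimensionality reduction (PCA) applied bin by bin.
    centered = smoothed - smoothed.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return U[:, :n_dims].T @ centered        # project onto top principal axes

# Toy usage with random counts; real data would be binned spike trains.
rng = np.random.default_rng(1)
counts = rng.poisson(2.0, size=(61, 100))
traj = two_stage_trajectories(counts)
print(traj.shape)   # (3, 100)
```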


2020 · Author(s): Stephen L. Keeley, Mikio C. Aoi, Yiyi Yu, Spencer L. Smith, Jonathan W. Pillow

Abstract: Neural datasets often contain measurements of neural activity across multiple trials of a repeated stimulus or behavior. An important problem in the analysis of such datasets is to characterize systematic aspects of neural activity that carry information about the repeated stimulus or behavior of interest, which can be considered "signal", and to separate them from the trial-to-trial fluctuations in activity that are not time-locked to the stimulus, which for purposes of such analyses can be considered "noise". Gaussian process factor models provide a powerful tool for identifying shared structure in high-dimensional neural data. However, they have not yet been adapted to the problem of characterizing signal and noise in multi-trial datasets. Here we address this shortcoming by proposing "signal-noise" Poisson-spiking Gaussian Process Factor Analysis (SNP-GPFA), a flexible latent variable model that resolves signal and noise latent structure in neural population spiking activity. To learn the parameters of our model, we introduce a Fourier-domain black box variational inference method that quickly identifies smooth latent structure. The resulting model reliably uncovers latent signal and trial-to-trial noise-related fluctuations in large-scale recordings. We use this model to show that predominantly, noise fluctuations perturb neural activity within a subspace orthogonal to signal activity, suggesting that trial-by-trial noise does not interfere with signal representations. Finally, we extend the model to capture statistical dependencies across brain regions in multi-region data. We show that in mouse visual cortex, models with shared noise across brain regions outperform models with independent per-region noise.
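
A toy generative sketch of the signal/noise split that SNP-GPFA formalizes: "signal" latents are drawn once and shared across trials, while "noise" latents are redrawn on every trial, and both drive Poisson spiking through a linear mapping. The actual model is fit with Fourier-domain black-box variational inference, which this sketch does not attempt; dimensions and scales are illustrative.

```python
import numpy as np

def rbf_kernel(t, ell=0.15):
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
n_trials, T, n_neurons = 20, 100, 40
n_signal, n_noise = 2, 2
t = np.linspace(0, 1, T)
K = rbf_kernel(t) + 1e-6 * np.eye(T)

# "Signal" latents: one shared draw, repeated across trials (stimulus-locked).
X_sig = rng.multivariate_normal(np.zeros(T), K, size=n_signal)

C_sig = rng.normal(size=(n_neurons, n_signal)) * 0.5
C_noise = rng.normal(size=(n_neurons, n_noise)) * 0.5
d = -1.5

spike_counts = np.empty((n_trials, n_neurons, T), dtype=int)
for r in range(n_trials):
    # "Noise" latents: a fresh draw on every trial (not time-locked to stimulus).
    X_noise = rng.multivariate_normal(np.zeros(T), K, size=n_noise)
    log_rate = C_sig @ X_sig + C_noise @ X_noise + d
    spike_counts[r] = rng.poisson(np.exp(log_rate))

print(spike_counts.shape)  # (20, 40, 100)
```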


Geophysics · 2006 · Vol. 71 (5), pp. C81-C92 · Author(s): Helene Hafslund Veire, Hilde Grude Borgos, Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. Quantifying the uncertainty in these estimates is necessary in order to use information about pressure- and saturation-related changes in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework. Here, the solution of the problem is represented by a probability density function (PDF), providing estimates of uncertainties as well as direct estimates of the properties. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection-coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each of the variables. In addition, possible bottlenecks causing large uncertainties in the estimates can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
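
For orientation only, here is a heavily simplified sketch of a Bayesian update of the kind described above, using a hypothetical linearized forward model G mapping pressure and saturation changes to PP reflection-coefficient differences. The real methodology relies on nonlinear rock-physics relations, spatial coupling, and a richer likelihood, none of which are represented here; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical linearized forward model: time-lapse PP reflection-coefficient
# differences d = G @ m + noise, with m = [delta_pressure, delta_saturation].
G = np.array([[0.8, -0.3],
              [0.2,  0.9],
              [0.5,  0.4]])          # 3 angle stacks x 2 reservoir parameters
sigma_d = 0.05                        # data noise std (illustrative)

m_prior = np.zeros(2)                 # prior mean for [dP, dS]
C_prior = np.diag([0.5**2, 0.3**2])   # prior covariance (illustrative)

d_obs = np.array([0.12, 0.33, 0.25])  # observed AVO differences (synthetic)

# Gaussian posterior for a linear-Gaussian model (standard conjugate update).
C_d = sigma_d**2 * np.eye(3)
C_post = np.linalg.inv(np.linalg.inv(C_prior) + G.T @ np.linalg.inv(C_d) @ G)
m_post = C_post @ (np.linalg.inv(C_prior) @ m_prior + G.T @ np.linalg.inv(C_d) @ d_obs)

print("posterior mean [dP, dS]:", m_post)
print("posterior std:", np.sqrt(np.diag(C_post)))
```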


2019 · Vol. 10 (1) · Author(s): Ryota Kobayashi, Shuhei Kurita, Anno Kurth, Katsunori Kitano, Kenji Mizuseki, et al.

Abstract: State-of-the-art techniques allow researchers to record large numbers of spike trains in parallel for many hours. With enough such data, we should be able to infer the connectivity among neurons. Here we develop a method for reconstructing neuronal circuitry by applying a generalized linear model (GLM) to spike cross-correlations. Our method estimates connections between neurons in units of postsynaptic potentials, as well as the amount of spike data needed to verify each connection. The performance of the inference is optimized by quantifying estimation errors on synthetic data. This method is superior to other established methods in correctly estimating connectivity. By applying our method to rat hippocampal data, we show that the types of estimated connections match the results inferred from other physiological cues. Thus our method provides the means to build a circuit diagram from recorded spike trains, thereby providing a basis for elucidating the differences in information processing in different brain regions.
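
The method's input is the spike cross-correlation between neuron pairs; below is a minimal sketch of computing a cross-correlogram from two spike trains. The GLM fit to the correlogram (separating slow background fluctuations from short-latency synaptic effects) is the paper's contribution and is not reproduced here; the function name and toy data are illustrative.

```python
import numpy as np

def cross_correlogram(spikes_pre, spikes_post, window=0.05, bin_width=0.001):
    """Histogram of spike-time differences (post minus pre) within +/- window.

    spikes_pre, spikes_post: sorted arrays of spike times in seconds.
    Returns (bin_centers, counts). A sharp, short-latency peak suggests an
    excitatory connection; a trough suggests inhibition.
    """
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t in spikes_pre:
        # Consider only post spikes that fall inside the window around t.
        lo = np.searchsorted(spikes_post, t - window)
        hi = np.searchsorted(spikes_post, t + window)
        counts += np.histogram(spikes_post[lo:hi] - t, bins=edges)[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts

# Toy usage: a presynaptic neuron that drives the postsynaptic one at ~2 ms lag.
rng = np.random.default_rng(0)
pre = np.sort(rng.uniform(0, 600, 3000))
post = np.sort(np.concatenate([rng.uniform(0, 600, 2500),
                               pre[rng.random(len(pre)) < 0.3] + 0.002]))
lags, cc = cross_correlogram(pre, post)
print(lags[np.argmax(cc)])   # peak near +0.002 s
```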


2010 · Vol. 22 (9), pp. 2369-2389 · Author(s): Kentaro Katahira, Jun Nishikawa, Kazuo Okanoya, Masato Okada

Neural activity is nonstationary and varies across time. Hidden Markov models (HMMs) have been used to track state transitions among quasi-stationary discrete neural states. Within this context, an independent Poisson model has been used for the output distribution of HMMs; hence, the model is incapable of tracking changes in correlation that are not accompanied by changes in firing rate. To achieve this, we applied a multivariate Poisson distribution with correlation terms for the output distribution of HMMs. We formulated a variational Bayes (VB) inference for the model. The VB approach can automatically determine the appropriate number of hidden states and correlation types while avoiding the overlearning problem. We developed an efficient algorithm for computing posteriors using the recursive relationship of a multivariate Poisson distribution. We demonstrated the performance of our method on synthetic data and on real spike trains recorded from a songbird.
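
For context, a minimal sketch of an HMM forward pass with independent-Poisson emissions, the baseline output model mentioned above. The paper's model replaces this with a multivariate Poisson that includes correlation terms and is fit by variational Bayes, neither of which is implemented in this sketch; parameters in the usage example are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def forward_log_likelihood(counts, log_pi, log_A, rates):
    """Forward pass of an HMM with independent-Poisson emissions.

    counts: (T, n_neurons) spike counts per time bin
    log_pi: (n_states,) log initial state probabilities
    log_A:  (n_states, n_states) log transition matrix
    rates:  (n_states, n_neurons) expected counts per bin in each state
    Returns the log marginal likelihood of the data.
    """
    T = counts.shape[0]
    # Emission log-probabilities: sum over neurons (independence assumption).
    log_b = poisson.logpmf(counts[:, None, :], rates[None, :, :]).sum(axis=2)
    log_alpha = log_pi + log_b[0]
    for t in range(1, T):
        # log-sum-exp over previous states, then add the current emission term.
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m) @ np.exp(log_A)) + log_b[t]
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Toy usage with illustrative parameters: 2 hidden states, 3 neurons.
rng = np.random.default_rng(0)
rates = np.array([[1.0, 5.0, 2.0], [4.0, 1.0, 3.0]])
counts = rng.poisson(rates[0], size=(50, 3))
print(forward_log_likelihood(counts,
                             np.log([0.5, 0.5]),
                             np.log([[0.95, 0.05], [0.05, 0.95]]),
                             rates))
```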


2020 · Vol. 34 (04), pp. 6803-6810 · Author(s): Rui Zhang, Christian Walder, Marian-Andrei Rizoiu

The Hawkes process (HP) has been widely applied to modeling self-exciting events, including neuron spikes, earthquakes, and tweets. To avoid designing a parametric triggering kernel and to be able to quantify prediction confidence, the non-parametric Bayesian HP has been proposed. However, inference in such models suffers from poor scalability or slow convergence. In this paper, we aim to solve both problems. Specifically, we first propose a new non-parametric Bayesian HP in which the triggering kernel is modeled as a squared sparse Gaussian process. Then, we propose a novel variational inference scheme for model optimization. We employ the branching structure of the HP so that maximization of the evidence lower bound (ELBO) is tractable by the expectation-maximization algorithm. We propose a tighter ELBO which improves the fitting performance. Further, we accelerate the novel variational inference scheme to linear time complexity by leveraging the stationarity of the triggering kernel. Different from prior acceleration methods, ours enjoys higher efficiency. Finally, we use synthetic data and two large social media datasets to evaluate our method. We show that our approach outperforms state-of-the-art non-parametric frequentist and Bayesian methods. We validate the efficiency of our accelerated variational inference scheme and the practical utility of our tighter ELBO for model selection, observing that the tighter ELBO outperforms the common one.
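
As a reminder of the mechanics of the Hawkes process itself, here is a sketch of Ogata-style thinning for a univariate HP with a parametric exponential triggering kernel. This is only a stand-in: the paper models the kernel non-parametrically as a squared sparse Gaussian process and performs variational inference, which this simulation does not touch; parameter values are illustrative.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_end, rng):
    """Ogata thinning for a univariate Hawkes process with an exponential
    triggering kernel phi(t) = alpha * beta * exp(-beta * t)."""
    events, t = [], 0.0
    while t < t_end:
        # The current intensity upper-bounds the intensity until the next event,
        # because the exponential kernel only decays between events.
        lam_bar = mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_end:
            break
        lam_t = mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:    # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

rng = np.random.default_rng(0)
ev = simulate_hawkes(mu=0.5, alpha=0.6, beta=2.0, t_end=200.0, rng=rng)
print(len(ev), "events; branching ratio =", 0.6)
```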


Author(s): Lucia Castellanos, Vincent Q. Vu, Sagi Perel, Andrew B. Schwartz, Robert E. Kass

2021 · Author(s): Kristopher T. Jensen, Ta-Chu Kao, Jasmine Talia Stone, Guillaume Hennequin

Latent variable models are ubiquitous in the exploratory analysis of neural population recordings, where they allow researchers to summarize the activity of large populations of neurons in lower dimensional 'latent' spaces. Existing methods can generally be categorized into (i) Bayesian methods that facilitate flexible incorporation of prior knowledge and uncertainty estimation, but which typically do not scale to large datasets; and (ii) highly parameterized methods without explicit priors that scale better but often struggle in the low-data regime. Here, we bridge this gap by developing a fully Bayesian yet scalable version of Gaussian process factor analysis (bGPFA) which models neural data as arising from a set of inferred latent processes with a prior that encourages smoothness over time. Additionally, bGPFA uses automatic relevance determination to infer the dimensionality of neural activity directly from the training data during optimization. To enable the analysis of continuous recordings without trial structure, we introduce a novel variational inference strategy that scales near-linearly in time and also allows for non-Gaussian noise models more appropriate for electrophysiological recordings. We apply bGPFA to continuous recordings spanning 30 minutes with over 14 million data points from primate motor and somatosensory cortices during a self-paced reaching task. We show that neural activity progresses from an initial state at target onset to a reach-specific preparatory state well before movement onset. The distance between these initial and preparatory latent states is predictive of reaction times across reaches, suggesting that such preparatory dynamics have behavioral relevance despite the lack of externally imposed delay periods. Additionally, bGPFA discovers latent processes that evolve over slow timescales on the order of several seconds and contain complementary information about reaction time. These timescales are longer than those revealed by methods which focus on individual movement epochs and may reflect fluctuations in e.g. task engagement.
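
A toy illustration of the automatic relevance determination (ARD) mechanism referred to above: each candidate latent dimension carries a scale parameter, and dimensions whose scale collapses toward zero contribute nothing and are effectively pruned. In bGPFA these scales are learned by optimizing the variational objective; the fixed values below are purely illustrative.

```python
import numpy as np

def rbf_kernel(t, ell=0.2):
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
T, D_max, n_neurons = 150, 6, 50
t = np.linspace(0, 1, T)
K = rbf_kernel(t) + 1e-6 * np.eye(T)

# ARD-style scales: one per candidate latent dimension. During optimization
# these are learned; dimensions whose scale collapses toward zero contribute
# nothing and are effectively discarded. Values here are fixed for illustration.
s = np.array([1.0, 0.8, 0.6, 1e-3, 1e-3, 1e-3])

# Latents x_d ~ GP(0, s_d^2 * K): only dimensions with non-negligible scale matter.
X = np.stack([s_d * rng.multivariate_normal(np.zeros(T), K) for s_d in s])

C = rng.normal(size=(n_neurons, D_max)) / np.sqrt(D_max)
F = C @ X                                    # latent contribution to activity
print("latent contribution shape:", F.shape)

effective_dim = int((s > 0.05).sum())
print("candidate dims:", D_max, "-> effective dims:", effective_dim)   # 6 -> 3
```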

