State space discovery in spatial representation circuits with persistent cohomology

2020 ◽  
Author(s):  
Louis Kang ◽  
Boyan Xu ◽  
Dmitriy Morozov

Abstract
Persistent cohomology is a powerful technique for discovering topological structure in data. Strategies for its use in neuroscience are still undergoing development. We explore the application of persistent cohomology to the brain’s spatial representation system. We simulate populations of grid cells, head direction cells, and conjunctive cells, each of which span low-dimensional topological structures embedded in high-dimensional neural activity space. We evaluate the ability of persistent cohomology to discover these structures and demonstrate its robustness to various forms of noise. We identify regimes under which mixtures of populations form product topologies that can be detected. Our results suggest guidelines for applying persistent cohomology, as well as persistent homology, to experimental neural recordings.

2021 ◽  
Vol 15 ◽  
Author(s):  
Louis Kang ◽  
Boyan Xu ◽  
Dmitriy Morozov

Persistent cohomology is a powerful technique for discovering topological structure in data. Strategies for its use in neuroscience are still undergoing development. We comprehensively and rigorously assess its performance in simulated neural recordings of the brain's spatial representation system. Grid, head direction, and conjunctive cell populations each span low-dimensional topological structures embedded in high-dimensional neural activity space. We evaluate the ability of persistent cohomology to discover these structures for different dataset dimensions, variations in spatial tuning, and forms of noise. We quantify its ability to decode simulated animal trajectories contained within these topological structures. We also identify regimes under which mixtures of populations form product topologies that can be detected. Our results reveal how dataset parameters affect the success of topological discovery and suggest principles for applying persistent cohomology, as well as persistent homology, to experimental neural recordings.
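As a concrete illustration of the kind of structure these methods target, the following numpy-only sketch (all parameters are illustrative, not taken from the paper) simulates a head direction cell population whose noiseless activity traces a closed ring embedded in high-dimensional firing-rate space; in practice a package such as ripser.py would then compute the persistence diagram that certifies the loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a head direction cell population: each neuron has a von Mises
# tuning curve with a preferred direction. The noiseless population
# activity then traces a closed loop (topologically a circle) embedded
# in n_neurons-dimensional firing-rate space.
n_neurons = 50
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)
angles = rng.uniform(0, 2 * np.pi, 500)          # sampled head directions
kappa = 4.0                                       # tuning concentration
rates = np.exp(kappa * np.cos(angles[:, None] - preferred[None, :]))

# Project onto the top two principal components; for a symmetric ring
# the projected points lie near a circle of constant radius.
centered = rates - rates.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T
radii = np.linalg.norm(proj, axis=1)
print(radii.std() / radii.mean())   # small ratio -> near-constant radius
```

The circle-radius check is only a sanity test; persistent cohomology would additionally detect the loop without a linear projection and remain applicable when the embedding is nonlinear.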


2018 ◽  
Author(s):  
Emily L. Mackevicius ◽  
Andrew H. Bahle ◽  
Alex H. Williams ◽  
Shijie Gu ◽  
Natalia I. Denisenko ◽  
...  

Abstract
Identifying low-dimensional features that describe large-scale neural recordings is a major challenge in neuroscience. Repeated temporal patterns (sequences) are thought to be a salient feature of neural dynamics, but are not succinctly captured by traditional dimensionality reduction techniques. Here we describe a software toolbox—called seqNMF—with new methods for extracting informative, non-redundant sequences from high-dimensional neural data, testing the significance of these extracted patterns, and assessing the prevalence of sequential structure in data. We test these methods on simulated data under multiple noise conditions, and on several real neural and behavioral data sets. In hippocampal data, seqNMF identifies neural sequences that match those calculated manually by reference to behavioral events. In songbird data, seqNMF discovers neural sequences in untutored birds that lack stereotyped songs. Thus, by identifying temporal structure directly from neural data, seqNMF enables dissection of complex neural circuits without relying on temporal references from stimuli or behavioral outputs.
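The model underlying seqNMF is convolutive non-negative matrix factorization: data X (neurons × time) are approximated by sequence templates W convolved with sparse onset times H. A minimal sketch of the reconstruction operator (the dimensions and the planted example are illustrative, and the full toolbox adds the fitting updates and a cross-factor penalty not shown here):

```python
import numpy as np

# Convolutive NMF reconstruction:
#   X_hat[n, t] = sum_k sum_l W[n, k, l] * H[k, t - l]
# W holds K sequence templates of length L; H holds their onset times.
def reconstruct(W, H):
    n_neurons, n_factors, L = W.shape
    T = H.shape[1]
    X_hat = np.zeros((n_neurons, T))
    for l in range(L):
        # shift H right by l time bins, weighted by the l-th slice of W
        X_hat[:, l:] += W[:, :, l] @ H[:, :T - l]
    return X_hat

# Plant one diagonal sequence template with two onsets, then reconstruct.
W = np.zeros((5, 1, 5))
W[np.arange(5), 0, np.arange(5)] = 1.0     # neuron n fires at lag n
H = np.zeros((1, 30))
H[0, [3, 15]] = 1.0                        # sequence starts at t = 3 and 15
X = reconstruct(W, H)
print(X[2, 5], X[4, 19])                   # neuron 2 at t=3+2, neuron 4 at t=15+4
```

Each onset in H reproduces the full diagonal firing sequence, which is exactly the kind of repeated temporal pattern that plain (non-convolutive) NMF cannot represent with a single factor.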


2020 ◽  
Author(s):  
Mohammad R. Rezaei ◽  
Alex E. Hadjinicolaou ◽  
Sydney S. Cash ◽  
Uri T. Eden ◽  
Ali Yousefi

Abstract
The Bayesian state-space neural encoder-decoder modeling framework is an established solution to reveal how changes in brain dynamics encode physiological covariates like movement or cognition. Although the framework is increasingly being applied to progress the field of neuroscience, its application to modeling high-dimensional neural data continues to be a challenge. Here, we propose a novel solution that avoids the complexity of encoder models that characterize high-dimensional data as a function of the underlying state processes. We build a discriminative model to estimate state processes as a function of current and previous observations of neural activity. We then develop the filter and parameter estimation solutions for this new class of state-space modeling framework called the “direct decoder” model. We apply the model to decode movement trajectories of a rat in a W-shaped maze from the ensemble spiking activity of place cells and achieve comparable performance to modern decoding solutions, without needing an encoding step in the model development. We further demonstrate how a dynamical auto-encoder can be built using the direct decoder model; here, the underlying state process links the high-dimensional neural activity to the behavioral readout. The dynamical auto-encoder can optimally estimate the low-dimensional dynamical manifold which represents the relationship between brain and behavior.
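A heavily simplified caricature of the discriminative idea (this is not the paper's Bayesian filter; every name, dimension, and tuning shape below is invented for illustration): regress the latent state directly onto current and lagged spike counts, with no encoding model in between.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "direct decoder": the state is estimated as a function of current
# and previous neural observations, skipping the encoding step entirely.
T, n_cells, n_lags = 2000, 30, 3
pos = np.cumsum(rng.normal(size=(T, 2)), axis=0) * 0.1        # random-walk state
centers = rng.uniform(pos.min(), pos.max(), size=(n_cells, 2))
rates = np.exp(-np.sum((pos[:, None] - centers) ** 2, axis=2))  # place-field tuning
spikes = rng.poisson(5 * rates).astype(float)

# Features: spike counts at the current and two preceding time bins.
X = np.hstack([spikes[n_lags - 1 - l: T - l] for l in range(n_lags)])
X -= X.mean(axis=0)
y = pos[n_lags - 1:] - pos[n_lags - 1:].mean(axis=0)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
print(1 - resid.var() / y.var())   # fraction of state variance explained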


2018 ◽  
Author(s):  
Erik Rybakken ◽  
Nils Baas ◽  
Benjamin Dunn

Abstract
We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological learning. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse. Thus we capture head direction cells and decode the head direction from the neural population activity without having to process the behaviour of the mouse. Interestingly, the decoded values convey more information about the neural activity than the tracked head direction does, with differences that have some spatial organization. Finally, we note that the residual population activity, after the head direction has been accounted for, retains some low-dimensional structure which is correlated with the speed of the mouse.


2020 ◽  
Author(s):  
Ege Altan ◽  
Sara A. Solla ◽  
Lee E. Miller ◽  
Eric J. Perreault

Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the information that area encodes. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms’ accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated dimensionality. We thus developed a denoising algorithm based on deep learning, the “Joint Autoencoder”, which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the dimensionality was high (above 20) or when the amount of data used for estimation was low.
Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.

Author Summary
The number of neurons that we can record from has increased exponentially for decades; today we can simultaneously record from thousands of neurons. However, the individual firing rates are highly redundant. One approach to identifying important features from redundant data is to estimate the dimensionality of the neural recordings, which represents the number of degrees of freedom required to describe the data without significant information loss. Better understanding of dimensionality may also uncover the mechanisms of computation within a neural circuit. Circuits carrying out complex computations might be higher-dimensional than those carrying out simpler computations. Typically, studies have quantified neural dimensionality using one of several available methods despite a lack of consensus on which method would be most appropriate for neural data. In this work, we applied several methods to simulated neural data with properties mimicking those of actual neural recordings and assessed the accuracy of their dimensionality estimates. Based on these results, we devised an analysis pipeline to estimate the dimensionality of neural recordings. Our work will allow scientists to extract informative features from a large number of highly redundant neurons, as well as quantify the complexity of information encoded by these neurons.
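One common linear estimator from this literature (not necessarily among those benchmarked in the study) is the participation ratio of the PCA eigenvalue spectrum, PR = (Σλ_i)² / Σλ_i². A sketch on synthetic data with known intrinsic dimensionality, all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Participation ratio: a linear dimensionality estimate derived from
# the eigenvalues of the data covariance matrix.
def participation_ratio(X):
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

latent = rng.normal(size=(1000, 5))    # intrinsic dimensionality 5
embed = rng.normal(size=(5, 100))      # random linear embedding into 100 "neurons"
X = latent @ embed
print(participation_ratio(X))          # close to the true value of 5
```

On noise-free, linearly embedded data this estimator recovers the intrinsic dimensionality well; the failure modes the paper documents appear once the embedding is nonlinear, the noise is high, or the true dimensionality is large.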


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Emily L Mackevicius ◽  
Andrew H Bahle ◽  
Alex H Williams ◽  
Shijie Gu ◽  
Natalia I Denisenko ◽  
...  

Identifying low-dimensional features that describe large-scale neural recordings is a major challenge in neuroscience. Repeated temporal patterns (sequences) are thought to be a salient feature of neural dynamics, but are not succinctly captured by traditional dimensionality reduction techniques. Here, we describe a software toolbox—called seqNMF—with new methods for extracting informative, non-redundant sequences from high-dimensional neural data, testing the significance of these extracted patterns, and assessing the prevalence of sequential structure in data. We test these methods on simulated data under multiple noise conditions, and on several real neural and behavioral data sets. In hippocampal data, seqNMF identifies neural sequences that match those calculated manually by reference to behavioral events. In songbird data, seqNMF discovers neural sequences in untutored birds that lack stereotyped songs. Thus, by identifying temporal structure directly from neural data, seqNMF enables dissection of complex neural circuits without relying on temporal references from stimuli or behavioral outputs.


2019 ◽  
Vol 31 (1) ◽  
pp. 68-93 ◽  
Author(s):  
Erik Rybakken ◽  
Nils Baas ◽  
Benjamin Dunn

We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological feature extraction. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse. Thus, we capture head direction cells and decode the head direction from the neural population activity without having to process the mouse's behavior. Interestingly, the decoded values convey more information about the neural activity than the tracked head direction does, with differences that have some spatial organization. Finally, we note that the residual population activity, after the head direction has been accounted for, retains some low-dimensional structure that is correlated with the speed of the mouse.


2021 ◽  
Vol 17 (11) ◽  
pp. e1008591 ◽  
Author(s):  
Ege Altan ◽  
Sara A. Solla ◽  
Lee E. Miller ◽  
Eric J. Perreault

It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms’ accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the “Joint Autoencoder”, which significantly improved subsequent dimensionality estimation. 
Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.


2021 ◽  
Vol 33 (3) ◽  
pp. 827-852 ◽  
Author(s):  
Omri Barak ◽  
Sandro Romani

Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity—the neural engineering framework. We analytically solve the framework for the classic ring model—a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
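For readers unfamiliar with the ring model, a minimal encode/decode sketch may help: rectified cosine tuning curves represent an angle, and a population vector reads it back out. This is illustrative only, not the paper's analytical solution of the neural engineering framework.

```python
import numpy as np

# Ring model sketch: N neurons with rectified cosine tuning encode an
# angle; the angle is recovered from the population vector.
n = 64
preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)

def encode(theta):
    # rectified cosine tuning curves, one per preferred direction
    return np.maximum(0.0, np.cos(theta - preferred))

def decode(rates):
    # population vector: preferred directions weighted by firing rate
    z = np.sum(rates * np.exp(1j * preferred))
    return np.angle(z) % (2 * np.pi)

theta = 1.2345
print(decode(encode(theta)))   # recovers the encoded angle up to discretization error
```

The low-dimensional variable (one angle) is mapped into 64-dimensional activity and back, which is the kind of low-to-high-dimensional mapping whose constraints the paper characterizes.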


2001 ◽  
Vol 24 (5) ◽  
pp. 793-810 ◽  
Author(s):  
Ichiro Tsuda

Using the concepts of chaotic dynamical systems, we present an interpretation of dynamic neural activity found in cortical and subcortical areas. The discovery of chaotic itinerancy in high-dimensional dynamical systems with and without a noise term has motivated a new interpretation of this dynamic neural activity, cast in terms of the high-dimensional transitory dynamics among “exotic” attractors. This interpretation is quite different from the conventional one, cast in terms of simple behavior on low-dimensional attractors. Skarda and Freeman (1987) presented evidence in support of the conclusion that animals cannot memorize odor without chaotic activity of neuron populations. Following their work, we study the role of chaotic dynamics in biological information processing, perception, and memory. We propose a new coding scheme of information in chaos-driven contracting systems, which we refer to as Cantor coding. Since these systems are found in the hippocampal formation and also in the olfactory system, the proposed coding scheme should be of biological significance. Based on these intensive studies, a hypothesis regarding the formation of episodic memory is given.

