Estimating the dimensionality of the manifold underlying multi-electrode neural recordings

2021 ◽  
Vol 17 (11) ◽  
pp. e1008591
Author(s):  
Ege Altan ◽  
Sara A. Solla ◽  
Lee E. Miller ◽  
Eric J. Perreault

It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms’ accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the “Joint Autoencoder”, which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
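The abstract does not name the specific estimators that were compared, so the sketch below is only illustrative: it builds a synthetic dataset whose intrinsic dimensionality is known (a two-dimensional manifold nonlinearly embedded in a higher-dimensional "recording" with additive noise) and contrasts a linear estimate (the number of principal components needed to reach a variance threshold) with a simple nonlinear estimator (the TwoNN maximum-likelihood estimator). All parameter choices here are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's exact pipeline): linear vs. nonlinear
# intrinsic-dimensionality estimation on synthetic, nonlinearly embedded data.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# A 2-D manifold (swiss roll) nonlinearly embedded in 3-D, projected into a
# 30-channel "recording" and corrupted with additive noise (assumed setup).
X3, _ = make_swiss_roll(n_samples=5000, random_state=0)
W = rng.standard_normal((3, 30))
X = X3 @ W + 0.1 * rng.standard_normal((5000, 30))

# Linear estimate: number of principal components capturing 95% of the variance.
pca = PCA().fit(X)
d_linear = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.95)) + 1

# Nonlinear estimate (TwoNN): for each point take the ratio mu = r2/r1 of the
# distances to its two nearest neighbours; the MLE of the dimension is N / sum(log mu).
dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
mu = dist[:, 2] / dist[:, 1]
d_twonn = len(mu) / np.sum(np.log(mu))

print(f"PCA (95% variance): {d_linear}, TwoNN: {d_twonn:.1f}, true manifold dimension: 2")
```

Consistent with the challenges described above, increasing the noise level or reducing the number of samples in this toy setup degrades both estimates.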

2020 ◽  
Author(s):  
Ege Altan ◽  
Sara A. Solla ◽  
Lee E. Miller ◽  
Eric J. Perreault

It is generally accepted that the number of neurons in a given brain area far exceeds the information that area encodes. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms’ accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding from the low-dimensional manifold to the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the “Joint Autoencoder”, which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.

Author Summary: The number of neurons that we can record from has increased exponentially for decades; today we can simultaneously record from thousands of neurons. However, the individual firing rates are highly redundant. One approach to identifying important features from redundant data is to estimate the dimensionality of the neural recordings, which represents the number of degrees of freedom required to describe the data without significant information loss. A better understanding of dimensionality may also uncover the mechanisms of computation within a neural circuit; circuits carrying out complex computations might be higher-dimensional than those carrying out simpler ones. Typically, studies have quantified neural dimensionality using one of several available methods, despite a lack of consensus on which method is most appropriate for neural data. In this work, we used several methods to estimate the dimensionality of simulated neural data with properties mimicking those of actual neural recordings, and evaluated their accuracy. Based on these results, we devised an analysis pipeline to estimate the dimensionality of neural recordings. Our work will allow scientists to extract informative features from a large number of highly redundant neurons, as well as quantify the complexity of the information encoded by these neurons.
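The “Joint Autoencoder” itself is not specified in the abstract; as a stand-in, the following minimal denoising-autoencoder sketch (PyTorch, with all layer sizes and hyperparameters assumed) illustrates only the general idea of reconstructing cleaner activity before passing it to a dimensionality estimator.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Minimal denoising autoencoder; a generic stand-in, not the Joint Autoencoder."""
    def __init__(self, n_channels, n_latent=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_channels, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_channels))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def denoise(firing_rates, n_latent=10, epochs=200, noise_sd=0.1):
    """Train on corrupted inputs to reconstruct the originals, then return the
    reconstruction; the cleaned activity is what a dimensionality estimator sees."""
    x = torch.as_tensor(firing_rates, dtype=torch.float32)
    model = Denoiser(x.shape[1], n_latent)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        noisy = x + noise_sd * torch.randn_like(x)   # corrupt, reconstruct the original
        loss = nn.functional.mse_loss(model(noisy), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(x).numpy()
```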


2010 ◽  
Vol 20 (03) ◽  
pp. 177-192 ◽  
Author(s):  
JOCHEN EINBECK ◽  
LUDGER EVERS ◽  
BENEDICT POWELL

We consider principal curves and surfaces in the context of multivariate regression modelling. For predictor spaces featuring complex dependency patterns between the involved variables, the intrinsic dimensionality of the data tends to be very small due to the high redundancy induced by the dependencies. In situations of this type, it is useful to approximate the high-dimensional predictor space through a low-dimensional manifold (i.e., a curve or a surface), and to use the projections onto the manifold as compressed predictors in the regression problem. In the case that the intrinsic dimensionality of the predictor space equals one, we use the local principal curve algorithm for the compression step. We provide a novel algorithm which extends this idea to local principal surfaces, thus covering cases of an intrinsic dimensionality equal to two, and which is in principle extendible to manifolds of arbitrary dimension. We motivate and apply the novel techniques using astrophysical and oceanographic data examples.
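As a rough sketch of the compression step for intrinsic dimensionality one, the following simplified local principal curve follows the first local principal component from a starting point, collecting kernel-weighted local means as the curve. The bandwidth, step size, starting rule and one-directional tracing are simplifications for illustration, not the published algorithm.

```python
import numpy as np

def local_principal_curve(X, h=1.0, step=0.5, n_steps=100):
    """Trace a curve through X by following the first local principal
    component; returns the ordered local means that make up the curve."""
    x = X[np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1))]  # start near the centre
    direction, curve = None, []
    for _ in range(n_steps):
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * h ** 2))  # Gaussian kernel weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()               # local weighted mean
        cov = (w[:, None] * (X - mu)).T @ (X - mu) / w.sum()      # local weighted covariance
        v = np.linalg.eigh(cov)[1][:, -1]                         # first local principal component
        if direction is not None and v @ direction < 0:
            v = -v                                                # keep moving in the same direction
        curve.append(mu)
        x, direction = mu + step * v, v
    return np.array(curve)

def compressed_predictor(X, curve):
    """Project each observation onto the curve; the index of the nearest
    curve point serves as a 1-D (arc-length-like) compressed predictor."""
    d = np.linalg.norm(X[:, None, :] - curve[None, :, :], axis=2)
    return d.argmin(axis=1)
```

The compressed predictor returned by the second function is what would enter the downstream regression in place of the original high-dimensional predictors.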


2017 ◽  
Vol 19 (12) ◽  
pp. 125012 ◽  
Author(s):  
Carlos Floyd ◽  
Christopher Jarzynski ◽  
Garegin Papoian

2020 ◽  
Author(s):  
Wei Guo ◽  
Jie J. Zhang ◽  
Jonathan P. Newman ◽  
Matthew A. Wilson

Latent learning allows the brain to transform experiences into cognitive maps, a form of implicit memory, without reinforced training. Its mechanism is unclear. We tracked the internal states of hippocampal neural ensembles and discovered that during latent learning of a spatial map, the state space evolved into a low-dimensional manifold that topologically resembled the physical environment. This process requires repeated experiences and sleep in between. Further investigation revealed that a subset of hippocampal neurons, instead of rapidly forming place fields in a novel environment, remained weakly tuned but gradually developed activity correlated with that of other neurons. These ‘weakly spatial’ neurons bind the activity of neurons with stronger spatial tuning, linking discrete place fields into a map that supports flexible navigation.
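The analysis pipeline is not detailed in the abstract; the sketch below only illustrates two of the quantities it mentions, a low-dimensional embedding of the population state space (here via Isomap, an assumed choice) and a simple measure of how strongly one neuron's activity is correlated with the rest of the ensemble.

```python
import numpy as np
from sklearn.manifold import Isomap

def embed_population(rates, n_components=3, n_neighbors=15):
    """rates: (time_bins, n_neurons) binned firing rates for one session.
    Returns a low-dimensional embedding of the population state space."""
    return Isomap(n_neighbors=n_neighbors, n_components=n_components).fit_transform(rates)

def ensemble_coupling(rates, neuron):
    """Mean absolute correlation between one neuron and every other neuron;
    tracking this across sessions shows whether a weakly tuned cell gradually
    becomes coupled to the rest of the ensemble."""
    r = np.corrcoef(rates.T)                    # (n_neurons, n_neurons) correlation matrix
    return np.nanmean(np.abs(np.delete(r[neuron], neuron)))

# e.g. compare a weakly tuned cell across days (hypothetical session arrays):
# ensemble_coupling(rates_day1, cell_id), ensemble_coupling(rates_day5, cell_id)
```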


2018 ◽  
Vol 21 (5) ◽  
pp. 824-837 ◽  
Author(s):  
Jian Huang ◽  
Gordon McTaggart-Cowan ◽  
Sandeep Munshi

This article describes the application of a modified first-order conditional moment closure model used in conjunction with the trajectory-generated low-dimensional manifold method in large-eddy simulation of pilot-ignited, high-pressure direct-injection natural gas combustion in a heavy-duty diesel engine. The article starts with a review of the intrinsic low-dimensional manifold method for reducing detailed chemistry and of various formulations for the construction of such manifolds. It is followed by a brief review of the conditional moment closure method for modelling the interaction between turbulence and combustion chemistry, and a discussion of the high computational cost associated with the direct implementation of the basic conditional moment closure model. The article then describes the formulation of a modified approach to solving the conditional moment closure equation, in which the reaction source terms for the conditional species mass fractions are obtained by projecting the turbulent perturbation onto the reaction manifold. The main model assumptions are explained and the resulting limitations discussed. A numerical experiment was conducted to examine the validity of the model assumptions. The model was then implemented in a combustion computational fluid dynamics solver developed on an open-source computational fluid dynamics platform. Non-reactive jet simulations were first conducted and the results were compared to experimental measurements from a high-pressure visualization chamber to verify that the jet penetration under engine-relevant conditions was correctly predicted. The model was then used to simulate natural gas combustion in a heavy-duty diesel engine equipped with a high-pressure direct-injection system. The simulation results were compared with experimental measurements from a research engine to verify the accuracy of the model for both the combustion rate and engine-out emissions.
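As a conceptual illustration of the projection idea (not the authors' formulation), the sketch below stores a one-dimensional reaction manifold as a table parameterised by a progress variable, projects a perturbed composition back onto the nearest manifold point, and interpolates the tabulated source terms there. The table itself is a stand-in; in practice it would be generated by integrating detailed chemistry along reaction trajectories.

```python
import numpy as np

class ManifoldTable:
    """Tabulated 1-D reaction manifold parameterised by a progress variable."""
    def __init__(self, c_grid, Y_table, omega_table):
        # c_grid: (n,) increasing progress-variable grid; Y_table: (n, n_species)
        # mass fractions on the manifold; omega_table: (n, n_species) source terms.
        self.c, self.Y, self.omega = c_grid, Y_table, omega_table

    def project(self, Y_perturbed):
        """Nearest-point projection of an off-manifold composition onto the manifold."""
        i = np.argmin(np.linalg.norm(self.Y - Y_perturbed, axis=1))
        return self.c[i], self.Y[i]

    def source_terms(self, c_query):
        """Interpolate the tabulated source terms at a given progress variable."""
        return np.array([np.interp(c_query, self.c, self.omega[:, k])
                         for k in range(self.omega.shape[1])])
```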


2020 ◽  
Vol 371 ◽  
pp. 108-123 ◽  
Author(s):  
Ruiqiang He ◽  
Xiangchu Feng ◽  
Weiwei Wang ◽  
Xiaolong Zhu ◽  
Chunyu Yang
