Continuous Attractor Neural Networks: Candidate of a Canonical Model for Neural Information Representation

F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 156 ◽  
Author(s):  
Si Wu ◽  
K Y Michael Wong ◽  
C C Alan Fung ◽  
Yuanyuan Mi ◽  
Wenhao Zhang

Owing to its many computationally desirable properties, the model of continuous attractor neural networks (CANNs) has been successfully applied to describe the encoding of simple continuous features in neural systems, such as orientation, moving direction, head direction, and spatial location of objects. Recent experimental and computational studies revealed that complex features of external inputs may also be encoded by low-dimensional CANNs embedded in the high-dimensional space of neural population activity. The new experimental data also confirmed the existence of the M-shaped correlation between neuronal responses, which is a correlation structure associated with the unique dynamics of CANNs. This body of evidence, which is reviewed in this report, suggests that CANNs may serve as a canonical model for neural information representation.
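
The core mechanism is easy to state concretely. Below is a minimal sketch of a one-dimensional CANN (a ring model) in the standard divisive-normalization formulation; the network size, parameter values, and initial condition are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal 1-D CANN (ring model), standard divisive-normalization form:
#   tau du/dt = -u + sum_j J_ij r_j,   r = u^2 / (1 + k * sum(u^2)),
# with the neural density absorbed into the discrete sums.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)   # preferred stimuli on a ring
a, J0, k, tau, dt = 0.5, 1.0, 0.05, 10.0, 1.0       # illustrative parameters

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)                    # periodic distance
J = J0 * np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

u = np.exp(-x**2 / (2 * a**2))                      # small initial bump at 0
for _ in range(500):
    r = u**2 / (1.0 + k * np.sum(u**2))             # divisive normalization
    u += dt / tau * (-u + J @ r)                    # recurrent dynamics, no input

# The bump persists without external input: the network holds a continuum
# of stable states, one for each stimulus value on the ring.
print("bump peak at", x[np.argmax(u)])
```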

2009 ◽  
Vol 102 (1) ◽  
pp. 614-635 ◽  
Author(s):  
Byron M. Yu ◽  
John P. Cunningham ◽  
Gopal Santhanam ◽  
Stephen I. Ryu ◽  
Krishna V. Shenoy ◽  
...  

We consider the problem of extracting smooth, low-dimensional neural trajectories that summarize the activity recorded simultaneously from many neurons on individual experimental trials. Beyond the benefit of visualizing the high-dimensional, noisy spiking activity in a compact form, such trajectories can offer insight into the dynamics of the neural circuitry underlying the recorded activity. Current methods for extracting neural trajectories involve a two-stage process: the spike trains are first smoothed over time, then a static dimensionality-reduction technique is applied. We first describe extensions of the two-stage methods that allow the degree of smoothing to be chosen in a principled way and that account for spiking variability, which may vary both across neurons and across time. We then present a novel method for extracting neural trajectories—Gaussian-process factor analysis (GPFA)—which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework. We applied these methods to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices during reach planning and execution. By adopting a goodness-of-fit metric that measures how well the activity of each neuron can be predicted by all other recorded neurons, we found that the proposed extensions improved the predictive ability of the two-stage methods. The predictive ability was further improved by going to GPFA. From the extracted trajectories, we directly observed a convergence in neural state during motor planning, an effect that was shown indirectly by previous studies. We then show how such methods can be a powerful tool for relating the spiking activity across a neural population to the subject's behavior on a single-trial basis. Finally, to assess how well the proposed methods characterize neural population activity when the underlying time course is known, we performed simulations that revealed that GPFA performed tens of percent better than the best two-stage method.
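
For orientation, here is a hedged sketch of the two-stage baseline the paper extends: bin and smooth the spike trains, then apply a static dimensionality-reduction step (factor analysis here). GPFA replaces these two stages with a single probabilistic model; the simulated counts and all sizes below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import FactorAnalysis

# Two-stage method: (1) smooth binned spike counts over time,
# (2) reduce dimensionality with a static method. The Poisson counts
# below are fake stand-ins for real recordings.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(61, 300))      # 61 neurons x 300 time bins

smoothed = gaussian_filter1d(counts.astype(float), sigma=3.0, axis=1)
stabilized = np.sqrt(smoothed)                 # square-root transform for counts

fa = FactorAnalysis(n_components=3)            # 3-D latent neural state
trajectory = fa.fit_transform(stabilized.T)    # one latent point per time bin
print(trajectory.shape)                        # (300, 3)
```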


2018 ◽  
Author(s):  
Erik Rybakken ◽  
Nils Baas ◽  
Benjamin Dunn

We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological learning. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse. Thus we capture head direction cells and decode the head direction from the neural population activity without having to process the behaviour of the mouse. Interestingly, the decoded values convey more information about the neural activity than the tracked head direction does, with differences that have some spatial organization. Finally, we note that the residual population activity, after the head direction has been accounted for, retains some low-dimensional structure which is correlated with the speed of the mouse.
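
As a rough illustration of the detection step, the sketch below runs persistent cohomology on a noisy point cloud and looks for a single long-lived 1-dimensional class, which signals a circular feature. It assumes the ripser package is available; the synthetic ring data and the coefficient-field choice stand in for real population activity and for the paper's full pipeline, which additionally smooths the recovered cocycle into a circular coordinate.

```python
import numpy as np
from ripser import ripser   # persistent cohomology (pip install ripser)

# Fake head-direction-like data: a noisy circle in 2-D. Real use would
# take smoothed firing-rate vectors from the population recording.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 400)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((400, 2))

res = ripser(X, maxdim=1, coeff=47, do_cocycles=True)  # prime field, keep cocycles
h1 = res["dgms"][1]                                    # 1-D persistence diagram
lifetimes = h1[:, 1] - h1[:, 0]
print("most persistent H1 lifetime:", lifetimes.max())
# One dominant H1 class indicates a circular feature; converting its
# cocycle into a circular coordinate yields the decoded value that is
# compared against the tracked head direction.
```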


2021 ◽  
Author(s):  
C. Daniel Greenidge ◽  
Benjamin Scholl ◽  
Jacob Yates ◽  
Jonathan W. Pillow

Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the Gaussian process multi-class decoder (GPMD), is well-suited to decoding a continuous low-dimensional variable from high-dimensional population activity, and provides a platform for assessing the importance of correlations in neural population codes. The GPMD is a multinomial logistic regression model with a Gaussian process prior over the decoding weights. The prior includes hyperparameters that govern the smoothness of each neuron's decoding weights, allowing automatic pruning of uninformative neurons during inference. We provide a variational inference method for fitting the GPMD to data, which scales to hundreds or thousands of neurons and performs well even in datasets with more neurons than trials. We apply the GPMD to recordings from primary visual cortex in three different species: monkey, ferret, and mouse. Our decoder achieves state-of-the-art accuracy on all three datasets, and substantially outperforms independent Bayesian decoding, showing that knowledge of the correlation structure is essential for optimal decoding in all three species.
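
A minimal sketch of the model's core structure follows, with the GP prior replaced by a simple squared second-difference smoothness penalty on each neuron's weights across classes, and plain gradient descent in place of variational inference. The simulated tuning-curve data and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Multinomial logistic regression whose per-neuron weight vectors (one
# weight per stimulus class) are encouraged to vary smoothly across
# classes, standing in for the GPMD's GP prior.
rng = np.random.default_rng(2)
n_neurons, n_classes, n_trials = 50, 12, 600
stim = rng.integers(0, n_classes, n_trials)
prefs = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)
angles = 2 * np.pi * stim / n_classes
R = rng.poisson(np.exp(np.cos(angles[:, None] - prefs[None, :])))  # trials x neurons

Y = np.eye(n_classes)[stim]                       # one-hot stimulus labels
D = np.diff(np.eye(n_classes), n=2, axis=0)       # second-difference operator
W = np.zeros((n_neurons, n_classes))
lam, lr = 1.0, 0.1
for _ in range(500):
    logits = R @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    grad = R.T @ (P - Y) / n_trials               # negative log-likelihood gradient
    grad += 2 * lam * W @ D.T @ D                 # smoothness-penalty gradient
    W -= lr * grad

print("training accuracy:", np.mean((R @ W).argmax(axis=1) == stim))
```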


2002 ◽  
Vol 14 (5) ◽  
pp. 1195-1232 ◽  
Author(s):  
Douglas L. T. Rohde

Multidimensional scaling (MDS) is the process of transforming a set of points in a high-dimensional space to a lower-dimensional one while preserving the relative distances between pairs of points. Although effective methods have been developed for solving a variety of MDS problems, they mainly depend on the vectors in the lower-dimensional space having real-valued components. For some applications, the training of neural networks in particular, it is preferable or necessary to obtain vectors in a discrete, binary space. Unfortunately, MDS into a low-dimensional discrete space appears to be a significantly harder problem than MDS into a continuous space. This article introduces and analyzes several methods for performing approximately optimized binary MDS.
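
As a concrete toy instance of the problem, the sketch below embeds a point set into binary vectors by greedy bit-flipping, minimizing the squared mismatch between normalized Hamming distances and the original pairwise distances. This local-search baseline is an assumption for illustration; the article's methods are more sophisticated.

```python
import numpy as np

# Toy binary MDS: flip single bits whenever a flip reduces the stress
# between normalized Hamming distances and the target distances.
rng = np.random.default_rng(3)
X = rng.standard_normal((30, 10))                 # original points
target = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
target /= target.max()                            # scale into [0, 1]

n_bits = 16
B = rng.integers(0, 2, (30, n_bits))              # random binary embedding

def stress(B):
    ham = (B[:, None] != B[None, :]).mean(axis=-1)
    return float(((ham - target) ** 2).sum())

cur = stress(B)
improved = True
while improved:                                   # pass until no flip helps
    improved = False
    for i in range(B.shape[0]):
        for j in range(n_bits):
            B[i, j] ^= 1
            new = stress(B)
            if new < cur:
                cur, improved = new, True
            else:
                B[i, j] ^= 1                      # revert a bad flip
print("final stress:", cur)
```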


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Aneesha K Suresh ◽  
James M Goodman ◽  
Elizaveta V Okorokova ◽  
Matthew Kaufman ◽  
Nicholas G Hatsopoulos ◽  
...  

Low-dimensional linear dynamics are observed in neuronal population activity in primary motor cortex (M1) when monkeys make reaching movements. This population-level behavior is consistent with a role for M1 as an autonomous pattern generator that drives muscles to give rise to movement. In the present study, we examine whether similar dynamics are also observed during grasping movements, which involve fundamentally different patterns of kinematics and muscle activations. Using a variety of analytical approaches, we show that M1 does not exhibit such dynamics during grasping movements. Rather, the grasp-related neuronal dynamics in M1 are similar to their counterparts in somatosensory cortex, whose activity is driven primarily by afferent inputs rather than by intrinsic dynamics. The basic structure of the neuronal activity underlying hand control is thus fundamentally different from that underlying arm control.
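
One generic way to test for autonomous linear dynamics, in the spirit of (though not identical to) the paper's analyses, is to fit a linear map from the population state to its next time step and ask how much it explains; a sketch with simulated data follows.

```python
import numpy as np
from sklearn.decomposition import PCA

# Project population activity onto a few PCs, fit x_{t+1} ~= x_t A by
# least squares, and measure explained variance. Fake rotational
# dynamics below stand in for real M1 recordings.
rng = np.random.default_rng(4)
theta = 0.1
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
z = np.zeros((300, 2)); z[0] = [1.0, 0.0]
for t in range(299):
    z[t + 1] = z[t] @ rot.T + 0.02 * rng.standard_normal(2)
rates = z @ rng.standard_normal((2, 80))           # embed in 80 "neurons"
rates += 0.1 * rng.standard_normal(rates.shape)

X = PCA(n_components=2).fit_transform(rates)       # recovered latent state
X0, X1 = X[:-1], X[1:]
A, *_ = np.linalg.lstsq(X0, X1, rcond=None)        # x_{t+1} ~= x_t A
r2 = 1.0 - ((X1 - X0 @ A) ** 2).sum() / ((X1 - X1.mean(0)) ** 2).sum()
print(f"next-state variance explained: {r2:.2f}")
# High values, as during reaching, fit an autonomous pattern generator;
# the paper reports that grasp-related M1 activity does not show this.
```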


2011 ◽  
Vol 23 (6) ◽  
pp. 1452-1483 ◽  
Author(s):  
Felipe Gerhard ◽  
Robert Haslinger ◽  
Gordon Pipa

Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based on the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to testing the sufficiency of neural population models. Using several simple analytically tractable models and more complex simulated and real data sets, we demonstrate that important features of the population activity can be detected only using the multivariate extension of the test.
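
The univariate building block is straightforward to demonstrate; the sketch below rescales simulated Poisson spike times through the model's integrated intensity and applies a KS test, using the true constant rate as the "model" so the test should pass.

```python
import numpy as np
from scipy.stats import kstest

# Time-rescaling check: integrate the model's conditional intensity
# between successive spikes; under a correct model the rescaled
# intervals are Exp(1), so 1 - exp(-tau) is Uniform(0, 1).
rng = np.random.default_rng(5)
rate = 5.0                                        # spikes per second
spike_times = np.cumsum(rng.exponential(1.0 / rate, 200))

tau = rate * np.diff(spike_times, prepend=0.0)    # integral of lambda dt
z = 1.0 - np.exp(-tau)
print(kstest(z, "uniform"))
# The multivariate version conditions each neuron's intensity on the
# whole population's history; that is what exposes models that wrongly
# neglect couplings between neurons.
```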


2021 ◽  
Author(s):  
W. Jeffrey Johnston ◽  
Stefano Fusi

Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability depends on the geometry of the neural population representations of these objects and contexts. Specifically, abstract, or disentangled, neural representations -- in which neural population activity is a linear function of the variables important for making a decision -- are known to allow for this kind of generalization. Further, recent neurophysiological studies have shown that the brain has sufficiently abstract representations of some sensory and cognitive variables to enable generalization across distinct contexts. However, it is unknown how these abstract representations emerge. Here, using feedforward neural networks, we demonstrate a simple mechanism by which these abstract representations can be produced: The learning of multiple distinct classification tasks. We demonstrate that, despite heterogeneity in the task structure, abstract representations that enable reliable generalization can be produced from a variety of different inputs -- including standard nonlinearly mixed inputs, inputs that mimic putative representations from early sensory areas, and even simple image inputs from a standard machine learning data set. Thus, we conclude that abstract representations of sensory and cognitive variables emerge from the multiple behaviors that animals exhibit in the natural world, and may be pervasive in high-level brain regions. We make several specific predictions about which variables will be represented abstractly as well as show how these representations can be detected.
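
The detection procedure the authors rely on, cross-condition generalization, can be sketched compactly: train a linear decoder for one variable using trials from only one value of a second variable, then test on the held-out value. The simulated disentangled code below is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Idealized abstract code: population activity is a linear function of
# two binary variables (decision A, context B) plus noise. High held-out
# accuracy across contexts indicates an abstract representation of A.
rng = np.random.default_rng(6)
n = 400
A = rng.integers(0, 2, n)                     # decision variable
B = rng.integers(0, 2, n)                     # context variable
axes = rng.standard_normal((2, 50))           # one coding axis per variable
Z = A[:, None] * axes[0] + B[:, None] * axes[1] + 0.3 * rng.standard_normal((n, 50))

train, test = B == 0, B == 1                  # train in one context...
clf = LogisticRegression(max_iter=1000).fit(Z[train], A[train])
print("cross-context accuracy:", clf.score(Z[test], A[test]))  # ...test in the other
```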


2019 ◽  
Vol 31 (1) ◽  
pp. 68-93 ◽  
Author(s):  
Erik Rybakken ◽  
Nils Baas ◽  
Benjamin Dunn

We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological feature extraction. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse. Thus, we capture head direction cells and decode the head direction from the neural population activity without having to process the mouse's behavior. Interestingly, the decoded values convey more information about the neural activity than the tracked head direction does, with differences that have some spatial organization. Finally, we note that the residual population activity, after the head direction has been accounted for, retains some low-dimensional structure that is correlated with the speed of the mouse.
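
The residual analysis mentioned at the end can be sketched as follows: subtract each neuron's head-direction tuning (estimated from binned means), then check whether a leading component of what remains tracks running speed. All data below are simulated and the binning choices are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Simulate neurons tuned to head direction with an additive speed signal.
rng = np.random.default_rng(7)
T, N = 2000, 40
hd = np.cumsum(0.1 * rng.standard_normal(T)) % (2 * np.pi)   # head direction
speed = gaussian_filter1d(np.abs(rng.standard_normal(T)), 20)
prefs = rng.uniform(0, 2 * np.pi, N)
gains = rng.standard_normal(N)
rates = (np.cos(hd[:, None] - prefs[None, :])        # HD tuning
         + gains[None, :] * speed[:, None]           # speed modulation
         + 0.5 * rng.standard_normal((T, N)))

# Remove HD tuning: subtract each neuron's mean rate within HD bins.
bins = np.digitize(hd, np.linspace(0, 2 * np.pi, 24))
resid = rates.copy()
for b in np.unique(bins):
    resid[bins == b] -= rates[bins == b].mean(axis=0)

# Does the leading residual component correlate with speed?
u, s, vt = np.linalg.svd(resid - resid.mean(0), full_matrices=False)
pc1 = u[:, 0] * s[0]
print("corr(residual PC1, speed):", np.corrcoef(pc1, speed)[0, 1])
```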


NeuroImage ◽  
2021 ◽  
pp. 118200
Author(s):  
Sayan Ghosal ◽  
Qiang Chen ◽  
Giulio Pergola ◽  
Aaron L. Goldman ◽  
William Ulrich ◽  
...  

2021 ◽  
pp. 1-12
Author(s):  
Jian Zheng ◽  
Jianfeng Wang ◽  
Yanping Chen ◽  
Shuping Chen ◽  
Jingjin Chen ◽  
...  

Neural networks can approximate data because they comprise many compact nonlinear layers. In high-dimensional space, however, the curse of dimensionality makes the data distribution sparse, so the data provide insufficient information and the approximation task becomes much harder. To address this issue, we use the Lipschitz condition to derive two deviations: the deviation of neural networks trained on high-dimensional functions, and the deviation with which high-dimensional functions approximate the data. The aim is to improve the ability of neural networks to approximate high-dimensional space. Experimental results show that networks trained on high-dimensional functions outperform networks trained directly on data at approximating data in high-dimensional space. We find that networks trained on high-dimensional functions are better suited to high-dimensional space than those trained on data, so there is no need to retain large amounts of data for training. Our findings also suggest that, in high-dimensional space, tuning the hidden layers of a neural network has little effect on the precision with which the data are approximated.
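
The comparison the abstract describes can be caricatured as follows: train one network on a small fixed sample and another on abundant samples generated from a known smooth (Lipschitz) target function, then compare test error. The target function, sample sizes, and architecture below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A fixed small sample covers a 50-dimensional cube sparsely, while a
# known target function can supply fresh samples everywhere.
rng = np.random.default_rng(8)
d = 50
f = lambda X: np.cos(X).mean(axis=1)          # smooth, Lipschitz target

X_small = rng.uniform(-1, 1, (200, d))        # sparse fixed data set
X_big = rng.uniform(-1, 1, (10000, d))        # abundant function samples
X_test = rng.uniform(-1, 1, (2000, d))

for name, Xtr in [("200 fixed points", X_small), ("10k function samples", X_big)]:
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(Xtr, f(Xtr))
    mse = np.mean((net.predict(X_test) - f(X_test)) ** 2)
    print(f"{name}: test MSE = {mse:.5f}")
```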

