Mapping Low-Dimensional Dynamics to High-Dimensional Neural Activity: A Derivation of the Ring Model From the Neural Engineering Framework

2021 ◽  
Vol 33 (3) ◽  
pp. 827-852
Author(s):  
Omri Barak ◽  
Sandro Romani

Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity—the neural engineering framework. We analytically solve the framework for the classic ring model—a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
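The mapping the abstract describes can be made concrete with a toy version of the ring model: a one-dimensional angle is encoded into the activity of many neurons and read back out. This is a hedged sketch, not the paper's NEF derivation; the rectified-cosine tuning curves, the population-vector decoder, and the population size are all illustrative assumptions.

```python
import math

def encode(theta, n_neurons=64):
    """Map a 1-D angle onto an n-dimensional population with rectified-cosine
    tuning; preferred directions are spread uniformly around the ring."""
    prefs = [2 * math.pi * i / n_neurons for i in range(n_neurons)]
    activity = [max(0.0, math.cos(theta - p)) for p in prefs]
    return activity, prefs

def decode(activity, prefs):
    """Population-vector readout: project the activity back onto the ring."""
    x = sum(a * math.cos(p) for a, p in zip(activity, prefs))
    y = sum(a * math.sin(p) for a, p in zip(activity, prefs))
    return math.atan2(y, x)

theta = 1.0
activity, prefs = encode(theta)       # low-D variable -> high-D activity
theta_hat = decode(activity, prefs)   # high-D activity -> low-D variable
```

With 64 neurons the discretization error of the readout is far below a degree, illustrating how a 1-D variable can live comfortably in a 64-D activity space.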

2000 ◽  
Author(s):  
Taejun Choi ◽  
Yung C. Shin

Abstract A new method for on-line chatter detection is presented. The proposed method characterizes the significant transition from high-dimensional to low-dimensional dynamics in the cutting process at the onset of chatter. Based on the similarity of the cutting process to a nearly-1/f process, a wavelet-based maximum likelihood (ML) estimation algorithm is applied to on-line chatter detection. The presented chatter detection index γ is independent of the cutting conditions and gives excellent detection accuracy with acceptable computational cost, which makes it suitable for on-line implementation. The validity of the proposed method is demonstrated through tests with extensive experimental data obtained from turning and milling processes.


2020 ◽  
Author(s):  
Alexander Feigin ◽  
Aleksei Seleznev ◽  
Dmitry Mukhin ◽  
Andrey Gavrilov ◽  
Evgeny Loskutov

<p>We suggest a new method for constructing data-driven dynamical models from observed multidimensional time series. The method is based on a recurrent neural network (RNN) with a specific structure that allows for the joint reconstruction of both a low-dimensional embedding of the dynamical components in the data and an operator describing their low-dimensional evolution. The key element of the method is a Bayesian optimization of both the model structure and the hypothesis about the data-generating law, which together determine the cost function for model learning. The form of the model we propose allows us to construct a stochastic dynamical system of moderate dimension that reproduces the dynamical properties of the original high-dimensional system. An advantage of the proposed method is the data-adaptive nature of the RNN model: it is built from adjustable nonlinear elements and has an easily scalable structure. The combination of the RNN with the Bayesian optimization procedure efficiently provides the model with statistically significant nonlinearity and dimension.<br>The model optimization method aims to detect long-term connections between the system's states, i.e., the memory of the system, and the cost function used for model learning is constructed to take this factor into account. In particular, when there is no interaction between the dynamical components and noise, the method provides an unbiased reconstruction of the hidden deterministic system. In the opposite case, when the noise strongly impacts the dynamics, the method yields a model in the form of a nonlinear stochastic map determining a Markovian process with memory.
The Bayesian approach used to select both the optimal model structure and the appropriate cost function allows us to obtain statistically significant inferences about the dynamical signal in the data as well as its interaction with the noise components.<br>A data-driven model derived from a relatively short time series of the QG3 model, a high-dimensional nonlinear system producing chaotic behavior, is shown to serve as a good simulator for the QG3 low-frequency variability (LFV) components. The statistically significant recurrent states of the QG3 model, i.e., the well-known teleconnections in the Northern Hemisphere, are all reproduced by the obtained model. Moreover, the statistics of the model's residence times near these states are very close to the corresponding statistics of the original QG3 model. These results demonstrate that the method can be useful for modeling the variability of the real atmosphere.</p><p>The work was supported by the Russian Science Foundation (Grant No. 19-42-04121).</p>
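The idea of jointly selecting a model's structure (here, its embedding dimension) and fitting its operator can be sketched with a much cruder, non-Bayesian stand-in: fit linear maps of increasing order by least squares and pick the order by BIC. The two-lag toy series, the linear model class, and the BIC criterion are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series from a known two-lag linear map plus noise (stand-in data).
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.9 * x[t - 1] - 0.5 * x[t - 2] + 0.1 * rng.standard_normal()

def bic_for_order(x, d):
    """Fit x[t] from its d previous values by least squares; score with BIC."""
    n = len(x)
    y = x[d:]
    X = np.column_stack([x[d - k : n - k] for k in range(1, d + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    m = len(y)
    return m * np.log(rss / m) + d * np.log(m)   # fit term + complexity penalty

scores = {d: bic_for_order(x, d) for d in range(1, 5)}
best = min(scores, key=scores.get)               # selected embedding dimension
```

The complexity penalty plays the role that the Bayesian evidence plays in the paper: it stops the "model structure" from growing beyond what the data support.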


Author(s):  
Felix Jimenez ◽  
Amanda Koepke ◽  
Mary Gregg ◽  
Michael Frey

A generative adversarial network (GAN) is an artificial neural network with a distinctive training architecture, designed to create examples that faithfully reproduce a target distribution. GANs have recently had particular success in applications involving high-dimensional distributions in areas such as image processing. Little work has been reported for low dimensions, where properties of GANs may be better identified and understood. We studied GAN performance in simulated low-dimensional settings, allowing us to transparently assess effects of target distribution complexity and training data sample size on GAN performance in a simple experiment. This experiment revealed two important forms of GAN error, tail underfilling and bridge bias, where the latter is analogous to the tunneling observed in high-dimensional GANs.
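The two error modes named here can be illustrated with a hypothetical diagnostic on a one-dimensional, two-mode target. Note the "generator" below is simulated with its errors built in (it is not a trained GAN), and the mode locations, widths, and cutoffs are all illustrative assumptions, not the authors' metrics.

```python
import random

random.seed(1)

def sample_target(n):
    """Target: two well-separated Gaussian modes at -3 and +3."""
    return [random.gauss(-3 if random.random() < 0.5 else 3, 0.5)
            for _ in range(n)]

def sample_generator(n):
    """Hypothetical GAN output: modes slightly narrowed (tail underfilling)
    plus a small fraction of samples smeared between the modes (bridge bias)."""
    out = []
    for _ in range(n):
        if random.random() < 0.05:                       # bridge samples
            out.append(random.uniform(-2, 2))
        else:
            out.append(random.gauss(-3 if random.random() < 0.5 else 3, 0.35))
    return out

def bridge_mass(xs, lo=-1.5, hi=1.5):
    """Fraction of samples landing between the modes."""
    return sum(lo < x < hi for x in xs) / len(xs)

def tail_mass(xs, cut=4.0):
    """Fraction of samples in the outer tails."""
    return sum(abs(x) > cut for x in xs) / len(xs)

target = sample_target(20000)
fake = sample_generator(20000)
```

Comparing the two statistics between real and generated samples exposes both failure modes: the generator puts too much mass in the bridge region and too little in the tails.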


2021 ◽  
Author(s):  
Taylor W Webb ◽  
Kiyofumi Miyoshi ◽  
Tsz Yan So ◽  
Sivananda Rajananda ◽  
Hakwan Lau

Previous work has sought to understand decision confidence as a prediction of the probability that a decision will be correct, leading to debate over whether these predictions are optimal, and whether they rely on the same decision variable as decisions themselves. This work has generally relied on idealized, low-dimensional modeling frameworks, such as signal detection theory or Bayesian inference, leaving open the question of how decision confidence operates in the domain of high-dimensional, naturalistic stimuli. To address this, we developed a deep neural network model optimized to assess decision confidence directly given high-dimensional inputs such as images. The model naturally accounts for a number of puzzling dissociations between decisions and confidence, suggests a principled explanation of these dissociations in terms of optimization for the statistics of sensory inputs, and makes the surprising prediction that, despite these dissociations, decisions and confidence depend on a common decision variable.


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Paul T. Pearson

This paper develops a process whereby a high-dimensional clustering problem is solved using a neural network and a low-dimensional cluster diagram of the results is produced using the Mapper method from topological data analysis. The low-dimensional cluster diagram makes the neural network's solution to the high-dimensional clustering problem easy to visualize, interpret, and understand. As a case study, a clustering problem from a diabetes study is solved using a neural network. The clusters in this neural network are visualized using the Mapper method during several stages of the iterative process used to construct the neural network. The neural network and Mapper clustering diagram results for the diabetes study are validated by comparison to principal component analysis.
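The Mapper construction described above can be sketched in a minimal form: a filter function, an overlapping interval cover of its range, clustering within each cover element, and an edge wherever two clusters share points. This sketch assumes a 1-D identity filter and gap-threshold clustering; the real Mapper method (and the paper's neural-network case study) uses richer filters and clusterers.

```python
def mapper_1d(points, n_intervals=5, overlap=0.3, gap=1.0):
    """Minimal Mapper sketch: filter = identity on 1-D data, overlapping
    intervals as the cover, gap-threshold clustering inside each interval,
    and an edge whenever two cluster nodes share a point."""
    lo, hi = min(points), max(points)
    length = (hi - lo) / n_intervals
    nodes = []                                  # each node: a frozenset of points
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        bucket = sorted(p for p in points if a <= p <= b)
        cluster = []
        for p in bucket:                        # split where consecutive points
            if cluster and p - cluster[-1] > gap:   # are more than `gap` apart
                nodes.append(frozenset(cluster))
                cluster = []
            cluster.append(p)
        if cluster:
            nodes.append(frozenset(cluster))
    edges = {(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]}
    return nodes, edges

# Two well-separated 1-D clusters collapse to a two-node, edge-free diagram.
data = [0.0, 0.2, 0.4, 0.6, 5.0, 5.2, 5.4, 5.6]
nodes, edges = mapper_1d(data)
```

In general the overlapping cover makes adjacent nodes share points, and the resulting edges are what turn the cluster list into the low-dimensional diagram the paper visualizes.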


2020 ◽  
Author(s):  
Mohammad R. Rezaei ◽  
Alex E. Hadjinicolaou ◽  
Sydney S. Cash ◽  
Uri T. Eden ◽  
Ali Yousefi

Abstract The Bayesian state-space neural encoder-decoder modeling framework is an established solution to reveal how changes in brain dynamics encode physiological covariates like movement or cognition. Although the framework is increasingly being applied to progress the field of neuroscience, its application to modeling high-dimensional neural data continues to be a challenge. Here, we propose a novel solution that avoids the complexity of encoder models that characterize high-dimensional data as a function of the underlying state processes. We build a discriminative model to estimate state processes as a function of current and previous observations of neural activity. We then develop the filter and parameter estimation solutions for this new class of state-space modeling framework called the “direct decoder” model. We apply the model to decode movement trajectories of a rat in a W-shaped maze from the ensemble spiking activity of place cells and achieve comparable performance to modern decoding solutions, without needing an encoding step in the model development. We further demonstrate how a dynamical auto-encoder can be built using the direct decoder model; here, the underlying state process links the high-dimensional neural activity to the behavioral readout. The dynamical auto-encoder can optimally estimate the low-dimensional dynamical manifold which represents the relationship between brain and behavior.
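The discriminative direction of the direct decoder (state as a function of current and previous neural observations, with no encoder) can be caricatured with ridge regression on simulated place-cell-like counts. This is a hedged, linear-Gaussian stand-in for the paper's point-process filter; the latent walk, tuning curves, lag depth, and regularizer are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent 1-D state: a smooth random walk (stand-in for position in a maze).
T = 2000
x = np.clip(np.cumsum(0.05 * rng.standard_normal(T)), -2, 2)

# Population activity: 20 neurons with Gaussian tuning, Poisson counts.
centers = np.linspace(-2, 2, 20)
rates = 5.0 * np.exp(-(x[:, None] - centers[None, :]) ** 2 / 0.5)
y = rng.poisson(rates)

# Direct decoder: regress x_t on [y_t, y_{t-1}] -- no encoder model needed.
A = np.hstack([y[1:], y[:-1]]).astype(float)
A = np.hstack([A, np.ones((T - 1, 1))])            # intercept column
lam = 1.0                                          # ridge regularizer
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x[1:])
x_hat = A @ W

corr = np.corrcoef(x_hat, x[1:])[0, 1]             # decoding quality
```

Even this crude discriminative map recovers the latent trajectory well, which is the intuition behind skipping the high-dimensional encoding step.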


2021 ◽  
Author(s):  
C. Daniel Greenidge ◽  
Benjamin Scholl ◽  
Jacob Yates ◽  
Jonathan W. Pillow

Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the Gaussian process multi-class decoder (GPMD), is well-suited to decoding a continuous low-dimensional variable from high-dimensional population activity, and provides a platform for assessing the importance of correlations in neural population codes. The GPMD is a multinomial logistic regression model with a Gaussian process prior over the decoding weights. The prior includes hyperparameters that govern the smoothness of each neuron's decoding weights, allowing automatic pruning of uninformative neurons during inference. We provide a variational inference method for fitting the GPMD to data, which scales to hundreds or thousands of neurons and performs well even in datasets with more neurons than trials. We apply the GPMD to recordings from primary visual cortex in three different species: monkey, ferret, and mouse. Our decoder achieves state-of-the-art accuracy on all three datasets, and substantially outperforms independent Bayesian decoding, showing that knowledge of the correlation structure is essential for optimal decoding in all three species.
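The core of the GPMD, multinomial logistic regression with a smoothness prior over each neuron's decoding weights, can be sketched as a MAP fit with a circular kernel penalty. This is a simplified stand-in: the paper uses variational inference and per-neuron hyperparameters, whereas here the kernel, penalty weight, learning rate, and synthetic tuning are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

C, N, n = 8, 10, 400        # stimulus classes (orientations), neurons, trials
angles = np.linspace(0, np.pi, C, endpoint=False)
prefs = rng.uniform(0, np.pi, N)

# Synthetic trials: orientation-tuned responses plus Gaussian noise.
labels = rng.integers(0, C, n)
resp = np.cos(2 * (angles[labels][:, None] - prefs[None, :]))
resp += 0.5 * rng.standard_normal((n, N))

# Smoothness prior over each neuron's weight profile across classes:
# a von-Mises-style kernel on the circle of class angles (period pi).
diff = angles[:, None] - angles[None, :]
K = np.exp(np.cos(2 * diff))
Kinv = np.linalg.inv(K + 1e-6 * np.eye(C))

# MAP fit of multinomial logistic regression by plain gradient descent;
# the penalty term 0.01 * W @ Kinv is the gradient of the GP-style prior.
W = np.zeros((N, C))
onehot = np.eye(C)[labels]
for _ in range(1000):
    logits = resp @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    grad = resp.T @ (P - onehot) / n + 0.01 * W @ Kinv
    W -= 0.2 * grad

acc = np.mean((resp @ W).argmax(axis=1) == labels)   # training accuracy
```

The prior shrinks each neuron's weight profile toward smooth functions of the decoded angle, which is what lets the full GPMD prune uninformative neurons automatically.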


2022 ◽  
Author(s):  
Simone Blanco Malerba ◽  
Mirko Pieropan ◽  
Yoram Burak ◽  
Rava Azeredo da Silveira

Classical models of efficient coding in neurons assume simple mean responses ('tuning curves'), such as bell-shaped or monotonic functions of a stimulus feature. Real neurons, however, can be more complex: grid cells, for example, exhibit periodic responses which impart the neural population code with high accuracy. But do highly accurate codes require fine tuning of the response properties? We address this question with the use of a benchmark model: a neural network with random synaptic weights which result in output cells with irregular tuning curves. Irregularity enhances the local resolution of the code but gives rise to catastrophic, global errors. For optimal smoothness of the tuning curves, when local and global errors balance out, the neural network compresses information from a high-dimensional representation to a low-dimensional one, and the resulting distributed code achieves exponential accuracy. An analysis of recordings from monkey motor cortex points to such 'compressed efficient coding'. Efficient codes do not require a finely tuned design; they emerge robustly from irregularity or randomness.
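One claim here, that irregularity enhances local resolution, can be illustrated numerically: a rectified random mixture of Fourier modes (a stand-in for a random-weight network's output cell) has a much steeper average slope than a smooth cosine curve at the same dynamic range. The mode count, rectification, and squared-slope proxy are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)

def relu(z):
    return np.maximum(z, 0.0)

# Smooth benchmark tuning curve: a rectified cosine.
smooth = relu(np.cos(theta))

# Irregular tuning curve: a rectified random mixture of Fourier modes,
# mimicking an output cell of a network with random synaptic weights.
modes = np.arange(1, 9)
w_cos, w_sin = rng.standard_normal(8), rng.standard_normal(8)
pre = (np.cos(np.outer(theta, modes)) @ w_cos
       + np.sin(np.outer(theta, modes)) @ w_sin) / np.sqrt(8)
irregular = relu(pre)

def local_sensitivity(curve):
    """Mean squared slope at unit peak response: a rough proxy for the
    local Fisher information of a single cell at fixed dynamic range."""
    c = curve / curve.max()
    return np.mean(np.gradient(c, theta) ** 2)
```

The flip side, the catastrophic global errors caused by distant near-repeats of the irregular curve, is what the paper's local/global trade-off balances.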


1996 ◽  
Vol 07 (04) ◽  
pp. 429-435 ◽  
Author(s):  
XING PEI ◽  
FRANK MOSS

We discuss the well-known problems associated with efforts to detect and characterize chaos and other low-dimensional dynamics in biological settings. We propose a new method which shows promise for addressing these problems, and we demonstrate its effectiveness in an experiment with the crayfish sensory system. Recordings of action potentials in this system are the data. We begin with a pair of assumptions: that the times of firings of neural action potentials are largely determined by high-dimensional random processes or “noise”; and that most biological recordings are non-stationary, so that only relatively short data files can be obtained under approximately constant conditions. The method is thus statistical in nature. It is designed to recognize individual “events” in the form of particular sequences of time intervals between action potentials which are the signatures of certain well-defined dynamical behaviors. We show that chaos can be distinguished from limit cycles, even when the dynamics is heavily contaminated with noise. Extracellular recordings from the crayfish caudal photoreceptor, obtained while hydrodynamically stimulating the array of hair receptors on the tailfan, are used to illustrate the method.
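The statistical flavor of the method, counting "events" in the form of particular interspike-interval sequences and comparing against a surrogate, can be sketched as template matching on a spike train. The specific motif, jitter, tolerance, and shuffled surrogate below are illustrative assumptions, not the paper's actual signatures.

```python
import random

random.seed(2)

def count_events(intervals, template, tol=0.05):
    """Count occurrences of a short interspike-interval sequence matching
    the template element-by-element within a tolerance."""
    k = len(template)
    return sum(
        all(abs(intervals[i + j] - template[j]) < tol for j in range(k))
        for i in range(len(intervals) - k + 1)
    )

# Noisy spike train with an embedded repeating three-interval motif,
# a stand-in "signature" of an underlying low-dimensional orbit.
template = [0.1, 0.2, 0.15]
intervals = []
for _ in range(200):
    if random.random() < 0.3:
        intervals += [t + random.gauss(0, 0.01) for t in template]
    else:
        intervals.append(random.expovariate(5.0))

# Surrogate: shuffling destroys interval order but keeps the distribution.
shuffled = intervals[:]
random.shuffle(shuffled)

hits = count_events(intervals, template)
surrogate_hits = count_events(shuffled, template)
```

An excess of matches over the shuffled surrogate is evidence of ordered, low-dimensional structure even though individual intervals look noise-dominated, which is the spirit of the method.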


2003 ◽  
Vol 125 (1) ◽  
pp. 21-28 ◽  
Author(s):  
Taejun Choi ◽  
Yung C. Shin

A new method for on-line chatter detection is presented. The proposed method characterizes the significant transition from high-dimensional to low-dimensional dynamics in the cutting process at the onset of chatter. Based on the observation that cutting signals contain fractal patterns, a wavelet-based maximum likelihood (ML) estimation algorithm is applied to on-line chatter detection. The presented chatter detection index γ is independent of the cutting conditions and gives excellent detection accuracy with acceptable computational cost, which makes it suitable for on-line implementation. The validity of the proposed method is demonstrated through tests with extensive experimental data obtained from turning and milling processes.
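The wavelet-based idea behind such an index can be sketched with a Haar decomposition: a broadband (stable-cutting) signal spreads energy roughly evenly across scales, while a chatter oscillation concentrates it at one scale. The spread-of-log-variances index below is a crude surrogate for the paper's ML-estimated index γ; the signals and thresholds are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_detail_variances(signal, levels=6):
    """Variance of Haar wavelet detail coefficients at each level."""
    a = np.asarray(signal, dtype=float)
    variances = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation, carried onward
        variances.append(d.var())
    return np.array(variances)

def chatter_index(signal, levels=6):
    """Spread of log2 detail variances across scales: small for a flat,
    noise-like profile, large when energy piles up at one scale."""
    logv = np.log2(haar_detail_variances(signal, levels))
    return logv.max() - logv.min()

n = 4096
t = np.arange(n)
stable = rng.standard_normal(n)                            # broadband signal
chatter = np.sin(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(n)
```

Tracking such a scale-concentration statistic over a sliding window is one way an on-line detector can flag the high-to-low-dimensional transition at chatter onset.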

