Differential geometry methods for constructing manifold-targeted recurrent neural networks.

2021
Author(s):  
Federico Claudi ◽  
Tiago Branco

Neural computations can be framed as dynamical processes, whereby the structure of the dynamics within a neural network is a direct reflection of the computations that the network performs. A key step in generating mechanistic interpretations within this "computation through dynamics" framework is to establish the link between network connectivity, dynamics, and computation. This link is only partly understood. Recent work has focused on producing algorithms for engineering artificial recurrent neural networks (RNNs) with dynamics targeted to a specific goal manifold. Some of these algorithms require only a set of vectors tangent to the target manifold to be computed, and thus provide a general method that can be applied to a diverse set of problems. Nevertheless, computing such vectors for an arbitrary manifold in a high-dimensional state space remains highly challenging, which in practice limits the applicability of this approach. Here we demonstrate how topology and differential geometry can be leveraged to simplify this task, by first computing tangent vectors on a low-dimensional topological manifold and then embedding these in state space. The simplicity of this procedure greatly facilitates the creation of manifold-targeted RNNs, as well as the process of designing task-solving on-manifold dynamics. This new method should enable the application of network-engineering approaches to a wide set of problems in neuroscience and machine learning. Furthermore, our description of how fundamental concepts from differential geometry map onto different aspects of neural dynamics demonstrates how the language of differential geometry can enrich the conceptual framework for describing neural dynamics and computation.
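
A minimal sketch of the tangent-then-embed idea described above, assuming a toy ring manifold and a randomly chosen smooth embedding (the map `embed`, the dimension `N`, and all parameter values are illustrative assumptions, not the authors' construction): tangent vectors are computed in the manifold's low-dimensional intrinsic coordinates and pushed into state space by the Jacobian of the embedding.

```python
import numpy as np

# Hedged sketch of the tangent-then-embed procedure. The ring manifold,
# the embedding map, and N are illustrative assumptions, not the paper's.

N = 64                                         # assumed state-space dimension
rng = np.random.default_rng(0)
W = rng.standard_normal((N, 2)) / np.sqrt(2)   # random map used by the embedding

def chart(theta):
    """Point on the ring in its intrinsic 2-D coordinates."""
    return np.array([np.cos(theta), np.sin(theta)])

def chart_tangent(theta):
    """Tangent vector computed on the low-dimensional manifold (d/dtheta)."""
    return np.array([-np.sin(theta), np.cos(theta)])

def embed(x):
    """Toy smooth embedding of the ring into the N-dimensional state space."""
    return np.tanh(W @ x)

def embed_jacobian(x):
    """Analytic Jacobian of the embedding at x, shape (N, 2)."""
    return (1.0 - np.tanh(W @ x) ** 2)[:, None] * W

# Push an intrinsic tangent vector forward into state space; these pushed
# vectors are the targets a manifold-targeted RNN would be fit to.
theta = 0.7
v_state = embed_jacobian(chart(theta)) @ chart_tangent(theta)
print(v_state.shape)   # (64,)
```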

2018
Author(s):  
Jonathan C Kao

Abstract Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. Here, RNNs are trained to reproduce animal behavior while also recapitulating key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Further, as the RNN’s governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network’s nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, by visualizing the RNN’s dynamics in low-dimensional projections, we demonstrate that even when RNNs recapitulate key neurophysiological features at both the single-neuron and population levels, they can do so through distinctly different dynamical mechanisms. To adjudicate between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to a target-switch task it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics.
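
As a hedged illustration of the kind of low-dimensional visualization described above, the sketch below simulates a generic random rate RNN (not the paper's trained networks; the size, gain, and time constant are arbitrary) and projects its population activity onto its top two principal components.

```python
import numpy as np

# Hedged sketch: a generic random rate RNN whose population activity is
# projected onto its top two principal components for visualization.

rng = np.random.default_rng(1)
N, T, dt, tau = 100, 500, 0.01, 0.1
J = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights

x = 0.1 * rng.standard_normal(N)
rates = np.empty((T, N))
for t in range(T):
    x = x + (dt / tau) * (-x + J @ np.tanh(x))       # rate dynamics
    rates[t] = np.tanh(x)

# PCA via SVD of the mean-centered activity; rows of Vt are components.
centered = rates - rates.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
traj2d = centered @ Vt[:2].T     # (T, 2) low-dimensional trajectory to plot
```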


2016
Vol. 39
Author(s):  
Stefan L. Frank ◽  
Hartmut Fitz

Abstract Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better.”
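
A minimal sketch of the reservoir-computing principle invoked here, assuming an untrained random network with spectral radius below 1 and a least-squares readout (all sizes and the retrieval delay are illustrative): earlier inputs can be decoded linearly from the current high-dimensional state, with fidelity that decays as the delay grows.

```python
import numpy as np

# Hedged sketch of reservoir computing: an untrained random RNN keeps a
# trace of past inputs that a linear readout can recover. Network size,
# spectral radius, and the retrieval delay are illustrative choices.

rng = np.random.default_rng(2)
N, T, delay = 200, 2000, 5
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state condition
w_in = rng.standard_normal(N)

u = rng.standard_normal(T)            # scalar input sequence
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])  # untrained reservoir update
    states[t] = x

# Least-squares readout reporting the input from `delay` steps ago.
X, y = states[delay:], u[:-delay]
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.corrcoef(X @ w_out, y)[0, 1])   # retrieval fidelity, drops with delay
```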


Author(s):  
Samuel P. Burns ◽  
Sabato Santaniello ◽  
William S. Anderson ◽  
Sridevi V. Sarma

Communication between specialized regions of the brain is a dynamic process, allowing different connections to accomplish different tasks. While the content of interregional communication is complex, the pattern of connectivity (i.e., which regions communicate) may lie in a lower-dimensional state space. In epilepsy, seizures elicit changes in connectivity whose patterns shed insight into the nature of seizures and the seizure focus. We investigated connectivity in three patients by applying network-based analysis to multi-day subdural electrocorticographic (ECoG) recordings. We found that (i) the network connectivity defines a finite set of brain states, (ii) seizures are characterized by a consistent progression of states, and (iii) the focus is isolated from surrounding regions at seizure onset and becomes most connected in the network toward seizure termination. Our results suggest that a finite-dimensional state-space model may characterize the dynamics of the epileptic brain, and may ultimately be used to localize seizure foci.
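
The paper's specific network-analysis pipeline is not detailed in the abstract; as a loose, hypothetical sketch of the general idea, the code below computes one connectivity pattern (a correlation matrix) per time window of synthetic multichannel data and clusters the patterns into a handful of discrete network states, with k-means standing in for the authors' state-identification step.

```python
import numpy as np
from sklearn.cluster import KMeans

# Loose, hypothetical sketch: one connectivity pattern per time window,
# clustered into a few discrete network states. Synthetic data stands in
# for ECoG; k-means stands in for the paper's state-identification method.

rng = np.random.default_rng(3)
n_ch, fs, win = 16, 250, 250                    # channels, Hz, 1-s windows
ecog = rng.standard_normal((fs * 120, n_ch))    # placeholder recording

feats = []
for start in range(0, len(ecog) - win + 1, win):
    C = np.corrcoef(ecog[start:start + win].T)   # windowed connectivity
    feats.append(C[np.triu_indices(n_ch, k=1)])  # vectorized upper triangle
feats = np.array(feats)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
print(np.bincount(labels, minlength=4))   # windows assigned to each state
```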


2019
Author(s):  
Sandeep B. Reddy ◽  
Allan Ross Magee ◽  
Rajeev K. Jaiman ◽  
J. Liu ◽  
W. Xu ◽  
...  

Abstract In this paper, we present a data-driven approach to constructing a reduced-order model (ROM) for unsteady flow fields and fluid-structure interaction. The proposed approach relies on (i) a projection of the high-dimensional data from the Navier-Stokes equations onto a low-dimensional subspace using proper orthogonal decomposition (POD) and (ii) integration of the low-dimensional model with recurrent neural networks. For the hybrid ROM formulation, we consider long short-term memory (LSTM) networks with an encoder-decoder architecture, a special variant of recurrent neural networks. The mathematical structure of recurrent neural networks embodies a non-linear state-space form of the underlying dynamical behavior. This particular attribute of an RNN makes it suitable for non-linear unsteady flow problems. In the proposed hybrid RNN method, the spatial and temporal features of the unsteady flow system are captured separately. Time-invariant modes obtained by the low-order projection embody the spatial features of the flow field, while the temporal behavior of the corresponding modal coefficients is learned via recurrent neural networks. The effectiveness of the proposed method is first demonstrated on a canonical problem of flow past a cylinder at low Reynolds number. As a practical marine/offshore engineering demonstration, we then apply the framework and examine its reliability for predicting vortex-induced vibrations of a flexible offshore riser at high Reynolds number.
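
A minimal sketch of the first stage of such a hybrid ROM, the POD projection, on synthetic stand-in snapshots (grid size, snapshot count, and mode truncation are placeholders): the left singular vectors of the fluctuation matrix give the time-invariant spatial modes, and their coefficients form the low-dimensional time series that the encoder-decoder LSTM (not shown) would be trained to advance.

```python
import numpy as np

# Hedged sketch of the POD stage of the hybrid ROM on synthetic stand-in
# snapshots; grid size, snapshot count, and truncation r are placeholders.

rng = np.random.default_rng(4)
n_points, n_snaps, r = 5000, 200, 8

snapshots = rng.standard_normal((n_points, n_snaps))  # columns = snapshots
mean_flow = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_flow

# POD modes are the left singular vectors of the fluctuation matrix.
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
modes = U[:, :r]            # time-invariant spatial modes, (n_points, r)
coeffs = modes.T @ fluct    # temporal modal coefficients, (r, n_snaps)

# An LSTM (not shown) would advance coeffs in time; the reduced-order
# field is reconstructed from the r retained modes.
recon = mean_flow + modes @ coeffs
print(np.linalg.norm(fluct - modes @ coeffs) / np.linalg.norm(fluct))
```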


2021
pp. 1-40
Author(s):  
Germán Abrevaya ◽  
Guillaume Dumas ◽  
Aleksandr Y. Aravkin ◽  
Peng Zheng ◽  
Jean-Christophe Gagnon-Audet ◽  
...  

Abstract Many natural systems, especially biological ones, exhibit complex multivariate nonlinear dynamical behaviors that can be hard to capture by linear autoregressive models. On the other hand, generic nonlinear models such as deep recurrent neural networks often require large amounts of training data, not always available in domains such as brain imaging; they also often lack interpretability. Domain knowledge about the types of dynamics typically observed in such systems, such as a certain class of dynamical systems models, could complement purely data-driven techniques by providing a good prior. In this work, we consider a class of ordinary differential equation (ODE) models known as van der Pol (VDP) oscillators and evaluate their ability to capture a low-dimensional representation of neural activity measured by different brain imaging modalities, such as calcium imaging (CaI) and fMRI, in different living organisms: larval zebrafish, rat, and human. We develop a novel and efficient approach to the nontrivial problem of parameter estimation for a network of coupled dynamical systems from multivariate data and demonstrate that the resulting VDP models are both accurate and interpretable, as the VDP coupling matrix reveals anatomically meaningful excitatory and inhibitory interactions across different brain subsystems. VDP outperforms linear autoregressive models (VAR) in terms of both data-fit accuracy and the quality of insight provided by the coupling matrices, and it often generalizes better to unseen data when predicting future brain activity, being comparable to and sometimes better than recurrent neural networks (LSTMs). Finally, we demonstrate that our (generative) VDP model can also serve as a data-augmentation tool, leading to marked improvements in the predictive accuracy of recurrent neural networks. Thus, our work contributes to both basic and applied dimensions of neuroimaging: gaining scientific insights and improving brain-based predictive models, an area of potentially high practical importance in clinical diagnosis and neurotechnology.
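
A minimal sketch of the forward model, assuming a small network of van der Pol oscillators with a random linear coupling matrix and Euler integration (in the paper the coupling matrix is estimated from data; all values here are placeholders):

```python
import numpy as np

# Hedged sketch: Euler simulation of coupled van der Pol oscillators,
#   x_i' = y_i,   y_i' = mu * (1 - x_i^2) * y_i - x_i + (W @ x)_i,
# with a random placeholder coupling matrix W; in the paper W is
# estimated from the multivariate data.

rng = np.random.default_rng(5)
n, dt, T, mu = 4, 0.01, 5000, 1.0
W = 0.1 * rng.standard_normal((n, n))   # placeholder coupling matrix
np.fill_diagonal(W, 0.0)

x = 0.1 * rng.standard_normal(n)        # positions
y = 0.1 * rng.standard_normal(n)        # velocities
traj = np.empty((T, n))
for t in range(T):
    # Simultaneous Euler update using the previous (x, y).
    x, y = x + dt * y, y + dt * (mu * (1.0 - x**2) * y - x + W @ x)
    traj[t] = x                         # simulated "activity" time series
```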

