Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation

2021 · pp. 1-43
Author(s): Alfred Rajakumar, John Rinzel, Zhe S. Chen

Abstract: Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics (“neural sequences”) of cortical circuits in cognitive and motor tasks. Incorporating biological constraints such as Dale's principle helps elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence could repeat periodically when the RNN evolved beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with its growing or damped modes, together with the RNN's nonlinearity, was sufficient to generate a limit-cycle attractor. We further examined the stability of the dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
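A minimal sketch of how an excitatory-inhibitory rate RNN obeying Dale's principle can be set up and its eigenspectrum inspected. The unit counts, 80/20 E/I split, time constant, and rectified-linear nonlinearity below are illustrative assumptions, not the trained network from the study:

```python
# Hypothetical sketch of an excitatory-inhibitory rate RNN obeying Dale's principle.
# All sizes and parameters are illustrative assumptions, not the paper's network.
import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 80, 20                      # excitatory / inhibitory units (assumed split)
N = N_E + N_I
tau, dt = 0.02, 0.001                  # time constant and Euler step in seconds (assumed)

# Dale's principle: each column (presynaptic unit) has a fixed sign.
signs = np.concatenate([np.ones(N_E), -np.ones(N_I)])
W_raw = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W = np.abs(W_raw) * signs[None, :]     # nonnegative magnitudes times column signs

def step(r, ext_input):
    """One Euler step of the rate dynamics  tau * dr/dt = -r + phi(W r + input)."""
    phi = lambda x: np.maximum(x, 0.0)  # rectified-linear nonlinearity (assumed)
    return r + (dt / tau) * (-r + phi(W @ r + ext_input))

# Eigenspectrum of the recurrent matrix: in the linear regime the Jacobian is
# (W - I) / tau, so eigenvalues of W with real part > 1 correspond to growing
# modes that, clipped by the nonlinearity, can support a limit-cycle attractor.
eigvals = np.linalg.eigvals(W)
print("max real part of eigenvalues:", eigvals.real.max())
```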

Author(s): Vasiliy Osipov, Viktor Nikiforov

Introduction: When substantiating promising architectures of streaming recurrent neural networks, it becomes necessary to assess their stability in processing various input signals. For this, stability diagrams are constructed, containing simulation results for each node of the diagram. Such estimation can be time-consuming and computationally intensive, especially when analyzing large neural networks. Purpose: To find methods for quickly constructing such diagrams and assessing the stability of streaming recurrent neural networks. Results: Analysis of the stability diagrams under study showed that their nodes group into contiguous zones sharing the same characteristics of input-signal processing defects. With this in mind, the article proposes a method for constructing these diagrams by traversing the boundaries of their zones. With this approach, simulation does not have to be performed for the interior nodes of each zone; it is required only for nodes adjacent to zone boundaries. As a result, the number of nodes for which simulation sessions must be run is reduced by an order of magnitude. The influence of input-signal coding schemes on the stability of streaming recurrent neural networks was also investigated: representing input signals as sequences of single pulses with intersecting elements can provide greater stability than pulses without any intersection.
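To make the flavor of this saving concrete, below is a toy Python sketch that labels a parameter grid without simulating every node. It uses a coarse lattice plus local refinement of cells whose corner labels disagree, which is a simplified stand-in for the zone-boundary traversal described above; `simulate_node`, the grid size, and the artificial zone shape are all hypothetical.

```python
# Toy stand-in for constructing a stability diagram without exhaustive simulation:
# simulate a coarse lattice, then refine only cells that straddle a zone boundary.
import numpy as np

GRID, COARSE = 64, 8
labels = np.full((GRID, GRID), -1, dtype=int)        # -1 = not yet determined
calls = 0

def simulate_node(i, j):
    """Stand-in for one costly simulation run returning a zone label for node (i, j)."""
    global calls
    calls += 1
    return int((i - 20) ** 2 + (j - 40) ** 2 < 900)   # two artificial zones

def label_of(i, j):
    if labels[i, j] == -1:
        labels[i, j] = simulate_node(i, j)
    return labels[i, j]

# Pass 1: simulate a coarse lattice of nodes.
coarse = list(range(0, GRID, COARSE)) + [GRID - 1]
for i in coarse:
    for j in coarse:
        label_of(i, j)

# Pass 2: refine only coarse cells whose corners disagree (cells near a zone
# boundary); interior cells inherit their corner label without any simulation.
for a in range(len(coarse) - 1):
    for b in range(len(coarse) - 1):
        i0, i1 = coarse[a], coarse[a + 1]
        j0, j1 = coarse[b], coarse[b + 1]
        c = {label_of(i0, j0), label_of(i0, j1), label_of(i1, j0), label_of(i1, j1)}
        if len(c) == 1:
            labels[i0:i1 + 1, j0:j1 + 1] = c.pop()    # uniform zone: fill directly
        else:
            for i in range(i0, i1 + 1):               # boundary cell: simulate its nodes
                for j in range(j0, j1 + 1):
                    label_of(i, j)

construction_calls = calls
exact = np.array([[int((i - 20) ** 2 + (j - 40) ** 2 < 900) for j in range(GRID)]
                  for i in range(GRID)])
print("nodes simulated during construction:", construction_calls, "of", GRID * GRID)
print("mislabelled nodes:", int((labels != exact).sum()))
```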


2014 · Vol 2014 · pp. 1-7
Author(s): Lei Ding, Hong-Bing Zeng, Wei Wang, Fei Yu

This paper investigates the stability of static recurrent neural networks (SRNNs) with a time-varying delay. Based on the complete delay-decomposing approach and the quadratic separation framework, a novel Lyapunov-Krasovskii functional is constructed. By employing a reciprocally convex technique to account for the relationship between the time-varying delay and its varying interval, improved delay-dependent stability conditions are presented in terms of linear matrix inequalities (LMIs). Finally, a numerical example is provided to demonstrate the merits and effectiveness of the proposed methods.
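As a concrete illustration of what checking such LMI conditions looks like in practice, the sketch below tests a classical, much simpler delay-independent Lyapunov-Krasovskii condition for a neural network with sector-bounded activations, using CVXPY. It is not the improved delay-dependent condition of this paper, and the matrices A and W are invented for the example:

```python
# Feasibility check of a classical delay-independent stability LMI for
#   x'(t) = -A x(t) + W g(x(t - d(t))),   g in the sector [0, 1].
# Shown only to illustrate the LMI workflow; not this paper's conditions.
import cvxpy as cp
import numpy as np

n = 3
A = np.diag([2.0, 2.5, 3.0])                        # positive diagonal "leak" matrix (assumed)
W = np.array([[0.2, -0.3, 0.1],
              [0.1,  0.2, -0.2],
              [-0.1, 0.1,  0.3]])                   # delayed connection weights (assumed)

P = cp.Variable((n, n), symmetric=True)
q = cp.Variable(n)                                  # diagonal entries of Q
Q = cp.diag(q)

# Lyapunov-Krasovskii candidate V = x'Px + integral of g(x)'Qg(x) over the delay window
# leads to the block LMI below; feasibility certifies delay-independent stability.
M = cp.bmat([[-A.T @ P - P @ A + Q, P @ W],
             [W.T @ P,              -Q   ]])

eps = 1e-6
constraints = [P >> eps * np.eye(n), q >= eps, M << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible (stability certified):", prob.status == cp.OPTIMAL)
```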


1999 · Vol 09 (02) · pp. 95-98
Author(s): Anke Meyer-Bäse

This paper is concerned with the asymptotic hyperstability of recurrent neural networks. Based on these stability results, we derive necessary and sufficient conditions on the network parameters. The resulting conditions are more general than those based on Lyapunov methods, since they impose milder constraints on the connection weights than conventional results and do not assume symmetry of the weights.
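As a small numerical aside illustrating why dropping the symmetry assumption matters (this is not the paper's hyperstability condition), a non-symmetric connection matrix can have a stable spectrum even though its symmetric part, on which symmetric-weight Lyapunov criteria rely, is indefinite:

```python
# Illustrative example only: non-symmetric weights can be stable even when
# tests based on the symmetric part of the weight matrix are inconclusive.
import numpy as np

W = np.array([[-1.0, 10.0],
              [ 0.0, -1.0]])                        # strongly non-symmetric weights (assumed)

eig_full = np.linalg.eigvals(W)                     # spectrum of W itself
eig_sym = np.linalg.eigvalsh(0.5 * (W + W.T))       # spectrum of the symmetric part

print("Re(eig(W))       :", eig_full.real)          # both negative -> stable linear dynamics
print("eig((W + W^T)/2) :", eig_sym)                # indefinite -> symmetry-based test fails
```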


2018
Author(s): Jonathan C. Kao

Abstract: Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. Here, RNNs are trained to reproduce animal behavior while also recapitulating key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Further, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, by visualizing the RNN's dynamics in low-dimensional projections, we demonstrate that even when RNNs recapitulate key neurophysiological features at both the single-neuron and population levels, they can do so through distinctly different dynamical mechanisms. To adjudicate between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to a target-switch task it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics.
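One of the analyses referenced above, visualizing RNN dynamics in low-dimensional projections, can be sketched as follows. The `rates` array stands in for unit activity from a trained RNN (here it is random noise purely so the snippet runs), and all dimensions are illustrative assumptions:

```python
# Minimal sketch: project RNN population activity onto its top principal components.
import numpy as np

rng = np.random.default_rng(1)
n_conditions, n_time, n_units = 8, 200, 300
rates = rng.normal(size=(n_conditions, n_time, n_units))    # placeholder activity (assumed)

# Center across all condition-time samples, then PCA via SVD.
X = rates.reshape(-1, n_units)
mean = X.mean(axis=0, keepdims=True)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
top_pcs = Vt[:3]                                             # first three principal axes

# Each condition's trajectory in the top-3 PC space: (n_conditions, n_time, 3).
trajectories = (rates - mean) @ top_pcs.T
print("projected trajectory shape:", trajectories.shape)
```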


2013 · Vol 2013 · pp. 1-11
Author(s): Shifang Kuang, Yunjian Peng, Feiqi Deng, Wenhua Gao

Exponential stability in mean square of stochastic delay recurrent neural networks is investigated in detail. By using Itô's formula and inequality techniques, sufficient conditions are given to guarantee the exponential stability in mean square of an equilibrium. Under the conditions that guarantee the stability of the analytical solution, the Euler-Maruyama scheme and the split-step backward Euler scheme are proved to be mean-square stable. Finally, an example is given to demonstrate our results.
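For readers who want the numerical schemes referenced above in concrete form, here is a hedged sketch of the Euler-Maruyama discretization for a generic stochastic delay recurrent network (the split-step backward Euler scheme would additionally solve an implicit equation for the drift at each step). The drift, diffusion, delay, and parameter values are illustrative assumptions, not the system analyzed in the paper:

```python
# Euler-Maruyama sketch for  dx = [-A x(t) + W f(x(t)) + V f(x(t - tau))] dt + sigma x(t) dB(t).
import numpy as np

rng = np.random.default_rng(2)

n = 4
A = np.diag([1.0, 1.2, 0.9, 1.1])           # decay rates (assumed)
W = 0.2 * rng.standard_normal((n, n))       # instantaneous weights (assumed)
V = 0.2 * rng.standard_normal((n, n))       # delayed weights (assumed)
f = np.tanh
sigma = 0.05                                 # multiplicative noise intensity (assumed)

dt = 1e-3
tau = 0.05
lag = int(round(tau / dt))                   # delay measured in steps
T = 2000

x = np.zeros((T + lag, n))
x[:lag + 1] = 0.1                            # constant initial history on [-tau, 0]

for k in range(lag, T + lag - 1):
    drift = -A @ x[k] + W @ f(x[k]) + V @ f(x[k - lag])
    dB = rng.standard_normal(n) * np.sqrt(dt)
    x[k + 1] = x[k] + drift * dt + sigma * x[k] * dB

print("mean-square norm near the end:", np.mean(x[-100:] ** 2))
```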


1994 · Vol 6 (6) · pp. 1155-1173
Author(s): Peter Manolios, Robert Fanelli

We examine the correspondence between first-order recurrent neural networks and deterministic finite state automata. We begin with the problem of inducing deterministic finite state automata from finite training sets that include both positive and negative examples, an NP-hard problem (Angluin and Smith 1983). We use a neural network architecture with two recurrent layers, which we argue can approximate any discrete-time, time-invariant dynamic system, with computation of the full gradient during learning. The networks are trained to classify strings as belonging or not belonging to the grammar. The training sets contain only short strings and are constructed in a way that does not require a priori knowledge of the grammar. After training, the networks are tested on various test sets with strings of length up to 1000, and are often able to correctly classify all the test strings. These results are comparable to those obtained with second-order networks (Giles et al. 1992; Watrous and Kuhn 1992a; Zeng et al. 1993). We observe that the networks emulate finite state automata, confirming the results of other authors, and we use a vector quantization algorithm to extract deterministic finite state automata after training and during testing of the networks, obtaining a table listing the start state, accept states, reject states, and all transitions from the states, as well as some useful statistics. We examine the correspondence between finite state automata and neural networks in detail, showing two major stages in the learning process. To this end, we use a graphics module that graphically depicts the states of the network during the learning and testing phases. We examine the networks' performance when tested on strings much longer than those in the training set, noting a measure based on clustering that is correlated with the stability of the networks. Finally, we observe that with sufficiently long training times, neural networks can become true finite state automata, due to the attractor structure of their dynamics.
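The extraction step described above, quantizing hidden states and reading off an automaton, can be sketched roughly as follows. The network here is an untrained random first-order RNN over a binary alphabet, and the simple k-means-style quantizer, cluster count, and string lengths are illustrative choices rather than the paper's settings:

```python
# Toy sketch: quantize RNN hidden states and read off a finite-state transition table.
import numpy as np

rng = np.random.default_rng(3)
n_hidden, n_clusters = 16, 8
Wh = 0.5 * rng.standard_normal((n_hidden, n_hidden))        # recurrent weights (assumed)
Wx = rng.standard_normal((n_hidden, 2))                     # input weights for symbols 0 and 1

def rnn_step(h, symbol):
    one_hot = np.eye(2)[symbol]
    return np.tanh(Wh @ h + Wx @ one_hot)

# Collect hidden states visited while processing random binary strings.
states, transitions = [], []
for _ in range(100):
    h = np.zeros(n_hidden)
    for symbol in rng.integers(0, 2, size=20):
        h_next = rnn_step(h, int(symbol))
        states.append(h_next)
        transitions.append((h.copy(), int(symbol), h_next.copy()))
        h = h_next
states = np.array(states)

# Simple vector quantization: a few k-means-style updates from a random initialization.
centers = states[rng.choice(len(states), n_clusters, replace=False)]
for _ in range(20):
    assign = np.argmin(((states[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    for c in range(n_clusters):
        if np.any(assign == c):
            centers[c] = states[assign == c].mean(axis=0)

def quantize(h):
    return int(np.argmin(((centers - h) ** 2).sum(axis=1)))

# Build a transition table: (state cluster, input symbol) -> next state cluster.
table = {}
for h_prev, symbol, h_next in transitions:
    table[(quantize(h_prev), symbol)] = quantize(h_next)
print("extracted transitions:", len(table))
```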

