Echo State Property Linked to an Input: Exploring a Fundamental Characteristic of Recurrent Neural Networks

2013
Vol 25 (3)
pp. 671-696
Author(s):
G. Manjunath
H. Jaeger

The echo state property is a key for the design and training of recurrent neural networks within the paradigm of reservoir computing. In intuitive terms, this is a passivity condition: a network having this property, when driven by an input signal, will become entrained by the input and develop an internal response signal. This excited internal dynamics can be seen as a high-dimensional, nonlinear, unique transform of the input with a rich memory content. This view has implications for understanding neural dynamics beyond the field of reservoir computing. Available definitions and theorems concerning the echo state property, however, are of little practical use because they do not relate the network response to temporal or statistical properties of the driving input. Here we present a new definition of the echo state property that directly connects it to such properties. We derive a fundamental 0-1 law: if the input comes from an ergodic source, the network response has the echo state property with probability one or zero, independent of the given network. Furthermore, we give a sufficient condition for the echo state property that connects statistical characteristics of the input to algebraic properties of the network connection matrix. The mathematical methods that we employ are freshly imported from the young field of nonautonomous dynamical systems theory. Since these methods are not yet well known in neural computation research, we introduce them in some detail. As a side story, we hope to demonstrate the eminent usefulness of these methods.
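
A minimal sketch of the property being defined, using the standard sufficient condition (largest singular value of the connection matrix below 1) rather than the input-dependent conditions derived in the paper: two copies of one reservoir, started from different states and driven by the same input, converge to a single input-entrained trajectory. All names and parameter values here are illustrative.

```python
# Echo state property as input-driven state forgetting (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
N = 100                                            # reservoir size
W = rng.normal(size=(N, N))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]   # largest singular value = 0.9
w_in = rng.normal(size=N)

x_a = rng.normal(size=N)                           # two different initial states
x_b = rng.normal(size=N)
u = rng.normal(size=2000)                          # an arbitrary driving input

for t in range(len(u)):
    x_a = np.tanh(W @ x_a + w_in * u[t])
    x_b = np.tanh(W @ x_b + w_in * u[t])

# tanh is 1-Lipschitz, so the state difference contracts by at least 0.9
# per step: initial conditions are forgotten and only the input echoes.
print(np.linalg.norm(x_a - x_b))                   # ~0
```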

2020
Vol 126
pp. 191-217
Author(s):
P.R. Vlachas
J. Pathak
B.R. Hunt
T.P. Sapsis
M. Girvan
...

2018
Author(s):
Chris Kiefer

Conceptors are a recent development in the field of reservoir computing; they can be used to influence the dynamics of recurrent neural networks (RNNs), enabling generation of arbitrary patterns based on training data. Conceptors allow interpolation and extrapolation between patterns, and also provide a system of Boolean logic for combining patterns together. Generation and manipulation of arbitrary patterns using conceptors has significant potential as a sound synthesis method for applications in computer music and procedural audio, but has yet to be explored. Two novel methods of sound synthesis based on conceptors are introduced. Conceptular Synthesis is based on granular synthesis; sets of conceptors are trained to recall varying patterns from a single RNN, then a runtime mechanism switches between them, generating short patterns which are recombined into a longer sound. Conceptillators are trainable, pitch-controlled oscillators for harmonically rich waveforms, commonly used in a variety of sound synthesis applications. Both systems can exploit conceptor pattern morphing, Boolean logic, and manipulation of RNN dynamics, enabling new creative sonic possibilities. Experiments reveal how RNN runtime parameters can be used for pitch-independent timestretching and for precise frequency control of cyclic waveforms. They show how these techniques can create highly malleable sound synthesis models, trainable using short sound samples. Limitations are revealed with regard to reproduction quality, and pragmatic limitations are also shown: exponential rises in computation and memory requirements preclude the use of these models for training with longer sound samples. The techniques presented here represent an initial exploration of the sound synthesis potential of conceptors; future possibilities and research questions are outlined, including possibilities in generative sound.
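
For readers unfamiliar with the underlying machinery, here is a minimal sketch assuming Jaeger's standard definition of a conceptor, C = R (R + a^-2 I)^-1, where R is the reservoir state correlation matrix and a is the aperture. It computes a conceptor from driven states and shows the Boolean NOT; pattern regeneration, as used in Conceptular Synthesis, additionally requires loading patterns into the reservoir and is omitted. All parameter values are illustrative.

```python
# Computing a conceptor from reservoir states driven by one pattern.
import numpy as np

rng = np.random.default_rng(1)
N = 200
W = rng.normal(size=(N, N)) * (1.4 / np.sqrt(N))   # reservoir weights (assumed scaling)
w_in = rng.normal(size=N)

# Drive the reservoir with a pattern (here: a sine wave) and collect states.
x = np.zeros(N)
states = []
for t in range(1000):
    x = np.tanh(W @ x + w_in * np.sin(2 * np.pi * t / 20))
    if t >= 100:                                   # discard washout
        states.append(x)
X = np.array(states).T                             # N x T state matrix

aperture = 10.0
R = X @ X.T / X.shape[1]                           # state correlation matrix
C = R @ np.linalg.inv(R + np.eye(N) / aperture**2)   # the conceptor

# Conceptor Boolean logic (NOT shown; AND in comment, OR via De Morgan):
C_not = np.eye(N) - C
# C_and = np.linalg.inv(np.linalg.inv(C1) + np.linalg.inv(C2) - np.eye(N))
```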


2007
Vol 362 (1479)
pp. 403-410
Author(s):
Raffaele Calabretta

The aim of this paper is to propose an interdisciplinary evolutionary connectionism approach for the study of the evolution of modularity. It is argued that combining neural networks as a model of the nervous system with genetic algorithms as simulative models of biological evolution allows us to formulate a clear and operative definition of module and to simulate the different evolutionary scenarios proposed for the origin of modularity. I will present a recent model in which the evolution of primate cortical visual streams is possible starting from non-modular neural networks. Simulation results not only confirm the existence of the phenomenon of neural interference in non-modular network architectures but also, for the first time, reveal the existence of another kind of interference at the genetic level, i.e. genetic interference, a new population genetic mechanism that is independent of the network architecture. Our simulations clearly show that genetic interference reduces the evolvability of visual neural networks and that sexual reproduction can at least partially solve the problem of genetic interference. Finally, it is shown that entrusting the task of finding the neural network architecture to evolution, and that of finding the network connection weights to learning, is a way to completely avoid the problem of genetic interference. On the basis of this evidence, it is possible to formulate a new hypothesis on the origin of structural modularity, and thus to overcome the traditional dichotomy between innatist and empiricist theories of mind.
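
A generic sketch, not the paper's actual simulations, of that final division of labour: a genetic algorithm searches over binary connectivity masks (the architecture), while each individual's weights are found by learning, so weight values never pass through the genome. The task and all parameters are toy stand-ins.

```python
# Evolve architecture (connectivity mask), learn weights per individual.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 8))                      # toy inputs
y = (X[:, :4].sum(axis=1) > 0).astype(float)       # toy target task

def fitness(mask):
    """Learn weights by least squares on the masked inputs; score accuracy."""
    Xm = X * mask                                  # architecture: which inputs connect
    w = np.linalg.lstsq(Xm, y, rcond=None)[0]      # the 'learning' step
    return float(((Xm @ w > 0.5) == (y > 0.5)).mean())

pop = rng.integers(0, 2, size=(30, 8)).astype(float)   # population of architectures
for generation in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    children = parents[rng.integers(0, 10, size=20)].copy()
    flips = rng.random(children.shape) < 0.05          # mutate connectivity only
    children[flips] = 1.0 - children[flips]
    pop = np.vstack([parents, children])               # elitism + offspring

print(max(fitness(m) for m in pop))
```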


2011
Vol 467-469
pp. 731-736
Author(s):
Zong Bing Lin
Qian Rong Tan
Jun Li

Global exponential stability (GES) of a class of discrete-time recurrent neural networks with unsaturating linear activation functions is studied. Based on matrix eigenvalues, a new definition of GES is presented. By applying matrix theory, some conditions for GES are obtained. Moreover, these conditions are proved without recourse to energy functions.
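
An illustrative sketch, not the paper's exact criterion: for x(k+1) = relu(W x(k)) with the unsaturating linear (ReLU) activation, the activation is 1-Lipschitz, so bounding the largest singular value of W below 1 gives global exponential contraction. The paper's eigenvalue-based conditions refine this style of check.

```python
# Conservative norm-based GES check for a discrete-time ReLU network.
import numpy as np

def is_ges_by_norm(W):
    """Sufficient (conservative) condition: largest singular value below 1."""
    return np.linalg.svd(W, compute_uv=False)[0] < 1.0

rng = np.random.default_rng(2)
W = rng.normal(size=(5, 5)) * 0.3
print(is_ges_by_norm(W))

# Trajectories from two initial states contract toward each other:
relu = lambda v: np.maximum(v, 0.0)
x, y = rng.normal(size=5), rng.normal(size=5)
for _ in range(50):
    x, y = relu(W @ x), relu(W @ y)
print(np.linalg.norm(x - y))    # decays like ||W||^k
```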


2016
Vol 39
Author(s):
Stefan L. Frank
Hartmut Fitz

Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better.”
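
A minimal sketch of the claim, using the standard memory-capacity setup rather than anything from the commentary itself: a linear readout fit on the states of an untrained random RNN recovers the input d steps back, with fidelity that decays gracefully as d grows.

```python
# Retrieving earlier inputs from an untrained reservoir's state.
import numpy as np

rng = np.random.default_rng(3)
N, T = 100, 5000
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius = 0.9
w_in = rng.normal(size=N)

u = rng.uniform(-1, 1, size=T)                 # random input sequence
x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

for d in (1, 5, 10, 20):                       # retrieve input d steps back
    Xs, ys = X[d:], u[:-d]
    w = np.linalg.lstsq(Xs, ys, rcond=None)[0] # linear least-squares readout
    r = np.corrcoef(Xs @ w, ys)[0, 1]
    print(f"delay {d:2d}: correlation {r:.3f}")  # falls off with delay
```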


2018
Vol 30 (6)
pp. 1449-1513
Author(s):
E. Paxon Frady
Denis Kleyko
Friedrich T. Sommer

To accommodate structured approaches to neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
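
A much-simplified sketch of the coding scheme described, assuming the standard VSA trajectory-association construction: symbols are random bipolar codebook vectors, the orthogonal recurrent weights are a random permutation, and symbolic readout is winner-take-all matching after undoing the rotation. The paper's theory covers far more general readouts and analog data.

```python
# Storing and recalling a symbol sequence in a permutation-based VSA memory.
import numpy as np

rng = np.random.default_rng(4)
N, n_symbols, L = 1000, 26, 10
codebook = rng.choice([-1.0, 1.0], size=(n_symbols, N))
perm = rng.permutation(N)                 # recurrent weights: a random permutation

seq = rng.integers(0, n_symbols, size=L)  # the symbol sequence to store
s = np.zeros(N)
for sym in seq:
    s = s[perm] + codebook[sym]           # rotate state, superpose next symbol

# Retrieval: undo the remaining rotations, then winner-take-all matching.
inv = np.argsort(perm)                    # inverse permutation
recalled = []
for k in range(L):                        # item k was rotated L-1-k times
    probe = s.copy()
    for _ in range(L - 1 - k):
        probe = probe[inv]
    recalled.append(int(np.argmax(codebook @ probe)))

print(seq.tolist())
print(recalled)                           # matches, up to crosstalk noise
```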


2020
Author(s):
Yanan Zhong
Jianshi Tang
Xinyi Li
Bin Gao
He Qian
...

Reservoir computing (RC) is a highly efficient framework for processing spatiotemporal signals, owing to its low training cost compared with standard recurrent neural networks. The design of different reservoir states plays a very important role in the hardware implementation of RC systems. Recent studies have used device-to-device variation to generate different reservoir states; however, this method is neither well controllable nor reproducible. To solve this problem, we report a dynamic memristor-based RC system. By applying a controllable mask process, we show that even a single dynamic memristor can generate rich reservoir states and realize the complete reservoir function. We further build a parallel RC system that can efficiently handle spatiotemporal tasks, including spoken-digit and handwritten-digit recognition, achieving high classification accuracies of 99.6% and 97.6%, respectively. The performance of the dynamic memristor-based RC system is almost equivalent to that of a software-based one. Moreover, our RC system does not require additional read operations, which makes full use of the device nonlinearity and further improves system efficiency. Our work could pave the way toward high-efficiency memristor-based RC systems that handle more complex spatiotemporal tasks.
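
A conceptual software analogue of the mask process, not the authors' device model: one leaky nonlinear node stands in for the dynamic memristor, and a fixed random mask stretches each input sample over M sub-steps whose responses serve as M virtual reservoir states. As in other reservoir systems, only a linear readout on these virtual states would be trained.

```python
# Time-multiplexing a single dynamic node into M virtual reservoir states.
import numpy as np

rng = np.random.default_rng(5)
M = 50                                   # virtual nodes per input sample
mask = rng.choice([-1.0, 1.0], size=M)   # fixed random mask

def virtual_states(u, decay=0.8):
    """Responses of one leaky nonlinear node to the masked input stream."""
    states = np.zeros((len(u), M))
    s = 0.0
    for t, u_t in enumerate(u):
        for m in range(M):
            # fading memory (decay) plus a nonlinearity stand in for the
            # memristor's internal dynamics
            s = decay * s + (1 - decay) * np.tanh(u_t * mask[m])
            states[t, m] = s
    return states

X = virtual_states(rng.uniform(0, 1, size=200))
print(X.shape)    # (200, 50): a 50-dimensional state from a single node
```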


2021
Vol 5 (4)
pp. 260
Author(s):
Xiao Liu
Kelin Li
Qiankun Song
Xujun Yang

In this paper, the quasi-projective synchronization of distributed-order recurrent neural networks is investigated. First, based on the definition of the distributed-order derivative and metric space theory, two distributed-order differential inequalities are obtained. Then, by employing the Lyapunov method, the Laplace transform, the Laplace final value theorem, and some inequality techniques, sufficient conditions for quasi-projective synchronization of distributed-order recurrent neural networks are established under feedback control and hybrid control schemes, respectively. Finally, two numerical examples are given to verify the effectiveness of the theoretical results.
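
For orientation, the standard formulation of the target property (the paper's exact constants and norms may differ): with drive state x(t), response state y(t), and projective coefficient β ≠ 0, the synchronization error is required to enter a small ball rather than vanish.

```latex
% Quasi-projective synchronization (standard formulation): the error
% e(t) = y(t) - \beta x(t) converges into an \varepsilon-ball, not to zero.
\limsup_{t \to \infty} \left\lVert y(t) - \beta\, x(t) \right\rVert \le \varepsilon
\qquad \text{for some } \varepsilon > 0,\ \beta \neq 0 .
```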


