Weak Sensitivity to Initial Conditions for Generating Temporal Patterns in Recurrent Neural Networks: A Reservoir Computing Approach

Author(s): Hiromichi Suetani


2020, Vol. 126, pp. 191-217
Author(s): P.R. Vlachas, J. Pathak, B.R. Hunt, T.P. Sapsis, M. Girvan, et al.

2013, Vol. 25 (3), pp. 671-696
Author(s): G. Manjunath, H. Jaeger

The echo state property is key to the design and training of recurrent neural networks within the paradigm of reservoir computing. In intuitive terms, it is a passivity condition: a network having this property, when driven by an input signal, becomes entrained by the input and develops an internal response signal. This excited internal dynamics can be seen as a high-dimensional, nonlinear, unique transform of the input with a rich memory content. This view has implications for understanding neural dynamics beyond the field of reservoir computing. Available definitions and theorems concerning the echo state property, however, are of little practical use because they do not relate the network response to temporal or statistical properties of the driving input. Here we present a new definition of the echo state property that directly connects it to such properties. We derive a fundamental 0-1 law: if the input comes from an ergodic source, the network response has the echo state property with probability one or zero, independent of the given network. Furthermore, we give a sufficient condition for the echo state property that connects statistical characteristics of the input to algebraic properties of the network connection matrix. The mathematical methods we employ are freshly imported from the young field of nonautonomous dynamical systems theory. Since these methods are not yet well known in neural computation research, we introduce them in some detail. Along the way, we hope to demonstrate their eminent usefulness.
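To make the entrainment intuition concrete, here is a minimal numpy sketch of the classical, input-independent sufficient condition for the echo state property (largest singular value of the recurrent matrix below one), not the input-dependent criterion derived in the paper; the network size, scaling target, and tanh state update are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_reservoir(W, target_sigma=0.9):
    """Rescale W so its largest singular value equals target_sigma < 1.

    With a 1-Lipschitz activation such as tanh, sigma_max(W) < 1 makes
    the state update a contraction, which is the classical sufficient
    condition for the echo state property."""
    return W * (target_sigma / np.linalg.norm(W, 2))

N = 200
W = scale_reservoir(rng.standard_normal((N, N)))
w_in = rng.standard_normal(N)

# Two different initial states driven by the same input converge:
# the network "forgets" its initial condition and echoes the input.
x, y = rng.standard_normal(N), rng.standard_normal(N)
for u in rng.standard_normal(1000):
    x = np.tanh(W @ x + w_in * u)
    y = np.tanh(W @ y + w_in * u)
print(np.linalg.norm(x - y))  # effectively zero
```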


2018
Author(s): Chris Kiefer

Conceptors are a recent development in the field of reservoir computing; they can be used to influence the dynamics of recurrent neural networks (RNNs), enabling generation of arbitrary patterns based on training data. Conceptors allow interpolation and extrapolation between patterns, and also provide a system of Boolean logic for combining patterns. Generation and manipulation of arbitrary patterns using conceptors has significant potential as a sound synthesis method for computer music and procedural audio, but has yet to be explored. Two novel methods of sound synthesis based on conceptors are introduced. Conceptular Synthesis is based on granular synthesis: sets of conceptors are trained to recall varying patterns from a single RNN, and a runtime mechanism then switches between them, generating short patterns that are recombined into a longer sound. Conceptillators are trainable, pitch-controlled oscillators for the harmonically rich waveforms commonly used in sound synthesis applications. Both systems can exploit conceptor pattern morphing, Boolean logic, and manipulation of RNN dynamics, enabling new creative sonic possibilities. Experiments reveal how RNN runtime parameters can be used for pitch-independent timestretching and for precise frequency control of cyclic waveforms. They show how these techniques can create highly malleable sound synthesis models, trainable using short sound samples. Limitations in reproduction quality are revealed, along with pragmatic constraints: exponential growth in computation and memory requirements precludes training these models with longer sound samples. The techniques presented here represent an initial exploration of the sound synthesis potential of conceptors; future possibilities and research questions are outlined, including directions in generative sound.
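For readers unfamiliar with conceptors, the core construction and the Boolean operations mentioned above can be sketched in a few lines of numpy, following Jaeger's definitions; the aperture value is an arbitrary example, and the plain matrix inverses assume full-rank state correlations.

```python
import numpy as np

def conceptor(X, aperture=10.0):
    """Conceptor of a reservoir state collection X (units x timesteps):
    C = R (R + aperture**-2 I)^-1, with R the state correlation matrix."""
    N, T = X.shape
    R = X @ X.T / T
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(N))

# Boolean logic on conceptors (full-rank case):
def NOT(C):
    return np.eye(len(C)) - C

def AND(C, B):
    I = np.eye(len(C))
    return np.linalg.inv(np.linalg.inv(C) + np.linalg.inv(B) - I)

def OR(C, B):
    return NOT(AND(NOT(C), NOT(B)))
```

At pattern-generation time the conceptor is inserted into the update loop, constraining the autonomous dynamics to the subspace of the trained pattern, e.g. x = C @ np.tanh(W @ x + b).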


2016, Vol. 39
Author(s): Stefan L. Frank, Hartmut Fitz

Abstract: Prior language input is not lost but integrated with the current input. This principle is demonstrated by "reservoir computing": untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not "Now-or-Never" but "Sooner-is-Better."
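The "Sooner-is-Better" point can be reproduced in a few lines: train a linear readout to retrieve the input presented k steps earlier from an untrained random reservoir, and watch retrieval degrade as k grows. The sizes and scaling below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 300, 5000
W = rng.standard_normal((N, N))
W *= 0.9 / np.linalg.norm(W, 2)   # contractive, untrained reservoir
w_in = rng.standard_normal(N)

# Drive the reservoir with random input and record its states.
u = rng.standard_normal(T)
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Linear readout trained to recover the input k steps in the past:
# retrieval correlation falls off as the lag k increases.
for k in (1, 5, 10, 20):
    A, y = X[k:], u[:-k]
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    r = np.corrcoef(A @ beta, y)[0, 1]
    print(f"lag {k:2d}: retrieval correlation {r:.2f}")
```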


2018, Vol. 30 (6), pp. 1449-1513
Author(s): E. Paxon Frady, Denis Kleyko, Friedrich T. Sommer

To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
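As a toy illustration of the coding scheme described (randomized codewords, orthogonal recurrent weights, winner-take-all readout), the following numpy sketch stores a symbol sequence by trajectory association, using a random permutation as the orthogonal recurrent matrix; the dimensions and alphabet size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, T = 1000, 26, 30              # state dim, alphabet size, sequence length

phi = rng.choice([-1.0, 1.0], (M, N)) / np.sqrt(N)  # random unit codebook
perm = rng.permutation(N)           # W: random permutation (orthogonal)
seq = rng.integers(0, M, T)

# Encode: x_t = W x_{t-1} + phi[s_t]
x = np.zeros(N)
for s in seq:
    x = x[perm] + phi[s]

# Decode the symbol stored k steps ago with winner-take-all readout.
inv = np.argsort(perm)              # inverse permutation = W^{-1}
def decode(x, k):
    y = x.copy()
    for _ in range(k):
        y = y[inv]                  # undo k recurrent steps: W^{-k} x
    return np.argmax(phi @ y)       # nearest codeword (WTA error correction)

recalled = [decode(x, k) for k in range(T)]
print(np.mean(recalled == seq[::-1]))  # fraction of symbols correctly recalled
```

Retrieval accuracy falls as the stored sequence length grows relative to the network dimension; that crosstalk-limited capacity is what the paper's theory quantifies.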


Author(s): Hao Hu, Liqiang Wang, Guo-Jun Qi

Recent advancements in recurrent neural network (RNN) research have demonstrated the superiority of multiscale structures for learning temporal representations of time series. Currently, most multiscale RNNs use fixed scales, which do not match the dynamic nature of temporal patterns across sequences. In this paper, we propose Adaptively Scaled Recurrent Neural Networks (ASRNNs), a simple but efficient way to handle this problem. Instead of using predefined scales, ASRNNs learn and adjust scales based on the temporal context, making them more flexible in modeling multiscale patterns. Compared with other multiscale RNNs, ASRNNs achieve dynamical scaling with much simpler structures and are easy to integrate with various RNN cells. Experiments on multiple sequence modeling tasks indicate that ASRNNs can efficiently adapt scales to different sequence contexts and yield better performance than baselines without dynamical scaling abilities.
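The abstract does not spell out the scaling mechanism, so the numpy sketch below is a hypothetical illustration of input-dependent scaling, not the exact ASRNN cell: a sigmoid gate computed from the current input and state acts as a per-step time constant.

```python
import numpy as np

def adaptive_scale_step(x, u, params):
    """One step of a hypothetical adaptively scaled recurrent cell.

    The gate alpha acts as a per-step time constant: alpha -> 1 retains
    history (coarse scale), alpha -> 0 tracks the input (fine scale).
    All weights in `params` would be learned end-to-end."""
    W, U, b, W_s, U_s, b_s = params
    alpha = 1.0 / (1.0 + np.exp(-(W_s @ u + U_s @ x + b_s)))  # scale gate
    cand = np.tanh(W @ u + U @ x + b)                         # candidate state
    return alpha * x + (1.0 - alpha) * cand
```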


2020
Author(s): Yanan Zhong, Jianshi Tang, Xinyi Li, Bin Gao, He Qian, et al.

Abstract: Reservoir computing (RC) is a highly efficient network for processing spatiotemporal signals due to its low training cost compared to standard recurrent neural networks. The design of different reservoir states plays a very important role in the hardware implementation of RC systems. Recent studies have used device-to-device variation to generate different reservoir states; however, this method is neither well controllable nor reproducible. To solve this problem, we report a dynamic memristor-based RC system. By applying a controllable mask process, we show that even a single dynamic memristor can generate rich reservoir states and realize the complete reservoir function. We further build a parallel RC system that can efficiently handle spatiotemporal tasks, including spoken-digit and handwritten-digit recognition, achieving high classification accuracies of 99.6% and 97.6%, respectively. The performance of the dynamic memristor-based RC system is almost equivalent to that of a software-based one. Moreover, our RC system does not require additional read operations, which makes full use of the device nonlinearity and further improves system efficiency. Our work could pave the way toward high-efficiency memristor-based RC systems that handle more complex spatiotemporal tasks in the future.
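A mask process in this setting typically means time-multiplexing a single dynamic node into many virtual nodes. The sketch below illustrates the idea with a generic leaky nonlinear node standing in for the memristor dynamics; the node model, mask length, and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def single_node_reservoir(u, n_virtual=50, decay=0.8, scale=1.0):
    """Mask-based reservoir built from one dynamic node (time multiplexing).

    Each input sample is stretched into n_virtual sub-steps by a random
    binary mask; the node's decaying nonlinear response at those sub-steps
    serves as the reservoir state vector for that sample."""
    mask = rng.choice([-1.0, 1.0], n_virtual)
    states = np.zeros((len(u), n_virtual))
    s = 0.0
    for t, u_t in enumerate(u):
        for i in range(n_virtual):
            # leaky nonlinear node: the decay couples neighboring virtual nodes
            s = decay * s + np.tanh(scale * mask[i] * u_t)
            states[t, i] = s
    return states  # feed into a linear readout, e.g. ridge regression
```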


2020, Vol. 102 (3)
Author(s): Xiaolu Chen, Tongfeng Weng, Huijie Yang, Changgui Gu, Jie Zhang, et al.
