Reservoir computing and the Sooner-is-Better bottleneck

2016 ◽  
Vol 39 ◽  
Author(s):  
Stefan L. Frank ◽  
Hartmut Fitz

Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better.”

2021 ◽  
Author(s):  
Federico Claudi ◽  
Tiago Branco

Neural computations can be framed as dynamical processes, whereby the structure of the dynamics within a neural network is a direct reflection of the computations that the network performs. A key step in generating mechanistic interpretations within this computation-through-dynamics framework is to establish the link between network connectivity, dynamics, and computation. This link is only partly understood. Recent work has focused on producing algorithms for engineering artificial recurrent neural networks (RNNs) with dynamics targeted to a specific goal manifold. Some of these algorithms require only a set of vectors tangent to the target manifold, and thus provide a general method that can be applied to a diverse set of problems. Nevertheless, computing such vectors for an arbitrary manifold in a high-dimensional state space remains highly challenging, which in practice limits the applicability of this approach. Here we demonstrate how topology and differential geometry can be leveraged to simplify this task: tangent vectors are first computed on a low-dimensional topological manifold and then embedded in state space. The simplicity of this procedure greatly facilitates the creation of manifold-targeted RNNs, as well as the process of designing task-solving on-manifold dynamics. This new method should enable the application of network engineering-based approaches to a wide set of problems in neuroscience and machine learning. Furthermore, our description of how fundamental concepts from differential geometry can be mapped onto different aspects of neural dynamics is a further demonstration of how the language of differential geometry can enrich the conceptual framework for describing neural dynamics and computation.
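A toy version of this procedure can be sketched as follows (the circle S¹ stands in for an arbitrary low-dimensional manifold, and the linear embedding and state-space dimension are assumptions for illustration): tangent vectors are computed in the intrinsic coordinate and then pushed forward into state space by the Jacobian of the embedding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                   # state-space dimension (illustrative)
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)  # intrinsic coordinate on S^1

# Assumed smooth embedding phi: S^1 -> R^n, linear in the (cos, sin) features.
E = rng.normal(size=(n, 2))

def phi(th):
    """Embed points of the circle into the n-dimensional state space."""
    return E @ np.stack([np.cos(th), np.sin(th)])

def tangent(th):
    """Push the intrinsic tangent d/dtheta forward with the Jacobian of phi."""
    return E @ np.stack([-np.sin(th), np.cos(th)])

points = phi(theta)        # (n, 100): points on the embedded goal manifold
tangents = tangent(theta)  # (n, 100): state-space tangent vectors along it
```

The tangent computation happens entirely in the one-dimensional intrinsic coordinate; only the final push-forward touches the high-dimensional space.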


2020 ◽  
Author(s):  
Laércio Oliveira Junior ◽  
Florian Stelzer ◽  
Liang Zhao

Echo State Networks (ESNs) are recurrent neural networks that map an input signal to a high-dimensional dynamical system, called the reservoir, and possess adaptive output weights. The output weights are trained such that the ESN’s output signal fits the desired target signal. Classical reservoirs are sparse, randomly connected networks. In this article, we investigate the effect of different network topologies on the performance of ESNs. Specifically, we use two types of networks to construct clustered reservoirs of ESNs: the clustered Erdős–Rényi and the clustered Barabási–Albert network models. Moreover, we compare the performance of these clustered ESNs (CESNs) and classical ESNs with random reservoirs by applying them to two different tasks: frequency filtering and the reconstruction of chaotic signals. By using a clustered topology, one can achieve a significant increase in the ESN’s performance.
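The output-weight training can be sketched as follows (a classical sparse random reservoir with illustrative parameters; a clustered topology would change only how W is constructed). The readout is fit by ridge regression to a frequency-filtering target.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 1000
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.1)  # sparse random reservoir
W *= 0.8 / max(abs(np.linalg.eigvals(W)))                 # spectral radius 0.8
w_in = rng.uniform(-0.5, 0.5, N)

t = np.arange(T)
u = np.sin(0.1 * t) + 0.5 * np.sin(0.5 * t)  # two-frequency input
target = np.sin(0.1 * t)                     # filtering task: keep the slow component

x = np.zeros(N)
states = np.zeros((T, N))
for k in range(T):
    x = np.tanh(W @ x + w_in * u[k])
    states[k] = x

X, y = states[100:], target[100:]            # discard the initial transient
beta = 1e-6                                  # ridge regularization strength
W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ y)
mse = np.mean((X @ W_out - y) ** 2)
```

Only `W_out` is trained; the reservoir itself stays fixed, which is what makes topology the interesting design variable.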


2020 ◽  
Vol 126 ◽  
pp. 191-217 ◽  
Author(s):  
P.R. Vlachas ◽  
J. Pathak ◽  
B.R. Hunt ◽  
T.P. Sapsis ◽  
M. Girvan ◽  
...  

2013 ◽  
Vol 25 (3) ◽  
pp. 671-696 ◽  
Author(s):  
G. Manjunath ◽  
H. Jaeger

The echo state property is a key for the design and training of recurrent neural networks within the paradigm of reservoir computing. In intuitive terms, this is a passivity condition: a network having this property, when driven by an input signal, will become entrained by the input and develop an internal response signal. This excited internal dynamics can be seen as a high-dimensional, nonlinear, unique transform of the input with a rich memory content. This view has implications for understanding neural dynamics beyond the field of reservoir computing. Available definitions and theorems concerning the echo state property, however, are of little practical use because they do not relate the network response to temporal or statistical properties of the driving input. Here we present a new definition of the echo state property that directly connects it to such properties. We derive a fundamental 0-1 law: if the input comes from an ergodic source, the network response has the echo state property with probability one or zero, independent of the given network. Furthermore, we give a sufficient condition for the echo state property that connects statistical characteristics of the input to algebraic properties of the network connection matrix. The mathematical methods that we employ are freshly imported from the young field of nonautonomous dynamical systems theory. Since these methods are not yet well known in neural computation research, we introduce them in some detail. As a side story, we hope to demonstrate the eminent usefulness of these methods.
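The classical sufficient condition (largest singular value of the connection matrix below one) and the entrainment it guarantees can be checked numerically; the network size and input distribution here are illustrative, and this is the older algebraic condition rather than the input-dependent criterion developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
W = rng.normal(size=(N, N))
W *= 0.95 / np.linalg.norm(W, 2)  # largest singular value < 1: sufficient for the echo state property
w_in = rng.normal(size=N)

u = rng.normal(size=500)          # one driving input signal
xa = rng.normal(size=N)           # two arbitrary initial states
xb = rng.normal(size=N)
for t in range(500):
    xa = np.tanh(W @ xa + w_in * u[t])
    xb = np.tanh(W @ xb + w_in * u[t])

gap = np.linalg.norm(xa - xb)     # -> 0: the response is a unique transform of the input
```

Because tanh is 1-Lipschitz, the scaled weights make the update a contraction, so any two initial states converge under the same input: the internal response is entrained by the drive.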


2010 ◽  
Vol 30 (2) ◽  
pp. 192-215 ◽  
Author(s):  
Alexander Shkolnik ◽  
Michael Levashov ◽  
Ian R. Manchester ◽  
Russ Tedrake

A motion planning algorithm is described for bounding over rough terrain with the LittleDog robot. Unlike walking gaits, bounding is highly dynamic and cannot be planned with quasi-steady approximations. LittleDog is modeled as a planar five-link system with a 16-dimensional state space; computing a plan over rough terrain in this high-dimensional state space that respects the kinodynamic constraints due to underactuation and motor limits is extremely challenging. Rapidly Exploring Random Trees (RRTs) are known for fast kinematic path planning in high-dimensional configuration spaces in the presence of obstacles, but search efficiency degrades rapidly with the addition of challenging dynamics. A computationally tractable planner for bounding was developed by modifying the RRT algorithm using: (1) motion primitives to reduce the dimensionality of the problem; (2) Reachability Guidance, which dynamically changes the sampling distribution and distance metric to address differential constraints and discontinuous motion primitive dynamics; and (3) sampling with a Voronoi bias in a lower-dimensional “task space” for bounding. Short trajectories were demonstrated to work on the robot; however, open-loop bounding is inherently unstable. A feedback controller based on transverse linearization was implemented and shown in simulation to stabilize perturbations in the presence of noise and time delays.
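A minimal RRT in a 2-D configuration space with a single circular obstacle (step size, goal bias, and geometry are illustrative) shows the basic mechanism the planner above modifies: nearest-neighbor extension toward random samples, with uniform sampling supplying the Voronoi bias toward unexplored space.

```python
import numpy as np

rng = np.random.default_rng(4)
start, goal = np.array([0.1, 0.1]), np.array([0.9, 0.9])
step = 0.05
obst_c, obst_r = np.array([0.5, 0.5]), 0.2   # one circular obstacle

def free(p):
    """Collision check: point must lie outside the obstacle."""
    return np.linalg.norm(p - obst_c) > obst_r

nodes, parent = [start], {0: None}
for _ in range(5000):
    sample = goal if rng.random() < 0.1 else rng.random(2)    # 10% goal bias
    pts = np.array(nodes)
    i = int(np.argmin(np.linalg.norm(pts - sample, axis=1)))  # nearest tree node
    d = sample - nodes[i]
    new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)   # extend one step
    if free(new):
        parent[len(nodes)] = i
        nodes.append(new)
        if np.linalg.norm(new - goal) < step:
            break

path, k = [], len(nodes) - 1                 # walk parent pointers back to the root
while k is not None:
    path.append(nodes[k])
    k = parent[k]
path.reverse()
```

The paper's contribution is precisely what this sketch omits: with underactuated dynamics, the "extend one step" move must respect differential constraints, which is where motion primitives and reachability guidance come in.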


2018 ◽  
Author(s):  
Chris Kiefer

Conceptors are a recent development in the field of reservoir computing; they can be used to influence the dynamics of recurrent neural networks (RNNs), enabling generation of arbitrary patterns based on training data. Conceptors allow interpolation and extrapolation between patterns, and also provide a system of boolean logic for combining patterns together. Generation and manipulation of arbitrary patterns using conceptors has significant potential as a sound synthesis method for applications in computer music and procedural audio but has yet to be explored. Two novel methods of sound synthesis based on conceptors are introduced. Conceptular Synthesis is based on granular synthesis; sets of conceptors are trained to recall varying patterns from a single RNN, then a runtime mechanism switches between them, generating short patterns which are recombined into a longer sound. Conceptillators are trainable, pitch-controlled oscillators for harmonically rich waveforms, commonly used in a variety of sound synthesis applications. Both systems can exploit conceptor pattern morphing, boolean logic and manipulation of RNN dynamics, enabling new creative sonic possibilities. Experiments reveal how RNN runtime parameters can be used for pitch-independent timestretching and for precise frequency control of cyclic waveforms. They show how these techniques can create highly malleable sound synthesis models, trainable using short sound samples. Limitations are revealed with regard to reproduction quality, and pragmatic limitations are also shown, where exponential rises in computation and memory requirements preclude the use of these models for training with longer sound samples. The techniques presented here represent an initial exploration of the sound synthesis potential of conceptors; future possibilities and research questions are outlined, including possibilities in generative sound.
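The core conceptor computation (Jaeger's formulation C = R(R + α⁻²I)⁻¹, with R the state correlation matrix; network size, aperture α, and the driving pattern here are illustrative) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50
W = rng.normal(size=(N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))  # spectral radius 0.8
w_in = rng.normal(size=N)

pattern = np.sin(0.3 * np.arange(500))     # driving pattern to be captured
x = np.zeros(N)
states = []
for p in pattern:
    x = np.tanh(W @ x + w_in * p)
    states.append(x)
X = np.array(states[100:])                 # discard the washout transient

R = X.T @ X / len(X)                       # state correlation matrix
aperture = 10.0
C = R @ np.linalg.inv(R + np.eye(N) / aperture**2)  # conceptor for this pattern

sv = np.linalg.svd(C, compute_uv=False)    # soft projection: singular values in [0, 1)
```

Because each singular value of C is s/(s + α⁻²) for a correlation eigenvalue s, the conceptor acts as a soft projection onto the state directions excited by the pattern; this is what makes boolean combination and morphing of conceptors possible.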


2011 ◽  
Vol 11 (3&4) ◽  
pp. 313-325 ◽
Author(s):  
Warner A. Miller

An increase in the dimension of state space for quantum key distribution (QKD) can decrease its fidelity requirements while also increasing its bandwidth. A significant obstacle for QKD with qudits (d ≥ 3) has been an efficient and practical quantum state sorter for photons whose complex fields are modulated in both amplitude and phase. We propose such a sorter based on a multiplexed thick hologram, constructed, for example, from photo-thermal refractive (PTR) glass. We validate this approach using coupled-mode theory with parameters consistent with PTR glass to simulate a holographic sorter. The model assumes a three-dimensional state space spanned by three tilted plane waves. The utility of such a sorter for broader quantum information processing applications can be substantial.
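For a single thick grating at Bragg incidence, coupled-mode theory reduces to the textbook two-mode equations dA₁/dz = -iκA₂, dA₂/dz = -iκA₁, with complete power transfer at κL = π/2; the coupling constant and length below are illustrative, not values fitted to PTR glass.

```python
import numpy as np

kappa, L, steps = np.pi / 2, 1.0, 10000    # coupling chosen so kappa * L = pi / 2
dz = L / steps
A = np.array([1.0 + 0j, 0.0 + 0j])         # all power starts in mode 1
M = np.array([[0, -1j * kappa],            # two-mode coupled-mode equations
              [-1j * kappa, 0]])
for _ in range(steps):
    A = A + dz * (M @ A)                   # forward Euler (adequate for a sketch)

P = np.abs(A) ** 2                         # [P1, P2]: power in each mode at z = L
print(P)
```

A multiplexed sorter superimposes several such gratings, each phase-matched to one of the tilted plane-wave states, so that each input state couples into its own output direction.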
