Reservoir Computing with Delayed Input for Fast and Easy Optimization

Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1560
Author(s):  
Lina Jaurigue ◽  
Elizabeth Robertson ◽  
Janik Wolters ◽  
Kathy Lüdge

Reservoir computing is a machine learning method that solves tasks using the response of a dynamical system to a certain input. As the training scheme only involves optimising the weights of the responses of the dynamical system, this method is particularly suited for hardware implementation. Furthermore, the inherent memory of dynamical systems that are suitable for use as reservoirs means that this method has the potential to perform well on time series prediction tasks, as well as on other tasks with time dependence. However, reservoir computing still requires extensive task-dependent parameter optimisation in order to achieve good performance. We demonstrate that, by including a time-delayed version of the input for various time series prediction tasks, good performance can be achieved with an unoptimised reservoir. Furthermore, we show that by including the appropriate time-delayed input, one unaltered reservoir can perform well on six different time series prediction tasks at very low computational expense. Our approach is of particular relevance to hardware-implemented reservoirs, as one does not necessarily have access to the pertinent optimisation parameters in a physical system, but the inclusion of an additional input is generally possible.
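
As an illustration of the delayed-input idea, the following is a minimal sketch assuming a standard echo state network as the reservoir (the paper treats general dynamical systems); the reservoir size, spectral radius, delay value, and toy sine task are all illustrative assumptions, and only the linear readout weights are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a noisy sine wave.
T = 2000
u = np.sin(0.2 * np.arange(T)) + 0.05 * rng.standard_normal(T)

N = 100        # fixed, "unoptimised" reservoir size
delay = 10     # lag of the additional delayed input copy (assumed value)

W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.standard_normal(N)                    # weights for the direct input
w_del = rng.standard_normal(N)                   # weights for the delayed input copy

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    u_del = u[t - delay] if t >= delay else 0.0
    x = np.tanh(W @ x + w_in * u[t] + w_del * u_del)
    states[t] = x

# Ridge-regression readout predicting u[t+1]; only these weights are trained,
# as in standard reservoir computing.
X, y = states[delay:-1], u[delay + 1:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("train NRMSE:", np.sqrt(np.mean((X @ w_out - y) ** 2)) / np.std(y))
```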


2016 ◽  
Vol 2016 ◽  
pp. 1-14 ◽  
Author(s):  
Miquel L. Alomar ◽  
Vincent Canals ◽  
Nicolas Perez-Mora ◽  
Víctor Martínez-Moll ◽  
Josep L. Rosselló

Hardware implementation of artificial neural networks (ANNs) allows the inherent parallelism of these systems to be exploited. Nevertheless, such implementations require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique for designing recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implementing RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement the different arithmetic operations. The result is a highly functional system with low hardware resource requirements. The presented methodology is applied to chaotic time-series forecasting.
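
The probabilistic-computing idea behind the hardware savings can be sketched as follows: a value in [0, 1] is encoded as the fraction of ones in a random bitstream, so a single AND gate multiplies two such values. The stream length and the numpy simulation are illustrative assumptions, not the paper's gate-level design.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_stream(p, n_bits=4096):
    """Encode p in [0, 1] as a random bitstream with P(bit = 1) = p."""
    return rng.random(n_bits) < p

def from_stream(bits):
    """Decode a bitstream back to a value estimate (fraction of ones)."""
    return bits.mean()

a, b = 0.6, 0.3
# One AND gate per bit multiplies the two encoded values, since
# P(a_bit AND b_bit) = a * b for independent streams.
print(from_stream(to_stream(a) & to_stream(b)))  # ~0.18; error shrinks as n_bits grows
```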


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Yusuke Sakemi ◽  
Kai Morino ◽  
Timothée Leleu ◽  
Kazuyuki Aihara

Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly based on the use of high-dimensional dynamical systems, such as random networks of neurons, called "reservoirs." To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires. In this study, we propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step. To elucidate the mechanism of model-size reduction, the proposed methods are analyzed based on the information processing capacity proposed by Dambre et al. (Sci Rep 2:514, 2012). In addition, we evaluate the effectiveness of the proposed methods on time-series prediction tasks: the generalized Hénon map and NARMA. On these tasks, we found that the proposed methods were able to reduce the size of the reservoir to as little as one tenth without a substantial increase in regression error.
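
A minimal sketch of the model-size-reduction idea, assuming an echo state network: the readout sees the current reservoir state concatenated with a state from a few steps in the past, so a reservoir of size N exposes 2N readout features without growing. The lag, the network parameters, and the tanh-saturated NARMA-style toy target are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

T, N, lag = 3000, 50, 5       # small reservoir; past state lagged by 5 steps
u = rng.uniform(0, 0.5, T)

# NARMA10-style target; the tanh keeps this toy variant bounded.
y = np.zeros(T)
for t in range(10, T - 1):
    y[t + 1] = np.tanh(0.3 * y[t] + 0.05 * y[t] * y[t - 9:t + 1].sum()
                       + 1.5 * u[t] * u[t - 9] + 0.1)

W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(N)

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Readout features: current state plus the state `lag` steps earlier,
# doubling the feature count without enlarging the reservoir itself.
X = np.hstack([states[lag:], states[:-lag]])
target = y[lag:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(2 * N), X.T @ target)
print("NRMSE:", np.sqrt(np.mean((X @ w_out - target) ** 2)) / np.std(target))
```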


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Md Raf E Ul Shougat ◽  
XiaoFu Li ◽  
Tushar Mollik ◽  
Edmon Perkins

Physical reservoir computing utilizes a physical system as a computational resource. This nontraditional computing technique can be computationally powerful, without the need for costly training. Here, a Hopf oscillator is implemented as a reservoir computer by using a node-based architecture; however, this implementation does not use delayed feedback lines. This reservoir computer is still powerful, but it is considerably simpler and cheaper to implement as a physical Hopf oscillator. A non-periodic stochastic masking procedure is applied to this reservoir computer, following the time-multiplexing method. Due to the presence of noise, the Euler–Maruyama method is used to simulate the resulting stochastic differential equations that represent this reservoir computer. An analog electrical circuit is built to implement this Hopf oscillator reservoir computer experimentally. The information processing capability was tested numerically and experimentally by performing logical tasks, emulation tasks, and time series prediction tasks. This reservoir computer has several attractive features, including a simple design that is easy to implement, noise robustness, and a high computational ability for many different benchmark tasks. Since limit cycle oscillators model many physical systems, this architecture could be relatively easily applied in many contexts.
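
A rough Euler–Maruyama sketch of a time-multiplexed, stochastically masked Hopf oscillator reservoir, written against the Hopf normal form dz = ((μ + iω)z − |z|²z + s(t)) dt + σ dW; the way the masked input couples in, all parameter values, and the real-part readout are assumptions for illustration, not the circuit of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hopf normal form with additive input and noise, integrated by
# Euler–Maruyama: dz = ((mu + 1j*omega)*z - |z|^2 * z + s(t)) dt + sigma dW.
mu, omega, sigma = 1.0, 2.0, 0.02
dt, n_virtual = 1e-3, 50             # time multiplexing with 50 virtual nodes

def reservoir_step(z, u):
    """Drive the oscillator with one input sample; return virtual-node readings."""
    mask = rng.uniform(-1, 1, n_virtual)  # non-periodic stochastic mask, redrawn each step
    reads = np.empty(n_virtual)
    for k in range(n_virtual):
        drift = (mu + 1j * omega) * z - abs(z) ** 2 * z + mask[k] * u
        z = z + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()  # EM update
        reads[k] = z.real                # one reading per virtual node
    return z, reads

z, states = 0.1 + 0j, []
for u in np.sin(0.1 * np.arange(200)):   # toy input sequence
    z, reads = reservoir_step(z, u)
    states.append(reads)
states = np.array(states)                # (200, 50) feature matrix for a linear readout
print(states.shape)
```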


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
C. H. López-Caraballo ◽  
J. A. Lazzús ◽  
I. Salfate ◽  
P. Rojas ◽  
M. Rivera ◽  
...  

An artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series for short-term prediction of x_{t+6}. The prediction performance was evaluated and compared with other studies available in the literature. We also present properties of the dynamical system via the study of the chaotic behaviour of the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute the uncertainties of the predictions for the noisy Mackey-Glass chaotic time series. Thus, we studied the impact of noise for several cases with a white noise level σ_N from 0.01 to 0.1.
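
To make the setup concrete, here is a sketch that generates a discretised Mackey-Glass series and fits a predictor of x_{t+6} with a bare-bones global-best PSO; for brevity the predictor is linear rather than the paper's ANN, and the swarm size, coefficients, and input lags are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mackey-Glass, dx/dt = 0.2 x(t-17) / (1 + x(t-17)^10) - 0.1 x(t),
# integrated with crude unit-step Euler updates (an illustrative discretisation).
tau, n = 17, 1200
x = np.full(n, 1.2)
for t in range(tau, n - 1):
    x[t + 1] = x[t] + 0.2 * x[t - tau] / (1 + x[t - tau] ** 10) - 0.1 * x[t]

# Inputs: four lagged samples; target: x(t+6), the short-term task in the abstract.
idx = np.arange(30, n - 6)
X = np.stack([x[idx - k] for k in (0, 6, 12, 18)], axis=1)
y = x[idx + 6]

def mse(w):
    """Tiny linear predictor; PSO treats its weights as particle positions."""
    return np.mean((X @ w[:4] + w[4] - y) ** 2)

# Bare-bones global-best PSO over the 5 predictor parameters.
P, D, iters = 20, 5, 200
pos = rng.uniform(-1, 1, (P, D))
vel = np.zeros((P, D))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((P, D)), rng.random((P, D))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("PSO-optimised MSE:", mse(gbest))
```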


2021 ◽  
Vol 12 ◽  
Author(s):  
Shahrokh Shahi ◽  
Christopher D. Marcotte ◽  
Conner J. Herndon ◽  
Flavio H. Fenton ◽  
Yohannes Shiferaw ◽  
...  

The electrical signals triggering the heart's contraction are governed by non-linear processes that can produce complex irregular activity, especially during or preceding the onset of cardiac arrhythmias. Forecasts of cardiac voltage time series in such conditions could allow new opportunities for intervention and control but would require efficient computation of highly accurate predictions. Although machine-learning (ML) approaches hold promise for delivering such results, non-linear time-series forecasting poses significant challenges. In this manuscript, we study the performance of two recurrent neural network (RNN) approaches along with echo state networks (ESNs) from the reservoir computing (RC) paradigm in predicting cardiac voltage data in terms of accuracy, efficiency, and robustness. We show that these ML time-series prediction methods can forecast synthetic and experimental cardiac action potentials for at least 15–20 beats with a high degree of accuracy, with ESNs typically two orders of magnitude faster than RNN approaches for the same network size.
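
A sketch of the closed-loop (autonomous) forecasting mode in which an ESN, once trained one step ahead, is driven by its own predictions to produce multi-beat forecasts; the synthetic spiky waveform, network size, and parameters below are stand-in assumptions, not the cardiac data or models of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for an action-potential-like signal: a periodic
# train of sharp pulses.
T = 4000
phase = (0.02 * np.arange(T)) % 1.0
u = np.exp(-((phase - 0.2) ** 2) / 0.005)

N = 200
W = rng.standard_normal((N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(N)

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# One-step-ahead readout via ridge regression.
X, y = states[:-1], u[1:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

# Closed-loop forecasting: the trained ESN is now driven by its own
# predictions, which is how multi-beat forecasts are produced.
x_run, u_fb, pred = states[-1].copy(), u[-1], []
for _ in range(500):
    x_run = np.tanh(W @ x_run + w_in * u_fb)
    u_fb = x_run @ w_out
    pred.append(u_fb)
print("first forecast samples:", np.round(pred[:5], 3))
```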

