Model-Size Reduction for Reservoir Computing by Concatenating Internal States Through Time


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Yusuke Sakemi ◽  
Kai Morino ◽  
Timothée Leleu ◽  
Kazuyuki Aihara

Abstract: Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly based on the use of high-dimensional dynamical systems, such as random networks of neurons, called “reservoirs.” To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires. In this study, we propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step. To elucidate the mechanism of model-size reduction, the proposed methods are analyzed based on the information processing capacity proposed by Dambre et al. (Sci Rep 2:514, 2012). In addition, we evaluate the effectiveness of the proposed methods on time-series prediction tasks: the generalized Hénon map and NARMA. On these tasks, we found that the proposed methods were able to reduce the size of the reservoir to as little as one-tenth without a substantial increase in regression error.
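For illustration, the following minimal sketch (not the authors' code) shows the core idea in Python/NumPy: a standard echo state network whose ridge-regression readout is fed the current reservoir state concatenated with states from earlier time steps, so a smaller reservoir can cover the same set of delays. The reservoir size, delay set, and toy one-step prediction target are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, delays = 50, 1000, [0, 5, 10]        # reservoir size, series length, lags

W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

u = rng.uniform(-1, 1, T)                  # stand-in input series
y = np.roll(u, -1)                         # toy target: one-step-ahead value

# Drive the reservoir and record its state at every time step.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])
    X[t] = x

# Concatenate time-shifted copies of the state matrix for the readout,
# so the output layer sees states from the current and past time steps.
t0 = max(delays)
Z = np.hstack([X[t0 - d:T - d] for d in delays])

# Ridge regression readout (the only trained part of the model).
lam = 1e-6
W_out = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y[t0:T])
print("train MSE:", np.mean((Z @ W_out - y[t0:T]) ** 2))
```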


2018 ◽  
Vol 4 (2) ◽  
pp. 563-565
Author(s):  
Rachita Sharma ◽  
Sanjay Kumar Dubey

This paper introduces supervised and unsupervised techniques, with a comparison of the SOFM (Self-Organizing Feature Map) as applied to satellite imagery. We explain how spatial and temporal change detection is used for forecasting from satellite imagery, where forecasting is based on a time series of images processed with an artificial neural network. Neural networks have recently gained considerable interest in time series prediction due to their ability to learn nonlinear dependencies effectively from large volumes of possibly noisy data. Unsupervised neural networks reveal useful information from temporal sequences and have demonstrated power in cluster analysis and dimensionality reduction; in unsupervised learning, no pre-classification or pre-labeling of the input data is needed. The SOFM is one such unsupervised neural network used for time series prediction, where the goal is to construct a model that can predict the future of the measured process of interest. Various approaches to time series prediction have been used over the years, and it is a research area with applications in diverse fields such as weather forecasting, speech recognition, and remote sensing. Advances in remote sensing technology and the availability of high-resolution images in recent years have motivated many researchers to study patterns in such images for the purpose of trend analysis.
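For concreteness, below is a minimal sketch of a one-dimensional SOFM trained on sliding windows of a time series. This is an illustrative assumption of how the clustering stage of such a pipeline might be set up, not the paper's actual method; all sizes and schedules are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.normal(size=2000)

win = 8                                       # window length fed to the map
data = np.lib.stride_tricks.sliding_window_view(series, win)

n_units = 16
weights = rng.normal(0, 0.1, (n_units, win))  # codebook vectors

for epoch in range(10):
    lr = 0.5 * (1 - epoch / 10)               # decaying learning rate
    sigma = max(1.0, n_units / 2 * (1 - epoch / 10))  # shrinking neighborhood
    for x in data[rng.permutation(len(data))]:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)            # pull BMU and neighbors

# Each unit now codes a typical local waveform; a simple predictor can map
# the winning unit of the latest window to the next observed value.
```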


Author(s):  
Yao Qin ◽  
Dongjin Song ◽  
Haifeng Chen ◽  
Wei Cheng ◽  
Guofei Jiang ◽  
...  

The nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a. input features) at each time step by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all time steps. With this dual-stage attention scheme, our model can not only make predictions effectively but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN outperforms state-of-the-art methods for time series prediction.
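As a rough illustration of what the two attention stages weight, here is a toy NumPy sketch with random, untrained parameters. The actual DA-RNN wraps these computations in trained LSTM encoder and decoder networks, which are omitted here; all shapes and variable names are illustrative assumptions.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(2)
n_series, T, hid = 5, 20, 8
X = rng.normal(size=(n_series, T))     # driving (exogenous) series
h_prev = rng.normal(size=hid)          # previous encoder hidden state

# Stage 1: input attention -- score each driving series against the
# previous encoder state and reweight the inputs.
W_e = rng.normal(size=(n_series, hid))
U_e = rng.normal(size=(n_series, T))
scores = np.tanh(W_e @ h_prev + (U_e * X).sum(axis=1))
alpha = softmax(scores)                # one weight per driving series
x_tilde = alpha * X[:, 0]              # attended input at a given step

# Stage 2: temporal attention -- score all encoder hidden states against
# the previous decoder state to form a context vector.
H = rng.normal(size=(T, hid))          # stand-in encoder hidden states
d_prev = rng.normal(size=hid)          # previous decoder hidden state
beta = softmax(H @ d_prev)             # one weight per time step
context = beta @ H                     # weighted sum of encoder states
```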


Mathematics ◽  
2020 ◽  
Vol 8 (8) ◽  
pp. 1374
Author(s):  
Miguel Atencia ◽  
Ruxandra Stoean ◽  
Gonzalo Joya

The application of echo state networks to time series prediction has provided notable results, favored by their reduced computational cost, since the connection weights require no learning. However, there is a need for general methods that guide the choice of parameters (particularly the reservoir size and ridge regression coefficient), improve prediction accuracy, and provide an assessment of the uncertainty of the estimates. In this paper, we propose such a mechanism for uncertainty quantification based on Monte Carlo dropout, where a subset of reservoir units is zeroed before the output is computed. Dropout is performed only at the test stage, since the immediate goal is the computation of a measure of the goodness of the prediction. Results show that the proposal is a promising method for uncertainty quantification, providing a value that is either strongly correlated with the prediction error or reflects the prediction of qualitative features of the time series. This mechanism could eventually be incorporated into the learning algorithm in order to obtain performance enhancements and alleviate the burden of parameter choice.
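A hedged sketch of the test-time procedure, assuming a standard inverted-dropout rescaling; the function name, dropout rate, and sample count are illustrative and not taken from the paper:

```python
import numpy as np

def mc_dropout_predict(states, W_out, p_drop=0.2, n_samples=100, rng=None):
    """states: (T, N) reservoir states; W_out: (N,) trained readout weights.

    Applies the already-trained readout repeatedly while zeroing random
    subsets of reservoir units; the spread of the resulting predictions
    serves as an uncertainty estimate (no retraining is involved).
    """
    rng = rng or np.random.default_rng()
    preds = []
    for _ in range(n_samples):
        mask = rng.random(states.shape[1]) > p_drop          # keep each unit w.p. 1-p
        preds.append(states * mask @ W_out / (1 - p_drop))   # rescale surviving units
    preds = np.array(preds)                                  # (n_samples, T)
    return preds.mean(axis=0), preds.std(axis=0)             # prediction, uncertainty
```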


Author(s):  
Hiroshi Kajino

Dynamic Boltzmann machines (DyBMs) are recently developed generative models of time series. They are designed to learn a time series with efficient online learning algorithms, while taking long-term dependencies into account with the help of eligibility traces: recursively updatable memory units storing descriptive statistics of all the past data. Current DyBMs assume a finite-dimensional time series and cannot be applied to a functional time series, in which the dimension goes to infinity (e.g., spatiotemporal data on a continuous space). In this paper, we present the functional dynamic Boltzmann machine (F-DyBM) as a generative model of a functional time series. A technical challenge is to devise an online learning algorithm with which the F-DyBM, consisting of functions and integrals, can learn a functional time series using only finite observations of it. We rise to this challenge by combining a kernel-based function approximation method with a statistical interpolation method, and we derive closed-form update rules. We design numerical experiments to empirically confirm the effectiveness of our solutions. The experimental results demonstrate consistent error reductions compared to baseline methods, from which we conclude the effectiveness of the F-DyBM for functional time series prediction.
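As a pointer to the mechanism, the sketch below shows eligibility-trace bookkeeping of the kind DyBM-style models use for online learning: each trace is a decayed running summary of the past series, updated recursively, so no history needs to be stored. The decay rates and the simple linear predictor are illustrative assumptions and do not reproduce the F-DyBM itself.

```python
import numpy as np

rng = np.random.default_rng(3)
T, decays = 500, np.array([0.2, 0.5, 0.8])   # one trace per decay rate

x = np.sin(np.arange(T) * 0.1) + 0.05 * rng.normal(size=T)
traces = np.zeros(len(decays))               # recursively updated memory
w, bias, lr = np.zeros(len(decays)), 0.0, 0.01

for t in range(T - 1):
    pred = w @ traces + bias                 # predict next value from traces
    err = x[t + 1] - pred
    w += lr * err * traces                   # online (SGD-style) update
    bias += lr * err
    traces = decays * traces + x[t]          # decay old summary, add new datum
```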


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1560
Author(s):  
Lina Jaurigue ◽  
Elizabeth Robertson ◽  
Janik Wolters ◽  
Kathy Lüdge

Reservoir computing is a machine learning method that solves tasks using the response of a dynamical system to a certain input. As the training scheme only involves optimising the weights of the responses of the dynamical system, this method is particularly suited for hardware implementation. Furthermore, the inherent memory of dynamical systems suitable for use as reservoirs means that this method has the potential to perform well on time series prediction tasks, as well as other tasks with time dependence. However, reservoir computing still requires extensive task-dependent parameter optimisation in order to achieve good performance. We demonstrate that, by including a time-delayed version of the input for various time series prediction tasks, good performance can be achieved with an unoptimised reservoir. Furthermore, we show that by including the appropriate time-delayed input, one unaltered reservoir can perform well on six different time series prediction tasks at a very low computational expense. Our approach is of particular relevance to hardware-implemented reservoirs, as one does not necessarily have access to pertinent optimisation parameters in physical systems, but the inclusion of an additional input is generally possible.
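A minimal sketch of the delayed-input idea, assuming a generic echo state reservoir: the network receives both u(t) and u(t - d), so memory the unoptimised reservoir may lack is supplied directly through the input layer. All sizes, the delay, and the random input series are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, d = 100, 2000, 7                      # reservoir size, length, input delay

W_in = rng.uniform(-0.5, 0.5, (N, 2))       # two input channels: u(t), u(t-d)
W = rng.normal(0, 1, (N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

u = rng.uniform(-1, 1, T)
X, x = np.zeros((T, N)), np.zeros(N)
for t in range(d, T):
    inp = np.array([u[t], u[t - d]])        # current and delayed input
    x = np.tanh(W @ x + W_in @ inp)
    X[t] = x

# Readout training (e.g., ridge regression on X) then proceeds exactly as
# for a standard reservoir; only the input layer has changed.
```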

