Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models

2021 ◽  
Author(s):  
Hiroto Tamura ◽  
Gouhei Tanaka

2021 ◽  
pp. 24-36
Author(s):  
Unai Armentia ◽  
Irantzu Barrio ◽  
Javier Del Ser

AIP Advances ◽  
2018 ◽  
Vol 8 (5) ◽  
pp. 055602 ◽  
Author(s):  
George Bourianoff ◽  
Daniele Pinna ◽  
Matthias Sitte ◽  
Karin Everschor-Sitte

2012 ◽  
Vol 9 (5) ◽  
pp. 6101-6134 ◽  
Author(s):  
N. J. de Vos

Abstract. Despite the theoretical benefits of recurrent artificial neural networks over their feedforward counterparts, it is still unclear whether the former offer practical advantages as rainfall-runoff models. The main drawback of recurrent networks is the increased complexity of the training procedure due to their architecture. This work uses recently introduced, conceptually simple reservoir computing models for one-day-ahead forecasts on twelve river basins in the Eastern United States, and compares them to a variety of traditional feedforward and recurrent models. Two modifications on the reservoir computing models are made to increase the hydrologically relevant information content of their internal state. The results show that the reservoir computing networks outperform feedforward networks and are competitive with state-of-the-art recurrent networks, across a range of performance measures. This, along with their simplicity and ease of training, suggests that reservoir computing models can be considered promising alternatives to traditional artificial neural networks in rainfall-runoff modelling.
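The core idea of the abstract — a fixed recurrent reservoir whose internal state feeds a trained linear readout for one-step-ahead forecasting — can be sketched as follows. This is a minimal illustrative echo state network on a synthetic signal, not the paper's hydrological setup; the reservoir size, spectral radius, and ridge parameter are assumptions chosen for the toy example.

```python
import numpy as np

# Minimal echo state network (ESN) sketch for one-step-ahead forecasting.
# Illustrative only: a synthetic sine signal stands in for rainfall-runoff
# data, and the paper's two state modifications are not reproduced.

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))       # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))         # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))          # spectral radius < 1

u = np.sin(np.linspace(0, 20 * np.pi, 1000))[:, None]  # input series
x = np.zeros(n_res)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W_in @ u[t] + W @ x)   # reservoir update (never trained)
    states.append(x.copy())
X = np.array(states[100:])             # discard initial washout
Y = u[101:, 0]                         # one-step-ahead targets

# Only the linear readout is trained, here by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

Because training reduces to a single linear solve over collected states, this avoids backpropagation through time, which is the "simplicity and ease of training" the abstract refers to.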


2021 ◽  
pp. 115022
Author(s):  
Wei-Jia Wang ◽  
Yong Tang ◽  
Jason Xiong ◽  
Yi-Cheng Zhang

2019 ◽  
Author(s):  
Federica Eftimiadi ◽  
Enrico Pugni Trimigliozzi

Reversible computing is a paradigm in which computing models are defined so that they reflect physical reversibility, one of the fundamental microscopic physical properties of Nature. Reversible computing refers to computation that can always be reversed to recover its earlier state. It is based on reversible physics, which implies that we can never truly erase information in a computer. Reversible computing is very difficult, and its engineering hurdles are enormous. This paper provides a brief introduction to reversible computing. Even with these constraints, one can still satisfactorily deal with both the functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underlie any implementation of such systems.
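The idea that computation "could always be reversed to recover its earlier state" can be illustrated with the Toffoli (CCNOT) gate, a standard universal reversible gate; this toy sketch is not from the paper, just a minimal demonstration of the principle.

```python
# The Toffoli (CCNOT) gate is its own inverse: applying it twice
# recovers the original bits, so no information is ever erased.

def toffoli(a, b, c):
    """Flip the target bit c iff both control bits a and b are 1."""
    return a, b, c ^ (a & b)

state = (1, 1, 0)
forward = toffoli(*state)      # compute
restored = toffoli(*forward)   # running the gate again undoes it
assert restored == state       # the earlier state is fully recovered
```

Since every output maps back to exactly one input, such gates discard no information, which is the link to reversible physics (and, via Landauer's principle, to the energy cost of erasure) that the abstract invokes.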


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4496
Author(s):  
Vlad Pandelea ◽  
Edoardo Ragusa ◽  
Tommaso Apicella ◽  
Paolo Gastaldo ◽  
Erik Cambria

Emotion recognition, among other natural language processing tasks, has greatly benefited from the use of large transformer models. Deploying these models on resource-constrained devices, however, is a major challenge due to their computational cost. In this paper, we show that the combination of large transformers, as high-quality feature extractors, and simple hardware-friendly classifiers based on linear separators can achieve competitive performance while allowing real-time inference and fast training. Various solutions, including batch and online sequential learning, are analyzed. Additionally, our experiments show that latency and performance can be further improved via dimensionality reduction and pre-training, respectively. The resulting system is implemented on two types of edge devices: an edge accelerator and two smartphones.
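The recipe described here — a frozen feature extractor feeding a linear classifier that is trained sequentially, one sample at a time — can be sketched with a recursive least squares update. This is a simplified stand-in, not the paper's system: random Gaussian blobs play the role of transformer embeddings, and the RLS hyperparameters are assumptions.

```python
import numpy as np

# Sketch: frozen features + a linear classifier trained by recursive
# least squares (an online sequential learning scheme). Toy Gaussian
# blobs stand in for transformer embeddings of utterances.

rng = np.random.default_rng(1)
d_feat, n_cls, n = 64, 2, 400

X = np.vstack([rng.normal(-1, 1, (n // 2, d_feat)),   # class 0 features
               rng.normal(+1, 1, (n // 2, d_feat))])  # class 1 features
y = np.repeat([0, 1], n // 2)
T = np.eye(n_cls)[y]                                  # one-hot targets

W = np.zeros((d_feat, n_cls))   # linear separator weights
P = np.eye(d_feat) * 1e3        # inverse-covariance estimate
for x, t in zip(X, T):          # one sample at a time: no batch retraining
    x = x[:, None]
    k = P @ x / (1.0 + x.T @ P @ x)   # RLS gain vector
    W += k @ (t[None, :] - x.T @ W)   # correct weights toward the target
    P -= k @ x.T @ P                  # update inverse covariance

acc = np.mean(np.argmax(X @ W, axis=1) == y)
```

Each update costs a few small matrix-vector products rather than a gradient pass through the transformer, which is what makes training fast enough for the edge devices mentioned in the abstract.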

