The Fixed-Point Implementations for Recurrent Neural Networks

Author(s): Hsien-Ju Ko ◽ Hao-Cheng Yang ◽ Yuan-Bin Wang ◽ Han He

2019 ◽ Vol 31 (2) ◽ pp. 312-329

Author(s): Benjamin Scellier ◽ Yoshua Bengio

Recurrent backpropagation and equilibrium propagation are supervised learning algorithms for fixed-point recurrent neural networks, which differ in their second phase. In the first phase, both algorithms converge to a fixed point that corresponds to the configuration where the prediction is made. In the second phase, equilibrium propagation relaxes to another nearby fixed point corresponding to smaller prediction error, whereas recurrent backpropagation uses a side network to compute error derivatives iteratively. In this work, we establish a close connection between these two algorithms. We show that at every moment in the second phase, the temporal derivatives of the neural activities in equilibrium propagation are equal to the error derivatives computed iteratively by recurrent backpropagation in the side network. This work shows that it is not required to have a side network for the computation of error derivatives and supports the hypothesis that in biological neural networks, temporal derivatives of neural activities may code for error signals.
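In symbols, and only as a hedged sketch (the generic energy E, cost C, state s, nudging strength β, and transition function f are notation chosen here, not taken from the paper), the two second phases can be written as

\[
\text{Equilibrium propagation:}\qquad \frac{\mathrm{d}s_t}{\mathrm{d}t} \;=\; -\frac{\partial E}{\partial s}(s_t) \;-\; \beta\,\frac{\partial C}{\partial s}(s_t), \qquad s_0 = s_\ast,\quad \frac{\partial E}{\partial s}(s_\ast) = 0,
\]
\[
\text{Recurrent backpropagation (side network):}\qquad z_{t+1} \;=\; \Big(\frac{\partial f}{\partial s}(s_\ast)\Big)^{\!\top} z_t \;+\; \frac{\partial C}{\partial s}(s_\ast), \qquad z_0 = 0,
\]

and the correspondence described in the abstract then reads, to first order in β and up to time-discretization and indexing conventions,

\[
-\frac{1}{\beta}\,\frac{\mathrm{d}s_t}{\mathrm{d}t} \;\approx\; z_t .
\]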


Author(s): Houssem Achouri ◽ Chaouki Aouiti

The main aim of this paper is to study the dynamics of recurrent neural networks with different input currents from the standpoint of their asymptotic behavior. Under certain conditions, we study the existence and uniqueness of bounded solutions, as well as their homoclinic and heteroclinic motions, for the considered system with rectangular current inputs. Moreover, we study the unpredictable behavior of both continuous-time and discrete-time high-order recurrent neural networks. Our method is primarily based on Banach’s fixed-point theorem, the topology of uniform convergence on compact sets, and the Gronwall inequality. To illustrate the theoretical results, we give examples together with numerical simulations.
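As a hedged illustration only (the abstract does not state the model equations, so the system and constants below are generic), a high-order recurrent neural network of the kind this argument is usually applied to reads

\[
x_i'(t) \;=\; -a_i x_i(t) \;+\; \sum_{j=1}^{n} b_{ij}\, f_j\big(x_j(t)\big) \;+\; \sum_{j=1}^{n}\sum_{k=1}^{n} c_{ijk}\, f_j\big(x_j(t)\big)\, f_k\big(x_k(t)\big) \;+\; I_i(t),
\]

and Banach's fixed-point theorem is applied to the operator given by the variation-of-constants formula on the space of bounded continuous functions. With bounds $|f_j| \le M_j$ and Lipschitz constants $L_j$ for the activations, a typical sufficient contraction condition is

\[
\max_{1\le i\le n}\; \frac{1}{a_i}\Big( \sum_{j=1}^{n} |b_{ij}|\, L_j \;+\; \sum_{j=1}^{n}\sum_{k=1}^{n} |c_{ijk}|\,\big(M_j L_k + M_k L_j\big) \Big) \;<\; 1 ;
\]

the paper's precise hypotheses for the rectangular input currents and for unpredictable solutions may differ.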


2018 ◽ Vol 8 (1)

Author(s): Bicky A. Marquez ◽ Laurent Larger ◽ Maxime Jacquot ◽ Yanne K. Chembo ◽ Daniel Brunner

2020

Author(s): Dean Sumner ◽ Jiazhen He ◽ Amol Thakkar ◽ Ola Engkvist ◽ Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call “Levenshtein augmentation”, which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated increased performance over both non-augmented data and conventional SMILES-randomization augmentation when used to train the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as “attentional gain”: an enhancement in the pattern-recognition capability of the underlying network with respect to molecular motifs.
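A minimal sketch of the selection step behind this kind of augmentation, assuming randomized reactant SMILES have already been generated: among several candidate strings for the same reactant, keep the variant whose edit distance to the product SMILES is smallest, so that the training pair shares as much local sub-sequence structure as possible. All helper names and example strings below are hypothetical; the paper's actual pipeline (and its use of a cheminformatics toolkit to produce the randomized SMILES) may differ.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def pick_closest_variant(candidates: list[str], product_smiles: str) -> str:
    """Return the reactant SMILES variant with minimal edit distance to the product."""
    return min(candidates, key=lambda s: levenshtein(s, product_smiles))


if __name__ == "__main__":
    # Hypothetical randomized SMILES for the same reactant (toluene), plus a product.
    reactant_variants = ["Cc1ccccc1", "c1ccccc1C", "c1ccc(C)cc1"]
    product = "Cc1ccc(Br)cc1"  # 4-bromotoluene
    best = pick_closest_variant(reactant_variants, product)
    print(best, levenshtein(best, product))
```

Ranking candidates by edit distance keeps the sub-sequences shared between reactant and product aligned in the training pair, which is one plausible reading of the "local SMILES sub-sequence similarity" the abstract refers to.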

