Hardware Implementation of a Multiuser Detection Scheme Based on Recurrent Neural Networks

Author(s):  
Wolfgang Schlecker ◽  
Achim Engelhart ◽  
Werner G. Teich ◽  
Hans-Jörg Pfleiderer

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 110293-110305 ◽  
Author(s):  
Ke Xiao ◽  
Jianyu Zhao ◽  
Yunhua He ◽  
Chaofei Li ◽  
Wei Cheng

2020 ◽  
Author(s):  
Yanan Zhong ◽  
Jianshi Tang ◽  
Xinyi Li ◽  
Bin Gao ◽  
He Qian ◽  
...  

Abstract Reservoir computing (RC) is a highly efficient framework for processing spatiotemporal signals, owing to its low training cost compared to standard recurrent neural networks. The design of distinct reservoir states plays a very important role in the hardware implementation of an RC system. Recent studies have used device-to-device variation to generate distinct reservoir states; however, this approach is neither well controlled nor reproducible. To solve this problem, we report a dynamic-memristor-based RC system. By applying a controllable mask process, we show that even a single dynamic memristor can generate rich reservoir states and realize the complete reservoir function. We further build a parallel RC system that efficiently handles spatiotemporal tasks, including spoken-digit and handwritten-digit recognition, achieving high classification accuracies of 99.6% and 97.6%, respectively. The performance of the dynamic-memristor-based RC system is almost equivalent to that of a software-based one. Moreover, our RC system does not require additional read operations, which allows it to make full use of the device nonlinearity and further improves system efficiency. Our work could pave the way toward high-efficiency memristor-based RC systems that handle more complex spatiotemporal tasks in the future.
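The mask process described above can be sketched in a few lines of code. The following Python snippet is a minimal illustration under our own assumptions, not the authors' implementation: the dynamic memristor is stood in for by a simple leaky nonlinear node, a random binary mask time-multiplexes each input sample into virtual reservoir nodes, and only a linear readout is trained (here by ridge regression), which is what keeps RC training cheap.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VIRTUAL = 50      # virtual nodes created by the mask (assumed value)
LEAK = 0.3          # decay rate of the dynamic node's state (assumed value)

# Random binary mask {-1, +1}: a stand-in for the "controllable mask process".
mask = rng.choice([-1.0, 1.0], size=N_VIRTUAL)

def reservoir_states(u):
    """Time-multiplex each input sample through one dynamic node.

    u: 1-D array of input samples. Returns a (len(u), N_VIRTUAL)
    matrix of reservoir states, one row per input sample.
    """
    x = 0.0
    states = np.empty((len(u), N_VIRTUAL))
    for t, sample in enumerate(u):
        for j in range(N_VIRTUAL):
            # Leaky nonlinear response stands in for the memristor dynamics.
            x = (1 - LEAK) * x + LEAK * np.tanh(mask[j] * sample)
            states[t, j] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: the only trained part of an RC system."""
    S = np.hstack([states, np.ones((len(states), 1))])  # add bias column
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy usage: learn to reproduce a one-step-delayed version of a random signal.
u = rng.standard_normal(500)
y = np.roll(u, 1)
S = reservoir_states(u)
W = train_readout(S, y)
y_hat = np.hstack([S, np.ones((len(S), 1))]) @ W
print("train MSE:", np.mean((y_hat - y) ** 2))
```

Because only `train_readout` involves optimization, swapping the toy node model for measured memristor responses would leave the training procedure unchanged.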


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models over non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation yielded increased performance over both non-augmented data and conventional SMILES-randomization augmentation when used to train baseline models. Furthermore, Levenshtein augmentation appears to produce what we define as "attentional gain": an enhancement in the underlying network's ability to recognize molecular motifs.
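The abstract does not spell out the pairing algorithm, so the Python sketch below shows one plausible reading: among randomized SMILES variants of a reactant and its product, pick the pair with the smallest Levenshtein (edit) distance, so that local sub-sequences stay aligned in the training pair. The helper names (`levenshtein`, `augmented_pair`) and the candidate count are our own assumptions; RDKit's `doRandom` flag is used to generate randomized SMILES.

```python
from rdkit import Chem  # RDKit supplies SMILES parsing and randomization

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def augmented_pair(reactant_smiles, product_smiles, n_candidates=20):
    """Pick the randomized reactant/product SMILES pair with the smallest
    edit distance (our assumed selection rule, not the paper's exact one)."""
    r_mol = Chem.MolFromSmiles(reactant_smiles)
    p_mol = Chem.MolFromSmiles(product_smiles)
    r_variants = [Chem.MolToSmiles(r_mol, doRandom=True)
                  for _ in range(n_candidates)]
    p_variants = [Chem.MolToSmiles(p_mol, doRandom=True)
                  for _ in range(n_candidates)]
    return min(((r, p) for r in r_variants for p in p_variants),
               key=lambda pair: levenshtein(*pair))

# Toy usage: an esterification-like reactant/product pair.
print(augmented_pair("CCO", "CC(=O)OCC"))
```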

