A hybrid precision low power computing-in-memory architecture for neural networks

2020 · pp. 103351
Author(s): Rui Xu, Linfeng Tao, Tianqi Wang, Xi Jin, Chenxia Li, ...
2018 · Vol 67 (5) · pp. 631-645
Author(s): Yu Bai, Ronald F. DeMara, Jia Di, Mingjie Lin

2021 · pp. 1-30
Author(s): Asieh Abolpour Mofrad, Samaneh Abolpour Mofrad, Anis Yazidi, Matthew Geoffrey Parker

Abstract: Associative memories enjoy many interesting properties in terms of error-correction capability, robustness to noise, storage capacity, and retrieval performance, and their usage spans a large set of applications. In this letter, we investigate and extend tournament-based neural networks, a sequence-storage associative memory architecture with high memory efficiency and accurate sequence retrieval, originally proposed by Jiang, Gripon, Berrou, and Rabbat (2016). We propose a more general method for learning the sequences, which we call feedback tournament-based neural networks. The retrieval process is also extended to both directions, forward and backward; in other words, any large-enough segment of a sequence can reproduce the whole sequence. Furthermore, two retrieval algorithms, cache-winner and explore-winner, are introduced to increase retrieval performance. Through simulation results, we shed light on the strengths and weaknesses of each algorithm.
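The bidirectional-retrieval idea in the abstract (any large-enough segment regenerates the whole stored sequence) can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's tournament-based architecture: it stores fixed-length context windows in plain forward and backward transition tables, with no neural clusters, error correction, or winner-selection strategies.

```python
# Toy illustration only (NOT the tournament-based architecture from the
# paper): store each sequence as forward and backward transition tables
# keyed on length-k context windows, then recover a full sequence from
# any segment long enough to contain at least one window.

def store(seqs, k=2):
    fwd, bwd = {}, {}
    for s in seqs:
        for i in range(len(s) - k):
            fwd[tuple(s[i:i + k])] = s[i + k]      # window -> next symbol
        for i in range(k, len(s)):
            bwd[tuple(s[i - k + 1:i + 1])] = s[i - k]  # window -> previous symbol
    return fwd, bwd

def retrieve(segment, fwd, bwd, k=2):
    # Assumes len(segment) >= k and acyclic stored sequences.
    out = list(segment)
    while tuple(out[-k:]) in fwd:                  # extend forward
        out.append(fwd[tuple(out[-k:])])
    while tuple(out[:k]) in bwd:                   # extend backward
        out.insert(0, bwd[tuple(out[:k])])
    return out
```

For example, after storing the sequence a b c d e, the interior segment c d is enough to recover the whole sequence in both directions. The real architecture achieves this with sparse clustered binary networks and tournament-style winner selection rather than exact lookup tables.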

