memristor array
Recently Published Documents


TOTAL DOCUMENTS: 38 (five years: 19)
H-INDEX: 7 (five years: 1)

2021 ◽ Vol 15 ◽ Author(s): Junyun Zhao, Siyuan Huang, Osama Yousuf, Yutong Gao, Brian D. Hoskins, ...

While promising for high-capacity machine learning accelerators, memristor devices have non-idealities that prevent software-equivalent accuracies when used for online training. This work uses a combination of Mini-Batch Gradient Descent (MBGD) to average gradients, stochastic rounding to avoid vanishing weight updates, and decomposition methods to keep the memory overhead low during mini-batch training. Since the weight update has to be transferred to the memristor matrices efficiently, we also investigate the impact of reconstructing the gradient matrices both internally (rank-seq) and externally (rank-sum) to the memristor array. Our results show that the streaming batch principal component analysis (streaming batch PCA) and non-negative matrix factorization (NMF) decomposition algorithms can achieve near-MBGD accuracy in a memristor-based multi-layer perceptron trained on the MNIST (Modified National Institute of Standards and Technology) database with only 3 to 10 ranks, at significant memory savings. Moreover, NMF rank-seq outperforms streaming batch PCA rank-seq at low ranks, making it more suitable for hardware implementation in future memristor-based accelerators.
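The update pipeline the abstract describes (decompose the mini-batch gradient into a few rank-1 factors, then transfer them to the array either one at a time, rank-seq, or as a reconstructed sum, rank-sum, with stochastic rounding guarding against vanishing updates) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: a truncated SVD stands in for the streaming batch PCA/NMF factorizations, and the step size, learning rate, and function names are assumptions.

```python
import numpy as np

def stochastic_round(w, step):
    """Round each weight to a multiple of `step`, choosing the upper level
    with probability proportional to the remainder, so small gradient
    contributions survive in expectation instead of vanishing."""
    low = np.floor(w / step) * step
    p_up = (w - low) / step
    return low + step * (np.random.rand(*w.shape) < p_up)

def rank_k_update(grad, k):
    """Truncated-SVD stand-in for the streaming-PCA/NMF decompositions:
    return k rank-1 factors (a_i, b_i) whose sum approximates `grad`."""
    u, s, vt = np.linalg.svd(grad, full_matrices=False)
    return [(s[i] * u[:, i:i + 1], vt[i:i + 1, :]) for i in range(k)]

def apply_rank_sum(W, factors, lr, step):
    """rank-sum: reconstruct the full gradient externally, transfer once."""
    G = sum(a @ b for a, b in factors)
    return stochastic_round(W - lr * G, step)

def apply_rank_seq(W, factors, lr, step):
    """rank-seq: transfer each rank-1 factor as its own outer-product
    update, as a crossbar can apply it internally row-by-column."""
    for a, b in factors:
        W = stochastic_round(W - lr * (a @ b), step)
    return W
```

The memory saving comes from storing only the k rank-1 factors (k·(m+n) values) instead of the full m×n gradient accumulator during mini-batch training.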


Electronics ◽ 2021 ◽ Vol 10 (21) ◽ pp. 2600 ◽ Author(s): Yiyang Zhao, Yongjia Wang, Ruibo Wang, Yuan Rong, Xianyang Jiang

Since the memristor was discovered, it has shown great application potential in neuromorphic computing. Currently, most memristor-based neural networks exploit the analog characteristics of the memristor. However, owing to manufacturing-process limitations, non-ideal characteristics such as non-linearity, asymmetry, and cycle-to-cycle inconsistency appear frequently, so it remains a challenge to deploy analog memristors at scale. In contrast, a binary neural network (BNN) requires its weights to be either +1 or −1, which can be mapped onto digital memristors with high technical maturity. Based on this, a highly robust BNN inference accelerator with a binary sigmoid activation function is proposed. In the accelerator, the inputs of each network layer are either +1 or 0, which facilitates feature encoding and reduces the peripheral circuit complexity of the memristor hardware. The proposed two-column reference memristor structure, together with a current-controlled voltage source (CCVS) circuit, not only solves the problem of mapping positive and negative weights onto the memristor array but also eliminates the sneak-current effect under the minimum conductance status. Compared with the traditional differential-pair BNN structure, the proposed two-column reference scheme reduces both the number of memristors and the latency to refresh the memristor array by nearly 50%. The influence of non-ideal factors of the memristor array, such as array yield, conductance fluctuation, and reading noise, on BNN accuracy is investigated in detail based on a new memristor circuit model with non-ideal characteristics. The experimental results demonstrate that when the array yield is α ≥ 5%, or the reading noise is σ ≤ 0.25, a recognition accuracy greater than 97% on the MNIST data set is achieved.
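The weight-mapping idea can be illustrated numerically: each ±1 weight occupies a single device (high or low conductance), and a shared reference current recovers the sign of the dot product before the binary activation, instead of the differential pair's two devices per weight. This is a behavioral sketch only; the conductance levels, the midpoint reference model, and all function names are assumptions standing in for the paper's two-column reference structure and CCVS circuit.

```python
import numpy as np

G_ON, G_OFF = 1.0, 0.01  # high/low conductance levels (assumed values)

def map_weights(w):
    """Map binary weights {+1, -1} onto single-device conductances."""
    return np.where(w > 0, G_ON, G_OFF)

def bnn_layer(x, w):
    """One crossbar layer of BNN inference.  Inputs x are {1, 0}.  The
    reference current is modelled as the midpoint of an all-G_ON and an
    all-G_OFF column, a simplification of the two-column reference scheme:
    i_col - i_ref = sum_i x_i * w_i * (G_ON - G_OFF) / 2, so the sign of
    the binary dot product is recovered from a single device per weight."""
    G = map_weights(w)                    # (n_in, n_out) conductance matrix
    i_col = x @ G                         # column currents
    i_ref = x.sum() * (G_ON + G_OFF) / 2  # shared reference current
    return (i_col - i_ref > 0).astype(int)  # binary sigmoid -> {1, 0}
```

Because only one device per weight changes when the network is reprogrammed, and the reference columns are fixed, both the device count and the array-refresh latency roughly halve relative to a differential pair, consistent with the ~50% figure quoted above.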


2021 ◽ pp. 2100151 ◽ Author(s): Xulei Wu, Bingjie Dang, Hong Wang, Xiulong Wu, Yuchao Yang

2021 ◽ Vol 1976 (1) ◽ pp. 012079 ◽ Author(s): Desheng Ma, Yihong Hu, Nuo Xu, Chenglong Huang, Binbin Yang, ... ◽ Keyword(s):
