Memristor Crossbar Architecture for Synchronous Neural Networks

2014 ◽  
Vol 61 (8) ◽  
pp. 2390-2401 ◽  
Author(s):  
Janusz A. Starzyk ◽  
Basawaraj

2017 ◽  
Vol 66 ◽  
pp. 31-40 ◽  
Author(s):  
Raqibul Hasan ◽  
Tarek M. Taha ◽  
Chris Yakopcic

2021 ◽  
Vol 21 (3) ◽  
pp. 1833-1844
Author(s):  
Kyojin Kim ◽  
Kamran Eshraghian ◽  
Hyunsoo Kang ◽  
Kyoungrok Cho

Nano memristor crossbar arrays, which can represent analog signals in smaller silicon areas, are widely used to represent the node weights of neural networks. Crossbar arrays provide high computational efficiency because they perform additions and multiplications simultaneously at each cross-point. In this study, we propose a new memristor crossbar array architecture consisting of multi-weight nano memristors at each cross-point. Because the proposed architecture can represent multiple integer-valued weights, it enhances the precision of the weight coefficients compared with existing memristor-based neural networks. This study presents a Radix-11 nano memristor crossbar array with weighted memristors and validates the operation of circuits using the arrays through circuit-level simulation. With the proposed Radix-11 approach, eleven integer-valued weights can be represented. In addition, this study presents a neural network designed using the proposed Radix-11 weights as an example of a high-performance AI application. The neural network implements a speech-keyword detection algorithm and was designed on the TensorFlow platform. The implemented keyword-detection algorithm recognizes 35 Korean words with an inference accuracy of 95.45%, only about 2% lower than the 97.53% accuracy of the real-valued-weight case.
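The core idea above — a crossbar column summing the products of input voltages and stored integer weights — can be sketched numerically. This is a minimal NumPy model, not the paper's circuit: the abstract only states that eleven integer-valued weights are representable, so the symmetric level range -5..+5 and the scaling scheme below are assumptions for illustration.

```python
import numpy as np

def quantize_radix11(w, w_max=None):
    """Quantize real-valued weights to 11 integer levels in [-5, +5].

    Simplified model of a Radix-11 weight scheme (the level range is an
    assumption; the paper's device-level encoding is not described here).
    """
    if w_max is None:
        w_max = np.max(np.abs(w))
    levels = np.clip(np.round(w / w_max * 5), -5, 5).astype(int)
    return levels, w_max / 5  # integer levels plus the scale back to reals

def crossbar_mac(v_in, levels, scale):
    """Model the crossbar read-out: each cross-point multiplies its input
    voltage by the stored weight, and column currents sum the products."""
    return v_in @ (levels * scale)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))           # real-valued weight matrix
levels, scale = quantize_radix11(w)   # integer levels in [-5, 5]
v = rng.normal(size=4)                # input voltages
y_quant = crossbar_mac(v, levels, scale)
y_real = v @ w                        # reference: unquantized result
```

The quantized output `y_quant` approximates `y_real`; the roughly 2% accuracy drop reported in the abstract reflects exactly this kind of quantization error at the network level.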


Micromachines ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 690
Author(s):  
Minh Le ◽  
Thi Kim Hang Pham ◽  
Son Ngoc Truong

We performed a comparative study of the Gaussian noise and memristance variation tolerance of three crossbar architectures, namely the complementary crossbar architecture, the twin crossbar architecture, and the single crossbar architecture, for neuromorphic image recognition, and conducted an experiment to determine the performance of the single crossbar architecture for simple pattern recognition. Ten grayscale images with a size of 32×32 pixels were used for testing and comparing the recognition rates of the three architectures. The recognition rates of the three memristor crossbar architectures were compared as the noise level of the images was varied from −10 to 4 dB and the percentage of memristance variation was varied from 0% to 40%. The simulation results showed that the single crossbar architecture had the best tolerance to Gaussian input noise and memristance variation in terms of recognition rate. At a signal-to-noise ratio of −10 dB, the single crossbar architecture produced a recognition rate of 91%, which was 2% and 87% higher than those of the twin crossbar architecture and the complementary crossbar architecture, respectively. When the memristance variation reached 40%, the single crossbar architecture had a recognition rate as high as 67.8%, which was 1.8% and 9.8% higher than the recognition rates of the twin crossbar architecture and the complementary crossbar architecture, respectively. Finally, we carried out an experiment to determine the performance of the single crossbar architecture with a fabricated 3×3 memristor crossbar based on carbon fiber and aluminum film. The experiment demonstrated the successful implementation of pattern recognition with the single crossbar architecture.
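The evaluation protocol above — measuring recognition rate while sweeping input SNR and device variation — can be sketched as a small simulation. This is a hypothetical single-crossbar template matcher, not the paper's setup: the pattern dimensions, the uniform multiplicative variation model, and the winner-take-all read-out are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each crossbar column stores one binary template as
# conductances; the column with the highest read-out current is the
# recognized class. 10 classes of 64-element patterns (illustrative sizes).
patterns = rng.integers(0, 2, size=(64, 10)).astype(float)

def recognize(x, g, variation_pct=0.0):
    """Read-out: column currents are x @ conductances. Memristance variation
    is modeled as uniform multiplicative noise on every device."""
    dev = 1.0 + rng.uniform(-variation_pct, variation_pct, size=g.shape) / 100.0
    return int(np.argmax(x @ (g * dev)))

def recognition_rate(snr_db, variation_pct, trials=200):
    """Fraction of noisy inputs recognized correctly at the given SNR and
    device-variation percentage."""
    correct = 0
    for _ in range(trials):
        k = int(rng.integers(10))
        x = patterns[:, k].copy()
        sig_p = np.mean(x ** 2)  # signal power for the additive-noise scale
        noise = rng.normal(scale=np.sqrt(sig_p / 10 ** (snr_db / 10)),
                           size=x.shape)
        correct += recognize(x + noise, patterns, variation_pct) == k
    return correct / trials
```

Sweeping `snr_db` from −10 to 4 and `variation_pct` from 0 to 40 reproduces the shape of the study's experiment, though the absolute rates depend on the pattern set and variation model chosen here.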


Author(s):  
Xiaoyang Liu ◽  
Zhigang Zeng

Abstract: The paper presents memristor crossbar architectures for implementing layers in deep neural networks, including the fully connected layer, the convolutional layer, and the pooling layer. The crossbars achieve positive and negative weight values and approximately realize various nonlinear activation functions. The layers constructed from the crossbars are then used to build the memristor-based multi-layer neural network (MMNN) and the memristor-based convolutional neural network (MCNN). Two in-situ weight update schemes, fixed-voltage update and approximately linear update, are used to train the networks. Considering variations resulting from the inherent characteristics of memristors and errors in the programming voltages, the robustness of the MMNN and MCNN to these variations is analyzed. Simulation results on standard datasets show that deep neural networks (DNNs) built from the memristor crossbars perform satisfactorily in pattern recognition tasks and have a degree of robustness to memristor variations.
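Because memristor conductances are non-negative, crossbars commonly realize signed weights with a pair of devices per weight, subtracting two column currents. A minimal NumPy sketch of that differential scheme for a fully connected layer follows; the mapping `w = g_pos - g_neg` is one standard choice, and the paper's exact circuit may differ.

```python
import numpy as np

def to_differential(w, g_max=1.0):
    """Map signed weights onto two non-negative conductance arrays.

    Assumes weights are pre-scaled into [-g_max, g_max]; each weight is
    stored as the difference of a positive-path and a negative-path device.
    """
    g_pos = np.clip(w, 0.0, g_max)
    g_neg = np.clip(-w, 0.0, g_max)
    return g_pos, g_neg

def fc_layer(v, g_pos, g_neg):
    """Fully connected layer read-out: subtract the two column currents to
    recover the signed dot product, then apply a ReLU activation."""
    y = v @ g_pos - v @ g_neg
    return np.maximum(y, 0.0)

rng = np.random.default_rng(2)
w = rng.uniform(-1.0, 1.0, size=(4, 3))  # signed weights in [-1, 1]
v = rng.normal(size=4)                   # input voltages
gp, gn = to_differential(w)
out = fc_layer(v, gp, gn)
```

The subtraction can be done in the analog domain (two crossbars or paired columns) or digitally after read-out; either way the layer computes the same signed matrix-vector product a software DNN layer would.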

