Training, Programming, and Correction Techniques of Memristor-Crossbar Neural Networks with Non-Ideal Effects such as Defects, Variation, and Parasitic Resistance

Author(s): Tien Van Nguyen, Jiyong An, Seokjin Oh
2017, Vol 66, pp. 31-40

Author(s): Raqibul Hasan, Tarek M. Taha, Chris Yakopcic
2021, Vol 21 (3), pp. 1833-1844

Author(s): Kyojin Kim, Kamran Eshraghian, Hyunsoo Kang, Kyoungrok Cho

Nano memristor crossbar arrays, which can represent analog signals in smaller silicon areas, are widely used to implement the node weights of neural networks. The crossbar arrays provide high computational efficiency, as they can perform additions and multiplications simultaneously at each cross-point. In this study, we propose a new memristor crossbar array architecture consisting of multi-weight nano memristors at each cross-point. Because the proposed architecture can represent multiple integer-valued weights, it improves the precision of the weight coefficients compared with existing memristor-based neural networks. This study presents a Radix-11 nano memristor crossbar array with weighted memristors and validates the operation of the circuits that use the arrays through circuit-level simulation. With the proposed Radix-11 approach, eleven integer-valued weights can be represented. In addition, this study presents a neural network designed using the proposed Radix-11 weights as an example of a high-performance AI application. The neural network implements a speech-keyword detection algorithm and was designed on the TensorFlow platform. The implemented keyword-detection algorithm recognizes 35 Korean words with an inference accuracy of 95.45%, a reduction of only about 2% compared to the 97.53% accuracy of the real-valued weight case.
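The core idea above can be illustrated numerically: real-valued weights are quantized to eleven integer levels and the crossbar's multiply-accumulate is modeled as a dot product. This is a minimal sketch; the level assignment {-5, ..., +5} and the scaling scheme are illustrative assumptions, not the paper's exact circuit mapping.

```python
import numpy as np

def quantize_radix11(w, w_max=None):
    """Map real-valued weights onto 11 integer levels {-5, ..., +5}.
    The symmetric 11-level range is an assumption for illustration."""
    if w_max is None:
        w_max = np.max(np.abs(w))
    levels = np.round(w / w_max * 5.0)          # scale to [-5, 5] and round
    return np.clip(levels, -5, 5).astype(int)

def crossbar_mac(x, w_int, w_max):
    """Crossbar multiply-accumulate: additions and multiplications
    happen together at each cross-point (modeled here as a dot product)."""
    return x @ (w_int * w_max / 5.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))                     # real-valued layer weights
x = rng.normal(size=(2, 4))                     # input activations
w_q = quantize_radix11(w)                       # 11-level integer weights
y = crossbar_mac(x, w_q, np.max(np.abs(w)))     # approximate layer output
```

The quantized output `y` approximates `x @ w` with the precision loss that the reported ~2% accuracy drop reflects.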


Author(s): Xiaoyang Liu, Zhigang Zeng

The paper presents memristor crossbar architectures for implementing the layers of deep neural networks, including the fully connected layer, the convolutional layer, and the pooling layer. The crossbars achieve positive and negative weight values and approximately realize various nonlinear activation functions. The layers constructed from these crossbars are then used to build the memristor-based multi-layer neural network (MMNN) and the memristor-based convolutional neural network (MCNN). Two in-situ weight-update schemes, the fixed-voltage update and the approximately linear update, are used to train the networks. Considering variations resulting from the inherent characteristics of memristors and from errors in the programming voltages, the robustness of the MMNN and the MCNN to these variations is analyzed. Simulation results on standard datasets show that deep neural networks (DNNs) built from the memristor crossbars perform satisfactorily in pattern-recognition tasks and exhibit a degree of robustness to memristor variations.
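A common way to realize the positive and negative weight values mentioned above is a differential pair of crossbars: each signed weight is split across two non-negative conductances and the output currents are subtracted. The sketch below assumes this standard differential scheme and illustrative conductance bounds; the paper's exact circuit may differ.

```python
import numpy as np

def to_differential(w, g_min=1e-5, g_max=1e-3):
    """Split a signed weight matrix into two non-negative conductance
    matrices (G_plus, G_minus) so that w is proportional to G_plus - G_minus.
    g_min/g_max are illustrative device bounds, not values from the paper."""
    scale = (g_max - g_min) / np.max(np.abs(w))
    g_plus = np.where(w > 0, g_min + w * scale, g_min)
    g_minus = np.where(w < 0, g_min - w * scale, g_min)
    return g_plus, g_minus, scale

def crossbar_layer(v_in, g_plus, g_minus, scale):
    """The difference of the two crossbars' output currents recovers
    the signed weighted sum v_in @ w."""
    i_out = v_in @ g_plus - v_in @ g_minus
    return i_out / scale

rng = np.random.default_rng(0)
w = rng.normal(size=(5, 3))          # signed layer weights
v = rng.normal(size=(2, 5))          # input voltages
gp, gm, s = to_differential(w)
y = crossbar_layer(v, gp, gm, s)     # matches v @ w up to float error
```

Subtracting the two current sums cancels the common `g_min` offset, which is why only the difference carries the signed weight.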


Memristor circuits have become one of the potential hardware platforms for implementing artificial neural networks due to their many advantageous features. In this paper, we compare the power consumption of an analog memristor crossbar-based and a binary memristor crossbar-based neural network realizing a two-layer neural network, and we propose an efficient method for reducing the power consumption of the analog memristor crossbar-based neural network. A two-layer neural network is implemented using memristor crossbar arrays, which can be used with either analog or binary synapses. For recognizing the test samples of the MNIST dataset, the binary memristor crossbar-based neural network consumes 19% more power than the analog memristor-based neural network. The power consumption of the analog memristor crossbar-based neural network strongly depends on the distribution of memristance values, and it can be reduced by optimizing that distribution. To improve power efficiency, the bias resistance should be selected close to the high-resistance state. The power consumption of the analog memristor-based neural network is reduced by 86% when the bias resistance is increased from 20 kΩ to 160 kΩ. With a bias resistance of 160 kΩ, the analog memristor crossbar-based neural network consumes 89% less power than the binary memristor crossbar-based neural network.
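The dependence of power on the memristance distribution can be seen from the simplest static model, where each cell dissipates V²/R during a read. The sketch below uses this idealized model (ignoring wire and sense-circuit losses, which the paper's figures include), so the ratios are illustrative only.

```python
import numpy as np

def crossbar_power(v_in, memristance):
    """Static power of a crossbar read: each cell at row i, column j
    dissipates V_i^2 / R_ij. Higher memristance (e.g., a bias resistance
    near the high-resistance state) lowers the total power."""
    g = 1.0 / memristance                      # conductance per cell
    return np.sum((v_in[:, None] ** 2) * g)    # sum of V_i^2 * G_ij

v = np.array([0.5, 0.5, 0.5])                           # read voltages
p_low_bias = crossbar_power(v, np.full((3, 4), 20e3))   # 20 kΩ cells
p_high_bias = crossbar_power(v, np.full((3, 4), 160e3)) # 160 kΩ cells
```

In this idealized model, raising every resistance 8x cuts power by exactly 8x; the paper's 86% reduction is smaller because only the bias resistances change and other losses remain.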


Materials, 2019, Vol 12 (24), pp. 4097
Author(s): Son Ngoc Truong

Memristor crossbar arrays without selector devices, such as complementary metal-oxide-semiconductor (CMOS) devices, are a promising candidate for realizing neuromorphic computing systems. However, the resistance of the metal wires is one of the factors that degrade the performance of memristor crossbar circuits. In this work, we propose a wire-resistance modeling method and a parasitic resistance-adapted programming scheme to reduce the impact of wire resistance in a memristor crossbar-based neuromorphic computing system. The equivalent wire resistances of the cells are estimated by analyzing the crossbar circuit using the superposition theorem. In the conventional programming scheme, the connection matrix composed of the target memristance values is used directly for crossbar-array programming. In the proposed parasitic resistance-adapted programming scheme, the connection matrix is updated before it is used for crossbar-array programming to compensate for the equivalent wire resistance: the updated connection matrix is obtained by subtracting the equivalent connection matrix from the original connection matrix. Circuit simulations are performed to test the proposed wire-resistance modeling method and the parasitic resistance-adapted programming scheme. The simulation results show that the discrepancy between the output voltages of the crossbar under the conventional and the proposed wire-resistance modeling methods is as low as 2.9% as the wire resistance varies from 0.5 to 3.0 Ω. The recognition rate of the memristor crossbar with the conventional programming scheme is 99%, 95%, 81%, and 65% when the wire resistance is set to 1.5, 2.0, 2.5, and 3.0 Ω, respectively. By contrast, the memristor crossbar with the proposed parasitic resistance-adapted programming scheme maintains a recognition rate as high as 100% even when the wire resistance is as high as 3.0 Ω.
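The compensation step described above can be sketched in a few lines: the programmed memristance is the target memristance minus the cell's equivalent series wire resistance, so that the programmed value plus the wire path sums back to the target. The per-cell wire model below (row segments plus column segments) is a simplified stand-in for the paper's superposition-based analysis.

```python
import numpy as np

def equivalent_wire_resistance(n_rows, n_cols, r_wire):
    """Rough per-cell equivalent series wire resistance: a cell at (i, j)
    sees i row segments plus (n_cols - j) column segments of resistance
    r_wire each. A simplified stand-in for the superposition analysis."""
    rows = np.arange(1, n_rows + 1)[:, None]    # row segments traversed
    cols = np.arange(n_cols, 0, -1)[None, :]    # column segments traversed
    return r_wire * (rows + cols)

def adapted_programming_matrix(r_target, r_wire_equiv):
    """Parasitic resistance-adapted programming: subtract the equivalent
    wire-resistance matrix from the target connection matrix, so that the
    programmed memristance plus the series wire path matches the target."""
    return r_target - r_wire_equiv

r_target = np.full((4, 4), 10e3)                       # 10 kΩ target cells
r_equiv = equivalent_wire_resistance(4, 4, 2.0)        # 2 Ω per segment
r_prog = adapted_programming_matrix(r_target, r_equiv)
```

By construction `r_prog + r_equiv` equals `r_target`, which is the compensation the adapted scheme provides.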


2019, Vol 66 (7), pp. 2937-2945
Author(s): Injune Yeo, Myonglae Chu, Sang-Gyun Gi, Hyunsang Hwang, Byung-Geun Lee

Author(s): O. Krestinskaya, A. P. James

Randomly switching neurons on and off during the training and inference processes is an interesting characteristic of biological neural networks that potentially underlies the inherent adaptability and creativity of the human mind. Dropouts are inspired by this random switching behaviour; in artificial neural networks they are used as a regularization technique to reduce over-fitting during training. Energy-efficient digital implementations of convolutional neural networks (CNNs) have been on the rise for edge-computing IoT applications, where pruning larger networks and optimizing for accuracy have been the main directions of work. As opposed to this approach, we propose to build a near-sensor analogue CNN with high-density memristor crossbar arrays. Since analogue designs use several active elements such as amplifiers, energy efficiency becomes a main challenge. To address this, we extend the idea of using dropouts from training to the inference stage as well. The CNN implementation requires a subsampling layer, which is implemented as a mean-pooling layer in the design to ensure lower energy consumption. Along with the dropouts, we also investigate the effect of non-idealities of the memristor and of the network.
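Extending dropout to inference, as proposed above, amounts to randomly masking neuron outputs so the corresponding analogue amplifiers can be power-gated. A minimal sketch, assuming the standard inverted-dropout rescaling (the paper's analogue circuit realizes the switching differently):

```python
import numpy as np

def inference_dropout(activations, keep_prob, rng):
    """Dropout extended to the inference stage: randomly switch neurons
    off (their amplifiers could then be power-gated) and rescale the
    survivors by 1/keep_prob to preserve the expected layer output."""
    mask = rng.random(activations.shape) < keep_prob
    return np.where(mask, activations / keep_prob, 0.0), mask

rng = np.random.default_rng(1)
a = np.ones((1, 8))                          # toy layer activations
out, mask = inference_dropout(a, 0.75, rng)  # ~25% of neurons switched off
```

The energy saving comes from the dropped fraction of active elements; the rescaling keeps the mean activation unchanged so downstream layers see statistics similar to the full network.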

