Approximated Prediction Strategy for Reducing Power Consumption of Convolutional Neural Network Processor

Author(s):  
Takayuki Ujiie ◽  
Masayuki Hiromoto ◽  
Takashi Sato

Sensors ◽
2021 ◽  
Vol 21 (22) ◽  
pp. 7468
Author(s):  
Yui-Kai Weng ◽  
Shih-Hsu Huang ◽  
Hsu-Yu Kao

In a CNN (convolutional neural network) accelerator, exploiting the sparsity of activation values is an effective way to reduce memory traffic and power consumption. Accordingly, several research efforts have been devoted to skipping ineffectual computations (i.e., multiplications by zero). Unlike previous works, in this paper we point out the similarity of activation values: (1) in the same layer of a CNN model, most feature maps are either highly dense or highly sparse; and (2) in the same layer of a CNN model, feature maps in different channels are often similar. Based on these two observations, we propose a block-based compression approach that utilizes both the sparsity and the similarity of activation values to further reduce the data volume. Moreover, we design an encoder, a decoder, and an indexing module to support the proposed approach. The encoder translates output activations into the proposed block-based compression format, while the decoder and the indexing module align nonzero values for effectual computations. Compared with previous works, benchmark data consistently show that the proposed approach greatly reduces both memory traffic and power consumption.
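To illustrate the general idea behind such a scheme, the following is a minimal sketch of one possible block-based compressed format for sparse activations: each block is stored as a zero/nonzero bitmap plus the packed nonzero values. The block length and layout here are assumptions for illustration, not the paper's exact format.

```python
import numpy as np

BLOCK = 8  # assumed block length (illustrative, not from the paper)

def encode_block(block):
    """Compress a 1-D activation block into (bitmap, packed nonzeros)."""
    bitmap = block != 0          # one flag per element marking nonzeros
    nonzeros = block[bitmap]     # only effectual values are stored
    return bitmap, nonzeros

def decode_block(bitmap, nonzeros):
    """Reconstruct the dense block from its compressed form."""
    block = np.zeros(bitmap.shape, dtype=nonzeros.dtype)
    block[bitmap] = nonzeros     # scatter nonzeros back to their positions
    return block

# A highly sparse block compresses well: 8 values shrink to an
# 8-entry bitmap plus 2 stored values.
act = np.array([0, 0, 3, 0, 0, 0, 7, 0], dtype=np.int32)
bitmap, nz = encode_block(act)
assert np.array_equal(decode_block(bitmap, nz), act)  # lossless roundtrip
print(len(nz))  # → 2 effectual values kept
```

The similarity observation suggests a further step this sketch omits: since feature maps in different channels of the same layer are often similar, bitmaps or values could be encoded relative to a neighboring channel rather than stored independently.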


Memristor circuits have become one of the potential hardware platforms for implementing artificial neural networks due to their many advantageous features. In this paper, we compare the power consumption of an analog memristor crossbar-based neural network and a binary memristor crossbar-based neural network realizing a two-layer network, and we propose an efficient method for reducing the power consumption of the analog design. A two-layer neural network is implemented using memristor crossbar arrays with either analog or binary synapses. When recognizing the test samples of the MNIST dataset, the binary memristor crossbar-based neural network consumes 19% more power than the analog memristor-based neural network. The power consumption of the analog memristor crossbar-based neural network strongly depends on the distribution of memristance values, so it can be reduced by optimizing that distribution. To improve the power efficiency, the bias resistance should be selected close to the high-resistance state. The power consumption of the analog memristor-based neural network is reduced by 86% when the bias resistance is increased from 20 kΩ to 160 kΩ. At a bias resistance of 160 kΩ, the analog memristor crossbar-based neural network consumes 89% less power than the binary memristor crossbar-based neural network.
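The dependence of power on the memristance distribution can be seen with a back-of-the-envelope model (not the paper's circuit simulation): the static power a read voltage V dissipates in a crossbar is roughly the sum of V²/R over its memristors, so shifting resistances toward the high-resistance state lowers power proportionally. The read voltage and array size below are assumed values for illustration.

```python
import numpy as np

V = 0.5  # assumed read voltage in volts (illustrative)

def crossbar_power(resistances, v=V):
    """Total static power (W) dissipated in memristors of given resistances (ohms)."""
    return np.sum(v**2 / resistances)

rng = np.random.default_rng(0)
# Two hypothetical memristance distributions for the same 64-cell array:
low_r = rng.uniform(20e3, 40e3, 64)   # distribution near a 20 kOhm bias point
high_r = low_r * 8                    # same shape, shifted toward 160 kOhm

p_low = crossbar_power(low_r)
p_high = crossbar_power(high_r)
print(p_high / p_low)  # → 0.125: an 87.5% reduction from the uniform
                       # 8x resistance scaling, comparable in magnitude
                       # to the ~86% figure the abstract reports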


IEEE Micro ◽  
2017 ◽  
Vol 37 (6) ◽  
pp. 30-38 ◽  
Author(s):  
Kyeongryeol Bong ◽  
Sungpill Choi ◽  
Changhyeon Kim ◽  
Hoi-Jun Yoo
