Exploiting Inherent Error Resiliency of Deep Neural Networks to Achieve Extreme Energy Efficiency Through Mixed-Signal Neurons

2019, Vol 27 (6), pp. 1365-1377
Author(s):  
Baibhab Chatterjee ◽  
Priyadarshini Panda ◽  
Shovan Maity ◽  
Ayan Biswas ◽  
Kaushik Roy ◽  
...  

2021, Vol 5 (4), pp. 67
Author(s):  
Shirin Dora ◽  
Nikola Kasabov

Deep neural networks with rate-based neurons have driven tremendous progress over the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNNs), despite their ability to handle temporal data, their energy efficiency, and their low latency. This may be because benchmarking techniques for SNNs are based on the methods used to evaluate deep neural networks, which do not provide a clear assessment of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional time and resource constraints on ongoing projects. This review provides an overview of current real-world applications of SNNs and identifies steps to accelerate future research involving SNNs.
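As a minimal illustration of the spiking paradigm this review discusses, the sketch below simulates a single leaky integrate-and-fire neuron in Python; the model choice, parameter values, and input are illustrative assumptions and are not taken from the article.

    def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire neuron: integrates the input current over time
        and emits a binary spike whenever the membrane potential crosses the
        threshold. All parameter values here are illustrative assumptions."""
        v = 0.0
        spikes = []
        for i in input_current:
            v += (dt / tau) * (-v + i)   # leaky integration of the input
            if v >= v_thresh:            # threshold crossing produces a spike
                spikes.append(1)
                v = v_reset              # membrane potential resets after a spike
            else:
                spikes.append(0)
        return spikes

    # A constant input drives a regular spike train: information is carried by
    # the timing of discrete events rather than by a continuous activation value.
    print(lif_neuron([2.0] * 100))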


2021, Vol 20 (6), pp. 1-24
Author(s):  
Jason Servais ◽  
Ehsan Atoofian

In recent years, Deep Neural Networks (DNNs) have been deployed in a diverse set of applications, from voice recognition to scene generation, largely due to their high accuracy. DNNs are known to be computationally intensive, requiring a significant power budget. There have been many investigations into the energy efficiency of DNNs; however, most of them focus primarily on inference, while training of DNNs has received little attention. This work proposes an adaptive technique to identify and avoid redundant computations during the training of DNNs. Elements of activations exhibit a high degree of similarity, so the layers of a neural network repeatedly perform redundant computations on similar inputs and outputs. Based on this observation, we propose Adaptive Computation Reuse for Tensor Cores (ACRTC), in which the results of previous arithmetic operations are used to avoid redundant computations. ACRTC is an architectural technique that enables accelerators to exploit similarity in input operands, speeding up the training process while also increasing energy efficiency. ACRTC dynamically adjusts the strength of computation reuse based on the tolerance for precision relaxation in different training phases. Over a wide range of neural network topologies, ACRTC accelerates training by 33% and saves energy by 32% with negligible impact on accuracy.
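The sketch below illustrates the general idea of value-similarity computation reuse in plain NumPy; the function name, the row-level caching granularity, and the fixed similarity threshold `tol` (standing in for the precision-relaxation knob that ACRTC tunes dynamically) are hypothetical and do not reproduce ACRTC's actual tensor-core mechanism.

    import numpy as np

    def reuse_matmul(x, w, cache, tol=1e-2):
        """Toy value-similarity reuse: if an activation row is close (within
        `tol`) to a previously seen row, reuse the cached product instead of
        recomputing it. `tol` is a hypothetical stand-in for the precision
        relaxation that ACRTC adjusts during training."""
        outputs = []
        for row in x:
            result = None
            for prev_row, prev_out in cache:
                if np.max(np.abs(row - prev_row)) <= tol:  # operands similar enough
                    result = prev_out                      # reuse the earlier result
                    break
            if result is None:
                result = row @ w                           # full computation
                cache.append((row, result))
            outputs.append(result)
        return np.stack(outputs)

    # Activations with many near-duplicate rows: most products are reused.
    x = np.repeat(np.random.randn(4, 64), 8, axis=0) + 1e-3 * np.random.randn(32, 64)
    w = np.random.randn(64, 16)
    cache = []
    y = reuse_matmul(x, w, cache)
    print(y.shape, "rows computed from scratch:", len(cache))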


Author(s):  
Brunno F. Goldstein ◽  
Victor C. Ferreira ◽  
Sudarshan Srinivasan ◽  
Dipankar Das ◽  
Alexandre S. Nery ◽  
...  

Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2018
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong
