Exploiting Inverse Power of Two Non-Uniform Quantization Method to Increase Energy Efficiency in Deep Neural Networks

2020, Vol 47 (1), pp. 27-35
Author(s): Jun-Yeong Choi, Joonhyuk Yoo
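The listing gives only the title of this entry, but the method it names is standard: inverse power-of-two quantization constrains each weight to a signed value of the form 2^-k, so multiplications can be replaced by cheap bit shifts in hardware. A minimal NumPy sketch of that idea (the function name, `k_max` parameter, and log-domain rounding are my own illustration, not the paper's exact scheme):

```python
import numpy as np

def quantize_inv_pow2(w, k_max=7):
    """Map each weight to the nearest signed inverse power of two 2**-k,
    with the exponent k restricted to [0, k_max]."""
    sign = np.sign(w)
    mag = np.clip(np.abs(w), 2.0 ** -k_max, 1.0)   # keep magnitudes representable
    k = np.clip(np.round(-np.log2(mag)), 0, k_max).astype(int)
    return sign * 2.0 ** (-k), k

# Example: 0.3 -> 0.25 (k=2), -0.6 -> -0.5 (k=1), 0.05 -> 0.0625 (k=4)
q, k = quantize_inv_pow2(np.array([0.3, -0.6, 0.05]))
```

Because every quantized magnitude is 2^-k, a multiply by a quantized weight becomes a right shift by k in fixed-point hardware, which is the source of the energy savings the title refers to.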
2021, Vol 5 (4), pp. 67
Author(s): Shirin Dora, Nikola Kasabov

Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNNs), despite their capability to handle temporal data, their energy efficiency, and their low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional time and resource constraints on ongoing projects. This review provides an overview of current real-world applications of SNNs and identifies steps to accelerate future research involving SNNs.


2021, Vol 20 (6), pp. 1-24
Author(s): Jason Servais, Ehsan Atoofian

In recent years, Deep Neural Networks (DNNs) have been deployed in a diverse set of applications, from voice recognition to scene generation, largely due to their high accuracy. DNNs are computationally intensive and require a significant power budget. There have been many investigations into the energy efficiency of DNNs, but most focus on inference, while training has received little attention. This work proposes an adaptive technique to identify and avoid redundant computations during the training of DNNs. Elements of activation tensors exhibit a high degree of similarity, so layers of a neural network repeatedly perform near-identical computations on their inputs and outputs. Based on this observation, we propose Adaptive Computation Reuse for Tensor Cores (ACRTC), in which the results of previous arithmetic operations are reused to avoid redundant computations. ACRTC is an architectural technique that enables accelerators to exploit similarity in input operands, speeding up training while also increasing energy efficiency. ACRTC dynamically adjusts the strength of computation reuse based on the tolerance for precision relaxation in different training phases. Over a wide range of neural network topologies, ACRTC accelerates training by 33% and saves 32% of energy with negligible impact on accuracy.
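The core idea the abstract describes, reusing a previous arithmetic result when the new operands are similar enough, can be sketched in software. This toy class is my own illustration (the class name, tolerance parameter, and table size are hypothetical); the actual ACRTC is a hardware mechanism inside tensor cores, not a Python loop:

```python
class ReuseCache:
    """Toy computation-reuse table: return a cached product when a new
    operand pair lies within `tol` of a previously seen pair."""

    def __init__(self, tol=0.01, size=64):
        self.tol = tol
        self.size = size
        self.entries = []   # list of (a, b, result), most recent last
        self.hits = 0
        self.misses = 0

    def multiply(self, a, b):
        # Search most-recent entries first for an approximately matching pair.
        for ca, cb, r in reversed(self.entries):
            if abs(ca - a) <= self.tol and abs(cb - b) <= self.tol:
                self.hits += 1
                return r            # reuse: skip the multiplication
        self.misses += 1
        r = a * b                   # no match: compute and remember
        self.entries.append((a, b, r))
        if len(self.entries) > self.size:
            self.entries.pop(0)     # evict the oldest entry
        return r

cache = ReuseCache(tol=0.01)
cache.multiply(2.0, 3.0)     # miss: computed and cached
cache.multiply(2.005, 3.0)   # hit: operands within tolerance, result reused
```

Raising `tol` corresponds to the "strength of computation reuse" the abstract mentions: a looser tolerance reuses more results (saving work) at the cost of larger numerical error, which is why ACRTC adapts it across training phases.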


2019, Vol 27 (6), pp. 1365-1377
Author(s): Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Ayan Biswas, Kaushik Roy, ...

2020, Vol 29 (6), pp. 1126-1133
Author(s): Yong Yuan, Chen Chen, Xiyuan Hu, Silong Peng

Author(s): Alex Hernández-García, Johannes Mehrer, Nikolaus Kriegeskorte, Peter König, Tim C. Kietzmann

2018
Author(s): Chi Zhang, Xiaohan Duan, Ruyuan Zhang, Li Tong

2008, pp. 108-125
Author(s): K. Zavodov

Project-based transactions (PBTs) are a market mechanism for attracting foreign investment to abate greenhouse gas emissions and increase the energy efficiency of a country's enterprises. The article classifies PBTs and analyzes their advantages and drawbacks from the point of view of a host country, and describes the main trends and factors determining the dynamics of the PBT market. Given that Russia currently lags behind the leaders of the PBT market, the article proposes incorporating a state carbon fund through which PBTs would be channelled. It also proposes a form of PBT market regulation that embeds an option mechanism in the contract structure of a transaction. A comparison of this form of regulation with the tools currently used in Russia and other countries demonstrates its greater economic efficiency under uncertainty.

