Exploiting Dynamic Voltage and Frequency Scaling in networks on chip

Author(s):  
Andrea Bianco ◽  
Paolo Giaccone ◽  
Nanfang Li
2014 ◽  
Vol E97.D (9) ◽  
pp. 2320-2329 ◽  
Author(s):  
Katherine Shu-Min LI ◽  
Yingchieh HO ◽  
Yu-Wei YANG ◽  
Liang-Bi CHEN

Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1423 ◽  
Author(s):  
Valentino Peluso ◽  
Roberto Giorgio Rizzo ◽  
Andrea Calimera

Convolutional Neural Networks (ConvNets) can be shrunk to fit the embedded CPUs used in mobile end-nodes, such as smartphones or drones. Deployment onto such devices encompasses several algorithm-level optimizations, e.g., topology restructuring, pruning, and quantization, that reduce the complexity of the network, ensuring lower resource usage and hence higher speed. Several studies have reported remarkable performance, paving the way towards real-time inference on low-power cores. However, continuous execution at maximum speed is quite unrealistic because the on-chip temperature rises quickly; proper thermal management is paramount to guarantee silicon reliability and a safe user experience. Power-management schemes, such as voltage lowering and frequency scaling, are common knobs for maintaining thermal stability. This, however, implies a performance degradation that is often not considered during the training and optimization stages. The objective of this work is to present a performance assessment of embedded ConvNets under thermal management. Our study covers the behavior of two control policies, namely reactive and proactive, implemented through the Dynamic Voltage-Frequency Scaling (DVFS) mechanism available on commercial embedded CPUs. As benchmarks, we used four state-of-the-art ConvNets for computer vision deployed on an ARM Cortex-A15 CPU. With the collected results, we aim to show the existing temperature-performance trade-off and give a more realistic analysis of the maximum achievable performance. Moreover, we empirically demonstrate the close relationship between the on-chip thermal behavior and the hyper-parameters of the ConvNet, revealing optimization margins for a thermal-aware design of neural network layers.
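
To make the reactive policy mentioned above concrete, the following is a minimal sketch of a thermal-throttling loop built on the standard Linux thermal and cpufreq sysfs interfaces. It is not the code used in the paper: the thermal-zone path, trip/safe temperatures, and the frequency table are assumptions chosen for illustration, and writing scaling_setspeed requires the userspace governor and root privileges.

```python
#!/usr/bin/env python3
"""Illustrative reactive DVFS thermal-management sketch (assumed values)."""
import time

THERMAL_ZONE = "/sys/class/thermal/thermal_zone0/temp"          # millidegrees C
SCALING_SETSPEED = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed"
FREQ_TABLE_KHZ = [2000000, 1800000, 1600000, 1400000, 1200000]  # example OPPs
T_TRIP_C = 75.0   # throttle above this temperature (assumed threshold)
T_SAFE_C = 65.0   # restore frequency below this temperature (assumed threshold)

def read_temp_c() -> float:
    with open(THERMAL_ZONE) as f:
        return int(f.read().strip()) / 1000.0

def set_freq_khz(freq: int) -> None:
    # Requires the 'userspace' cpufreq governor and root privileges.
    with open(SCALING_SETSPEED, "w") as f:
        f.write(str(freq))

def reactive_policy(period_s: float = 0.5) -> None:
    """Step the frequency down when the trip point is crossed and step it
    back up once the core cools below the safe point (simple hysteresis)."""
    level = 0  # index into FREQ_TABLE_KHZ, 0 = fastest
    set_freq_khz(FREQ_TABLE_KHZ[level])
    while True:
        t = read_temp_c()
        if t > T_TRIP_C and level < len(FREQ_TABLE_KHZ) - 1:
            level += 1                      # reactive throttling
            set_freq_khz(FREQ_TABLE_KHZ[level])
        elif t < T_SAFE_C and level > 0:
            level -= 1                      # recover performance
            set_freq_khz(FREQ_TABLE_KHZ[level])
        time.sleep(period_s)

if __name__ == "__main__":
    reactive_policy()
```

A proactive variant would act before the trip point is reached, e.g., by lowering the frequency when the temperature slope predicts an imminent violation rather than waiting for the threshold to be crossed.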


2020 ◽  
Author(s):  
Prachi Sharma ◽  
Arkid Bera ◽  
Anu Gupta

To curb redundant power consumption in portable embedded and real-time applications, processors are equipped with various Dynamic Voltage and Frequency Scaling (DVFS) techniques. How accurately such a technique predicts the operating frequency determines how power-efficient it makes a processor across a variety of programs and users. Recent techniques, however, have focused too heavily on saving power, neglecting the user-satisfaction metric, i.e., performance. The DVFS technique used to save power, in turn, introduces unwanted latency due to the high complexity of the algorithm. Moreover, many modern DVFS techniques rely on feedback manually triggered by the user to change the frequency and conserve energy, which further increases the reaction time. In this paper, we implement a novel Artificial Neural Network (ANN)-driven frequency-scaling methodology that saves power and boosts performance at the same time, implicitly, i.e., without any feedback from the user. To make the system more inclusive with respect to the kinds of processes run on it, we trained the ANN not only on CPU-intensive programs but also on memory-bound ones, i.e., programs with frequent memory accesses per CPU cycle. The proposed technique has been evaluated on an Intel i7-4720HQ Haswell processor and shows a performance boost of up to 20%, SoC power savings of up to 16%, and a Performance-per-Watt improvement of up to 30% compared to the existing DVFS technique. The open-source memory-intensive benchmark suite MiBench was used to verify the utility of the suggested technique.
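
The abstract does not specify the network architecture or input features, so the following is only a minimal sketch of the general idea: a small feed-forward network maps per-interval performance-counter features (IPC, last-level-cache miss rate, utilization are assumed feature choices) to a target frequency, which is then snapped to the nearest discrete P-state. The weights below are random; in practice the model would be trained offline on counter traces from CPU-bound and memory-bound workloads.

```python
#!/usr/bin/env python3
"""Minimal sketch of an ANN-driven frequency predictor (assumed features/sizes)."""
import numpy as np

P_STATES_MHZ = np.array([800, 1200, 1600, 2000, 2400, 2600])  # example frequency steps

class FreqPredictor:
    """Two-layer MLP: counter features -> hidden (ReLU) -> frequency in MHz."""
    def __init__(self, n_features: int = 3, n_hidden: int = 8, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, x: np.ndarray) -> float:
        h = np.maximum(0.0, x @ self.w1 + self.b1)   # ReLU hidden layer
        return float(h @ self.w2 + self.b2)          # predicted frequency (MHz)

    def predict_pstate(self, ipc: float, llc_miss_rate: float, util: float) -> int:
        """Return the nearest discrete P-state for one sampling interval.
        Memory-bound phases (low IPC, high miss rate) should map to lower clocks."""
        f_mhz = self.forward(np.array([ipc, llc_miss_rate, util]))
        return int(P_STATES_MHZ[np.argmin(np.abs(P_STATES_MHZ - f_mhz))])

if __name__ == "__main__":
    model = FreqPredictor()  # untrained weights, for illustration only
    print(model.predict_pstate(ipc=1.8, llc_miss_rate=0.02, util=0.95))  # CPU-bound interval
    print(model.predict_pstate(ipc=0.4, llc_miss_rate=0.30, util=0.90))  # memory-bound interval
```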

