Energy-Efficient Near-Threshold Parallel Computing: The PULPv2 Cluster

IEEE Micro ◽  
2017 ◽  
Vol 37 (5) ◽  
pp. 20-31 ◽  
Author(s):  
Davide Rossi ◽  
Antonio Pullini ◽  
Igor Loi ◽  
Michael Gautschi ◽  
Frank Kagan Gurkaynak ◽  
...  
2021 ◽  
Author(s):  
Vinicius Zanandrea ◽  
Douglas M. Borges ◽  
Vagner S. Rosa ◽  
Cristina Meinhardt

2020 ◽  
Vol 10 (4) ◽  
pp. 33
Author(s):  
Pramesh Pandey ◽  
Noel Daniel Gundi ◽  
Prabal Basu ◽  
Tahmoures Shabanian ◽  
Mitchell Craig Patrick ◽  
...  

AI is evolving rapidly, and Deep Neural Network (DNN) inference accelerators are at the forefront of the special-purpose architectures built to deliver the immense throughput that AI computation demands. However, far more energy-efficient design paradigms are essential to realize the full potential of this evolution and curtail energy consumption. The Near-Threshold Computing (NTC) design paradigm is a strong candidate for providing the required energy efficiency. However, NTC operation is plagued by serious performance and reliability concerns arising from timing errors. In this paper, we dive deep into DNN architecture to uncover unique challenges and opportunities for operation in the NTC paradigm. By performing rigorous simulations of a TPU systolic array, we reveal the severity of timing errors and their impact on inference accuracy at NTC. We analyze various attributes—such as the data–delay relationship, delay disparity within arithmetic units, utilization patterns, hardware homogeneity, and workload characteristics—and uncover unique localized and global techniques to deal with timing errors at NTC.
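The abstract above studies how NTC timing errors in a systolic array degrade inference accuracy. As an illustrative sketch only (not the authors' simulator), the toy model below treats a timing error as a MAC unit latching a stale partial sum, so the cycle's product is dropped with some probability; the error model and function names are assumptions for illustration.

```python
import random


def systolic_matmul(A, B, err_prob=0.0, seed=0):
    """Toy model of a systolic-array matrix multiply (C = A x B) in which
    each multiply-accumulate (MAC) may suffer a timing error with
    probability err_prob. A timing error is modeled as the accumulator
    latching its stale value, i.e. that cycle's product is lost."""
    rng = random.Random(seed)
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = 0.0
            for k in range(m):
                prod = A[i][k] * B[k][j]
                if rng.random() < err_prob:
                    continue  # timing error: partial sum not updated
                acc += prod
            C[i][j] = acc
    return C


# With err_prob = 0.0 this reduces to an exact matrix multiply; raising
# err_prob perturbs outputs, mimicking accuracy loss at aggressive NTC
# voltage points.
```

Sweeping `err_prob` against a network's output accuracy gives a crude version of the severity analysis the abstract describes.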


2010 ◽  
Vol 98 (2) ◽  
pp. 253-266 ◽  
Author(s):  
Ronald G. Dreslinski ◽  
Michael Wieckowski ◽  
David Blaauw ◽  
Dennis Sylvester ◽  
Trevor Mudge

Author(s):  
Chao Jin ◽  
Bronis R de Supinski ◽  
David Abramson ◽  
Heidi Poxon ◽  
Luiz DeRose ◽  
...  

Energy consumption is one of the top challenges for achieving the next generation of supercomputing. Codesign of hardware and software is critical for improving energy efficiency (EE) for future large-scale systems. Many architectural power-saving techniques have been developed, and most hardware components are approaching physical limits. Accordingly, parallel computing software, including both applications and systems, should exploit power-saving hardware innovations and manage efficient energy use. In addition, new power-aware parallel computing methods are essential to decrease energy usage further. This article surveys software-based methods that aim to improve EE for parallel computing. It reviews the methods that exploit the characteristics of parallel scientific applications, including load imbalance and mixed precision of floating-point (FP) calculations, to improve EE. In addition, this article summarizes widely used methods to improve power usage at different granularities, such as the whole system and per application. In particular, it describes the most important techniques to measure and to achieve energy-efficient usage of various parallel computing facilities, including processors, memories, and networks. Overall, this article reviews the state-of-the-art of energy-efficient methods for parallel computing to motivate researchers to achieve optimal parallel computing under a power budget constraint.
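One family of methods the survey mentions exploits load imbalance: non-critical ranks can be slowed so they finish together with the critical rank, saving energy without lengthening the run. A minimal sketch, assuming a cubic dynamic-power model (P ∝ f·V² with V ∝ f, so P ∝ f³) and a hypothetical function name:

```python
def slack_reclaim_energy(work, f_max=1.0):
    """Toy model of slack reclamation under load imbalance. Each rank's
    work is in cycles; baseline runs every rank at f_max. With
    reclamation, rank r runs at f_r = work[r] / t_crit so all ranks
    finish when the critical (largest-work) rank does. With P ~ f^3,
    energy per rank is P * t = f^3 * (work / f) = work * f^2.
    Returns (baseline_energy, reclaimed_energy) in arbitrary units."""
    t_crit = max(work) / f_max  # critical rank sets the finish time
    baseline = sum(w * f_max ** 2 for w in work)
    reclaimed = sum(w * (w / t_crit) ** 2 for w in work)
    return baseline, reclaimed


# Example: two ranks with a 2x imbalance. The lighter rank runs at half
# frequency, quartering its energy, while total runtime is unchanged.
base, recl = slack_reclaim_energy([100.0, 50.0])
```

Real systems apply this via DVFS with measured per-rank slack rather than known work; the point here is only the shape of the trade-off.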

