Accelerator Architectures
Recently Published Documents

TOTAL DOCUMENTS: 40 (five years: 9)
H-INDEX: 8 (five years: 0)

2021 ◽  
Vol 18 (3) ◽  
pp. 1-26
Author(s):  
Daniel Thuerck ◽  
Nicolas Weber ◽  
Roberto Bifulco

A large portion of the recent performance increase in the High Performance Computing (HPC) and Machine Learning (ML) domains is fueled by accelerator cards. Many popular ML frameworks support accelerators by organizing computations as a computational graph over a set of highly optimized, batched general-purpose kernels. While this approach simplifies the kernels’ implementation for each individual accelerator, the increasing heterogeneity among accelerator architectures for HPC complicates the creation of portable and extensible libraries of such kernels. Therefore, using a generalization of the CUDA community’s warp register cache programming idiom, we propose a new programming idiom (CoRe) and a virtual architecture model (PIRCH), abstracting over SIMD and SIMT paradigms. We define and automate the mapping process from a single source to PIRCH’s intermediate representation and develop backends that issue code for three different architectures: Intel AVX512, NVIDIA GPUs, and NEC SX-Aurora. Code generated by our source-to-source compiler for batched kernels, borG, competes favorably with vendor-tuned libraries and is up to 2× faster than hand-tuned kernels across architectures.
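The single-source idea can be illustrated with a toy sketch in plain Python (all names here are invented for illustration — this is not borG's or PIRCH's actual representation): a per-lane kernel body is written once, then lowered two ways, with a per-lane loop standing in for the SIMT mapping and a whole-vector expression standing in for the SIMD mapping.

```python
# Toy illustration of a single-source batched kernel lowered to two
# execution models. Names (axpy_body, lower_simt, lower_simd) are
# invented; a real compiler would emit AVX-512/CUDA/SX-Aurora code.

def axpy_body(lane_id, a, x, y):
    # Per-lane kernel body: written once, independent of the backend.
    return a * x[lane_id] + y[lane_id]

def lower_simt(body, n, *args):
    # SIMT-style lowering: one logical "thread" evaluates the body per lane.
    return [body(i, *args) for i in range(n)]

def lower_simd(a, x, y):
    # SIMD-style lowering: the same computation as one whole-vector operation.
    return [a * xi + yi for xi, yi in zip(x, y)]

a, x, y = 2.0, [0.0, 1.0, 2.0, 3.0], [1.0, 1.0, 1.0, 1.0]
print(lower_simt(axpy_body, 4, a, x, y))  # [1.0, 3.0, 5.0, 7.0]
print(lower_simd(a, x, y))                # [1.0, 3.0, 5.0, 7.0]
```

Both lowerings must produce identical results from the one kernel source; the paper's contribution is automating this mapping through an intermediate representation rather than maintaining hand-written variants per architecture.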



Author(s):  
Jaro Hokkanen ◽  
Stefan Kollet ◽  
Jiri Kraus ◽  
Andreas Herten ◽  
Markus Hrywniak ◽  
...  

Abstract
Rapidly changing heterogeneous supercomputer architectures pose a great challenge to many scientific communities trying to leverage the latest technology in high-performance computing. Many existing projects with a long development history have resulted in large amounts of code that are not directly compatible with the latest accelerator architectures. Furthermore, given the limited resources of scientific institutions, developing and maintaining architecture-specific ports is generally unsustainable. To adapt to modern accelerator architectures, many projects rely on directive-based programming models or build the codebase tightly around a third-party domain-specific language or library, which introduces external dependencies outside the project's control. The presented paper tackles this issue by proposing a lightweight application-side adaptor layer for compute kernels and memory management, resulting in versatile and inexpensive adaptation to new accelerator architectures with few drawbacks. A widely used hydrologic model demonstrates that such an approach, adopted more than 20 years ago, is still paying off on modern accelerator architectures, as shown by a very significant performance gain on NVIDIA A100 GPUs, high developer productivity, and a minimally invasive implementation, all while keeping the codebase well maintainable in the long term.
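The adaptor-layer idea can be sketched in a few lines of plain Python (all names invented; the paper's actual layer targets compiled kernels and device memory, not Python objects): the model calls only the adaptor, and each backend supplies its own allocation and kernel-launch routines, so supporting a new accelerator means adding one backend rather than rewriting model code.

```python
# Hypothetical sketch of an application-side adaptor layer. A GPU backend
# would allocate device memory and launch device kernels behind the same
# interface; here a serial CPU backend stands in for it.

class CPUBackend:
    def alloc(self, n):
        return [0.0] * n
    def launch(self, kernel, n, *arrays):
        for i in range(n):  # serial loop standing in for a device launch
            kernel(i, *arrays)

class Adaptor:
    """Thin layer the application codes against; backends are pluggable."""
    def __init__(self, backend):
        self.backend = backend
    def alloc(self, n):
        return self.backend.alloc(n)
    def run(self, kernel, n, *arrays):
        self.backend.launch(kernel, n, *arrays)

# Model code: written once, backend-agnostic.
def saturate(i, pressure, out):
    out[i] = min(pressure[i], 1.0)

adaptor = Adaptor(CPUBackend())
p = [0.2, 1.5, 0.9]
out = adaptor.alloc(3)
adaptor.run(saturate, 3, p, out)
print(out)  # [0.2, 1.0, 0.9]
```

Because the kernels and memory management never touch a backend directly, a new accelerator architecture is adopted by implementing the small backend interface, which is what keeps the adaptation inexpensive and the implementation minimally invasive.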





2021 ◽ 
Author(s):  
Gordon Euhyun Moon ◽  
Hyoukjun Kwon ◽  
Geonhwa Jeong ◽  
Prasanth Chatarasi ◽  
Sivasankaran Rajamanickam ◽  
...  


Engineering ◽  
2020 ◽  
Vol 6 (3) ◽  
pp. 264-274 ◽  
Author(s):  
Yiran Chen ◽  
Yuan Xie ◽  
Linghao Song ◽  
Fan Chen ◽  
Tianqi Tang

