Optimizing Sparse Linear Algebra for Large-Scale Graph Analytics

Computer, 2015, Vol. 48 (8), pp. 26-34
Author(s): Daniele Buono, John A. Gunnels, Xinyu Que, Fabio Checconi, Fabrizio Petrini, ...
2016
Author(s): Stephen Kozacik, Aaron L. Paolini, Paul Fox, Eric Kelmelis

2022, Vol. 15 (2), pp. 1-33
Author(s): Mikhail Asiatici, Paolo Ienne

Applications such as large-scale sparse linear algebra and graph analytics are challenging to accelerate on FPGAs because their short, irregular memory accesses result in low cache hit rates. Nonblocking caches reduce the bandwidth consumed by misses by requesting each cache line only once, even when multiple misses correspond to it. However, this reuse mechanism is traditionally implemented with an associative lookup, which limits the number of misses considered for reuse to a few tens at most. In this article, we present an efficient pipeline that can process and store thousands of outstanding misses in cuckoo hash tables in on-chip SRAM with minimal stalls. Because outstanding misses do not need a data array, this delivers the bandwidth advantage of a much larger cache for a fraction of the area budget, which can significantly speed up irregular, memory-bound, latency-insensitive applications. In addition, we extend nonblocking caches to generate variable-length bursts to memory, increasing the bandwidth delivered by DRAMs and their controllers. The resulting miss-optimized memory system provides up to 25% speedup with 24× area reduction on 15 large sparse matrix-vector multiplication benchmarks evaluated on an embedded and a datacenter FPGA system.
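The core reuse mechanism described in the abstract can be illustrated in software: outstanding misses are tracked in a cuckoo hash table keyed by cache-line address, so a line already in flight merges new misses instead of issuing another DRAM request. The sketch below is our own illustrative Python model, not the authors' hardware pipeline; the class name, table sizing, and hash salts are assumptions made for the example.

```python
class CuckooMSHR:
    """Illustrative model of miss-status tracking via cuckoo hashing.

    Each entry maps a cache-line address to the list of request IDs
    waiting on that line, so the line is fetched from DRAM only once.
    """

    def __init__(self, slots_per_table=64, max_kicks=32):
        # Two tables, one candidate slot per table (classic cuckoo scheme).
        self.slots = slots_per_table
        self.max_kicks = max_kicks
        self.tables = [[None] * slots_per_table for _ in range(2)]

    def _index(self, table, line):
        # Two independent hash functions, one per table (salts are arbitrary).
        salt = (0x9E3779B1, 0x85EBCA77)[table]
        return (line * salt) % self.slots

    def _lookup(self, line):
        for t in range(2):
            entry = self.tables[t][self._index(t, line)]
            if entry is not None and entry[0] == line:
                return entry
        return None

    def record_miss(self, line, req_id):
        """Return True for a primary miss (DRAM request needed),
        False when the line is already in flight (miss merged)."""
        entry = self._lookup(line)
        if entry is not None:
            entry[1].append(req_id)      # secondary miss: merge, no new fetch
            return False
        # Primary miss: insert with cuckoo displacement.
        item = (line, [req_id])
        t = 0
        for _ in range(self.max_kicks):
            idx = self._index(t, item[0])
            item, self.tables[t][idx] = self.tables[t][idx], item  # swap in
            if item is None:             # slot was free: insertion done
                return True
            t = 1 - t                    # retry evicted item in other table
        raise RuntimeError("MSHR structure full: stall until a fill frees a slot")

    def fill(self, line):
        """Data returned from DRAM: remove the entry, release its waiters."""
        for t in range(2):
            idx = self._index(t, line)
            entry = self.tables[t][idx]
            if entry is not None and entry[0] == line:
                self.tables[t][idx] = None
                return entry[1]
        return []
```

For example, two misses to the same line trigger only one fetch: the first `record_miss` returns True (issue the DRAM request), the second returns False (merged), and the eventual `fill` releases both waiting request IDs. A hardware version gains its area advantage because, unlike a cache, these entries carry no data array.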


2002
Author(s): Zhaojun Bai, James Demmel, Jack Dongarra

2021
Author(s): Zhihui Du, Oliver Alvarado Rodriguez, David A. Bader

2021
Author(s): Aaron Walden, Mohammad Zubair, Christopher P. Stone, Eric J. Nielsen
