Graph Processing
Recently Published Documents


TOTAL DOCUMENTS: 427 (FIVE YEARS: 152)
H-INDEX: 28 (FIVE YEARS: 6)

2022 · Vol. 27 (1) · pp. 1-30
Author(s): Mengke Ge, Xiaobing Ni, Xu Qi, Song Chen, Jinglei Huang, ...

The brain network is a large-scale complex network with scale-free, small-world, and modularity properties, which underpin its efficiency as a massive communication system. In this article, we propose to synthesize brain-network-inspired interconnections for large-scale networks-on-chip. First, we propose a method to generate brain-network-inspired topologies with limited scale-free and power-law small-world properties, which have a low total link length and an extremely low average hop count, approximately proportional to the logarithm of the network size. Second, to exploit the modularity of these topologies for large-scale applications, we present an application mapping method, comprising task mapping and deterministic deadlock-free routing, that minimizes power consumption and hop count. Finally, the cycle-accurate simulator BookSim2 is used to validate the architecture's performance under different synthetic traffic patterns and large-scale test cases, including real-world communication networks for the graph processing application. Experiments show that, compared with other topologies and methods, the brain-network-inspired networks-on-chip (NoCs) generated by the proposed method deliver significantly lower average hop count and average latency. In graph processing applications with power-law, tightly coupled inter-core communication in particular, the brain-network-inspired NoC achieves up to 70% lower average hop count and 75% lower average latency than mesh-based NoCs.
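
To make the hop-count claim concrete, here is a minimal sketch, assuming networkx and a generic Newman-Watts-style construction rather than the paper's synthesis algorithm: augmenting a 2D mesh NoC with a few random long-range links already pushes the average hop count down toward the logarithmic regime the authors describe, at a modest cost in total link length. The network size, shortcut count, and Manhattan-distance wire-length proxy are all illustrative assumptions.

```python
import random
import networkx as nx

def mesh(n):
    """n-by-n 2D mesh NoC; each node is its (row, col) coordinate."""
    return nx.grid_2d_graph(n, n)

def small_world(n, shortcuts=8, seed=42):
    """Mesh augmented with a few random long-range shortcut links."""
    g = mesh(n)
    rng = random.Random(seed)
    nodes = list(g.nodes)
    added = 0
    while added < shortcuts:
        u, v = rng.sample(nodes, 2)
        if not g.has_edge(u, v):
            g.add_edge(u, v)
            added += 1
    return g

def stats(g):
    hops = nx.average_shortest_path_length(g)
    # Total link length: Manhattan distance between endpoints, a crude
    # proxy for on-die wire length.
    length = sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in g.edges)
    return hops, length

for name, g in [("mesh", mesh(8)), ("small world", small_world(8))]:
    hops, length = stats(g)
    print(f"{name:12s} avg hops = {hops:.2f}  total link length = {length}")
```

The shortcuts noticeably reduce the average hop count relative to the plain mesh; the paper's method additionally shapes the degree distribution and modularity, which this generic construction does not attempt.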


2021 · Vol. 18 (4) · pp. 1-24
Author(s): Sriseshan Srikanth, Anirudh Jain, Thomas M. Conte, Erik P. Debenedictis, Jeanine Cook

Sparse data applications have irregular access patterns that stymie modern memory architectures. Although hyper-sparse workloads have received considerable attention in the past, the moderately sparse workloads prevalent in machine learning, graph processing, and HPC have not. Whereas the former can bypass the cache hierarchy, the latter fit in the cache. This article observes that intelligent, near-processor cache management can improve bandwidth utilization for data-irregular accesses, thereby accelerating moderately sparse workloads. We propose SortCache, a processor-centric approach that accelerates sparse workloads by introducing accelerators that leverage the on-chip cache subsystem, with minimal programmer intervention.
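
As a software analogy of that observation (not SortCache's hardware mechanism), the sketch below shows why reordering an irregular index stream helps: sorting groups accesses to the same cache line together, so a toy LRU cache model sees far fewer misses on the sorted stream than on the raw one. The line size, cache capacity, table size, and miss model are assumptions for illustration.

```python
import random
from collections import OrderedDict

LINE_BYTES = 64      # cache-line size (assumed)
ELEM_BYTES = 8       # element size (assumed)
PER_LINE = LINE_BYTES // ELEM_BYTES

def misses(indices, cache_lines=512):
    """Count misses under a toy fully associative LRU cache model."""
    cache, miss = OrderedDict(), 0
    for i in indices:
        line = i // PER_LINE
        if line in cache:
            cache.move_to_end(line)       # hit: refresh recency
        else:
            miss += 1
            cache[line] = None
            if len(cache) > cache_lines:  # evict least recently used line
                cache.popitem(last=False)
    return miss

rng = random.Random(0)
# A moderately sparse access stream: random gathers into a 200k-element table.
stream = [rng.randrange(200_000) for _ in range(100_000)]
print("unsorted misses:", misses(stream))          # nearly one miss per access
print("sorted misses:  ", misses(sorted(stream)))  # about one miss per touched line
```

In software the sort itself would dominate; the point of a near-cache accelerator is to perform this kind of reordering close to the cache subsystem, off the critical path of the processor.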


2021 · Vol. 18 (4) · pp. 1-24
Author(s): Yu Zhang, Da Peng, Xiaofei Liao, Hai Jin, Haikun Liu, ...

Many out-of-GPU-memory systems have recently been designed to support iterative processing of large-scale graphs. However, these systems still suffer from long convergence times because active vertices' new states propagate inefficiently along graph paths. To support out-of-GPU-memory graph processing efficiently, this work presents LargeGraph. Unlike existing out-of-GPU-memory systems, LargeGraph proposes a dependency-aware data-driven execution approach, which significantly accelerates the propagation of active vertices' states along graph paths with low data-access cost and high parallelism. Specifically, guided by the dependencies between vertices, it loads and processes only the graph data associated with dependency chains originating from active vertices, reducing access cost. Because of the power-law property, most active vertices propagate their new states along a small, evolving set of paths; LargeGraph dynamically identifies and maintains this set and handles it on the GPU to accelerate most propagations for faster convergence, while the remaining graph data are processed on the CPU. For out-of-GPU-memory graph processing, LargeGraph outperforms four cutting-edge systems: Totem (5.19–11.62×), Graphie (3.02–9.41×), Garaph (2.75–8.36×), and Subway (2.45–4.15×).
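
A minimal sketch of the stated loading policy, under assumed data structures (this is not LargeGraph's implementation): starting from the active vertices, walk outgoing edges to collect only the dependency chains that need to be loaded and processed, leaving every unreachable vertex out of GPU memory. The adjacency-dict layout and function name are hypothetical.

```python
from collections import deque

def dependency_chains(adj, active):
    """Collect vertices on dependency chains originating at active vertices."""
    seen, frontier = set(active), deque(active)
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

# Toy adjacency list: only data reachable from the active set is touched;
# vertices 4 and 5 never need to move to the GPU.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [], 4: [5], 5: []}
print(dependency_chains(adj, {0}))   # -> {0, 1, 2, 3}
```

In the system described by the abstract, the frequently used subset of these chains would stay resident on the GPU, while the rest of the graph is processed on the CPU.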


2021
Author(s): Jilan Lin, Shuangchen Li, Yufei Ding, Yuan Xie

2021
Author(s): Abanti Basak, Zheng Qu, Jilan Lin, Alaa R. Alameldeen, Zeshan Chishti, ...

2021 · Vol. 16 (1)
Author(s): Peng Fang, Fang Wang, Zhan Shi, Dan Feng, Qianxu Yi, ...
