TernGEMM: GEneral Matrix Multiply Library with Ternary Weights for Fast DNN Inference

Author(s): Seokhyeon Choi, Kyuhong Shim, Jungwook Choi, Wonyong Sung, Byonghyo Shim
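
No abstract is reproduced for this record, so the following is only a minimal sketch, in C, of the general idea the title points to: a GEMM in which the weight matrix holds only ternary values {-1, 0, +1}, so every multiply-accumulate degenerates into an addition, a subtraction, or a skip. The function name tern_gemm and its signature are illustrative and are not the TernGEMM library's API.

/* Illustrative sketch (not the TernGEMM library itself): GEMM with a
 * ternary weight matrix, so no actual multiplications are needed. */
#include <stdint.h>

void tern_gemm(int m, int n, int k,
               const int8_t *W,   /* m x k ternary weights in {-1, 0, +1} */
               const int8_t *X,   /* k x n activations                    */
               int32_t *Y)        /* m x n output, assumed zero-initialised */
{
    for (int i = 0; i < m; ++i) {
        for (int p = 0; p < k; ++p) {
            int8_t w = W[i * k + p];
            if (w == 0) continue;                        /* 0: skip entirely */
            for (int j = 0; j < n; ++j) {
                if (w > 0) Y[i * n + j] += X[p * n + j]; /* +1: add      */
                else       Y[i * n + j] -= X[p * n + j]; /* -1: subtract */
            }
        }
    }
}
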
Author(s): D. B. Hunter

1. Introduction. Let A[λ] be the irreducible invariant matrix of a general n × n matrix A, corresponding to a partition (λ) = (λ1, λ2, …, λr) of some integer m. The problem discussed here is that of determining the canonical form of A[λ] when that of A is known.
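
As a concrete illustration of the setup (not taken from the paper): for n = 2 and the partition λ = (2) of m = 2, A[λ] is the symmetric square of A, of order 3, and in the diagonalizable case its canonical form follows at once from that of A:

\[
A \sim \operatorname{diag}(\alpha_1,\ \alpha_2)
\;\Longrightarrow\;
A[(2)] \sim \operatorname{diag}\bigl(\alpha_1^{2},\ \alpha_1\alpha_2,\ \alpha_2^{2}\bigr).
\]

The substance of the problem lies in the non-diagonalizable case, where the Jordan structure of A[λ] must be derived from that of A.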


2018, Vol 53 (1), pp. 407-408
Author(s): Junhong Liu, Xin He, Weifeng Liu, Guangming Tan

2018, Vol 7 (4), pp. 515-528
Author(s): Desmond J Higham

Abstract: The friendship paradox states that, on average, our friends have more friends than we do. In network terms, the average degree over the nodes can never exceed the average degree over the neighbours of nodes. This effect, which is a classic example of sampling bias, has attracted much attention in the social science and network science literature, with variations and extensions of the paradox being defined, tested and interpreted. Here, we show that a version of the paradox holds rigorously for eigenvector centrality: on average, our friends are more important than us. We then consider general matrix-function centrality, including Katz centrality, and give sufficient conditions for the paradox to hold. We also discuss which results can be generalized to the cases of directed and weighted edges. In this way, we add theoretical support for a field that has largely been evolving through empirical testing.
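
The degree version quoted above admits a one-line derivation (a standard argument, stated here only for illustration): with d_i the degree of node i in a graph on n nodes, the average degree of a neighbour (each node counted once for every edge incident to it) exceeds the average degree by exactly the degree variance divided by the mean degree,

\[
\frac{\sum_{i} d_i^{2}}{\sum_{i} d_i} \;-\; \frac{1}{n}\sum_{i} d_i
\;=\; \frac{n\sum_{i} d_i^{2} - \bigl(\sum_{i} d_i\bigr)^{2}}{n\sum_{i} d_i}
\;=\; \frac{\sigma_d^{2}}{\bar d} \;\ge\; 0,
\]

where \(\bar d\) is the mean degree and \(\sigma_d^{2}\) is the population variance of the degrees; equality holds only for regular graphs.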


2003, Vol 95 (1), pp. 101-121
Author(s): Delin Chu, Lieven De Lathauwer, Bart De Moor

2000, Vol 16 (3-4), pp. 177-186
Author(s): Kaihuai Qin

Electronics, 2021, Vol 10 (16), pp. 1984
Author(s): Wei Zhang, Zihao Jiang, Zhiguang Chen, Nong Xiao, Yang Ou

Double-precision general matrix multiplication (DGEMM) is an essential kernel for measuring the potential performance of an HPC platform. ARMv8-based systems-on-chip (SoCs) have become candidates for next-generation HPC systems thanks to their highly competitive performance and energy efficiency, so it is worthwhile to design high-performance DGEMM for ARMv8-based SoCs. However, as ARMv8-based SoCs integrate more and more cores, modern CPUs adopt non-uniform memory access (NUMA), which restricts the performance and scalability of DGEMM when many threads access remote NUMA domains. This makes it challenging to develop high-performance DGEMM on multi-NUMA architectures. We present a NUMA-aware method that reduces the number of cross-die and cross-chip memory accesses. The critical enabler for NUMA-aware DGEMM is leveraging two levels of parallelism, between and within nodes, in a purely threaded implementation, which allows task independence and data localization across NUMA nodes. We implemented NUMA-aware DGEMM in OpenBLAS and evaluated it on a dual-socket server with 48-core processors based on the Kunpeng 920 architecture. The results show that NUMA-aware DGEMM effectively reduces cross-die and cross-chip memory accesses, significantly enhancing the scalability of DGEMM and improving its performance by 17.1% on average, with a peak improvement of 21.9%.
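
As illustration only, the following C/OpenMP sketch shows one way to realize the two levels of parallelism the abstract describes: an outer partition of the output rows across NUMA nodes and an inner split among the threads of each node. It assumes first-touch page placement and thread binding (e.g. OMP_PLACES=cores, OMP_PROC_BIND=close); numa_aware_dgemm and dgemm_row are hypothetical names, and this is not the paper's OpenBLAS implementation.

#include <omp.h>

/* One row of C: c_row += a_row * B (C assumed zero-initialised). */
static void dgemm_row(int n, int k, const double *a_row,
                      const double *B, double *c_row)
{
    for (int p = 0; p < k; ++p)
        for (int j = 0; j < n; ++j)
            c_row[j] += a_row[p] * B[p * n + j];
}

void numa_aware_dgemm(int m, int n, int k,
                      const double *A, const double *B, double *C,
                      int num_nodes)
{
    #pragma omp parallel
    {
        int tid      = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        int per_node = nthreads / num_nodes;   /* threads per NUMA node */
        if (per_node == 0) per_node = 1;
        int node  = tid / per_node;            /* node this thread is bound to */
        int local = tid % per_node;            /* rank within that node        */

        /* Level 1: each NUMA node owns a contiguous row block of A and C,
         * assumed to have been first-touch allocated in its local memory. */
        int node_rows  = (m + num_nodes - 1) / num_nodes;
        int node_first = node * node_rows;
        int node_last  = node_first + node_rows;
        if (node_last > m) node_last = m;

        /* Level 2: the node's threads split that block among themselves, so
         * writes stay in the local domain (B is read-only and shared, so
         * some remote reads of B remain). */
        for (int i = node_first + local; i < node_last; i += per_node)
            dgemm_row(n, k, A + (long)i * k, B, C + (long)i * n);
    }
}
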

