Toolboxes and Templates for Large Scale Linear Algebra Problems

2002 ◽ Author(s): Zhaojun Bai, James Demmel, Jack Dongarra
2017 ◽ Vol 46 (2) ◽ pp. 412-440 ◽ Author(s): William Horn, Manoj Kumar, Joefon Jann, José Moreira, Pratap Pattnaik, ...

2017 ◽ Vol 27 (5) ◽ pp. 719-744 ◽ Author(s): Ahmed Elgohary, Matthias Boehm, Peter J. Haas, Frederick R. Reiss, Berthold Reinwald

2010 ◽ Vol 58 (2) ◽ pp. 145-150 ◽ Author(s): M. Marqués, G. Quintana-Ortí, E. S. Quintana-Ortí, R. van de Geijn

Computer ◽ 2015 ◽ Vol 48 (8) ◽ pp. 26-34 ◽ Author(s): Daniele Buono, John A. Gunnels, Xinyu Que, Fabio Checconi, Fabrizio Petrini, ...

2021 ◽ Vol 47 (3) ◽ pp. 1-23 ◽ Author(s): Ahmad Abdelfattah, Timothy Costa, Jack Dongarra, Mark Gates, Azzam Haidar, ...

This article describes a standard API for a set of Batched Basic Linear Algebra Subprograms (Batched BLAS, or BBLAS). The focus is on many independent BLAS operations on small matrices that are grouped together and processed by a single routine, called a Batched BLAS routine. The matrices are organized into uniformly sized groups, with just one group when all the matrices are of equal size. The aim is to provide efficient yet portable implementations of algorithms on high-performance many-core platforms, including multicore and many-core CPUs, GPUs and coprocessors, and other hardware accelerators with floating-point capability. In addition to the standard single and double precisions, the standard also includes half and quadruple precision. Half precision, in particular, is used in many very large scale applications, such as those associated with machine learning.
