Reducing Inter-Process Communication Overhead in Parallel Sparse Matrix-Matrix Multiplication

Author(s):  
Md Salman Ahmed ◽  
Jennifer Houser ◽  
Mohammad A. Hoque ◽  
Rezaul Raju ◽  
Phil Pfeiffer

Parallel sparse matrix-matrix multiplication (PSpGEMM) algorithms spend most of their running time on inter-process communication. In distributed matrix-matrix multiplication, much of this time goes to exchanging the partial results needed to compute the final product matrix. This overhead can be reduced with a one-dimensional distributed algorithm for parallel sparse matrix-matrix multiplication that uses a novel accumulation pattern with logarithmic complexity in the number of processors (i.e., O(log P), where P is the number of processors). This algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster, using randomly generated sparse matrices with dimensions up to one million by one million. The results showed a reduction of inter-process communication overhead for matrices with larger dimensions compared to another one-dimensional parallel algorithm that takes O(P) run-time complexity for accumulating the results.
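The abstract does not spell out the accumulation pattern, but a binary-tree (recursive-halving) merge is the standard way to reach O(log P) accumulation depth instead of O(P). The sketch below simulates that pattern in plain Python, with each process's partial product held as a dictionary keyed by (row, col). The tree_accumulate function and the dictionary representation are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: tree-based accumulation of per-process partial results.
# Each "process" holds a sparse partial product as a dict {(row, col): value}.
# In round r, every process whose rank has bit r set sends its data to
# rank - 2**r, which merges it; after ceil(log2(P)) rounds, rank 0 holds the
# fully accumulated product.  This is a single-process simulation of the
# communication pattern, not MPI code.

import math

def merge_partials(dst, src):
    """Accumulate one sparse partial result into another."""
    for key, val in src.items():
        dst[key] = dst.get(key, 0.0) + val
    return dst

def tree_accumulate(partials):
    """Reduce a list of P partial results in ceil(log2(P)) rounds."""
    p = len(partials)
    rounds = math.ceil(math.log2(p)) if p > 1 else 0
    alive = dict(enumerate(partials))          # rank -> partial result
    for r in range(rounds):
        step = 1 << r
        for rank in sorted(alive):
            if rank % (2 * step) == step and rank - step in alive:
                # "send" from rank to rank - step, then merge there
                merge_partials(alive[rank - step], alive.pop(rank))
    return alive[0]

if __name__ == "__main__":
    # Four processes, each holding a small partial product of the same matrix.
    parts = [{(0, 0): 1.0, (1, 2): 2.0},
             {(0, 0): 0.5},
             {(2, 1): 3.0},
             {(1, 2): -1.0, (2, 1): 1.0}]
    print(tree_accumulate(parts))   # {(0, 0): 1.5, (1, 2): 1.0, (2, 1): 4.0}
```

With a linear accumulation every partial result passes through a single accumulator, giving P - 1 sequential merge steps; the tree pattern needs only ceil(log2(P)) rounds, which is where the reported reduction in communication overhead comes from.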

2016 ◽  
Author(s):  
Vineet Yadav ◽  
Anna M. Michalak

Abstract. Matrix multiplication of two sparse matrices is a fundamental operation in linear Bayesian inverse problems for computing covariance matrices of observations and a posteriori uncertainties. Applications of sparse-sparse matrix multiplication algorithms for specific use-cases in such inverse problems remain unexplored. Here we present a hybrid-parallel sparse-sparse matrix multiplication approach that is more efficient by a third in terms of execution time and operation count relative to standard sparse matrix multiplication algorithms available in most libraries. Two modifications of this hybrid-parallel algorithm are also proposed for the types of operations typical of atmospheric inverse problems, which further reduce the cost of sparse matrix multiplication by yielding only upper triangular and/or dense matrices.
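The abstract does not give the algorithm itself; the sketch below illustrates the general idea behind one of the stated modifications, computing only the upper triangular part of a sparse-sparse product, which is sufficient when the result is known to be symmetric (as with covariance matrices). The row-dictionary storage and the spgemm_upper function are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch: sparse-sparse multiplication that emits only the
# upper triangle of C = A @ B.  When C is known to be symmetric (e.g. a
# covariance matrix), the lower triangle is redundant and skipping it
# roughly halves the accumulation work.
# Matrices are stored as {row: {col: value}} dictionaries (a simple
# row-compressed form chosen for the example).

def spgemm_upper(A, B):
    C = {}
    for i, a_row in A.items():
        c_row = {}
        for j, a_ij in a_row.items():          # nonzeros of row i of A
            for k, b_jk in B.get(j, {}).items():   # nonzeros of row j of B
                if k >= i:                     # keep only the upper triangle
                    c_row[k] = c_row.get(k, 0.0) + a_ij * b_jk
        if c_row:
            C[i] = c_row
    return C

if __name__ == "__main__":
    # A symmetric product C = A @ A^T, with the transpose supplied explicitly.
    A  = {0: {0: 2.0, 2: 1.0}, 1: {1: 3.0}, 2: {0: 1.0}}
    At = {0: {0: 2.0, 2: 1.0}, 1: {1: 3.0}, 2: {0: 1.0}}
    print(spgemm_upper(A, At))
    # {0: {0: 5.0, 2: 2.0}, 1: {1: 9.0}, 2: {2: 1.0}}
```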


2018 ◽  
Vol 7 (4.10) ◽  
pp. 157
Author(s):  
G. Somasekhar ◽  
K. Karthikeyan

The importance of big data is increasing day by day, motivating researchers towards new inventions. Often, only a small amount of data is needed to reach a solution or draw a conclusion. New big data techniques stem from the need to retrieve, store, and process the required data out of huge data volumes. The present paper focuses on handling sparse matrices, which arise frequently in many sparse big data applications. It applies compact representation techniques to the sparse data and moulds the required data into a MapReduce-compatible format. The MapReduce strategy is then used to obtain results quickly, which saves execution time and improves scalability. Finally, we establish that the new algorithm performs well in sparse big data scenarios compared to existing big data processing techniques.
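The abstract does not describe the algorithm in detail; the sketch below shows the common two-stage MapReduce formulation of sparse matrix multiplication that such an approach would typically build on. The map/reduce functions, the (matrix, row, col, value) record format, and the in-memory shuffle are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch: sparse matrix multiplication C = A @ B in MapReduce
# style.  Stage 1 joins the nonzeros of A and B on the shared index j;
# stage 2 sums the partial products for each output cell (i, k).  The
# "shuffle" is simulated in memory with a dictionary; on a real cluster it
# would be the framework's grouping step.

from collections import defaultdict

def map_join(record):
    """Key each nonzero by the shared index j so matching A and B entries meet."""
    name, r, c, v = record        # A record: (A, i, j, v)   B record: (B, j, k, v)
    if name == "A":
        yield (c, ("A", r, v))    # key = j, carry the row index i
    else:
        yield (r, ("B", c, v))    # key = j, carry the column index k

def reduce_join(_, values):
    """Emit one partial product per (A, B) pair sharing the same j."""
    a_vals = [(i, v) for tag, i, v in values if tag == "A"]
    b_vals = [(k, v) for tag, k, v in values if tag == "B"]
    for i, av in a_vals:
        for k, bv in b_vals:
            yield ((i, k), av * bv)

def run(records):
    grouped = defaultdict(list)                # stage 1: map + shuffle
    for rec in records:
        for key, val in map_join(rec):
            grouped[key].append(val)
    partials = defaultdict(float)              # stage 2: sum per output cell
    for key, vals in grouped.items():
        for cell, prod in reduce_join(key, vals):
            partials[cell] += prod
    return dict(partials)

if __name__ == "__main__":
    # Sparse nonzeros of a 2x2 A and a 2x2 B as (matrix, row, col, value).
    recs = [("A", 0, 0, 2.0), ("A", 1, 1, 3.0),
            ("B", 0, 1, 4.0), ("B", 1, 0, 5.0)]
    print(run(recs))   # {(0, 1): 8.0, (1, 0): 15.0}
```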


2014 ◽  
Vol 40 (5-6) ◽  
pp. 47-58 ◽  
Author(s):  
Urban Borštnik ◽  
Joost VandeVondele ◽  
Valéry Weber ◽  
Jürg Hutter

2012 ◽  
Vol 20 (3) ◽  
pp. 241-255 ◽  
Author(s):  
Eric Bavier ◽  
Mark Hoemmen ◽  
Sivasankaran Rajamanickam ◽  
Heidi Thornquist

Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
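A central design point here is that Belos writes its iterative algorithms against an abstract operator/multivector interface rather than a concrete matrix type. The sketch below illustrates that decoupling idea generically in Python: a conjugate gradient loop that uses only a matvec callable and plain vector operations, so any backend (dense arrays, a sparse format, or GPU storage) could supply the operator. This illustrates the design principle only; it is not Trilinos, Belos, or Amesos2 code.

```python
# Illustrative sketch of algorithm/implementation decoupling: the solver only
# needs "apply the operator to a vector" plus basic vector arithmetic, so the
# same conjugate gradient code runs unchanged on any matrix representation.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def axpy(alpha, x, y):
    """Return y + alpha * x."""
    return [yi + alpha * xi for xi, yi in zip(x, y)]

def conjugate_gradient(matvec, b, x0=None, tol=1e-10, max_iter=1000):
    x = list(x0) if x0 is not None else [0.0] * len(b)
    r = axpy(-1.0, matvec(x), b)          # r = b - A x
    p = list(r)
    rs = dot(r, r)
    for _ in range(max_iter):
        if rs ** 0.5 < tol:
            break
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        rs_new = dot(r, r)
        p = axpy(rs_new / rs, p, r)       # p = r + beta * p
        rs = rs_new
    return x

if __name__ == "__main__":
    # One possible backend: a dense symmetric positive definite matrix.
    A = [[4.0, 1.0], [1.0, 3.0]]
    dense_matvec = lambda v: [dot(row, v) for row in A]
    print(conjugate_gradient(dense_matvec, [1.0, 2.0]))   # ~[0.0909, 0.6364]
```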


2021 ◽  
Author(s):  
Gonzalo Berger ◽  
Manuel Freire ◽  
Renzo Marini ◽  
Ernesto Dufrechou ◽  
Pablo Ezzatti

Author(s):  
Bhushana Samyuel Neelam ◽  
Benjamin A Shimray

The ever-increasing dependency of utilities on networking has brought several cyber vulnerabilities and burdened them with dynamic networking demands such as QoS, multihoming, and mobility. Because the existing network was designed without security in mind, it is limited in mitigating unwanted cyber threats and struggles to provide an integrated solution for these novel networking demands. These limitations led to the design and deployment of various add-on protocols that made the existing network architecture patchy and complex. The proposed work introduces one of the future internet architectures, which appears able to mitigate the above limitations. The Recursive InterNetwork Architecture (RINA) is one such future internet and appears to be a reliable solution owing to its promising design features. RINA extends inter-process communication to distributed inter-process communication and combines it with recursion. By design, RINA offers inbuilt security and the ability to meet novel networking demands, and it provides integration methods for making use of the existing network infrastructure. The present work reviews the architecture, abilities, and adaptability of RINA based on various RINA research works. The contribution of this article is to expose the potential of RINA for achieving efficient networking solutions to academia and industry.

