Technical Note: Improving the computational efficiency of sparse matrix multiplication in linear atmospheric inverse problems

2016
Author(s):  
Vineet Yadav ◽  
Anna M. Michalak

Abstract. Matrix multiplication of two sparse matrices is a fundamental operation in linear Bayesian inverse problems for computing covariance matrices of observations and a posteriori uncertainties. Applications of sparse-sparse matrix multiplication algorithms to specific use cases in such inverse problems remain unexplored. Here we present a hybrid-parallel sparse-sparse matrix multiplication approach that reduces execution time and operation count by a third relative to the standard sparse matrix multiplication algorithms available in most libraries. Two modifications of this hybrid-parallel algorithm are also proposed for the types of operations typical of atmospheric inverse problems; these further reduce the cost of sparse matrix multiplication by yielding only upper triangular and/or dense matrices.
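
As a concrete illustration (a minimal SciPy sketch, not the authors' hybrid-parallel implementation; H, Q, and all dimensions are made-up placeholders), the observation covariance H Q H^T in a linear inverse problem is symmetric, so only its upper triangle ever needs to be formed, which is the kind of saving the proposed modifications exploit:

```python
import scipy.sparse as sp

n_obs, n_state = 200, 1000
H = sp.random(n_obs, n_state, density=0.01, format="csr", random_state=0)
Q = sp.random(n_state, n_state, density=0.01, format="csr", random_state=1)
Q = Q + Q.T                      # symmetric prior covariance (illustrative)

HQ = H @ Q                       # sparse-sparse product
S_upper = sp.triu(HQ @ H.T)      # H Q H^T is symmetric: keep the upper triangle only
```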

2022
Vol 41 (1)
pp. 1-10
Author(s):  
Jonas Zehnder ◽  
Stelian Coros ◽  
Bernhard Thomaszewski

We present a sparse Gauss-Newton solver for accelerated sensitivity analysis with applications to a wide range of equilibrium-constrained optimization problems. Dense Gauss-Newton solvers have shown promising convergence rates for inverse problems, but the cost of assembling and factorizing the associated matrices has so far been a major stumbling block. In this work, we show how the dense Gauss-Newton Hessian can be transformed into an equivalent sparse matrix that can be assembled and factorized much more efficiently. This leads to drastically reduced computation times for many inverse problems, which we demonstrate on a diverse set of examples. We furthermore show links between sensitivity analysis and nonlinear programming approaches based on Lagrange multipliers and prove equivalence under specific assumptions that apply to our problem setting.
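
For orientation, a generic sparse Gauss-Newton step looks like the following (a sketch only, not the paper's sensitivity-analysis solver; gauss_newton_step, J, and r are illustrative names): the Gauss-Newton Hessian J^T J is assembled as a sparse matrix and factorized once per step.

```python
from scipy.sparse.linalg import splu

def gauss_newton_step(J, r):
    """One Gauss-Newton step: solve (J^T J) dx = -J^T r with a sparse LU.

    J: sparse Jacobian (m x n), r: residual vector of length m.
    """
    JtJ = (J.T @ J).tocsc()      # sparse Gauss-Newton Hessian
    lu = splu(JtJ)               # sparse factorization (cheap when JtJ is sparse)
    return lu.solve(-(J.T @ r))  # descent direction dx
```

Replacing a dense factorization of J^T J with a sparse one, as above in its simplest form, is the same kind of dense-to-sparse transformation the paper develops for its equilibrium-constrained setting.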


2018
Vol 7 (4.10)
pp. 157
Author(s):  
G. Somasekhar ◽  
K. Karthikeyan

The importance of big data is increasing day by day, motivating researchers towards new inventions. Often, only a small amount of data is needed to reach a solution or draw a conclusion, and the new big data techniques stem from the need to retrieve, store, and process that required data out of huge data volumes. The present paper focuses on handling sparse matrices, which is frequently needed in many sparse big data applications. It applies compact representation techniques to sparse data and moulds the required data into a MapReduce-compatible format. The MapReduce strategy is then used to obtain results quickly, which saves execution time and improves scalability. Finally, we establish that the new algorithm outperforms existing big data processing techniques in sparse big data scenarios.
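
The map/reduce view of sparse multiplication can be sketched in a few lines of plain Python (a toy illustration of the general strategy, not the paper's algorithm; spgemm_mapreduce is an invented name): entries of A and B stored as (row, column, value) triples are joined on their shared index, and partial products are reduced by output coordinate.

```python
from collections import defaultdict

def spgemm_mapreduce(A, B):
    """A, B: iterables of (row, col, value) triples; returns {(i, k): value}."""
    b_by_row = defaultdict(list)
    for j, k, v in B:                  # map phase: key B's entries by row index
        b_by_row[j].append((k, v))
    out = defaultdict(float)
    for i, j, va in A:                 # join A's entries on the shared index j
        for k, vb in b_by_row[j]:
            out[(i, k)] += va * vb     # reduce phase: sum partial products
    return dict(out)

print(spgemm_mapreduce([(0, 1, 2.0)], [(1, 3, 4.0)]))  # {(0, 3): 8.0}
```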


2021
Vol 38 (2)
pp. 025005
Author(s):  
Birzhan Ayanbayev ◽  
Ilja Klebanov ◽  
Han Cheng Lie ◽  
T J Sullivan

Abstract The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager–Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Γ-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
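
For concreteness, in the familiar finite-dimensional Gaussian case (a textbook special case, not the paper's general Γ-convergence statement; G, Σ, C, and y are generic placeholders for the forward map, noise covariance, prior covariance, and data), the MAP estimator is the minimiser of the Onsager–Machlup functional:

```latex
% Textbook finite-dimensional Gaussian case (illustration only):
% the MAP estimate minimises the Onsager--Machlup functional
\[
  I(u) \;=\; \tfrac{1}{2}\,\bigl\|\Sigma^{-1/2}\bigl(y - G(u)\bigr)\bigr\|^{2}
        \;+\; \tfrac{1}{2}\,\bigl\|C^{-1/2}\,u\bigr\|^{2},
\]
% which is exactly the regularised variational (Tikhonov-type) objective,
% so convergence questions for both approaches reduce to convergence of I.
```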


Author(s):  
Md Salman Ahmed ◽  
Jennifer Houser ◽  
Mohammad A. Hoque ◽  
Rezaul Raju ◽  
Phil Pfeiffer

Parallel sparse matrix-matrix multiplication (PSpGEMM) algorithms spend most of their running time on inter-process communication. In distributed matrix-matrix multiplication, much of this time is spent interchanging the partial results needed to compute the final product matrix. This overhead can be reduced with a one-dimensional distributed algorithm for parallel sparse matrix-matrix multiplication that uses a novel accumulation pattern with logarithmic complexity in the number of processors, i.e., O(log p), where p is the number of processors. This algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster, using randomly generated sparse matrices with dimensions up to one million by one million. The results showed a reduction of inter-process communication overhead for matrices with larger dimensions compared to another one-dimensional parallel algorithm that takes O(p) run time for accumulating the results.
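
The accumulation pattern can be illustrated with a pairwise merge tree (a sketch of the idea in Python, not the authors' MPI implementation; tree_accumulate is an invented name): p partial results are combined in ceil(log2(p)) rounds rather than p - 1 sequential additions.

```python
def tree_accumulate(partials):
    """Merge a list of p partial results (any objects supporting +, e.g.
    scipy sparse matrices) in ceil(log2(p)) pairwise rounds."""
    while len(partials) > 1:
        merged = [a + b for a, b in zip(partials[::2], partials[1::2])]
        if len(partials) % 2:          # odd element carries over to the next round
            merged.append(partials[-1])
        partials = merged
    return partials[0]
```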

