Applications

Author(s):  
Wesley Petersen ◽  
Peter Arbenz

Linear algebra forms the kernel of most numerical computations. It deals with vectors and matrices and with simple operations, like addition and multiplication, on these objects.

Vectors are one-dimensional arrays of, say, $n$ real or complex numbers $x_0, x_1, \ldots, x_{n-1}$. We denote such a vector by $\mathbf{x}$ and think of it as a column vector,

$$\mathbf{x} = \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{pmatrix}.$$

On a sequential computer, these numbers occupy $n$ consecutive memory locations. This is also true, at least conceptually, on a shared-memory multiprocessor computer. On distributed-memory multicomputers, the primary issue is how to distribute vectors over the memories of the processors involved in the computation.

Matrices are two-dimensional arrays of the form

$$A = \begin{pmatrix} a_{00} & \cdots & a_{0,n-1} \\ \vdots & & \vdots \\ a_{m-1,0} & \cdots & a_{m-1,n-1} \end{pmatrix}.$$

The $m \cdot n$ real (complex) matrix elements $a_{ij}$ are stored in $m \cdot n$ (respectively $2 \cdot m \cdot n$, if a complex datatype is available) consecutive memory locations. This is achieved either by stacking the columns on top of each other or by appending row after row. The former is called column-major, the latter row-major order. The actual procedure depends on the programming language: in Fortran, matrices are stored in column-major order, in C in row-major order. There is no fundamental difference, but to write efficient programs one has to respect how matrices are laid out. To be consistent with the libraries we will use, which are mostly written in Fortran, we will explicitly program in column-major order. Thus, the matrix element $a_{ij}$ of the $m \times n$ matrix $A$ is located $i + j \cdot m$ memory locations after $a_{00}$; therefore, in our C codes we will write a[i+j*m]. Notice that there is no such simple rule for determining the memory location of an element of a sparse matrix. In Section 2.3, we outline data descriptors to handle sparse matrices. In this and later chapters we deal with one of the simplest operations one wants to do with vectors and matrices: the so-called saxpy operation $\mathbf{y} \leftarrow \alpha \mathbf{x} + \mathbf{y}$ (2.3). Tables 2.1 and 2.2 list some of the acronyms and conventions for the basic linear algebra subprograms discussed in this book.
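To make the storage convention concrete, here is a minimal C sketch of column-major indexing together with a plain saxpy loop; the array names and sizes are ours, chosen only for illustration.

```c
#include <stdio.h>

/* saxpy: y <- alpha*x + y, the basic vector operation (2.3) */
void saxpy(int n, float alpha, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] += alpha * x[i];
}

int main(void)
{
    enum { M = 3, N = 2 };          /* an m x n matrix: m rows, n columns */
    float a[M * N];

    /* Column-major storage: element a_ij lives i + j*m slots after a_00. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < M; i++)
            a[i + j * M] = 10.0f * i + j;

    /* Columns are contiguous, so saxpy applies directly to them:
       column 1 <- 2*column 0 + column 1 */
    saxpy(M, 2.0f, &a[0 + 0 * M], &a[0 + 1 * M]);

    for (int i = 0; i < M; i++)
        printf("row %d: %6.1f %6.1f\n", i, a[i + 0 * M], a[i + 1 * M]);
    return 0;
}
```

Note that the same saxpy routine works on any column precisely because column-major order keeps each column contiguous; applying it to a row would require a strided variant.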

Author(s):  
Rob H. Bisseling

This chapter introduces irregular algorithms and presents the example of parallel sparse matrix-vector multiplication (SpMV), which is the central operation in iterative linear system solvers. The irregular sparsity pattern of the matrix does not change during the multiplication, which may be repeated many times. This justifies putting a lot of effort into finding a good data distribution. The Mondriaan distribution of a sparse matrix is a useful non-Cartesian distribution that can be found by hypergraph-based partitioning. The Mondriaan package implements such a partitioning and also the newer medium-grain partitioning method. The chapter analyses the special cases of random sparse matrices and Laplacian matrices. It uses performance profiles and geometric means to compare different partitioning methods. Furthermore, it presents the hybrid-BSP model and a hybrid-BSP SpMV, which are aimed at hybrid distributed/shared-memory architectures. The parallel SpMV can be incorporated in applications, ranging from PageRank computation to artificial neural networks.
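For readers unfamiliar with the kernel itself, a sequential C sketch of SpMV in the common compressed sparse row (CSR) format follows; the data distribution, BSP supersteps, and partitioning machinery that the chapter is actually about are deliberately omitted, and all identifiers are illustrative.

```c
/* y <- A*x for a sparse n x n matrix A in compressed sparse row (CSR) form.
 * rowptr has n+1 entries; colind/val hold the nonzeros row by row.
 * A minimal sequential sketch: the parallel versions discussed in the
 * chapter additionally distribute rows, columns, and nonzeros over
 * processors according to the chosen (e.g. Mondriaan) partitioning. */
void spmv_csr(int n, const int *rowptr, const int *colind,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
            sum += val[k] * x[colind[k]];
        y[i] = sum;
    }
}
```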


1997 ◽  
Vol 23 (3) ◽  
pp. 379-401 ◽  
Author(s):  
Iain S. Duff ◽  
Michele Marrone ◽  
Giuseppe Radicati ◽  
Carlo Vittoli

Author(s):  
Ernesto Dufrechou ◽  
Pablo Ezzatti ◽  
Enrique S Quintana-Ortí

More than 10 years of research into efficient GPU routines for the sparse matrix-vector product (SpMV) have led to several realizations, each with its own strengths and weaknesses. In this work, we review some of the most relevant efforts on the subject, evaluate a few prominent publicly available routines on more than 3000 matrices from different applications, and apply machine learning techniques to anticipate which SpMV realization will perform best for each sparse matrix on a given parallel platform. Our numerical experiments confirm that the behavior of the methods varies so strongly with the matrix structure that general rules for selecting the optimal method for a given matrix are extremely difficult to identify, although some useful strategies (heuristics) can be defined. Using a machine learning approach, we show that it is possible to obtain inexpensive classifiers that predict the best method for a given sparse matrix with over 80% accuracy, demonstrating that this approach can deliver substantial reductions in both execution time and energy consumption.
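As a hedged illustration of the kind of heuristic the authors allude to (not their trained classifier, whose features and thresholds are not reproduced here), a selector might branch on simple row statistics of the sparsity pattern:

```c
/* Toy heuristic for picking an SpMV kernel from CSR row statistics.
 * The kernel names and thresholds below are illustrative assumptions
 * only; a real selector would be trained on measured performance data. */
enum spmv_kernel { KERNEL_CSR_SCALAR, KERNEL_CSR_VECTOR, KERNEL_ELL };

enum spmv_kernel pick_kernel(int n, const int *rowptr)
{
    double mean = (double)rowptr[n] / n;   /* average nonzeros per row */
    int max_row = 0;
    for (int i = 0; i < n; i++) {
        int len = rowptr[i + 1] - rowptr[i];
        if (len > max_row) max_row = len;
    }
    if (max_row < 2.0 * mean)   /* nearly uniform rows: ELL padding is cheap */
        return KERNEL_ELL;
    if (mean > 16.0)            /* long rows: assign a warp/vector per row */
        return KERNEL_CSR_VECTOR;
    return KERNEL_CSR_SCALAR;   /* short, irregular rows: one thread per row */
}
```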


2012 ◽  
Vol 20 (3) ◽  
pp. 241-255 ◽  
Author(s):  
Eric Bavier ◽  
Mark Hoemmen ◽  
Sivasankaran Rajamanickam ◽  
Heidi Thornquist

Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
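Belos achieves this decoupling through a C++ traits interface. A loose C analogue, purely illustrative and in no way the Trilinos API, expresses the same idea with an opaque operator handle and function pointers, so the solver below (a textbook conjugate gradient iteration) never names a concrete matrix or vector type:

```c
#include <math.h>
#include <string.h>

/* A C caricature of Belos-style decoupling: the solver is written only
 * against this abstract interface. Names are ours, not Trilinos'. */
typedef struct {
    void *data;                                            /* opaque matrix */
    void (*apply)(void *data, const double *x, double *y); /* y <- A*x     */
    int n;                                                 /* problem size  */
} linear_op;

static double dot(int n, const double *u, const double *v)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += u[i] * v[i];
    return s;
}

/* Unpreconditioned CG for symmetric positive definite A, solving A*x = b.
 * r, p, q are caller-provided workspace vectors of length op->n.
 * Returns the iteration count on convergence, -1 otherwise. */
int cg_solve(const linear_op *op, const double *b, double *x,
             double tol, int maxit, double *r, double *p, double *q)
{
    int n = op->n;
    op->apply(op->data, x, r);                 /* r = b - A*x */
    for (int i = 0; i < n; i++) r[i] = b[i] - r[i];
    memcpy(p, r, n * sizeof(double));
    double rho = dot(n, r, r);
    for (int it = 0; it < maxit; it++) {
        if (sqrt(rho) < tol) return it;        /* converged */
        op->apply(op->data, p, q);             /* q = A*p */
        double alpha = rho / dot(n, p, q);
        for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
        double rho_new = dot(n, r, r);
        for (int i = 0; i < n; i++) p[i] = r[i] + (rho_new / rho) * p[i];
        rho = rho_new;
    }
    return -1;                                  /* no convergence */
}
```

Any backend (serial CSR, MPI-distributed, GPU-resident) can be plugged in by filling the linear_op table; this run-time indirection is a rough stand-in for what Belos' traits interface provides at compile time.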


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Guoliang Xu ◽  
Xia Wang ◽  
Ming Li ◽  
Zhucui Jing

Abstract We present an efficient and reliable algorithm for determining the orientations of noisy images obtained from projections of a three-dimensional object. Based on the linear relationship among the common-line vectors in one image plane, we construct a sparse matrix and show that the coordinates of the common-line vectors are the eigenvectors of the matrix corresponding to the eigenvalue 1. The projection directions and in-plane rotation angles can be determined from these coordinates. A method for computing common lines in real space using a weighted cross-correlation function is proposed to increase the robustness of the algorithm against noise. A small number of good leading images, chosen to be maximally dissimilar, is used to increase the reliability of the orientations and to improve the efficiency of determining the orientations of all the images. Numerical experiments show that the proposed algorithm is effective and efficient.
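The paper's construction is specific to common-line geometry, but the final step, extracting an eigenvector for the eigenvalue 1 of a sparse matrix, can be illustrated generically. The power-iteration sketch below, in C over a CSR matrix, recovers that eigenvector only under the assumption that 1 is the dominant eigenvalue; the function name and signature are ours, not the paper's algorithm.

```c
#include <math.h>

/* Power iteration on an n x n CSR matrix: converges to an eigenvector of
 * the dominant eigenvalue. If the relevant eigenvalue 1 dominates the
 * spectrum (an assumption made here for illustration), this recovers the
 * desired coordinate vector. v must start nonzero; w is workspace. */
void power_iteration(int n, const int *rowptr, const int *colind,
                     const double *val, double *v, double *w, int iters)
{
    for (int it = 0; it < iters; it++) {
        for (int i = 0; i < n; i++) {          /* w = A*v */
            double s = 0.0;
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                s += val[k] * v[colind[k]];
            w[i] = s;
        }
        double norm = 0.0;                      /* v = w / ||w|| */
        for (int i = 0; i < n; i++) norm += w[i] * w[i];
        norm = sqrt(norm);
        for (int i = 0; i < n; i++) v[i] = w[i] / norm;
    }
}
```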


Author(s):  
Simon McIntosh–Smith ◽  
Rob Hunt ◽  
James Price ◽  
Alex Warwick Vesztrocy

High-performance computing systems continue to increase in size in the quest for ever higher performance. The resulting increased electronic component count, coupled with the decreasing feature sizes of the silicon manufacturing processes used to build these components, may make future exascale systems more susceptible to soft errors caused by cosmic radiation than current high-performance computing systems are. Through the use of techniques such as hardware-based error-correcting codes and checkpoint-restart, many of these faults can be mitigated, but at the cost of increased hardware overhead, run-time, and energy consumption that can be as much as 10–20%. Some predictions expect these overheads to continue to grow over time. For extreme-scale systems, these overheads will represent megawatts of power consumption and millions of dollars of additional hardware cost, which could potentially be avoided with more sophisticated fault-tolerance techniques. In this paper we present new software-based fault-tolerance techniques that can be applied to one of the most important classes of software in high-performance computing: iterative sparse matrix solvers. Our new techniques enable us to exploit knowledge of the structure of sparse matrices to improve the performance, energy efficiency, and fault tolerance of the overall solution.
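The paper's own techniques exploit the structure of the sparse matrix; as a generic example of the software-based flavor of fault tolerance (classic checksum-protected SpMV in the algorithm-based fault tolerance tradition, not the authors' scheme), one can verify after every multiply that the sum of the entries of A*x equals the precomputed column-sum vector dotted with x. Function names and the tolerance policy below are illustrative assumptions.

```c
#include <math.h>
#include <stdbool.h>

/* Checksum-protected SpMV for an n x n CSR matrix (generic ABFT sketch,
 * not the authors' specific technique). colsum must hold the column sums
 * of A, computed once up front. Because e^T (A x) = (e^T A) x, a
 * corrupted entry of y (or of the accumulation) breaks the identity,
 * so the test detects most soft errors in the multiply. */
bool spmv_checked(int n, const int *rowptr, const int *colind,
                  const double *val, const double *colsum,
                  const double *x, double *y, double tol)
{
    double lhs = 0.0, rhs = 0.0;
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
            s += val[k] * x[colind[k]];
        y[i] = s;
        lhs += s;                        /* e^T (A x): sum of y entries */
    }
    for (int j = 0; j < n; j++)
        rhs += colsum[j] * x[j];         /* (e^T A) x: checksum prediction */
    return fabs(lhs - rhs) <= tol * (fabs(lhs) + fabs(rhs) + 1.0);
}
```

On a detected mismatch, a solver can recompute the product or roll back to a recent iterate, which is far cheaper than a full checkpoint-restart of the application.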


2018 ◽  
Vol 12 (3) ◽  
pp. 143-157 ◽  
Author(s):  
Håvard Raddum ◽  
Pavol Zajac

Abstract We show how to build a binary matrix from the MRHS representation of a symmetric-key cipher. The matrix contains the cipher represented as an equation system and can be used to assess a cipher's resistance against algebraic attacks. We give an algorithm for solving the system and compute its complexity. The complexity is normally close to that of an exhaustive search over the variables representing the user-selected key. Finally, we show that for some variants of LowMC, the joined MRHS matrix representation can be used to speed up regular encryption in addition to exhaustive key search.
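The MRHS solving algorithm itself is beyond a short sketch, but its workhorse, linear algebra over GF(2), is easy to illustrate. The following C routine performs bit-packed Gaussian elimination on a binary matrix, where adding one row to another is a word-wise XOR; it is a generic building block under our own naming, not code from the paper.

```c
#include <stdint.h>

/* Gaussian elimination over GF(2) on an r x c binary matrix packed into
 * 64-bit words (column j lives in bit j%64 of word j/64 of each row).
 * Row addition over GF(2) is XOR, so each elimination step is a single
 * word-wise loop. Reduces to RREF in place and returns the rank.
 * words_per_row must be at least ceil(cols/64). */
int gf2_eliminate(int rows, int cols, uint64_t *m, int words_per_row)
{
    int rank = 0;
    for (int col = 0; col < cols && rank < rows; col++) {
        int w = col / 64;
        uint64_t bit = 1ULL << (col % 64);
        int pivot = -1;
        for (int i = rank; i < rows; i++)         /* find a pivot row */
            if (m[i * words_per_row + w] & bit) { pivot = i; break; }
        if (pivot < 0) continue;                  /* no pivot in this column */
        for (int k = 0; k < words_per_row; k++) { /* swap pivot into place */
            uint64_t t = m[rank * words_per_row + k];
            m[rank * words_per_row + k] = m[pivot * words_per_row + k];
            m[pivot * words_per_row + k] = t;
        }
        for (int i = 0; i < rows; i++)            /* clear column elsewhere */
            if (i != rank && (m[i * words_per_row + w] & bit))
                for (int k = 0; k < words_per_row; k++)
                    m[i * words_per_row + k] ^= m[rank * words_per_row + k];
        rank++;
    }
    return rank;
}
```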


2011 ◽  
Vol 11 (3) ◽  
pp. 382-393 ◽  
Author(s):  
Ivan Oseledets

Abstract In this paper, the concept of the DMRG minimization scheme is extended to several important operations in the TT-format, like the matrix-by-vector product and the conversion from the canonical format to the TT-format. Fast algorithms are implemented and a stabilization scheme based on randomization is proposed. The comparison with the direct method is performed on a sequence of matrices and vectors coming as approximate solutions of linear systems in the TT-format. A generated example is provided to show that randomization is really needed in some cases. The matrices and vectors used are available from the author or at http://spring.inm.ras.ru/osel


2018 ◽  
Vol 11 (3) ◽  
pp. 774-792
Author(s):  
Mutti-Ur Rehman ◽  
M. Fazeel Anwar

In this article we consider the matrix representations of finite symmetric groups $S_n$ over the field of complex numbers. These groups and their representations also appear as symmetries of certain linear control systems [5]. We compute the structured singular values (SSV) of the matrices arising from these representations. The obtained SSV results are compared with those of the well-known MATLAB routine mussv.

