Generation of test matrices with specified eigenvalues using floating-point arithmetic

Author(s):  
Katsuhisa Ozaki ◽  
Takeshi Ogita

Abstract: This paper concerns test matrices for numerical linear algebra using an error-free transformation of floating-point arithmetic. For eigenvalues specified by a user, we propose methods of generating a matrix whose eigenvalues are exactly known, based on, for example, the Schur or Jordan normal form and a block diagonal form. It is also possible to produce a real matrix with specified complex eigenvalues. Such test matrices with exactly known eigenvalues are useful for checking the accuracy of results computed by numerical algorithms; in particular, the exact errors of the eigenvalues can be monitored. To generate the test matrices, we first propose an error-free transformation for the product of three matrices $YSX$: we approximate $S$ by $S'$ so that $YS'X$ can be computed without rounding error. Next, this error-free transformation is applied to the generation of test matrices with exactly known eigenvalues. Note that the exactly known eigenvalues of the constructed matrix may differ from the anticipated eigenvalues originally given. Finally, numerical examples are presented that check the accuracy of numerical computations for symmetric and unsymmetric eigenvalue problems.
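A minimal sketch of the underlying idea, under our own assumptions (the function name and the random choice of the transform are illustrative, not the paper's construction): a matrix with approximately specified eigenvalues is built by the similarity transform $A = X D X^{-1}$. Because ordinary floating-point arithmetic is used here, the spectrum is slightly perturbed, which is exactly what the error-free computation of $YS'X$ in the paper is designed to avoid.

```python
# Naive (non-error-free) construction of a test matrix with specified eigenvalues.
# The resulting eigenvalues are only approximately the requested ones because
# the products and the inverse are computed in ordinary floating-point arithmetic.
import numpy as np

def naive_test_matrix(eigenvalues, seed=0):
    """Return a matrix whose eigenvalues approximate `eigenvalues`."""
    rng = np.random.default_rng(seed)
    n = len(eigenvalues)
    D = np.diag(eigenvalues)          # diagonal "normal form" for this simple case
    X = rng.standard_normal((n, n))   # random similarity transform
    return X @ D @ np.linalg.inv(X)   # rounding errors perturb the spectrum slightly

lam = np.array([1.0, 2.0, 3.0, 4.0])
A = naive_test_matrix(lam)
print(np.sort(np.linalg.eigvals(A).real))   # close to lam, but not exactly equal
```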

Author(s):  
Jack Dongarra ◽  
Laura Grigori ◽  
Nicholas J. Higham

A number of features of today’s high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; use of accelerators (GPUs, FPGAs, coprocessors), making architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half-precision. Moreover, as well as maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that we can take to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
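As a small self-contained illustration of the precision range mentioned above (our example, not taken from the article), the unit roundoff and overflow threshold of IEEE half, single, and double precision can be queried directly with NumPy; the few decimal digits retained by half precision are one reason algorithms must be redesigned to exploit it safely.

```python
# Compare the IEEE floating-point formats available in NumPy.
import numpy as np

for dtype in (np.float16, np.float32, np.float64):
    fi = np.finfo(dtype)
    # unit roundoff is half the machine epsilon under round-to-nearest
    print(f"{dtype.__name__:>8}: unit roundoff ~ {fi.eps / 2:.3e}, max ~ {fi.max:.3e}")
```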


Author(s):  
Hartwig Anzt ◽  
Goran Flegar ◽  
Thomas Grützmacher ◽  
Enrique S Quintana-Ortí

With the memory bandwidth of current computer architectures being significantly slower than their (floating-point) arithmetic performance, many scientific computations leverage only a fraction of the computational power of today’s high-performance architectures. At the same time, memory operations are the primary energy consumer of modern architectures, heavily impacting the resource cost of large-scale applications and the battery life of mobile devices. This article tackles this mismatch between floating-point arithmetic throughput and memory bandwidth by advocating a disruptive paradigm change with respect to how data are stored and processed in scientific applications. Concretely, the goal is to radically decouple the data storage format from the processing format and, ultimately, design a “modular precision ecosystem” that allows more flexibility in terms of customized data access. For memory-bound scientific applications, dynamically adapting the memory precision to the numerical requirements allows for attractive resource savings. In this article, we demonstrate the potential of employing a modular precision ecosystem for the block-Jacobi preconditioner and the PageRank algorithm—two applications that are popular in their communities and at the same time characteristic representatives of the fields of numerical linear algebra and data analytics, respectively.
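A minimal sketch of the storage/processing decoupling described above, under our own assumptions (the function name and the simple Jacobi-style diagonal preconditioner are illustrative, not the authors' code): the preconditioner is stored in float32 to reduce memory traffic, while every arithmetic operation is carried out in float64 after an on-the-fly upcast.

```python
# Store the preconditioner in reduced precision, process it in double precision.
import numpy as np

def apply_low_precision_jacobi(diag_inv_f32, r):
    """Apply M^{-1} r where the diagonal of M^{-1} is stored in float32."""
    d = diag_inv_f32.astype(np.float64)   # upcast at access time (processing format)
    return d * r                          # arithmetic performed in double precision

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
r = np.array([1.0, 2.0])
diag_inv_f32 = (1.0 / np.diag(A)).astype(np.float32)   # compressed storage format
print(apply_low_precision_jacobi(diag_inv_f32, r))
```

The same pattern extends to storing blocks in float16 or a custom format, as long as the conversion to the processing precision happens at access time.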


2020 ◽  
Vol 39 (6) ◽  
pp. 1-16
Author(s):  
Gianmarco Cherchi ◽  
Marco Livesu ◽  
Riccardo Scateni ◽  
Marco Attene

1964 ◽  
Vol 7 (1) ◽  
pp. 10-13 ◽  
Author(s):  
Robert T. Gregory ◽  
James L. Raney

2020 ◽  
Vol 26 (4) ◽  
pp. 273-284
Author(s):  
Hao Ji ◽  
Michael Mascagni ◽  
Yaohang Li

Abstract: In this article, we consider the general problem of checking the correctness of matrix multiplication. Given three $n \times n$ matrices $A$, $B$, and $C$, the goal is to verify that $A \times B = C$ without carrying out the computationally costly operations of multiplying $A$ and $B$ and comparing the product with $C$ term by term. This is especially important when some or all of these matrices are very large, and when the computing environment is prone to soft errors. Here we extend Freivalds’ algorithm to a Gaussian Variant of Freivalds’ Algorithm (GVFA) by projecting the product $A \times B$ as well as $C$ onto a Gaussian random vector and then comparing the resulting vectors. The computational complexity of GVFA is consistent with that of Freivalds’ algorithm, which is $O(n^2)$. However, unlike Freivalds’ algorithm, whose probability of a false positive is $2^{-k}$, where $k$ is the number of iterations, our theoretical analysis shows that, when $A \times B \neq C$, GVFA produces a false positive only on a set of inputs of measure zero in exact arithmetic. When we introduce round-off error and floating-point arithmetic into our analysis, we can show that the larger this error, the higher the probability that GVFA avoids false positives. Moreover, by iterating GVFA $k$ times, the probability of a false positive decreases as $p^k$, where $p$ is a very small value depending on the nature of the fault in the result matrix and the floating-point precision of the arithmetic system. Unlike deterministic algorithms, there do not exist any fault patterns that are completely undetectable with GVFA. Thus GVFA can be used to provide efficient fault tolerance in numerical linear algebra, and it can be implemented efficiently on modern computing architectures. In particular, GVFA can be implemented very efficiently on architectures with hardware support for fused multiply-add operations.
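A short sketch of GVFA as described above; the tolerance used to compare the projected vectors in floating-point arithmetic is an ad-hoc choice of ours, not a value from the paper. Each iteration costs three matrix-vector products, i.e. $O(n^2)$, instead of a full $O(n^3)$ matrix-matrix product.

```python
# Gaussian variant of Freivalds' algorithm: probabilistic check of A @ B == C.
import numpy as np

def gvfa(A, B, C, k=1, tol=1e-10, seed=None):
    """Return True if A @ B == C passes k Gaussian projection tests."""
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    for _ in range(k):
        r = rng.standard_normal(n)    # Gaussian random projection vector
        lhs = A @ (B @ r)             # project A*B without ever forming it
        rhs = C @ r
        if np.linalg.norm(lhs - rhs) > tol * (np.linalg.norm(rhs) + 1.0):
            return False              # mismatch detected
    return True                       # no fault found (false positives are possible)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
B = rng.standard_normal((200, 200))
C = A @ B
print(gvfa(A, B, C, k=3, seed=1))     # True: product is correct
C[5, 7] += 1e-3                       # inject a small soft error
print(gvfa(A, B, C, k=3, seed=1))     # False: fault detected
```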

