FAST ROBUST ARITHMETICS FOR GEOMETRIC ALGORITHMS AND APPLICATIONS TO GIS

Author(s):  
T. Bartels ◽  
V. Fisikopoulos

Abstract. Geometric predicates are used in many GIS algorithms, such as the construction of Delaunay triangulations for Triangulated Irregular Networks (TINs) or the evaluation of geospatial predicates. With floating-point arithmetic, these computations can incur round-off errors that may lead to incorrect results and inconsistencies, causing computations to fail. This issue has been addressed by combining exact arithmetic for robustness with floating-point filters that mitigate the computational cost of exact computations. Implementing exact computations and floating-point filters can be a difficult task, and code-generation tools have been proposed to address this. We present a new C++ meta-programming framework for generating fast, robust implementations of arbitrary geometric predicates based on polynomial expressions. We show examples of how this approach produces correct results on GIS data sets for which naive implementations yield incorrect predicate results. We also present benchmark results demonstrating that our implementation can compete with state-of-the-art solutions.
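To make the filter-plus-exact-fallback pattern described above concrete, here is a minimal C++ sketch of a statically filtered 2-D orientation predicate. This is our own illustration, not the paper's generated code: the error-bound constant is in the spirit of Shewchuk's ccwerrboundA, and the fallback stage uses long double only as a placeholder for a genuinely exact method.

#include <cmath>
#include <cstdio>

// Sign of the determinant | bx-ax  by-ay |
//                         | cx-ax  cy-ay |
// +1 = counterclockwise, -1 = clockwise, 0 = cannot certify / collinear.
int orient2d_filtered(double ax, double ay, double bx, double by,
                      double cx, double cy) {
    double detL = (bx - ax) * (cy - ay);
    double detR = (by - ay) * (cx - ax);
    double det  = detL - detR;

    // Static filter: if |det| exceeds a bound on the accumulated roundoff,
    // the floating-point sign is provably correct.
    double detSum = std::fabs(detL) + std::fabs(detR);
    const double errBound = 3.3306690738754716e-16; // ~ (3 + 16u)u, u = 2^-53
    if (std::fabs(det) > errBound * detSum)
        return (det > 0) - (det < 0);

    // Filter failed: fall back to higher precision. A real implementation
    // would use an exact stage (e.g., expansion arithmetic) here.
    long double d = ((long double)bx - ax) * ((long double)cy - ay)
                  - ((long double)by - ay) * ((long double)cx - ax);
    return (d > 0) - (d < 0);
}

int main() {
    // Well-conditioned input: the filter certifies the double-precision sign.
    printf("ccw: %d\n", orient2d_filtered(0, 0, 1, 0, 0, 1));
    // Exactly collinear points at very different scales: the filter cannot
    // certify a sign, so the fallback stage decides (here: 0).
    printf("collinear: %d\n",
           orient2d_filtered(0, 0, 1e-30, 1e-30, 1e30, 1e30));
    return 0;
}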

1995 ◽  
Vol 05 (01n02) ◽  
pp. 193-213 ◽  
Author(s):  
STEVEN FORTUNE

We consider the correctness of 2-d Delaunay triangulation algorithms implemented using floating-point arithmetic. The α-pseudocircle through points a, b, c consists of three circular arcs connecting ab, bc, and ac, each arc inside the circumcircle of a, b, c and forming angle α with the circumcircle; a triangulation is α-empty if the α-pseudocircle through the vertices of each triangle is empty. We show that a simple Delaunay triangulation algorithm, the flipping algorithm, can be implemented to produce O(nε)-empty triangulations, where n is the number of point sites and ε is the relative error of floating-point arithmetic; its worst-case running time is O(n²). We also discuss floating-point implementation of other 2-d Delaunay triangulation algorithms.
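The flipping algorithm referenced above is driven by an in-circle predicate, and it is precisely the floating-point sign errors of this test near zero that produce the O(nε) emptiness slack. A minimal sketch of the test (our illustration, not Fortune's code):

#include <cstdio>

struct Pt { double x, y; };

// Point d lies inside the circumcircle of counterclockwise triangle (a, b, c)
// iff this 3x3 determinant is positive; near zero, the floating-point sign
// may be wrong.
double incircle(Pt a, Pt b, Pt c, Pt d) {
    double adx = a.x - d.x, ady = a.y - d.y;
    double bdx = b.x - d.x, bdy = b.y - d.y;
    double cdx = c.x - d.x, cdy = c.y - d.y;
    double ad2 = adx * adx + ady * ady;
    double bd2 = bdx * bdx + bdy * bdy;
    double cd2 = cdx * cdx + cdy * cdy;
    return adx * (bdy * cd2 - cdy * bd2)
         - ady * (bdx * cd2 - cdx * bd2)
         + ad2 * (bdx * cdy - cdx * bdy);
}

// The flipping algorithm repeatedly replaces the shared edge of adjacent
// triangles (a, b, c) and (a, c, d) whenever incircle(a, b, c, d) > 0.
int main() {
    Pt a{0, 0}, b{1, 0}, c{0, 1};
    printf("inside:  %+.3f\n", incircle(a, b, c, Pt{0.5, 0.5})); // > 0
    printf("outside: %+.3f\n", incircle(a, b, c, Pt{2.0, 2.0})); // < 0
    return 0;
}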


Author(s):  
Jack Dongarra ◽  
Laura Grigori ◽  
Nicholas J. Higham

A number of features of today’s high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; use of accelerators (GPUs, FPGAs, coprocessors), making architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half-precision. Moreover, as well as maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that we can take to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
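One well-known instance of such a multi-precision algorithm is mixed-precision iterative refinement: solve in low precision, then compute residuals in high precision and correct. The toy sketch below is our own example, not from the article; it applies the idea to a 2×2 system, with the low-precision "solve" done by Cramer's rule in float.

#include <cstdio>

int main() {
    const double A[2][2] = {{4.0, 1.0}, {1.0, 3.0}};
    const double b[2]    = {1.0, 2.0};

    // Low-precision solve: Cramer's rule carried out entirely in float.
    float detf = (float)A[0][0] * (float)A[1][1]
               - (float)A[0][1] * (float)A[1][0];
    double x[2] = {
        ((float)A[1][1] * (float)b[0] - (float)A[0][1] * (float)b[1]) / detf,
        ((float)A[0][0] * (float)b[1] - (float)A[1][0] * (float)b[0]) / detf
    };

    // Refinement loop: residual in double, correction solved in float.
    for (int it = 0; it < 3; ++it) {
        double r0 = b[0] - (A[0][0] * x[0] + A[0][1] * x[1]);
        double r1 = b[1] - (A[1][0] * x[0] + A[1][1] * x[1]);
        printf("iter %d: residual = (%.2e, %.2e)\n", it, r0, r1);
        x[0] += ((float)A[1][1] * (float)r0 - (float)A[0][1] * (float)r1) / detf;
        x[1] += ((float)A[0][0] * (float)r1 - (float)A[1][0] * (float)r0) / detf;
    }
    printf("x = (%.15f, %.15f)\n", x[0], x[1]); // converges to (1/11, 7/11)
    return 0;
}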


2020 ◽  
Vol 39 (6) ◽  
pp. 1-16
Author(s):  
Gianmarco Cherchi ◽  
Marco Livesu ◽  
Riccardo Scateni ◽  
Marco Attene

1964 ◽  
Vol 7 (1) ◽  
pp. 10-13 ◽  
Author(s):  
Robert T. Gregory ◽  
James L. Raney

2020 ◽  
Vol 26 (4) ◽  
pp. 273-284
Author(s):  
Hao Ji ◽  
Michael Mascagni ◽  
Yaohang Li

Abstract. In this article, we consider the general problem of checking the correctness of matrix multiplication. Given three n × n matrices A, B and C, the goal is to verify that A × B = C without carrying out the computationally costly operations of matrix multiplication and comparing the product A × B with C, term by term. This is especially important when some or all of these matrices are very large, and when the computing environment is prone to soft errors. Here we extend Freivalds’ algorithm to a Gaussian Variant of Freivalds’ Algorithm (GVFA) by projecting the product A × B as well as C onto a Gaussian random vector and then comparing the resulting vectors. The computational complexity of GVFA is consistent with that of Freivalds’ algorithm, which is O(n²). However, unlike Freivalds’ algorithm, whose probability of a false positive is 2^(−k), where k is the number of iterations, our theoretical analysis shows that, when A × B ≠ C, GVFA produces a false positive on a set of inputs of measure zero with exact arithmetic. When we introduce round-off error and floating-point arithmetic into our analysis, we can show that the larger this error, the higher the probability that GVFA avoids false positives. Moreover, by iterating GVFA k times, the probability of a false positive decreases as p^k, where p is a very small value depending on the nature of the fault in the result matrix and the arithmetic system’s floating-point precision. Unlike deterministic algorithms, there do not exist any fault patterns that are completely undetectable with GVFA. Thus GVFA can be used to provide efficient fault tolerance in numerical linear algebra, and it can be efficiently implemented on modern computing architectures. In particular, GVFA can be very efficiently implemented on architectures with hardware support for fused multiply-add operations.
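A minimal sketch of the verification step as described in the abstract: project A × B and C onto a Gaussian random vector and compare. Matrix layout, tolerance, and helper names below are our own choices.

#include <cstdio>
#include <cmath>
#include <random>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Returns true if A*B and C agree on a Gaussian random projection.
// Total cost is O(n^2): two matrix-vector products versus one.
bool gvfa_check(const Mat& A, const Mat& B, const Mat& C,
                std::mt19937& rng, double tol = 1e-9) {
    const size_t n = A.size();
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::vector<double> x(n), Bx(n, 0.0), ABx(n, 0.0), Cx(n, 0.0);
    for (auto& xi : x) xi = gauss(rng);

    for (size_t i = 0; i < n; ++i)          // y = B*x
        for (size_t j = 0; j < n; ++j)
            Bx[i] += B[i][j] * x[j];
    for (size_t i = 0; i < n; ++i)          // A*y and C*x
        for (size_t j = 0; j < n; ++j) {
            ABx[i] += A[i][j] * Bx[j];
            Cx[i]  += C[i][j] * x[j];
        }
    for (size_t i = 0; i < n; ++i)
        if (std::fabs(ABx[i] - Cx[i]) > tol) return false;
    return true;
}

int main() {
    std::mt19937 rng(42);
    Mat A = {{1, 2}, {3, 4}}, B = {{5, 6}, {7, 8}};
    Mat C    = {{19, 22}, {43, 50}};  // correct product
    Mat Cbad = {{19, 22}, {43, 51}};  // injected single-entry fault
    printf("C ok?    %d\n", gvfa_check(A, B, C, rng));    // 1
    printf("Cbad ok? %d\n", gvfa_check(A, B, Cbad, rng)); // 0 almost surely
    return 0;
}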

