Accumulation of roundoff errors in floating point FFT

1977 ◽  
Vol 24 (3) ◽  
pp. 132-143 ◽  
Author(s):  
Tran Thong ◽  
B. Liu
1973 ◽  
Vol 20 (3) ◽  
pp. 391-398 ◽  
Author(s):  
Toyohisa Kaneko ◽  
Bede Liu

Author(s):  
George Constantinides ◽  
Fredrik Dahlqvist ◽  
Zvonimir Rakamarić ◽  
Rocco Salvia

Abstract: We present a detailed study of roundoff errors in probabilistic floating-point computations. We derive closed-form expressions for the distribution of roundoff errors associated with a random variable, and we prove that roundoff errors are generally close to being uncorrelated with their generating distribution. Based on these theoretical advances, we propose a model of IEEE floating-point arithmetic for numerical expressions with probabilistic inputs and an algorithm for evaluating this model. Our algorithm provides rigorous bounds on the output and error distributions of arithmetic expressions over random variables, evaluated in the presence of roundoff errors. It keeps track of complex dependencies between random variables using an SMT solver, and it provides sound yet tight probabilistic bounds on roundoff errors using symbolic affine arithmetic. We implemented the algorithm in the PAF tool and evaluated it on FPBench, a standard benchmark suite for the analysis of roundoff errors. Our evaluation shows that PAF computes tighter bounds than the current state of the art on almost all benchmarks.
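The abstract builds on the standard rounding model fl(x) = x(1 + e) with |e| bounded by the unit roundoff. The snippet below is a minimal illustrative sketch (not PAF's algorithm): it samples a random input, rounds it to a lower precision, and empirically checks that the relative error stays within the unit roundoff and is nearly uncorrelated with the input. The choice of distribution and precisions is an assumption made for the example.

```python
# Illustrative sketch only: empirical view of the rounding model
# fl(x) = x * (1 + e), |e| <= u, for a random input x.
# We treat binary64 samples as "exact" and round them to binary32.
import numpy as np

rng = np.random.default_rng(0)
u = 2.0 ** -24                                  # unit roundoff of binary32

x = rng.uniform(1.0, 2.0, size=100_000)         # "exact" values (binary64)
x_rounded = x.astype(np.float32).astype(np.float64)  # round to binary32
rel_err = (x_rounded - x) / x                   # relative roundoff error e

print("max |e|      :", np.max(np.abs(rel_err)))        # stays below u
print("unit roundoff:", u)
print("corr(x, e)   :", np.corrcoef(x, rel_err)[0, 1])  # close to zero
```

The near-zero correlation printed at the end mirrors the paper's observation that roundoff errors are close to being uncorrelated with their generating distribution; PAF itself derives such facts analytically rather than by sampling.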


Author(s):  
Debasmita Lohar ◽  
Clothilde Jeangoudoux ◽  
Joshua Sobel ◽  
Eva Darulova ◽  
Maria Christakis

Abstract: Tools that automatically prove the absence, or detect the presence, of large floating-point roundoff errors or the special values NaN and Infinity greatly help developers reason about the unintuitive nature of floating-point arithmetic. We show, however, that state-of-the-art tools support or provide non-trivial results only for relatively short programs. We propose a framework for combining different static and dynamic analyses that increases their reach beyond what they can achieve individually. Furthermore, we show how adaptations of existing dynamic and static techniques effectively trade some soundness guarantees for increased scalability, providing conditional verification of floating-point kernels in realistic programs.
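As a rough illustration of the dynamic side of such a combination (this is a hedged sketch, not the paper's framework), one can run a kernel in the analyzed precision alongside a higher-precision shadow execution, flagging special values and large deviations. The kernel and threshold below are hypothetical placeholders.

```python
# Sketch of a tiny dynamic analysis: single-precision execution with a
# double-precision shadow, reporting NaN/Infinity and large roundoff errors.
import math
import numpy as np

def kernel(x, dtype):
    """Example kernel, evaluated entirely in the given precision."""
    x = dtype(x)
    return dtype(dtype(x * x) - dtype(dtype(2.0) * x)) + dtype(1.0)

def check(inputs, err_threshold=1e-6):
    for x in inputs:
        lo = kernel(x, np.float32)      # analyzed (lower) precision
        hi = kernel(x, np.float64)      # higher-precision shadow run
        if math.isnan(lo) or math.isinf(lo):
            print(f"x={x}: special value {float(lo)!r}")
        elif abs(float(lo) - float(hi)) > err_threshold:
            print(f"x={x}: large roundoff error {abs(float(lo) - float(hi)):.3e}")

# Sampled inputs; in a combined framework a static pass could narrow this set.
check([0.5, 1.0 + 1e-7, 1e30])
```

A static analysis would instead bound the error for all inputs in a range; the trade-off described in the abstract is about when each side of that combination can be relaxed to scale to realistic programs.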


1996 ◽  
Vol 44 (4) ◽  
pp. 783-790 ◽  
Author(s):  
K. Kalliojarvi ◽  
J. Astola

Author(s):  
Anastasiia Izycheva ◽  
Eva Darulova ◽  
Helmut Seidl

Abstract: We present an automated procedure for synthesizing sound inductive invariants for floating-point numerical loops. Our procedure generates invariants in the form of a convex polynomial inequality that tightly bounds the values of loop variables. Such invariants are a prerequisite for reasoning about the safety and roundoff errors of floating-point programs. Unlike previous approaches that rely on policy iteration, linear algebra, or semi-definite programming, we propose a heuristic procedure based on simulation and counterexample-guided refinement. We observe that this combination is remarkably effective and general: it can handle linear and nonlinear loop bodies, nondeterministic values, and conditional statements. Our evaluation shows that our approach efficiently synthesizes loop invariants for existing benchmarks from the literature, and that it also finds invariants for nonlinear loops that today's tools cannot handle.
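To make the simulate-then-refine idea concrete, here is a toy sketch under simplifying assumptions: a hypothetical contracting loop body, a box-shaped candidate bound instead of the paper's convex polynomial inequalities, and random testing in place of a sound verification step.

```python
# Toy illustration of simulation + counterexample-guided refinement
# for a loop invariant of the simple form |x| <= c.
import random

def loop_body(x, y):
    """Example loop body: contracting linear update with an input y in [-1, 1]."""
    return 0.9 * x + 0.05 * y

def synthesize_bound(n_sims=200, n_steps=100, refine_rounds=20):
    # 1. Simulation: estimate a candidate bound c from random executions.
    c = 0.0
    for _ in range(n_sims):
        x = random.uniform(-1.0, 1.0)
        for _ in range(n_steps):
            x = loop_body(x, random.uniform(-1.0, 1.0))
            c = max(c, abs(x))

    # 2. Refinement: look for counterexamples to inductiveness
    #    (|x| <= c must imply |loop_body(x, y)| <= c); enlarge c if one is found.
    for _ in range(refine_rounds):
        counterexample = None
        for _ in range(10_000):
            x = random.uniform(-c, c)
            y = random.uniform(-1.0, 1.0)
            if abs(loop_body(x, y)) > c:
                counterexample = abs(loop_body(x, y))
                break
        if counterexample is None:
            return c                    # candidate survived testing
        c = counterexample * 1.01       # enlarge the bound and retry
    return c

print("candidate invariant: |x| <=", synthesize_bound())
```

The actual procedure replaces the random counterexample search with a sound check and accounts for floating-point roundoff in the loop body; the sketch only conveys the overall refinement loop.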

