ROUNDING ERRORS AND THEIR EFFECTS

2021 ◽  
Vol 11 (12) ◽  
pp. 5474
Author(s):  
Tuomo Poutanen

This article addresses the process of optimally selecting safety factors and characteristic values for the Eurocodes. Five amendments to the present codes are proposed: (1) The load factors are fixed, γG = γQ, by making the characteristic value of the variable load adjustable; this simplifies the codes and reduces the calculation work. (2) Currently, the characteristic load of the variable load is defined in the same way for all variable loads, which creates excess safety and material waste for variable loads with low variation. This deficiency can be avoided by the same amendment as above. (3) Different materials fit the reliability model with different accuracy. This article explains two options to reduce this difficulty. (4) A method to avoid rounding errors in the safety factors is explained. (5) The current safety factors are usually set by minimizing the deviation of the reliability indexes from the target, and the resulting codes include considerable safe and unsafe design cases, with a variability ratio (high reliability to low) of about 1.4. The proposed three code models match the target β50 = 3.2 with high accuracy, with no unsafe design cases and insignificant safe design cases, with variability ratios of 1.07, 1.03 and 1.04.
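
As context for amendment (1), the EN 1990 load combination and its simplified single-factor form can be written as follows (a sketch in standard Eurocode notation; the symbol Q_k* for the adjustable characteristic load is ours, not the article's):

```latex
% Present Eurocode combination: permanent load G_k with factor \gamma_G,
% variable load Q_k with factor \gamma_Q
E_d = \gamma_G \, G_k + \gamma_Q \, Q_k

% Amendment (1): a single factor \gamma = \gamma_G = \gamma_Q, with the load
% variability absorbed into an adjustable characteristic value Q_k^{*}
E_d = \gamma \, \bigl( G_k + Q_k^{*} \bigr)
```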


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 317
Author(s):  
Diogo Freitas ◽  
Luiz Guerreiro Lopes ◽  
Fernando Morgado-Dias

Finding arbitrary roots of polynomials is a fundamental problem in various areas of science and engineering. A myriad of methods has been suggested to address this problem, such as the sequential Newton’s method and the Durand–Kerner (D–K) simultaneous iterative method. The sequential iterative methods, on the one hand, need a deflation procedure to compute approximations to all the roots of a given polynomial, which can produce inaccurate results due to the accumulation of rounding errors. The simultaneous iterative methods, on the other hand, require good initial guesses to converge. Artificial Neural Networks (ANNs), meanwhile, are widely known for their capacity to find complex mappings between dependent and independent variables. In view of this, this paper aims to determine, based on comparative results, whether ANNs can be used to compute approximations to the real and complex roots of a given polynomial, as an alternative to simultaneous iterative algorithms like the D–K method. Although the results are very encouraging and demonstrate the viability and potential of the suggested approach, the ANNs were not able to surpass the accuracy of the D–K method. The results indicate, however, that using the approximations computed by the ANNs as initial guesses for the D–K method can benefit the accuracy of that method.
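
For readers unfamiliar with the D–K iteration, a minimal NumPy sketch is given below. It uses the classic (0.4 + 0.9i)^k initialization; in the paper’s setup, the ANN-computed approximations would take the place of these initial guesses. The function name is ours.

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Approximate all roots of a polynomial simultaneously.

    coeffs: coefficients, highest degree first (as for numpy.polyval).
    """
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                  # make the polynomial monic
    n = len(coeffs) - 1                          # polynomial degree

    # Classic initial guesses: distinct points spiraling in the complex plane.
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)

    for _ in range(max_iter):
        z_new = np.empty_like(z)
        for k in range(n):
            others = np.delete(z, k)
            # z_k <- z_k - p(z_k) / prod_{j != k} (z_k - z_j)
            z_new[k] = z[k] - np.polyval(coeffs, z[k]) / np.prod(z[k] - others)
        if np.max(np.abs(z_new - z)) < tol:      # all roots converged
            return z_new
        z = z_new
    return z

# Roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```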


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about the Earth's atmosphere. Many of the changes in today's models relate to seemingly innocuous modifications associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet verifying them is challenged by the chaotic nature of our atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time, and can also be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
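
As an illustration of the ensemble-plus-hypothesis-test idea (not the authors' exact implementation; the test statistic, ensemble size, and data here are placeholders), a two-sample Kolmogorov–Smirnov test can compare a reference ensemble against one produced after a system change:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical stand-in for two model ensembles: 20 members each,
# one output variable sampled at 50 time steps.
ref = rng.normal(size=(20, 50))   # reference code / system
new = rng.normal(size=(20, 50))   # e.g. after a compiler or hardware upgrade

alpha = 0.05
p_values = np.array([ks_2samp(ref[:, t], new[:, t]).pvalue
                     for t in range(ref.shape[1])])

# Under H0 ("the change only perturbs rounding"), about alpha of the
# per-time-step tests should reject by chance alone; a much larger
# rejection rate flags a genuine change in the model's behavior.
reject_rate = np.mean(p_values < alpha)
print(f"rejection rate: {reject_rate:.2f} (chance level ~ {alpha})")
```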


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-23
Author(s):  
Robert Rabe ◽  
Anastasiia Izycheva ◽  
Eva Darulova

Efficient numerical programs are required for the proper functioning of many systems. Today’s tools offer a variety of optimizations to generate efficient floating-point implementations that are specific to a program’s input domain. However, sound optimizations are “all or nothing” with respect to this input domain: if an optimizer cannot improve a program on the specified input domain, it concludes that no optimization is possible. In general, though, different parts of the input domain exhibit different rounding errors and thus have different optimization potential. We present the first regime inference technique for sound optimizations that automatically infers an effective subdivision of a program’s input domain such that individual sub-domains can be optimized more aggressively. Our algorithm is general; we have instantiated it with mixed-precision tuning and rewriting optimizations to improve performance and accuracy, respectively. Our evaluation on a standard benchmark set shows that with our inferred regimes, we can, on average, improve performance by 65% and accuracy by 54% with respect to whole-domain optimizations.
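
A caricature of the regime idea, with assumed names and a sampling-based error estimate standing in for the sound static analysis the paper actually uses: subdivide the input domain, then keep the cheaper precision only on sub-domains where it meets the error target.

```python
import numpy as np

def f(x):
    # Cancellation-prone kernel: float32 accuracy degrades as x grows.
    return np.sqrt(x + 1.0) - np.sqrt(x)

def sampled_rel_error(lo, hi, n=10_000):
    """Crude sampled estimate of the float32 relative error of f on [lo, hi].

    A sound optimizer (as in the paper) derives guaranteed bounds instead
    of sampling; this is for illustration only.
    """
    xs = np.linspace(lo, hi, n)
    exact = f(xs)                                        # float64 reference
    approx = f(xs.astype(np.float32)).astype(np.float64)
    return np.max(np.abs(approx - exact) / np.abs(exact))

# Regime inference, caricatured: subdivide the domain and pick the
# cheapest precision meeting the error target on each sub-domain.
edges = np.geomspace(1.0, 1e8, 9)                        # 8 sub-domains
for lo, hi in zip(edges[:-1], edges[1:]):
    prec = "float32" if sampled_rel_error(lo, hi) < 1e-5 else "float64"
    print(f"[{lo:.1e}, {hi:.1e}] -> {prec}")
```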


2020 ◽  
Author(s):  
Konstantin Isupov ◽  
Vladimir Knyazkov

The binary32 and binary64 floating-point formats provide good performance on current hardware, but also introduce a rounding error in almost every arithmetic operation. Consequently, the accumulation of rounding errors in large computations can cause accuracy issues. One way to prevent these issues is to use multiple-precision floating-point arithmetic. This preprint, submitted to Russian Supercomputing Days 2020, presents a new library of basic linear algebra operations with multiple precision for graphics processing units. The library is written in CUDA C/C++ and uses the residue number system to represent the multiple-precision significands of floating-point numbers. The supported data types, memory layout, and main features of the library are described, and experimental results showing its performance are presented.
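
To see why the residue number system suits GPUs, note that RNS addition and multiplication act independently on each residue channel, with no carry propagation between digits. A small pure-Python sketch (illustrative only; the library itself is CUDA C/C++, and the moduli below are our choice):

```python
from math import prod

MODULI = (2**31 - 1, 2**31 - 19, 2**31 - 61)    # pairwise coprime primes
M = prod(MODULI)                                # dynamic range of the RNS

def to_rns(x):
    """Split an integer significand into independent residue channels."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Carry-free: each channel multiplies independently -- this is what
    # maps so naturally onto parallel GPU threads.
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Reconstruct the integer via the Chinese Remainder Theorem."""
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)           # pow(., -1, m): modular inverse
    return x % M

a, b = 123_456_789, 987_654_321
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b
print("RNS product matches:", a * b)
```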


Author(s):  
Valerii Zadiraka ◽  
Inna Shvidchenko

Introduction. When solving problems of transcomputational complexity, evaluating the rounding error is essential, since it can dominate the overall accuracy of the solution. Ways to reduce it are important, as are the reserves for optimizing the algorithms for accuracy. In doing so, the rounding rules and calculation modes must be taken into account. The article shows how estimates of the rounding error can be used in modern computer technologies for solving problems of computational and applied mathematics, as well as information security.

The purpose of the article is to draw the attention of specialists in computational and applied mathematics to the need to take the rounding error into account when analyzing the quality of an approximate solution. This is important for mathematical modeling problems, problems using Big Data, digital signal and image processing, cybersecurity, and many others. The article demonstrates specific estimates of the rounding error for a number of problems: estimating the mathematical expectation, computing the discrete Fourier transform, using multi-digit arithmetic, and using rounding-error estimates in algorithms for computer steganography.

Results. Estimates of the rounding error of the algorithms for the above-mentioned classes of problems are given for different rounding rules and different calculation modes. For computer steganography, the use of rounding-error estimates in computer technologies for hidden information transfer is shown.

Conclusions. Taking the rounding error into account is an important factor in assessing the accuracy of approximate solutions to problems of above-average complexity.

Keywords: rounding error, computer technology, discrete Fourier transform, multi-digit arithmetic, computer steganography.
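
As a concrete instance of the kind of estimate discussed above, the sketch below measures the rounding error of a discrete Fourier transform computed in single versus double precision. A naive O(n^2) DFT is used so that the arithmetic genuinely runs in the chosen precision; the function name is ours, not from the article.

```python
import numpy as np

def dft(x, dtype):
    """Naive O(n^2) DFT evaluated entirely in the given complex precision."""
    x = np.asarray(x, dtype=dtype)
    n = len(x)
    k = np.arange(n)
    # DFT matrix W[j, k] = exp(-2*pi*i*j*k / n), cast to the working precision
    W = np.exp((-2j * np.pi / n) * np.outer(k, k)).astype(dtype)
    return W @ x

rng = np.random.default_rng(42)
x = rng.standard_normal(512)

X64 = dft(x, np.complex128)        # double-precision reference
X32 = dft(x, np.complex64)         # same algorithm, single precision

rel_err = np.linalg.norm(X32 - X64) / np.linalg.norm(X64)
print(f"relative rounding error in single precision: {rel_err:.1e}")
print(f"unit roundoff (float32): {np.finfo(np.float32).eps:.1e}")
```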


2016 ◽  
Vol 13 (1) ◽  
pp. 190-197
Author(s):  
Baghdad Science Journal

In this paper we present the theoretical foundation of forward error analysis of numerical algorithms under:
• approximations in "built-in" functions;
• rounding errors in floating-point arithmetic operations;
• perturbations of data.
The error analysis is based on the linearization method. The fundamental tools of the forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations, together with the associated condition numbers, which constitute optimal bounds on the possible cumulative round-off errors. The condition numbers enable simple, general, and quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination and have proved to be a very effective means of both a priori and a posteriori error analysis.
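
The role of the condition number in a posteriori analysis can be demonstrated in a few lines: solve a linear system by Gaussian elimination, form the computable residual, and compare the resulting bound with the true error. This uses the generic textbook bound ||x̂ − x|| / ||x̂|| ≤ κ(A) · ||r|| / (||A|| · ||x̂||), not the specific error equations of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)   # Gaussian elimination with partial pivoting

# A posteriori: the residual is computable without knowing x_true ...
r = b - A @ x_hat
kappa = np.linalg.cond(A)       # condition number ||A|| * ||A^-1||

# ... and the perturbation bound turns it into an error estimate.
bound = kappa * np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x_hat))
actual = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

print(f"condition number  : {kappa:.2e}")
print(f"a posteriori bound: {bound:.2e}")
print(f"actual rel. error : {actual:.2e}")
```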

