The convergence behaviour of preconditioned CG and CG-S in the presence of rounding errors

Author(s):  
H. A. Van der Vorst


2021 ◽ 
Vol 11 (12) ◽  
pp. 5474
Author(s):  
Tuomo Poutanen

This article addresses how to optimally select safety factors and characteristic values for the Eurocodes. Five amendments to the present codes are proposed: (1) The load factors are fixed at γG = γQ by making the characteristic load of the variable load adjustable; this simplifies the codes and lessens the calculation work. (2) Currently, the characteristic load of the variable load is the same for all variable loads, which creates excess safety and material waste for variable loads with low variation; this deficiency is avoided by the same amendment as above. (3) Different materials fit the reliability model with different accuracy; the article explains two options to reduce this difficulty. (4) A method to avoid rounding errors in the safety factors is explained. (5) The current safety factors are usually set by minimizing the deviation of the reliability indexes from the target, yet the resulting codes include considerable numbers of over-safe and unsafe design cases, with a variability ratio (highest reliability to lowest) of about 1.4. The three proposed code models match the target β50 = 3.2 with high accuracy, with no unsafe design cases, insignificant over-safe design cases, and variability ratios of 1.07, 1.03 and 1.04.
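The calibration described above revolves around the reliability index β. As a loose illustration of how such an index can be estimated, the following minimal Python sketch evaluates a simple limit state g = R - (G + Q) by Monte Carlo simulation; the distributions, their parameters, and the use of crude Monte Carlo are assumptions for demonstration only and do not reproduce the article's Eurocode calibration.

```python
# Illustrative Monte Carlo estimate of a reliability index beta for the
# limit state g = R - (G + Q). All distributions/parameters are assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2_000_000

R = rng.lognormal(mean=np.log(2.5), sigma=0.10, size=n)  # resistance
G = rng.normal(loc=1.0, scale=0.05, size=n)              # permanent load
Q = rng.gumbel(loc=0.6, scale=0.10, size=n)              # variable load (50-year maximum)

g = R - (G + Q)            # limit-state function: failure when g < 0
pf = np.mean(g < 0)        # estimated probability of failure
beta = -norm.ppf(pf)       # reliability index, beta = -Phi^{-1}(pf)
print(f"P_f ~ {pf:.2e}, beta ~ {beta:.2f}")
```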


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 317
Author(s):  
Diogo Freitas ◽  
Luiz Guerreiro Lopes ◽  
Fernando Morgado-Dias

Finding arbitrary roots of polynomials is a fundamental problem in various areas of science and engineering. A myriad of methods has been suggested to address this problem, such as the sequential Newton’s method and the Durand–Kerner (D–K) simultaneous iterative method. The sequential iterative methods, on the one hand, need a deflation procedure in order to compute approximations to all the roots of a given polynomial, which can produce inaccurate results due to the accumulation of rounding errors. On the other hand, the simultaneous iterative methods require good initial guesses to converge. Artificial Neural Networks (ANNs), in turn, are widely known for their capacity to find complex mappings between dependent and independent variables. In view of this, this paper aims to determine, based on comparative results, whether ANNs can be used to compute approximations to the real and complex roots of a given polynomial, as an alternative to simultaneous iterative algorithms like the D–K method. Although the results are very encouraging and demonstrate the viability and potential of the suggested approach, the ANNs were not able to surpass the accuracy of the D–K method. The results indicate, however, that using the approximations computed by the ANNs as initial guesses for the D–K method can improve its accuracy.
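Since the Durand–Kerner method is central to the comparison above, here is a minimal, self-contained sketch of the classical iteration for a monic polynomial. It is a generic textbook implementation for illustration, not the code used in the paper.

```python
# Durand-Kerner (Weierstrass) simultaneous iteration for all roots of a
# monic polynomial, coefficients given highest degree first (coeffs[0] == 1).
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    coeffs = np.asarray(coeffs, dtype=complex)
    n = len(coeffs) - 1
    # Standard choice of distinct initial guesses on a spiral in the complex plane.
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(max_iter):
        z_new = z.copy()
        for i in range(n):
            # Weierstrass correction: p(z_i) / prod_{j != i} (z_i - z_j)
            denom = np.prod(z[i] - np.delete(z, i))
            z_new[i] = z[i] - np.polyval(coeffs, z[i]) / denom
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z

# Example: roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```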


Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1498
Author(s):  
Karel J. in’t Hout ◽  
Jacob Snoeijer

We study the principal component analysis based approach introduced by Reisinger and Wittum (2007) and the comonotonic approach considered by Hanbali and Linders (2019) for the approximation of American basket option values via multidimensional partial differential complementarity problems (PDCPs). Both approximation approaches require the solution of just a limited number of low-dimensional PDCPs. It is demonstrated by ample numerical experiments that they define approximations that lie close to each other. Next, an efficient discretisation of the pertinent PDCPs is presented that leads to a favourable convergence behaviour.
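As a rough illustration of the principal component decomposition that the PCA-based approach builds on, the sketch below diagonalises an assumed basket covariance matrix. The volatilities and correlation are placeholders, and the actual PDCP discretisation and solver from the paper are not shown.

```python
# Principal component decomposition of an assumed 5-asset basket covariance
# matrix. Roughly speaking, the PCA-based approach then solves one 1-D problem
# along the dominant component plus a low-dimensional (2-D) correction problem
# for each remaining component, instead of one d-dimensional PDCP.
import numpy as np

vols = np.array([0.2, 0.25, 0.3, 0.22, 0.28])       # assumed annualised volatilities
corr = np.full((5, 5), 0.5) + 0.5 * np.eye(5)        # assumed constant correlation 0.5
cov = np.outer(vols, vols) * corr

eigvals, eigvecs = np.linalg.eigh(cov)               # spectral decomposition
order = np.argsort(eigvals)[::-1]                    # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("explained variance per component:", np.round(explained, 3))
```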


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly innocuous modifications associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet verifying them is challenged by the chaotic nature of our atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impact of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess effects of model changes on almost any output variable over time and can also be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
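A minimal sketch of the general idea, assuming synthetic ensemble output and an arbitrary choice of the two-sample Mann-Whitney U test: two ensembles of one output variable are compared with a hypothesis test at every output time. This is not the COSMO verification tool itself.

```python
# Compare a reference ensemble with a "modified model" ensemble using a
# two-sample test per output time. Synthetic data for illustration only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
n_members, n_times = 20, 50

# Here both ensembles have identical statistics, so rejections should occur
# only at roughly the false-positive rate alpha.
ref = rng.normal(loc=280.0, scale=1.0, size=(n_members, n_times))
mod = rng.normal(loc=280.0, scale=1.0, size=(n_members, n_times))

alpha = 0.05
p_values = np.array([
    mannwhitneyu(ref[:, t], mod[:, t], alternative="two-sided").pvalue
    for t in range(n_times)
])
reject_fraction = np.mean(p_values < alpha)
print(f"fraction of times with significant difference: {reject_fraction:.2f}")
```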


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Milad Mirbabaie ◽  
Stefan Stieglitz ◽  
Felix Brünker

Purpose: The purpose of this study is to investigate communication on Twitter during two unpredicted crises (the Manchester bombings and the Munich shooting) and one natural disaster (Hurricane Harvey). The study contributes to understanding the dynamics of convergence behaviour archetypes during crises.

Design/methodology/approach: The authors collected Twitter data and analysed approximately 7.5 million relevant cases. The communication was examined using social network analysis techniques and manual content analysis to identify convergence behaviour archetypes (CBAs). The dynamics and development of CBAs over time in crisis communication were also investigated.

Findings: The results revealed the dynamics of influential CBAs emerging in specific stages of a crisis situation. The authors derived a conceptual visualisation of convergence behaviour in social media crisis communication and introduced the terms hidden and visible network-layer to further the understanding of the complexity of crisis communication.

Research limitations/implications: The results emphasise the importance of well-prepared emergency management agencies and support three recommendations: (1) communicate continuously and (2) transparently during the crisis event, and (3) inform the public about central information distributors from the start of the crisis.

Originality/value: The study uncovered the dynamics of crisis-affected behaviour on social media during three cases. It provides a novel perspective that broadens our understanding of complex crisis communication on social media and contributes to existing knowledge of the complexity of crisis communication as well as convergence behaviour.
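For illustration only, the sketch below shows one elementary social network analysis step that is commonly applied to such data: building a directed retweet/mention graph and ranking accounts by in-degree centrality to spot central information distributors. The edge list is invented, and this is not the authors' analysis pipeline.

```python
# Build a directed graph from (source, target) interaction pairs and rank
# accounts by in-degree centrality. Toy data; illustration only.
import networkx as nx

edges = [
    ("user_a", "emergency_mgmt"), ("user_b", "emergency_mgmt"),
    ("user_c", "news_outlet"), ("user_d", "emergency_mgmt"),
    ("user_e", "news_outlet"), ("user_a", "news_outlet"),
]

g = nx.DiGraph()
g.add_edges_from(edges)

# Accounts that are retweeted/mentioned most often act as central
# information distributors in the communication network.
centrality = nx.in_degree_centrality(g)
for account, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(account, round(score, 2))
```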


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-23
Author(s):  
Robert Rabe ◽  
Anastasiia Izycheva ◽  
Eva Darulova

Efficient numerical programs are required for the proper functioning of many systems. Today’s tools offer a variety of optimizations to generate efficient floating-point implementations that are specific to a program’s input domain. However, sound optimizations follow an “all or nothing” approach with respect to this input domain: if an optimizer cannot improve a program on the entire specified input domain, it concludes that no optimization is possible. In general, though, different parts of the input domain exhibit different rounding errors and thus have different optimization potential. We present the first regime inference technique for sound optimizations that automatically infers an effective subdivision of a program’s input domain such that individual sub-domains can be optimized more aggressively. Our algorithm is general; we have instantiated it with mixed-precision tuning and rewriting optimizations to improve performance and accuracy, respectively. Our evaluation on a standard benchmark set shows that with our inferred regimes, we can, on average, improve performance by 65% and accuracy by 54% with respect to whole-domain optimizations.
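As a toy illustration of why sub-domains help, the sketch below splits the input domain of a small function and keeps single precision only where the sampled error against a double-precision reference stays below a bound. The function, the sampling-based error estimate, and the bound are all assumptions; this is not the paper's sound regime inference algorithm or its mixed-precision tuner.

```python
# Pick a precision per sub-domain based on sampled relative error of a
# float32 evaluation against a float64 reference. Illustration only.
import numpy as np

def f(x):
    return (1.0 - np.cos(x)) / (x * x)   # cancellation makes this inaccurate near x = 0

def max_rel_error_float32(lo, hi, samples=10_000):
    x64 = np.linspace(lo, hi, samples, dtype=np.float64)
    ref = f(x64)                                       # double-precision reference
    approx = f(x64.astype(np.float32)).astype(np.float64)
    return np.max(np.abs(ref - approx) / np.abs(ref))

error_bound = 1e-5
sub_domains = [(1e-4, 1e-2), (1e-2, 0.5), (0.5, 4.0)]
for lo, hi in sub_domains:
    err = max_rel_error_float32(lo, hi)
    regime = "float32" if err <= error_bound else "float64"
    print(f"[{lo:g}, {hi:g}]: max relative error {err:.2e} -> use {regime}")
```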


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs to have good global performance, while a short computation time facilitates the use of the algorithm in near-real-time applications. To test the global performance of the algorithm we use the convergence behaviour as a diagnostic tool for the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence of the first guess and of external input data, including the ozone cross-sections and ozone climatologies, on retrieval performance is also investigated. By using a priori ozone profiles selected according to the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. With these algorithm adaptations the convergence statistics improve considerably, not only increasing the number of successful retrievals, but also reducing the average computation time thanks to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computation time) dropped by 26%, from 5.11 to 3.79.
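A hedged sketch of one adaptation mentioned above: selecting the a priori ozone profile from a climatology according to the expected total ozone column. The climatology values and layering below are invented placeholders, not the OPERA climatology.

```python
# Choose an a priori profile from a total-ozone-keyed climatology instead of
# using a single fixed first guess. Placeholder values; illustration only.
import numpy as np

# Hypothetical climatology: total-column bins (Dobson units) -> partial columns
# in DU for a few coarse layers, surface to top of atmosphere.
climatology = {
    150: np.array([ 8.0, 20.0,  60.0,  50.0, 12.0]),   # ozone-hole-like
    300: np.array([12.0, 35.0, 150.0,  85.0, 18.0]),   # mid-latitude
    450: np.array([15.0, 55.0, 230.0, 125.0, 25.0]),   # high-column
}

def select_a_priori(expected_total_column_du):
    """Return the climatological profile whose bin is closest to the
    expected total ozone column."""
    bins = np.array(sorted(climatology))
    nearest = bins[np.argmin(np.abs(bins - expected_total_column_du))]
    return climatology[nearest]

profile = select_a_priori(expected_total_column_du=220.0)
print("a priori partial columns [DU]:", profile, "| total:", profile.sum())
```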


2020 ◽  
Author(s):  
Konstantin Isupov ◽  
Vladimir Knyazkov

The binary32 and binary64 floating-point formats provide good performance on current hardware, but also introduce a rounding error in almost every arithmetic operation. Consequently, the accumulation of rounding errors in large computations can cause accuracy issues. One way to prevent these issues is to use multiple-precision floating-point arithmetic. This preprint, submitted to Russian Supercomputing Days 2020, presents a new library of basic linear algebra operations with multiple precision for graphics processing units. The library is written in CUDA C/C++ and uses the residue number system to represent multiple-precision significands of floating-point numbers. The supported data types, memory layout, and main features of the library are considered. Experimental results are presented showing the performance of the library.
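As a generic illustration of the residue number system idea mentioned above, the plain-Python sketch below stores an integer as its residues modulo a set of pairwise coprime moduli, multiplies component-wise, and reconstructs the result with the Chinese remainder theorem. It reflects neither the CUDA library's data layout nor its API.

```python
# Residue number system (RNS) basics: carry-free, digit-parallel arithmetic on
# residues modulo pairwise coprime moduli, with CRT-based reconstruction.
from math import prod

moduli = (2**13 - 1, 2**17 - 1, 2**19 - 1)   # pairwise coprime (Mersenne primes)
M = prod(moduli)                              # dynamic range of the RNS

def to_rns(x):
    return tuple(x % m for m in moduli)

def from_rns(residues):
    # Chinese remainder theorem reconstruction.
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)          # pow(.., -1, m): modular inverse
    return x % M

a, b = 123_456_789, 987_654
ra, rb = to_rns(a), to_rns(b)
# Each residue channel is independent, so the channels could be processed in parallel.
product = tuple((x * y) % m for x, y, m in zip(ra, rb, moduli))
assert from_rns(product) == (a * b) % M
print(from_rns(product))
```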

