Numerical Precision

2019 ◽  
Vol 147 (2) ◽  
pp. 645-655 ◽  
Author(s):  
Matthew Chantry ◽  
Tobias Thornes ◽  
Tim Palmer ◽  
Peter Düben

Abstract Attempts to include the vast range of length scales and physical processes at play in Earth's atmosphere push weather and climate forecasters to build and more efficiently utilize some of the most powerful computers in the world. One possible avenue for increased efficiency is to use less precise numerical representations of numbers. If the computing resources saved can be reinvested in other ways (e.g., increased resolution or ensemble size), a reduction in precision can lead to an increase in forecast accuracy. Here we examine reduced numerical precision in the context of ECMWF's Open Integrated Forecast System (OpenIFS) model. We posit that less numerical precision is required when solving the dynamical equations for shorter length scales, while retaining the accuracy of the simulation. Transformations into spectral space, as found in spectral models such as OpenIFS, enact a length-scale decomposition of the prognostic fields. Utilizing this, we introduce a reduced-precision emulator into the spectral-space calculations and optimize the precision necessary to achieve forecasts comparable with double and single precision. On weather forecasting time scales, larger length scales require higher numerical precision than smaller length scales. On decadal time scales, half precision is still sufficient for everything except the global-mean quantities.
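The scale-selective use of precision can be illustrated with a minimal sketch. This is not the OpenIFS/emulator implementation; it simply rounds the significand of spectral coefficients to a chosen number of bits, keeping more bits at large scales (low wavenumbers) than at small scales. The cutoff and bit counts are illustrative assumptions, and complex spectral coefficients would be treated component-wise.

```python
# Minimal sketch of per-scale precision reduction (illustrative values only).
import numpy as np

def round_significand(x, bits):
    """Round x so that only `bits` bits of the significand are retained."""
    mantissa, exponent = np.frexp(np.asarray(x, dtype=np.float64))
    mantissa = np.round(mantissa * 2.0**bits) / 2.0**bits
    return np.ldexp(mantissa, exponent)

def reduce_spectral_precision(coeffs, wavenumbers, cutoff=20,
                              large_scale_bits=23, small_scale_bits=10):
    """Keep more significand bits at large scales (low wavenumbers) than at small scales."""
    bits = np.where(wavenumbers <= cutoff, large_scale_bits, small_scale_bits)
    reduced = np.empty_like(coeffs, dtype=np.float64)
    for b in np.unique(bits):
        mask = bits == b
        reduced[mask] = round_significand(coeffs[mask], b)
    return reduced

# Example: coefficients at wavenumbers 0..39; the small scales are rounded to ~10 bits.
wavenumbers = np.arange(40)
coeffs = np.random.default_rng(0).standard_normal(40)
print(reduce_spectral_precision(coeffs, wavenumbers))
```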


2016 ◽  
Author(s):  
Andrew Dawson ◽  
Peter Düben

Abstract. This paper describes the rpe library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialised hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
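A conceptual analogue shows how such an emulator can be dropped into existing code with few changes. rpe itself is a Fortran library built around a derived type with overloaded operators, so the Python class below is not its API, only a sketch of the idea: every assignment and arithmetic result is rounded back to a reduced number of significand bits. The default of 10 bits is an assumption for illustration.

```python
import math

SIGNIFICAND_BITS = 10   # illustrative default reduced precision

def _reduce(value, bits):
    """Round `value` so only `bits` bits of the significand survive."""
    if value == 0.0 or not math.isfinite(value):
        return value
    m, e = math.frexp(value)                    # value = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**bits) / 2**bits, e)

class ReducedFloat:
    """Float wrapper that rounds the result of every operation to reduced precision."""
    def __init__(self, value, bits=SIGNIFICAND_BITS):
        self.bits = bits
        self.value = _reduce(float(value), bits)

    def __add__(self, other):
        return ReducedFloat(self.value + float(other), self.bits)
    __radd__ = __add__

    def __mul__(self, other):
        return ReducedFloat(self.value * float(other), self.bits)
    __rmul__ = __mul__

    def __float__(self):
        return self.value

    def __repr__(self):
        return f"ReducedFloat({self.value!r}, bits={self.bits})"

# Accumulating 0.001 a thousand times shows the rounding error incurred at 10 significand bits.
acc = ReducedFloat(0.0)
for _ in range(1000):
    acc = acc + 0.001
print(acc, "vs. double precision:", 1000 * 0.001)
```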


Author(s):  
Aseem Dalal ◽  
Narendra Govil

Let $p(z) = \sum_{\nu=0}^{n} a_\nu z^\nu$ be a polynomial of degree $n$, let $M(p,R) := \max_{|z|=R} |p(z)|$ for $R \ge 0$, and let $M(p,1) := \|p\|$. Then, according to a well-known result of Ankeny and Rivlin, for $R \ge 1$ we have $M(p,R) \le \frac{R^n+1}{2}\|p\|$. This inequality has been sharpened, among others, by Govil, who proved that for $R \ge 1$, $M(p,R) \le \frac{R^n+1}{2}\|p\| - \frac{n}{2}\,\frac{\|p\|^2 - 4|a_n|^2}{\|p\|}\left\{\frac{(R-1)\|p\|}{\|p\|+2|a_n|} - \ln\!\left(1 + \frac{(R-1)\|p\|}{\|p\|+2|a_n|}\right)\right\}$. In this paper, we sharpen the above inequality of Govil, which in turn sharpens the inequality of Ankeny and Rivlin. We present our result in terms of the Lerch Phi function $\Phi(z,s,a)$, implemented in Wolfram's MATHEMATICA as LerchPhi[z,s,a], which can be evaluated to arbitrary numerical precision and is suitable for both symbolic and numerical manipulations. We also present an example and, using MATLAB, show that for some polynomials the improvement in the bound can be quite significant.
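As a quick numerical illustration of the two bounds quoted above (this is not the paper's MATLAB example; the test polynomial $p(z) = z^3 + 2z + 1$ and the sampling density are arbitrary choices), one can compare the Ankeny–Rivlin and Govil bounds with the actual maximum modulus:

```python
# Compare M(p,R) with the Ankeny-Rivlin and Govil bounds for one example polynomial.
import numpy as np

coeffs = [1.0, 0.0, 2.0, 1.0]          # p(z) = z^3 + 2z + 1, highest degree first
n = len(coeffs) - 1
a_n = coeffs[0]

def max_modulus(R, samples=4000):
    """Numerically estimate M(p, R) = max_{|z|=R} |p(z)|."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = R * np.exp(1j * theta)
    return np.max(np.abs(np.polyval(coeffs, z)))

norm_p = max_modulus(1.0)               # ||p|| = M(p, 1)

def ankeny_rivlin(R):
    return (R**n + 1.0) / 2.0 * norm_p

def govil(R):
    t = (R - 1.0) * norm_p / (norm_p + 2.0 * abs(a_n))
    correction = (n / 2.0) * (norm_p**2 - 4.0 * abs(a_n)**2) / norm_p * (t - np.log1p(t))
    return ankeny_rivlin(R) - correction

for R in (1.5, 2.0, 3.0):
    print(f"R={R}: M(p,R)~{max_modulus(R):.3f}  "
          f"Ankeny-Rivlin={ankeny_rivlin(R):.3f}  Govil={govil(R):.3f}")
```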


Author(s):  
Michael Mutingi ◽  
Charles Mbohwa

Manpower recruitment and training in uncertain and turbulent environments pose a challenge for decision makers in large organizations. In the absence of numerical precision on market growth and the ensuing manpower demand, designing sound manpower planning policies is vital. Oftentimes, companies incur losses due to overstaffing and/or understaffing; for instance, organizations lose business when critical human resources leave. As a result, it is essential to develop robust, effective dynamic recruitment and training policies, especially in a fuzzy and dynamic environment. In this chapter, a fuzzy system dynamics modeling approach is developed to simulate and evaluate alternative dynamic policies relating skills recruitment, skills training, and available skills from a systems thinking perspective. Fuzzy system dynamics is implemented based on fuzzy logic and system dynamics concepts in order to arrive at robust strategies for manpower decision makers. It is anticipated that fuzzy system dynamics can help organizations design effective manpower recruitment strategies in a dynamic and uncertain environment.
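A toy sketch can convey the kind of fuzzy stock-flow loop described here; the membership functions, rule outputs, and parameter values below are assumptions made for illustration, not the chapter's model.

```python
# Toy fuzzy system dynamics loop: recruitment responds to a fuzzy view of the staffing gap.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_recruitment_rate(gap):
    """Blend 'small', 'medium' and 'large' gap responses by their membership degrees."""
    mu_small = triangular(gap, -20, 0, 20)
    mu_medium = triangular(gap, 10, 30, 50)
    mu_large = triangular(gap, 40, 80, 120)
    total = mu_small + mu_medium + mu_large
    if total == 0.0:
        return 0.0
    # Weighted average of crisp hiring rates (people per period) for each rule.
    return (mu_small * 1 + mu_medium * 5 + mu_large * 15) / total

available = 100.0          # current skilled staff (stock)
desired = 160.0            # target staffing level
attrition_rate = 0.03      # fraction of staff leaving per period
for period in range(12):
    gap = desired - available
    hires = fuzzy_recruitment_rate(gap)
    available += hires - attrition_rate * available   # stock-flow update
    print(f"period {period}: staff={available:.1f}, gap={gap:.1f}, hires={hires:.1f}")
```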


2019 ◽  
Author(s):  
Oriol Tintó Prims ◽  
Mario C. Acosta ◽  
Andrew M. Moore ◽  
Miguel Castrillo ◽  
Kim Serradell ◽  
...  

Abstract. Mixed-precision approaches can provide substantial speed-ups for both computing- and memory-bound codes with relatively little effort. Most scientific codes over-engineer their numerical precision, so models use more resources than required without any indication of where those resources are unnecessary and where they are really needed. Consequently, there is an opportunity to obtain performance benefits from a more appropriate choice of precision, and all that is needed is a method to determine which real variables can be represented with fewer bits without affecting the accuracy of the results. This paper presents a novel method that enables modern and legacy codes to benefit from a reduction of precision without sacrificing accuracy. It rests on a simple idea: if we can measure how reducing the precision of a group of variables affects the outputs, we can evaluate the level of precision that group of variables needs. Modifying and recompiling the code for each case to be evaluated would require a prohibitive amount of effort. Instead, the method presented in this paper relies on a tool called the Reduced Precision Emulator (RPE) that can significantly streamline the process. Using the RPE and a list of parameters containing the precision to be used for each real variable in the code, it is possible, within a single binary, to emulate the effect of a specific choice of precision on the outputs. Once we can emulate the effects of reduced precision, we can design the tests required to learn about all the variables in the model. The number of possible combinations is prohibitively large to explore exhaustively. Screening the variables individually can give some insight into the precision each variable needs, but more complex interactions involving several variables may remain hidden. Instead, we use a divide-and-conquer algorithm that identifies the parts of the code that cannot handle reduced precision and builds the set of variables that can. The method has been tested with two state-of-the-art ocean models, NEMO and ROMS, with very promising results. This information is crucial for subsequently building an actual mixed-precision version of the code that delivers the promised performance benefits.
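The divide-and-conquer search can be summarised schematically as follows. `run_model` and `TOLERANCE` are placeholders assumed for this sketch rather than part of the paper's tooling, and a production version would re-validate the combined sets it returns.

```python
TOLERANCE = 1e-3   # assumed acceptance threshold on the chosen error metric

def run_model(reduced_vars):
    """Placeholder: run the model with the variables in `reduced_vars` emulated at
    reduced precision (e.g. via RPE) and return an error norm against a reference run."""
    raise NotImplementedError

def find_reducible(variables):
    """Return the subset of `variables` that tolerates reduced precision."""
    if not variables:
        return set()
    if run_model(set(variables)) <= TOLERANCE:
        return set(variables)            # the whole group can run at reduced precision
    if len(variables) == 1:
        return set()                     # a single offending variable: keep it at high precision
    mid = len(variables) // 2
    reducible = find_reducible(variables[:mid]) | find_reducible(variables[mid:])
    # A full implementation would re-run with the combined set, since interactions
    # between the two halves can still push the error above the tolerance.
    return reducible
```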


2012 ◽  
Vol 21 (05) ◽  
pp. 1250037
Author(s):  
HERVÉ MOLIQUE ◽  
JERZY DUDEK

In this paper we collect a number of technical issues that arise when constructing the matrix representation of the most general nuclear mean-field Hamiltonian, within which "all terms allowed by general symmetries are considered not only in principle but also in practice". Such a general posing of the problem is necessary when investigating the predictive power of mean-field theories by means of a well-posed inverse problem [J. Dudek et al., Int. J. Mod. Phys. E21 (2012) 1250053]. To our knowledge, ill-posed mean-field inverse problems quite often arise in practical realizations, which makes reliable extrapolations into unknown areas of nuclei impossible. The conceptual and technical issues related to the inverse problem have been discussed in the above-mentioned reference, whereas here we focus on how to calculate the matrix elements fast and with high numerical precision when solving the inverse problem. [For reasons of space we illustrate the principal techniques using the example of central interactions.]
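Purely as an illustration of evaluating such matrix elements to arbitrary numerical precision (the radial trial functions, Gaussian central potential, and parameter values below are assumptions of this sketch, not the authors' scheme), one can push a radial matrix element of a central interaction to as many digits as the inverse problem requires:

```python
import mpmath as mp

mp.mp.dps = 40                          # work with ~40 significant digits

V0, a = mp.mpf(50), mp.mpf('1.8')       # depth and range of a toy central potential
nu = mp.mpf('0.35')                     # oscillator-like width parameter

def V(r):                               # central Gaussian potential
    return -V0 * mp.exp(-(r / a) ** 2)

def R0(r):                              # node-less radial trial function
    return mp.exp(-nu * r**2)

def R1(r):                              # one-node radial trial function
    return r * mp.exp(-nu * r**2)

def radial_me(Ri, Rj):
    """<Ri | V | Rj> with radial measure r^2 dr, normalised numerically."""
    norm_i = mp.sqrt(mp.quad(lambda r: Ri(r)**2 * r**2, [0, mp.inf]))
    norm_j = mp.sqrt(mp.quad(lambda r: Rj(r)**2 * r**2, [0, mp.inf]))
    return mp.quad(lambda r: Ri(r) * V(r) * Rj(r) * r**2, [0, mp.inf]) / (norm_i * norm_j)

print(radial_me(R0, R1))                # digits beyond double precision available on demand
```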


2016 ◽  
Vol 112 ◽  
pp. 02008 ◽  
Author(s):  
Marek Matas ◽  
Jan Cepila ◽  
Jesus Guillermo Contreras Nuno
