Single-Precision in the Tangent-Linear and Adjoint Models of Incremental 4D-Var

Monthly Weather Review ◽  
2020 ◽  
Vol 148 (4) ◽  
pp. 1541-1552 ◽  
Author(s):  
Sam Hatfield ◽  
Andrew McRae ◽  
Tim Palmer ◽  
Peter Düben

Abstract: The use of single-precision arithmetic in ECMWF’s forecasting model gave a 40% reduction in wall-clock time relative to double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint models. Here, we present the results of reducing numerical precision in an incremental 4D-Var data assimilation scheme with an underlying two-layer quasigeostrophic model. The minimizer used is the conjugate gradient method. We show how reducing precision increases the asymmetry between the tangent-linear and adjoint models. For ill-conditioned problems, this leads to a loss of orthogonality among the residuals of the conjugate gradient algorithm, which slows the convergence of the minimization procedure. However, we also show that a standard technique, reorthogonalization, eliminates these issues and could therefore allow the use of single-precision arithmetic. This work is carried out within ECMWF’s data assimilation framework, the Object-Oriented Prediction System (OOPS).
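
A minimal NumPy sketch of the mechanism described above (not the OOPS or quasigeostrophic-model code; the test matrix, condition number, and tolerance are invented for illustration): conjugate gradients run in float32 on an ill-conditioned symmetric positive-definite system, with and without full reorthogonalization of each new residual against all previous ones.

```python
# Sketch only: CG in float32 on an ill-conditioned SPD system, optionally
# reorthogonalizing each new residual against all stored past residuals.
import numpy as np

def cg(A, b, dtype=np.float32, reorthogonalize=False, tol=1e-4, max_iter=1000):
    A, b = A.astype(dtype), b.astype(dtype)
    x = np.zeros_like(b)
    r = b.copy()                 # residual for the initial guess x = 0
    p = r.copy()
    basis = []                   # normalized past residuals
    norms = [np.linalg.norm(r)]
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if reorthogonalize:      # re-project against all previous residuals
            for q in basis:
                r_new = r_new - (q @ r_new) * q
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        norms.append(np.linalg.norm(r))
        if reorthogonalize:
            basis.append(r / norms[-1])
        if norms[-1] < tol * norms[0]:
            break
    return x, norms

rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 4, n)) @ Q.T      # SPD, condition number ~1e4
b = rng.standard_normal(n)

for reorth in (False, True):
    _, hist = cg(A, b, reorthogonalize=reorth)
    print(f"reorthogonalization={reorth}: {len(hist) - 1} iterations, "
          f"final relative residual {hist[-1] / hist[0]:.1e}")
```

In float32 the computed residuals gradually lose mutual orthogonality, which delays convergence on ill-conditioned systems; the reorthogonalized variant restores it at the cost of storing and projecting against past residuals.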

2020 ◽  
Author(s):  
Oriol Tintó ◽  
Stella Valentina Paronuzzi Ticco ◽  
Mario C. Acosta ◽  
Miguel Castrillo ◽  
Kim Serradell ◽  
...  

One of the requirements for continuing to improve the science produced with NEMO is to enhance its computational performance. The interest in improving its ability to use the computational infrastructure efficiently is twofold: on one side, some experiments would only be feasible if a certain throughput threshold were achieved; on the other, any development that increases efficiency saves resources and reduces the environmental impact of our experiments. One opportunity that has attracted interest in recent years is the optimization of numerical precision. For historical reasons, many computational models over-engineer their numerical precision; correcting this mismatch can pay off in both efficiency and throughput. In this direction, research was carried out to safely reduce the numerical precision in NEMO, which led to a mixed-precision version of the model. The implementation follows the approach proposed by Tintó et al. (2019), in which the variables that require double precision are identified automatically and the remaining ones are switched to single precision. The implementation will be released in 2020, and this work presents its evaluation in terms of both performance and scientific results.
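
As a minimal illustration of the mixed-precision pattern (an invented toy tendency, not NEMO code), the sketch below computes tendencies in single precision while accumulating the model state in double precision; over a long integration the result stays close to the all-double reference.

```python
# Toy mixed-precision time stepping: tendency in float32, state in float64.
# The tendency (linear decay plus a fixed forcing) is invented for the demo.
import numpy as np

def step(state64, dt, tendency_dtype):
    s = state64.astype(tendency_dtype)            # tendency at reduced precision
    tend = -0.1 * s + np.sin(np.arange(s.size, dtype=tendency_dtype))
    return state64 + dt * tend.astype(np.float64)  # accumulate in double

state_ref = np.ones(1000)
state_mix = np.ones(1000)
for _ in range(10_000):
    state_ref = step(state_ref, 1e-3, np.float64)
    state_mix = step(state_mix, 1e-3, np.float32)

err = np.max(np.abs(state_mix - state_ref)) / np.max(np.abs(state_ref))
print(f"normalized max difference after 10,000 steps: {err:.2e}")
```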


Geophysics ◽  
1996 ◽  
Vol 61 (2) ◽  
pp. 357-364 ◽  
Author(s):  
Horst Holstein ◽  
Ben Ketteridge

Analytical formulas for the gravity anomaly of a uniform polyhedral body are subject to numerical error that increases with distance from the target, while the anomaly decreases. This leads to a limited range of target distances in which the formulas are operational, beyond which the calculations are dominated by rounding error. We analyze the sources of error and propose a combination of numerical and analytical procedures that exhibit advantages over existing methods, namely (1) errors that diminish with distance, (2) enhanced operating range, and (3) algorithmic simplicity. The latter is achieved by avoiding the need to transform coordinates and the need to discriminate between projected observation points that lie inside, on, or outside a target facet boundary. Our error analysis is verified in computations based on a published code and on a code implementing our methods. The former requires a numerical precision of one part in [Formula: see text] (double precision) in problems of geophysical interest, whereas our code requires a precision of one part in [Formula: see text] (single precision) to give comparable results, typically in half the execution time.
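
The cancellation mechanism behind this error growth can be illustrated with a few lines of NumPy (a toy model of two opposite point sources, not the polyhedron formulas of the paper): the small far-field signal is the difference of two nearly equal terms, so its relative error in float32 grows with distance unless the expression is algebraically rearranged.

```python
# Toy demonstration: a far-field signal computed as the difference of two
# nearly equal terms loses relative accuracy as distance d grows, while an
# algebraically equivalent rearrangement stays accurate.
import numpy as np

a = 1.0
one = np.float32(1.0)
for d in [10.0, 100.0, 1000.0, 10000.0]:
    d32, a32 = np.float32(d), np.float32(a)
    naive = one / (d32 - a32) - one / (d32 + a32)           # cancellation-prone
    stable = np.float32(2.0) * a32 / (d32 * d32 - a32 * a32)  # rearranged
    exact = 2.0 * a / (d * d - a * a)                       # float64 reference
    print(f"d={d:8.0f}  naive rel.err={abs(naive - exact) / exact:.1e}  "
          f"stable rel.err={abs(stable - exact) / exact:.1e}")
```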


2021 ◽  
Author(s):  
Stella Valentina Paronuzzi Ticco ◽  
Oriol Tintó Prims ◽  
Mario Acosta Cobos ◽  
Miguel Castrillo Melguizo

At the beginning of 2021 a mixed-precision version of the NEMO code was included in the official NEMO repository. The implementation followed the approach presented in Tintó et al. (2019). The proposed optimization, while far from trivial, is not new, and it is quite popular nowadays. For historical reasons, many computational models over-engineer their numerical precision, which leads to suboptimal exploitation of computational infrastructures. Correcting this mismatch can yield a considerable payback in terms of efficiency and throughput: we are not only taking a step toward a more environmentally friendly science; sometimes we are actually pushing the horizon of experiment feasibility a little further. To include the required changes smoothly in the official release, an automatic workflow was implemented: we attempt to minimize the number of changes required and, at the same time, maximize the number of variables that can be computed in single precision. Here we present a general sketch of the tool and the workflow used.

Starting from the original code, we automatically produce a new version of it in which the user can specify the precision of each declared real variable. With this new executable, a numerical precision analysis can be performed: a search algorithm specially designed for this task drives a workflow manager toward the creation of a list of variables that are safe to switch to single precision. The algorithm compares the result of each intermediate step of the workflow with reliable results from a double-precision version of the same code, detecting which variables need to retain higher accuracy.

The result of this analysis is eventually used to make the modifications needed in the code to produce the desired working mixed-precision version, while keeping the number of necessary changes low. Finally, the previous double-precision and the new mixed-precision versions are compared, including a computational comparison and a scientific validation, to show that the new version can be used for operational configurations without losing accuracy while dramatically increasing computational performance.
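
A hedged Python sketch of this kind of divide-and-conquer search (the actual tool operates on Fortran declarations through a workflow manager; `run_and_validate` here is a hypothetical stand-in that runs the model with `single_vars` in single precision, everything else in double, and returns True if the results match the double-precision reference):

```python
# Sketch of a divide-and-conquer precision search. For simplicity it tests
# the two halves independently, ignoring cross-half interactions that a
# production workflow would re-check with combined runs.
def find_double_vars(variables, run_and_validate):
    """Return the subset of `variables` that must stay in double precision."""
    if run_and_validate(single_vars=variables):
        return []                      # whole set is safe in single precision
    if len(variables) == 1:
        return list(variables)         # isolated culprit: must stay double
    mid = len(variables) // 2
    return (find_double_vars(variables[:mid], run_and_validate)
            + find_double_vars(variables[mid:], run_and_validate))

# Toy usage: pretend exactly two (invented) variable names break the runs.
needs_double = {"rn_rdt", "e3t"}
check = lambda single_vars: not (set(single_vars) & needs_double)
print(find_double_vars([f"v{i}" for i in range(6)] + ["rn_rdt", "e3t"], check))
# -> ['rn_rdt', 'e3t']
```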


2018 ◽  
Author(s):  
Pavel Pokhilko ◽  
Evgeny Epifanovsky ◽  
Anna I. Krylov

Using single-precision floating-point representation reduces the size of data and the computation time by a factor of two relative to the double precision conventionally used in electronic structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. The reduced data size can lead to additional gains due to improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or resolution-of-the-identity representations of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant, and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation with clean-up iterations, fully recovering double-precision results while retaining significant savings.
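
The clean-up strategy is analogous in spirit to classical mixed-precision iterative refinement, sketched here for a generic linear system (an illustration of the idea, not the coupled-cluster code; the matrix and iteration count are invented): the expensive factorization runs in single precision, and cheap double-precision residual corrections recover full accuracy.

```python
# Mixed-precision iterative refinement: factorize in float32, refine in
# float64. The O(n^3) cost is paid in single precision; the clean-up
# iterations reuse the single-precision factors.
import numpy as np
import scipy.linalg as la

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

lu, piv = la.lu_factor(A.astype(np.float32))      # single-precision factorization
x = la.lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

for it in range(5):
    r = b - A @ x                                 # residual in double precision
    dx = la.lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    x += dx
    rel = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    print(f"clean-up iteration {it}: relative residual {rel:.2e}")
```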

