Gravimetric analysis of uniform polyhedra

Geophysics ◽  
1996 ◽  
Vol 61 (2) ◽  
pp. 357-364 ◽  
Author(s):  
Horst Holstein ◽  
Ben Ketteridge

Analytical formulas for the gravity anomaly of a uniform polyhedral body are subject to numerical error that increases with distance from the target, while the anomaly decreases. This leads to a limited range of target distances in which the formulas are operational, beyond which the calculations are dominated by rounding error. We analyze the sources of error and propose a combination of numerical and analytical procedures that exhibit advantages over existing methods, namely (1) errors that diminish with distance, (2) enhanced operating range, and (3) algorithmic simplicity. The latter is achieved by avoiding the need to transform coordinates and the need to discriminate between projected observation points that lie inside, on, or outside a target facet boundary. Our error analysis is verified in computations based on a published code and on a code implementing our methods. The former requires a numerical precision of one part in [Formula: see text] (double precision) in problems of geophysical interest, whereas our code requires a precision of one part in [Formula: see text] (single precision) to give comparable results, typically in half the execution time.
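
A minimal numerical sketch of the error mechanism described above, assuming a toy signal that decays as 1/r^3 and is obtained as the difference of nearly equal terms (the polyhedron formulas of the paper are not reproduced here): the absolute rounding error stays roughly constant while the signal shrinks, so the relative error grows with distance until it dominates.

```python
import numpy as np

# Illustrative only: a signal that decays like a gravity anomaly of a compact
# body (~1/r^3), evaluated as a difference of nearly equal terms so that the
# absolute rounding error is roughly fixed while the signal shrinks with r.
# The 1/r^3 model and the cancellation pattern are assumptions for illustration.
def anomaly(r, dtype):
    r = dtype(r)
    big = dtype(1.0) + dtype(1.0) / r**dtype(3)   # nearly equal terms ...
    return big - dtype(1.0)                        # ... whose difference is the signal

for r in [1e1, 1e2, 1e3, 1e4]:
    exact = 1.0 / r**3
    single = float(anomaly(r, np.float32))
    print(f"r={r:8.0f}  exact={exact:.3e}  float32={single:.3e}  "
          f"rel.err={abs(single - exact) / exact:.1e}")
```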

2020 ◽  
Author(s):  
Oriol Tintó ◽  
Stella Valentina Paronuzzi Ticco ◽  
Mario C. Acosta ◽  
Miguel Castrillo ◽  
Kim Serradell ◽  
...  

One of the requirements for continuing to improve the science produced with NEMO is to enhance its computational performance. The interest in improving its ability to use the computational infrastructure efficiently is two-fold: on one side, there are experiments that would only become possible if a certain throughput threshold is reached; on the other, any development that increases efficiency helps save resources while reducing the environmental impact of our experiments. One of the opportunities that has raised interest in recent years is the optimization of numerical precision. For historical reasons, many computational models over-engineer their numerical precision; correcting this mismatch can pay off in terms of efficiency and throughput. In this direction, research was carried out to safely reduce the numerical precision in NEMO, which led to a mixed-precision version of the model. The implementation has been developed following the approach proposed by Tintó et al. 2019, in which the variables that require double precision are identified automatically and the remaining ones are switched to single precision. The implementation will be released in 2020, and this work presents its evaluation in terms of both performance and scientific results.
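
A minimal sketch of the mixed-precision idea in NumPy, assuming toy field names (`temperature`, `tendency`, `heat_budget`) and a toy update loop rather than any actual NEMO code: bulk fields are held in single precision while a precision-sensitive global accumulator is kept in double.

```python
import numpy as np

# Sketch only: bulk prognostic fields in float32, a globally accumulated
# diagnostic (assumed precision-sensitive) kept in float64.
temperature = np.random.rand(256, 256).astype(np.float32)   # bulk field: single precision
tendency    = np.random.rand(256, 256).astype(np.float32)

heat_budget = np.float64(0.0)        # sensitive accumulator kept in double precision
dt = np.float32(60.0)                # toy time step

for step in range(100):
    temperature += dt * tendency                      # single-precision update
    heat_budget += np.float64(temperature.sum())      # accumulate the diagnostic in double

print(temperature.dtype, heat_budget)
```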


2020 ◽  
Vol 148 (4) ◽  
pp. 1541-1552 ◽  
Author(s):  
Sam Hatfield ◽  
Andrew McRae ◽  
Tim Palmer ◽  
Peter Düben

Abstract The use of single-precision arithmetic in ECMWF’s forecasting model gave a 40% reduction in wall-clock time over double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint models. Here, we present the results of reducing numerical precision in an incremental 4D-Var data assimilation scheme with an underlying two-layer quasigeostrophic model. The minimizer used is the conjugate gradient method. We show how reducing precision increases the asymmetry between the tangent-linear and adjoint models. For ill-conditioned problems, this leads to a loss of orthogonality among the residuals of the conjugate gradient algorithm, which slows the convergence of the minimization procedure. However, we also show that a standard technique, reorthogonalization, eliminates these issues and therefore could allow the use of single-precision arithmetic. This work is carried out within ECMWF’s data assimilation framework, the Object Oriented Prediction System.
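
A hedged sketch of the effect described above, using a generic conjugate-gradient solver in float32 on an assumed ill-conditioned test matrix (not the OOPS/quasigeostrophic system): with `reorthogonalize=True`, each new residual is re-orthogonalised against the stored residual basis, the standard fix mentioned in the abstract.

```python
import numpy as np

# Sketch: conjugate gradients in float32 with optional re-orthogonalisation of
# the residuals. The SPD test matrix with condition number ~1e6 is an assumption
# for illustration only.
def cg(A, b, iters, reorthogonalize=False, dtype=np.float32):
    A, b = A.astype(dtype), b.astype(dtype)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    basis = [r / np.linalg.norm(r)]
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if reorthogonalize:                     # Gram-Schmidt against previous residuals
            for q in basis:
                r_new -= (r_new @ q) * q
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        basis.append(r / np.linalg.norm(r))
    return x

# Ill-conditioned SPD test problem
n = 50
Q, _ = np.linalg.qr(np.random.rand(n, n))
A = Q @ np.diag(np.logspace(0, 6, n)) @ Q.T
b = np.random.rand(n)
for flag in (False, True):
    x = cg(A, b, iters=60, reorthogonalize=flag)
    print("reorthogonalize =", flag, " residual norm =",
          np.linalg.norm(A @ x - b.astype(np.float32)))
```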


2021 ◽  
Author(s):  
Stella Valentina Paronuzzi Ticco ◽  
Oriol Tintó Prims ◽  
Mario Acosta Cobos ◽  
Miguel Castrillo Melguizo

At the beginning of 2021, a mixed-precision version of the NEMO code was included in the official NEMO repository. The implementation followed the approach presented in Tintó et al. 2019. The proposed optimization, while far from trivial, is not new and has become quite popular. For historical reasons, many computational models over-engineer their numerical precision, which leads to sub-optimal exploitation of computational infrastructure. Correcting this mismatch yields a considerable payback in terms of efficiency and throughput: we are not only taking a step toward a more environmentally friendly science, but sometimes also pushing the horizon of experiment feasibility a little further. To include the required changes smoothly in the official release, an automatic workflow has been implemented: we attempt to minimize the number of changes required and, at the same time, maximize the number of variables that can be computed in single precision. Here we present a general sketch of the tool and workflow used.

Starting from the original code, we automatically produce a new version in which the user can specify the precision of each declared real variable. With this new executable, a numerical precision analysis can be performed: a search algorithm specially designed for this task drives a workflow manager toward the creation of a list of variables that are safe to switch to single precision. The algorithm compares the result of each intermediate step of the workflow with reliable results from a double-precision version of the same code, detecting which variables need to retain higher accuracy.

The result of this analysis is then used to make the modifications needed to produce the desired mixed-precision version, while keeping the number of necessary changes low. Finally, the previous double-precision and the new mixed-precision versions are compared, including a computational comparison and a scientific validation, to demonstrate that the new version can be used for operational configurations without losing accuracy while dramatically increasing computational performance.
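
A simplified sketch of the kind of search described above, with a hypothetical validator `run_and_validate(single_vars)` standing in for an actual model run plus comparison against the double-precision reference; it also assumes that independently validated subsets remain valid when combined, which the real workflow has to verify.

```python
# Sketch only: divide-and-conquer over candidate variables. The helper
# run_and_validate(single_vars) is hypothetical; it is assumed to run the model
# with exactly those variables in single precision and return True if the
# results stay within tolerance of the double-precision reference.
def find_single_precision_vars(candidates, run_and_validate):
    """Return the subset of `candidates` that can safely use single precision."""
    if not candidates:
        return []
    if run_and_validate(candidates):          # whole set passes: keep all of it
        return list(candidates)
    if len(candidates) == 1:                  # a single failing variable must stay double
        return []
    mid = len(candidates) // 2
    left = find_single_precision_vars(candidates[:mid], run_and_validate)
    right = find_single_precision_vars(candidates[mid:], run_and_validate)
    return left + right

# Toy usage with invented variable names: pretend only "rn_rdt" must stay double.
vars_ = ["sst", "ssh", "rn_rdt", "u", "v"]
ok = find_single_precision_vars(vars_, lambda s: "rn_rdt" not in s)
print(ok)   # ['sst', 'ssh', 'u', 'v']
```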


2018 ◽  
Author(s):  
Pavel Pokhilko ◽  
Evgeny Epifanovsky ◽  
Anna I. Krylov

Using single-precision floating-point representation reduces the size of data and the computation time by a factor of two relative to the double precision conventionally used in electronic structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. The reduced size of data can lead to additional gains due to improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or resolution-of-the-identity representations of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant, and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation with clean-up iterations, fully recovering double-precision results while retaining significant savings.
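
A hedged illustration of the clean-up idea using a toy fixed-point solver (Jacobi iteration on a diagonally dominant system) instead of an actual coupled-cluster amplitude solver: the bulk of the iterations run in single precision, and a short double-precision tail recovers accuracy close to the double-precision answer.

```python
import numpy as np

# Toy analogue only: Jacobi iteration on a diagonally dominant system.
def jacobi(A, b, x0, iters, dtype):
    A, b, x = A.astype(dtype), b.astype(dtype), x0.astype(dtype)
    D = np.diag(A)
    R = A - np.diag(D)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

n = 200
A = np.random.rand(n, n) + n * np.eye(n)      # diagonally dominant => Jacobi converges
b = np.random.rand(n)
x_ref = np.linalg.solve(A, b)

x32 = jacobi(A, b, np.zeros(n), iters=60, dtype=np.float32)   # bulk of the work in single
x64 = jacobi(A, b, x32, iters=20, dtype=np.float64)           # clean-up tail in double

print("float32 only  :", np.linalg.norm(x32 - x_ref))
print("after clean-up:", np.linalg.norm(x64 - x_ref))
```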


2020 ◽  
Author(s):  
Alessandro Cotronei ◽  
Thomas Slawig

Abstract. We converted the radiation part of the atmospheric model ECHAM to single-precision arithmetic. We analyzed different conversion strategies and finally used a step-by-step change of all modules, subroutines, and functions. We found that a small portion of the code still requires higher-precision arithmetic. We generated code that can easily be changed from double to single precision and vice versa, essentially via a simple switch in one module. We compared the output of the single-precision version at coarse resolution with observational data and with the original double-precision code. The results of both versions are comparable. We extensively tested different parallelization options with respect to the possible performance gain, at both coarse and low resolution. The single-precision radiation itself was accelerated by about 40%, whereas the speed-up for the whole ECHAM model using the converted radiation reached 18% in the best configuration. We further measured the energy consumption, which could also be reduced.
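
A minimal sketch of the "switch in one module" idea, assuming a toy routine name (`planck_weights`) rather than ECHAM code: a single module-level working-precision constant is referenced wherever reals are created, so the component flips between single and double precision by editing one line.

```python
import numpy as np

# Sketch only: one working-precision constant controls every real in the module.
WP = np.float32            # set to np.float64 to restore double precision

def planck_weights(temperature):
    """Toy routine: every array and literal uses the working precision WP."""
    t = np.asarray(temperature, dtype=WP)
    return np.exp(-WP(1.0) / t)

print(planck_weights([0.5, 1.0, 2.0]).dtype)   # float32 (or float64 after the switch)
```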


2019 ◽  
Vol 8 (2S11) ◽  
pp. 2990-2993

Multiplication of floating-point numbers is a major operation in digital signal processing, so the performance of floating-point multipliers plays a central role in any digital design. Floating-point numbers are represented using the IEEE 754 standard in single-precision (32-bit), double-precision (64-bit), and quadruple-precision (128-bit) formats. Multiplication of these floating-point numbers can be carried out using Vedic mathematics, which comprises sixteen distinct algorithms, or sutras. The Urdhva Tiryagbhyam sutra is the one most commonly applied to the multiplication of binary numbers. This paper compares the work done by different researchers toward the design of IEEE 754 single-precision floating-point multipliers using Vedic mathematics.
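
A software illustration (not a hardware design) of the Urdhva Tiryagbhyam "vertically and crosswise" scheme on binary digits, the operation such multipliers implement on the significands: each result column sums the crosswise products of all digit pairs whose indices add up to that column, plus the carry from the previous column.

```python
# Sketch of column-wise (crosswise) multiplication of two bit lists.
def urdhva_multiply(a_bits, b_bits):
    """Multiply two little-endian bit lists by summing crosswise column products."""
    n, m = len(a_bits), len(b_bits)
    result, carry = [], 0
    for col in range(n + m - 1):
        s = carry
        for i in range(n):                      # crosswise: all pairs whose indices sum to col
            j = col - i
            if 0 <= j < m:
                s += a_bits[i] * b_bits[j]
        result.append(s & 1)
        carry = s >> 1
    while carry:                                # flush the remaining carry bits
        result.append(carry & 1)
        carry >>= 1
    return result

# 13 x 11 = 143, with bits given least-significant first
a = [1, 0, 1, 1]        # 13
b = [1, 1, 0, 1]        # 11
bits = urdhva_multiply(a, b)
print(sum(bit << k for k, bit in enumerate(bits)))   # 143
```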


2021 ◽  
Author(s):  
Sam Hatfield ◽  
Kristian Mogensen ◽  
Peter Dueben ◽  
Nils Wedi ◽  
Michail Diamantakis

Earth-System models traditionally use double-precision, 64-bit floating-point numbers to perform arithmetic. According to orthodoxy, we must use such a relatively high level of precision in order to minimise the potential impact of rounding errors on the physical fidelity of the model. However, given the inherently imperfect formulation of our models, and the computational benefits of lower-precision arithmetic, we must question this orthodoxy. At ECMWF, a single-precision, 32-bit variant of the atmospheric model IFS has been undergoing rigorous testing in preparation for operations for around 5 years. The single-precision simulations have been found to have effectively the same forecast skill as the double-precision simulations while finishing in 40% less time, thanks to the memory and cache benefits of single-precision numbers. Following these positive results, other modelling groups are now also considering single precision as a way to accelerate their simulations.

In this presentation I will explain the rationale behind the move to lower-precision floating-point arithmetic and give up-to-date results from the single-precision atmospheric model at ECMWF, which will be operational imminently. I will then provide an update on the development of the single-precision ocean component at ECMWF, based on the NEMO ocean model, including a verification of quarter-degree simulations. I will also present new results from running ECMWF's coupled atmosphere-ocean-sea-ice-wave forecasting system entirely in single precision. Finally, I will discuss the feasibility of even lower levels of precision, such as half precision, which are now becoming available through GPU- and ARM-based systems such as Summit and Fugaku, respectively. The use of reduced-precision floating-point arithmetic will be an essential consideration for developing high-resolution, storm-resolving Earth-System models.
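
For reference, a short NumPy snippet showing the unit roundoff and approximate decimal-digit budget of the double, single, and half precision formats discussed above; the forecast results themselves are of course not reproducible this way.

```python
import numpy as np

# Precision characteristics of the three IEEE 754 formats mentioned above.
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{dtype.__name__:8s}  eps={info.eps:.2e}  "
          f"~{info.precision} decimal digits  max={info.max:.2e}")
```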


2013 ◽  
Vol 411-414 ◽  
pp. 1670-1673
Author(s):  
Sheng Chang ◽  
Heng Cai ◽  
Hao Wang ◽  
Jin He ◽  
Qi Jun Huang

Single precision can only achieve 6-7 decimal places, which does not satisfy the accuracy demands of many calculations. Double precision achieves 13-14 decimal places, but at a high resource cost. In this paper, the effects of data bit width on digital logic design are studied. The accuracy attainable with different bit widths is determined. Then addition, multiplication, and matrix multiplication at different bit widths are tested on an FPGA platform. The results show that bit width and the circuit design platform have a marked effect on resource cost and circuit efficiency. Finally, a bit-width-based circuit design optimization method is proposed.
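
A quick illustration of the digit counts quoted above, using NumPy on a CPU rather than FPGA hardware: pi stored in single precision retains about 7 correct significant decimal digits, and about 16 in double precision.

```python
import numpy as np

# Compare stored values of pi against the first digits of the exact expansion.
pi_50 = "3.14159265358979323846264338327950288419716939937510"
print("float32:", repr(float(np.float32(np.pi))))   # ~7 correct digits
print("float64:", repr(np.pi))                      # ~16 correct digits
print("exact  :", pi_50[:20], "...")
```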

