Guiding Principles for Choosing Numerical Precision in Atmospheric Models Based on CESM

Author(s):  
Jiayi Lai

The next generation of weather and climate models will have unprecedented resolution and model complexity, placing correspondingly greater demands on computation and memory. Reducing the precision of selected variables and using mixed-precision methods in atmospheric models can greatly improve computational and memory throughput. However, to guarantee accurate results, most models over-engineer their numerical precision, so the resources they occupy far exceed those actually required. Previous studies have shown that the precision necessary for an accurate weather model is clearly scale-dependent, with large spatial scales requiring higher precision than small scales; even at large scales, the necessary precision is far below double precision. It remains difficult, however, to find a principled method for assigning different precisions to different variables so as to avoid unnecessary waste. This paper takes CESM1.2.1 as its research object, conducts a large number of reduced-precision tests, and proposes a new discrimination method similar to the CFL criterion. The method can verify individual variables, thereby determining which variables can use a lower level of precision without degrading the accuracy of the results.
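As a minimal illustration of how such per-variable reduced-precision tests can be emulated in software (a NumPy sketch, not the paper's actual tooling; the variable name and bit count below are assumptions), one can truncate the float64 significand of a single variable and compare the resulting run against a double-precision control:

```python
import numpy as np

def reduce_precision(x, keep_bits):
    """Zero all but the leading `keep_bits` of the 52 significand bits
    of a float64 array (round-toward-zero), emulating lower precision."""
    x = np.asarray(x, dtype=np.float64)
    drop = np.uint64(52 - keep_bits)
    mask = ~((np.uint64(1) << drop) - np.uint64(1))
    return (x.view(np.uint64) & mask).view(np.float64)

# Hypothetical example: a temperature field truncated to a
# half-precision-like significand before re-entering the time step.
t = np.linspace(250.0, 300.0, 5)            # dummy field [K]
print(reduce_precision(t, keep_bits=10))
```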

2021 ◽  
pp. 1-43
Author(s):  
E. Adam Paxton ◽  
Matthew Chantry ◽  
Milan Klöwer ◽  
Leo Saffin ◽  
Tim Palmer

Abstract. Motivated by recent advances in operational weather forecasting, we study the efficacy of low-precision arithmetic for climate simulations. We develop a framework to measure rounding error in a climate model, which provides a stress-test for a low-precision version of the model, and we apply our method to a variety of models including the Lorenz system; a shallow water approximation for flow over a ridge; and a coarse-resolution spectral global atmospheric model with simplified parameterisations (SPEEDY). Although double precision (52 significant bits) is standard across operational climate models, in our experiments we find that single precision (23 sbits) is more than enough and that as low as half precision (10 sbits) is often sufficient. For example, SPEEDY can be run with 12 sbits across the code with negligible rounding error, and with 10 sbits if minor errors are accepted, amounting to less than 0.1 mm/6hr for average grid-point precipitation, for example. Our test is based on the Wasserstein metric, which provides stringent non-parametric bounds on rounding error accounting for annual means as well as extreme weather events. In addition, by testing models using both round-to-nearest (RN) and stochastic rounding (SR), we find that SR can mitigate rounding error across a range of applications, and thus our results also provide some evidence that SR could be relevant to next-generation climate models. Further research is needed to test whether our results generalise to higher resolutions and alternative numerical schemes. However, the results open a promising avenue towards the use of low-precision hardware for improved climate modelling.
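To make the two rounding modes concrete, here is a hedged sketch (mine, not the authors' framework) of round-to-nearest versus stochastic rounding onto a fixed quantisation step, with SciPy's Wasserstein distance used to compare the resulting distributions, in the spirit of the test described above:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def round_nearest(x, step):
    """Deterministic round-to-nearest (RN) onto multiples of `step`."""
    return np.round(x / step) * step

def round_stochastic(x, step):
    """Stochastic rounding (SR): round up with probability equal to the
    fractional remainder, so rounding errors average to zero."""
    q = x / step
    lo = np.floor(q)
    return (lo + (rng.random(x.shape) < (q - lo))) * step

# Stand-in for a model output distribution (e.g. grid-point precipitation)
control = rng.gamma(shape=2.0, scale=1.0, size=10_000)
print(wasserstein_distance(control, round_nearest(control, 0.5)))
print(wasserstein_distance(control, round_stochastic(control, 0.5)))
```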


2020 ◽  
Author(s):  
Oriol Tintó ◽  
Stella Valentina Paronuzzi Ticco ◽  
Mario C. Acosta ◽  
Miguel Castrillo ◽  
Kim Serradell ◽  
...  

One of the requirements for continuing to improve the science produced with NEMO is to enhance its computational performance. The interest in improving its ability to efficiently use the computational infrastructure is two-fold: on one side, there are experiments that would only be possible if a certain threshold of throughput is achieved; on the other, any development that increases efficiency helps save resources while reducing the environmental impact of our experiments. One of the opportunities that has raised interest in the last few years is the optimization of numerical precision. For historical reasons, many computational models over-engineer their numerical precision; correcting this misadjustment can pay back in terms of efficiency and throughput. In this direction, research was carried out to safely reduce the numerical precision in NEMO, which led to a mixed-precision version of the model. The implementation follows the approach proposed by Tintó et al. 2019, in which the variables that require double precision are identified automatically and the remaining ones are switched to single precision. The implementation will be released in 2020, and this work presents its evaluation in terms of both performance and scientific results.
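The automatic identification can be pictured as a divide-and-conquer search over the model's real variables. The sketch below is a schematic reading of the Tintó et al. 2019 approach, not the actual NEMO tooling; the `passes_validation` callback, which would run the model with a subset of variables in single precision and compare against the double-precision reference, is assumed:

```python
def find_double_precision_vars(variables, passes_validation):
    """Return the variables that must stay in double precision.

    `passes_validation(subset)` is assumed to run the model with
    `subset` switched to single precision (everything else double)
    and report whether results still match the reference.
    """
    must_stay_double = []

    def search(candidates):
        if passes_validation(candidates):
            return                          # whole group is safe in single
        if len(candidates) == 1:
            must_stay_double.append(candidates[0])
            return
        mid = len(candidates) // 2          # split and test each half
        search(candidates[:mid])
        search(candidates[mid:])

    search(list(variables))
    return must_stay_double
```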


1995 ◽  
Vol 21 ◽  
pp. 83-90 ◽  
Author(s):  
Biao Chen ◽  
David H. Bromwich ◽  
Keith M. Hines ◽  
Xuguang Pan

The simulation of the northern and southern polar climates for 1979–88 by 14 global climate models (GCMs), using the observed monthly averaged sea-surface temperatures and sea-ice extents as boundary conditions, is part of an international effort to determine the systematic errors of atmospheric models under realistic conditions, the so-called Atmospheric Model Intercomparison Project (AMIP). In this study, intercomparison of the models’ simulation of polar climate is discussed in terms of selected surface and vertically integrated monthly averaged quantities, such as sea-level pressure, cloudiness, precipitable water, precipitation and evaporation/sublimation. The results suggest that the accuracy of model-simulated climate features in high latitudes primarily depends on the horizontal resolution and the treatment of physical processes in the GCMs. AMIP offers an unprecedented opportunity for the comprehensive evaluation and validation of current atmospheric models and provides valuable information for model improvement.


2021 ◽  
Author(s):  
Stella Valentina Paronuzzi Ticco ◽  
Oriol Tintó Prims ◽  
Mario Acosta Cobos ◽  
Miguel Castrillo Melguizo

At the beginning of 2021, a mixed-precision version of the NEMO code was included in the official NEMO repository. The implementation followed the approach presented in Tintó et al. 2019. The proposed optimization, although far from trivial, is not new and is quite popular nowadays. For historical reasons, many computational models over-engineer their numerical precision, which leads to a sub-optimal exploitation of computational infrastructures. Correcting this misadjustment can bring a considerable payback in terms of efficiency and throughput: we are not only taking a step toward a more environmentally friendly science; sometimes we are actually pushing the horizon of experiment feasibility a little further. To smoothly include the needed changes in the official release, an automatic workflow was implemented: we attempt to minimize the number of changes required and, at the same time, maximize the number of variables that can be computed in single precision. Here we present a general sketch of the tool and workflow used.

Starting from the original code, we automatically produce a new version in which the user can specify the precision of each declared real variable. With this new executable, a numerical precision analysis can be performed: a search algorithm specially designed for this task drives a workflow manager toward the creation of a list of variables that are safe to switch to single precision. The algorithm compares the result of each intermediate step of the workflow with reliable results from a double-precision version of the same code, detecting which variables need to retain higher accuracy.

The result of this analysis is then used to make the modifications needed in the code to produce the desired mixed-precision version, while keeping the number of necessary changes low. Finally, the previous double-precision and the new mixed-precision versions are compared, including a computational comparison and a scientific validation, to prove that the new version can be used for operational configurations without losing accuracy while dramatically increasing computational performance.
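The acceptance test at each step of such a workflow can be as simple as a normalised error threshold against the double-precision reference. A minimal sketch follows; the metric and tolerance are illustrative assumptions, not NEMO's actual validation criteria:

```python
import numpy as np

def within_tolerance(reference, candidate, rtol=1e-4):
    """Accept a mixed-precision result if its RMS deviation from the
    double-precision reference is small relative to the reference's
    own RMS magnitude (threshold chosen for illustration only)."""
    rms_err = np.sqrt(np.mean((candidate - reference) ** 2))
    rms_ref = np.sqrt(np.mean(reference ** 2))
    return rms_err <= rtol * rms_ref

ref = np.array([1.0, 2.0, 3.0])
print(within_tolerance(ref, ref + 1e-6))    # True at this tolerance
```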


2010 ◽  
Vol 10 (14) ◽  
pp. 6749-6763 ◽  
Author(s):  
N. C. Dickson ◽  
K. M. Gierens ◽  
H. L. Rogers ◽  
R. L. Jones

Abstract. The global observation, assimilation and prediction in numerical models of ice super-saturated (ISS) regions (ISSR) are crucial if the climate impact of aircraft condensation trails (contrails) is to be fully understood, and if, for example, contrail formation is to be avoided through aircraft operational measures. Given their small scales compared to typical atmospheric model grid sizes, statistical representations of the spatial scales of ISSR are required, in both horizontal and vertical dimensions, if the global occurrence of ISSR is to be adequately represented in climate models. This paper uses radiosonde launches made by the UK Meteorological Office from the British Isles, Gibraltar, St. Helena and the Falkland Islands between January 2002 and December 2006 to investigate the probabilistic occurrence of ISSR. Each radiosonde profile is divided into 50- and 100-hPa pressure layers to emulate the coarse vertical resolution of some atmospheric models. The high-resolution observations contained within each thick pressure layer are then used to calculate an average relative humidity and an ISS fraction for each individual layer. These relative-humidity layer descriptions are then linked through a probability function to produce an s-shaped curve which empirically describes the ISS fraction of a pressure layer as a function of its average relative humidity. Using this empirical understanding of the s-shaped relationship, a mathematical model was developed to represent the ISS fraction within any arbitrary thick pressure layer. Two models were developed, for 50- and 100-hPa pressure layers respectively, each reconstructing its s-shape to within 8–10% of the empirical curve. These new models can be used to represent the small-scale structure of ISS events in modelled data where only low vertical resolution is available. This will be useful in understanding and improving the observed and forecast global distribution of ice super-saturation.
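An s-shaped relationship of this kind is commonly represented by a logistic function fitted to the layer statistics. The sketch below shows the general form with made-up illustrative numbers; the functional form and data are assumptions, not the paper's actual model or observations:

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(rh, a, b):
    """Logistic model for the ISS fraction of a thick pressure layer
    as a function of its layer-average relative humidity (a hedged
    stand-in for the paper's empirical s-shaped function)."""
    return 1.0 / (1.0 + np.exp(-a * (rh - b)))

# Illustrative data: layer-mean RH (%) vs observed ISS fraction
rh = np.array([60, 70, 80, 90, 100, 110], dtype=float)
frac = np.array([0.02, 0.08, 0.25, 0.55, 0.85, 0.97])

(a, b), _ = curve_fit(s_curve, rh, frac, p0=(0.1, 90.0))
print(f"steepness a={a:.3f}, midpoint b={b:.1f}%")
```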


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mulalo M. Muluvhahothe ◽  
Grant S. Joseph ◽  
Colleen L. Seymour ◽  
Thinandavha C. Munyai ◽  
Stefan H. Foord

Abstract. High-altitude-adapted ectotherms can escape competition from dominant species by tolerating low temperatures at cooler elevations, but climate change is eroding such advantages. Studies evaluating broad-scale impacts of global change on high-altitude organisms often overlook the mitigating role of biotic factors. Yet, at fine spatial scales, vegetation-associated microclimates provide refuges from climatic extremes. Using one of the largest standardised data sets collected to date, we tested how ant species composition and functional diversity (i.e., the range and value of species traits found within assemblages) respond to large-scale abiotic factors (altitude, aspect) and fine-scale factors (vegetation, soil structure) along an elevational gradient in tropical Africa. Altitude emerged as the principal factor explaining species composition. Analysis of the nestedness and turnover components of beta diversity indicated that ant assemblages are specific to each elevation, so species are not filtered out but replaced with new species as elevation increases. Similarity of assemblages over time (assessed using beta decay) did not change significantly at low and mid elevations but declined at the highest elevations. Assemblages also differed between northern and southern mountain aspects, although at the highest elevations composition was restricted to a set of species found on both aspects. Functional diversity was not explained by large-scale variables like elevation, but by factors associated with elevation that operate at fine scales (i.e., temperature and habitat structure). Our findings highlight the significance of fine-scale variables in predicting organisms' responses to changing temperature, offering management possibilities that might dilute climate change impacts, and caution against predicting assemblage responses using climate models alone.


2014 ◽  
Vol 27 (10) ◽  
pp. 3848-3868 ◽  
Author(s):  
John T. Allen ◽  
David J. Karoly ◽  
Kevin J. Walsh

Abstract. The influence of a warming climate on the occurrence of severe thunderstorm environments in Australia was explored using two global climate models: Commonwealth Scientific and Industrial Research Organisation Mark, version 3.6 (CSIRO Mk3.6), and the Cubic-Conformal Atmospheric Model (CCAM). These models have previously been evaluated and found to be capable of reproducing a useful climatology for the twentieth-century period (1980–2000). Analyzing the changes between the historical period and high warming climate scenarios for the period 2079–99 has allowed estimation of the potential convective future for the continent. Based on these simulations, significant increases to the frequency of severe thunderstorm environments will likely occur for northern and eastern Australia in a warmed climate. This change is a response to increasing convective available potential energy from higher continental moisture, particularly in proximity to warm sea surface temperatures. Despite decreases to the frequency of environments with high vertical wind shear, it appears unlikely that this will offset increases to thermodynamic energy. The change is most pronounced during the peak of the convective season, increasing its length and the frequency of severe thunderstorm environments therein, particularly over the eastern parts of the continent. The implications of this potential increase are significant, with the overall frequency of potential severe thunderstorm days per year likely to rise over the major population centers of the east coast by 14% for Brisbane, 22% for Melbourne, and 30% for Sydney. The limitations of this approach are then discussed in the context of ways to increase the confidence of predictions of future severe convection.


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly unsuspicious modifications associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet their verification is challenged by the chaotic nature of our atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time, and can also be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
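As an illustration of the ensemble-plus-hypothesis-test idea (the specific test and numbers below are assumptions of this sketch; the methodology above admits different tests), a two-sample Kolmogorov-Smirnov comparison of one output variable across two ensembles:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for one output variable (e.g. 2 m temperature at a grid
# point) from two 50-member ensembles: the reference model and the
# modified model (changed hardware, precision, or parameters).
ens_ref = rng.normal(loc=285.0, scale=1.0, size=50)
ens_new = rng.normal(loc=285.0, scale=1.0, size=50)

# Two-sample Kolmogorov-Smirnov test: a large p-value means the test
# cannot distinguish the two ensembles, i.e. the change is consistent
# with internal variability at this significance level.
stat, p = ks_2samp(ens_ref, ens_new)
print(f"KS statistic {stat:.3f}, p-value {p:.3f}")
```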


2014 ◽  
Vol 1 (2) ◽  
pp. 1283-1312
Author(s):  
M. Abbas ◽  
A. Ilin ◽  
A. Solonen ◽  
J. Hakkarainen ◽  
E. Oja ◽  
...  

Abstract. In this work, we consider the Bayesian optimization (BO) approach for tuning parameters of complex chaotic systems. Such problems arise, for instance, in tuning the sub-grid-scale parameterizations in weather and climate models. For such problems, the tuning procedure is generally based on a performance metric which measures how well the tuned model fits the data, and the tuning is often a computationally expensive task. We show that BO, as a tool for finding the extrema of computationally expensive objective functions, is suitable for such tuning tasks. In the experiments, we consider tuning parameters of two systems: a simplified atmospheric model and a low-dimensional chaotic system. We show that BO is able to tune the parameters of both systems with a low number of objective function evaluations and without the need for any gradient information.
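A hedged sketch of the BO loop using scikit-optimize (the library choice, the toy objective, and the parameter bounds are assumptions for illustration; the paper's systems and performance metric differ):

```python
from skopt import gp_minimize

def objective(params):
    """Stand-in for an expensive performance metric: squared misfit
    of the Lorenz-63 rho parameter against a 'true' value of 28."""
    rho, = params
    return (rho - 28.0) ** 2

# Gaussian-process-based Bayesian optimization: few objective
# evaluations and no gradient information required.
result = gp_minimize(objective, dimensions=[(0.0, 50.0)], n_calls=20,
                     random_state=0)
print("best rho:", result.x[0], " cost:", result.fun)
```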

