Profiling and scalability of the high resolution NCEP model for weather and climate simulations

Author(s):  
R Phani ◽  
A. K. Sahai ◽  
A. Suryachandra Rao ◽  
Jeelani Smd

2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Masayoshi Ishii ◽  
Nobuhito Mori

Abstract A large-ensemble climate simulation database, known as the database for policy decision-making for future climate changes (d4PDF), was designed for climate change risk assessments. Since the completion of the first set of climate simulations in 2015, the database has been growing continuously. It contains the results of ensemble simulations spanning thousands of years in total for both past and future climates, conducted with high-resolution global (60 km horizontal mesh) and regional (20 km mesh) atmospheric models. Several sets of future climate simulations are available, in which global mean surface air temperatures are forced to be higher by 4 K, 2 K, and 1.5 K relative to preindustrial levels. Non-warming past climate simulations are incorporated in d4PDF alongside the past climate simulations. The total data volume is approximately 2 petabytes. The atmospheric models satisfactorily simulate the past climate in terms of climatology, natural variations, and extreme events such as heavy precipitation and tropical cyclones. In addition, data users can obtain statistically significant changes in mean states or in weather and climate extremes of interest between the past and future climates via simple arithmetic, without any statistical assumptions. The database is helpful in understanding future changes in climate states and in attributing past climate events to global warming. Impact assessment studies of climate change have concurrently been performed in various research areas such as natural hazards, hydrology, civil engineering, agriculture, health, and insurance. The database has become essential for promoting climate and risk assessment studies and for devising climate adaptation policies. Moreover, it has helped establish an interdisciplinary research community on global warming across Japan.
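As a rough illustration of the "simple arithmetic" the abstract refers to, the sketch below estimates the change in the exceedance frequency of heavy daily precipitation by direct counting over a large pooled sample. The gamma-distributed series, single grid point, threshold, and sample sizes are hypothetical stand-ins, not d4PDF data or its access interface.

```python
import numpy as np

# Hypothetical stand-ins for daily precipitation (mm/day) at one grid point,
# pooled over all ensemble members: thousands of simulated years per climate.
# Real d4PDF output would be read from the archived model fields instead.
rng = np.random.default_rng(42)
past_daily_pr = rng.gamma(shape=0.6, scale=8.0, size=3000 * 365)
plus4k_daily_pr = rng.gamma(shape=0.6, scale=9.5, size=3000 * 365)

threshold = 60.0  # mm/day, a heavy daily total

# Direct counting: with this many simulated days the exceedance frequency is
# estimated empirically, with no distribution fitting or extrapolation.
freq_past = np.mean(past_daily_pr > threshold)      # exceedances per day
freq_plus4k = np.mean(plus4k_daily_pr > threshold)

print(f"Return period (past): {1.0 / (freq_past * 365):6.1f} years")
print(f"Return period (+4 K): {1.0 / (freq_plus4k * 365):6.1f} years")
print(f"Change in mean precipitation: "
      f"{plus4k_daily_pr.mean() - past_daily_pr.mean():+.2f} mm/day")
```

With a large enough ensemble the sampling uncertainty of such counts is small, which is what lets users compare past and future climates without distributional assumptions.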


Water ◽  
2019 ◽  
Vol 11 (6) ◽  
pp. 1296 ◽  
Author(s):  
Huiying Ren ◽  
Z. Jason Hou ◽  
Mark Wigmosta ◽  
Ying Liu ◽  
L. Ruby Leung

Changes in extreme precipitation events may require revisions of civil engineering standards to prevent water infrastructure from performing below the designated guidelines. Climate change may invalidate intensity-duration-frequency (IDF) computations that are based on the assumption of data stationarity. Efforts to evaluate non-stationarity in annual maxima series have been inadequate, mostly owing to the lack of long data records and of convenient methods for detecting trends in the higher moments. In this study, using downscaled high resolution climate simulations of the historical and future periods under different carbon emission scenarios, we tested two approaches to obtaining reliable IDFs under non-stationarity: (1) identify quasi-stationary time windows in the time series of interest and compute the IDF curves from the data in those windows; (2) introduce a parameter representing the trend in the means of the extreme value distributions. Focusing on a mountainous site, the Walker Watershed, we evaluated the spatial heterogeneity and variability of IDFs and extremes, particularly the impacts of terrain and elevation. We compared observation-based IDFs that rely on the stationarity assumption with the two approaches that account for non-stationarity. IDFs estimated directly under the traditional stationarity assumption may underestimate the 100-year 24-h events by 10% to 60% towards the end of the century at most grid points, resulting in significant under-design of engineering infrastructure at the study site. Strong spatial heterogeneity and variability in the IDF estimates favor using high resolution simulation data, rather than data from sparsely distributed weather stations, for reliable estimation of exceedance probabilities. Discrepancies among the three IDF analyses due to non-stationarity are comparable to the spatial variability of the IDFs, underscoring the need to use an ensemble of non-stationary approaches to achieve unbiased and comprehensive IDF estimates.
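A minimal sketch of the second approach, assuming a GEV fit by maximum likelihood with a linear drift in the location parameter. The synthetic annual-maximum series, parameter values, and the Nelder-Mead optimizer are illustrative choices, not the study's actual configuration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

def neg_log_likelihood(params, maxima, years):
    """Negative log-likelihood of a GEV whose location drifts linearly with time."""
    mu0, mu1, log_sigma, c = params            # c is scipy's shape (c = -xi)
    sigma = np.exp(log_sigma)                  # keeps the scale positive
    loc = mu0 + mu1 * (years - years.min())    # time-varying location parameter
    ll = genextreme.logpdf(maxima, c, loc=loc, scale=sigma)
    return -np.sum(ll) if np.all(np.isfinite(ll)) else np.inf

# Hypothetical annual-maximum 24-h precipitation series (mm) with a weak trend
years = np.arange(1950, 2100)
maxima = genextreme.rvs(-0.1, loc=40.0 + 0.05 * (years - 1950), scale=10.0,
                        size=years.size, random_state=np.random.default_rng(1))

# Fit the non-stationary model by maximum likelihood
x0 = [maxima.mean(), 0.0, np.log(maxima.std()), -0.1]
fit = minimize(neg_log_likelihood, x0, args=(maxima, years), method="Nelder-Mead")
mu0, mu1, log_sigma, c = fit.x

# 100-year return level for a chosen year = the 0.99 quantile of that year's GEV
year = 2090
loc_2090 = mu0 + mu1 * (year - years.min())
print(f"100-year 24-h return level in {year}: "
      f"{genextreme.ppf(0.99, c, loc=loc_2090, scale=np.exp(log_sigma)):.1f} mm")
```

The trend term mu1 makes the return level a function of the target year, which is how the non-stationary IDF differs from one fitted under stationarity.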


2016 ◽  
Author(s):  
Paolo Davini ◽  
Jost von Hardenberg ◽  
Susanna Corti ◽  
Hannah M. Christensen ◽  
Stephan Juricke ◽  
...  

Abstract. The Climate SPHINX (Stochastic Physics HIgh resolutioN eXperiments) project is a comprehensive set of ensemble simulations aimed at evaluating the sensitivity of present and future climate to model resolution and stochastic parameterisation. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in a large ensemble of 30-year climate integrations at five different atmospheric horizontal resolutions (from 125 km down to 16 km). The project includes more than 120 simulations in both a historical scenario (1979–2008) and a climate change projection (2039–2068), together with coupled transient runs (1850–2100). A total of 20.4 million core hours have been used, made available through a single-year grant from PRACE (the Partnership for Advanced Computing in Europe), and close to 1.5 PBytes of output data have been produced on the SuperMUC IBM Petascale System at the Leibniz Supercomputing Center (LRZ) in Garching, Germany. About 140 TBytes of post-processed data are stored in the CINECA supercomputing center archives and are freely accessible to the community thanks to an EUDAT Data Pilot project. This paper presents the technical and scientific setup of the experiments, including details of the forcings used for the simulations performed, which define the SPHINX v1.0 protocol. In addition, an overview of preliminary results is given: an improvement in the simulation of Euro-Atlantic atmospheric blocking is observed as resolution increases. It is also shown that including stochastic parameterisation in the low resolution runs helps to improve some aspects of the tropical climate, specifically the Madden-Julian Oscillation and tropical rainfall variability. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high resolution simulations) or stochastically (in low resolution simulations).
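For readers unfamiliar with stochastic parameterisation, the sketch below shows the basic idea of a stochastically perturbed parameterisation tendency at a single point: the parameterised tendency is multiplied by (1 + r), where r is bounded red noise. The time step, decorrelation time, amplitude, and tendency value are all hypothetical; the actual EC-Earth scheme uses spatially correlated perturbation patterns and is not reproduced here.

```python
import numpy as np

def sppt_multiplier(n_steps, dt, tau=6 * 3600.0, sigma=0.5, rng=None):
    """Bounded AR(1) red noise r(t); the perturbed tendency is (1 + r) * tendency."""
    rng = rng if rng is not None else np.random.default_rng()
    phi = np.exp(-dt / tau)                    # AR(1) autocorrelation per step
    r = np.zeros(n_steps)
    for k in range(1, n_steps):
        r[k] = phi * r[k - 1] + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal()
    return np.clip(r, -0.95, 0.95)             # keep the multiplier bounded

dt = 900.0                                     # hypothetical 15-minute model time step
r = sppt_multiplier(n_steps=96, dt=dt, rng=np.random.default_rng(0))

physics_tendency = -2.0e-5                     # hypothetical parameterised T tendency (K/s)
perturbed_tendency = (1.0 + r) * physics_tendency
print(perturbed_tendency[:5])
```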


1997 ◽  
Vol 7 (1) ◽  
pp. 3 ◽  
Author(s):  
R. A. Pielke ◽  
T. J. Lee ◽  
J. H. Copeland ◽  
J. L. Eastman ◽  
C. L. Ziegler ◽  
...  

2019 ◽  
Vol 12 (11) ◽  
pp. 4571-4584 ◽  
Author(s):  
Zhiqiang Li ◽  
Yulun Zhou ◽  
Bingcheng Wan ◽  
Hopun Chung ◽  
Bo Huang ◽  
...  

Abstract. The veracity of urban climate simulation models should be systematically evaluated to demonstrate their trustworthiness against possible model uncertainties. However, existing studies have paid insufficient attention to model evaluation; most provide only simple comparisons of modelled variables against their observed counterparts along the temporal dimension. Challenges remain, since such simple comparisons cannot concretely establish that the simulation of urban climate behaviours is reliable. Studies without systematic model evaluation, being ambiguous or arbitrary to some extent, may lead to seemingly new but scientifically misleading findings. To tackle these challenges, this article proposes a methodological framework for evaluating high-resolution urban climate simulations and demonstrates its effectiveness with a case study in the area of Shenzhen and Hong Kong SAR, China. It is intended to (again) remind urban climate modellers of the necessity of conducting systematic model evaluations in urban-scale climate modelling and to reduce ambiguous or arbitrary modelling practices.
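As a concrete example of going beyond simple comparison lines, the sketch below computes a few standard verification metrics (mean bias, RMSE, Pearson correlation, and Willmott's index of agreement) for a modelled series against observations. These are common choices and the temperature values are hypothetical; they are not the specific metrics or data of the proposed framework.

```python
import numpy as np

def evaluation_metrics(modelled, observed):
    """Point-wise verification metrics for a modelled series against observations."""
    modelled = np.asarray(modelled, dtype=float)
    observed = np.asarray(observed, dtype=float)
    error = modelled - observed
    bias = error.mean()                                 # mean bias
    rmse = np.sqrt(np.mean(error**2))                   # root-mean-square error
    corr = np.corrcoef(modelled, observed)[0, 1]        # Pearson correlation
    # Willmott's index of agreement (1 = perfect agreement)
    denom = np.sum((np.abs(modelled - observed.mean())
                    + np.abs(observed - observed.mean()))**2)
    ioa = 1.0 - np.sum(error**2) / denom
    return {"bias": bias, "rmse": rmse, "corr": corr, "ioa": ioa}

# Hypothetical hourly 2 m temperatures (deg C) at one station
observed = [26.1, 27.3, 29.0, 30.2, 31.1, 30.5, 28.9, 27.4]
modelled = [25.4, 26.9, 28.1, 29.8, 31.6, 31.0, 29.5, 27.0]
print(evaluation_metrics(modelled, observed))
```

A systematic evaluation would report such metrics per station, per variable, and per season rather than a single visual time-series comparison.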


2019 ◽  
Author(s):  
Francine Schevenhoven ◽  
Frank Selten ◽  
Alberto Carrassi ◽  
Noel Keenlyside

Abstract. Recent studies demonstrate that weather and climate predictions can potentially be improved by dynamically combining different models into a so-called "supermodel". Here we focus on weighted supermodeling, in which the supermodel's time derivative is a weighted superposition of the time derivatives of the imperfect models. A crucial step is to train the weights of the supermodel on the basis of historical observations. Here we apply two different training methods to a supermodel of up to four different versions of the global atmosphere-ocean-land model SPEEDO, with the standard version regarded as the truth. The first training method is based on an idea called Cross Pollination in Time (CPT), in which models exchange states during the training. The second is a synchronization-based learning rule, originally developed for parameter estimation. We demonstrate that both training methods yield climate simulations and weather predictions of superior quality compared with the individual model versions. Supermodel predictions also outperform predictions based on the commonly used Multi-Model Ensemble (MME) mean. Furthermore, we find evidence that negative weights can improve predictions in cases where model errors do not cancel (for instance, when all models are warm with respect to the truth). In principle, the proposed training schemes are applicable to state-of-the-art models and historical observations. A prime advantage of the proposed training schemes is that, in the present context, relatively short training periods suffice to find good solutions. Additional work is needed to assess the limitations due to incomplete and noisy data, to combine models that are structurally different (for instance, in resolution or state representation), and to evaluate cases in which the truth falls outside the model class.
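A minimal sketch of weighted supermodeling, assuming two imperfect Lorenz-63 "models" with perturbed parameters rather than the SPEEDO versions used in the study: the supermodel tendency is the weighted sum of the imperfect tendencies. The parameter values, weights, and forward-Euler integrator are purely illustrative; in the study the weights are trained on observations by CPT or the synchronization rule.

```python
import numpy as np

def lorenz63(state, sigma, rho, beta):
    """Tendency of the Lorenz-63 system for one parameter set."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Two illustrative "imperfect models": the same equations with perturbed parameters
imperfect_params = [(9.0, 26.0, 8.0 / 3.0), (11.0, 30.0, 2.5)]

def supermodel_tendency(state, weights):
    """Weighted supermodel: a weighted superposition of the imperfect tendencies."""
    return sum(w * lorenz63(state, *p) for w, p in zip(weights, imperfect_params))

def integrate(state, weights, dt=0.005, n_steps=4000):
    """Forward-Euler integration of the weighted supermodel."""
    trajectory = [state]
    for _ in range(n_steps):
        state = state + dt * supermodel_tendency(state, weights)
        trajectory.append(state)
    return np.array(trajectory)

# The weights would normally be trained on observations (e.g. by CPT or the
# synchronization rule described above); these values are purely illustrative.
weights = np.array([0.6, 0.4])
trajectory = integrate(np.array([1.0, 1.0, 1.0]), weights)
print(trajectory[-1])
```

Allowing individual weights to be negative, as the abstract notes, can compensate for model errors that share the same sign and therefore do not cancel in a simple average.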

