A More Powerful Reality Test for Climate Models

Eos, 2016, Vol. 97
Author(s): Peter Gleckler, Charles Doutriaux, Paul Durack, Karl Taylor, Yuying Zhang, et al.

A new climate model evaluation package will deliver objective comparisons between models and observations for research and model development and provide a framework for community engagement.

2020, Vol. 101 (10), pp. E1619-E1627
Author(s): C. Zhang, S. Xie, C. Tao, S. Tang, T. Emmenegger, et al.

Abstract. The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) user facility produces unique, continuous, long-term ground-based measurements of atmospheric state, precipitation, turbulent fluxes, radiation, aerosols, clouds, and the land surface, collected at multiple sites. These comprehensive datasets have been widely used to calibrate climate models and have proven invaluable for climate model development and improvement. This article introduces an evaluation package that facilitates the use of ground-based ARM measurements in climate model evaluation. The ARM data-oriented metrics and diagnostics package (ARM-DIAGS) includes both ARM observational datasets and a Python-based analysis toolkit for computation and visualization. The observational datasets are compiled from multiple ARM data products and specifically tailored for use in climate model evaluation. In addition, ARM-DIAGS includes simulation data from models participating in the Coupled Model Intercomparison Project (CMIP), which will allow climate-modeling groups to compare a new, candidate version of their model to existing CMIP models. The analysis toolkit is designed to make the metrics and diagnostics quickly available to model developers.
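As a rough illustration of the kind of site-level comparison such a package automates, the sketch below scores a model's mean seasonal cycle against station observations with bias, RMSE, and correlation. The function name and data shapes are hypothetical, not the actual ARM-DIAGS API.

```python
import numpy as np

def seasonal_cycle_metrics(model_monthly, obs_monthly):
    """Compare a model's mean seasonal cycle against site observations.

    model_monthly, obs_monthly: arrays of shape (n_years, 12), e.g.
    monthly-mean precipitation at one measurement site.
    Returns mean bias, RMSE, and correlation of the 12-month climatologies.
    """
    model_clim = model_monthly.mean(axis=0)  # 12-value mean seasonal cycle
    obs_clim = obs_monthly.mean(axis=0)
    bias = float(np.mean(model_clim - obs_clim))
    rmse = float(np.sqrt(np.mean((model_clim - obs_clim) ** 2)))
    corr = float(np.corrcoef(model_clim, obs_clim)[0, 1])
    return {"bias": bias, "rmse": rmse, "corr": corr}
```

A real package would add, on top of such metrics, the data handling (reading the observational products) and the plotting routines that make the diagnostics quickly available to developers.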


2019, Vol. 20 (7), pp. 1339-1357
Author(s): Peter B. Gibson, Duane E. Waliser, Huikyo Lee, Baijun Tian, Elias Massoud

Abstract. Climate model evaluation is complicated by the presence of observational uncertainty. In this study we analyze daily precipitation indices and compare multiple gridded observational and reanalysis products with regional climate models (RCMs) from the North American component of the Coordinated Regional Climate Downscaling Experiment (NA-CORDEX) multimodel ensemble. In the context of model evaluation, observational product differences across the contiguous United States (CONUS) are also deemed nontrivial for some indices, especially for annual counts of consecutive wet days and for heavy precipitation indices. Multidimensional scaling (MDS) is used to directly include this observational spread into the model evaluation procedure, enabling visualization and interpretation of model differences relative to a “cloud” of observational uncertainty. Applying MDS to the evaluation of NA-CORDEX RCMs reveals situations of added value from dynamical downscaling, situations of degraded performance from dynamical downscaling, and the sensitivity of model performance to model resolution. On precipitation days, higher-resolution RCMs typically simulate higher mean and extreme precipitation rates than their lower-resolution pairs, sometimes improving model fidelity with observations. These results document the model spread and biases in daily precipitation extremes across the full NA-CORDEX model ensemble. The often-large divergence between in situ observations, satellite data, and reanalysis, shown here for CONUS, is especially relevant for data-sparse regions of the globe where satellite and reanalysis products are extensively relied upon. This highlights the need to carefully consider multiple observational products when evaluating climate models.
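The MDS step can be sketched with a generic classical (Torgerson) scaling of a precomputed dissimilarity matrix between all datasets, models and observational products alike; this is an illustrative textbook construction, not the authors' exact implementation.

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) multidimensional scaling.

    dist: (n, n) symmetric matrix of pairwise dissimilarities between
    datasets (e.g. RCMs plus several observational products).
    Returns (n, k) coordinates whose pairwise Euclidean distances
    approximate `dist`, so models can be plotted against the "cloud"
    of observational points.
    """
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]       # keep the k largest
    vals_k = np.clip(vals[order], 0.0, None) # guard against small negatives
    return vecs[:, order] * np.sqrt(vals_k)
```

With two components, each model and each observational product becomes a point in the plane; a model falling inside the spread of observational points is indistinguishable from the observations at the level of observational uncertainty.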


2018
Author(s): Huikyo Lee, Alexander Goodman, Lewis McGibbney, Duane Waliser, Jinwon Kim, et al.

Abstract. The Regional Climate Model Evaluation System (RCMES) is an enabling tool of the National Aeronautics and Space Administration to support the United States National Climate Assessment. As a comprehensive system for evaluating climate models on regional and continental scales using observational datasets from a variety of sources, RCMES is designed to yield information on the performance of climate models and guide their improvement. Here, we present a user-oriented document describing the latest version of RCMES, its development process, and future plans for improvements. The main objective of RCMES is to facilitate the climate model evaluation process at regional scales. RCMES provides a framework for performing systematic evaluations of climate simulations, such as those from the Coordinated Regional Climate Downscaling Experiment (CORDEX), using in situ observations as well as satellite and reanalysis data products. The main components of RCMES are (1) a database of observations widely used for climate model evaluation, (2) various data loaders to import climate models and observations in different formats, (3) a versatile processor to subset and regrid the loaded datasets, (4) performance metrics designed to assess and quantify model skill, (5) plotting routines to visualize the performance metrics, (6) a toolkit for statistically downscaling climate model simulations, and (7) two installation packages to maximize convenience of users without Python skills. The RCMES website is kept up to date with a brief explanation of these components. Although there are other open-source software (OSS) toolkits that facilitate analysis and evaluation of climate models, there is a need for climate scientists to participate in the development and customization of OSS to study regional climate change.
To establish infrastructure and to ensure software sustainability, development of RCMES is an open, publicly accessible process enabled by leveraging the Apache Software Foundation's OSS library, Apache Open Climate Workbench (OCW). The OCW software that powers RCMES includes a Python OSS library for common climate model evaluation tasks as well as a set of user-friendly interfaces for quickly configuring a model evaluation task. OCW also allows users to build their own climate data analysis tools, such as the statistical downscaling toolkit provided as a part of RCMES.


2018, Vol. 11 (11), pp. 4435-4449
Author(s): Huikyo Lee, Alexander Goodman, Lewis McGibbney, Duane E. Waliser, Jinwon Kim, et al.

Abstract. The Regional Climate Model Evaluation System (RCMES) is an enabling tool of the National Aeronautics and Space Administration to support the United States National Climate Assessment. As a comprehensive system for evaluating climate models on regional and continental scales using observational datasets from a variety of sources, RCMES is designed to yield information on the performance of climate models and guide their improvement. Here, we present a user-oriented document describing the latest version of RCMES, its development process, and future plans for improvements. The main objective of RCMES is to facilitate the climate model evaluation process at regional scales. RCMES provides a framework for performing systematic evaluations of climate simulations, such as those from the Coordinated Regional Climate Downscaling Experiment (CORDEX), using in situ observations, as well as satellite and reanalysis data products. The main components of RCMES are (1) a database of observations widely used for climate model evaluation, (2) various data loaders to import climate models and observations on local file systems and Earth System Grid Federation (ESGF) nodes, (3) a versatile processor to subset and regrid the loaded datasets, (4) performance metrics designed to assess and quantify model skill, (5) plotting routines to visualize the performance metrics, (6) a toolkit for statistically downscaling climate model simulations, and (7) two installation packages to maximize convenience of users without Python skills. The RCMES website is kept up to date with a brief explanation of these components. Although there are other open-source software (OSS) toolkits that facilitate analysis and evaluation of climate models, there is a need for climate scientists to participate in the development and customization of OSS to study regional climate change.
To establish infrastructure and to ensure software sustainability, development of RCMES is an open, publicly accessible process enabled by leveraging the Apache Software Foundation's OSS library, Apache Open Climate Workbench (OCW). The OCW software that powers RCMES includes a Python OSS library for common climate model evaluation tasks as well as a set of user-friendly interfaces for quickly configuring a model evaluation task. OCW also allows users to build their own climate data analysis tools, such as the statistical downscaling toolkit provided as a part of RCMES.
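The statistical downscaling component mentioned among RCMES's tools can be illustrated with a minimal empirical quantile-mapping sketch. This is the generic textbook construction for distribution-based bias correction, not the actual RCMES/OCW API, and all names are hypothetical.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Minimal empirical quantile mapping for bias correction.

    Each value in `model_new` is ranked within the model's historical
    distribution, and the observed value at the same quantile is
    returned, so the corrected series inherits the observed distribution.
    """
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    # quantile of each new model value within the model's own history
    q = np.interp(model_new,
                  model_sorted,
                  np.linspace(0.0, 1.0, model_sorted.size))
    # observed value at the same quantile
    return np.interp(q,
                     np.linspace(0.0, 1.0, obs_sorted.size),
                     obs_sorted)
```

For example, if the model runs uniformly 2 units too warm over the historical period, the mapping removes that offset from any new simulated values.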


2021
Author(s): Thordis Thorarinsdottir, Jana Sillmann, Marion Haugen, Nadine Gissibl, Marit Sandstad

Reliable projections of extremes in near-surface air temperature (SAT) by climate models are becoming increasingly important, as global warming is leading to significant increases in the hottest days and decreases in the coldest nights around the world, with considerable impacts on various sectors such as agriculture, health, and tourism.

Climate model evaluation has traditionally been performed by comparing summary statistics derived from simulated model output and corresponding observed quantities using, for instance, the root mean squared error (RMSE) or the mean bias, as in the model evaluation chapter of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). Both RMSE and mean bias compare averages over time and/or space, ignoring the variability, or uncertainty, in the underlying values. Particularly when the interest is in evaluating climate extremes, climate models should be evaluated by comparing the probability distribution of model output to the corresponding distribution of observed data.

To address this shortcoming, we use the integrated quadratic distance (IQD) to compare distributions of simulated indices to the corresponding distributions from a data product. The IQD is the proper divergence associated with the proper continuous ranked probability score (CRPS): it fulfills essential decision-theoretic properties for ranking competing models and testing equality in performance, while also assessing the full distribution.

The IQD is applied to evaluate CMIP5 and CMIP6 simulations of monthly maximum (TXx) and minimum (TNn) near-surface air temperature over the data-dense regions of Europe and North America against both observational and reanalysis datasets. There is no notable difference between the model generations CMIP5 and CMIP6 when the model simulations are compared against the observational dataset HadEX2. However, the CMIP6 models show better agreement with the reanalysis ERA5 than the CMIP5 models, with a few exceptions. Overall, the climate models show higher skill when compared against ERA5 than against HadEX2. While the model rankings vary with region, season, and index, the model evaluation is robust against changes in the grid resolution considered in the analysis.
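The IQD of two empirical distributions is straightforward to compute: it is the integral of the squared difference between the two empirical CDFs. The sketch below approximates that integral on a fine grid; the function name and grid resolution are illustrative, not the authors' code.

```python
import numpy as np

def iqd(sample_a, sample_b, n_grid=2000):
    """Integrated quadratic distance between two empirical distributions:
    IQD = integral of (F_a(x) - F_b(x))^2 dx, approximated on a grid
    spanning both samples (e.g. simulated vs. observed TXx values).
    """
    lo = min(sample_a.min(), sample_b.min())
    hi = max(sample_a.max(), sample_b.max())
    x = np.linspace(lo, hi, n_grid)
    # empirical CDFs evaluated on the common grid
    f_a = np.searchsorted(np.sort(sample_a), x, side="right") / sample_a.size
    f_b = np.searchsorted(np.sort(sample_b), x, side="right") / sample_b.size
    return float(np.sum((f_a - f_b) ** 2) * (x[1] - x[0]))
```

Unlike RMSE or mean bias, this score is zero only when the two full distributions agree, which is what makes it suitable for ranking models on extremes.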


2021
Author(s): Christian Zeman, Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly unsuspicious modifications associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet their verification is challenged by the chaotic nature of the atmosphere: any small change, even a rounding error, can have a large impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impact of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time and can be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
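One simple instance of such an ensemble-based comparison is a two-sample permutation test on a scalar diagnostic taken from each ensemble member; the sketch below is illustrative of the general approach, not the authors' exact test.

```python
import numpy as np

def permutation_test(ens_ref, ens_new, n_perm=5000, seed=0):
    """Two-sample permutation test on one model output variable.

    ens_ref, ens_new: 1-D arrays with one value per ensemble member
    (e.g. domain-mean temperature from the reference model and from
    the modified model/system). Returns a p-value for the null
    hypothesis that both ensembles come from the same distribution,
    using the absolute difference of means as the test statistic.
    """
    rng = np.random.default_rng(seed)
    observed = abs(ens_new.mean() - ens_ref.mean())
    pooled = np.concatenate([ens_ref, ens_new])
    n_ref = ens_ref.size
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of members
        diff = abs(pooled[n_ref:].mean() - pooled[:n_ref].mean())
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # small-sample-safe p-value
```

A benign change (compiler upgrade, rounding differences) should yield p-values consistent with the null across variables and time, whereas a change that alters the model climate drives the p-values toward zero.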

