Probabilistic evaluation of competing climate models

Author(s):  
Amy Braverman ◽  
Snigdhansu Chatterjee ◽  
Megan Heyman ◽  
Noel Cressie

Abstract. Climate models produce output over decades or longer at high spatial and temporal resolution. Uncertainties in starting values, boundary conditions, greenhouse gas emissions, and other inputs make a climate model an uncertain representation of the climate system. A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods against observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. In this article, we propose an alternative: evaluating competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. As an illustration, we compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set.
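The signal/noise partition and parametric bootstrap described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the wavelet-based signal extraction is replaced by a simple moving average, the noise is modeled as an AR(1) process, and all function names are hypothetical.

```python
import random
import statistics

def moving_average(x, w):
    # crude "signal" estimate standing in for the paper's wavelet-based separation
    half = w // 2
    return [statistics.mean(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def ar1_fit(noise):
    # lag-1 autocorrelation and innovation scale for a parametric AR(1) bootstrap
    n = len(noise)
    m = statistics.mean(noise)
    num = sum((noise[t] - m) * (noise[t - 1] - m) for t in range(1, n))
    den = sum((v - m) ** 2 for v in noise)
    phi = num / den
    resid = [noise[t] - phi * noise[t - 1] for t in range(1, n)]
    return phi, statistics.stdev(resid)

def pseudo_realizations(series, window=12, n_boot=100, seed=0):
    # partition into signal + noise, then resample the noise parametrically
    rng = random.Random(seed)
    signal = moving_average(series, window)
    noise = [a - b for a, b in zip(series, signal)]
    phi, sigma = ar1_fit(noise)
    out = []
    for _ in range(n_boot):
        e = [0.0]
        for _ in range(len(series) - 1):
            e.append(phi * e[-1] + rng.gauss(0.0, sigma))
        out.append([s + v for s, v in zip(signal, e)])
    return out
```

Summary statistics computed over such an ensemble give the null distribution against which the observed statistic is compared.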

2021 ◽  
Author(s):  
Thordis Thorarinsdottir ◽  
Jana Sillmann ◽  
Marion Haugen ◽  
Nadine Gissibl ◽  
Marit Sandstad

Reliable projections of extremes in near-surface air temperature (SAT) by climate models are becoming increasingly important as global warming leads to significant increases in the hottest days and decreases in the coldest nights around the world, with considerable impacts on sectors such as agriculture, health and tourism.

Climate model evaluation has traditionally been performed by comparing summary statistics derived from simulated model output with corresponding observed quantities using, for instance, the root mean squared error (RMSE) or mean bias, as in the model evaluation chapter of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). Both RMSE and mean bias compare averages over time and/or space, ignoring the variability, or the uncertainty, in the underlying values. Particularly for the evaluation of climate extremes, climate models should be evaluated by comparing the probability distribution of model output to the corresponding distribution of observed data.

To address this shortcoming, we use the integrated quadratic distance (IQD) to compare distributions of simulated indices to the corresponding distributions from a data product. The IQD is the proper divergence associated with the proper continuous ranked probability score (CRPS): it fulfills essential decision-theoretic properties for ranking competing models and testing equality in performance, while also assessing the full distribution.

The IQD is applied to evaluate CMIP5 and CMIP6 simulations of monthly maximum (TXx) and minimum (TNn) near-surface air temperature over the data-dense regions of Europe and North America against both observational and reanalysis datasets. There is no notable difference between the model generations CMIP5 and CMIP6 when the model simulations are compared against the observational dataset HadEX2. However, the CMIP6 models show better agreement with the ERA5 reanalysis than the CMIP5 models, with a few exceptions. Overall, the climate models show higher skill when compared against ERA5 than against HadEX2. While the model rankings vary with region, season and index, the model evaluation is robust against changes in the grid resolution considered in the analysis.
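For two empirical samples, the IQD reduces to the integral of the squared difference between their cumulative distribution functions, which is exact for step-function ECDFs. A minimal sketch with hypothetical function names:

```python
def ecdf(sample, x):
    # empirical cumulative distribution function evaluated at x
    return sum(1 for v in sample if v <= x) / len(sample)

def iqd(sample_f, sample_g):
    # integrate (F - G)^2 over the pooled support; the ECDFs are constant
    # between pooled sample points, so the piecewise sum is exact
    pts = sorted(set(sample_f) | set(sample_g))
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        d = ecdf(sample_f, a) - ecdf(sample_g, a)
        total += d * d * (b - a)
    return total
```

Identical samples give an IQD of zero; larger values indicate distributions that differ over a wider range of the support.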


2016 ◽  
Vol 12 (8) ◽  
pp. 1645-1662 ◽  
Author(s):  
Emmanuele Russo ◽  
Ulrich Cubasch

Abstract. The improvement in resolution of climate models has always been mentioned as one of the most important factors when investigating past climatic conditions, especially in order to evaluate and compare the results against proxy data. Despite this, only a few studies have tried to directly estimate the possible advantages of highly resolved simulations for the study of past climate change. Motivated by such considerations, in this paper we present a set of high-resolution simulations for different time slices of the mid-to-late Holocene performed over Europe using the state-of-the-art regional climate model COSMO-CLM. After proposing and testing a model configuration suitable for paleoclimate applications, the aforementioned mid-to-late Holocene simulations are compared against a new pollen-based climate reconstruction data set, covering almost all of Europe, with two main objectives: testing the advantages of high-resolution simulations for paleoclimatic applications, and investigating the response of temperature to variations in the seasonal cycle of insolation during the mid-to-late Holocene. With the aim of giving physically plausible interpretations of the mismatches between model and reconstructions, possible uncertainties of the pollen-based reconstructions are taken into consideration. Focusing our analysis on near-surface temperature, we demonstrate that concrete advantages arise from the use of highly resolved data for comparison against proxy reconstructions and for the investigation of past climate change. Additionally, our results reinforce previous findings that summertime temperatures during the mid-to-late Holocene were driven mainly by changes in insolation and that the model is too sensitive to such changes over southern Europe, resulting in drier and warmer conditions. However, in winter, the model does not reproduce the amplitude of the changes evident in the reconstructions, even though it captures the main pattern of the pollen data set over most of the domain for the time periods under investigation. Through the analysis of variations in atmospheric circulation, we suggest that, even though the wintertime discrepancies between the two data sets in some areas are most likely due to high pollen uncertainties, in general the model seems to underestimate changes in the amplitude of the North Atlantic Oscillation, overestimating the contribution of secondary modes of variability.


2020 ◽  
Author(s):  
Kevin Sieck ◽  
Christine Nam ◽  
Laurens M. Bouwer ◽  
Diana Rechid ◽  
Daniela Jacob

Abstract. This paper presents a novel data set of regional climate model simulations over Europe that significantly improves our ability to detect changes in weather extremes under low and moderate levels of global warming. The data set is unique and physically consistent, as it is derived from a large ensemble of regional climate model simulations driven by two global climate models from the international HAPPI consortium. The set consists of 100 × 10-year and 25 × 10-year simulations, respectively. These large ensembles allow regional climate change and weather extremes to be investigated with an improved signal-to-noise ratio compared to previous climate simulations. The changes in four climate indices for temperature targets of 1.5 °C and 2.0 °C global warming are quantified: the number of days per year with daily mean near-surface apparent temperature above 28 °C (ATG28); the yearly maximum 5-day sum of precipitation (RX5day); the daily precipitation intensity of the 50-year return period (RI50yr); and the annual number of consecutive dry days (CDD). This work shows that even for a small signal in projected global mean temperature, changes in extreme temperature and precipitation indices can be robustly estimated. For temperature-related indices, changes in percentiles can also be estimated with high confidence. Such data can form the basis for tailor-made climate information that can aid adaptive measures at policy-relevant scales, indicating potential impacts at low levels of global warming in steps of 0.5 °C.
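Two of the indices above, RX5day and CDD, can be computed directly from a year of daily precipitation. A minimal sketch, assuming daily totals in mm and the common 1 mm wet-day threshold for CDD (function names are hypothetical):

```python
def rx5day(precip):
    # yearly maximum 5-day running sum of daily precipitation (mm)
    return max(sum(precip[i:i + 5]) for i in range(len(precip) - 4))

def cdd(precip, wet_threshold=1.0):
    # longest run of consecutive days with precipitation below the wet-day threshold
    longest = run = 0
    for p in precip:
        run = run + 1 if p < wet_threshold else 0
        longest = max(longest, run)
    return longest
```

Applied to each ensemble member and year, such index series are what the improved signal-to-noise comparison operates on.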


2005 ◽  
Vol 5 ◽  
pp. 119-125 ◽  
Author(s):  
S. Kotlarski ◽  
A. Block ◽  
U. Böhm ◽  
D. Jacob ◽  
K. Keuler ◽  
...  

Abstract. The ERA15 Reanalysis (1979-1993) has been dynamically downscaled over Central Europe using 4 different regional climate models. The regional simulations were analysed with respect to 2 m temperature and total precipitation, the main input parameters for hydrological applications. Model results were validated against three reference data sets (ERA15, CRU, DWD) and uncertainty ranges were derived. For mean annual 2 m temperature over Germany, the simulation bias lies between -1.1 °C and +0.9 °C, depending on the combination of model and reference data set. The bias of mean annual precipitation varies between -31 and +108 mm/year. Differences between RCM results are of the same magnitude as differences between the reference data sets.
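The validation step above amounts to computing the mean bias of a simulation against each reference data set and reporting the spread as an uncertainty range. A minimal sketch with hypothetical names and toy inputs:

```python
def annual_bias(model_monthly, reference_monthly):
    # bias of the simulated annual mean against a reference annual mean
    return (sum(model_monthly) / len(model_monthly)
            - sum(reference_monthly) / len(reference_monthly))

def bias_range(model_monthly, references):
    # spread of the bias across reference data sets (e.g. ERA15, CRU, DWD)
    biases = {name: annual_bias(model_monthly, ref)
              for name, ref in references.items()}
    return min(biases.values()), max(biases.values()), biases
```

Repeating this for each RCM and each reference data set yields the kind of bias intervals quoted in the abstract.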


2021 ◽  
Author(s):  
Jeremy Carter ◽  
Amber Leeson ◽  
Andrew Orr ◽  
Christoph Kittel ◽  
Melchior van Wessem

Understanding the surface climatology of the Antarctic ice sheet is essential if we are to adequately predict its response to future climate change. This includes both primary impacts, such as increased ice melting, and secondary impacts, such as ice shelf collapse events. Given its size and inhospitable environment, weather stations on Antarctica are sparse. Thus, we rely on regional climate models to 1) develop our understanding of how the climate of Antarctica varies in both time and space and 2) provide data to use as context for remote sensing studies and as forcing for dynamical process models. Given that a number of different regional climate models explicitly simulate Antarctic climate, understanding inter- and intra-model variability is important.

Here, inter- and intra-model variability in Antarctic-wide regional climate model output is assessed for snowfall, rainfall, snowmelt and near-surface air temperature within a cloud-based virtual lab framework. State-of-the-art regional climate model runs from the Antarctic-CORDEX project using the RACMO, MAR and MetUM models are used, together with the ERA5 and ERA-Interim reanalysis products. Multiple simulations using the same model and domain boundary, but run at different spatial resolutions or with different driving data, are used. Traditional analysis techniques are exploited, and the potential added value of more modern and involved methods, such as Gaussian processes, is investigated. The advantages of using a virtual lab in a cloud-based environment for increasing transparency and reproducibility are demonstrated, with a view to ultimately making the code and methods widely available for other research groups.


2017 ◽  
Author(s):  
Laura Revell ◽  
Andrea Stenke ◽  
Beiping Luo ◽  
Stefanie Kremser ◽  
Eugene Rozanov ◽  
...  

Abstract. To simulate the impacts of volcanic eruptions on the stratosphere, chemistry-climate models that do not include an online aerosol module require temporally and spatially resolved aerosol size parameters for heterogeneous chemistry and aerosol radiative properties as a function of wavelength. For phase 1 of the Chemistry-Climate Model Initiative (CCMI-1) and, later, for phase 6 of the Coupled Model Intercomparison Project (CMIP6), two such stratospheric aerosol data sets were compiled, whose capability and representativeness are compared here. For CCMI-1, the SAGE-4λ data set was compiled, which is based on measurements at four wavelengths by the SAGE (Stratospheric Aerosol and Gas Experiment) II satellite instrument and uses ground-based lidar measurements for gap-filling immediately after the Mt. Pinatubo eruption, when the stratosphere was optically opaque for SAGE II. For CMIP6, the new SAGE-3λ data set was compiled, which excludes the least reliable SAGE II wavelength and uses CLAES (Cryogenic Limb Array Etalon Spectrometer) measurements on UARS, the Upper Atmosphere Research Satellite, for gap-filling following the Mt. Pinatubo eruption instead of ground-based lidars. Here, we performed SOCOLv3 (Solar Climate Ozone Links version 3) chemistry-climate model simulations of the recent past (1986–2005) to investigate the impact of the Mt. Pinatubo eruption in 1991 on stratospheric temperature and ozone, and how this response differs depending on which aerosol data set is applied. The use of SAGE-4λ results in heating and ozone loss being overestimated in the lower stratosphere compared to observations in the post-eruption period by approximately 3 K and 0.2 ppmv, respectively. However, less heating occurs in the model simulations based on SAGE-3λ, because the improved gap-filling procedures after the eruption lead to less aerosol loading in the tropical lower stratosphere. As a result, simulated temperature anomalies in the model simulations based on SAGE-3λ for CMIP6 are in excellent agreement with the MERRA and ERA-Interim reanalyses in the post-eruption period. Less heating in the simulations with SAGE-3λ means that the rate of tropical upwelling does not strengthen as much as it does in the simulations with SAGE-4λ, which limits dynamical uplift of ozone and therefore provides more time for ozone to accumulate in tropical mid-stratospheric air. Ozone loss following the Mt. Pinatubo eruption is overestimated by 0.1 ppmv in the model simulations based on SAGE-3λ, in better agreement with observations than the simulations based on SAGE-4λ. Overall, the CMIP6 stratospheric aerosol data set, SAGE-3λ, allows SOCOLv3 to more accurately simulate the post-Pinatubo eruption period.


2008 ◽  
Vol 21 (22) ◽  
pp. 6052-6059 ◽  
Author(s):  
B. Timbal ◽  
P. Hope ◽  
S. Charles

Abstract. The consistency between rainfall projections obtained from direct climate model output and statistical downscaling is evaluated. Results are averaged across an area large enough to overcome the difference in spatial scale between these two types of projections and thus make the comparison meaningful. Undertaking the comparison using a suite of state-of-the-art coupled climate models for two forcing scenarios presents a unique opportunity to test whether statistical linkages established between large-scale predictors and local rainfall under the current climate remain valid in future climatic conditions. The study focuses on the southwest corner of Western Australia, a region that has experienced recent winter rainfall declines and for which climate models project, with great consistency, further winter rainfall reductions due to global warming. Results show that, as a first approximation, the magnitude of the modeled rainfall decline in this region is linearly related to the model global warming (a reduction of about 9% per degree), thus linking future rainfall declines to future emission paths. Two statistical downscaling techniques are used to investigate the influence of the choice of technique on projection consistency. In addition, one of the techniques was assessed using different large-scale forcings, to investigate the impact of large-scale predictor selection. Downscaled and direct model projections are consistent across the large number of models and two scenarios considered; that is, there is no tendency for either to be biased, and only a small hint that large rainfall declines are reduced in downscaled projections. Of the two techniques, a nonhomogeneous hidden Markov model provides greater consistency with climate models than an analog approach. Differences were due to the choice of the optimal combination of predictors. Thus, statistically downscaled projections require careful choice of large-scale predictors in order to be consistent with physically based rainfall projections. In particular, a relative humidity moisture predictor, rather than specific humidity, was needed for downscaled projections to be consistent with direct model output projections.
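The quoted scaling (roughly a 9% rainfall reduction per degree of global warming) corresponds to fitting a line through the origin across the model ensemble. A minimal sketch, with hypothetical names and illustrative numbers only:

```python
def rainfall_sensitivity(delta_t, delta_p_percent):
    # least-squares slope through the origin: % rainfall change per degree
    # of global warming, fitted across models/scenarios
    num = sum(t * p for t, p in zip(delta_t, delta_p_percent))
    den = sum(t * t for t in delta_t)
    return num / den
```

With such a slope in hand, a projected global warming amount translates directly into an approximate regional rainfall change, which is how the abstract links rainfall declines to emission paths.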


2017 ◽  
Vol 10 (2) ◽  
pp. 889-901 ◽  
Author(s):  
Daniel J. Lunt ◽  
Matthew Huber ◽  
Eleni Anagnostou ◽  
Michiel L. J. Baatsen ◽  
Rodrigo Caballero ◽  
...  

Abstract. Past warm periods provide an opportunity to evaluate climate models under extreme forcing scenarios, in particular high (>800 ppmv) atmospheric CO2 concentrations. Although a post hoc intercomparison of Eocene (∼50 Ma) climate model simulations and geological data has been carried out previously, models of past high-CO2 periods have never been evaluated in a consistent framework. Here, we present an experimental design for climate model simulations of three warm periods within the early Eocene and the latest Paleocene (the EECO, PETM, and pre-PETM). Together with the CMIP6 pre-industrial control and abrupt 4× CO2 simulations, and additional sensitivity studies, these form the first phase of DeepMIP – the Deep-time Model Intercomparison Project, itself a group within the wider Paleoclimate Modelling Intercomparison Project (PMIP). The experimental design specifies and provides guidance on boundary conditions associated with palaeogeography, greenhouse gases, astronomical configuration, solar constant, land surface processes, and aerosols. Initial conditions, simulation length, and output variables are also specified. Finally, we explain how the geological data sets, which will be used to evaluate the simulations, will be developed.


2021 ◽  
Author(s):  
Michael Steininger ◽  
Daniel Abel ◽  
Katrin Ziegler ◽  
Anna Krause ◽  
Heiko Paeth ◽  
...  

Climate models are an important tool for the assessment of prospective climate change effects, but they suffer from systematic and representation errors, especially for precipitation. Model output statistics (MOS) reduce these errors by fitting the model output to observational data with machine learning. In this work, we explore the feasibility and potential of deep learning with convolutional neural networks (CNNs) for MOS. We propose the CNN architecture ConvMOS, specifically designed for reducing errors in climate model outputs, and apply it to the climate model REMO. Our results show a considerable reduction of errors and mostly improved performance compared to three commonly used MOS approaches.
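A common baseline of the kind a CNN-based MOS is compared against is a per-grid-cell linear correction fitted to observations. The following sketch shows that baseline for a single grid cell; it is not the paper's ConvMOS architecture, and the function names are hypothetical:

```python
def fit_linear_mos(model_vals, obs_vals):
    # per-grid-cell ordinary least squares: obs ≈ a + b * model
    n = len(model_vals)
    mx = sum(model_vals) / n
    my = sum(obs_vals) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(model_vals, obs_vals))
    var = sum((x - mx) ** 2 for x in model_vals)
    b = cov / var
    a = my - b * mx
    return a, b

def apply_mos(a, b, model_vals):
    # corrected model output for new simulations at the same grid cell
    return [a + b * x for x in model_vals]
```

A CNN-based MOS generalizes this idea by letting the correction at one cell depend nonlinearly on the surrounding spatial pattern rather than on that cell alone.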


2021 ◽  
Author(s):  
Antoine Doury ◽  
Samuel Somot ◽  
Sébastien Gadat ◽  
Aurélien Ribes ◽  
Lola Corre

Abstract. Providing reliable information on climate change at the local scale remains a challenge of first importance for impact studies and policymakers. Here, we propose a novel hybrid downscaling method combining the strengths of both empirical statistical downscaling methods and Regional Climate Models (RCMs). The aim of this tool is to enlarge the size of high-resolution RCM simulation ensembles at low cost. We build a statistical RCM-emulator by estimating the downscaling function included in the RCM. This framework allows us to learn the relationship between large-scale predictors and a local surface variable of interest over the RCM domain in present and future climate. Furthermore, the emulator relies on a neural network architecture, which grants computational efficiency. The RCM-emulator developed in this study is trained to produce daily maps of near-surface temperature at the RCM resolution (12 km). The emulator demonstrates an excellent ability to reproduce the complex spatial structure and daily variability simulated by the RCM, and in particular the way the RCM locally refines the low-resolution climate patterns. Training in future climate appears to be a key feature of our emulator. Moreover, there is a huge computational benefit in running the emulator rather than the RCM: training the emulator takes about 2 hours on a GPU, and prediction is nearly instantaneous. However, further work is needed to improve the way the RCM-emulator reproduces some of the temperature extremes and the intensity of climate change, and to extend the proposed methodology to different regions, GCMs, RCMs, and variables of interest.
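The emulator's basic shape — a network mapping a vector of large-scale predictors to a fine-resolution field, with near-instant prediction once trained — can be sketched as follows. This is an untrained toy with random weights and hypothetical names, illustrating only the coarse-to-fine mapping, not the paper's architecture:

```python
import random

def make_emulator(n_coarse, n_fine, hidden=16, seed=0):
    # one-hidden-layer network: coarse predictor vector -> flattened fine grid;
    # weights are random here, whereas the paper's emulator is trained on RCM output
    rng = random.Random(seed)
    w1 = [[rng.gauss(0.0, 0.1) for _ in range(n_coarse)] for _ in range(hidden)]
    w2 = [[rng.gauss(0.0, 0.1) for _ in range(hidden)] for _ in range(n_fine)]

    def forward(x):
        h = [max(0.0, sum(w * v for w, v in zip(row, x))) for row in w1]  # ReLU
        return [sum(w * v for w, v in zip(row, h)) for row in w2]

    return forward
```

Once the weights are fitted, producing a high-resolution map is a handful of matrix products, which is why emulator prediction is orders of magnitude cheaper than running the RCM itself.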

