Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6

2020 ◽  
Vol 11 (2) ◽  
pp. 491-508 ◽  
Author(s):  
Flavio Lehner ◽  
Clara Deser ◽  
Nicola Maher ◽  
Jochem Marotzke ◽  
Erich M. Fischer ◽  
...  

Abstract. Partitioning uncertainty in projections of future climate change into contributions from internal variability, model response uncertainty and emissions scenarios has historically relied on making assumptions about forced changes in the mean and variability. With the advent of multiple single-model initial-condition large ensembles (SMILEs), these assumptions can be scrutinized, as they allow a more robust separation between sources of uncertainty. Here, the framework from Hawkins and Sutton (2009) for uncertainty partitioning is revisited for temperature and precipitation projections using seven SMILEs and the Coupled Model Intercomparison Project CMIP5 and CMIP6 archives. The original approach is shown to work well at global scales (potential method bias < 20 %), while at local to regional scales such as British Isles temperature or Sahel precipitation, there is a notable potential method bias (up to 50 %), and more accurate partitioning of uncertainty is achieved through the use of SMILEs. Whenever internal variability and forced changes therein are important, the need to evaluate and improve the representation of variability in models is evident. The available SMILEs are shown to be a good representation of the CMIP5 model diversity in many situations, making them a useful tool for interpreting CMIP5. CMIP6 often shows larger absolute and relative model uncertainty than CMIP5, although part of this difference can be reconciled with the higher average transient climate response in CMIP6. This study demonstrates the added value of a collection of SMILEs for quantifying and diagnosing uncertainty in climate projections.
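The SMILE-based partitioning described in this abstract can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's method in full: it uses a single scenario (so scenario uncertainty is omitted), made-up trend and noise magnitudes, and the common convention that model uncertainty is the variance across ensemble means while internal variability is the member spread within each model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_members, n_years = 7, 40, 80
years = np.arange(n_years)

# Synthetic SMILE archive: each model has its own forced trend (source of
# model uncertainty) plus member-to-member noise (internal variability).
trends = rng.normal(0.03, 0.01, n_models)                     # K per year
forced = trends[:, None] * years                              # (model, year)
noise = rng.normal(0.0, 0.15, (n_models, n_members, n_years))
temp = forced[:, None, :] + noise                             # (model, member, year)

# Partitioning for a single scenario: ensemble means isolate each model's
# forced response; the member spread estimates internal variability.
ens_means = temp.mean(axis=1)                  # forced response per model
model_unc = ens_means.var(axis=0)              # variance across models
internal = temp.var(axis=1).mean(axis=0)       # member spread, model-averaged
frac_model = model_unc / (model_unc + internal)
print(f"model-uncertainty fraction in the final year: {frac_model[-1]:.2f}")
```

As in the paper's framework, the model-uncertainty fraction grows with lead time as forced responses diverge, while internal variability dominates early on.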

2020 ◽  
Author(s):  
Flavio Lehner ◽  
Clara Deser ◽  
Nicola Maher ◽  
Jochem Marotzke ◽  
Erich Fischer ◽  
...  

Partitioning uncertainty in projections of future climate change into contributions from internal variability, model response uncertainty, and emissions scenarios has historically relied on making assumptions about forced changes in the mean and variability. With the advent of multiple Single-Model Initial-Condition Large Ensembles (SMILEs), these assumptions can be scrutinized, as they allow a more robust separation between sources of uncertainty. Here, we revisit the framework from Hawkins and Sutton (2009) for uncertainty partitioning for temperature and precipitation projections using seven SMILEs and the Coupled Model Intercomparison Project CMIP5 and CMIP6 archives. We also investigate forced changes in variability itself, something that is newly possible with SMILEs. The available SMILEs are shown to be a good representation of the CMIP5 model diversity in many situations, making them a useful tool for interpreting CMIP5. CMIP6 often shows larger absolute and relative model uncertainty than CMIP5, although part of this difference can be reconciled with the higher average transient climate response in CMIP6. This study demonstrates the added value of a collection of SMILEs for quantifying and diagnosing uncertainty in climate projections.



2014 ◽  
Vol 27 (8) ◽  
pp. 2931-2947 ◽  
Author(s):  
Ed Hawkins ◽  
Buwen Dong ◽  
Jon Robson ◽  
Rowan Sutton ◽  
Doug Smith

Abstract Decadal climate predictions exhibit large biases, which are often subtracted and forgotten. However, understanding the causes of bias is essential to guide efforts to improve prediction systems, and may offer additional benefits. Here the origins of biases in decadal predictions are investigated, including whether analysis of these biases might provide useful information. The focus is especially on the lead-time-dependent bias tendency. A “toy” model of a prediction system is initially developed and used to show that there are several distinct contributions to bias tendency. Contributions from sampling of internal variability and a start-time-dependent forcing bias can be estimated and removed to obtain a much improved estimate of the true bias tendency, which can provide information about errors in the underlying model and/or errors in the specification of forcings. It is argued that the true bias tendency, not the total bias tendency, should be used to adjust decadal forecasts. The methods developed are applied to decadal hindcasts of global mean temperature made using the Hadley Centre Coupled Model, version 3 (HadCM3), climate model, and it is found that this model exhibits a small positive bias tendency in the ensemble mean. When considering different model versions, it is shown that the true bias tendency is very highly correlated with both the transient climate response (TCR) and non–greenhouse gas forcing trends, and can therefore be used to obtain observationally constrained estimates of these relevant physical quantities.
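The lead-time-dependent bias tendency discussed above can be sketched with a toy hindcast set (not the paper's HadCM3 analysis; the drift rate, noise level, and ensemble sizes are invented). Averaging the bias over many start dates suppresses the contribution from sampling of internal variability, leaving an estimate of the underlying tendency:

```python
import numpy as np

rng = np.random.default_rng(1)
n_starts, n_leads = 30, 10   # hindcast start dates and lead years

# Synthetic hindcasts: a true drift of 0.02 K per year of lead time (the
# "true bias tendency") plus internal-variability sampling noise per start.
true_tendency = 0.02
leads = np.arange(1, n_leads + 1)
drift = true_tendency * leads                          # lead-dependent bias
noise = rng.normal(0.0, 0.1, (n_starts, n_leads))      # variability sampling
bias = drift[None, :] + noise                          # forecast minus obs

# Averaging over start dates reduces the internal-variability contribution;
# the slope of the mean bias against lead time estimates the tendency.
mean_bias = bias.mean(axis=0)
est_tendency = np.polyfit(leads, mean_bias, 1)[0]
print(f"estimated bias tendency: {est_tendency:.3f} K/yr (true: {true_tendency})")
```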


2021 ◽  
Vol 12 (2) ◽  
pp. 709-723
Author(s):  
Philip Goodwin ◽  
B. B. Cael

Abstract. Future climate change projections, impacts, and mitigation targets are directly affected by how sensitive Earth's global mean surface temperature is to anthropogenic forcing, expressed via the climate sensitivity (S) and transient climate response (TCR). However, the S and TCR are poorly constrained, in part because historic observations and future climate projections consider the climate system under different response timescales with potentially different climate feedback strengths. Here, we evaluate S and TCR by using historic observations of surface warming, available since the mid-19th century, and ocean heat uptake, available since the mid-20th century, to constrain a model with independent climate feedback components acting over multiple response timescales. Adopting a Bayesian approach, our prior uses a constrained distribution for the instantaneous Planck feedback combined with wide-ranging uniform distributions of the strengths of the fast feedbacks (acting over several days) and multi-decadal feedbacks. We extract posterior distributions by applying likelihood functions derived from different combinations of observational datasets. The resulting TCR distributions when using two preferred combinations of historic datasets both find a TCR of 1.5 (1.3 to 1.8 at 5–95 % range) °C. We find the posterior probability distribution for S for our preferred dataset combination evolves from S of 2.0 (1.6 to 2.5) °C on a 20-year response timescale to S of 2.3 (1.4 to 6.4) °C on a 140-year response timescale, due to the impact of multi-decadal feedbacks. Our results demonstrate how multi-decadal feedbacks allow a significantly higher upper bound on S than historic observations are otherwise consistent with.
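The timescale dependence of S described above can be illustrated with a toy calculation (not the authors' Bayesian model; all parameter values here are invented): an amplifying multi-decadal feedback weakens the net feedback parameter as it comes into play, so sensitivity evaluated on a 140-year response timescale exceeds that on a 20-year timescale.

```python
import numpy as np

# Toy timescale-dependent sensitivity: S(t) = F_2x / lambda_eff(t), where a
# slow amplifying feedback (negative lambda contribution) ramps up with
# timescale tau. All numbers are illustrative, not fitted values.
F_2x = 3.7                # W m^-2, forcing for a CO2 doubling
lam_fast = 1.85           # W m^-2 K^-1, Planck + fast feedbacks (net)
lam_multidecadal = -0.25  # W m^-2 K^-1, amplifying multi-decadal feedback
tau = 60.0                # years, its response timescale

def sensitivity(t_years: float) -> float:
    """Effective sensitivity on response timescale t_years."""
    lam_eff = lam_fast + lam_multidecadal * (1.0 - np.exp(-t_years / tau))
    return F_2x / lam_eff

s20, s140 = sensitivity(20.0), sensitivity(140.0)
print(f"S(20 yr) = {s20:.2f} K, S(140 yr) = {s140:.2f} K")
```

With these illustrative numbers the 20-year sensitivity is near 2.1 K and the 140-year sensitivity near 2.3 K, echoing the qualitative behaviour reported in the abstract.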


2013 ◽  
Vol 9 (1) ◽  
pp. 393-421 ◽  
Author(s):  
L. Fernández-Donado ◽  
J. F. González-Rouco ◽  
C. C. Raible ◽  
C. M. Ammann ◽  
D. Barriopedro ◽  
...  

Abstract. Understanding natural climate variability and its driving factors is crucial to assessing future climate change. Therefore, comparing proxy-based climate reconstructions with forcing factors as well as comparing these with paleoclimate model simulations is key to gaining insights into the relative roles of internal versus forced variability. A review of the state of modelling of the climate of the last millennium prior to the CMIP5–PMIP3 (Coupled Model Intercomparison Project Phase 5–Paleoclimate Modelling Intercomparison Project Phase 3) coordinated effort is presented and compared to the available temperature reconstructions. Simulations and reconstructions broadly agree on reproducing the major temperature changes and suggest an overall linear response to external forcing on multidecadal or longer timescales. Internal variability is found to have an important influence at hemispheric and global scales. The spatial distribution of simulated temperature changes during the transition from the Medieval Climate Anomaly to the Little Ice Age disagrees with that found in the reconstructions. Thus, either internal variability is a possible major player in shaping temperature changes through the millennium or the model simulations have problems realistically representing the response pattern to external forcing. A last millennium transient climate response (LMTCR) is defined to provide a quantitative framework for analysing the consistency between simulated and reconstructed climate. Beyond an overall agreement between simulated and reconstructed LMTCR ranges, this analysis is able to single out specific discrepancies between some reconstructions and the ensemble of simulations. The disagreement is found in the cases where the reconstructions show reduced covariability with external forcings or when they present high rates of temperature change.
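An LMTCR-like quantity can be sketched as a regression of a millennium-scale temperature series on total external forcing, with the slope scaled to the forcing of a CO2 doubling. This is an illustration only, with synthetic series and an assumed response coefficient, not the authors' definition applied to real reconstructions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 1000

# Synthetic millennium: red-noise external forcing and a temperature series
# responding linearly to it (coefficient 0.5 K per W m^-2) plus noise.
forcing = np.cumsum(rng.normal(0.0, 0.02, n_years))    # W m^-2
temp = 0.5 * forcing + rng.normal(0.0, 0.1, n_years)   # K

# Regress temperature on forcing and scale the slope to a CO2 doubling.
slope = np.polyfit(forcing, temp, 1)[0]   # K per W m^-2
F_2x = 3.7                                # W m^-2, forcing of 2xCO2
lmtcr = slope * F_2x
print(f"LMTCR-like estimate: {lmtcr:.2f} K")
```

The cases of disagreement noted in the abstract correspond to reconstructions whose covariability with forcing is weak, which in this picture would yield a poorly constrained or biased slope.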


2020 ◽  
Author(s):  
Lukas Brunner ◽  
Carol McSweeney ◽  
Daniel Befort ◽  
Chris O'Reilly ◽  
Ben Booth ◽  
...  

Political decisions, adaptation planning, and impact assessments need reliable estimates of future climate change and related uncertainties. Different approaches to constrain, filter, or weight climate model simulations into probabilistic projections have been proposed to provide such estimates. Here six methods are applied to European climate projections using a consistent framework in order to allow a quantitative comparison. Focus is given to summer temperature and precipitation change in three different spatial regimes in Europe in the period 2041-2060 relative to 1995-2014. The analysis draws on projections from several large initial-condition ensembles, the CMIP5 multi-model ensemble, and perturbed-physics ensembles, all using the high-emission scenario RCP8.5.
The methods are diverse in their approach to quantifying uncertainty. They range from weighting schemes based on baseline performance and inter-model relationships, through so-called ASK (Allen, Stott and Kettleborough) techniques that use optimal fingerprinting to scale the response to external forcings to that found in observations, to Bayesian approaches to estimating probability distributions. Some of the key differences between methods are the uncertainties covered, the treatment of internal variability, and the variables and regions used to inform the methods. In spite of these considerable methodological differences, the median projections from the multi-model methods agree on a statistically significant increase in temperature by mid-century of about 2.5 °C in the European average. The estimates of spread, in contrast, differ substantially between methods. Part of this large difference in spread reflects the fact that different methods attempt to capture different sources of uncertainty, and some are more comprehensive in this respect than others.
This study, therefore, highlights the importance of providing clear context about how different methods affect the distribution of projections, particularly in the upper and lower percentiles that are of interest to 'risk averse' stakeholders. The methods agree less on precipitation change, with most indicating a slight increase in northern Europe and a drying in the central and Mediterranean regions, but with considerably different amplitudes. Further work is needed to understand how the underlying differences between methods lead to such diverse results for precipitation.
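One of the weighting approaches mentioned above, combining baseline performance with inter-model independence, can be sketched as follows. This is a generic performance-and-independence scheme with made-up distances and shape parameters, not the calibrated method of any particular study:

```python
import numpy as np

rng = np.random.default_rng(3)
n_models = 10

# Made-up "distances": each model's distance to observations, and pairwise
# inter-model distances (symmetrized, zero on the diagonal).
perf_dist = rng.uniform(0.2, 1.5, n_models)
indep_dist = rng.uniform(0.2, 1.5, (n_models, n_models))
indep_dist = 0.5 * (indep_dist + indep_dist.T)
np.fill_diagonal(indep_dist, 0.0)

sigma_d, sigma_s = 0.6, 0.6   # shape parameters of the weighting
# Reward closeness to observations, penalize near-duplicate models (the
# self-term in the denominator sum equals 1, giving 1 + sum over others).
performance = np.exp(-(perf_dist / sigma_d) ** 2)
similarity = np.sum(np.exp(-(indep_dist / sigma_s) ** 2), axis=1)
weights = performance / similarity
weights /= weights.sum()
print("model weights:", np.round(weights, 3))
```

The shape parameters control how aggressively poor or redundant models are down-weighted, which is one reason the spread of the resulting distributions differs so much between methods.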


2020 ◽  
Author(s):  
Raphaël Hébert ◽  
Shaun Lovejoy ◽  
Bruno Tremblay

Abstract We directly exploit the stochasticity of the internal variability and the linearity of the forced response to make global temperature projections based on historical data and a Green's function, or Climate Response Function (CRF). To make the problem tractable, we take advantage of the temporal scaling symmetry to define a scaling CRF characterized by the scaling exponent H, which controls the long-range memory of the climate, i.e. how fast the system tends toward a steady state, and an inner scale τ ≈ 2 years below which the higher-frequency response is smoothed out. An aerosol scaling factor and a non-linear volcanic damping exponent were introduced to account for the large uncertainty in these forcings. We estimate the model and forcing parameters by Bayesian inference, which allows us to analytically calculate the transient climate response and the equilibrium climate sensitivity as 1.7 (+0.3/−0.2) K and 2.4 (+1.3/−0.6) K, respectively (likely range). Projections to 2100 according to the RCP 2.6, 4.5 and 8.5 scenarios yield warmings with respect to 1880–1910 of 1.5 (+0.4/−0.2) K, 2.3 (+0.7/−0.5) K and 4.2 (+1.3/−0.9) K. These projection estimates are lower than the ones based on a Coupled Model Intercomparison Project phase 5 multi-model ensemble; more importantly, their uncertainties are smaller and only depend on historical temperature and forcing series. The key uncertainty is due to aerosol forcings; we find a modern (2005) forcing value of [−1.0, −0.3] W m⁻² (90 % confidence interval) with median at −0.7 W m⁻². Projecting to 2100, we find that to keep the warming below 1.5 K, future emissions must undergo cuts similar to RCP 2.6, for which the probability of remaining under 1.5 K is 48 %. RCP 4.5 and RCP 8.5-like futures overshoot with very high probability.
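The Green's-function approach described above amounts to convolving the historical forcing with a response kernel. A minimal sketch, assuming an idealized forcing ramp and a truncated power-law kernel with illustrative exponent and inner scale (not the authors' calibrated CRF, which also includes a fitted gain and forcing adjustments):

```python
import numpy as np

# Truncated power-law response function: ~ u^(H-1) for lags u above the
# inner scale tau, flat below it. H and tau here are illustrative only.
H, tau = -0.4, 2.0
lags = np.arange(1, 171)                        # lag years 1..170
green = np.maximum(lags, tau) ** (H - 1.0)      # smoothed below inner scale
green /= green.sum()                            # normalize; gain would be fitted

# Idealized forcing ramp standing in for the historical forcing series.
forcing = np.linspace(0.0, 2.5, 170)            # W m^-2

# Forced response = causal convolution of forcing with the kernel.
response = np.convolve(forcing, green)[: len(forcing)]
print(f"response at end of record: {response[-1]:.2f} (arbitrary units)")
```

Because the kernel has a slowly decaying tail (long-range memory), the response at any time reflects forcing from many decades earlier, which is what ties the projections to the full historical record.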


2020 ◽  
Author(s):  
Deborah Verfaillie ◽  
Francisco J. Doblas-Reyes ◽  
Markus G. Donat ◽  
Nuria Pérez-Zanón ◽  
Balakrishnan Solaraju-Murali ◽  
...  

Decadal climate predictions and forced climate projections both provide potentially useful information to users for the next ten years; they differ only in that the former are initialised with observations while the latter are not. Bringing together initialised decadal climate predictions and non-initialised climate projections to provide seamless climate information for the next decades is a new and challenging area of research. This can be achieved by comparing the forecast quality of global initialised and non-initialised simulations over their common prediction horizons (up to 10 years ahead) and quantifying the extent to which initialisation improves forecast quality. Forecast quality has usually been explored through skill assessment. However, the impact of initialisation on reliability, which quantifies the agreement between the predicted probabilities and observed relative frequencies of a given event, has not yet been investigated sufficiently for decadal predictions. Users of probabilistic predictions are particularly sensitive to a potential lack of reliability, which would imply that the probabilities are not trustworthy and can have negative consequences for decision-making. In this communication, initialised decadal hindcasts (or retrospective forecasts) from 12 forecasting systems of the Coupled Model Intercomparison Project Phase 5 are compared to the corresponding non-initialised historical simulations in terms of reliability over their common period 1961-2005. We show that reliability varies greatly depending on the region or model ensemble analysed and on the correction applied. In particular, the North Atlantic and Europe stand out as regions where initialised decadal hindcasts add value over non-initialised historical simulations in terms of reliability, mainly because of smaller biases and/or a better representation of the trend.
Furthermore, we show that post-processed data display more reliable results, indicating that bias correction and calibration are fundamental to obtaining reliable climate information.
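The reliability measure used above compares binned forecast probabilities with the observed relative frequency of the event in each bin. A minimal sketch with synthetic forecasts (constructed here to be perfectly reliable, so the two curves should track the diagonal; real hindcasts would show departures):

```python
import numpy as np

rng = np.random.default_rng(4)
n_forecasts = 10_000

# Synthetic probabilistic forecasts and outcomes drawn so that the event
# occurs with exactly the predicted probability (a perfectly reliable toy).
prob = rng.uniform(0.0, 1.0, n_forecasts)
outcome = rng.uniform(0.0, 1.0, n_forecasts) < prob

# Reliability diagram ingredients: bin the forecast probabilities, then
# compare the mean forecast probability with the observed event frequency.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(prob, bins) - 1
obs_freq = np.array([outcome[idx == b].mean() for b in range(10)])
fcst_prob = np.array([prob[idx == b].mean() for b in range(10)])

print("per-bin |observed - forecast|:", np.round(np.abs(obs_freq - fcst_prob), 3))
```

For an unreliable system the observed frequencies depart systematically from the diagonal, which is the behaviour that bias correction and calibration aim to remove.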


2019 ◽  
Vol 5 (4) ◽  
pp. 308-321 ◽  
Author(s):  
Xiao-Tong Zheng

Abstract Purpose of Review Understanding the changes in climate variability in a warming climate is crucial for reliable projections of future climate change. This article reviews the recent progress in studies of how climate modes in the Indo-Pacific respond to greenhouse warming, including the consensus and uncertainty across climate models. Recent Findings Recent studies revealed a range of robust changes in the properties of climate modes, often associated with the mean state changes in the tropical Indo-Pacific. In particular, the intermodel diversity in the ocean warming pattern is a prominent source of uncertainty in mode changes. The internal variability also plays an important role in projected changes in climate modes. Summary Model biases and intermodel variability remain major challenges for reducing uncertainty in projecting climate mode changes in a warming climate. Improved models and research linking simulated present-day climate and future changes are essential for reliable projections of climate mode changes. In addition, large ensembles should be used for each model to reduce the uncertainty from internal variability and isolate the forced response to global warming.


2012 ◽  
Vol 8 (4) ◽  
pp. 4003-4073 ◽  
Author(s):  
L. Fernández-Donado ◽  
J. F. González-Rouco ◽  
C. C. Raible ◽  
C. M. Ammann ◽  
D. Barriopedro ◽  
...  

Abstract. The understanding of natural climate variability and its driving factors is crucial to assessing future climate change. Therefore, comparing proxy-based climate reconstructions with forcing factors as well as comparing these with paleoclimate model simulations is key to gaining insights into the relative roles of internal versus forced variability. A review of the state of modeling of the last millennium climate prior to the CMIP5-PMIP3 coordinated effort is presented and compared to the available temperature reconstructions. Simulations and reconstructions broadly agree on reproducing the major temperature changes and suggest an overall linear response to external forcing on multidecadal or longer timescales. Internal variability is found to have an important influence at hemispheric and global scales. The spatial distribution of simulated temperature changes during the transition from the Medieval Climate Anomaly to the Little Ice Age disagrees with that found in the reconstructions, thus advocating for internal variability as a possible major player in shaping temperature changes through the millennium. A paleo transient climate response (PTCR) is defined to provide a quantitative framework for analysing the consistency between simulated and reconstructed climate. Beyond an overall agreement between simulated and reconstructed PTCR ranges, this analysis is able to single out specific discrepancies between some reconstructions and the ensemble of simulations. The disagreement is found in the cases where the reconstructions show reduced covariability with external forcings or when they present high rates of temperature change.

