scenario uncertainty
Recently Published Documents

TOTAL DOCUMENTS: 41 (five years: 13)
H-INDEX: 12 (five years: 2)

2021 ◽  
Vol 14 (12) ◽  
pp. 7659-7672
Author(s):  
Duncan Watson-Parris ◽  
Andrew Williams ◽  
Lucia Deaconu ◽  
Philip Stier

Abstract. Large computer models are ubiquitous in the Earth sciences. These models often have tens or hundreds of tuneable parameters and can take thousands of core hours to run to completion while generating terabytes of output. It is becoming common practice to develop emulators as fast approximations, or surrogates, of these models in order to explore the relationships between inputs and outputs, understand uncertainties, and generate large ensemble datasets. While the purpose of these surrogates may differ, their development is often very similar. Here we introduce ESEm: an open-source tool providing a general workflow for emulating and validating a wide variety of models and outputs. It includes efficient routines for sampling these emulators for the purpose of uncertainty quantification and model calibration. It is built on well-established, high-performance libraries to ensure robustness, extensibility and scalability. We demonstrate the flexibility of ESEm through three case studies: using ESEm to reduce parametric uncertainty in a general circulation model, to explore precipitation sensitivity in a cloud-resolving model, and to explore scenario uncertainty in the CMIP6 multi-model ensemble.
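The workflow the abstract describes (train a fast surrogate on a small design of expensive model runs, then sample it densely for uncertainty quantification) can be sketched generically. This is not ESEm's actual API; the toy model, kernel choice, and sample sizes are invented. A minimal version with a scikit-learn Gaussian process:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical "expensive model": two tuneable parameters -> one scalar output.
def expensive_model(theta):
    return np.sin(3 * theta[0]) + theta[1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(40, 2))       # small design of parameter samples
y_train = np.array([expensive_model(t) for t in X_train])

# Fit the emulator: a fast statistical surrogate of the expensive model.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# Sample the emulator cheaply over the whole parameter space.
X_new = rng.uniform(0, 1, size=(10_000, 2))
mean, std = gp.predict(X_new, return_std=True)  # predictive mean and uncertainty
```

In practice the surrogate would be validated against held-out model runs before being trusted for calibration or ensemble generation.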



2021 ◽  
Author(s):  
Lea Beusch ◽  
Lukas Gudmundsson ◽  
Sonia I. Seneviratne

Earth System Models (ESMs) are invaluable tools to study the climate system's response to greenhouse gas emissions. But their projections are affected by three major sources of uncertainty: (i) internal variability, i.e., natural climate variability; (ii) ESM structural uncertainty, i.e., uncertainty in the response of the climate system to given greenhouse gas concentrations; and (iii) emission scenario uncertainty, i.e., which emission pathway the world chooses. The large computational cost of running full ESMs limits the exploration of this uncertainty phase space, since it is only feasible to create a limited number of ESM runs. However, climate change impact and integrated assessment models, which require ESM projections as their input, could profit from a more complete sampling of the climate change uncertainty phase space. In this contribution, we present MESMER (Beusch et al., 2020), a Modular ESM Emulator with spatially Resolved output, which allows for a computationally efficient exploration of the uncertainty space of yearly temperatures. MESMER approximates ESM land temperature fields at negligible computational cost by expressing grid-point-level temperatures as a function of global mean temperature plus an overlaid spatio-temporally correlated variability term. Within MESMER, all three major sources of uncertainty can be accounted for. Stochastic simulation of natural climate variability accounts for internal variability. ESM structural uncertainty can be addressed by calibrating MESMER on different ESMs from the Coupled Model Intercomparison Project (CMIP) archives. Finally, emission scenario uncertainty can be accounted for by ingesting forced global mean temperature trajectories from global climate model emulators such as MAGICC or FaIR. MESMER is a flexible statistical tool that is under active development and in the process of becoming open-source software.

Beusch, L., Gudmundsson, L., and Seneviratne, S. I. (ESD, 2020): https://doi.org/10.5194/esd-11-139-2020
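The decomposition described above (grid-point temperature as a local response to global mean temperature plus a spatio-temporally correlated variability term) can be sketched in a few lines. This is an illustration of the idea only, not MESMER's implementation, and every parameter value below is invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n_grid, n_years = 500, 100

# Forced global mean temperature trajectory (in MESMER this could come from
# a global emulator such as MAGICC or FaIR).
T_glob = np.linspace(0.0, 2.0, n_years)

# Local response: each grid point scales the global signal (invented coefficients).
beta = rng.uniform(0.5, 2.0, n_grid)
forced = np.outer(beta, T_glob)                    # shape (n_grid, n_years)

# Spatially correlated, temporally AR(1) variability term.
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
L = np.linalg.cholesky(np.exp(-dist / 50.0))       # exponential spatial correlation
rho, sigma = 0.5, 0.3
eps = L @ rng.standard_normal((n_grid, n_years)) * sigma
noise = np.zeros((n_grid, n_years))
noise[:, 0] = eps[:, 0]
for t in range(1, n_years):
    noise[:, t] = rho * noise[:, t - 1] + eps[:, t]

# One cheap stochastic realisation of the yearly temperature field.
T_field = forced + noise
```

Repeating the noise generation with different seeds yields a large ensemble of realisations at a negligible fraction of an ESM run's cost, which is the point of the emulator.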


2021 ◽  
Author(s):  
Goratz Beobide-Arsuaga ◽  
Tobias Bayr ◽  
Annika Reintges ◽  
Mojib Latif

Abstract. There is a long-standing debate on how the El Niño/Southern Oscillation (ENSO) amplitude may change during the twenty-first century in response to global warming. Here we identify the sources of uncertainty in the ENSO amplitude projections in models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) and Phase 6 (CMIP6), and quantify scenario uncertainty, model uncertainty and uncertainty due to internal variability. The model projections exhibit a large spread, ranging from an increase in standard deviation of up to 0.6 °C to a decrease of up to 0.4 °C by the end of the twenty-first century. The ensemble-mean ENSO amplitude change is close to zero. Internal variability is the main contributor to the uncertainty during the first three decades; model uncertainty dominates thereafter, while scenario uncertainty is relatively small throughout the twenty-first century. The total uncertainty increases from CMIP5 to CMIP6: while model uncertainty is reduced, scenario uncertainty is considerably increased. The models with "realistic" ENSO dynamics have been analyzed separately and categorized into models with too-small, moderate and too-large ENSO amplitude in comparison to instrumental observations. The smallest uncertainties are observed in the sub-ensemble exhibiting realistic ENSO dynamics and moderate ENSO amplitude. However, the global warming signal in ENSO-amplitude change is undetectable in all sub-ensembles. The zonal wind-SST feedback is identified as an important factor determining ENSO amplitude change: the global warming signal in ENSO amplitude and the zonal wind-SST feedback strength are highly correlated across the CMIP5 and CMIP6 models.
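A partition of projection uncertainty into internal variability, model uncertainty and scenario uncertainty is commonly done with an ANOVA-style variance decomposition (in the spirit of Hawkins and Sutton). A minimal sketch on synthetic amplitude projections, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_scenarios, n_members, n_years = 20, 3, 5, 80

# Synthetic projections: model x scenario x ensemble member x year.
model_offset = rng.normal(0, 0.15, (n_models, 1, 1, 1))        # model uncertainty
scen_trend = np.linspace(0, 1, n_years) * np.array([-0.1, 0.0, 0.1])[None, :, None, None]
internal = rng.normal(0, 0.1, (n_models, n_scenarios, n_members, n_years))
amp = 1.0 + model_offset + scen_trend + internal

# Partition total variance at each year.
forced = amp.mean(axis=2)                           # average out internal variability
V_internal = amp.var(axis=2).mean(axis=(0, 1))      # spread across ensemble members
V_model = forced.mean(axis=1).var(axis=0)           # spread across models
V_scenario = forced.mean(axis=0).var(axis=0)        # spread across scenarios

# Fractional contribution of each source, per year.
frac = np.stack([V_internal, V_model, V_scenario])
frac = frac / frac.sum(axis=0)
```

By construction the scenario contribution grows with time here, mimicking the usual result that scenario uncertainty matters most late in the century.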


2020 ◽  
Author(s):  
Wouter Edeling ◽  
Arabnejad Hamid ◽  
Robert Sinclair ◽  
Diana Suleimenova ◽  
Krishnakumar Gopalakrishnan ◽  
...  

Abstract The severe acute respiratory syndrome coronavirus 2 (SARS-CoV2) virus has rapidly spread worldwide since December 2019, and early modelling work of this pandemic has assisted in identifying effective government interventions. The UK government relied in part on the CovidSim model, developed by the MRC Centre for Global Infectious Disease Analysis at Imperial College London, to model various non-pharmaceutical intervention strategies and guide its policy in seeking to deal with the rapid spread of the COVID-19 pandemic during March and April 2020. CovidSim is subject to different sources of uncertainty, namely parametric uncertainty in the inputs, model structure uncertainty (i.e., missing epidemiological processes) and scenario uncertainty, which relates to uncertainty in the set of conditions under which the model is applied. We have undertaken an extensive parametric sensitivity analysis and uncertainty quantification of the current CovidSim code. Of the more than 900 parameters provided as input to CovidSim, we identified a key subset of 19 parameters to which the code output is most sensitive. We find that the uncertainty in the code is substantial, in the sense that imperfect knowledge in these inputs is magnified in the outputs by up to ca. 300%. Most of this uncertainty can be traced back to the sensitivity of three parameters. Compounding this, the model can display significant bias with respect to observed data, such that the output variance does not capture this validation data with high probability. We conclude that quantifying the parametric input uncertainty is not sufficient, and that the effect of model structure and scenario uncertainty cannot be ignored when validating the model in a probabilistic sense.
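The paper's sensitivity analysis of CovidSim used far more sophisticated machinery than fits here; as a generic stand-in for the idea (propagate input uncertainty through a model and rank inputs by their share of the output variance), a Monte Carlo sketch with an invented toy model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented toy "epidemic model": peak load as a function of three uncertain inputs.
def toy_model(r0, latent_period, compliance):
    return r0 ** 2 / latent_period * (1.0 - 0.8 * compliance)

# Sample the inputs from their (assumed) uncertainty ranges.
n = 50_000
r0 = rng.uniform(2.0, 4.0, n)
latent = rng.uniform(3.0, 6.0, n)
compliance = rng.uniform(0.4, 0.9, n)
out = toy_model(r0, latent, compliance)

# Crude first-order sensitivity: squared correlation of each input with the output.
sens = {name: np.corrcoef(x, out)[0, 1] ** 2
        for name, x in [("r0", r0), ("latent_period", latent),
                        ("compliance", compliance)]}

# Relative output spread: how much the input uncertainty is "magnified".
rel_spread = out.std() / out.mean()
```

A real study would use proper variance-based (Sobol-type) indices and a validated sampling plan; the point of the sketch is only that a small subset of inputs typically dominates the output variance.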



2020 ◽  
Author(s):  
Nadine Mengis ◽  
H. Damon Matthews

Estimates of the 1.5 °C carbon budget vary widely among recent studies. One key contribution to this range is the non-CO₂ climate forcing scenario uncertainty. Based on a partitioning of historical non-CO₂ forcing, we show that there is currently a net negative non-CO₂ forcing from fossil fuel combustion (FFC), mainly due to the co-emission of aerosols, and a net positive non-CO₂ climate forcing from land-use change (LUC) and agricultural activities. We then perform a set of future simulations in which we prescribe a 1.5 °C temperature stabilization trajectory and diagnose the resulting 1.5 °C carbon budgets. Using the results of our historical partitioning, we prescribe changing non-CO₂ forcing scenarios that are consistent with our model's simulated decrease in FFC CO₂ emissions. We compare the diagnosed carbon budgets from these idealized scenarios to those resulting from the default RCP-scenario non-CO₂ forcing, as well as from a scenario in which we assume proportionality between future CO₂ and non-CO₂ forcing. We find a large range of carbon budget estimates across scenarios, with the largest budget emerging from the scenario with assumed proportionality of CO₂ and non-CO₂ forcing. Furthermore, our adjusted RCP scenarios, in which the non-CO₂ forcing is consistent with model-diagnosed FFC CO₂ emissions, produce carbon budgets that are smaller than those of the corresponding default RCP scenarios. Our results suggest that ambitious mitigation scenarios will likely be characterized by an increasing contribution of non-CO₂ forcing, and that an assumption of continued proportionality between CO₂ and non-CO₂ forcing would lead to an overestimate of the remaining carbon budget compatible with low temperature targets. Maintaining such proportionality under ambitious fossil fuel mitigation would require mitigating non-CO₂ emissions from agriculture and other non-FFC sources at a rate substantially faster than in the standard RCP scenarios.
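The sensitivity of a diagnosed budget to the assumed non-CO₂ contribution can be illustrated with a back-of-the-envelope TCRE (transient climate response to cumulative CO₂ emissions) relation. All numbers below are illustrative, not the paper's values:

```python
# Remaining carbon budget from a TCRE relation:
#   budget = (target warming - realised warming - future non-CO2 warming) / TCRE
TCRE = 0.45            # degC per 1000 GtCO2 (illustrative)
target = 1.5           # degC stabilization target
warming_to_date = 1.1  # degC already realised (illustrative)

def remaining_budget(nonco2_warming):
    """Budget in GtCO2 for a given assumed future non-CO2 warming contribution."""
    return (target - warming_to_date - nonco2_warming) / TCRE * 1000.0

# A larger future non-CO2 contribution (e.g. aerosol cooling declining under
# ambitious FFC mitigation) shrinks the diagnosed CO2 budget.
budget_small_nonco2 = remaining_budget(0.05)
budget_large_nonco2 = remaining_budget(0.15)
```

This is the qualitative mechanism behind the abstract's conclusion: assuming the non-CO₂ share stays proportional to CO₂ forcing understates the non-CO₂ term under ambitious mitigation and therefore overstates the budget.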

