Improving climate model accuracy by exploring parameter space with an O(10^5) member ensemble and emulator

2018 ◽  
Author(s):  
Sihan Li ◽  
David E. Rupp ◽  
Linnia Hawkins ◽  
Philip W. Mote ◽  
Doug McNeall ◽  
...  

Abstract. Understanding the unfolding challenges of climate change relies on climate models, many of which have large summer warm and dry biases over Northern Hemisphere continental mid-latitudes. This work, using the example of the model used in the updated version of the weather@home distributed climate model framework, shows the potential for improving climate model simulations through a multi-phased parameter refinement approach, particularly over the northwestern United States (NWUS). Each phase consists of 1) creating a perturbed physics ensemble with the coupled global–regional atmospheric model, 2) building statistical emulators that estimate climate metrics as functions of parameter values, and 3) using the emulators to further refine the parameter space. The refinement process includes sensitivity analyses to identify the most influential parameters for various model output metrics; results are then used to cull parameters with little influence. Three phases of this iterative process are carried out before the results are considered to be satisfactory; that is, a handful of parameter sets are identified that meet acceptable bias reduction criteria. Results not only indicate that 74 % of the NWUS regional warm biases can be reduced by refining global atmospheric parameters that control convection and hydrometeor transport, and land surface parameters that affect plant photosynthesis, transpiration and evaporation, but also suggest that this iterative approach to perturbed physics has an important role to play in the evolution of physical parameterizations.

2019 ◽  
Vol 12 (7) ◽  
pp. 3017-3043 ◽  
Author(s):  
Sihan Li ◽  
David E. Rupp ◽  
Linnia Hawkins ◽  
Philip W. Mote ◽  
Doug McNeall ◽  
...  

Abstract. Understanding the unfolding challenges of climate change relies on climate models, many of which have large summer warm and dry biases over Northern Hemisphere continental midlatitudes. This work, with the example of the model used in the updated version of the weather@home distributed climate model framework, shows the potential for improving climate model simulations through a multiphased parameter refinement approach, particularly over the northwestern United States (NWUS). Each phase consists of (1) creating a perturbed parameter ensemble with the coupled global–regional atmospheric model, (2) building statistical emulators that estimate climate metrics as functions of parameter values, and (3) using the emulators to further refine the parameter space. The refinement process includes sensitivity analyses to identify the most influential parameters for various model output metrics; results are then used to cull parameters with little influence. Three phases of this iterative process are carried out before the results are considered to be satisfactory; that is, a handful of parameter sets are identified that meet acceptable bias reduction criteria. Results not only indicate that 74 % of the NWUS regional warm biases can be reduced by refining global atmospheric parameters that control convection and hydrometeor transport, as well as land surface parameters that affect plant photosynthesis, transpiration, and evaporation, but also suggest that this iterative approach to perturbed parameters has an important role to play in the evolution of physical parameterizations.
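To make the emulator-based refinement concrete, the following is a minimal sketch, not the authors' code, of steps (2) and (3): fit a Gaussian-process emulator (one common emulator choice) that maps perturbed parameter values to a climate metric, then screen a dense sample of parameter space for acceptable parameter sets. The parameter count, the metric, the scikit-learn emulator, and the acceptance criterion are all illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

rng = np.random.default_rng(0)

# Stand-in for one phase of a perturbed parameter ensemble:
# rows = ensemble members, columns = normalized parameter values in [0, 1].
n_members, n_params = 200, 6
X_train = rng.uniform(0.0, 1.0, size=(n_members, n_params))
# Stand-in for the output metric of each ensemble member,
# e.g. a regional summer warm bias in kelvin (synthetic here).
y_train = (2.0 + 3.0 * (X_train[:, 0] - 0.5) - 1.5 * X_train[:, 3]
           + 0.2 * rng.standard_normal(n_members))

# Step (2): build the statistical emulator of metric = f(parameters).
kernel = ConstantKernel(1.0) * Matern(length_scale=np.ones(n_params), nu=2.5)
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(X_train, y_train)

# Step (3): use the emulator to refine parameter space by densely sampling
# candidate parameter sets and keeping those with acceptably small emulated bias.
X_cand = rng.uniform(0.0, 1.0, size=(20_000, n_params))
bias_mean, bias_sd = emulator.predict(X_cand, return_std=True)
acceptable = np.abs(bias_mean) + 2.0 * bias_sd < 1.0   # illustrative criterion
print(f"{acceptable.mean():.1%} of sampled parameter space retained for the next phase")
```

In an iterative workflow, the retained region would seed the next perturbed parameter ensemble, and a sensitivity analysis on the emulator would identify parameters to cull before the following phase.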


2017 ◽  
Vol 10 (2) ◽  
pp. 889-901 ◽  
Author(s):  
Daniel J. Lunt ◽  
Matthew Huber ◽  
Eleni Anagnostou ◽  
Michiel L. J. Baatsen ◽  
Rodrigo Caballero ◽  
...  

Abstract. Past warm periods provide an opportunity to evaluate climate models under extreme forcing scenarios, in particular high (> 800 ppmv) atmospheric CO2 concentrations. Although a post hoc intercomparison of Eocene (∼ 50 Ma) climate model simulations and geological data has been carried out previously, models of past high-CO2 periods have never been evaluated in a consistent framework. Here, we present an experimental design for climate model simulations of three warm periods within the early Eocene and the latest Paleocene (the EECO, PETM, and pre-PETM). Together with the CMIP6 pre-industrial control and abrupt 4 × CO2 simulations, and additional sensitivity studies, these form the first phase of DeepMIP – the Deep-time Model Intercomparison Project, itself a group within the wider Paleoclimate Modelling Intercomparison Project (PMIP). The experimental design specifies and provides guidance on boundary conditions associated with palaeogeography, greenhouse gases, astronomical configuration, solar constant, land surface processes, and aerosols. Initial conditions, simulation length, and output variables are also specified. Finally, we explain how the geological data sets, which will be used to evaluate the simulations, will be developed.


2016 ◽  
Author(s):  
Daniel J. Lunt ◽  
Matthew Huber ◽  
Michiel L. J. Baatsen ◽  
Rodrigo Caballero ◽  
Rob DeConto ◽  
...  

Abstract. Past warm periods provide an opportunity to evaluate climate models under extreme forcing scenarios, in particular high (> 800 ppmv) atmospheric CO2 concentrations. Although a post-hoc intercomparison of Eocene (~50 million years ago, Ma) climate model simulations and geological data has been carried out previously, models of past high-CO2 periods have never been evaluated in a consistent framework. Here, we present an experimental design for climate model simulations of three warm periods within the latest Paleocene and the early Eocene. Together these form the first phase of DeepMIP – the Deep-time Model Intercomparison Project, itself a group within the wider Paleoclimate Modelling Intercomparison Project (PMIP). The experimental design consists of three core paleo simulations and a set of optional sensitivity studies. The experimental design specifies and provides guidance on boundary conditions associated with palaeogeography, greenhouse gases, orbital configuration, solar constant, land surface parameters, and aerosols. Initial conditions, simulation length, and output variables are also specified. Finally, we explain how the geological datasets, which will be used to evaluate the simulations, will be developed.


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly unsuspicious modifications, associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet their verification is challenged by the chaotic nature of our atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time and can be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
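A minimal sketch of such an ensemble-based verification is given below, using a two-sample Kolmogorov-Smirnov test from SciPy as one possible hypothesis test. The output variable, ensemble sizes, synthetic data, and significance level are illustrative assumptions, not the setup used in the study.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n_members, n_times = 50, 100
alpha = 0.05

# Stand-in for an output variable (e.g. a domain-mean temperature in kelvin)
# from a reference ensemble and from an ensemble run with a minor model change.
ref = rng.normal(loc=285.0, scale=0.3, size=(n_members, n_times))
mod = rng.normal(loc=285.0, scale=0.3, size=(n_members, n_times))  # climate unchanged

# Apply the two-sample Kolmogorov-Smirnov test at every output time.
p_values = np.array([ks_2samp(ref[:, t], mod[:, t]).pvalue for t in range(n_times)])
rejection_rate = np.mean(p_values < alpha)

# For a change that preserves the model climate, the rejection rate should stay
# near the nominal level alpha; a much larger value flags a significant change.
print(f"rejection rate: {rejection_rate:.2f} (nominal level {alpha})")
```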


2021 ◽  
Vol 17 (4) ◽  
pp. 1665-1684
Author(s):  
Leonore Jungandreas ◽  
Cathy Hohenegger ◽  
Martin Claussen

Abstract. Global climate models experience difficulties in simulating the northward extension of the monsoonal precipitation over north Africa during the mid-Holocene as revealed by proxy data. A common feature of these models is that they usually operate on grids that are too coarse to explicitly resolve convection, but convection is the most essential mechanism leading to precipitation in the West African Monsoon region. Here, we investigate how the representation of tropical deep convection in the ICOsahedral Nonhydrostatic (ICON) climate model affects the meridional distribution of monsoonal precipitation during the mid-Holocene by comparing regional simulations of the summer monsoon season (July to September; JAS) with parameterized and explicitly resolved convection. In the explicitly resolved convection simulation, the more localized nature of precipitation and the absence of permanent light precipitation, as compared to the parameterized convection simulation, are closer to expectations. However, in the JAS mean, the parameterized convection simulation produces more precipitation and extends further north than the explicitly resolved convection simulation, especially between 12 and 17° N. The higher precipitation rates in the parameterized convection simulation are consistent with a stronger monsoonal circulation over land. Furthermore, the atmosphere in the parameterized convection simulation is less stably stratified and notably moister. The differences in atmospheric water vapor are the result of substantial differences in the probability distribution function of precipitation and its resulting interactions with the land surface. The parameterization of convection produces light and large-scale precipitation, keeping the soils moist and supporting the development of convection. In contrast, less frequent but locally intense precipitation events lead to high amounts of runoff in the explicitly resolved convection simulation. The stronger runoff inhibits the moistening of the soil during the monsoon season and limits the amount of water available for evaporation in the explicitly resolved convection simulation.


2016 ◽  
Vol 7 (4) ◽  
pp. 917-935 ◽  
Author(s):  
Doug McNeall ◽  
Jonny Williams ◽  
Ben Booth ◽  
Richard Betts ◽  
Peter Challenor ◽  
...  

Abstract. Uncertainty in the simulation of the carbon cycle contributes significantly to uncertainty in the projections of future climate change. We use observations of forest fraction to constrain carbon cycle and land surface input parameters of the global climate model FAMOUS, in the presence of an uncertain structural error. Using an ensemble of climate model runs to build a computationally cheap statistical proxy (emulator) of the climate model, we use history matching to rule out input parameter settings where the corresponding climate model output is judged sufficiently different from observations, even allowing for uncertainty. Regions of parameter space where FAMOUS best simulates the Amazon forest fraction are incompatible with the regions where FAMOUS best simulates the other forests, indicating a structural error in the model. We use the emulator to simulate the forest fraction at the best set of parameters implied by matching the model to the Amazon, Central African, South East Asian, and North American forests in turn. We can find parameters that lead to a realistic forest fraction in the Amazon, but using the Amazon alone to tune the simulator would result in a significant overestimate of forest fraction in the other forests. Conversely, using the other forests to tune the simulator leads to a larger underestimate of the Amazon forest fraction. We use sensitivity analysis to find the parameters which have the most impact on simulator output and perform a history-matching exercise using credible estimates for simulator discrepancy and observational uncertainty terms. We are unable to constrain the parameters individually, but we rule out just under half of the joint parameter space as being incompatible with forest observations. We discuss the possible sources of the discrepancy in the simulated Amazon, including missing processes in the land surface component and a bias in the climatology of the Amazon.
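The history-matching step typically hinges on an implausibility measure: the distance between an observation and the emulated model output, standardized by the combined emulator, observational, and structural discrepancy variances. The sketch below illustrates that calculation with NumPy under placeholder numbers; it is not the study's code, and the cutoff of 3 is a conventional choice rather than a value taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_candidates = 50_000

# Stand-ins for emulator output at candidate parameter settings:
# predicted forest fraction and emulator variance for one target region.
em_mean = rng.uniform(0.0, 1.0, size=n_candidates)
em_var = np.full(n_candidates, 0.02**2)

obs = 0.85            # observed forest fraction for the region (placeholder)
obs_var = 0.05**2     # observational uncertainty (placeholder)
disc_var = 0.10**2    # structural (model discrepancy) variance (placeholder)

# Implausibility: standardized distance between observation and emulated output.
implausibility = np.abs(obs - em_mean) / np.sqrt(em_var + obs_var + disc_var)

# "Not ruled out yet" (NROY) space: candidates with implausibility below the cutoff.
nroy = implausibility < 3.0
print(f"{nroy.mean():.1%} of candidate parameter space not ruled out")
```

With several regions, a candidate would usually be ruled out if its implausibility exceeds the cutoff for any region, which is how conflicting constraints such as the Amazon versus the other forests surface as structural error.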


2021 ◽  
Author(s):  
Yoann Robin ◽  
Aurélien Ribes

We describe a statistical method to derive event attribution diagnoses combining climate model simulations and observations. We fit nonstationary Generalized Extreme Value (GEV) distributions to extremely hot temperatures from an ensemble of Coupled Model Intercomparison Project phase 5 (CMIP5) models. In order to select a common statistical model, we discuss which GEV parameters have to be nonstationary and which do not. Our tests suggest that the location and scale parameters of the GEV distributions should be considered nonstationary. Then, a multimodel distribution is constructed and constrained by observations using a Bayesian method. This new method is applied to the July 2019 French heatwave. Our results show that both the probability and the intensity of that event have increased significantly in response to human influence. Remarkably, we find that the heatwave considered might not have been possible without climate change. Our results also suggest that combining model data with observations can improve the description of the hot temperature distribution.
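As an illustration of the nonstationary GEV fitting described above, here is a minimal sketch of a maximum-likelihood fit with SciPy in which the location and (log-)scale parameters depend linearly on a covariate such as a smoothed global mean temperature anomaly. The synthetic data, covariate, and parameterization are illustrative assumptions only; the full method additionally involves multimodel combination and a Bayesian constraint by observations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(3)
n_years = 70
covariate = np.linspace(0.0, 1.2, n_years)          # e.g. smoothed warming covariate (K)

# Synthetic annual maximum temperatures whose location shifts with the covariate.
tmax = genextreme.rvs(c=0.1, loc=36.0 + 1.5 * covariate, scale=1.2,
                      size=n_years, random_state=rng)

def neg_log_lik(theta, x, y):
    """Negative log-likelihood of a GEV with mu = mu0 + mu1*x and sigma = exp(s0 + s1*x)."""
    mu0, mu1, s0, s1, shape = theta
    mu = mu0 + mu1 * x
    sigma = np.exp(s0 + s1 * x)
    # Note: scipy's genextreme shape parameter c follows scipy's sign convention (c = -xi).
    ll = genextreme.logpdf(y, c=shape, loc=mu, scale=sigma)
    return -np.sum(ll) if np.all(np.isfinite(ll)) else 1e10

theta0 = np.array([np.mean(tmax), 0.0, np.log(np.std(tmax)), 0.0, 0.1])
fit = minimize(neg_log_lik, theta0, args=(covariate, tmax), method="Nelder-Mead")
mu0, mu1, s0, s1, shape = fit.x
print(f"fitted location trend: {mu1:.2f} K per unit covariate, shape (scipy c): {shape:.2f}")
```

Comparing return levels or exceedance probabilities evaluated at present-day versus pre-industrial covariate values then yields the probability and intensity changes used in the attribution statement.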


2019 ◽  
Vol 12 (7) ◽  
pp. 3149-3206 ◽  
Author(s):  
Christopher J. Hollis ◽  
Tom Dunkley Jones ◽  
Eleni Anagnostou ◽  
Peter K. Bijl ◽  
Margot J. Cramwinckel ◽  
...  

Abstract. The early Eocene (56 to 48 million years ago) is inferred to have been the most recent time that Earth's atmospheric CO2 concentrations exceeded 1000 ppm. Global mean temperatures were also substantially warmer than those of the present day. As such, the study of early Eocene climate provides insight into how a super-warm Earth system behaves and offers an opportunity to evaluate climate models under conditions of high greenhouse gas forcing. The Deep Time Model Intercomparison Project (DeepMIP) is a systematic model–model and model–data intercomparison of three early Paleogene time slices: latest Paleocene, Paleocene–Eocene thermal maximum (PETM) and early Eocene climatic optimum (EECO). A previous article outlined the model experimental design for climate model simulations. In this article, we outline the methodologies to be used for the compilation and analysis of climate proxy data, primarily proxies for temperature and CO2. This paper establishes the protocols for a concerted and coordinated effort to compile the climate proxy records across a wide geographic range. The resulting climate “atlas” will be used to constrain and evaluate climate models for the three selected time intervals and provide insights into the mechanisms that control these warm climate states. We provide version 0.1 of this database, in anticipation that this will be expanded in subsequent publications.


1998 ◽  
Vol 27 ◽  
pp. 565-570 ◽  
Author(s):  
William M. Connolley ◽  
Siobhan P. O'Farrell

We compare observed temperature variations in Antarctica with climate-model runs over the last century. The models used are three coupled global climate models (GCMs), the UKMO, CSIRO, and MPI models, forced by the CO2 increases observed over the last century, and an atmospheric model experiment forced with observed sea-surface temperatures and sea-ice extents over the last century. Despite some regions of agreement, in general the GCM runs appear to be incompatible with each other and with the observations, although the short observational record and high natural variability make verification difficult. One of the best places for a more detailed study is the Antarctic Peninsula, where the density of stations is higher and station records are longer than elsewhere in Antarctica. Observations show that this area has seen larger temperature rises than anywhere else in Antarctica. None of the three GCMs simulates such large temperature changes in the Peninsula region, in either climate-change runs radiatively forced by CO2 increases or control runs that assess the level of model variability.


2010 ◽  
Vol 23 (15) ◽  
pp. 4121-4132 ◽  
Author(s):  
Dorian S. Abbot ◽  
Itay Halevy

Abstract. Most previous global climate model simulations could only produce the termination of Snowball Earth episodes at CO2 partial pressures of several tenths of a bar, which is roughly an order of magnitude higher than recent estimates of CO2 levels during and shortly after Snowball events. These simulations have neglected the impact of dust aerosols on radiative transfer, which is an assumption of potentially grave importance. In this paper it is argued, using the Dust Entrainment and Deposition (DEAD) box model driven by GCM results, that atmospheric dust aerosol concentrations may have been one to two orders of magnitude higher during a Snowball Earth event than today. It is furthermore asserted, on the basis of calculations using NCAR's Single Column Atmospheric Model (SCAM), a radiative–convective model with sophisticated aerosol, cloud, and radiative parameterizations, that when the surface albedo is high, such increases in dust aerosol loading can produce several times more surface warming than an increase in the partial pressure of CO2 from 10^-4 to 10^-1 bar. Therefore the conclusion is reached that including dust aerosols in simulations may reconcile the CO2 levels required for Snowball termination in climate models with observations.

