How large does a large ensemble need to be?

2020 ◽  
Vol 11 (4) ◽  
pp. 885-901
Author(s):  
Sebastian Milinski ◽  
Nicola Maher ◽  
Dirk Olonscheck

Abstract. Initial-condition large ensembles with ensemble sizes ranging from 30 to 100 members have become a commonly used tool for quantifying the forced response and internal variability in various components of the climate system. However, there is no consensus on the ideal or even sufficient ensemble size for a large ensemble. Here, we introduce an objective method to estimate the required ensemble size that can be applied to any given application and demonstrate its use on the examples of global mean near-surface air temperature, local temperature and precipitation, and variability in the El Niño–Southern Oscillation (ENSO) region and central United States for the Max Planck Institute Grand Ensemble (MPI-GE). Estimating the required ensemble size is relevant not only for designing or choosing a large ensemble but also for designing targeted sensitivity experiments with a model. Where possible, we base our estimate of the required ensemble size on the pre-industrial control simulation, which is available for every model. We show that more ensemble members are needed to quantify variability than the forced response, with the largest ensemble sizes needed to detect changes in internal variability itself. Finally, we highlight that the required ensemble size depends on both the acceptable error to the user and the studied quantity.
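The resampling idea behind this estimate can be illustrated with a short, hedged sketch: draw pseudo-ensembles of increasing size from a long control run and report the smallest size whose ensemble-mean sampling error falls below a user-chosen tolerance. The function name, the tolerance, and the white-noise stand-in for a control run are illustrative assumptions, not the authors' code.

```python
# A minimal sketch, assuming a long (pre-industrial) control run is available.
import numpy as np

rng = np.random.default_rng(0)

def required_ensemble_size(control, max_n=100, n_boot=1000, tol=0.2):
    """Smallest ensemble size whose ensemble-mean sampling error
    (spread across bootstrapped pseudo-ensembles) falls below tol."""
    for n in range(2, max_n + 1):
        # draw n_boot pseudo-ensembles of n members each from the control run
        means = rng.choice(control, size=(n_boot, n), replace=True).mean(axis=1)
        if means.std() < tol:
            return n
    return None  # tolerance not reached within max_n members

control = rng.standard_normal(2000)     # toy control run with unit variance
print(required_ensemble_size(control))  # roughly 25 members for tol = 0.2
```

Because the ensemble-mean error shrinks like one over the square root of the ensemble size, halving the tolerance roughly quadruples the required size, which is why the answer depends so strongly on the error acceptable to the user.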

2019 ◽  
Author(s):  
Sebastian Milinski ◽  
Nicola Maher ◽  
Dirk Olonscheck

Abstract. Initial-condition large ensembles with ensemble sizes ranging from 30 to 100 members have become a commonly used tool to quantify the forced response and internal variability in various components of the climate system. However, there is no consensus on the ideal or even sufficient ensemble size for a large ensemble. Here, we introduce an objective method to estimate the required ensemble size that can be applied to any given application and demonstrate its use on the examples of global mean surface temperature, local surface temperature and precipitation, and variability in the ENSO region and central America. Where possible, we base our estimate of the required ensemble size on the pre-industrial control simulation, which is available for every model. First, we determine how many members of an available ensemble can be interpreted without a substantial impact from resampling ensemble members. Then, we show that more ensemble members are needed to quantify variability than the forced response, with the largest ensemble sizes needed to detect changes in internal variability itself. Finally, we highlight that the required ensemble size depends on both the acceptable error to the user and the studied quantity.


2017 ◽  
Vol 30 (19) ◽  
pp. 7585-7598 ◽  
Author(s):  
Karen A. McKinnon ◽  
Andrew Poppick ◽  
Etienne Dunn-Sigouin ◽  
Clara Deser

Abstract Estimates of the climate response to anthropogenic forcing contain irreducible uncertainty due to the presence of internal variability. Accurate quantification of this uncertainty is critical for both contextualizing historical trends and determining the spread of climate projections. The contribution of internal variability to uncertainty in trends can be estimated in models as the spread across an initial condition ensemble. However, internal variability simulated by a model may be inconsistent with observations due to model biases. Here, statistical resampling methods are applied to observations in order to quantify uncertainty in historical 50-yr (1966–2015) winter near-surface air temperature trends over North America related to incomplete sampling of internal variability. This estimate is compared with the simulated trend uncertainty in the NCAR CESM1 Large Ensemble (LENS). The comparison suggests that uncertainty in trends due to internal variability is largely overestimated in LENS, which has an average amplification of variability of 32% across North America. The amplification of variability is greatest in the western United States and Alaska. The observationally derived estimate of trend uncertainty is combined with the forced signal from LENS to produce an "Observational Large Ensemble" (OLENS). The members of OLENS indicate the range of observationally constrained, spatially consistent temperature trends that could have been observed over the past 50 years if a different sequence of internal variability had unfolded. The smaller trend uncertainty in OLENS suggests that it is easier to detect the historical climate change signal in observations than in any given member of LENS.
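The observational resampling step can be sketched, under stated assumptions, as a block bootstrap of residuals around a fitted trend; the block length, variable names, and the toy record below are placeholders rather than the study's actual method.

```python
# A minimal block-bootstrap sketch for trend uncertainty from one record.
import numpy as np

rng = np.random.default_rng(1)

def trend_uncertainty(y, block=5, n_boot=1000):
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    n_blocks = int(np.ceil(y.size / block))
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        # stitch randomly chosen residual blocks into a synthetic series
        starts = rng.integers(0, y.size - block + 1, size=n_blocks)
        synth = np.concatenate([resid[s:s + block] for s in starts])[:y.size]
        slopes[i] = np.polyfit(t, slope * t + intercept + synth, 1)[0]
    return slope, slopes.std()  # best-estimate trend and its sampling spread

# toy 50-yr record: weak trend plus lightly autocorrelated noise
noise = np.convolve(rng.standard_normal(60), np.ones(3) / 3, mode="valid")[:50]
y = 0.02 * np.arange(50) + noise
print(trend_uncertainty(y))
```

Blocks, rather than single years, are resampled so that year-to-year persistence in the internal variability is approximately preserved.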


2020 ◽  
Author(s):  
Sebastian Milinski ◽  
Nicola Maher ◽  
Dirk Olonscheck

Initial-condition large ensembles with ensemble sizes ranging from 30 to 100 members have become a commonly used tool to quantify the forced response and internal variability in various components of the climate system. However, there is no consensus on the ideal or even sufficient ensemble size for a large ensemble.

Here, we introduce an objective method to estimate the required ensemble size. This method can be applied to any given application. We demonstrate its use on examples that represent typical applications of large ensembles: quantifying the forced response, quantifying internal variability, and detecting a forced change in internal variability.

We analyse forced trends in global mean surface temperature, local surface temperature, and precipitation in the MPI Grand Ensemble (Maher et al., 2019). We find that 10 ensemble members are sufficient to quantify the forced response in historical surface temperature over the ocean, but more than 50 members are necessary over land at higher latitudes.

Next, we apply our method to identify the required ensemble size to sample internal variability of surface temperature over central North America and over the Niño 3.4 region. A moderate ensemble size of 10 members is sufficient to quantify variability over North America, while a large ensemble with close to 50 members is necessary for the Niño 3.4 region.

Finally, we use the example of September Arctic sea ice area to investigate forced changes in internal variability. In a strong warming scenario, the variability in sea ice area increases because more open water near the coastlines allows for more variability than a mostly ice-covered Arctic Ocean (Goosse et al., 2009; Olonscheck and Notz, 2017). We show that at least 5 ensemble members are necessary to detect an increase in sea ice variability in a 1% CO2 experiment. To also quantify the magnitude of the forced change in variability, more than 50 members are necessary.

These numbers may be highly model dependent. The suggested method can therefore also be used with a long control run to estimate the required ensemble size for a model that does not provide a large number of realisations. Our analysis framework thus not only provides valuable information before running a large ensemble, but can also be used to test the robustness of results based on small ensembles or individual realisations.

References
Goosse, H., O. Arzel, C. M. Bitz, A. de Montety, and M. Vancoppenolle (2009), Increased variability of the Arctic summer ice extent in a warmer climate, Geophys. Res. Lett., 36(23), 401–5, doi:10.1029/2009GL040546.
Olonscheck, D., and D. Notz (2017), Consistently Estimating Internal Climate Variability from Climate Model Simulations, J. Climate, 30(23), 9555–9573, doi:10.1175/JCLI-D-16-0428.1.
Milinski, S., N. Maher, and D. Olonscheck (2019), How large does a large ensemble need to be?, Earth Syst. Dynam. Discuss., 2019, 1–19, doi:10.5194/esd-2019-70.
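One way to see why detecting a change in internal variability itself demands large ensembles is a variance-ratio (F) test across members, sketched below with synthetic numbers rather than MPI-GE output; with only 5 members the test has little power, matching the contrast drawn above between detecting and quantifying the change.

```python
# A minimal sketch: has across-member variance increased between two periods?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_members = 5
early = rng.normal(0.0, 0.3, size=n_members)  # e.g. sea ice area anomalies, early decade
late = rng.normal(0.0, 0.6, size=n_members)   # larger spread under strong warming

f = late.var(ddof=1) / early.var(ddof=1)
p = 1 - stats.f.cdf(f, n_members - 1, n_members - 1)  # one-sided test
print(f"variance ratio {f:.2f}, p = {p:.3f}")
```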


2018 ◽  
Vol 31 (17) ◽  
pp. 6783-6802 ◽  
Author(s):  
Karen A. McKinnon ◽  
Clara Deser

Recent observed climate trends result from a combination of external radiative forcing and internally generated variability. To better contextualize these trends and forecast future ones, it is necessary to properly model the spatiotemporal properties of the internal variability. Here, a statistical model is developed for terrestrial temperature and precipitation, and global sea level pressure, based upon monthly gridded observational datasets that span 1921–2014. The model is used to generate a synthetic ensemble, each member of which has a unique sequence of internal variability but with statistical properties similar to the observational record. This synthetic ensemble is combined with estimates of the externally forced response from climate models to produce an observational large ensemble (OBS-LE). The 1000 members of the OBS-LE display considerable diversity in their 50-yr regional climate trends, indicative of the importance of internal variability on multidecadal time scales. For example, unforced atmospheric circulation trends associated with the northern annular mode can induce winter temperature trends over Eurasia that are comparable in magnitude to the forced trend over the past 50 years. Similarly, the contribution of internal variability to winter precipitation trends is large across most of the globe, leading to substantial regional uncertainties in the amplitude and, in some cases, the sign of the 50-yr trend. The OBS-LE provides a real-world counterpart to initial-condition model ensembles. The approach could be extended using paleo-proxy data to simulate longer-term variability.
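The construction can be caricatured in a few lines: remove an estimate of the forced response, generate new sequences of internal variability from the residuals, and add the forced signal back. The circular-shift resampling below is a deliberate simplification, not the paper's statistical model for temperature, precipitation, and sea level pressure.

```python
# A minimal sketch of building a synthetic ("observational") ensemble.
import numpy as np

rng = np.random.default_rng(3)

years = np.arange(1921, 2015)
forced = 0.01 * (years - years[0])  # stand-in for a model-derived forced response
obs = forced + 0.3 * rng.standard_normal(years.size)  # toy observed record

resid = obs - forced
members = []
for _ in range(1000):
    # a circular shift gives a new sequence of internal variability with
    # (approximately) the same statistical properties as the residuals
    members.append(forced + np.roll(resid, rng.integers(resid.size)))
synthetic = np.array(members)  # (1000, n_years) synthetic ensemble
print(synthetic.shape)
```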


2014 ◽  
Vol 955-959 ◽  
pp. 3887-3892 ◽  
Author(s):  
Huang He Gu ◽  
Zhong Bo Yu ◽  
Ji Gan Wang

This study projects future extreme climate changes over the Huang-Huai-Hai (3H) region in China using a regional climate model (RegCM4). RegCM4 performs well in "current" climate (1970-1999) simulations when compared with the available surface station data, focusing on near-surface air temperature and precipitation. Future climate changes are evaluated based on experiments driven by the ECHAM5 general circulation model under the A1B scenario (2070-2099). The results show that annual temperature increases by about 3.4–4.2 °C and annual precipitation increases by about 5%–15% in most of the 3H region by the end of the 21st century. The model predicts generally fewer frost days, a longer growing season, more hot days, no obvious change in the heat wave duration index, larger maximum five-day rainfall, more heavy rain days, and larger daily rainfall intensity. These results indicate a higher risk of floods in the future warmer climate. In addition, consecutive dry days in the Huai River basin will increase, indicating more severe drought and flood conditions in this region.
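The extreme indices named above (frost days, maximum five-day rainfall, heavy rain days) can be computed from daily series along the following lines; the thresholds follow common ETCCDI-style definitions, and the inputs are synthetic placeholders rather than RegCM4 output.

```python
# A minimal sketch of three extreme-climate indices from daily data.
import numpy as np

rng = np.random.default_rng(4)
tmin = rng.normal(5, 10, size=365)    # daily minimum temperature (degC)
precip = rng.gamma(0.5, 4, size=365)  # daily precipitation (mm)

frost_days = int((tmin < 0).sum())  # days with Tmin below 0 degC
# maximum consecutive 5-day precipitation total (RX5day)
rx5day = float(np.convolve(precip, np.ones(5), mode="valid").max())
heavy_rain_days = int((precip >= 10).sum())  # days with at least 10 mm

print(frost_days, round(rx5day, 1), heavy_rain_days)
```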


2021 ◽  
Vol 15 (3) ◽  
pp. 1645-1662
Author(s):  
Alan Huston ◽  
Nicholas Siler ◽  
Gerard H. Roe ◽  
Erin Pettit ◽  
Nathan J. Steiger

Abstract. Changes in glacier length reflect the integrated response to local fluctuations in temperature and precipitation resulting from both external forcing (e.g., volcanic eruptions or anthropogenic CO2) and internal climate variability. In order to interpret the climate history reflected in the glacier moraine record, the influence of both sources of climate variability must therefore be considered. Here we study the last millennium of glacier-length variability across the globe using a simple dynamic glacier model, which we force with temperature and precipitation time series from a 13-member ensemble of simulations from a global climate model. The ensemble allows us to quantify the contributions to glacier-length variability from external forcing (given by the ensemble mean) and internal variability (given by the ensemble spread). Within this framework, we find that internal variability is the predominant source of length fluctuations for glaciers with a shorter response time (less than a few decades). However, for glaciers with longer response timescales (more than a few decades) external forcing has a greater influence than internal variability. We further find that external forcing also dominates when the response of glaciers from widely separated regions is averaged. Single-forcing simulations indicate that, for this climate model, most of the forced response over the last millennium, prior to anthropogenic warming, has been driven by global-scale temperature change associated with volcanic aerosols.
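A "simple dynamic glacier model" of this kind can be sketched as a first-order relaxation of glacier length toward a climate-set equilibrium; the response time and climate sensitivities below are illustrative assumptions, not the paper's calibration.

```python
# A minimal sketch: length anomalies relax toward L_eq with timescale tau.
import numpy as np

rng = np.random.default_rng(5)

def glacier_length(T, P, tau=20.0, alpha=-50.0, beta=5.0, dt=1.0):
    """Integrate dL/dt = (L_eq(t) - L) / tau with L_eq = alpha*T + beta*P."""
    L = np.zeros(T.size)
    for i in range(1, T.size):
        L_eq = alpha * T[i - 1] + beta * P[i - 1]  # warmer -> shorter, wetter -> longer
        L[i] = L[i - 1] + dt * (L_eq - L[i - 1]) / tau
    return L

# one synthetic member: millennium-long temperature/precipitation anomalies
T = 0.5 * rng.standard_normal(1000)
P = rng.standard_normal(1000)
print(glacier_length(T, P)[-5:])
```

Running this once per ensemble member and averaging isolates the forced response (the common signal), while the spread across members gives the internal variability, mirroring the decomposition used in the abstract.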


2020 ◽  
Author(s):  
Alan Huston ◽  
Nicholas Siler ◽  
Gerard H. Roe ◽  
Erin Pettit ◽  
Nathan J. Steiger

Abstract. Changes in glacier length reflect the integrated response to local fluctuations in temperature and precipitation resulting from both external forcing (e.g., volcanic eruptions or anthropogenic CO2) and internal climate variability. In order to interpret the climate history reflected in the glacier moraine record, therefore, the influence of both sources of climate variability must be considered. Here we study the last millennium of glacier length variability across the globe using a simple dynamic glacier model, which we force with temperature and precipitation time series from a 13-member ensemble of simulations from a global climate model. The ensemble allows us to quantify the contributions to glacier length variability from external forcing (given by the ensemble mean) and internal variability (given by the ensemble spread). Within this framework, we find that internal variability drives most length changes in mountain glaciers that have a response timescale of less than a few decades. However, for glaciers with longer response timescales (more than a few decades) external forcing has a greater influence than internal variability. We further find that external forcing also dominates when the response of glaciers from widely separated regions is averaged. Single-forcing simulations indicate that most of the forced response over the last millennium, prior to anthropogenic warming, has been driven by global-scale temperature change associated with volcanic aerosols.


2009 ◽  
Vol 48 (3) ◽  
pp. 429-449 ◽  
Author(s):  
Yves Durand ◽  
Martin Laternser ◽  
Gérald Giraud ◽  
Pierre Etchevers ◽  
Bernard Lesaffre ◽  
...  

Abstract Since the early 1990s, Météo-France has used an automatic system combining three numerical models to simulate meteorological parameters, snow cover stratification, and avalanche risk at various altitudes, aspects, and slopes for a number of mountainous regions in France. Given the lack of sufficient directly observed long-term snow data, this “SAFRAN”–Crocus–“MEPRA” (SCM) model chain, usually applied to operational avalanche forecasting, has been used to carry out and validate retrospective snow and weather climate analyses for the 1958–2002 period. The SAFRAN 2-m air temperature and precipitation climatology shows that the climate of the French Alps is temperate and is mainly determined by atmospheric westerly flow conditions. Vertical profiles of temperature and precipitation averaged over the whole period for altitudes up to 3000 m MSL show a relatively linear variation with altitude for different mountain areas with no constraint of that kind imposed by the analysis scheme itself. Over the observation period 1958–2002, the overall trend corresponds to an increase in the annual near-surface air temperature of about 1°C. However, variations are large at different altitudes and for different seasons and regions. This significantly positive trend is most obvious in the 1500–2000-m MSL altitude range, especially in the northwest regions, and exhibits a significant relationship with the North Atlantic Oscillation index over long periods. Precipitation data are diverse, making it hard to identify clear trends within the high year-to-year variability.
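The near-linear vertical variation described above amounts to a lapse-rate fit, which a least-squares sketch makes concrete; the sample values are placeholders, not SAFRAN output.

```python
# A minimal sketch of fitting a vertical temperature profile (lapse rate).
import numpy as np

altitude = np.array([600, 900, 1200, 1500, 1800, 2100, 2400, 2700, 3000])  # m MSL
temp = (12.0 - 0.0055 * altitude
        + np.random.default_rng(6).normal(0, 0.3, altitude.size))  # degC, synthetic

lapse, intercept = np.polyfit(altitude, temp, 1)
print(f"fitted lapse rate: {lapse * 1000:.2f} degC per km")  # about -5.5
```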


2016 ◽  
Vol 6 (1) ◽  
Author(s):  
Y. T. Eunice Lo ◽  
Andrew J. Charlton-Perez ◽  
Fraser C. Lott ◽  
Eleanor J. Highwood

Abstract Sulphate aerosol injection has been widely discussed as a possible way to engineer future climate. Monitoring it would require detecting its effects amidst internal variability and in the presence of other external forcings. We investigate how the use of different detection methods and filtering techniques affects the detectability of sulphate aerosol geoengineering in annual-mean, global-mean near-surface air temperature. This is done by assuming a future scenario that injects 5 Tg yr⁻¹ of sulphur dioxide into the stratosphere and cross-comparing simulations from 5 climate models. 64% of the studied comparisons would require 25 years or more for detection when using no filter and the multivariate method that has been extensively used for attributing climate change, while 66% of the same comparisons would require fewer than 10 years for detection using a trend-based filter. This highlights the high sensitivity of sulphate aerosol geoengineering detectability to the choice of filter. With the same trend-based filter but a non-stationary method, 80% of the comparisons would require fewer than 10 years for detection. This does not imply that sulphate aerosol geoengineering should be deployed, but suggests that both detection methods could be used for monitoring geoengineering in global, annual-mean temperature should it be needed.
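The sensitivity to the choice of filter can be illustrated with a toy detection experiment: fit a linear trend (the trend-based filter) and ask after how many years it reliably exceeds the spread of trends produced by internal variability alone. The signal and noise amplitudes, the 2-sigma threshold, and the 90% detection criterion are assumptions, not values from the study.

```python
# A minimal sketch of detection time with a trend-based filter.
import numpy as np

rng = np.random.default_rng(7)

def years_to_detect(signal_per_year=0.05, noise_std=0.1, max_years=40,
                    n_trials=500):
    for years in range(3, max_years + 1):
        t = np.arange(years)
        # spread of fitted trends under internal variability alone
        null = np.array([np.polyfit(t, rng.normal(0, noise_std, years), 1)[0]
                         for _ in range(n_trials)])
        trends = np.array([np.polyfit(t, signal_per_year * t
                                      + rng.normal(0, noise_std, years), 1)[0]
                           for _ in range(n_trials)])
        # "detected" when the filtered trend exceeds twice the null spread
        if (trends > 2 * null.std()).mean() > 0.9:
            return years
    return None

print(years_to_detect())  # under these assumptions, detection within ~10 years
```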


2020 ◽  
Author(s):  
Jun Meng ◽  
Jingfang Fan ◽  
Josef Ludescher ◽  
Ankit Agarwala ◽  
Xiaosong Chen ◽  
...  

The El Niño Southern Oscillation (ENSO) is one of the most prominent interannual climate phenomena. Early and reliable ENSO forecasting remains a crucial goal, due to its serious implications for the economy, society, and ecosystems. Despite the development of various dynamical and statistical prediction models in recent decades, the "spring predictability barrier" (SPB) remains a great challenge for long (over 6-month) lead-time forecasting. To overcome this barrier, here we develop an analysis tool, the System Sample Entropy (SysSampEn), to measure the complexity (disorder) of the system composed of temperature anomaly time series in the Niño 3.4 region. When applying this tool to several near-surface air temperature and sea surface temperature datasets, we find that in all datasets a strong positive correlation exists between the magnitude of El Niño and the previous calendar year's SysSampEn (complexity). We show that this correlation allows us to forecast the magnitude of an El Niño with a prediction horizon of 1 year and high accuracy (i.e., root mean square error = 0.23 °C for the average of the individual dataset forecasts). For the 2018 El Niño event, our method forecasts a weak El Niño with a magnitude of 1.11 ± 0.23 °C. The framework presented here not only facilitates long-term forecasting of the El Niño magnitude but can potentially also be used as a measure for the complexity of other natural or engineered complex systems.
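SysSampEn builds on the sample entropy family of complexity measures; the sketch below shows classic sample entropy (SampEn) for a single series, with the usual template length m and tolerance r. The authors' multi-series SysSampEn extension is not reproduced here.

```python
# A minimal sketch of sample entropy: lower values = more regular signal.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        templates = np.array([x[i:i + length] for i in range(x.size - length)])
        # pairwise Chebyshev distances between all templates
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return ((d <= tol).sum() - len(templates)) / 2  # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(8)
print(sample_entropy(rng.standard_normal(300)))         # high: disordered
print(sample_entropy(np.sin(np.linspace(0, 30, 300))))  # low: regular
```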

