Extreme metrics from large ensembles: investigating the effects of ensemble size on their estimates

2021 ◽  
Vol 12 (4) ◽  
pp. 1427-1501
Author(s):  
Claudia Tebaldi ◽  
Kalyn Dorheim ◽  
Michael Wehner ◽  
Ruby Leung

Abstract. We consider the problem of estimating the ensemble sizes required to characterize the forced component and the internal variability of a number of extreme metrics. While we exploit existing large ensembles, our perspective is that of a modeling center wanting to estimate a priori such sizes on the basis of an existing small ensemble (we assume the availability of only five members here). We therefore ask if such a small-size ensemble is sufficient to estimate accurately the population variance (i.e., the ensemble internal variability) and then apply a well-established formula that quantifies the expected error in the estimation of the population mean (i.e., the forced component) as a function of the sample size n, here taken to mean the ensemble size. We find that indeed we can anticipate errors in the estimation of the forced component for temperature and precipitation extremes as a function of n by plugging into the formula an estimate of the population variance derived on the basis of five members. For a range of spatial and temporal scales, forcing levels (we use simulations under Representative Concentration Pathway 8.5) and two models considered here as our proof of concept, it appears that an ensemble size of 20 or 25 members can provide estimates of the forced component for the extreme metrics considered that remain within small absolute and percentage errors. Additional members beyond 20 or 25 add only marginal precision to the estimate, and this remains true when statistical inference through extreme value analysis is used. We then ask about the ensemble size required to estimate the ensemble variance (a measure of internal variability) along the length of the simulation and – importantly – about the ensemble size required to detect significant changes in such variance along the simulation with increased external forcings. 
Using the F test, we find that estimates on the basis of only 5 or 10 ensemble members accurately represent the full ensemble variance even when the analysis is conducted at the grid-point scale. The detection of changes in the variance when comparing different times along the simulation, especially for the precipitation-based metrics, requires larger sizes but not larger than 15 or 20 members. While we recognize that there will always exist applications and metric definitions requiring larger statistical power and therefore ensemble sizes, our results suggest that for a wide range of analysis targets and scales an effective estimate of both forced component and internal variability can be achieved with sizes below 30 members. This invites consideration of the possibility of exploring additional sources of uncertainty, such as physics parameter settings, when designing ensemble simulations.
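The "well-established formula" for the expected error of the ensemble mean is presumably the standard error of the mean, σ/√n. A minimal sketch of the a priori procedure the abstract describes, with synthetic data standing in for a five-member ensemble (all values hypothetical, not the paper's actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an extreme metric (e.g. an annual-maximum
# temperature anomaly) across a small 5-member ensemble at one grid point.
small_ensemble = rng.normal(loc=2.0, scale=0.8, size=5)

# Estimate the population (internal) variability from the small ensemble.
sigma_hat = small_ensemble.std(ddof=1)

def expected_error(sigma, n):
    """Standard error of the ensemble mean (forced component) for size n."""
    return sigma / np.sqrt(n)

# Anticipate how the error shrinks as the ensemble grows.
for n in (5, 10, 20, 25, 50):
    print(f"n={n:2d}: expected error = {expected_error(sigma_hat, n):.3f}")
```

The 1/√n decay is why members beyond 20 or 25 add only marginal precision: going from 25 to 50 members shrinks the expected error by only about 30 %.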

2021 ◽  
Author(s):  
Claudia Tebaldi ◽  
Kalyn Dorheim ◽  
Michael Wehner ◽  
Ruby Leung

Abstract. We consider the problem of estimating the ensemble sizes required to characterize the forced component and the internal variability of a range of extreme metrics. While we exploit existing large ensembles contributed to the CLIVAR Large Ensemble Project, our perspective is that of a modeling center wanting to estimate a priori such sizes on the basis of an existing small ensemble (we use five members here). We therefore ask if such a small-size ensemble is sufficient to estimate the population variance accurately enough to apply a well-established formula that quantifies the expected error as a function of n (the ensemble size). We find that indeed we can anticipate errors in the estimation of the forced component for temperature and precipitation extreme metrics as a function of n by plugging into the formula the population variance estimated from five members. For a range of spatial and temporal scales, forcing levels (we use RCP8.5 simulations), and both models considered here as our proof of concept, CESM1-CAM5 and CanESM2, it appears that an ensemble size of 20 or 25 members can provide estimates of the forced component for the extreme metrics considered that remain within small absolute and percentage errors. Additional members beyond 20 or 25 add only marginal precision to the estimate, which remains true when extreme value analysis is used. We then ask about the ensemble size required to estimate the ensemble variance (a measure of internal variability) along the length of the simulation, and – importantly – about the ensemble size required to detect significant changes in such variance along the simulation with increased external forcings. When an F test is applied to the ratio of the variances in question, one estimated on the basis of only 5 or 10 ensemble members and one estimated using the full ensemble (up to 50 members in our study), we do not obtain significant results even when the analysis is conducted at the grid-point scale. 
While we recognize that there will always exist applications and metric definitions requiring larger statistical power and therefore ensemble sizes, our results suggest that for a wide range of analysis targets and scales an effective estimate of both forced component and internal variability can be achieved with sizes below 30 members. This invites consideration of the possibility of exploring additional sources of uncertainty, like physics parameter settings, when designing ensemble simulations.
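The F test on a ratio of ensemble variances can be sketched as follows. The data are synthetic, and the two-sided p-value construction shown is one standard convention, not necessarily the authors' exact procedure:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)

# Hypothetical ensemble spreads of an extreme metric at two times along a
# forced simulation (e.g. early vs. late 21st century), 20 members each;
# the larger late-century scale is an assumption for illustration only.
early = rng.normal(0.0, 1.0, size=20)
late = rng.normal(0.0, 1.4, size=20)

def f_test_variances(a, b):
    """Two-sided F test for equality of the variances of two samples."""
    va, vb = np.var(a, ddof=1), np.var(b, ddof=1)
    F = va / vb
    dfa, dfb = len(a) - 1, len(b) - 1
    # Two-sided p-value: twice the smaller tail probability of the F ratio.
    p = 2 * min(f_dist.cdf(F, dfa, dfb), f_dist.sf(F, dfa, dfb))
    return F, p

F, p = f_test_variances(late, early)
print(f"F = {F:.2f}, p = {p:.3f}")
```

With few members the F distribution's tails are wide, which is why detecting genuine variance changes can demand larger ensembles than estimating the variance itself.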


Extremes ◽  
2021 ◽  
Author(s):  
Laura Fee Schneider ◽  
Andrea Krajina ◽  
Tatyana Krivobokova

Abstract. Threshold selection plays a key role in various aspects of statistical inference of rare events. In this work, two new threshold selection methods are introduced. The first approach measures the fit of the exponential approximation above a threshold and achieves good performance in small samples. The second method smoothly estimates the asymptotic mean squared error of the Hill estimator and performs consistently well over a wide range of processes. Both methods are analyzed theoretically, compared to existing procedures in an extensive simulation study and applied to a dataset of financial losses, where the underlying extreme value index is assumed to vary over time.
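The Hill estimator, whose asymptotic mean squared error the second method targets, has a simple closed form based on the k largest order statistics. A minimal sketch on an exact Pareto sample, where the true extreme value index is 1/α:

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the extreme value index from the k largest
    order statistics of a positive sample."""
    x = np.sort(np.asarray(sample))[::-1]  # descending order statistics
    return np.mean(np.log(x[:k])) - np.log(x[k])

# Classical Pareto(alpha=2) sample: true extreme value index = 1/2.
rng = np.random.default_rng(2)
sample = rng.pareto(2.0, size=5000) + 1.0
print(hill_estimator(sample, k=200))
```

The threshold-selection problem the paper addresses is precisely the choice of k: too small a k inflates the variance, too large a k (a threshold too deep into the bulk) inflates the bias.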


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Alexandre Lekina ◽  
Fateh Chebana ◽  
Taha B. M. J. Ouarda

Abstract. In Bivariate Frequency Analysis (BFA) of hydrological events, the study and quantification of the dependence between several variables of interest is commonly carried out through Pearson’s correlation (r), Kendall’s tau (τ) or Spearman’s rho (ρ). These measures provide an overall evaluation of the dependence. However, in BFA, the focus is on the extreme events, which occur in the tail of the distribution. Therefore, these measures are not appropriate to quantify the dependence in the tail of the distribution. To quantify such a risk, in Extreme Value Analysis (EVA), a number of concepts and methods are available but are not appropriately employed in hydrological BFA. In the present paper, we study tail dependence measures along with their nonparametric estimations. In order to cover a wide range of possible cases, an application dealing with bivariate flood characteristics (peak flow, flood volume and event duration) is carried out on three gauging sites in Canada. Results show that r, τ and ρ are inadequate to quantify the extreme risk and to reflect the dependence characteristics in the tail. In addition, the upper tail dependence measure, commonly employed in hydrology, is shown not to be always appropriate, especially when considered alone: it can lead to an overestimation or underestimation of the risk. Therefore, for an effective risk assessment, it is recommended to consider more than one tail dependence measure.
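The upper tail dependence measure discussed above admits a straightforward nonparametric estimate: the fraction of cases where one variable exceeds its u-quantile given that the other does. A sketch on synthetic data contrasting an independent pair with a comonotone one (the simple fixed-threshold estimator used here is one of several possible choices):

```python
import numpy as np

def upper_tail_dependence(x, y, u=0.95):
    """Empirical estimate of the upper tail dependence at level u:
    P(Y above its u-quantile | X above its u-quantile)."""
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    above_x = x > qx
    return np.mean(y[above_x] > qy)

rng = np.random.default_rng(3)
n = 20000
# Independent pair: the estimate should be near 1 - u = 0.05 at u = 0.95.
x_ind, y_ind = rng.normal(size=n), rng.normal(size=n)
print(upper_tail_dependence(x_ind, y_ind))
# Comonotone pair (y = x): the estimate should be exactly 1.
print(upper_tail_dependence(x_ind, x_ind))
```

This illustrates the abstract's point: a bivariate Gaussian pair can have high Pearson r yet zero asymptotic upper tail dependence, so overall dependence measures and tail measures can disagree sharply.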


2020 ◽  
Author(s):  
Sebastian Milinski ◽  
Nicola Maher ◽  
Dirk Olonscheck

<p>Initial-condition large ensembles with ensemble sizes ranging from 30 to 100 members have become a commonly used tool to quantify the forced response and internal variability in various components of the climate system. However, there is no consensus on the ideal or even sufficient ensemble size for a large ensemble.</p><p>Here, we introduce an objective method to estimate the required ensemble size. This method can be applied to any given application. We demonstrate its use on three examples that represent typical applications of large ensembles: quantifying the forced response, quantifying internal variability, and detecting a forced change in internal variability.</p><p>We analyse forced trends in global mean surface temperature, local surface temperature and precipitation in the MPI Grand Ensemble (Maher et al., 2019). We find that 10 ensemble members are sufficient to quantify the forced response in historical surface temperature over the ocean, but more than 50 members are necessary over land at higher latitudes. </p><p>Next, we apply our method to identify the required ensemble size to sample internal variability of surface temperature over central North America and over the Niño 3.4 region. A moderate ensemble size of 10 members is sufficient to quantify variability over North America, while a large ensemble with close to 50 members is necessary for the Niño 3.4 region.</p><p>Finally, we use the example of September Arctic sea ice area to investigate forced changes in internal variability. In a strong warming scenario, the variability in sea ice area increases because more open water near the coastlines allows for more variability compared to a mostly ice-covered Arctic Ocean (Goosse et al., 2009; Olonscheck and Notz, 2017). We show that at least 5 ensemble members are necessary to detect an increase in sea ice variability in a 1% CO<sub>2</sub> experiment. 
To also quantify the magnitude of the forced change in variability, more than 50 members are necessary.</p><p>These numbers might be highly model dependent. The suggested method can therefore also be used with a long control run to estimate the required ensemble size for a model that does not provide a large number of realisations. Our analysis framework thus not only provides valuable information before running a large ensemble, but can also be used to test the robustness of results based on small ensembles or individual realisations.</p><p><em><strong>References</strong><br>Goosse, H., O. Arzel, C. M. Bitz, A. de Montety, and M. Vancoppenolle (2009), Increased variability of the Arctic summer ice extent in a warmer climate, Geophys. Res. Lett., 36(23), 401–5, doi:10.1029/2009GL040546.</em></p><p><em>Olonscheck, D., and D. Notz (2017), Consistently Estimating Internal Climate Variability from Climate Model Simulations, J Climate, 30(23), 9555–9573, doi:10.1175/JCLI-D-16-0428.1.</em></p><p><em>Milinski, S., N. Maher, and D. Olonscheck (2019), How large does a large ensemble need to be? Earth Syst. Dynam. Discuss., 2019, 1–19, doi:10.5194/esd-2019-70.</em></p>
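The abstract does not spell out the objective method; one plausible resampling sketch is to subsample a large ensemble and find the smallest size whose mean stays within a user-chosen tolerance of the full-ensemble mean. All numbers below are hypothetical stand-ins, not the MPI Grand Ensemble:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical "large ensemble": 100 members' trends of some quantity,
# standing in for e.g. local surface temperature trends.
full_ensemble = rng.normal(loc=0.3, scale=0.5, size=100)
truth = full_ensemble.mean()  # best available estimate of the forced response

def required_size(members, truth, tol, n_boot=2000, rng=rng):
    """Smallest subsample size whose mean stays within tol of the
    full-ensemble mean in 95 % of random subsamples (a resampling sketch)."""
    for n in range(2, len(members) + 1):
        errs = np.array([
            abs(rng.choice(members, size=n, replace=False).mean() - truth)
            for _ in range(n_boot)
        ])
        if np.quantile(errs, 0.95) <= tol:
            return n
    return len(members)

print(required_size(full_ensemble, truth, tol=0.2))
```

The key design choice is that the tolerance is set by the user, which matches the abstracts' point that the required size depends on the acceptable error for a given application.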


2019 ◽  
Author(s):  
Sebastian Milinski ◽  
Nicola Maher ◽  
Dirk Olonscheck

Abstract. Initial-condition large ensembles with ensemble sizes ranging from 30 to 100 members have become a commonly used tool to quantify the forced response and internal variability in various components of the climate system. However, there is no consensus on the ideal or even sufficient ensemble size for a large ensemble. Here, we introduce an objective method to estimate the required ensemble size that can be applied to any given application, and demonstrate its use on the examples of global mean surface temperature, local surface temperature and precipitation, and variability in the ENSO region and central America. Where possible, we base our estimate of the required ensemble size on the pre-industrial control simulation, which is available for every model. First, we determine how much of an available ensemble size is interpretable without a substantial impact of resampling ensemble members. Then, we show that more ensemble members are needed to quantify variability than the forced response, with the largest ensemble sizes needed to detect changes in internal variability itself. Finally, we highlight that the required ensemble size depends on both the acceptable error to the user and the studied quantity.


2014 ◽  
Vol 58 (3) ◽  
pp. 193-207 ◽  
Author(s):  
C Photiadou ◽  
MR Jones ◽  
D Keellings ◽  
CF Dewes

Atmosphere ◽  
2019 ◽  
Vol 10 (9) ◽  
pp. 499 ◽  
Author(s):  
Artem Shikhovtsev ◽  
Pavel Kovadlo ◽  
Vladimir Lukin

The paper focuses on the development of a method to estimate the mean characteristics of atmospheric turbulence. Vertical profiles of optical turbulence are calculated using an approach based on the shape of the energy spectrum of atmospheric turbulence over a wide range of spatial and temporal scales. The temporal variability of the vertical profiles of turbulence under different low-frequency atmospheric disturbances is considered.
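The abstract does not specify the assumed spectral shape; a common reference form in the inertial range is the Kolmogorov k^(−5/3) scaling, sketched here purely for illustration (the constant c is an arbitrary placeholder):

```python
import numpy as np

def kolmogorov_spectrum(k, c=1.0):
    """Model energy spectrum E(k) = c * k^(-5/3), the Kolmogorov
    inertial-range scaling often assumed for atmospheric turbulence."""
    return c * np.asarray(k, dtype=float) ** (-5.0 / 3.0)

# Evaluate the model shape across several decades of wavenumber.
k = np.logspace(-3, 1, 5)
print(kolmogorov_spectrum(k))
```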

