Sampling errors on convective scales: What can we learn from a 1000-member ensemble?

Author(s):  
Tobias Necker ◽  
Martin Weissmann ◽  
Yvonne Ruckstuhl ◽  
Juan Ruiz ◽  
Takemasa Miyoshi ◽  
...  

<p>Current regional forecasting systems aim in particular at predicting convective events and related hazards. Most weather centers run high-resolution ensemble forecasts that resolve convection explicitly but can only afford a limited ensemble size of fewer than 100 members. Given that the degrees of freedom of atmospheric models are several orders of magnitude higher, such small ensembles inevitably suffer from sampling errors. Sampling errors and fast error growth on convective scales in turn lead to low predictability. Consequently, improving initial conditions and subsequent forecasts requires a better understanding of error correlations in both space and time.<br>For this purpose, we conducted the first convective-scale 1000-member ensemble simulation over central Europe. Several 1000-member ensemble forecasts are investigated during a high-impact weather period in summer 2016 using ensemble sensitivity analysis. Spatial and spatiotemporal correlations are used to quantify sampling errors on convective scales. Correlations of the 1000-member ensemble forecast serve as truth to assess the performance of different localization approaches. These approaches include a standard distance-based localization technique and a statistical sampling error correction method as proposed by Anderson (2012). Our study highlights advantages and disadvantages of existing methods and emphasises the need for different localization approaches for different scales and variables. Several results are published in Necker et al. (2020a, 2020b).</p>
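Distance-based localization of the kind compared in this study is commonly implemented with the Gaspari–Cohn fifth-order compactly supported function. The Python sketch below (synthetic one-dimensional data, not the study's model fields; grid size, correlation length, and half-width are illustrative assumptions) shows how a Schur product with such a function suppresses spurious long-range correlations in a small-ensemble estimate.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order piecewise-rational localization function.
    dist: separation distance(s); c: half-width (weights reach zero at 2c)."""
    r = np.abs(np.asarray(dist, dtype=float)) / c
    w = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    w[m1] = -0.25 * r1**5 + 0.5 * r1**4 + 0.625 * r1**3 - (5.0 / 3.0) * r1**2 + 1.0
    w[m2] = (r2**5 / 12.0 - 0.5 * r2**4 + 0.625 * r2**3 + (5.0 / 3.0) * r2**2
             - 5.0 * r2 + 4.0 - 2.0 / (3.0 * r2))
    return w

# Synthetic 1D field with an exponential "true" covariance (length scale 20 points).
rng = np.random.default_rng(0)
x = np.arange(200)
dist = x[:, None] - x[None, :]
L = np.linalg.cholesky(np.exp(-np.abs(dist) / 20.0))
ens40 = L @ rng.standard_normal((200, 40))      # 40-member sample

raw_corr = np.corrcoef(ens40)                   # noisy small-ensemble estimate
loc_corr = raw_corr * gaspari_cohn(dist, 40.0)  # Schur product damps distant noise
```

Beyond twice the half-width the localized correlations are exactly zero, so the long-range sampling noise in `raw_corr` disappears; the cost is that genuine long-distance correlations are damped as well, which is one of the disadvantages the abstract alludes to.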

2019 ◽  
Vol 148 (3) ◽  
pp. 1229-1249 ◽  
Author(s):  
Tobias Necker ◽  
Martin Weissmann ◽  
Yvonne Ruckstuhl ◽  
Jeffrey Anderson ◽  
Takemasa Miyoshi

Abstract State-of-the-art ensemble prediction systems usually provide ensembles with only 20–250 members for estimating the uncertainty of the forecast and its spatial and spatiotemporal covariance. Given that the degrees of freedom of atmospheric models are several orders of magnitude higher, these estimates are substantially affected by sampling errors. For error covariances, spurious correlations lead not only to random sampling errors but also to a systematic overestimation of the correlation. A common approach to mitigate the impact of sampling errors in data assimilation is to localize correlations. However, this is a challenging task given that physical correlations in the atmosphere can extend over long distances. Besides data assimilation, sampling errors pose an issue for the investigation of spatiotemporal correlations using ensemble sensitivity analysis. Our study evaluates a statistical approach for correcting sampling errors. The applied sampling error correction is a lookup table–based approach and therefore computationally very efficient. We show that this approach substantially improves both the estimates of spatial correlations for data assimilation and of spatiotemporal correlations for ensemble sensitivity analysis. The evaluation is performed using the first convective-scale 1000-member ensemble simulation for central Europe. Correlations of the 1000-member ensemble forecast serve as truth to assess the performance of the sampling error correction for smaller subsets of the full ensemble. The sampling error correction strongly reduced both random and systematic errors for all evaluated variables, ensemble sizes, and lead times.
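Anderson's (2012) sampling error correction is, at its core, an ensemble-size-dependent lookup table mapping a sample correlation to a corrected value. A rough Monte Carlo illustration of the idea (assuming, for simplicity, a uniform prior on the true correlation and bivariate Gaussian statistics; the published method differs in detail) can be sketched as:

```python
import numpy as np

def build_correction_table(n_ens, n_bins=200, n_reps=200, seed=0):
    """Tabulate, for each bin of sample correlation, the mean true correlation
    that produced samples falling in that bin (offline Monte Carlo)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    sums = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for true_r in np.linspace(-0.99, 0.99, 199):
        Lc = np.linalg.cholesky(np.array([[1.0, true_r], [true_r, 1.0]]))
        for _ in range(n_reps):
            r_hat = np.corrcoef(Lc @ rng.standard_normal((2, n_ens)))[0, 1]
            b = int(np.clip(np.searchsorted(edges, r_hat) - 1, 0, n_bins - 1))
            sums[b] += true_r
            counts[b] += 1
    return edges, np.where(counts > 0, sums / np.maximum(counts, 1.0), 0.0)

def correct(r_hat, edges, table):
    """Look up the corrected correlation for a sample correlation r_hat."""
    b = int(np.clip(np.searchsorted(edges, r_hat) - 1, 0, len(table) - 1))
    return table[b]
```

For small ensembles the table shrinks moderate sample correlations toward zero, which is exactly the behavior needed to counter the systematic overestimation described above. Because the table depends only on ensemble size, it can be computed once and reused, hence the computational efficiency noted in the abstract.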


2017 ◽  
Author(s):  
Ross Noel Bannister ◽  
Stefano Migliorini ◽  
Alison Clare Rudd ◽  
Laura Hart Baker

Abstract. Ensemble-based predictions are increasingly used as an aid to weather forecasting and to data assimilation, where the aim is to capture the range of possible outcomes consistent with the underlying uncertainties. Constraints on computing resources mean that ensembles have a relatively small size, which can lead to an incomplete range of possible outcomes and to inherent sampling errors. This paper discusses how an existing ensemble can be relatively easily increased in size, develops a range of standard and extended diagnostics to help determine whether a given ensemble is large enough for forecasting and data assimilation purposes, and applies the diagnostics to a convective-scale case study for illustration. Diagnostics include the effect of ensemble size on various aspects of rainfall forecasts, kinetic energy spectra, and (co)variance statistics in the spatial and spectral domains. The work here extends the Met Office's 24-member ensemble to 93 members. It is found that the extra members do develop a significant degree of linear independence, increase the ensemble spread (although with caveats to do with non-Gaussianity), reduce sampling error in many statistical quantities (namely variances, correlations, and length scales), and improve the effective spatial resolution of the ensemble. The extra members, though, do not improve the probabilistic rain-rate forecasts. It is assumed that the 93-member ensemble approximates the error-free statistics, which is a practical assumption, but the data suggest that this number of members is ultimately not enough to justify the assumption, and larger ensembles are therefore likely required for such convective-scale systems to further reduce sampling errors, especially for ensemble data assimilation purposes.
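The kind of sampling-error diagnostic applied here can be mimicked on synthetic data: draw subsets from a large pool of members, treat the full pool's statistic as truth, and measure how the error of a sample correlation shrinks as the subset grows from 24 toward 93 members. The two-variable Gaussian setup below is a hypothetical stand-in for real forecast fields.

```python
import numpy as np

# Large synthetic pool of members with a known inter-variable correlation of 0.6.
rng = np.random.default_rng(1)
Lc = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
pool = Lc @ rng.standard_normal((2, 1000))

def corr_rmse(members, n_sub, n_trials=500, seed=2):
    """RMSE of the sample correlation over random n_sub-member subsets,
    taking the full pool's correlation as 'truth'."""
    rng = np.random.default_rng(seed)
    r_full = np.corrcoef(members)[0, 1]
    errs = [np.corrcoef(members[:, rng.choice(members.shape[1], n_sub,
                                              replace=False)])[0, 1] - r_full
            for _ in range(n_trials)]
    return float(np.sqrt(np.mean(np.square(errs))))
```

With this setup `corr_rmse(pool, 24)` comes out roughly twice `corr_rmse(pool, 93)`, consistent with the approximate 1/sqrt(N) scaling of correlation sampling error and with the paper's finding that the extra members reduce sampling error in variances, correlations, and length scales.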


2019 ◽  
Vol 76 (9) ◽  
pp. 2653-2672 ◽  
Author(s):  
John R. Lawson

Abstract Thunderstorms are difficult to predict because of their small length scale and fast predictability destruction. A cell’s predictability is constrained by properties of the flow in which it is embedded (e.g., vertical wind shear) and associated instabilities (e.g., convective available potential energy). To assess how the predictability of thunderstorms changes with environment, two groups of 780 idealized simulations (each using a different microphysics scheme) were performed over a range of buoyancy and shear profiles. Results were not sensitive to the scheme chosen. The gradient in diagnostics (updraft speed, storm speed, etc.) across shear–buoyancy phase space represents sensitivity to small changes in initial conditions: a proxy for inherent predictability. Storm evolution splits into two groups, separated by a U-shaped bifurcation in phase space, comprising 1) cells that continue strengthening after 1 h versus 2) those that weaken. Ensemble forecasts in regimes near this bifurcation are hence expected to have larger uncertainty, and adequate dispersion and reliability are essential. Predictability loss takes two forms: (i) chaotic error growth from the largest and most powerful storms, and (ii) tipping points at the U-shaped perimeter of the stronger storms. The former is associated with traditional forecast error between corresponding grid points, and is here counterintuitive; the latter is associated with object-based error, and matches the mental filtering performed by human forecasters for the convective scale.


2015 ◽  
Vol 143 (5) ◽  
pp. 1583-1600 ◽  
Author(s):  
Florian Harnisch ◽  
Christian Keil

Abstract A kilometer-scale ensemble data assimilation system (KENDA) based on a local ensemble transform Kalman filter (LETKF) has been developed for the Consortium for Small-Scale Modeling (COSMO) limited-area model. The data assimilation system provides an analysis ensemble that can be used to initialize ensemble forecasts at a horizontal grid resolution of 2.8 km. Convective-scale ensemble forecasts over Germany using ensemble initial conditions derived by the KENDA system are evaluated and compared to operational forecasts with downscaled initial conditions for a short summer period during June 2012. The choice of the inflation method applied in the LETKF significantly affects the ensemble analysis and forecast. Using multiplicative background covariance inflation does not produce enough spread in the analysis ensemble, leading to a degradation of the ensemble forecasts. Inflating the analysis ensemble instead, by either multiplicative analysis covariance inflation or relaxation inflation methods, enhances the analysis spread and provides initial conditions that produce more consistent ensemble forecasts. The forecast quality for short lead times up to 3 h is improved, and 21-h forecasts also benefit from the increased spread. Doubling the ensemble size has a clear positive impact not only on the analysis but also on the short-term ensemble forecasts, while a simple representation of model error that perturbs parameters of the model physics has only a small impact. Precipitation and surface wind speed ensemble forecasts using the high-resolution KENDA-derived initial conditions are competitive with those using the operational downscaled initial conditions.
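The inflation variants compared above can be written down compactly. Below is a generic sketch (not the KENDA code) of multiplicative inflation of analysis perturbations and of relaxation to prior spread (RTPS, following Whitaker and Hamill 2012, one common relaxation method); both operate on an ensemble stored as a state-by-members array.

```python
import numpy as np

def multiplicative_inflation(analysis_ens, factor):
    """Inflate analysis perturbations about the ensemble mean by a constant factor."""
    mean = analysis_ens.mean(axis=1, keepdims=True)
    return mean + factor * (analysis_ens - mean)

def rtps(analysis_ens, background_ens, alpha):
    """Relaxation to prior spread: blend the analysis spread back toward the
    background spread with weight alpha (alpha=0 -> no change, 1 -> prior spread)."""
    a_mean = analysis_ens.mean(axis=1, keepdims=True)
    a_pert = analysis_ens - a_mean
    sa = a_pert.std(axis=1, ddof=1, keepdims=True)
    b_mean = background_ens.mean(axis=1, keepdims=True)
    sb = (background_ens - b_mean).std(axis=1, ddof=1, keepdims=True)
    factor = alpha * sb / np.maximum(sa, 1e-12) + (1.0 - alpha)
    return a_mean + factor * a_pert
```

Both methods leave the ensemble mean untouched and only rescale the perturbations, which is why they change the analysis spread without shifting the analysis itself.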


2020 ◽  
Vol 148 (9) ◽  
pp. 3631-3652
Author(s):  
Pin-Ying Wu ◽  
Shu-Chih Yang ◽  
Chih-Chien Tsai ◽  
Hsiang-Wen Cheng

ABSTRACT Sampling error stems from the use of ensemble-based data assimilation (EDA) with a limited ensemble size; it can produce spurious background error covariances and thereby false analysis corrections. The WRF-LETKF radar assimilation system (WLRAS) is run separately with 256 and 40 members to investigate the characteristics of convective-scale sampling errors in EDA and their impact on precipitation prediction for a heavy rainfall event on 16 June 2008. The results suggest that the sampling errors for this event are sensitive to the relationships between the simulated observations and model variables, to the intensity of reflectivity, and to how the prevailing wind projects onto the radial wind in areas where the radar cannot resolve the U or V wind. The sampling errors lead to an underprediction of heavy rainfall when the horizontal localization radius is too large, but this can be mitigated when a more accurate moisture condition is provided. In addition, the ability to use a larger vertical localization radius plays an important role in providing the adjustments needed to represent the vertical thermodynamic structure of convection, which further improves precipitation prediction. A strategy that mitigates the impact of sampling errors associated with the limitation of radial wind measurements, by inflating the observation error over sensitive areas, also benefits precipitation prediction.
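The last mitigation strategy, inflating the observation error where radial wind poorly constrains the flow, can be illustrated with a toy formula. Everything below is a hypothetical simplification: the projection-based inflation and the `max_factor` parameter are illustrative choices, not the paper's actual scheme.

```python
import numpy as np

def radial_wind(u, v, azimuth):
    """Project (u, v) onto the radar beam; azimuth measured clockwise from north."""
    return u * np.sin(azimuth) + v * np.cos(azimuth)

def inflated_obs_error(base_sigma, azimuth, mean_u, mean_v, max_factor=3.0):
    """Hypothetical inflation: grow the radial-wind observation error where the
    prevailing wind is nearly perpendicular to the beam (weak projection)."""
    speed = np.hypot(mean_u, mean_v)
    proj = np.abs(radial_wind(mean_u, mean_v, azimuth)) / np.maximum(speed, 1e-12)
    return base_sigma * (1.0 + (max_factor - 1.0) * (1.0 - proj))
```

When the beam is aligned with the prevailing wind the error is left at `base_sigma`; when it is perpendicular, the error grows to `max_factor * base_sigma`, reducing the weight of observations that carry little information about the unresolved wind component.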


2016 ◽  
Vol 73 (6) ◽  
pp. 2403-2426 ◽  
Author(s):  
Jidong Gao ◽  
Chenghao Fu ◽  
David J. Stensrud ◽  
John S. Kain

Abstract An ensemble-based three-dimensional variational data assimilation (En3DA) method for convective-scale weather has been developed. It consists of an ensemble of three-dimensional variational data assimilations and forecasts in which member differences are introduced by perturbing initial conditions and/or observations, and it uses flow-dependent error covariances generated by the ensemble forecasts. The method is applied to the assimilation of simulated radar data for a supercell storm. Results indicate that the flow-dependent ensemble covariances are effective in enabling convective-scale analyses, as the most important features of the simulated storm, including the low-level cold pool and midlevel mesocyclone, are well analyzed. Several groups of sensitivity experiments are conducted to test the robustness of the method. The first group demonstrates that incorporating a mass continuity equation as a weak constraint into the En3DA algorithm can improve the quality of the analyses when radial velocity observations contain large errors. In the second group of experiments, the sensitivity of analyses to the microphysical parameterization scheme is explored. Results indicate that the En3DA analyses are quite sensitive to differences in the microphysics scheme, suggesting that ensemble forecasts with multiple microphysics schemes could reduce uncertainty related to model physics errors. Experiments also show that assimilating reflectivity observations can reduce spinup time and that it has a small positive impact on the quality of the wind field analysis. Of the threshold values tested for assimilating reflectivity observations, 15 dBZ provides the best analysis. The final group of experiments demonstrates that it is not necessary to perturb radial velocity observations for every ensemble member in order to improve the quality of the analysis.
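For a linear observation operator, each member's variational minimization has a closed form, which makes the perturbed-observation structure easy to sketch. The sketch below uses a static B and omits the mass-continuity weak constraint; the actual method uses flow-dependent ensemble covariances.

```python
import numpy as np

def perturbed_obs_ensemble(xb_ens, B, H, y, R, rng):
    """One analysis step of an ensemble of 3DVar solves: each member minimizes
    J(x) = 0.5 (x-xb)^T B^-1 (x-xb) + 0.5 (Hx-y')^T R^-1 (Hx-y'),
    with its own perturbed observations y' = y + eps, eps ~ N(0, R)."""
    Binv = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)
    A = Binv + H.T @ Rinv @ H          # Hessian of the cost function
    analyses = []
    for xb in xb_ens.T:                # iterate over members (columns)
        yp = y + rng.multivariate_normal(np.zeros(len(y)), R)
        analyses.append(np.linalg.solve(A, Binv @ xb + H.T @ Rinv @ yp))
    return np.array(analyses).T
```

Perturbing the observations for each member keeps the analysis ensemble spread statistically consistent with the assumed observation error; the final experiments in the abstract indicate that, for radial velocity, this perturbation is not needed for every member.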


2020 ◽  
Vol 148 (12) ◽  
pp. 4995-5014
Author(s):  
Austin Coleman ◽  
Brian Ancell

Abstract Ensemble sensitivity analysis (ESA) is a useful and computationally inexpensive tool for analyzing how features in the flow at early forecast times affect relevant forecast features later in the forecast. Given the frequency of observations measured between model initialization times that remain unused, ensemble sensitivity may be used to increase predictability and forecast accuracy through an objective ensemble subsetting technique. This technique identifies ensemble members with the smallest errors in regions of high sensitivity to produce a smaller, more accurate ensemble subset. Ensemble subsets can significantly reduce synoptic-scale forecast errors, but applying this strategy to convective-scale forecasts presents additional challenges. Objective verification of the sensitivity-based ensemble subsetting technique is conducted for ensemble forecasts of 2–5-km updraft helicity (UH) and simulated reflectivity. Many degrees of freedom are varied to identify the lead times, subset sizes, forecast thresholds, and atmospheric predictors that provide the most forecast benefit. Subsets vastly reduce the error of UH forecasts in an idealized framework but tend to degrade fractions skill scores and reliability in a real-world framework. Results reveal this discrepancy is a result of verifying probabilistic UH forecasts with storm-report-based observations, which effectively dampens technique performance. The potential of ensemble subsetting, and likely of other postprocessing techniques, is limited by tuning UH forecasts to predict severe reports. Additional diagnostic ideas to improve postprocessing tool optimization for convection-allowing models are discussed.
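The core of the technique, regression-based ensemble sensitivity followed by member selection in high-sensitivity regions, can be sketched in a few lines. The selection rule below (rank members by RMSE at the points with the largest absolute sensitivity and keep the best k) is a simplified stand-in for the objective subsetting described above; `top_frac` is an assumed tuning knob.

```python
import numpy as np

def ensemble_sensitivity(J, X):
    """dJ/dx_i ~= cov(J, x_i) / var(x_i) for each grid point i.
    J: (n_members,) forecast response; X: (n_points, n_members) early-time state."""
    Jp = J - J.mean()
    Xp = X - X.mean(axis=1, keepdims=True)
    n = J.size
    cov = Xp @ Jp / (n - 1)
    var = (Xp * Xp).sum(axis=1) / (n - 1)
    return cov / np.maximum(var, 1e-12)

def select_subset(X, obs, sens, k, top_frac=0.1):
    """Keep the k members with the smallest RMSE against obs, measured only at
    the points with the largest |sensitivity|."""
    n_top = max(1, int(top_frac * sens.size))
    pts = np.argsort(np.abs(sens))[-n_top:]
    rmse = np.sqrt(((X[pts] - obs[pts, None]) ** 2).mean(axis=0))
    return np.argsort(rmse)[:k]
```

Restricting the error measure to high-sensitivity points is what links early-time observations to the later forecast feature of interest: a member that is accurate where the response is most sensitive is more likely to verify well at the response time.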


1995 ◽  
Vol 117 (3) ◽  
pp. 582-588 ◽  
Author(s):  
L. N. Virgin ◽  
T. F. Walsh ◽  
J. D. Knight

This paper describes the results of a study into the dynamic behavior of a magnetic bearing system. The research focuses attention on the influence of nonlinearities on the forced response of a two-degree-of-freedom rotating mass suspended by magnetic bearings and subject to rotating unbalance and feedback control. Geometric coupling between the degrees of freedom leads to a pair of nonlinear ordinary differential equations, which are then solved using both numerical simulation and approximate analytical techniques. The system exhibits a variety of interesting and somewhat unexpected phenomena including various amplitude driven bifurcational events, sensitivity to initial conditions, and the complete loss of stability associated with the escape from the potential well in which the system can be thought to be oscillating. An approximate criterion to avoid this last possibility is developed based on concepts of limiting the response of the system. The present paper may be considered as an extension to an earlier study by the same authors, which described the practical context of the work, free vibration, control aspects, and derivation of the mathematical model.


2013 ◽  
Vol 57 (03) ◽  
pp. 125-140
Author(s):  
Daniel A. Liut ◽  
Kenneth M. Weems ◽  
Tin-Guen Yen

A quasi-three-dimensional hydrodynamic model is presented to simulate shallow water phenomena. The method is based on a finite-volume approach designed to solve shallow water equations in the time domain. The nonlinearities of the governing equations are considered. The methodology can be used to compute green water effects on a variety of platforms with six-degrees-of-freedom motions. Different boundary and initial conditions can be applied for multiple types of moving platforms, such as a ship's deck, tanks, etc. Comparisons with experimental data are discussed. The shallow water model has been integrated with the Large Amplitude Motions Program to compute the effects of green water flow over decks within a time-domain simulation of ship motions in waves. Results associated with this implementation are presented.


1988 ◽  
Vol 78 (1) ◽  
pp. 155-161 ◽  
Author(s):  
J. Van Sickle

AbstractSeveral published reports have presented estimates of the rate of increase, r, based on sampled ovarian age distributions from Glossina populations throughout Africa. These estimates are invalid, because an age distribution sampled at one point in time can be equated to a survivorship curve only if r = 0. When such a survivorship curve and a corresponding fecundity schedule are then used to estimate r via the Euler-Lotka equation, the result is a value of r near zero, regardless of the population's true rate of increase. Synthetic sampling from a hypothetical tsetse population confirmed that estimates computed in this fashion are entirely the products of sampling error. Valid estimates of r can sometimes be obtained from an age distribution, using an alternative method, but such estimates are highly sensitive to sampling errors in the distribution.

