Investigation of Ensemble Variance as a Measure of True Forecast Variance

2011 ◽  
Vol 139 (12) ◽  
pp. 3954-3963 ◽  
Author(s):  
Walter C. Kolczynski ◽  
David R. Stauffer ◽  
Sue Ellen Haupt ◽  
Naomi S. Altman ◽  
Aijun Deng

Abstract The uncertainty in meteorological predictions is of interest for applications ranging from economic to recreational to public safety. One common method to estimate uncertainty is by using meteorological ensembles. These ensembles provide an easily quantifiable measure of the uncertainty in the forecast in the form of the ensemble variance. However, ensemble variance may not accurately reflect the actual uncertainty, so any measure of uncertainty derived from the ensemble should be calibrated to provide a more reliable estimate of the actual uncertainty in the forecast. A previous study introduced the linear variance calibration (LVC) as a simple method to determine the ensemble variance to error variance relationship and demonstrated this technique on real ensemble data. The LVC parameters (the slopes and y intercepts), however, are generally different from the ideal values. This current study uses a stochastic model to examine the LVC in a controlled setting. The stochastic model is capable of simulating underdispersive and overdispersive ensembles as well as perfectly reliable ensembles. Because the underlying relationship is specified, LVC results can be compared to theoretical values of the slope and y intercept. Results indicate that all types of ensembles produce calibration slopes that are smaller than their theoretical values for ensemble sizes less than several hundred members, with corresponding y intercepts greater than their theoretical values. This indicates that all ensembles, even otherwise perfect ensembles, should be calibrated if the ensemble size is less than several hundred. In addition, it is shown that an adjustment factor can be computed for inadequate ensemble size. This adjustment factor is independent of the stochastic model and is applicable to any linear regression of error variance on ensemble variance. When applied to experiments using the stochastic model, the adjustment produces LVC parameters near their theoretical values for all ensemble sizes. Although the adjustment is unnecessary when applying LVC, it allows for a more accurate assessment of the reliability of ensembles, and a fair comparison of the reliability for differently sized ensembles.
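
For illustration, here is a minimal sketch of how a linear variance calibration of this kind might be fit from paired samples of ensemble variance and squared forecast error. The function name and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_lvc(ensemble_variance, squared_error):
    """Ordinary least-squares fit of squared forecast error against ensemble
    variance. Returns the LVC slope and y intercept; for a perfectly reliable
    ensemble of very large size the expected slope is 1 and the intercept 0."""
    slope, intercept = np.polyfit(ensemble_variance, squared_error, deg=1)
    return slope, intercept

# Synthetic, illustrative data: per-forecast ensemble variances and squared errors
# drawn so that the true error variance equals the ensemble variance.
rng = np.random.default_rng(0)
ens_var = rng.gamma(shape=2.0, scale=1.0, size=5000)
sq_err = ens_var * rng.chisquare(df=1, size=5000)

slope, intercept = fit_lvc(ens_var, sq_err)
print(f"LVC slope = {slope:.2f}, intercept = {intercept:.2f}")
```

In this idealized sketch the regressor is the exact ensemble variance, so the fitted slope sits near 1; the paper's point is that when the variance itself is estimated from a finite ensemble, sampling noise pulls the slope below its theoretical value unless an adjustment is applied.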

2017 ◽  
Vol 146 (1) ◽  
pp. 49-62 ◽  
Author(s):  
Sam Hatfield ◽  
Aneesh Subramanian ◽  
Tim Palmer ◽  
Peter Düben

Abstract A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain because of the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, computational resources can be redistributed toward, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localization, lowering precision could actually permit an improvement in the accuracy of weather forecasts. Here, this idea is tested on an ensemble data assimilation system comprising the Lorenz ’96 toy atmospheric model and the ensemble square root filter. The system is run at double-, single-, and half-precision (the latter using an emulation tool), and the performance of each precision is measured through mean error statistics and rank histograms. The sensitivity of these results to the observation error and the length of the observation window is addressed. Then, by reinvesting the saved computational resources from reducing precision into the ensemble size, assimilation error can be reduced for (hypothetically) no extra cost. This results in increased forecasting skill relative to double-precision assimilation.
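
The precision comparison can be sketched with NumPy dtypes standing in for the half-precision emulation tool used in the study. Everything below (forward-Euler stepping, step size, forcing, perturbation) is an illustrative assumption rather than the paper's setup.

```python
import numpy as np

def lorenz96_step(x, forcing=8.0, dt=0.01):
    """One forward-Euler step of the Lorenz '96 model, kept in the dtype of x."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing
    return (x + dt * dxdt).astype(x.dtype)

# Compare a double-precision trajectory with a half-precision one.
n_vars, n_steps = 40, 500
x64 = np.full(n_vars, 8.0, dtype=np.float64)
x64[0] += 0.01                      # small perturbation to trigger chaotic growth
x16 = x64.astype(np.float16)

for _ in range(n_steps):
    x64 = lorenz96_step(x64)
    x16 = lorenz96_step(x16)

rms = np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2))
print("RMS difference after 500 steps:", rms)
```

The divergence of the two trajectories gives a feel for the size of rounding error relative to the chaotic error growth that data assimilation already has to tolerate.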


1977 ◽  
Vol 68 (3) ◽  
pp. 265-274
Author(s):  
N. Dahlmann ◽  
W. Schlegel ◽  
K. H. Hölzer ◽  
G. Hopfeld

2015 ◽  
Vol 143 (12) ◽  
pp. 4847-4864 ◽  
Author(s):  
Mats Hamrud ◽  
Massimo Bonavita ◽  
Lars Isaksen

Abstract The desire to do detailed comparisons between variational and more scalable ensemble-based data assimilation systems in a semioperational environment has led to the development of a state-of-the-art EnKF system at ECMWF. A broad description of the ECMWF EnKF is given in this paper, focusing on highlighting differences compared to standard EnKF practice. In particular, a discussion of the novel algorithm used to control imbalances between the mass and wind fields in the EnKF analysis is given. The scalability and computational properties of the EnKF are reviewed and the implementation choices adopted at ECMWF described. The sensitivity of the ECMWF EnKF to ensemble size, horizontal resolution, and representation of model errors is also discussed. A comparison with 4DVar will be found in Part II of this two-part study.
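
For readers unfamiliar with the method, a generic textbook perturbed-observation EnKF analysis step is sketched below. It is not the ECMWF implementation described in the paper, and all names are illustrative.

```python
import numpy as np

def enkf_analysis(ens, obs, obs_err_var, h_operator):
    """Generic perturbed-observation EnKF analysis step.

    ens         : (n_state, n_members) forecast ensemble
    obs         : (n_obs,) observation vector
    obs_err_var : scalar observation-error variance
    h_operator  : maps the state ensemble to observation space
    """
    n_ens = ens.shape[1]
    hx = h_operator(ens)                                   # obs-space ensemble
    x_pert = ens - ens.mean(axis=1, keepdims=True)
    hx_pert = hx - hx.mean(axis=1, keepdims=True)
    pxh = x_pert @ hx_pert.T / (n_ens - 1)                 # state-obs cross covariance
    phh = hx_pert @ hx_pert.T / (n_ens - 1)                # obs-space covariance
    r = np.eye(len(obs)) * obs_err_var
    gain = pxh @ np.linalg.inv(phh + r)                    # Kalman gain
    perturbed_obs = obs[:, None] + np.random.default_rng(0).normal(
        scale=np.sqrt(obs_err_var), size=(len(obs), n_ens))
    return ens + gain @ (perturbed_obs - hx)
```

The ECMWF system adds, among other things, the imbalance-control algorithm, localization, and model-error representation discussed in the abstract; none of that is captured by this minimal form.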


2011 ◽  
Vol 335-336 ◽  
pp. 255-259
Author(s):  
Xi Wang Wu ◽  
Jian Zhong Xiao ◽  
Feng Xia ◽  
Yong Gang Hu ◽  
Zhou Peng

The key problem in preparing carbon nanotube (CNT) reinforced ceramic matrix composites is how to break up massive agglomerates of CNTs and disperse the CNTs uniformly. We obtain a CNTs-Al2O3 composite powder by shear treatment of a melted CNTs-Al2O3-agents mixture. Microstructure observations of the CNTs-Al2O3 composite powder show that the CNTs can be dispersed uniformly by the shearing process, and the rheological results confirm this conclusion. Based on rheological theory, we build an ideal dispersion model of the CNTs-Al2O3 suspension system and discuss the dispersion mechanism.


2012 ◽  
Vol 25 (2) ◽  
pp. 459-472 ◽  
Author(s):  
Angeline G. Pendergrass ◽  
Gregory J. Hakim ◽  
David S. Battisti ◽  
Gerard Roe

Abstract A central issue for understanding past climates involves the use of sparse time-integrated data to recover the physical properties of the coupled climate system. This issue is explored in a simple model of the midlatitude climate system that has attributes consistent with the observed climate. A quasigeostrophic (QG) model thermally coupled to a slab ocean is used to approximate midlatitude coupled variability, and a variant of the ensemble Kalman filter is used to assimilate time-averaged observations. The dependence of reconstruction skill on coupling and thermal inertia is explored. Results from this model are compared with those for an even simpler two-variable linear stochastic model of midlatitude air–sea interaction, for which the assimilation problem can be solved semianalytically. Results for the QG model show that skill decreases as the length of time over which observations are averaged increases in both the atmosphere and ocean when normalized against the time-averaged climatological variance. Skill in the ocean increases with slab depth, as expected from thermal inertia arguments, but skill in the atmosphere decreases. An explanation of this counterintuitive result derives from an analytical expression for the forecast error covariance in the two-variable stochastic model, which shows that the ratio of noise to total error increases with slab ocean depth. Essentially, noise becomes trapped in the atmosphere by a thermally stiffer ocean, which dominates the decrease in initial condition error owing to improved skill in the ocean. Increasing coupling strength in the QG model yields higher skill in the atmosphere and lower skill in the ocean, as the atmosphere accesses the longer ocean memory and the ocean accesses more atmospheric high-frequency “noise.” The two-variable stochastic model fails to capture this effect, showing decreasing skill in both the atmosphere and ocean for increased coupling strength, due to an increase in the ratio of noise to the forecast error variance. Implications for the potential for data assimilation to improve climate reconstructions are discussed.
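
To make the "even simpler two-variable linear stochastic model" concrete, here is a hedged sketch of a coupled atmosphere/slab-ocean system of that general kind. The parameter names and values are placeholders, not those used in the study.

```python
import numpy as np

def simulate_coupled_slab(n_steps=20000, dt=0.1, coupling=0.5,
                          damping_atm=1.0, damping_ocn=0.1, noise_atm=1.0):
    """Illustrative two-variable linear stochastic model of atmosphere (ta)
    and slab-ocean (to) temperature anomalies, integrated by Euler-Maruyama."""
    rng = np.random.default_rng(1)
    ta, to = 0.0, 0.0
    out = np.empty((n_steps, 2))
    for k in range(n_steps):
        # Atmosphere: fast damping, coupling to the ocean, white-noise forcing.
        ta += dt * (-damping_atm * ta + coupling * (to - ta)) \
              + np.sqrt(dt) * noise_atm * rng.standard_normal()
        # Slab ocean: slow damping (large thermal inertia), driven by the atmosphere.
        to += dt * (-damping_ocn * to + coupling * (ta - to))
        out[k] = ta, to
    return out

series = simulate_coupled_slab()
print("atm variance:", series[:, 0].var(), " ocn variance:", series[:, 1].var())
```

Increasing the slab's thermal inertia (smaller ocean damping) or the coupling strength in a toy system like this changes how much atmospheric "noise" is stored in and fed back from the ocean, which is the mechanism the abstract invokes to explain the reconstruction-skill results.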


PEDIATRICS ◽  
1968 ◽  
Vol 41 (6) ◽  
pp. 1047-1054 ◽  
Author(s):  
Jerold Lucey ◽  
Mario Ferreiro ◽  
Jean Hewitt

The ideal treatment for hyperbilirubinemia of prematurity would be a safe and simple method for preventing its occurrence. In 1958 it was first demonstrated that serum bilirubin concentrations of newborn infants can be reduced by exposure to light. This treatment has not been widely used because of doubts as to its effectiveness and concern for the possible toxicity of the photochemical decomposition products of bilirubin. Recent experimental evidence indicates that these products are non-toxic. A controlled clinical trial has been carried out among 111 premature infants to test the effectiveness of artificial blue light in preventing hyperbilirubinemia of prematurity. Treated infants were placed in light from 12 to 144 hours of age and serial bilirubin determinations were carried out. The control (58 infants) and treated (53 infants) groups were comparable with respect to birth weight, gestational age, fluid intake, and weight loss. The results indicate a statistically significant difference between the groups. By taking advantage of this alternate route of elimination of bilirubin in the newborn infant, it should be possible to greatly reduce the need for exchange transfusion for hyperbilirubinemia of prematurity.


1972 ◽  
Vol 9 (8) ◽  
pp. 1014-1029 ◽  
Author(s):  
G. Poupinet

A study of the group velocity of PL for about fifty paths in Canada has been made. It is difficult to measure the dispersion of PL at long periods because two Airy phases arrive at the beginning of the wave train. It is also concluded that, like Rayleigh waves, PL waves cannot really give more than an S-velocity distribution, because the partial derivatives in SV are too large compared to those in P for the period range where a reliable estimate of the dispersion can be obtained. The different dispersion curves are interpreted by looking for lateral variations of PL dispersion. As these curves have only one or two degrees of freedom, we label each curve with an index of dispersion. As in Santo's studies, this index is attributed to each region crossed by fitting the propagation times for a given period. Diagrams are then used to give the variation of the index with the average S velocity and the depth of the Moho. The structures found by this rather simple method are well correlated with tectonic regions and gravity measurements.
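
The pure-path idea of attributing a dispersion index to each crossed region by fitting propagation times can be illustrated, very loosely, as a linear least-squares problem. The regions, path lengths, and times below are entirely made up, and a per-region slowness is used here as a stand-in for the dispersion index.

```python
import numpy as np

# Hypothetical pure-path regionalization: each path's propagation time at a
# given period is modelled as the sum of per-region slownesses weighted by
# the path length inside each region (all values below are invented).
path_lengths = np.array([[800.,  200.,   0.],     # km travelled in regions 1-3
                         [300.,  700., 100.],
                         [  0.,  400., 900.],
                         [500.,  500., 300.]])
observed_times = np.array([310., 363., 392., 409.])   # seconds, illustrative

# Least-squares estimate of one slowness (inverse group velocity) per region.
region_slowness, *_ = np.linalg.lstsq(path_lengths, observed_times, rcond=None)
print("group velocity per region (km/s):", 1.0 / region_slowness)
```

The actual study works with an index read from families of dispersion curves and interpretation diagrams rather than a direct slowness inversion, but the fitting step is of this overdetermined linear form.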


2015 ◽  
Vol 143 (10) ◽  
pp. 3931-3947 ◽  
Author(s):  
Benjamin Ménétrier ◽  
Thomas Auligné

Abstract Localization and hybridization are two methods used in ensemble data assimilation to improve the accuracy of sample covariances. It is shown in this paper that it is beneficial to consider them jointly in the framework of linear filtering of sample covariances. Following previous work on localization, an objective method is provided to optimize both localization and hybridization coefficients simultaneously. Theoretical and experimental evidence shows that if optimal weights are used, localized-hybridized sample covariances are always more accurate than their localized-only counterparts, whatever the static covariance matrix specified for the hybridization. Experimental results obtained using a 1000-member ensemble as a reference show that the method developed in this paper can efficiently provide localization and hybridization coefficients consistent with the variable, vertical level, and ensemble size. Spatially heterogeneous optimization is shown to improve the accuracy of the filtered covariances, and consideration of both vertical and horizontal covariances is proven to have an impact on the hybridization coefficients.
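
A minimal sketch of a localized-hybridized sample covariance of the form discussed above follows. The fixed weights are placeholders; the paper's contribution is to derive such coefficients objectively rather than prescribe them.

```python
import numpy as np

def hybrid_localized_covariance(ensemble, loc_matrix, static_cov,
                                beta_ens=0.8, beta_static=0.2):
    """Linearly filtered sample covariance: a Schur-localized ensemble
    covariance blended with a static covariance using fixed placeholder weights."""
    perturbations = ensemble - ensemble.mean(axis=1, keepdims=True)
    sample_cov = perturbations @ perturbations.T / (ensemble.shape[1] - 1)
    return beta_ens * (loc_matrix * sample_cov) + beta_static * static_cov

# Illustrative use on a small 1D grid with a Gaussian localization function.
n, n_ens = 50, 20
grid = np.arange(n)
loc = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 5.0) ** 2)
static = np.eye(n)
ens = np.random.default_rng(2).standard_normal((n, n_ens))
b_hybrid = hybrid_localized_covariance(ens, loc, static)
print(b_hybrid.shape)
```

Treating localization and hybridization jointly, as in the paper, amounts to choosing the localization function and the two weights together so that the filtered covariance is as close as possible to the true one.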


2020 ◽  
Vol 12 (3) ◽  
pp. 510
Author(s):  
Bashar Alsadik

Lidar technology is thriving today in applications such as autonomous navigation, mapping, and smart city technology. Lidars vary in many geometric and radiometric aspects: they can be multi-beam or single-beam, spinning or solid state, offer a full 360° field of view (FOV), and return single or multiple pulses per shot, among other characteristics. Users and developers in the mapping industry are continuously looking for newly released Lidars that offer high output density, coverage, and accuracy at a lower cost. Accordingly, every Lidar type should be evaluated carefully for its intended mapping purpose. This evaluation is not easy to carry out in practice because it requires having all the investigated Lidars in hand and integrated into a ready-to-use mapping system. Furthermore, a fair comparison requires that the test be applied in the same environment along the same travelling path, among other conditions. In this paper, we evaluate two state-of-the-art multi-beam Lidar types, the Ouster OS-1-64 and the Hesai Pandar64, for mapping applications. The evaluation is carried out in a simulation environment that approximates reality. The paper determines the ideal orientation angle for the two Lidars by assessing density, coverage, and accuracy, and presents clear performance quantifications and conclusions.
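
As a small sketch of the kind of density and coverage metrics used to compare simulated point clouds, the function below grids a ground-projected cloud and reports points per cell and the fraction of cells hit. The cell size and the synthetic point cloud are illustrative assumptions, not the paper's evaluation setup.

```python
import numpy as np

def density_and_coverage(points_xy, area_extent, cell_size=1.0):
    """Grid-based point density (points per cell) and coverage (fraction of
    cells containing at least one point) for a ground-projected point cloud."""
    (xmin, xmax), (ymin, ymax) = area_extent
    nx = int(np.ceil((xmax - xmin) / cell_size))
    ny = int(np.ceil((ymax - ymin) / cell_size))
    counts, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1],
                                  bins=[nx, ny],
                                  range=[[xmin, xmax], [ymin, ymax]])
    return counts.mean(), (counts > 0).mean()

# Illustrative: a synthetic cloud of 100k points over a 100 m x 100 m tile.
pts = np.random.default_rng(3).uniform(0.0, 100.0, size=(100_000, 2))
density, coverage = density_and_coverage(pts, ((0.0, 100.0), (0.0, 100.0)))
print(f"mean density: {density:.1f} pts/cell, coverage: {coverage:.1%}")
```

Running the same metrics on clouds simulated for each sensor and orientation angle is one simple way to quantify the trade-offs the abstract describes; accuracy assessment would additionally require a ground-truth reference.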

