Improving Multi-Model Ensemble Forecasts of Tropical Cyclone Intensity Using Bayesian Model Averaging

2018 ◽ Vol 32 (5) ◽ pp. 794-803 ◽ Author(s): Xiaojiang Song, Yuejian Zhu, Jiayi Peng, Hong Guan

2014 ◽ Vol 142 (8) ◽ pp. 2860-2878 ◽ Author(s): Ryan D. Torn

Abstract The value of assimilating targeted dropwindsonde observations intended to improve tropical cyclone intensity forecasts is evaluated using data collected during the Pre-Depression Investigation of Cloud-Systems in the Tropics (PREDICT) field project and a cycling ensemble Kalman filter. For each of the four initialization times studied, four different sets of Weather Research and Forecasting (WRF) Model ensemble forecasts are produced: one without any dropwindsonde data, one with all dropwindsonde data assimilated, one with a small subset of “targeted” dropwindsondes identified using the ensemble-based sensitivity method, and one with a set of randomly selected dropwindsondes. In all four cases, the assimilation of dropwindsondes leads to an improved intensity forecast, with the targeted-dropwindsonde experiment recovering at least 80% of the difference between the all-dropwindsonde and no-dropwindsonde experiments. By contrast, assimilating randomly selected dropwindsondes has a smaller impact in three of the four cases. In general, zonal and meridional wind observations at or below 700 hPa have the largest impact on the forecast, owing to the intensity forecast's strong sensitivity to the horizontal wind components at these levels and an ensemble standard deviation that is large relative to the assumed observation errors.
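The ensemble-based sensitivity method mentioned above regresses a scalar forecast metric (here, TC intensity) onto each initial-condition variable across the ensemble members. A minimal sketch, with synthetic data standing in for the WRF ensemble (the array shapes and the factor of 2 are illustrative assumptions, not values from the study):

```python
import numpy as np

def ensemble_sensitivity(forecast_metric, initial_state):
    """Ensemble-based sensitivity: for each initial-condition variable
    x_i, estimate dJ/dx_i as cov(x_i, J) / var(x_i) across members.

    forecast_metric: shape (n_members,)      -- e.g. forecast TC intensity
    initial_state:   shape (n_members, n_vars)
    """
    J = forecast_metric - forecast_metric.mean()
    X = initial_state - initial_state.mean(axis=0)
    n = len(J)
    cov = (X * J[:, None]).sum(axis=0) / (n - 1)  # cov(x_i, J)
    var = (X ** 2).sum(axis=0) / (n - 1)          # var(x_i)
    return cov / var

# Synthetic check: the metric depends only on the first variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))       # 50 members, 3 state variables
J = 2.0 * X[:, 0]                  # intensity driven by variable 0
sens = ensemble_sensitivity(J, X)  # sens[0] recovers the slope 2.0
```

Variables with large sensitivity and large ensemble spread relative to observation error are the ones where a targeted dropwindsonde pays off most.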


2007 ◽ Vol 30 (5) ◽ pp. 1371-1386 ◽ Author(s): Qingyun Duan, Newsha K. Ajami, Xiaogang Gao, Soroosh Sorooshian

2016 ◽ Vol 30 (16) ◽ pp. 2861-2879 ◽ Author(s): Gaofeng Zhu, Xin Li, Kun Zhang, Zhenyu Ding, Tuo Han, ...

2011 ◽ Vol 139 (5) ◽ pp. 1626-1636 ◽ Author(s): Richard M. Chmielecki, Adrian E. Raftery

Abstract Bayesian model averaging (BMA) is a statistical postprocessing technique that has been used in probabilistic weather forecasting to calibrate forecast ensembles and generate predictive probability density functions (PDFs) for weather quantities. The authors apply BMA to probabilistic visibility forecasting using a predictive PDF that is a mixture of discrete point mass and beta distribution components. Three approaches to constructing predictive PDFs for visibility are developed, each using BMA to postprocess an ensemble of visibility forecasts. In the first approach, the ensemble is generated by a translation algorithm that converts predicted concentrations of hydrometeorological variables into visibility. The second approach augments the raw ensemble visibility forecasts with model forecasts of relative humidity and quantitative precipitation. In the third approach, the ensemble members are generated from relative humidity and precipitation alone. These methods are applied to 12-h ensemble forecasts from 2007 to 2008 and are tested against verifying observations recorded at Automated Surface Observing Stations in the Pacific Northwest. Each of the three methods produces predictive PDFs that are calibrated and sharp with respect to both climatology and the raw ensemble.
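The mixture form described above, a continuous beta component plus a discrete point mass, can be sketched as follows. The parameterization (visibility rescaled to (0, 1), point mass placed at the reporting maximum) and all numeric values are illustrative assumptions, not the paper's fitted model:

```python
import math

def visibility_pdf(v, p_cont, a, b):
    """Density of the continuous part of the predictive distribution:
    a beta(a, b) density on (0, 1), scaled by the probability p_cont
    that visibility falls below the reporting maximum."""
    beta_norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return p_cont * beta_norm * v ** (a - 1) * (1.0 - v) ** (b - 1)

def point_mass_at_max(p_cont):
    """Discrete probability that visibility sits at or above the
    instrument's upper reporting limit (v = 1 after rescaling)."""
    return 1.0 - p_cont

# Illustrative parameters: 70% of the mass in the beta component.
p_max = point_mass_at_max(0.7)
density_mid = visibility_pdf(0.3, 0.7, 2.0, 5.0)
```

Under BMA, each ensemble member contributes one such mixture component, and the components are combined with skill-based weights.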


2013 ◽ Vol 141 (6) ◽ pp. 2107-2119 ◽ Author(s): J. McLean Sloughter, Tilmann Gneiting, Adrian E. Raftery

Abstract Probabilistic forecasts of wind vectors are becoming critical as interest grows in wind as a clean and renewable source of energy, in addition to a wide range of other uses, from aviation to recreational boating. Unlike other common forecasting problems, which deal with univariate quantities, statistical approaches to wind vector forecasting must be based on bivariate distributions. The prevailing paradigm in weather forecasting is to issue deterministic forecasts based on numerical weather prediction models. Uncertainty can then be assessed through ensemble forecasts, where multiple estimates of the current state of the atmosphere are used to generate a collection of deterministic predictions. Ensemble forecasts are often uncalibrated, however, and Bayesian model averaging (BMA) is a statistical way of postprocessing these forecast ensembles to create calibrated predictive probability density functions (PDFs). It represents the predictive PDF as a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights reflect the forecasts’ relative contributions to predictive skill over a training period. In this paper the authors extend the BMA methodology to use bivariate distributions, enabling them to provide probabilistic forecasts of wind vectors. The BMA method is applied to 48-h-ahead forecasts of wind vectors over the North American Pacific Northwest in 2003 using the University of Washington mesoscale ensemble and is shown to provide probabilistic forecasts that are better calibrated than the raw ensemble and sharper than probabilistic forecasts derived from climatology.
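The weighted-average construction described above can be sketched in its simplest univariate Gaussian form (the paper itself extends this to bivariate wind-vector distributions; the forecast values, weights, and spread below are illustrative assumptions):

```python
import numpy as np

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density at y: a weighted average of Gaussian
    kernels centered on the bias-corrected member forecasts f_k,
    p(y) = sum_k w_k * N(y; f_k, sigma^2)."""
    kernels = np.exp(-0.5 * ((y - forecasts) / sigma) ** 2) / (
        sigma * np.sqrt(2.0 * np.pi)
    )
    return float(np.dot(weights, kernels))

# Illustrative three-member ensemble.
forecasts = np.array([10.0, 12.0, 11.0])  # bias-corrected members
weights = np.array([0.5, 0.3, 0.2])       # skill weights, sum to 1
density = bma_pdf(11.0, forecasts, weights, sigma=1.5)
```

The weights are typically fitted by maximum likelihood (via EM) over a sliding training window, which is what allows the mixture to be better calibrated than the raw ensemble.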


2010 ◽ Vol 138 (11) ◽ pp. 4199-4211 ◽ Author(s): Maurice J. Schmeits, Kees J. Kok

Abstract Using a 20-yr ECMWF ensemble reforecast dataset of total precipitation and a 20-yr dataset of a dense precipitation observation network in the Netherlands, a comparison is made between the raw ensemble output, Bayesian model averaging (BMA), and extended logistic regression (LR). A previous study indicated that BMA and conventional LR are successful in calibrating multimodel ensemble forecasts of precipitation for a single forecast projection. However, a more elaborate comparison between these methods has not yet been made. This study compares the raw ensemble output, BMA, and extended LR for single-model ensemble reforecasts of precipitation, namely from the ECMWF ensemble prediction system (EPS). The raw EPS output turns out to be generally well calibrated up to 6 forecast days when compared to the area-mean 24-h precipitation sum. Surprisingly, BMA is less skillful than the raw EPS output from forecast day 3 onward. This is due to the bias correction in BMA, which applies model output statistics to individual ensemble members. As a result, the spread of the bias-corrected ensemble members is decreased, especially for the longer forecast projections. Here, an additive bias correction is applied instead, and the equation for the probability of precipitation in BMA is also changed. These modifications to BMA are referred to as “modified BMA” and lead to a significant improvement in the skill of BMA for the longer projections. If the area-maximum 24-h precipitation sum is used as a predictand, both modified BMA and extended LR improve the raw EPS output significantly for the first 5 forecast days. However, the difference in skill between modified BMA and extended LR does not seem to be statistically significant. Yet, extended LR might be preferred, because incorporating predictors that are different from the predictand is straightforward, in contrast to BMA.
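Extended logistic regression, the comparison method above, fits one equation that yields a full predictive CDF by including the threshold itself in the linear predictor. A minimal sketch, assuming the common g(q) = c·√q threshold transform; the coefficients and inputs below are hypothetical, not the study's fitted values:

```python
import math

def extended_lr_cdf(q, ens_mean, a, b, c):
    """Extended logistic regression: P(precip <= q) from a single
    fitted equation valid for any threshold q, using the linear
    predictor z = a + b * ens_mean + c * sqrt(q)."""
    z = a + b * ens_mean + c * math.sqrt(q)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients; b < 0 so a wetter ensemble mean shifts
# probability mass toward higher precipitation amounts.
p_light = extended_lr_cdf(1.0, 5.0, a=-1.0, b=-0.1, c=0.8)
p_heavy = extended_lr_cdf(9.0, 5.0, a=-1.0, b=-0.1, c=0.8)
```

Because any predictor can enter the linear term, adding covariates other than the predictand is straightforward, which is the practical advantage over BMA noted in the abstract.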


2007 ◽ Vol 135 (4) ◽ pp. 1364-1385 ◽ Author(s): Laurence J. Wilson, Stephane Beauregard, Adrian E. Raftery, Richard Verret

Abstract Bayesian model averaging (BMA) has recently been proposed as a way of correcting underdispersion in ensemble forecasts. BMA is a standard statistical procedure for combining predictive distributions from different sources. The output of BMA is a probability density function (pdf), which is a weighted average of pdfs centered on the bias-corrected forecasts. The BMA weights reflect the relative contributions of the component models to the predictive skill over a training sample. The variance of the BMA pdf is made up of two components: the between-model variance and the within-model error variance, both estimated from the training sample. This paper describes the results of experiments with BMA to calibrate surface temperature forecasts from the 16-member Canadian ensemble system. Using one year of ensemble forecasts, BMA was applied for different training periods ranging from 25 to 80 days. The method was trained on the most recent forecast period, then applied to the next day’s forecasts as an independent sample. This process was repeated through the year, and forecast quality was evaluated using rank histograms, the continuous ranked probability score, and the continuous ranked probability skill score. An examination of the BMA weights provided a useful comparative evaluation of the component models, both for the ensemble itself and for the ensemble augmented with the unperturbed control forecast and the higher-resolution deterministic forecast. Training periods around 40 days provided a good calibration of the ensemble dispersion. Both full regression and simple bias-correction methods worked well to correct the bias, except that the full regression failed to completely remove seasonal trend biases in spring and fall. Simple correction of the bias was sufficient to produce positive forecast skill out to 10 days with respect to climatology, which was improved by the BMA. The addition of the control forecast and the full-resolution model forecast to the ensemble produced modest improvement in the forecasts for ranges out to about 7 days. Finally, BMA produced significantly narrower 90% prediction intervals compared to a simple Gaussian bias correction, while achieving similar overall accuracy.
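The two-component variance described above, between-model spread of the bias-corrected forecasts plus weighted within-model error variance, can be sketched directly (the forecast values, weights, and standard deviations are illustrative assumptions):

```python
import numpy as np

def bma_variance(forecasts, weights, sigmas):
    """Variance of the BMA mixture:
    Var = sum_k w_k * (f_k - fbar)^2   (between-model variance)
        + sum_k w_k * sigma_k^2        (within-model error variance),
    where fbar = sum_k w_k * f_k is the mixture mean."""
    mean = np.dot(weights, forecasts)
    between = np.dot(weights, (forecasts - mean) ** 2)
    within = np.dot(weights, np.asarray(sigmas) ** 2)
    return float(between + within)

# Illustrative two-member case: members at 0 and 2 with unit spread.
weights = np.array([0.5, 0.5])
forecasts = np.array([0.0, 2.0])   # bias-corrected member forecasts
sigmas = np.array([1.0, 1.0])      # within-model standard deviations
total_var = bma_variance(forecasts, weights, sigmas)
```

Both terms are estimated from the training sample, which is how BMA widens an underdispersive ensemble: even identical member forecasts retain the within-model variance floor.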

