Exploring Copula-based Bayesian Model Averaging with multiple ANNs for PM2.5 ensemble forecasts

2020 ◽  
Vol 263 ◽  
pp. 121528 ◽  
Author(s):  
Yanlai Zhou ◽  
Fi-John Chang ◽  
Hua Chen ◽  
Hong Li


2011 ◽  
Vol 139 (5) ◽  
pp. 1626-1636 ◽  
Author(s):  
Richard M. Chmielecki ◽  
Adrian E. Raftery

Abstract Bayesian model averaging (BMA) is a statistical postprocessing technique that has been used in probabilistic weather forecasting to calibrate forecast ensembles and generate predictive probability density functions (PDFs) for weather quantities. The authors apply BMA to probabilistic visibility forecasting using a predictive PDF that is a mixture of discrete point mass and beta distribution components. Three approaches to constructing predictive PDFs for visibility are developed, each using BMA to postprocess an ensemble of visibility forecasts. In the first approach, the ensemble is generated by a translation algorithm that converts predicted concentrations of hydrometeorological variables into visibility. The second approach augments the raw ensemble visibility forecasts with model forecasts of relative humidity and quantitative precipitation. In the third approach, the ensemble members are generated from relative humidity and precipitation forecasts alone. These methods are applied to 12-h ensemble forecasts from 2007 to 2008 and are tested against verifying observations recorded at Automated Surface Observing Stations in the Pacific Northwest. Each of the three methods produces predictive PDFs that are calibrated and sharp with respect to both climatology and the raw ensemble.
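To make the mixture structure concrete, the sketch below evaluates a BMA predictive distribution of this form, assuming the point-mass component sits at a maximum reported visibility of 10 miles and the continuous part is a beta distribution rescaled to (0, 10); the weights and parameters are illustrative placeholders, not values from the study.

```python
# Hedged sketch of a BMA predictive CDF for visibility built from a point mass
# at the maximum reported value plus beta components (all parameters invented).
import numpy as np
from scipy.stats import beta

V_MAX = 10.0  # assumed maximum reported visibility, in miles

def bma_visibility_cdf(v, weights, p_max, a, b):
    """P(visibility <= v) under a weighted mixture of per-member components,
    each a point mass at V_MAX plus a beta density rescaled to (0, V_MAX)."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    cdf = np.zeros_like(v)
    for w_k, p_k, a_k, b_k in zip(weights, p_max, a, b):
        cont = beta.cdf(np.clip(v / V_MAX, 0.0, 1.0), a_k, b_k)
        cdf += w_k * ((1.0 - p_k) * cont + p_k * (v >= V_MAX))
    return cdf

# Illustrative three-member example
weights = [0.5, 0.3, 0.2]          # BMA weights (sum to 1)
p_max   = [0.6, 0.4, 0.5]          # per-member probability of maximum visibility
a_params, b_params = [2.0, 1.5, 2.5], [1.0, 1.2, 0.8]
print(bma_visibility_cdf([1.0, 5.0, 10.0], weights, p_max, a_params, b_params))
```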


2013 ◽  
Vol 141 (6) ◽  
pp. 2107-2119 ◽  
Author(s):  
J. McLean Sloughter ◽  
Tilmann Gneiting ◽  
Adrian E. Raftery

Abstract Probabilistic forecasts of wind vectors are becoming critical as interest grows in wind as a clean and renewable source of energy, in addition to a wide range of other uses, from aviation to recreational boating. Unlike other common forecasting problems, which deal with univariate quantities, statistical approaches to wind vector forecasting must be based on bivariate distributions. The prevailing paradigm in weather forecasting is to issue deterministic forecasts based on numerical weather prediction models. Uncertainty can then be assessed through ensemble forecasts, where multiple estimates of the current state of the atmosphere are used to generate a collection of deterministic predictions. Ensemble forecasts are often uncalibrated, however, and Bayesian model averaging (BMA) is a statistical way of postprocessing these forecast ensembles to create calibrated predictive probability density functions (PDFs). It represents the predictive PDF as a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights reflect the forecasts’ relative contributions to predictive skill over a training period. In this paper the authors extend the BMA methodology to use bivariate distributions, enabling them to provide probabilistic forecasts of wind vectors. The BMA method is applied to 48-h-ahead forecasts of wind vectors over the North American Pacific Northwest in 2003 using the University of Washington mesoscale ensemble and is shown to provide better-calibrated probabilistic forecasts than the raw ensemble, which are also sharper than probabilistic forecasts derived from climatology.
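As a rough illustration of the bivariate extension, the sketch below evaluates a BMA predictive density for a wind vector as a weighted mixture of bivariate normal components centered on bias-corrected member forecasts; the paper's actual component distributions and parameter estimation are more involved, and all numbers here are hypothetical.

```python
# Hedged sketch: BMA predictive density for a wind vector (u, v) as a weighted
# mixture of bivariate normals centered on bias-corrected member forecasts.
# The component family, weights, and covariance below are illustrative only.
import numpy as np
from scipy.stats import multivariate_normal

def bma_wind_pdf(uv, member_forecasts, weights, cov):
    """Evaluate the mixture density at the wind vector uv = (u, v)."""
    return sum(w * multivariate_normal.pdf(uv, mean=f, cov=cov)
               for f, w in zip(member_forecasts, weights))

forecasts = np.array([[3.0, -1.0], [2.5, -0.5], [4.0, -1.5]])  # (u, v) in m/s
weights   = np.array([0.40, 0.35, 0.25])                       # from a training period
cov       = np.array([[2.0, 0.3], [0.3, 2.0]])                 # shared component spread

print(bma_wind_pdf([3.0, -1.0], forecasts, weights, cov))
```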


2010 ◽  
Vol 138 (11) ◽  
pp. 4199-4211 ◽  
Author(s):  
Maurice J. Schmeits ◽  
Kees J. Kok

Abstract Using a 20-yr ECMWF ensemble reforecast dataset of total precipitation and a 20-yr dataset from a dense precipitation observation network in the Netherlands, a comparison is made between the raw ensemble output, Bayesian model averaging (BMA), and extended logistic regression (LR). A previous study indicated that BMA and conventional LR are successful in calibrating multimodel ensemble forecasts of precipitation for a single forecast projection. However, a more elaborate comparison between these methods has not yet been made. This study compares the raw ensemble output, BMA, and extended LR for single-model ensemble reforecasts of precipitation, namely from the ECMWF ensemble prediction system (EPS). The raw EPS output turns out to be generally well calibrated up to 6 forecast days when verified against the area-mean 24-h precipitation sum. Surprisingly, BMA is less skillful than the raw EPS output from forecast day 3 onward. This is due to the bias correction in BMA, which applies model output statistics to individual ensemble members. As a result, the spread of the bias-corrected ensemble members is decreased, especially for the longer forecast projections. Here, an additive bias correction is applied instead, and the equation for the probability of precipitation in BMA is also changed. These modifications to BMA are referred to as “modified BMA” and lead to a significant improvement in the skill of BMA for the longer projections. If the area-maximum 24-h precipitation sum is used as the predictand, both modified BMA and extended LR improve on the raw EPS output significantly for the first 5 forecast days. However, the difference in skill between modified BMA and extended LR does not seem to be statistically significant. Still, extended LR might be preferred because, in contrast to BMA, it can straightforwardly incorporate predictors that differ from the predictand.
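For readers unfamiliar with extended LR, the sketch below shows the basic idea of letting the precipitation threshold enter the regression as a predictor, so a single fitted equation yields a full, monotone predictive CDF; the square-root transforms and coefficients are assumptions for illustration, not values estimated from the reforecast dataset.

```python
# Hedged sketch of extended logistic regression: the threshold q is itself a
# predictor, so one equation gives P(precip <= q) for any q and the resulting
# CDF is monotone in q. Transforms and coefficients are illustrative.
import numpy as np

def extended_lr_cdf(q, ens_mean, a0=0.2, a_mean=-0.9, a_thresh=1.1):
    """P(24-h precipitation <= q mm) given the ensemble-mean forecast (mm)."""
    z = a0 + a_mean * np.sqrt(ens_mean) + a_thresh * np.sqrt(q)
    return 1.0 / (1.0 + np.exp(-z))

# Probability of exceeding 5 mm when the ensemble mean is 8 mm:
print(1.0 - extended_lr_cdf(5.0, 8.0))
```

Because extra predictors simply add terms to z, incorporating variables other than the predictand is straightforward, which is the practical advantage noted above.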


2007 ◽  
Vol 135 (4) ◽  
pp. 1364-1385 ◽  
Author(s):  
Laurence J. Wilson ◽  
Stephane Beauregard ◽  
Adrian E. Raftery ◽  
Richard Verret

Abstract Bayesian model averaging (BMA) has recently been proposed as a way of correcting underdispersion in ensemble forecasts. BMA is a standard statistical procedure for combining predictive distributions from different sources. The output of BMA is a probability density function (pdf), which is a weighted average of pdfs centered on the bias-corrected forecasts. The BMA weights reflect the relative contributions of the component models to the predictive skill over a training sample. The variance of the BMA pdf is made up of two components: the between-model variance and the within-model error variance, both estimated from the training sample. This paper describes the results of experiments with BMA to calibrate surface temperature forecasts from the 16-member Canadian ensemble system. Using one year of ensemble forecasts, BMA was applied for different training periods ranging from 25 to 80 days. The method was trained on the most recent forecast period and then applied to the next day’s forecasts as an independent sample. This process was repeated through the year, and forecast quality was evaluated using rank histograms, the continuous ranked probability score, and the continuous ranked probability skill score. An examination of the BMA weights provided a useful comparative evaluation of the component models, both for the ensemble itself and for the ensemble augmented with the unperturbed control forecast and the higher-resolution deterministic forecast. Training periods of around 40 days provided good calibration of the ensemble dispersion. Both full regression and simple bias-correction methods worked well to correct the bias, except that the full regression failed to completely remove seasonal trend biases in spring and fall. Simple correction of the bias was sufficient to produce positive forecast skill out to 10 days with respect to climatology, which BMA further improved. The addition of the control forecast and the full-resolution model forecast to the ensemble produced modest improvement in the forecasts for ranges out to about 7 days. Finally, BMA produced significantly narrower 90% prediction intervals than a simple Gaussian bias correction, while achieving similar overall accuracy.
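The following sketch illustrates the Gaussian form of the BMA predictive pdf described above and its variance decomposition into between-model and within-model parts; the weights, bias-corrected forecasts, and spread are invented for illustration rather than taken from the Canadian ensemble experiments.

```python
# Hedged sketch of a Gaussian BMA predictive PDF for temperature and its
# variance split into between-model and within-model parts (numbers invented).
import numpy as np
from scipy.stats import norm

def bma_pdf(y, forecasts, weights, sigma):
    """Weighted mixture of normals centered on bias-corrected member forecasts."""
    return sum(w * norm.pdf(y, loc=f, scale=sigma)
               for f, w in zip(forecasts, weights))

def bma_variance(forecasts, weights, sigma):
    """Total predictive variance = between-model spread + within-model error."""
    forecasts, weights = np.asarray(forecasts), np.asarray(weights)
    mixture_mean = np.dot(weights, forecasts)
    between = np.dot(weights, (forecasts - mixture_mean) ** 2)
    within = sigma ** 2  # common within-model error variance in this sketch
    return between + within

forecasts = [271.3, 272.1, 270.8]  # bias-corrected member forecasts (K)
weights   = [0.5, 0.3, 0.2]        # BMA weights from a ~40-day training window
sigma     = 1.8                    # within-model standard deviation (K)
print(bma_pdf(271.5, forecasts, weights, sigma),
      bma_variance(forecasts, weights, sigma))
```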


2008 ◽  
Vol 136 (12) ◽  
pp. 4641-4652 ◽  
Author(s):  
Craig H. Bishop ◽  
Kevin T. Shanley

Abstract Methods of ensemble postprocessing in which continuous probability density functions are constructed from ensemble forecasts by centering functions around each of the ensemble members have come to be called Bayesian model averaging (BMA) or “dressing” methods. Here idealized ensemble forecasting experiments are used to show that these methods are liable to produce systematically unreliable probability forecasts of climatologically extreme weather. It is argued that the failure of these methods is linked to an assumption that the distribution of truth given the forecast can be sampled by adding stochastic perturbations to state estimates, even when these state estimates have a realistic climate. It is shown that this assumption is incorrect, and it is argued that such dressing techniques better describe the likelihood distribution of historical ensemble-mean forecasts given the truth for certain values of the truth. This paradigm shift leads to an approach that incorporates prior climatological information into BMA ensemble postprocessing through Bayes’s theorem. This new approach is shown to cure BMA’s ill treatment of extreme weather by providing a posterior BMA distribution whose probabilistic forecasts are reliable for both extreme and nonextreme weather forecasts.
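A schematic of the proposed use of Bayes's theorem might look like the sketch below: treat an ensemble-based density as a likelihood of the forecast given candidate values of the truth, multiply by a climatological prior over the truth, and renormalize. The normal error model and climatology used here are purely illustrative stand-ins for the paper's idealized setup.

```python
# Hedged sketch of folding a climatological prior into ensemble postprocessing
# via Bayes's theorem: likelihood of the forecast given candidate truths times
# a climatological prior over the truth, renormalized on a grid. All
# distributions and numbers are illustrative stand-ins.
import numpy as np
from scipy.stats import norm

truth_grid = np.linspace(-30.0, 50.0, 2001)            # candidate truths (deg C)
prior = norm.pdf(truth_grid, loc=12.0, scale=9.0)      # assumed climatology

ens_mean_forecast = 35.0                               # a climatologically extreme forecast
# Simple normal error model for the forecast given each candidate truth
likelihood = norm.pdf(ens_mean_forecast, loc=truth_grid, scale=3.0)

posterior = likelihood * prior
posterior /= np.trapz(posterior, truth_grid)           # normalize to a proper PDF

# Posterior probability that the truth exceeds 38 deg C
mask = truth_grid > 38.0
print(np.trapz(posterior[mask], truth_grid[mask]))
```

Because the climatological prior downweights implausibly extreme truths, the posterior avoids the overconfident tail probabilities that dressing alone can produce.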


Author(s):  
Lorenzo Bencivelli ◽  
Massimiliano Giuseppe Marcellino ◽  
Gianluca Moretti
