forecast uncertainty
Recently Published Documents

TOTAL DOCUMENTS: 278 (five years: 77)
H-INDEX: 31 (five years: 4)

2021 · Vol 59 (4) · pp. 1135-1190
Author(s): Barbara Rossi

This article provides guidance on how to evaluate and improve the forecasting ability of models in the presence of instabilities, which are widespread in economic time series. Empirically relevant examples include predicting the financial crisis of 2007–08, as well as, more broadly, fluctuations in asset prices, exchange rates, output growth, and inflation. In the context of unstable environments, I discuss how to assess models’ forecasting ability; how to robustify models’ estimation; and how to correctly report measures of forecast uncertainty. Importantly, and perhaps surprisingly, breaks in models’ parameters are neither necessary nor sufficient to generate time variation in models’ forecasting performance: thus, one should not test for breaks in models’ parameters, but rather evaluate their forecasting ability in a robust way. In addition, local measures of models’ forecasting performance are more appropriate than traditional, average measures. (JEL C51, C53, E31, E32, E37, F37)
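Rossi's distinction between local and traditional average measures of forecasting performance can be illustrated with a small, purely synthetic sketch (the error series, window length, and RMSE loss below are illustrative choices, not taken from the article): a single full-sample RMSE hides a mid-sample deterioration that a rolling-window RMSE reveals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated forecast errors: the model deteriorates halfway through the sample.
errors = np.concatenate([rng.normal(0, 1.0, 100), rng.normal(0, 2.0, 100)])

# Traditional average measure: one RMSE over the whole evaluation sample.
average_rmse = np.sqrt(np.mean(errors**2))

# Local measure: RMSE over a rolling window, revealing time variation
# that the single average number hides.
window = 40
local_rmse = np.array([
    np.sqrt(np.mean(errors[t:t + window]**2))
    for t in range(len(errors) - window + 1)
])

print(f"average RMSE: {average_rmse:.2f}")
print(f"local RMSE, first window: {local_rmse[0]:.2f}")
print(f"local RMSE, last window:  {local_rmse[-1]:.2f}")
```

Note how no parameter of the data-generating process needs to "break" for this comparison to be informative; the local measure simply tracks performance as it evolves.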


Author(s): Chin-Hung Chen, Kao-Shen Chung, Shu-Chih Yang, Li-Hsin Chen, Pay-Liam Lin, ...

Abstract. A mesoscale convective system that occurred in southwestern Taiwan on 15 June 2008 is simulated using convection-allowing ensemble forecasts to investigate the forecast uncertainty associated with four microphysics schemes: the Goddard Cumulus Ensemble (GCE), Morrison (MOR), WRF single-moment 6-class (WSM6), and WRF double-moment 6-class (WDM6) schemes. First, the essential features of the convective structure, hydrometeor distribution, and microphysical tendencies of the different schemes are presented through deterministic forecasts. Second, ensemble forecasts with the same initial conditions are employed to estimate the forecast uncertainty produced by ensembles with a fixed microphysics scheme. GCE exhibits the largest spread in most state variables because it has the most efficient phase conversion between water species; MOR, by contrast, yields the least spread. WSM6 and WDM6 have similar vertical spread structures owing to their similar ice-phase formulae, but WDM6 produces more ensemble spread than WSM6 below the melting layer as a result of its double-moment treatment of warm-rain processes. Spectral analysis of the root-mean difference total energy (RMDTE) demonstrates upscale error growth for all four schemes. The RMDTE results reveal that, for this event, the GCE and WDM6 schemes are more sensitive to initial-condition uncertainty, whereas MOR and WSM6 are relatively insensitive to it. Overall, diabatic heating and cooling processes connect convective-scale cloud microphysics to the large-scale dynamic and thermodynamic fields, and they significantly affect the forecast-error signatures in this multiscale weather system.
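As an illustration of the RMDTE diagnostic, the difference total energy between two simulations is commonly defined as DTE = 0.5 (u'^2 + v'^2 + (cp/Tr) T'^2). The sketch below computes an RMDTE-style number over all member pairs of a toy ensemble; the grid size, member count, and reference temperature Tr = 270 K are assumptions for the example, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble: n members of (u, v, T) fields on a small grid.
n_members, ny, nx = 5, 32, 32
u = rng.normal(0, 1, (n_members, ny, nx))
v = rng.normal(0, 1, (n_members, ny, nx))
T = 280 + rng.normal(0, 0.5, (n_members, ny, nx))

cp, Tr = 1004.0, 270.0      # specific heat (J kg-1 K-1), reference temperature (K)
kappa = cp / Tr

def dte(i, j):
    """Difference total energy between members i and j at every grid point."""
    du, dv, dT = u[i] - u[j], v[i] - v[j], T[i] - T[j]
    return 0.5 * (du**2 + dv**2 + kappa * dT**2)

# RMDTE: root of the mean DTE over all member pairs and grid points.
pairs = [(i, j) for i in range(n_members) for j in range(i + 1, n_members)]
rmdte = np.sqrt(np.mean([dte(i, j) for i, j in pairs]))
print(f"RMDTE: {rmdte:.3f}")
```

In the study this quantity is additionally decomposed by spatial scale (spectral analysis) to diagnose upscale error growth; the sketch stops at the bulk number.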


2021
Author(s): Boxiao Li, Hemant Phale, Yanfen Zhang, Timothy Tokar, Xian-Huan Wen

Abstract. Design of Experiments (DoE) is one of the most commonly employed techniques in the petroleum industry for Assisted History Matching (AHM) and for uncertainty analysis of reservoir production forecasts. Although conceptually straightforward, DoE is often misused in practice because many of its statistical and modeling principles are not carefully followed. Our earlier paper (Li et al. 2019) detailed best practices in DoE-based AHM for brownfields. To the best of our knowledge, however, no study has summarized the common caveats and pitfalls in DoE-based production forecast uncertainty analysis for greenfields and history-matched brownfields. Our objective here is to summarize these caveats and pitfalls so that practitioners can apply the correct principles. Over 60 common pitfalls across all stages of a DoE workflow are summarized. Special attention is paid to three critical project transitions: (1) from static earth modeling to dynamic reservoir simulation; (2) from AHM to production forecasting; and (3) from analyzing subsurface uncertainties to analyzing field-development alternatives. Most pitfalls can be avoided by consistently following the statistical and modeling principles. Some, however, can trap even experienced engineers. For example, mistakes in handling the three transitions above can yield highly unreliable proxy models and sensitivity analyses: in the representative examples we study, they produce a proxy R2 below 0.2, versus above 0.9 when done correctly. Two improved experimental designs are created to resolve this challenge. Besides the technical pitfalls that are avoidable via robust statistical workflows, we also highlight the often more severe non-technical pitfalls that cannot be evaluated by measures such as R2, and we share thoughts on how to avoid them, especially during project framing and the three critical transitions.
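The proxy R2 diagnostic discussed above can be sketched with a toy example. The "simulator" function and its parameter names below are hypothetical stand-ins for a real reservoir simulator; the point is mechanical: fit a polynomial proxy to designed simulation runs by least squares and score it with R2. When the proxy basis captures the true response surface, R2 approaches 1.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "simulator": cumulative production as a function of two
# subsurface uncertainty parameters (names are illustrative only).
def simulator(porosity, perm_multiplier):
    return 100 * porosity + 30 * porosity * perm_multiplier + 5 * perm_multiplier**2

# A small random design over the two parameters.
n = 40
X = rng.uniform([0.1, 0.5], [0.3, 2.0], size=(n, 2))
y = simulator(X[:, 0], X[:, 1])

# Quadratic polynomial proxy fitted by least squares.
por, perm = X[:, 0], X[:, 1]
A = np.column_stack([np.ones(n), por, perm, por * perm, por**2, perm**2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Proxy R^2 against the simulator responses.
residuals = y - A @ coeffs
r2 = 1 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
print(f"proxy R^2: {r2:.3f}")
```

The pitfalls the paper describes (e.g. mixing responses from before and after a project transition in one fit) would show up here as the same computation returning a very low R2.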


2021
Author(s): Julian Francesco Quinting, Christian M. Grams

Abstract. Physical processes on the synoptic scale are important modulators of the large-scale extratropical circulation. In particular, rapidly ascending air streams in extratropical cyclones, so-called warm conveyor belts (WCBs), modulate the upper-tropospheric Rossby wave pattern and are sources and magnifiers of forecast uncertainty. From a process-oriented perspective, numerical weather prediction (NWP) and climate models should therefore adequately represent WCBs. Identifying WCBs usually involves Lagrangian air-parcel trajectories that ascend from the lower to the upper troposphere within two days. This requires numerical data at high spatial and temporal resolution, which are often not available from standard output, and the trajectory computations are expensive. This study introduces a novel framework that predicts the footprints of the WCB inflow, ascent, and outflow stages over the Northern Hemisphere from instantaneous gridded fields using convolutional neural networks (CNNs). With its comparatively low computational cost, and because it relies on standard model output alone, the new diagnostic enables the systematic investigation of WCBs in large data sets such as ensemble reforecasts or climate model projections, which are mostly unsuited for trajectory calculations. Building on insights from the logistic regression approach of a previous study, the CNNs are trained using a combination of meteorological parameters as predictors and trajectory-based WCB footprints as predictands. Validation against the trajectory-based data set confirms that the CNN models reliably replicate both the climatological frequency of WCBs and their footprints at instantaneous time steps, and that they significantly outperform the previously developed logistic regression models. Including time-lagged information on the occurrence of WCB ascent as a predictor for the inflow and outflow stages improves the models' skill considerably further. A companion study demonstrates versatile applications of the CNNs in different data sets, including the verification of WCBs in ensemble forecasts. Overall, the diagnostic shows how deep learning methods can be used to investigate the representation of weather systems and their related processes in NWP and climate models, shedding light on forecast uncertainty and systematic biases from a process-oriented perspective.
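The core input/output contract of such a diagnostic — gridded predictor fields in, a gridded footprint probability out — can be sketched schematically. This is not the authors' architecture: it is a single untrained convolutional layer with random weights in plain NumPy, included only to make the shapes and the probability thresholding concrete.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(field, kernel):
    """Naive 'same' 2-D convolution (zero padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(field, ((ph, ph), (pw, pw)))
    out = np.zeros_like(field)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Gridded predictors (standing in for standard-output meteorological fields).
ny, nx = 24, 24
predictors = rng.normal(0, 1, (2, ny, nx))   # two predictor fields

# One convolutional layer with a sigmoid output: each grid point gets a
# probability of belonging to a WCB footprint. Weights here are random;
# in practice they would be trained against trajectory-based labels.
kernels = rng.normal(0, 0.1, (2, 3, 3))
bias = -0.5
logits = sum(conv2d_same(predictors[c], kernels[c]) for c in range(2)) + bias
footprint_prob = sigmoid(logits)

# Binary footprint from a 0.5 probability threshold.
footprint = footprint_prob > 0.5
print(f"grid points flagged as WCB footprint: {footprint.sum()}")
```

Because the output is instantaneous and purely feed-forward, evaluating it over a large reforecast archive is cheap compared with trajectory calculations, which is the practical advantage the abstract emphasizes.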


2021
Author(s): Akila Herath, Kithsiri M. Liyanage, M.A. Mohammed Manaz, Taisuke Masuta, Chan-Nan Lu

2021
Author(s): Max Schneider, Michelle McDowell, Peter Guttorp, E. Ashley Steel, Nadine Fleischhut

Abstract. Earthquake models can produce aftershock forecasts, which have recently been released to lay audiences following large earthquakes. While the visualization literature suggests that displaying forecast uncertainty can improve how forecast maps are used, research on uncertainty visualization is missing from earthquake science. We designed a pre-registered online experiment to test the effectiveness of three visualization techniques for displaying aftershock forecast maps and their uncertainty. The maps showed the forecasted number of aftershocks at each location for the week following a hypothetical mainshock, along with the uncertainty around each location's forecast. Three uncertainty visualizations were produced: (1) forecast and uncertainty maps placed adjacent to one another; (2) a forecast map in a color scheme whose transparency encodes the uncertainty; and (3) a pair of maps showing the lower and upper bounds of the forecast distribution at each location. Unlike previous experiments, we compared the three uncertainty visualizations using tasks systematically designed to address broadly applicable and user-generated communication goals, and we compared task responses between participants using the uncertainty visualizations and participants using the forecast map shown without its uncertainty (the current practice). Participants completed two map-reading tasks targeting several dimensions of the readability of uncertainty visualizations. They then performed a comparative judgment task, which tested whether a visualization reached two key communication goals: indicating where many aftershocks and no aftershocks are likely (sure bets), and where the forecast is low but the uncertainty is high enough to imply potential risk (surprises). All visualizations performed equally well in communicating sure-bet situations, but the visualization with lower and upper bounds was substantially better than the other designs at communicating surprises. These results have implications for the communication of forecast uncertainty both within and beyond earthquake science.


Energies · 2021 · Vol 14 (16) · pp. 4951
Author(s): Thomas Carrière, Rodrigo Amaro e Silva, Fuqiang Zhuang, Yves-Marie Saint-Drenan, Philippe Blanc

Probabilistic solar forecasting is an issue of growing relevance for the integration of photovoltaic (PV) energy. For short-term applications, however, estimating the forecast uncertainty is challenging and is usually delegated to statistical models. To address this limitation, the present work proposes an approach that combines physical and statistical foundations and leverages the satellite-derived clear-sky index (kc) and cloud motion vectors (CMV), both traditionally used for deterministic forecasting. The forecast uncertainty is estimated by using the CMV differently from the standard CMV-based forecasting approach and by implementing an ensemble approach based on a Gaussian noise-adding step applied to both the kc and the CMV estimates. Using 15-min average ground-measured Global Horizontal Irradiance (GHI) data for two locations in France as reference, the proposed model is shown to largely surpass the baseline probabilistic forecast, Complete History Persistence Ensemble (CH-PeEn), reducing the Continuous Ranked Probability Score (CRPS) by 37% to 62% depending on the forecast horizon. Results also show that this improvement is mainly driven by better sharpness, measured with the Prediction Interval Normalized Average Width (PINAW) metric.
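The CRPS used to score such ensembles can be computed directly from the members with the standard empirical formula CRPS = E|X - y| - 0.5 E|X - X'|. In the sketch below, the "sharp" and "wide" toy ensembles are illustrative stand-ins for a noise-added CMV forecast and a climatology-style baseline such as CH-PeEn; none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast for one observation:
    mean |x_i - y| minus half the mean pairwise member distance."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Toy GHI forecast (W m-2): an ensemble built by adding Gaussian noise to a
# deterministic estimate, versus a deliberately wide climatology-like ensemble.
obs = 510.0
sharp_ensemble = 500.0 + rng.normal(0, 30.0, 50)    # sharp, well-centered
wide_ensemble = rng.uniform(0, 1000.0, 50)          # wide, uninformative

crps_sharp = crps_ensemble(sharp_ensemble, obs)
crps_wide = crps_ensemble(wide_ensemble, obs)
print(f"CRPS sharp: {crps_sharp:.1f}  CRPS wide: {crps_wide:.1f}")
print(f"reduction: {100 * (1 - crps_sharp / crps_wide):.0f}%")
```

Lower CRPS is better; a sharp ensemble centered near the observation scores far lower than a wide one, which is exactly the sharpness-driven improvement the PINAW metric quantifies.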

