Forecast Comparison: Recently Published Documents

Total documents: 28 (five years: 2)
H-index: 9 (five years: 1)
2016, Vol. 33 (6), pp. 1306–1351
Author(s): Sainan Jin, Valentina Corradi, Norman R. Swanson

Forecast accuracy is typically measured in terms of a given loss function. However, because misspecified models are used in multiple-model comparisons, relative forecast rankings are loss-function dependent. To address this issue, we introduce a novel criterion for forecast evaluation that utilizes the entire distribution of forecast errors. In particular, we define general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we develop tests for GL (CL) superiority based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi, and Whang (2005, Review of Economic Studies 72, 735–765). Our test statistics have nonstandard limiting distributions under the null, necessitating the use of resampling procedures to obtain critical values. The tests are consistent and have nontrivial local power under a sequence of local alternatives. The theory is developed for the stationary case as well as for heterogeneity induced by distributional change over time. Monte Carlo simulations suggest that the tests perform reasonably well in finite samples, and an application to exchange rate data indicates that our tests can help identify superior forecasting models, regardless of loss function.
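As a rough illustration of comparing entire error distributions rather than a single loss, the sketch below computes a one-sided Kolmogorov-Smirnov-type statistic on absolute forecast errors and obtains a p-value from a naive i.i.d. bootstrap. This is only a simplified stand-in, not the GL/CL tests of the paper (which build on Linton, Maasoumi, and Whang and use resampling appropriate for dependent data); the function name, the pooled bootstrap, and all parameters are illustrative assumptions.

```python
import numpy as np

def gl_superiority_sketch(err1, err2, n_boot=999, seed=0):
    """Illustrative one-sided dominance check on absolute forecast errors.

    Loosely inspired by the idea of comparing entire error distributions;
    NOT the Jin-Corradi-Swanson GL/CL test, just a simplified
    Kolmogorov-Smirnov-style sketch with a naive i.i.d. bootstrap.
    """
    rng = np.random.default_rng(seed)
    a1, a2 = np.abs(np.asarray(err1)), np.abs(np.asarray(err2))
    grid = np.sort(np.concatenate([a1, a2]))

    def one_sided_stat(x, y):
        # sup over the grid of F_y(t) - F_x(t): positive values indicate
        # that the |errors| of x tend to be larger than those of y
        Fx = np.searchsorted(np.sort(x), grid, side="right") / x.size
        Fy = np.searchsorted(np.sort(y), grid, side="right") / y.size
        return np.max(Fy - Fx)

    stat = one_sided_stat(a2, a1)  # evidence that model 1's errors are smaller
    pooled = np.concatenate([a1, a2])
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # resample under the null of equal error distributions
        s1 = rng.choice(pooled, size=a1.size, replace=True)
        s2 = rng.choice(pooled, size=a2.size, replace=True)
        boot[b] = one_sided_stat(s2, s1)
    p_value = np.mean(boot >= stat)
    return stat, p_value
```

A small p-value here would suggest that model 1's absolute errors are stochastically smaller than model 2's over the whole distribution, which is the kind of loss-function-robust ranking the paper's criterion is designed to deliver.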


2016, Vol. 144 (2), pp. 615–626
Author(s): Timothy DelSole, Michael K. Tippett

Abstract: This paper proposes a procedure based on random walks for testing and visualizing differences in forecast skill. The test is formally equivalent to the sign test and has numerous attractive statistical properties, including being independent of distributional assumptions about the forecast errors and being applicable to a wide class of measures of forecast quality. While the test is best suited to independent outcomes, it provides useful information even when serial correlation exists. The procedure is applied to deterministic ENSO forecasts from the North American Multimodel Ensemble and yields several revealing results, including 1) the Canadian models are the most skillful dynamical models, even when compared to the multimodel mean; 2) a regression model is significantly more skillful than all but one dynamical model (from which its skill is statistically indistinguishable); and 3) in some cases there are significant differences in skill between ensemble members from the same model, potentially reflecting differences in initialization. The method requires only a few years of data to detect significant differences in the skill of models with known errors or biases, suggesting that the procedure may be useful for model development and for monitoring real-time forecasts.
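A minimal sketch of the random-walk/sign-test idea described above, assuming absolute error as the measure of forecast quality (the paper allows a wide class of measures) and ignoring ties and serial correlation; the function name and the pointwise 95% envelope of roughly ±1.96·sqrt(n) are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def random_walk_comparison(err_a, err_b, conf_z=1.96):
    """Random-walk style skill comparison in the spirit of the sign test.

    At each forecast time, step +1 if model A has the smaller absolute
    error and -1 if model B does (ties contribute 0 here, an assumption).
    Under the null of equal skill and independent outcomes, the walk at
    step n stays within roughly +/- conf_z * sqrt(n).
    """
    err_a, err_b = np.abs(np.asarray(err_a)), np.abs(np.asarray(err_b))
    steps = np.sign(err_b - err_a)          # +1 when A wins, -1 when B wins
    walk = np.cumsum(steps)
    n = np.arange(1, walk.size + 1)
    upper, lower = conf_z * np.sqrt(n), -conf_z * np.sqrt(n)
    significant = (walk > upper) | (walk < lower)
    return walk, upper, lower, significant

# Usage sketch: given two equal-length arrays of forecast errors,
# walk, hi, lo, sig = random_walk_comparison(errors_model_a, errors_model_b)
# Plotting walk against the +/- sqrt(n) envelope visualizes where the
# difference in skill becomes statistically distinguishable.
```

Because the walk only tracks which model wins at each time, the comparison needs no assumptions about the error distribution, which is the property the abstract highlights.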

