“Jumpiness” of the ECMWF and Met Office EPS Control and Ensemble-Mean Forecasts

2009 ◽  
Vol 137 (11) ◽  
pp. 3823-3836 ◽  
Author(s):  
Ervin Zsoter ◽  
Roberto Buizza ◽  
David Richardson

Abstract This work investigates the inconsistency between forecasts issued at different times but valid for the same time, and shows that ensemble-mean forecasts are less inconsistent than corresponding control forecasts. The “jumpiness” index, the concepts of different forecast jumps—the “flip,” “flip-flop,” and “flip-flop-flip”—and the inconsistency correlation between time series of inconsistency indices are introduced to measure the consistency/inconsistency of consecutive forecasts. These new measures are used to compare the behavior of the ECMWF and the Met Office control and ensemble-mean forecasts for an 18-month period over Europe. Results indicate that for both the ECMWF and the Met Office ensembles, the ensemble-mean forecast is less inconsistent than the control forecast. However, they also indicate that the ensemble mean follows its corresponding control forecast more closely than the controls (or the ensemble means) of the two ensemble systems following each other, thus suggesting weaknesses in both ensemble systems in the simulation of forecast uncertainty due to model or analysis error. Results also show that there is only a weak link between forecast jumpiness and forecast error (i.e., forecasts with lower inconsistency do not necessarily have, on average, lower error).
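The jump patterns described above can be sketched as follows. This is a minimal illustration with an arbitrary threshold and simplified definitions; the paper's exact inconsistency indices differ.

```python
def forecast_jumps(revisions, threshold=1.0):
    """Classify consecutive forecast revisions (differences between
    successive forecasts valid at the same time) into jump patterns.

    A 'flip' is a large revision followed by a large revision of the
    opposite sign; a 'flip-flop' extends the pattern by one further
    reversal.  Definitions here are illustrative only.
    """
    # Mark each revision as +1 (large up), -1 (large down), or 0 (small).
    signs = [0 if abs(r) < threshold else (1 if r > 0 else -1)
             for r in revisions]
    counts = {"flip": 0, "flip-flop": 0}
    for i in range(len(signs) - 1):
        if signs[i] != 0 and signs[i + 1] == -signs[i]:
            counts["flip"] += 1
            if i + 2 < len(signs) and signs[i + 2] == signs[i]:
                counts["flip-flop"] += 1
    return counts
```

Applied to the revisions `[2.0, -2.0, 2.0]`, this counts two flips and one flip-flop.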

2017 ◽  
Vol 32 (1) ◽  
pp. 149-164 ◽  
Author(s):  
Carlee F. Loeser ◽  
Michael A. Herrera ◽  
Istvan Szunyogh

Abstract This study investigates the efficiency of the major operational global ensemble forecast systems of the world in capturing the spatiotemporal evolution of the forecast uncertainty. Using data from 2015, it updates the results of an earlier study based on data from 2012. It also tests, for the first time on operational ensemble data, two quantitative relationships to aid in the interpretation of the raw ensemble forecasts. One of these relationships provides a flow-dependent prediction of the reliability of the ensemble in capturing the uncertain forecast features, while the other predicts the 95th percentile value of the magnitude of the forecast error. It is found that, except for the system of the Met Office, the main characteristics of the ensemble forecast systems have changed little between 2012 and 2015. The performance of the UKMO ensemble improved in predicting the overall magnitude of the uncertainty, but its ability to predict the dominant uncertain forecast features was degraded. A common serious limitation of the ensemble systems remains that they all have major difficulties with predicting the large-scale atmospheric flow in the long (longer than 10 days) forecast range. These difficulties are due to the inability of the ensemble members to maintain large-scale waves in the forecasts, which presents a stumbling block in the way of extending the skill of numerical weather forecasts to the subseasonal range. The two tested predictive relationships were found to provide highly accurate predictions of the flow-dependent reliability of the ensemble predictions and the 95th percentile value of the magnitude of the forecast error for the operational ensemble forecast systems.
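A generic stand-in for the second relationship above: if forecast errors are assumed Gaussian with variance equal to the ensemble variance, the 95th percentile of the error magnitude follows from the two-sided 95% quantile. The paper's actual predictive relationship is flow dependent; this sketch only shows the percentile logic.

```python
def p95_from_spread(spread):
    """Predicted 95th percentile of |forecast error| under a Gaussian
    assumption with standard deviation `spread` (illustrative)."""
    return 1.96 * spread

def empirical_p95(errors):
    """Empirical 95th percentile of the error magnitudes."""
    s = sorted(abs(e) for e in errors)
    return s[int(0.95 * len(s)) - 1]
```

Comparing `p95_from_spread` against `empirical_p95` of verifying errors is one way to check how well spread anticipates error magnitude.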


2011 ◽  
Vol 11 (2) ◽  
pp. 487-500 ◽  
Author(s):  
S. Federico

Abstract. Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km). Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields. Two case studies, the first one with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. 
For the period considered, the OI analysis RMSEs for minimum, mean, and maximum temperatures vary from 1.8, 1.6, and 2.0 °C, respectively, for the first-day forecast, to 2.0, 1.9, and 2.6 °C, respectively, for the fourth-day forecast. Cumulative statistics are computed using both SI and OI analysis as reference. Although SI statistics likely overestimate the forecast error because they ignore the observational error, the study shows that the difference between OI and SI statistics is less than the analysis error. The forecast skill is compared with that of the persistence forecast. The Anomaly Correlation Coefficient (ACC) shows that the model forecast is useful for all days and parameters considered here, and it is able to capture day-to-day weather variability. The model forecast issued for the fourth day is still better than the first-day forecast of a 24-h persistence forecast, at least for mean and maximum temperature. The impact of using the RAMS first-day forecast as the background field in the OI analysis is quantified by comparing statistics computed with OI and SI analyses. Minimum temperature is more sensitive to the change in the analysis dataset as a consequence of its larger representative error.
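The Anomaly Correlation Coefficient used above is the centered correlation between forecast and analysis anomalies relative to a climatology; a minimal sketch:

```python
import math

def anomaly_correlation(forecast, analysis, climatology):
    """ACC: centered correlation of forecast and analysis anomalies
    with respect to a common climatology."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    aa = [a - c for a, c in zip(analysis, climatology)]
    fm = sum(fa) / len(fa)
    am = sum(aa) / len(aa)
    num = sum((f - fm) * (a - am) for f, a in zip(fa, aa))
    den = math.sqrt(sum((f - fm) ** 2 for f in fa)
                    * sum((a - am) ** 2 for a in aa))
    return num / den
```

A perfect forecast yields an ACC of 1; a common rule of thumb treats values above roughly 0.6 as indicating a useful forecast.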


2018 ◽  
Vol 7 (3.15) ◽  
pp. 36 ◽  
Author(s):  
Sarah Nadirah Mohd Johari ◽  
Fairuz Husna Muhamad Farid ◽  
Nur Afifah Enara Binti Nasrudin ◽  
Nur Sarah Liyana Bistamam ◽  
Nur Syamira Syamimi Muhammad Shuhaili

Predicting financial market changes is an important issue in time series analysis that has received increasing attention since the financial crisis. The autoregressive integrated moving average (ARIMA) model has been one of the most widely used linear models in time series forecasting, but it cannot easily capture nonlinear patterns. The generalized autoregressive conditional heteroscedasticity (GARCH) model improves on ARIMA by modelling volatility as a function of previous forecast errors and current volatility. Support vector machines (SVM) and artificial neural networks (ANN) have been successfully applied to nonlinear regression estimation problems. This study proposes a hybrid methodology that exploits the unique strengths of the GARCH + SVM and GARCH + ANN models in forecasting a stock index. Real stock-price data from the FTSE Bursa Malaysia KLCI were used to examine the forecasting accuracy of the proposed models. The results show that the proposed hybrid models achieve the best forecasts compared with the other models.
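The GARCH component feeding such a hybrid can be sketched as the standard GARCH(1,1) conditional-variance recursion. Parameters are fixed here for illustration; in the hybrid they would be estimated, and the resulting volatilities passed to the SVM or ANN as additional regressors.

```python
def garch_variance(returns, omega=0.1, alpha=0.1, beta=0.8):
    """GARCH(1,1) conditional-variance recursion:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    Initialized at the unconditional variance omega / (1 - alpha - beta).
    Parameter values are illustrative, not estimated."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

With zero returns the variance decays geometrically from the unconditional level toward `omega / (1 - beta)`, which is the mean-reversion behavior that makes GARCH volatilities informative features.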


2021 ◽  
Vol 59 (4) ◽  
pp. 1135-1190
Author(s):  
Barbara Rossi

This article provides guidance on how to evaluate and improve the forecasting ability of models in the presence of instabilities, which are widespread in economic time series. Empirically relevant examples include predicting the financial crisis of 2007–08, as well as, more broadly, fluctuations in asset prices, exchange rates, output growth, and inflation. In the context of unstable environments, I discuss how to assess models’ forecasting ability; how to robustify models’ estimation; and how to correctly report measures of forecast uncertainty. Importantly, and perhaps surprisingly, breaks in models’ parameters are neither necessary nor sufficient to generate time variation in models’ forecasting performance: thus, one should not test for breaks in models’ parameters, but rather evaluate their forecasting ability in a robust way. In addition, local measures of models’ forecasting performance are more appropriate than traditional, average measures. (JEL C51, C53, E31, E32, E37, F37)
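The contrast between local and average measures of forecasting performance can be illustrated with a rolling-window RMSE, the simplest local measure (window length is an arbitrary choice here):

```python
def local_rmse(errors, window=4):
    """Rolling-window RMSE: a 'local' measure of forecasting
    performance over time, as opposed to one average over the
    whole sample."""
    out = []
    for t in range(window, len(errors) + 1):
        w = errors[t - window:t]
        out.append((sum(e * e for e in w) / window) ** 0.5)
    return out
```

When forecasting performance is unstable, the rolling RMSE moves substantially over the sample while the full-sample average hides the variation.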


2011 ◽  
Vol 139 (10) ◽  
pp. 3284-3303 ◽  
Author(s):  
Jun Du ◽  
Binbin Zhou

Abstract This study proposes a dynamical performance-ranking method (called the Du–Zhou ranking method) to predict the relative performance of individual ensemble members by assuming the ensemble mean is a good estimation of the truth. The results show that the method 1) generally works well, especially for shorter ranges such as a 1-day forecast; 2) has less error in predicting the extreme (best and worst) performers than the intermediate performers; 3) works better when the variation in performance among ensemble members is large; 4) works better when the model bias is small; 5) works better in a multimodel than in a single-model ensemble environment; and 6) works best when using the magnitude difference between a member and its ensemble mean as the “distance” measure in ranking members. The ensemble mean and median generally perform similarly to each other. This method was applied to a weighted ensemble average to see if it can improve the ensemble mean forecast over a commonly used, simple equally weighted ensemble averaging method. The results indicate that the weighted ensemble mean forecast has a smaller systematic error. This superiority of the weighted over the simple mean is especially true for smaller-sized ensembles, such as 5 and 11 members, but it decreases with the increase in ensemble size and almost vanishes when the ensemble size increases to 21 members. There is, however, little impact on the random error and the spatial patterns of ensemble mean forecasts. These results imply that it might be difficult to improve the ensemble mean by just weighting members when an ensemble reaches a certain size. However, it is found that the weighted averaging can reduce the total forecast error more when a raw ensemble-mean forecast itself is less accurate. 
It is also expected that the effectiveness of weighted averaging should be improved when the ensemble spread is improved or when the ranking method itself is improved, although such an improvement should not be expected to be too big (probably less than 10%, on average).
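The ranking idea above, distance from the ensemble mean as a proxy for distance from the truth, can be sketched as follows. The inverse-distance weighting here is an illustrative choice, not the paper's scheme.

```python
def rank_and_weight(members):
    """Rank ensemble member forecasts by their mean absolute difference
    from the ensemble mean (used as a stand-in for the truth), and form
    a simple inverse-distance weighted ensemble mean."""
    n = len(members)
    dim = len(members[0])
    mean = [sum(m[i] for m in members) / n for i in range(dim)]
    # Distance of each member from the ensemble mean.
    dists = [sum(abs(x, ) - y if False else abs(x - y)
                 for x, y in zip(m, mean)) / dim for m in members]
    ranks = sorted(range(n), key=lambda k: dists[k])  # closest first
    inv = [1.0 / (d + 1e-12) for d in dists]
    total = sum(inv)
    weights = [w / total for w in inv]
    weighted_mean = [sum(weights[k] * members[k][i] for k in range(n))
                     for i in range(dim)]
    return ranks, weighted_mean
```

Members close to the ensemble mean get the highest weights, so the weighted mean is pulled toward the cluster of mutually consistent members.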


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Hyo-Jong Song

Abstract Numerical weather prediction provides information of essential societal value. Advances in initial-condition estimation have improved prediction skill. Producing a better initial condition (the analysis) by combining short-range forecasts with observations over the globe requires information about the uncertainty of the forecast, both to decide how strongly each observation influences the analysis and to determine how far the observation's information should be propagated. The forecast ensemble represents the error of the short-range forecast at that instant. The influence of an observation, which spreads along the ensemble forecast correlations, must be restricted by a localized correlation function because sample correlations are not fully reliable. So far, a single radius of influence has usually been used, since the existence of multiple scales in the forecast uncertainty has not been well understood. In this study, it is shown explicitly that multiple scales exist in the short-range forecast error and that no single-scale localization approach can resolve this situation. A combination of Gaussian correlation functions of various scales is designed, which gives more weight to the observation itself near the data point and lets ensemble perturbations far from the observation position participate more in determining the analysis. Its outstanding performance supports the existence of multiscale correlation in forecast uncertainty.
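A combination of Gaussian correlation functions of various scales, as described above, can be sketched like this. The scales and weights are illustrative choices, not the paper's tuned values.

```python
import math

def multiscale_localization(dist, scales=(300.0, 1000.0, 3000.0),
                            weights=(0.5, 0.3, 0.2)):
    """Localization value at separation `dist` (km) as a weighted sum
    of Gaussian correlation functions with different length scales.
    Short scales dominate near the observation; long scales let
    distant ensemble information still contribute."""
    return sum(w * math.exp(-0.5 * (dist / L) ** 2)
               for w, L in zip(weights, scales))
```

At zero separation the value is 1 (the weights sum to one), and the function decays monotonically with distance, with a heavier tail than any single-scale Gaussian of the shortest scale.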


2016 ◽  
Vol 55 (7) ◽  
pp. 1633-1649 ◽  
Author(s):  
Marc Schröder ◽  
Maarit Lockhoff ◽  
John M. Forsythe ◽  
Heather Q. Cronk ◽  
Thomas H. Vonder Haar ◽  
...  

Abstract The Global Energy and Water Cycle Exchanges project (GEWEX) water vapor assessment’s (G-VAP) main objective is to analyze and explain strengths and weaknesses of satellite-based data records of water vapor through intercomparisons and comparisons with ground-based data. G-VAP results from the intercomparison of six total column water vapor (TCWV) data records are presented. Prior to the intercomparison, the data records were regridded to a common regular grid of 2° × 2° longitude–latitude. All data records cover a common period from 1988 to 2008. The intercomparison is complemented by an analysis of trend estimates, which was applied as a tool to identify issues in the data records. It was observed that the trends over global ice-free oceans are generally different among the different data records. Most of these differences are statistically significant. Distinct spatial features are evident in maps of differences in trend estimates, which largely coincide with maxima in standard deviations from the ensemble mean. The penalized maximal F test has been applied to global ice-free ocean and selected land regional anomaly time series, revealing differences in trends to be largely caused by breakpoints in the different data records. The time, magnitude, and number of breakpoints typically differ from region to region and between data records. These breakpoints often coincide with changes in observing systems used for the different data records. The TCWV data records have also been compared with data from a radiosonde archive. For example, at Lindenberg, Germany, and at Yichang, China, such breakpoints are not observed, providing further evidence for the regional imprint of changes in the observing system.
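The underlying trend estimate whose differences across data records G-VAP traces back to breakpoints is, in its simplest form, an ordinary least-squares slope on an anomaly time series (a minimal sketch; the assessment's significance testing and the penalized maximal F test are not reproduced here):

```python
def linear_trend(series):
    """OLS trend (slope per time step) of a time series, using
    time indices 0, 1, ..., n-1 as the regressor."""
    n = len(series)
    t_mean = (n - 1) / 2.0
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den
```

An undetected breakpoint in a record biases this slope, which is why trend comparison works as a diagnostic for homogeneity issues.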


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Guocan Wu ◽  
Bo Dan ◽  
Xiaogu Zheng

Assimilating observations into a land surface model can further improve the accuracy of soil moisture estimation. However, assimilation results rely largely on the forecast error and generally cannot maintain a balanced water budget. In this study, shallow soil moisture observations are assimilated into the Common Land Model (CoLM) to estimate soil moisture in different layers. A proposed forecast-error inflation and a water-balance constraint are adopted in the Ensemble Transform Kalman Filter to reduce the analysis error and the water budget residuals. The assimilation results indicate that the analysis error is reduced and the water imbalance is mitigated with this approach.
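Forecast-error inflation in its most common multiplicative form can be sketched as follows; the paper's adaptive scheme and water-balance constraint are more elaborate.

```python
def inflate_ensemble(members, factor):
    """Multiplicative forecast-error inflation: scale each member's
    deviation from the ensemble mean by `factor`, leaving the mean
    unchanged.  A standard remedy when the ensemble underestimates
    the forecast error."""
    n = len(members)
    dim = len(members[0])
    mean = [sum(m[i] for m in members) / n for i in range(dim)]
    return [[mu + factor * (x - mu) for x, mu in zip(m, mean)]
            for m in members]
```

Because only the deviations are scaled, the ensemble mean is preserved while the spread, and hence the assumed forecast-error variance, grows by the inflation factor.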


2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Tomasz Andrysiak ◽  
Łukasz Saganowski ◽  
Piotr Kiedrowski

The article presents solutions for anomaly detection in network traffic for critical smart metering infrastructure realized with a radio sensor network. The structure of the examined smart meter network is described, along with the key security aspects that influence the correct performance of an advanced metering infrastructure (the possibility of passive and active cyberattacks). An effective and fast anomaly detection method is proposed. In its initial stage, Cook’s distance is used to detect and eliminate outlier observations. The prepared data are then used to estimate standard statistical models based on exponential smoothing, that is, Brown’s, Holt’s, and Winters’ models. To estimate possible fluctuations in the forecasts of the implemented models, properly parameterized Bollinger Bands are used. Next, statistical relations between the estimated traffic model and its real variability are examined to detect abnormal behavior that could indicate a cyberattack attempt. A procedure for updating the standard models in the case of significant real network traffic fluctuations is also proposed. Optimal parameter values of the statistical models are chosen by minimizing the forecast error. The results confirm the efficiency of the presented method and the correct choice of statistical model for the analyzed time series.
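The two central ingredients of the pipeline, exponential smoothing and Bollinger-Band thresholding, can be sketched minimally as follows (smoothing constant, window length, and band width are arbitrary illustrative parameters, not the article's optimized values):

```python
def brown_smoothing(series, alpha=0.3):
    """Brown's simple exponential smoothing: each smoothed value is
    a weighted blend of the current observation and the previous
    smoothed value."""
    s = [series[0]]
    for x in series[1:]:
        s.append(alpha * x + (1.0 - alpha) * s[-1])
    return s

def bollinger_anomalies(series, window=5, k=2.0):
    """Flag points outside Bollinger Bands: a rolling mean plus or
    minus k rolling standard deviations."""
    flags = []
    for t in range(window, len(series)):
        w = series[t - window:t]
        m = sum(w) / window
        sd = (sum((x - m) ** 2 for x in w) / window) ** 0.5
        flags.append(abs(series[t] - m) > k * sd)
    return flags
```

A sudden traffic spike after a quiet period falls outside the bands and is flagged, which is the behavior the detection stage relies on.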

