Surface high-resolution temperature forecast in southern Italy

2011, Vol 6 (1), pp. 211-217
Author(s): S. Federico, E. Avolio, F. Fusto, R. Niccoli, C. Bellecci

Abstract. Since June 2008, 1-h temperature forecasts for the Calabria region (southern Italy) have been issued at 2.5 km horizontal resolution at CRATI/ISAC-CNR. Forecasts are available online at http://meteo.crati.it/previsioni.html (updated every 6 h). This paper shows the forecast performance out to three days for one climatological year (from 1 December 2008 to 30 November 2009, 365 runs) for minimum, mean, and maximum temperature. The forecast is evaluated against gridded analyses at the same horizontal resolution. The gridded analysis is based on Optimal Interpolation (OI) and uses a de-trending technique to compute the background field. Observations from 87 thermometers are used in the analysis system. Cumulative statistics are shown to quantify forecast errors out to three days.
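The gridded analysis step can be illustrated with a minimal Optimal Interpolation sketch on a hypothetical 1-D grid with a Gaussian background-error correlation; the length scale, error variances, and de-trending details of the actual CRATI system are not given in the abstract and are assumptions here:

```python
import numpy as np

def oi_analysis(xb, y_obs, obs_idx, grid_km, L=25.0, sigma_b=1.5, sigma_o=1.0):
    """One-pass Optimal Interpolation on a 1-D grid.

    xb      : background field on the grid
    y_obs   : observed values located at grid indices obs_idx
    grid_km : grid-point positions in km
    """
    # Background-error covariance: Gaussian correlation with length scale L (km)
    d = np.abs(grid_km[:, None] - grid_km[None, :])
    B = sigma_b**2 * np.exp(-0.5 * (d / L) ** 2)
    # Point observation operator: each row picks one grid point
    H = np.zeros((len(y_obs), len(grid_km)))
    H[np.arange(len(y_obs)), obs_idx] = 1.0
    R = sigma_o**2 * np.eye(len(y_obs))           # observation-error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y_obs - H @ xb)              # analysis = background + update
```

With a de-trended background as in the paper, `xb` would be the trend-removed field and the trend added back after the analysis.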

2011, Vol 11 (2), pp. 487-500
Author(s): S. Federico

Abstract. Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km). Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields. Two case studies, the first one with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. 
For the period considered, the OI analysis RMSEs for minimum, mean, and maximum temperatures vary from 1.8, 1.6, and 2.0 °C, respectively, for the first-day forecast, to 2.0, 1.9, and 2.6 °C, respectively, for the fourth-day forecast. Cumulative statistics are computed using both SI and OI analysis as reference. Although SI statistics likely overestimate the forecast error because they ignore the observational error, the study shows that the difference between OI and SI statistics is less than the analysis error. The forecast skill is compared with that of the persistence forecast. The Anomaly Correlation Coefficient (ACC) shows that the model forecast is useful for all days and parameters considered here, and it is able to capture day-to-day weather variability. The model forecast issued for the fourth day is still better than the first-day forecast of a 24-h persistence forecast, at least for mean and maximum temperature. The impact of using the RAMS first-day forecast as the background field in the OI analysis is quantified by comparing statistics computed with OI and SI analyses. Minimum temperature is more sensitive to the change in the analysis dataset as a consequence of its larger representative error.
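The scores used in these comparisons reduce to a few lines; below is a sketch of RMSE and the Anomaly Correlation Coefficient. Treating climatology as the sample mean of the verifying fields is an illustrative assumption, not the paper's definition:

```python
import numpy as np

def rmse(forecast, analysis):
    """Root mean square error between a forecast and the verifying analysis."""
    return np.sqrt(np.mean((forecast - analysis) ** 2))

def acc(forecast, analysis, climatology):
    """Anomaly Correlation Coefficient: correlation of the forecast and
    analysis anomalies with respect to a climatology field."""
    fa = (forecast - climatology).ravel()
    aa = (analysis - climatology).ravel()
    return fa @ aa / np.sqrt((fa @ fa) * (aa @ aa))
```

A 24-h persistence forecast simply reuses the previous day's analysis, so comparing `acc` for the model's fourth-day forecast against `acc` for the lagged analysis mirrors the skill comparison described in the abstract.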


2020
Author(s): Jia Lihong

It is very difficult to predict temperature accurately, especially maximum and minimum temperature, because of the large diurnal temperature range in arid areas. Based on the temperature forecast products from the ECMWF, T639, DOGRAFS, and GRAPES models and hourly temperature observations at 105 automatic weather stations in Xinjiang during 2013-2015, two error correction and integration schemes were designed using the decaying averaging method together with the ensemble average and weighted ensemble average methods, and their effects on predicted maximum and minimum temperature in the four seasons in different subregions of Xinjiang were tested and compared. The first scheme integrated the forecast temperatures before correcting errors, while the second scheme corrected the forecast errors first and then integrated them. The results are as follows: (1) The accuracy of temperature predictions from the ECMWF model was the best in Xinjiang as a whole, while that from the DOGRAFS model was the worst, and the improvement to minimum temperature predictions was larger than that to maximum temperature predictions. (2) Across the subregions of Xinjiang, the accuracies of predicted maximum and minimum temperature in northern Xinjiang, the western region, and plain areas were correspondingly higher than those in southern Xinjiang, the eastern region, and mountain areas, and the correction capability for temperature prediction in winter was higher than in the other seasons. (3) The integrated prediction of maximum and minimum temperature by the weighted ensemble average method was better than that by the ensemble average method, and the second scheme was superior to the first. (4) The improvement to maximum (minimum) temperature prediction during the extreme high (low) temperature event of 13-30 July 2017 (22-24 April 2014) in Xinjiang was significant using the second scheme.
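The two building blocks of these schemes, the decaying-averaging bias correction and the weighted ensemble average, can be sketched as follows. The weight factor `w` and the MAE-based model weights are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def decaying_average_correct(fcst, obs, w=0.1):
    """Running decaying-average bias estimate; each forecast is corrected with
    the bias accumulated from earlier days only (no peeking at today's obs)."""
    bias = 0.0
    corrected = np.empty(len(fcst))
    for t in range(len(fcst)):
        corrected[t] = fcst[t] - bias
        bias = (1.0 - w) * bias + w * (fcst[t] - obs[t])  # update after verifying
    return corrected

def weighted_ensemble_mean(members, maes):
    """Combine model forecasts with weights inversely proportional to each
    model's recent MAE, normalised to sum to one."""
    wts = 1.0 / np.asarray(maes, dtype=float)
    wts /= wts.sum()
    return np.tensordot(wts, np.asarray(members), axes=1)
```

The second scheme of the abstract corrects each model with `decaying_average_correct` first and only then feeds the corrected series into `weighted_ensemble_mean`; the first scheme reverses that order.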


2005, Vol 20 (6), pp. 1006-1020
Author(s): Andrew A. Taylor, Lance M. Leslie

Abstract Error characteristics of model output statistics (MOS) temperature forecasts are calculated for over 200 locations around the continental United States. The forecasts are verified on a station-by-station basis for the year 2001. Error measures used include mean algebraic error (bias), mean absolute error (MAE), relative frequency of occurrence of bias and MAE values, and the daily forecast errors themselves. A case study examining the spatial and temporal evolution of MOS errors is also presented. The error characteristics presented here, together with the case study, provide a more detailed evaluation of MOS performance than may be obtained from regionally averaged error statistics. Knowledge concerning locations where MOS forecasts have large errors or biases and why those errors or biases exist is of great value to operational forecasters. Not only does such knowledge help improve their forecasts, but forecaster performance is often compared to MOS predictions. Examples of biases in MOS forecast errors are illustrated by examining two stations in detail. Significant warm and cold biases are found in maximum temperature forecasts for Los Angeles, California (LAX), and minimum temperature forecasts for Las Vegas, Nevada (LAS), respectively. MAE values for MOS temperature predictions calculated in this study suggest that coastal stations tend to have lower MAE values and lower variability in their errors, while forecasts with high MAE and error variability are more frequent in the interior of the United States. Therefore, MAE values from samples of MOS forecasts are directly proportional to the variance in the observations. Additionally, it is found that daily maximum temperature forecast errors exhibit less variability during the summer months than they do over the rest of the year, and that forecasts for any one station rarely follow a consistent temporal pattern for more than two or three consecutive days. 
These inconsistent error patterns indicate that forecasting temperatures based on recent trends in MOS forecast errors at an individual station is usually not a good strategy. As shown in earlier studies by other authors and demonstrated again here, MOS temperature forecasts are often inaccurate in the vicinity of strong temperature gradients, for locations affected by shallow cold air masses, or for stations in regions of anomalously warm or cold temperatures. Finally, a case study is presented examining the spatial and temporal distributions of MOS temperature forecast errors across the United States from 13 to 15 February 2001. During this period, two surges of cold arctic air moved south into the United States. In contrast to error trends at individual stations, nationwide spatial and temporal patterns of MOS forecast errors could prove to be a powerful forecasting tool. Nationwide plots of errors in MOS forecasts would be useful if made available in real time to operational forecasters.
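The station-by-station error measures used in the study reduce to a few lines; this is a sketch, since the paper's relative-frequency binning choices are not specified in the abstract:

```python
import numpy as np

def station_stats(fcst, obs, bin_width=2.0):
    """Bias, MAE, and the relative frequency of daily errors per error bin."""
    err = np.asarray(fcst, dtype=float) - np.asarray(obs, dtype=float)
    edges = np.arange(np.floor(err.min()), np.ceil(err.max()) + bin_width, bin_width)
    counts, _ = np.histogram(err, bins=edges)
    return {
        "bias": err.mean(),                  # mean algebraic error
        "mae": np.abs(err).mean(),           # mean absolute error
        "rel_freq": counts / counts.sum(),   # relative frequency per error bin
    }
```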


2016, Vol 145 (1), pp. 137-147
Author(s): Jakob W. Messner, Georg J. Mayr, Achim Zeileis

Abstract Nonhomogeneous regression is often used to statistically postprocess ensemble forecasts. Usually only ensemble forecasts of the predictand variable are used as input, but other potentially useful information sources are ignored. Although it is straightforward to add further input variables, overfitting can easily deteriorate the forecast performance for increasing numbers of input variables. This paper proposes a boosting algorithm to estimate the regression coefficients, while automatically selecting the most relevant input variables by restricting the coefficients of less important variables to zero. A case study with ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) shows that this approach effectively selects important input variables to clearly improve minimum and maximum temperature predictions at five central European stations.
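The variable-selection effect of boosting can be illustrated with a minimal componentwise L2 version for the regression mean. This is a simplification: the paper boosts the coefficients of a full nonhomogeneous (heteroscedastic) model, not just the mean:

```python
import numpy as np

def componentwise_boost(X, y, steps=300, nu=0.1):
    """Each step refits only the single predictor that best explains the
    current residual; coefficients of irrelevant predictors stay at zero."""
    X = (X - X.mean(0)) / X.std(0)               # standardise predictors
    beta = np.zeros(X.shape[1])
    resid = y - y.mean()
    for _ in range(steps):
        b = X.T @ resid / (X**2).sum(0)          # univariate LS fits to residual
        j = int(np.argmax(np.abs(b)))            # best-reducing predictor
        beta[j] += nu * b[j]                     # shrunken coefficient update
        resid -= nu * b[j] * X[:, j]
    return beta
```

Because only one coefficient moves per step and the step is shrunken by `nu`, stopping early leaves unhelpful predictors at exactly zero, which is the automatic selection behaviour described in the abstract.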


2010, Vol 138 (12), pp. 4402-4415
Author(s): Paul J. Roebber

Abstract Simulated evolution is used to generate consensus forecasts of next-day minimum temperature for a site in Ohio. The evolved forecast algorithm logic is interpretable in terms of physics that might be accounted for by experienced forecasters, but the logic of the individual algorithms that form the consensus is unique. As a result, evolved program consensus forecasts produce substantial increases in forecast accuracy relative to forecast benchmarks such as model output statistics (MOS) and those from the National Weather Service (NWS). The best consensus produces a mean absolute error (MAE) of 2.98°F on an independent test dataset, representing a 27% improvement relative to MOS. These results translate to potential annual cost savings for electricity production in the state of Ohio of the order of $2 million relative to the NWS forecasts. Perfect forecasts provide nearly $6 million in additional annual electricity production cost savings relative to the evolved program consensus. The frequency of outlier events (forecast busts) falls from 24% using NWS to 16% using the evolved program consensus. Information on when busts are most likely can be provided through a logistic regression equation with two variables: forecast wind speed and the deviation of the NWS minimum temperature forecast from persistence. A forecast of a bust is 4 times more likely to be correct than wrong, suggesting some utility in anticipating the most egregious forecast errors. Discussion concerning the probabilistic applications of evolved programs, the application of this technique to other forecast problems, and the relevance of these findings to the future role of human forecasting is provided.
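The two-variable bust predictor can be sketched as a plain logistic regression fitted by gradient ascent. The data and coefficients below are synthetic; the paper's fitted equation is not given in the abstract:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Maximum-likelihood logistic regression via batch gradient ascent."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # predicted bust probability
        w += lr * Xb.T @ (y - p) / len(y)        # log-likelihood gradient step
    return w

def bust_probability(w, wind_speed, dev_from_persistence):
    """Probability of a forecast bust given the two predictors in the paper."""
    z = w[0] + w[1] * wind_speed + w[2] * dev_from_persistence
    return 1.0 / (1.0 + np.exp(-z))
```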


2014, Vol 29 (3), pp. 489-504
Author(s): David R. Novak, Christopher Bailey, Keith F. Brill, Patrick Burke, Wallace A. Hogsett, ...

Abstract The role of the human forecaster in improving upon the accuracy of numerical weather prediction is explored using multiyear verification of human-generated short-range precipitation forecasts and medium-range maximum temperature forecasts from the Weather Prediction Center (WPC). Results show that human-generated forecasts improve over raw deterministic model guidance. Over the past two decades, WPC human forecasters achieved a 20%–40% improvement over the North American Mesoscale (NAM) model and the Global Forecast System (GFS) for the 1 in. (25.4 mm) (24 h)−1 threshold for day 1 precipitation forecasts, with a smaller, but statistically significant, 5%–15% improvement over the deterministic ECMWF model. Medium-range maximum temperature forecasts also exhibit statistically significant improvement over GFS model output statistics (MOS), and the improvement has been increasing over the past 5 yr. The quality added by humans for forecasts of high-impact events varies by element and forecast projection, with generally large improvements when the forecaster makes changes ≥8°F (4.4°C) to MOS temperatures. Human improvement over guidance for extreme rainfall events [3 in. (76.2 mm) (24 h)−1] is largest in the short-range forecast. However, human-generated forecasts failed to outperform the most skillful downscaled, bias-corrected ensemble guidance for precipitation and maximum temperature available near the same time as the human-modified forecasts. Thus, as additional downscaled and bias-corrected sensible weather element guidance becomes operationally available, and with the support of near-real-time verification, forecaster training, and tools to guide forecaster interventions, a key test is whether forecasters can learn to make statistically significant improvements over the most skillful of this guidance. Such a test can inform to what degree, and just how quickly, the role of the forecaster changes.


Author(s): Kai Carstensen, Klaus Wohlrabe, Christina Ziegler

Summary. In this paper we assess the information content of seven widely cited early indicators for the euro area with respect to forecasting area-wide industrial production. To this end, we use various tests designed to compare competing forecast models. In addition to the standard Diebold-Mariano test, we employ tests that account for specific problems typically encountered in forecast exercises. Specifically, we pay attention to nested model structures, we alleviate the problem of data snooping arising from multiple pairwise testing, and we analyze the structural stability in the relative forecast performance of one indicator compared to a benchmark model. Moreover, we consider loss functions that overweight forecast errors in booms and recessions to check whether a specific indicator that appears to be a good choice on average is also preferable in times of economic stress. We find that none of these indicators uniformly dominates all its competitors. The optimal choice rather depends on the specific forecast situation and the loss function of the user. For 1-month forecasts the business climate indicator of the European Commission and the OECD composite leading indicator generally work well, for 6-month forecasts the OECD composite leading indicator performs very well by all criteria, and for 12-month forecasts the FAZ-Euro indicator published by the Frankfurter Allgemeine Zeitung is the only one that can beat the benchmark AR(1) model.
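The standard Diebold-Mariano test reduces to a t-type statistic on the loss differential of the two competing forecasts. This is a minimal sketch with a Newey-West long-run variance; any small-sample corrections the authors may apply are omitted:

```python
import numpy as np

def diebold_mariano(e1, e2, h=1, loss=np.square):
    """DM statistic for equal predictive accuracy; compare to N(0, 1).

    e1, e2 : forecast errors of the two competing models
    h      : forecast horizon; the HAC variance uses h-1 autocovariance lags
    """
    d = loss(np.asarray(e1)) - loss(np.asarray(e2))   # loss differential series
    n = len(d)
    dbar = d.mean()
    dc = d - dbar
    var = dc @ dc / n                                 # lag-0 autocovariance
    for k in range(1, h):                             # Newey-West style terms
        var += 2.0 * (dc[:-k] @ dc[k:]) / n
    return dbar / np.sqrt(var / n)
```

A large negative statistic indicates that the first forecast has significantly smaller loss than the second under the chosen loss function; asymmetric losses that overweight errors in booms and recessions can be passed via the `loss` argument.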


Author(s): Jason J. Kemper, Mark F. Bielecki, Thomas L. Acker

In wind integration studies, accurate representations of the wind power output from potential wind power plants and corresponding representations of wind power forecasts are needed, and typically used in a production cost simulation. Two methods for generating "synthetic" wind power forecasts that capture the statistical trends and characteristics found in commercial forecasting techniques are presented. These two methods are based on auto-regressive moving average (ARMA) models and the Markov random walk method. Statistical criteria are suggested for evaluation of wind power forecast performance, and both synthetic forecast methods proposed are evaluated quantitatively and qualitatively. The forecast performance is then compared with a commercial forecast used for an operational wind power plant in the Northwestern United States, evaluated using the same statistical performance measures. These quantitative evaluation parameters are monitored during specific months of the year, during rapid ramping events, and at all times. The best ARMA-based models failed to replicate the auto-regressive decay of forecast errors associated with commercial forecasts. A modification to the Markov method, consisting of adding a dimension to the state transition array, allowed the forecast time series to depend on multiple inputs. This improvement lowered the artificial variability in the original time series. The overall performance of this method was better than for the ARMA-based models, and provides a suitable technique for use in creating a synthetic wind forecast for a wind integration study.
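The Markov random-walk idea can be sketched as a first-order chain over discretised forecast-error states. The paper's improvement adds a further dimension to the transition array so the chain conditions on multiple inputs; that extension is omitted here:

```python
import numpy as np

def markov_synthetic_errors(train_err, n_steps, n_bins=11, seed=0):
    """Sample a synthetic forecast-error series from a first-order Markov
    chain whose transition matrix is estimated from observed errors."""
    rng = np.random.default_rng(seed)
    err = np.asarray(train_err, dtype=float)
    edges = np.linspace(err.min(), err.max(), n_bins + 1)
    states = np.clip(np.digitize(err, edges) - 1, 0, n_bins - 1)
    T = np.ones((n_bins, n_bins))                 # Laplace smoothing of counts
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1.0
    T /= T.sum(axis=1, keepdims=True)             # rows become distributions
    centers = 0.5 * (edges[:-1] + edges[1:])
    s, out = states[-1], []
    for _ in range(n_steps):
        s = rng.choice(n_bins, p=T[s])            # random walk over states
        out.append(centers[s])
    return np.array(out)
```

Because transitions are learned from the training errors, the synthetic series inherits their autocorrelation structure, which is the property the evaluation criteria in the paper are designed to check.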


2010, Vol 138 (2), pp. 563-578
Author(s): Jean-François Caron, Luc Fillion

Abstract The differences in the balance characteristics between dry and precipitation areas in estimated short-term forecast error fields are investigated. The motivation is to see if dry and precipitation areas need to be treated differently in atmospheric data assimilation systems. Using an ensemble of lagged forecast differences, it is shown that perturbations are, on average, farther away from geostrophic balance over precipitation areas than over dry areas and that the deviation from geostrophic balance is proportional to the intensity of precipitation. Following these results, the authors investigate whether some improvements in the coupling between mass and rotational wind increments over precipitation areas can be achieved by using only the precipitation points within an ensemble of estimated forecast errors to construct a so-called diabatic balance operator by linear regression. Comparisons with a traditional approach to construct balance operators by linear regression show that the new approach leads to a gradually significant improvement (related to the intensity of the diabatic processes) of the accuracy of the coupling over precipitation areas as judged from an ensemble of lagged forecast differences. Results from a series of simplified data assimilation experiments show that the new balance operators can produce analysis increments that are substantially different from those associated with the traditional balance operator, particularly for observations located in the lower atmosphere. Issues concerning the implementation of this new approach in a full-fledged analysis system are briefly discussed but their investigations are left for a following study.
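The regression construction of a balance operator can be sketched per grid point. This is a toy scalar version: real operators couple mass to the streamfunction/rotational wind through spatially varying multivariate regressions, and the precipitation mask here is purely illustrative:

```python
import numpy as np

def balance_coeff(psi_perts, mass_perts):
    """Per-gridpoint least-squares coefficient c with mass ≈ c * psi,
    estimated over an ensemble of forecast differences (axis 0)."""
    return (psi_perts * mass_perts).sum(0) / (psi_perts**2).sum(0)

def diabatic_balance_coeff(psi_perts, mass_perts, precip_mask):
    """Same regression, but fitted from precipitation points only, mimicking
    the paper's diabatic balance operator."""
    return balance_coeff(psi_perts[:, precip_mask], mass_perts[:, precip_mask])
```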

