Evaluation of IRI’s Seasonal Climate Forecasts for the Extreme 15% Tails

2011 ◽  
Vol 26 (4) ◽  
pp. 545-554 ◽  
Author(s):  
Anthony G. Barnston ◽  
Simon J. Mason

Abstract This paper evaluates the quality of real-time seasonal probabilistic forecasts of the extreme 15% tails of the climatological distribution of temperature and precipitation issued by the International Research Institute for Climate and Society (IRI) from 1998 through 2009. IRI’s forecasts have been based largely on a two-tiered multimodel dynamical prediction system. Forecasts of the 15% extremes have been consistent with the corresponding probabilistic forecasts for the standard tercile-based categories; however, nonclimatological forecasts for the extremes have been issued sparingly. Results indicate positive skill in terms of resolution and discrimination for the extremes forecasts, particularly in the tropics. Additionally, with the exception of some overconfidence for extreme above-normal precipitation and a strong cool bias for temperature, reliability analyses suggest generally good calibration. Skills for temperature are generally higher than those for precipitation, due both to correct forecasts of increased probabilities of extremely high (above the upper 15th percentile) temperatures associated with warming trends, and to better discrimination of interannual variability. However, above-normal temperature extremes were substantially underforecast, as noted also for the IRI’s tercile forecasts.
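The reliability and resolution scores reported for these binary tail events (e.g., temperature above the 85th percentile) are commonly obtained from the Murphy decomposition of the Brier score. A minimal sketch on synthetic data (not IRI's verification code) shows how the terms are computed:

```python
import numpy as np

def brier_decomposition(prob, obs, n_bins=10):
    """Murphy decomposition of the Brier score for a binary event,
    e.g. seasonal temperature exceeding the 85th percentile.

    prob : forecast probabilities in [0, 1]
    obs  : binary outcomes (1 if the extreme occurred)
    Returns (reliability, resolution, uncertainty); the Brier score is
    reliability - resolution + uncertainty, so skill requires
    resolution to exceed reliability.
    """
    prob, obs = np.asarray(prob, float), np.asarray(obs, float)
    base_rate = obs.mean()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(prob, edges) - 1, 0, n_bins - 1)
    rel = res = 0.0
    for k in range(n_bins):
        mask = idx == k
        if not mask.any():
            continue  # no forecasts fell in this probability bin
        p_bar, o_bar, w = prob[mask].mean(), obs[mask].mean(), mask.mean()
        rel += w * (p_bar - o_bar) ** 2   # calibration error within the bin
        res += w * (o_bar - base_rate) ** 2  # ability to sort outcomes
    unc = base_rate * (1.0 - base_rate)
    return rel, res, unc
```

A perfectly calibrated, perfectly sharp forecast set has zero reliability term and resolution equal to the uncertainty term.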

2010 ◽  
Vol 49 (3) ◽  
pp. 493-520 ◽  
Author(s):  
Anthony G. Barnston ◽  
Shuhua Li ◽  
Simon J. Mason ◽  
David G. DeWitt ◽  
Lisa Goddard ◽  
...  

Abstract This paper examines the quality of seasonal probabilistic forecasts of near-global temperature and precipitation issued by the International Research Institute for Climate and Society (IRI) from late 1997 through 2008, using mainly a two-tiered multimodel dynamical prediction system. Skill levels, while modest when globally averaged, depend markedly on season and location and average higher in the tropics than extratropics. To first order, seasons and regions of useful skill correspond to known direct effects as well as remote teleconnections from anomalies of tropical sea surface temperature in the Pacific Ocean (e.g., ENSO related) and in other tropical basins. This result is consistent with previous skill assessments by IRI and others and suggests skill levels beneficial to informed clients making climate risk management decisions for specific applications. Skill levels for temperature are generally higher, and less seasonally and regionally dependent, than those for precipitation, partly because of correct forecasts of enhanced probabilities for above-normal temperatures associated with warming trends. However, underforecasting of above-normal temperatures suggests that the dynamical forecast system could be improved through inclusion of time-varying greenhouse gas concentrations. Skills of the objective multimodel probability forecasts, used as the primary basis for the final forecaster-modified issued forecasts, are comparable to those of the final forecasts, but their probabilistic reliability is somewhat weaker. Automated recalibration of the multimodel output should permit improvements to their reliability, allowing them to be issued as is. IRI is currently developing single-tier prediction components.
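The automated recalibration mentioned above can be illustrated with a crude binned scheme that replaces each raw probability by the observed event frequency in its reliability-curve bin. This is only a toy sketch of the idea, not IRI's actual recalibration method:

```python
import numpy as np

def recalibrate(p_raw, p_train, obs_train, n_bins=10):
    """Map raw model probabilities through an empirical reliability curve
    estimated on a training sample (a crude binned recalibration)."""
    p_raw, p_train = np.asarray(p_raw, float), np.asarray(p_train, float)
    obs_train = np.asarray(obs_train, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_train, edges) - 1, 0, n_bins - 1)
    # observed relative frequency of the event in each probability bin
    freq = np.array([obs_train[idx == k].mean() if (idx == k).any()
                     else 0.5 * (edges[k] + edges[k + 1])  # empty bin: keep midpoint
                     for k in range(n_bins)])
    out = np.clip(np.digitize(p_raw, edges) - 1, 0, n_bins - 1)
    return freq[out]
```

For example, overconfident raw probabilities near 0.95 that verified only 60% of the time in training are pulled back toward 0.6, which is the kind of adjustment that would let the objective multimodel product be issued as is.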


2003 ◽  
Vol 84 (12) ◽  
pp. 1761-1782 ◽  
Author(s):  
L. Goddard ◽  
A. G. Barnston ◽  
S. J. Mason

The International Research Institute for Climate Prediction (IRI) net assessment seasonal temperature and precipitation forecasts are evaluated for the 4-yr period from October–December 1997 to October–December 2001. These probabilistic forecasts represent the human distillation of seasonal climate predictions from various sources. The ranked probability skill score (RPSS) serves as the verification measure. The evaluation is offered as time-averaged spatial maps of the RPSS as well as area-averaged time series. A key element of this evaluation is the examination of the extent to which the consolidation of several predictions, accomplished here subjectively by the forecasters, contributes to or detracts from the forecast skill possible from any individual prediction tool. Overall, the skills of the net assessment forecasts for both temperature and precipitation are positive throughout the 1997–2001 period. The skill may have been enhanced during the peak of the 1997/98 El Niño, particularly for tropical precipitation, although widespread positive skill exists even at times of weak forcing from the tropical Pacific. The temporally averaged RPSS for the net assessment temperature forecasts appears lower than that for the AGCMs. Over time, however, the IRI forecast skill is more consistently positive than that of the AGCMs. The IRI precipitation forecasts generally have lower skill than the temperature forecasts, but the forecast probabilities for precipitation are found to be appropriate to the frequency of the observed outcomes, and thus reliable. Over many regions where the precipitation variability is known to be potentially predictable, the net assessment precipitation forecasts exhibit more spatially coherent areas of positive skill than most, if not all, prediction tools. On average, the IRI net assessment forecasts appear to perform better than any of the individual objective prediction tools.
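The RPSS used as the verification measure compares the ranked probability score of the tercile forecasts against a climatological reference; positive values mean the forecasts beat climatology. A minimal sketch with synthetic inputs (not the paper's code):

```python
import numpy as np

def rpss(fcst_probs, obs_cat, clim=(1/3, 1/3, 1/3)):
    """Ranked probability skill score for tercile (below/near/above normal)
    probability forecasts, relative to a climatological reference.

    fcst_probs : (n, 3) forecast probabilities per category
    obs_cat    : (n,) observed category index in {0, 1, 2}
    """
    fcst_probs = np.asarray(fcst_probs, float)
    obs = np.zeros_like(fcst_probs)
    obs[np.arange(len(obs)), np.asarray(obs_cat)] = 1.0  # one-hot outcomes

    def rps(p):  # mean squared error of cumulative category probabilities
        return np.mean(np.sum((np.cumsum(p, axis=1)
                               - np.cumsum(obs, axis=1)) ** 2, axis=1))

    return 1.0 - rps(fcst_probs) / rps(np.tile(clim, (len(obs), 1)))
```

A perfect categorical forecast scores 1, the climatological forecast scores 0, and forecasts worse than climatology go negative, which is how the time-averaged maps in the paper are read.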


2020 ◽  
Vol 162 ◽  
pp. 1321-1339
Author(s):  
Josselin Le Gal La Salle ◽  
Jordi Badosa ◽  
Mathieu David ◽  
Pierre Pinson ◽  
Philippe Lauret

2018 ◽  
Vol 146 (3) ◽  
pp. 781-796 ◽  
Author(s):  
Jingzhuo Wang ◽  
Jing Chen ◽  
Jun Du ◽  
Yutao Zhang ◽  
Yu Xia ◽  
...  

This study demonstrates how model bias can adversely affect the quality assessment of an ensemble prediction system (EPS) by verification metrics. A regional EPS [Global and Regional Assimilation and Prediction Enhanced System-Regional Ensemble Prediction System (GRAPES-REPS)] was verified over a period of one month over China. Three variables (500-hPa and 2-m temperatures, and 250-hPa wind) are selected to represent “strong” and “weak” bias situations. Ensemble spread and probabilistic forecasts are compared before and after a bias correction. The results show that the conclusions drawn from ensemble verification about the EPS are dramatically different with or without model bias. This is true for both ensemble spread and probabilistic forecasts. The GRAPES-REPS is severely underdispersive before the bias correction but becomes calibrated afterward, although the improvement in the spread’s spatial structure is much less; the spread–skill relation is also improved. The probabilities become much sharper and almost perfectly reliable after the bias is removed. Therefore, forecast biases must be removed before an EPS can be accurately evaluated, since an EPS addresses only random error, not systematic error; only when an EPS has little or no forecast bias can ensemble verification metrics reliably reveal its true quality without prior bias removal. An implication is that EPS developers should not be expected to introduce methods to dramatically increase ensemble spread (whether by perturbation methods or statistical calibration) to achieve reliability. Instead, the preferred solution is to reduce model bias through prediction system development and to focus on the quality of spread rather than the quantity of spread. Forecast products should also be produced from the debiased ensemble, not the raw one.
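The effect described can be reproduced with a toy ensemble: a mean bias inflates the ensemble-mean RMSE without touching the spread, so the raw system looks badly underdispersive even though its spread is honest. The numbers below are a synthetic illustration, not GRAPES-REPS output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_members, bias = 5000, 20, 2.0
mu = rng.normal(0.0, 1.0, n_cases)            # predictable signal per case
truth = mu + rng.normal(0.0, 1.0, n_cases)    # truth drawn from the forecast pdf
# members share the signal but carry a systematic bias plus random error
members = mu[:, None] + bias + rng.normal(0.0, 1.0, (n_cases, n_members))

def spread_and_rmse(ens, truth):
    spread = np.sqrt(np.mean(np.var(ens, axis=1, ddof=1)))   # mean ensemble spread
    rmse = np.sqrt(np.mean((ens.mean(axis=1) - truth) ** 2)) # ensemble-mean error
    return spread, rmse

spread_raw, rmse_raw = spread_and_rmse(members, truth)
est_bias = members.mean() - truth.mean()      # simple mean-bias estimate
spread_db, rmse_db = spread_and_rmse(members - est_bias, truth)
# raw: rmse >> spread (apparently underdispersive); debiased: rmse ≈ spread
```

Removing the estimated bias restores the spread-skill match without changing the spread itself, which is the paper's point that verification should follow debiasing.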


2020 ◽  
Author(s):  
Jingzhuo Wang ◽  
Jing Chen ◽  
Jun Du



2014 ◽  
Vol 11 (96) ◽  
pp. 20131162 ◽  
Author(s):  
A. Weisheimer ◽  
T. N. Palmer

Seasonal climate forecasts are being used increasingly across a range of application sectors. A recent UK governmental report asked: how good are seasonal forecasts on a scale of 1–5 (where 5 is very good), and how good can we expect them to be in 30 years' time? Seasonal forecasts are made from ensembles of integrations of numerical models of climate. We argue that ‘goodness’ should be assessed first and foremost in terms of the probabilistic reliability of these ensemble-based forecasts; reliable inputs are essential for any forecast-based decision-making. We propose that a ‘5’ should be reserved for systems that are not only reliable overall, but where, in particular, small ensemble spread is a reliable indicator of low ensemble forecast error. We study the reliability of regional temperature and precipitation forecasts of the current operational seasonal forecast system of the European Centre for Medium-Range Weather Forecasts, universally regarded as one of the world-leading operational institutes producing seasonal climate forecasts. A wide range of ‘goodness’ rankings, depending on region and variable (with summer forecasts of rainfall over Northern Europe performing exceptionally poorly), is found. Finally, we discuss the prospects of reaching ‘5’ across all regions and variables in 30 years' time.
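The stricter ‘5’ criterion, small ensemble spread reliably flagging small forecast error, can be checked empirically by binning cases on predicted spread and comparing their errors. A toy check on synthetic data (not the ECMWF system):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
sigma = rng.uniform(0.3, 2.0, n)               # case-dependent predicted spread
mu = rng.normal(0.0, 1.0, n)                   # ensemble-mean forecast
truth = mu + sigma * rng.normal(0.0, 1.0, n)   # reliable: error scales with spread
err = np.abs(mu - truth)

# compare errors in the smallest- and largest-spread quintiles; for a
# '5'-grade system the small-spread cases should show the smaller errors
order = np.argsort(sigma)
q = n // 5
small_err = err[order[:q]].mean()
large_err = err[order[-q:]].mean()
```

In a system that is reliable only on average, the two quintile errors would be similar; a clear separation like the one constructed here is what distinguishes a ‘5’.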


2019 ◽  
Vol 100 (10) ◽  
pp. 2043-2060 ◽  
Author(s):  
Kathy Pegion ◽  
Ben P. Kirtman ◽  
Emily Becker ◽  
Dan C. Collins ◽  
Emerson LaJoie ◽  
...  

Abstract The Subseasonal Experiment (SubX) is a multimodel subseasonal prediction experiment designed around operational requirements with the goal of improving subseasonal forecasts. Seven global models have produced 17 years of retrospective (re)forecasts and more than a year of weekly real-time forecasts. The reforecasts and forecasts are archived at the Data Library of the International Research Institute for Climate and Society, Columbia University, providing a comprehensive database for research on subseasonal to seasonal predictability and predictions. The SubX models show skill for temperature and precipitation 3 weeks ahead of time in specific regions. The SubX multimodel ensemble mean is more skillful than any individual model overall. Skill in simulating the Madden–Julian oscillation (MJO) and the North Atlantic Oscillation (NAO), two sources of subseasonal predictability, is also evaluated, with skillful predictions of the MJO 4 weeks in advance and of the NAO 2 weeks in advance. SubX is also able to make useful contributions to operational forecast guidance at the Climate Prediction Center. Additionally, SubX provides information on the potential for extreme precipitation associated with tropical cyclones, which can help emergency management and aid organizations to plan for disasters.
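The finding that the multimodel ensemble mean outperforms every individual model follows from the cancellation of independent model errors when averaging. A toy correlation-skill illustration with synthetic data (not SubX output):

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, n = 7, 600
signal = rng.normal(0.0, 1.0, n)                       # predictable component
obs = signal + rng.normal(0.0, 0.5, n)                 # observed anomalies
# each model sees the signal but adds its own independent error
models = signal + rng.normal(0.0, 1.0, (n_models, n))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

individual = [corr(m, obs) for m in models]
multimodel = corr(models.mean(axis=0), obs)
# averaging shrinks the independent error variance by ~1/n_models, so the
# multimodel mean correlates with obs better than any single model
```

The same mechanism is why operational centers combine models rather than pick the single best one: the averaging gain persists as long as the model errors are not strongly correlated.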


Author(s):  
A.V. Konstantinovich ◽  
A.S. Kuracheva ◽  
E.D. Binkevich

Under climate change, with temperature and precipitation fluctuations occurring ever more frequently during the growing season, it is necessary to obtain high-quality seedlings with "immunity" to various stress factors, including heavy weed infestation, which causes yield losses of 25–35% and a deterioration in the quality of agricultural products. Because of imbalances in production technology, seedlings are often weakened and overgrown, with low yield per unit area and low survival rates in the field. One solution to this problem is the use of PP for pre-sowing seed treatment to increase the competitiveness of seedlings in the field.

