Evaluation of the Storm Prediction Center’s Day 1 Convective Outlooks

2012 ◽  
Vol 27 (6) ◽  
pp. 1580-1585 ◽  
Author(s):  
Nathan M. Hitchens ◽  
Harold E. Brooks

Abstract The Storm Prediction Center has issued daily convective outlooks since the mid-1950s. This paper represents an initial effort to examine the quality of these forecasts. Convective outlooks are plotted on a latitude–longitude grid with 80-km grid spacing and evaluated using storm reports to calculate verification measures including the probability of detection, frequency of hits, and critical success index. Results show distinct improvements in forecast performance over the duration of the study period, some of which can be attributed to apparent changes in forecasting philosophies.
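For reference, the dichotomous measures named above all derive from a 2×2 contingency table of forecasts versus storm reports; a minimal sketch (illustrative, not the authors' code):

```python
# Verification scores from a 2x2 contingency table, with
# a = hits, b = false alarms, c = misses.

def pod(a, c):
    """Probability of detection: fraction of observed events that were forecast."""
    return a / (a + c)

def foh(a, b):
    """Frequency of hits: fraction of forecasts that verified (equals 1 - FAR)."""
    return a / (a + b)

def csi(a, b, c):
    """Critical success index: hits over everything except correct negatives."""
    return a / (a + b + c)

# Toy counts: 30 hits, 20 false alarms, 10 misses
print(pod(30, 10), foh(30, 20), csi(30, 20, 10))  # 0.75 0.6 0.5
```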

2005 ◽  
Vol 20 (1) ◽  
pp. 51-62 ◽  
Author(s):  
David G. Baggaley ◽  
John M. Hanesiak

Abstract Blowing snow has a major impact on transportation and public safety. The goal of this study is to provide an operational technique for forecasting high-impact blowing snow in the Canadian Arctic and the Prairie provinces using historical meteorological data. The focus is to provide guidance on the probability of reduced visibility (e.g., less than 1 km) in blowing snow given a forecast wind speed and direction. The wind character associated with blowing snow was examined using a large database consisting of up to 40 yr of hourly observations at 15 locations in the Prairie provinces and 17 locations in the Arctic. Instances of blowing snow were divided into cases with and without concurrent falling snow. The latter group was subdivided by the time since the last snowfall in an attempt to account for aging processes of the snowpack. An empirical scheme was developed that could discriminate conditions producing significantly reduced visibility in blowing snow using wind speed, air temperature, and time since last snowfall as predictors. The scheme was evaluated using actual hourly observations to compute the probability of detection, false alarm ratio, credibility, and critical success index. A critical success index as high as 66% was achieved. This technique can be used to give an objective first guess of the likelihood of high-impact blowing snow from common forecast parameters.


2009 ◽  
Vol 24 (2) ◽  
pp. 601-608 ◽  
Author(s):  
Paul J. Roebber

Abstract A method for visually representing multiple measures of dichotomous (yes–no) forecast quality (probability of detection, false alarm ratio, bias, and critical success index) in a single diagram is presented. Illustration of the method is provided using performance statistics from two previously published forecast verification studies (snowfall density and convective initiation) and a verification of several new forecast datasets: Storm Prediction Center forecasts of severe storms (nontornadic and tornadic), Hydrometeorological Prediction Center forecasts of heavy precipitation (greater than 12.5 mm in a 6-h period), National Weather Service Forecast Office terminal aviation forecasts (ceiling and visibility), and medium-range ensemble forecasts of 500-hPa height anomalies. The use of such verification metrics in concert with more detailed investigations to advance forecasting is briefly discussed.
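The algebra that lets a single diagram carry all four scores is compact: with success ratio SR = 1 − FAR on one axis and POD on the other, both bias and CSI are functions of the plotted point. A brief sketch of that relationship (illustrative, not Roebber's code):

```python
# With a = hits, b = false alarms, c = misses:
#   POD = a / (a + c),  SR = a / (a + b)
# so bias and CSI follow directly from the point (SR, POD).

def bias(pod, sr):
    """Frequency bias = (a + b) / (a + c) = POD / SR."""
    return pod / sr

def csi_from(pod, sr):
    """CSI = a / (a + b + c) = 1 / (1/POD + 1/SR - 1)."""
    return 1.0 / (1.0 / pod + 1.0 / sr - 1.0)

# Same toy counts as a 30/20/10 contingency table: POD = 0.75, SR = 0.6
print(bias(0.75, 0.6))      # 1.25
print(csi_from(0.75, 0.6))  # 0.5
```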


2013 ◽  
Vol 14 (1) ◽  
pp. 29
Author(s):  
Ardhi Adhary Arbain ◽  
Mahally Kudsy ◽  
M. Djazim Syaifullah

Abstract WRF simulations for 16–17 January 2013 were conducted to evaluate the model's performance in detecting the cold surge and extreme precipitation that triggered the Jakarta flood event during that period. Qualitative and quantitative dichotomous grid-to-grid verification methods are used to compare the model output with Global Satellite Mapping of Precipitation (GSMaP) observations and the NCEP Reanalysis dataset. WRF performance is scored by accuracy (ACC), Critical Success Index (CSI), Probability of Detection (POD), and False Alarm Ratio (FAR), computed from the numerical verification. The results show that WRF could precisely detect the onset of the extreme precipitation event 6–7 hours after model initiation. The model's best performance is observed at 02–09 WIB (LT), with a CSI of 0.32, POD of 0.82, and FAR of 0.66. Despite the model's inability to accurately predict the duration and location of the cold surge and extreme precipitation, the qualitative and quantitative verification results also show that WRF could detect both phenomena just before the flood occurred.


2019 ◽  
Vol 9 ◽  
pp. A27
Author(s):  
Marlon Núñez ◽  
Teresa Nieves-Chinchilla ◽  
Antti Pulkkinen

This study presents a quantitative assessment of the use of Extreme Ultraviolet (EUV) observations in the prediction of Solar Energetic Proton (SEP) events. The UMASEP scheme (Space Weather, 9, S07003, 2011; 13, 2015, 807–819) forecasts the occurrence and the intensity of the first hours of SEP events. To predict well-connected events, the scheme correlates Solar Soft X-rays (SXR) with differential proton fluxes from the GOES satellites. Here, we explore the use of the EUV time history from the GOES-EUVS and SDO-AIA instruments in the UMASEP scheme. We present results for the prediction of the occurrence of well-connected >10 MeV SEP events over the period May 2010 to December 2017, in terms of Probability of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), and the average and median warning times. The UMASEP/EUV-based models were calibrated using GOES and SDO data from May 2010 to October 2014 and validated using out-of-sample SDO data from November 2014 to December 2017. The best results were obtained by models that used EUV data in the range 50–340 Å. We conclude that the UMASEP/EUV-based models yield similar or better POD results, and similar or worse FAR results, than the current real-time UMASEP/SXR-based model. The higher POD of the UMASEP/EUV-based models in the 50–340 Å range stems from a high percentage of successful predictions of well-connected SEP events associated with <C4 flares and behind-the-limb flares, which amounted to 25% of all well-connected events during the period May 2010 to December 2017.
Using all the available data (2010–2017), this study also concludes that the simultaneous use of SXRs and 94 Å EUVs in the UMASEP-10 tool for predicting all >10 MeV SEP events improves overall performance, yielding a POD of 92.9% (39/42) compared with 81% (34/42) for the current tool, and a slightly worse FAR of 31.6% (18/57) compared with 29.2% (14/58).


Water ◽  
2021 ◽  
Vol 13 (8) ◽  
pp. 1061
Author(s):  
Thanh Thi Luong ◽  
Judith Pöschmann ◽  
Rico Kronenberg ◽  
Christian Bernhofer

Convective rainfall can cause dangerous flash floods within less than six hours. Thus, simple approaches are required for issuing quick warnings. The flash flood guidance (FFG) approach pre-calculates rainfall levels (thresholds) potentially causing critical water levels for a specific catchment. Afterwards, only rainfall and soil moisture information are required to issue warnings. This study applied the principle of FFG to the Wernersbach Catchment (Germany), which has excellent data coverage, using the BROOK90 water budget model. The rainfall thresholds were determined for durations of 1 to 24 h by running BROOK90 in "inverse" mode, identifying rainfall values for each duration that led to exceedance of a critical discharge (fixed value). After calibrating the model based on its runoff, we ran it in hourly mode with four precipitation types and various levels of initial soil moisture for the period 1996–2010. The rainfall threshold curves showed a very high probability of detection (POD) of 91% for the 40 extracted flash flood events in the study period; however, the false alarm rate (FAR) of 56% and the critical success index (CSI) of 42% should be improved in further studies. The proposed adjusted FFG approach has the potential to provide reliable support in flash flood forecasting.
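The "inverse mode" idea — searching for the smallest rainfall amount that pushes simulated peak discharge past the critical value — can be sketched as a bisection over a stand-in response function. This is hypothetical: the actual procedure runs the BROOK90 model, which is not reproduced here, and `peak_discharge` below is an invented monotone placeholder.

```python
# Bisection search for a rainfall threshold, assuming peak discharge
# increases monotonically with rainfall depth for a fixed duration and
# initial soil moisture. peak_discharge is a toy stand-in for a model run.

def peak_discharge(rain_mm):
    # Invented response curve; a real run would call the water budget model.
    return 0.05 * rain_mm ** 1.5

def rainfall_threshold(q_crit, lo=0.0, hi=500.0, tol=0.01):
    """Smallest rainfall depth (mm) whose simulated peak discharge reaches q_crit."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if peak_discharge(mid) >= q_crit:
            hi = mid  # threshold is at or below mid
        else:
            lo = mid  # threshold is above mid
    return hi

print(round(rainfall_threshold(10.0), 1))
```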


2014 ◽  
Vol 32 (3) ◽  
pp. 561
Author(s):  
Fabiani Denise Bender ◽  
Rita Yuri Ynoue

ABSTRACT. This study describes a spatial analysis of the precipitation field with the MODE tool, which compares objects converted from gridded forecast and observed precipitation values. The evaluation was performed daily from April 2010 to March 2011 for the 36-h GFS precipitation forecasts initialized at 00 UTC over the state of São Paulo and its surroundings. Besides traditional verification measures, such as accuracy (A), critical success index (CSI), bias (BIAS), probability of detection (POD), and false alarm ratio (FAR), new verification measures are proposed: area ratio (AR), centroid distance (CD), and the 50th- and 90th-percentile intensity ratios (PR50 and PR90). Better performance was attained during the rainy season. Part of the error in the simulations was due to overestimation of the forecast intensity and precipitation areas relative to observations.
Keywords: object-based verification, weather forecast, precipitation, MODE, São Paulo.


2016 ◽  
Vol 33 (1) ◽  
pp. 61-80 ◽  
Author(s):  
S.-G. Park ◽  
Ji-Hyeon Kim ◽  
Jeong-Seok Ko ◽  
Gyuwon Lee

Abstract The Ministry of Land, Infrastructure and Transport (MOLIT) of South Korea operates two S-band dual-polarimetric radars, as of 2013, to manage water resources through quantitative rainfall estimations at the surface level. However, the radar measurements suffer from range ambiguity. In this study, an algorithm based on fuzzy logic is developed to identify range overlaid echoes using seven inputs: the standard deviations of differential reflectivity SD(ZDR), differential propagation phase SD(ϕDP), correlation coefficient SD(ρHV), and spectrum width SD(συ); the means of ρHV and συ; and the difference of ϕDP from the system offset, ΔϕDP. An examination of the algorithm’s performance shows that these echoes can be well identified: echoes strongly affected by second trip are highlighted by high probabilities, over 0.6; echoes weakly affected have probabilities from 0.4 to 0.6; and those with low probabilities, below 0.4, are assigned as echoes without range ambiguity. A quantitative analysis of a limited number of cases using the usual skill scores shows that when the probability of 0.4 is considered as a threshold for identifying the range overlaid echoes, they can be identified with a probability of detection of 90%, a false alarm rate of 6%, and a critical success index of 84%.
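The fuzzy-logic step — mapping each input to a [0, 1] membership and aggregating the memberships into one overlay probability compared against the 0.4 threshold — can be sketched as follows. The breakpoints, inputs, and equal weighting are invented for illustration; they are not the MOLIT algorithm's values.

```python
def trapezoid(x, x0, x1, x2, x3):
    """Membership rising over [x0, x1], flat over [x1, x2], falling to x3."""
    if x <= x0 or x >= x3:
        return 0.0
    if x1 <= x <= x2:
        return 1.0
    if x < x1:
        return (x - x0) / (x1 - x0)
    return (x3 - x) / (x3 - x2)

def overlay_probability(memberships, weights=None):
    """Weighted mean of per-input memberships, in [0, 1]."""
    if weights is None:
        weights = [1.0] * len(memberships)
    return sum(m * w for m, w in zip(memberships, weights)) / sum(weights)

# Two illustrative inputs: large SD(ZDR) and large SD(phi_DP) both point
# toward range-overlaid echo; breakpoints here are made up for the sketch.
m_sd_zdr = trapezoid(2.5, 1.0, 2.0, 9.0, 10.0)    # -> 1.0
m_sd_phidp = trapezoid(1.5, 1.0, 2.0, 9.0, 10.0)  # -> 0.5
p = overlay_probability([m_sd_zdr, m_sd_phidp])
print(p, p >= 0.4)  # 0.4 is the paper's identification threshold
```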


2016 ◽  
Vol 31 (1) ◽  
pp. 273-295 ◽  
Author(s):  
Burkely T. Gallo ◽  
Adam J. Clark ◽  
Scott R. Dembek

Abstract Hourly maximum fields of simulated storm diagnostics from experimental versions of convection-permitting models (CPMs) provide valuable information regarding severe weather potential. While past studies have focused on predicting any type of severe weather, this study uses a CPM-based Weather Research and Forecasting (WRF) Model ensemble initialized daily at the National Severe Storms Laboratory (NSSL) to derive tornado probabilities using a combination of simulated storm diagnostics and environmental parameters. Daily probabilistic tornado forecasts are developed from the NSSL-WRF ensemble using updraft helicity (UH) as a tornado proxy. The UH fields are combined with simulated environmental fields such as lifted condensation level (LCL) height, most unstable and surface-based CAPE (MUCAPE and SBCAPE, respectively), and multifield severe weather parameters such as the significant tornado parameter (STP). Varying thresholds of 2–5-km updraft helicity were tested with differing values of σ in the Gaussian smoother that was used to derive forecast probabilities, as well as different environmental information, with the aim of maximizing both forecast skill and reliability. The addition of environmental information improved the reliability and the critical success index (CSI) while slightly degrading the area under the receiver operating characteristic (ROC) curve across all UH thresholds and σ values. The probabilities accurately reflected the location of tornado reports, and three case studies demonstrate value to forecasters. Based on initial tests, four sets of tornado probabilities were chosen for evaluation by participants in the 2015 National Oceanic and Atmospheric Administration’s Hazardous Weather Testbed Spring Forecasting Experiment from 4 May to 5 June 2015. Participants found the probabilities useful and noted an overforecasting tendency.
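The step from a binary UH-exceedance field to smooth forecast probabilities via a Gaussian smoother can be sketched as below. This is an assumed illustration, not the NSSL-WRF ensemble code; the threshold, σ, and the STP gating values are placeholders.

```python
# Smooth binary updraft-helicity exceedances into a probability field,
# gating points on an environmental parameter (here STP) before smoothing.

import numpy as np
from scipy.ndimage import gaussian_filter

def tornado_probability(uh, stp, uh_thresh=75.0, stp_min=1.0, sigma=2.0):
    """Gaussian-smoothed probability field from UH exceedances gated by STP."""
    exceed = ((uh >= uh_thresh) & (stp >= stp_min)).astype(float)
    return gaussian_filter(exceed, sigma=sigma)

uh = np.zeros((21, 21))
uh[10, 10] = 120.0              # one simulated rotating storm on the grid
stp = np.full((21, 21), 2.0)    # favorable environment everywhere
prob = tornado_probability(uh, stp)
print(prob.max() <= 1.0, int(prob.argmax()) == 10 * 21 + 10)
```

The smoothing spreads each grid point's exceedance over its neighborhood, which is what turns a deterministic proxy field into a spatial probability; larger σ trades sharpness for reliability, matching the σ tuning described above.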


2020 ◽  
Vol 20 (5) ◽  
pp. 1799-1806
Author(s):  
Fatemeh Moazami Goudarzi ◽  
Amirpouya Sarraf ◽  
Hassan Ahmadi

Abstract In recent years, the use of climatic databases and satellite products by researchers has become increasingly common in the field of climate modeling and research. These datasets play an important role in developing countries. This study evaluated two satellite-based precipitation products, CMORPH and SM2RAIN-ASCAT, over Maharlu Lake, a semi-arid region in Iran. The results showed that these two near-real-time datasets do not provide accurate data over this basin. However, the probability of detection (POD), critical success index (CSI), and false alarm ratio (FAR) statistics showed acceptable accuracy in the detection of precipitation. The coefficient of determination and root mean square error statistics have unacceptable accuracy over this area. The monthly changes in each of the indices showed that the CMORPH database had more errors in the spring months, but in other months the error rate improved. SM2RAIN-ASCAT had better accuracy over this area relative to CMORPH. The estimation of the total accuracy of the data showed that these two satellite databases were not capable of estimating precipitation in the area.


Author(s):  
Richard Müller ◽  
Stephane Haussler ◽  
Matthias Jerg

This study investigates the role of NWP filtering in the remote sensing of cumulonimbus clouds (Cbs) through 14 experiments covering Central Europe. The experiments combine different stability filter settings with the use of different channels for the infrared (IR) brightness temperatures. The stability filter parameters are taken from Numerical Weather Prediction (NWP). The brightness temperature information comes from the IR SEVIRI instrument aboard the Meteosat Second Generation satellite and enables the detection of very cold, high clouds close to the tropopause. The satellite-only approaches (no NWP filtering) detect Cbs with a relatively high probability of detection but, unfortunately, with a large False Alarm Rate (FAR), leading to a Critical Success Index (CSI) below 60%. The false alarms result from other types of very cold, high clouds. It is shown that the false alarms can be significantly reduced by applying an appropriate NWP stability filter, raising the CSI to about 70%. A brief review of the literature clarifies that this function of the NWP filter cannot be replaced by MSG IR spectroscopy; thus, NWP filtering is strongly recommended to increase the quality of satellite-based Cb detection. Furthermore, the well-established convective available potential energy (CAPE) and the convection index (KO) are shown to work well as stability filters.

