Impacts of Forecaster Involvement on Convective Storm Initiation and Evolution Nowcasting

2012, Vol 27 (5), pp. 1061-1089
Author(s):
Rita D. Roberts
Amanda R. S. Anderson
Eric Nelson
Barbara G. Brown
James W. Wilson
...  

Abstract A forecaster-interactive capability was added to an automated convective storm nowcasting system [Auto-Nowcaster (ANC)] to allow forecasters to enhance the performance of 1-h nowcasts of convective storm initiation and evolution produced every 6 min. This Forecaster-Over-The-Loop (FOTL-ANC) system was tested at the National Weather Service Fort Worth–Dallas, Texas, Weather Forecast Office during daily operations from 2005 to 2010. The forecaster's role was to enter the locations of surface convergence boundaries into the ANC prior to dissemination of nowcasts to the Center Weather Service Unit. Verification of the FOTL-ANC versus ANC (no human) nowcasts was conducted on the convective scale. Categorical verification scores were computed for 30 subdomains within the forecast domain, with special focus on subdomains containing convergence boundaries, to evaluate the impact of forecaster involvement on the FOTL-ANC nowcasts. The probability of detection of convective storms increased by 20%–60% with little to no change in the false-alarm ratio. Bias values increased from 0.8–1.0 to 1.0–3.0 with human involvement. The accuracy of storm nowcasts notably improved with forecaster involvement; critical success index (CSI) values increased from 0.15–0.25 (ANC) to 0.2–0.4 (FOTL-ANC), and over short time periods CSI values as large as 0.6 were observed. This study demonstrated that forecaster involvement improved the nowcasts in most cases and caused no degradation in the others, with a few noted exceptions. The results show that forecasters can play an important role in the production of rapidly updated convective storm nowcasts for end users.
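The categorical scores reported above (POD, false-alarm ratio, bias, and CSI) all derive from a 2x2 contingency table of forecast versus observed storm occurrence. A minimal sketch in Python, with hypothetical counts for a single subdomain:

```python
# Categorical verification scores from a 2x2 contingency table of yes/no
# forecasts, as used to compare FOTL-ANC and ANC nowcasts per subdomain.

def categorical_scores(hits: int, false_alarms: int, misses: int) -> dict:
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false-alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return {"POD": pod, "FAR": far, "BIAS": bias, "CSI": csi}

# Hypothetical counts for one subdomain containing a convergence boundary:
print(categorical_scores(hits=40, false_alarms=25, misses=35))
```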

2014, Vol 32 (3), pp. 561
Author(s):
Fabiani Denise Bender
Rita Yuri Ynoue

ABSTRACT. This study describes a spatial analysis of the precipitation field with the MODE tool, which converts gridded forecast and observed precipitation values into objects that are then compared with each other. The evaluation was performed daily, from April 2010 to March 2011, for the 36-h GFS precipitation forecasts initialized at 00 UTC over the state of São Paulo and its surroundings. Besides traditional verification measures, such as accuracy (A), critical success index (CSI), bias (BIAS), probability of detection (POD), and false alarm ratio (FAR), new verification measures are proposed: area ratio (AR), centroid distance (CD), and the 50th- and 90th-percentile ratios of intensity (PR50 and PR90). Better performance was attained during the rainy season. Part of the error in the simulations was due to overestimation of the forecast intensity and precipitation areas.

Keywords: object-based verification, weather forecast, precipitation, MODE, São Paulo.
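The new object-pair measures are simple to state once matched forecast and observed objects are in hand. A hedged sketch of one plausible implementation (the exact definitions in the paper may differ):

```python
import numpy as np

# Object-pair measures from the abstract: area ratio (AR), centroid distance
# (CD), and 50th/90th-percentile intensity ratios (PR50, PR90). `fcst` and
# `obs` are boolean masks of a matched object pair on a common grid;
# `fcst_rain` and `obs_rain` hold the rain amounts inside each object.

def object_measures(fcst, obs, fcst_rain, obs_rain, dx_km=1.0):
    ar = fcst.sum() / obs.sum()                    # area ratio (forecast/observed)
    cf = np.array(np.nonzero(fcst)).mean(axis=1)   # forecast centroid (grid units)
    co = np.array(np.nonzero(obs)).mean(axis=1)    # observed centroid
    cd = float(np.hypot(*(cf - co))) * dx_km       # centroid distance (km)
    pr50 = np.percentile(fcst_rain, 50) / np.percentile(obs_rain, 50)
    pr90 = np.percentile(fcst_rain, 90) / np.percentile(obs_rain, 90)
    return ar, cd, pr50, pr90
```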


2020, Vol 35 (2), pp. 635-656
Author(s):
Matthew J. Bunkers
Steven R. Fleegel
Thomas Grafenauer
Chauncy J. Schultz
Philip N. Schumacher

Abstract The objective of this study is to provide guidance on when hail and/or wind is climatologically most likely (temporally and spatially), based on the ratio of severe hail reports to severe wind reports, for use by National Weather Service (NWS) forecasters when issuing severe convective warnings. Accordingly, a climatology of reported hail-to-wind ratios (i.e., the number of hail reports divided by the number of wind reports) for observed severe convective storms was derived using U.S. storm reports from 1955 to 2017. Owing to several temporal changes in reporting and warning procedures, the 1996–2017 period was chosen for the spatiotemporal analyses, yielding 265 691 hail and 294 449 wind reports. The most notable changes in hail–wind ratios occurred around 1996, as the NWS modernized and deployed new radars (leading to more hail reports relative to wind), and in 2010, when the severe hail criterion increased nationwide (leading to more wind reports relative to hail). One key finding is that hail–wind ratios are maximized (i.e., relatively more hail than wind) from late morning through midafternoon and in the spring (March–May), with geographical maxima over the central United States and complex/elevated terrain. Conversely, minimum ratios occur overnight, during late summer (July–August) and November–December, and over the eastern United States. While the results reflect reporting biases (e.g., fewer wind than hail reports in low-population areas but more wind reports where mesonets are available), meteorological factors such as convective mode and cool spring versus warm summer environments also appear to be associated with the hail–wind ratio climatology.
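The core quantity is a simple ratio aggregated over time and space. A sketch of the monthly aggregation, assuming a flat report table whose file and column names (`time`, `type`) are hypothetical:

```python
import pandas as pd

# Hail-to-wind ratio by month: number of severe hail reports divided by the
# number of severe wind reports. File and column names are assumptions.
reports = pd.read_csv("storm_reports_1996_2017.csv", parse_dates=["time"])
monthly = reports.groupby([reports["time"].dt.month, "type"]).size().unstack("type")
hail_wind_ratio = monthly["hail"] / monthly["wind"]
print(hail_wind_ratio)  # per the climatology: spring maxima, late-summer minima
```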


Water, 2021, Vol 13 (8), pp. 1061
Author(s):
Thanh Thi Luong
Judith Pöschmann
Rico Kronenberg
Christian Bernhofer

Convective rainfall can cause dangerous flash floods within less than six hours, so simple approaches are required for issuing quick warnings. The flash flood guidance (FFG) approach pre-calculates rainfall levels (thresholds) that would potentially cause critical water levels in a specific catchment; afterwards, only rainfall and soil moisture information are required to issue warnings. This study applied the principle of FFG to the Wernersbach Catchment (Germany), which has excellent data coverage, using the BROOK90 water budget model. The rainfall thresholds were determined for durations of 1 to 24 h by running BROOK90 in "inverse" mode, identifying the rainfall value for each duration that led to exceedance of a critical discharge (fixed value). After calibrating the model against observed runoff, we ran it in hourly mode with four precipitation types and various levels of initial soil moisture for the period 1996–2010. The rainfall threshold curves showed a very high probability of detection (POD) of 91% for the 40 flash flood events extracted from the study period; however, the false alarm rate (FAR) of 56% and the critical success index (CSI) of 42% should be improved in further studies. The proposed adjusted FFG approach has the potential to provide reliable support in flash flood forecasting.
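Once the inverse runs have produced duration-dependent thresholds, the warning step reduces to a lookup and comparison. A minimal sketch, with illustrative threshold values that are not from the paper:

```python
# Duration-dependent rainfall thresholds (mm) for one catchment and one
# initial-soil-moisture class; the numbers here are illustrative only.
THRESHOLDS_MM = {1: 25.0, 3: 40.0, 6: 55.0, 12: 75.0, 24: 100.0}

def flash_flood_warning(accum_rain_mm: dict) -> bool:
    """accum_rain_mm maps duration (h) -> rainfall accumulation (mm)."""
    return any(
        accum_rain_mm.get(duration, 0.0) >= threshold
        for duration, threshold in THRESHOLDS_MM.items()
    )

print(flash_flood_warning({1: 12.0, 3: 44.0, 6: 50.0}))  # True: 3-h threshold exceeded
```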


Author(s):  
Evan S. Bentley
Richard L. Thompson
Barry R. Bowers
Justin G. Gibbs
Steven E. Nelson

Abstract Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or preceding tornado formation in the case of missed events. The investigation examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings, with comparisons made to the current Warning Decision Training Division (WDTD) guidance. Combining low-level rotational velocity and the significant tornado parameter (STP), as in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings on weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, by careful elimination of velocity data artifacts such as sidelobe contamination, and by greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
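The criteria combination described above amounts to a simple filter on candidate warnings. A hedged sketch (threshold values are from the abstract; the function and field names are assumptions):

```python
# Warn-decision filter combining low-level rotation, circulation breadth, and
# the near-storm environment, per the thresholds quoted in the abstract.

def passes_warning_criteria(v_rot_kt: float, stp: float, diameter_nm: float) -> bool:
    strong_rotation = v_rot_kt > 30.0  # ~15 m/s low-level rotational velocity
    compact = diameter_nm <= 1.0       # exclude broad (>1 n mi) circulations
    supportive_env = stp > 0.0         # exclude STP = 0 environments
    return strong_rotation and compact and supportive_env

print(passes_warning_criteria(v_rot_kt=35.0, stp=1.2, diameter_nm=0.8))  # True
```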


2018, Vol 33 (6), pp. 1501-1511
Author(s):
Harold E. Brooks
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
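Signal detection theory, as invoked above, separates how well the warning system discriminates tornadic from nontornadic situations from how willing it is to warn. A minimal sketch of the standard decomposition, with made-up rates (note that defining the false-alarm rate for warnings requires choosing a population of non-events, which the sketch takes as given):

```python
from scipy.stats import norm

def sdt_parameters(hit_rate: float, false_alarm_rate: float):
    """Split performance into discrimination (d') and warning threshold (c)."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    d_prime = z_h - z_f              # quality of the warning system
    criterion = -0.5 * (z_h + z_f)   # threshold for issuing warnings
    return d_prime, criterion

print(sdt_parameters(hit_rate=0.75, false_alarm_rate=0.25))  # (~1.35, 0.0)
```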


2021, Vol 8
Author(s):
Marta de Alfonso
Jue Lin-Ye
José M. García-Valdecasas
Susana Pérez-Rubio
M. Yolanda Luna
...  

Storm Gloria, generated on 17 January 2020 in the eastern North Atlantic, crossed the Iberian Peninsula and impacted the western Mediterranean during the following days. The event produced considerable damage to the coast and infrastructure of the Catalan-Balearic Sea, due to extraordinary wind and wave fields concomitant with anomalously intense rain and ocean currents. Puertos del Estado (the Spanish holding of harbors) has developed and operates a complex monitoring and forecasting system (the PORTUS System) in collaboration with the Spanish Met Office (AEMET). The present work shows how Gloria was correctly forecast by this system, how alerts were properly issued (with special focus on the ports), and how the buoys monitored sea-state conditions during the event, measuring several new records of significant wave height and exceptionally high mean wave periods. The paper describes in detail the dynamic evolution of the atmospheric conditions and the sea state during the storm, by means of a study of both in situ and modeled PORTUS data in combination with the AEMET weather forecast system results. The analysis also serves to place the storm in a historical context, showing the exceptional nature of the event, and to identify the specific reasons why its impact was particularly severe. The work also demonstrates the relevance of the PORTUS System for warning the main Spanish ports in advance, preventing accidents that could result in fatal casualties. To do so, the wave forecast warning performance is analyzed, with special focus on the skill score for the different forecast horizons. Furthermore, it is shown how a storm of this nature creates the need for changes in the extreme wave analysis for the area, which affects all manner of design activities at the coastline. The paper studies both how this storm fits into existing extreme analyses and how these should be modified in light of this single event. This work is the first of a series of papers on the event; the others analyze in detail other aspects, including the evolution of sea level and a description of coastal damages.


2019, Vol 34 (4), pp. 1137-1160
Author(s):
Ryan Lagerquist
Amy McGovern
David John Gagne II

Abstract This paper describes the use of convolutional neural nets (CNNs), a type of deep learning, to identify fronts in gridded data, followed by a novel postprocessing method that converts probability grids to objects. Synoptic-scale fronts are often associated with extreme weather in the midlatitudes. Predictors are 1000-mb (1 mb = 1 hPa) grids of wind velocity, temperature, specific humidity, wet-bulb potential temperature, and/or geopotential height from the North American Regional Reanalysis. Labels are human-drawn fronts from Weather Prediction Center bulletins. We present two experiments to optimize parameters of the CNN and the object conversion. To evaluate our system, we compare the objects (predicted warm and cold fronts) with human-analyzed warm and cold fronts, matching fronts of the same type within a 100- or 250-km neighborhood distance. At 250 km our system obtains a probability of detection of 0.73, a success ratio of 0.65 (or a false-alarm rate of 0.35), and a critical success index of 0.52. These values drastically outperform the baseline, a traditional method from numerical frontal analysis. Our system is not intended to replace human meteorologists, but to provide an objective method that can be applied consistently and easily to a large number of cases. It could be used, for example, to create climatologies and to quantify the spread in forecast frontal properties across members of a numerical weather prediction ensemble.
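The neighborhood matching step can be made concrete with a small sketch: a predicted front counts as a hit if a human-analyzed front of the same type lies within the neighborhood distance. The polyline representation and point-set distance below are assumptions:

```python
import numpy as np

# Fronts as polylines of (x, y) points in km on a common projection; a
# predicted front is a hit if its minimum point-to-point distance to an
# analyzed front of the same type is within the neighborhood radius.

def min_distance_km(front_a: np.ndarray, front_b: np.ndarray) -> float:
    diffs = front_a[:, None, :] - front_b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

def is_hit(predicted: np.ndarray, analyzed: np.ndarray, radius_km: float = 250.0) -> bool:
    return min_distance_km(predicted, analyzed) <= radius_km

pred = np.array([[0.0, 0.0], [100.0, 50.0]])
obs = np.array([[180.0, 60.0], [300.0, 90.0]])
print(is_hit(pred, obs))  # True: closest sampled points are ~80.6 km apart
```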


2019, Vol 11 (4), pp. 443
Author(s):
Richard Müller
Stéphane Haussler
Matthias Jerg
Dirk Heizenreder

This study presents a novel approach for the early detection of developing thunderstorms. To date, methods for the detection of developing thunderstorms have usually relied on accurate Atmospheric Motion Vectors (AMVs) for the estimation of the cooling rates of convective clouds, which correspond to the updraft strengths of the cloud objects. In this study, we present a method for the estimation of the updraft strength that does not rely on AMVs. The updraft strength is derived directly from the satellite observations in the SEVIRI water vapor channels. For this purpose, the absolute value of the vector product of the spatio-temporal gradients of the SEVIRI water vapor channels is calculated for each satellite pixel, referred to as the Normalized Updraft Strength (NUS). The main idea of the concept is that vertical updraft leads to NUS values significantly above zero, whereas horizontal cloud movement leads to NUS values close to zero. Thus, NUS is a measure of the strength of the vertical updraft and can be applied to distinguish between advection and convection. The performance of the method was investigated for two summer periods, 2016 and 2017, by validation against lightning data. Critical Success Index (CSI) values of about 66% for 2016 and 60% for 2017 demonstrate the good performance of the method. The Probability of Detection (POD) values for the base case are 81.8% (2016) and 89.2% (2017); the corresponding False Alarm Ratio (FAR) values are 22.6% (2016) and 36.4% (2017). In summary, the method has the potential to reduce forecast lead time significantly and can be quite useful in regions without a well-maintained radar network.
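Reading "vector product" as the cross product between the spatio-temporal gradient vectors of the two SEVIRI water-vapor channels, a per-pixel sketch looks as follows (the channel pairing and normalization are assumptions; the paper's exact formulation may differ):

```python
import numpy as np

# NUS sketch: parallel gradients (pure horizontal advection) give a cross
# product near zero; vertical updraft gives a large magnitude. `wv62` and
# `wv73` are (time, y, x) arrays of water-vapor brightness temperatures.

def normalized_updraft_strength(wv62: np.ndarray, wv73: np.ndarray) -> np.ndarray:
    g62 = np.stack(np.gradient(wv62), axis=-1)  # per-pixel (d/dt, d/dy, d/dx)
    g73 = np.stack(np.gradient(wv73), axis=-1)
    return np.linalg.norm(np.cross(g62, g73), axis=-1)  # |g62 x g73| per pixel
```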


2016, Vol 97 (3), pp. 329-336
Author(s):
Robert J. Trapp
David J. Stensrud
Michael C. Coniglio
Russ S. Schumacher
Michael E. Baldwin
...  

Abstract The Mesoscale Predictability Experiment (MPEX) was a field campaign conducted 15 May through 15 June 2013 within the Great Plains region of the United States. One of the research foci of MPEX concerned the upscaling effects of deep convective storms on their environment and how these feed back to convective-scale dynamics and predictability. Balloon-borne GPS radiosondes, or "upsondes," were used to sample such environmental feedbacks. Two of the upsonde teams employed dual-frequency sounding systems that allowed for upsonde observations at intervals as fast as 15 min. Because these dual-frequency systems also had the capacity for full mobility during sonde reception, highly adaptive and rapid storm-relative sampling of the convectively modified environment was possible. This article documents the mobile sounding capabilities and unique sampling strategies employed during MPEX.


2015, Vol 96 (12), pp. 2127-2149
Author(s):
Morris L. Weisman
Robert J. Trapp
Glen S. Romine
Chris Davis
Ryan Torn
...  

Abstract The Mesoscale Predictability Experiment (MPEX) was conducted from 15 May to 15 June 2013 in the central United States. MPEX was motivated by the basic question of whether experimental, subsynoptic observations can extend convective-scale predictability and otherwise enhance skill in short-term regional numerical weather prediction. Observational tools for MPEX included the National Science Foundation (NSF)–National Center for Atmospheric Research (NCAR) Gulfstream V aircraft (GV), which featured the Airborne Vertical Atmospheric Profiling System mini-dropsonde system and a microwave temperature-profiling (MTP) system as well as several ground-based mobile upsonde systems. Basic operations involved two missions per day: an early morning mission with the GV, well upstream of anticipated convective storms, and an afternoon and early evening mission with the mobile sounding units to sample the initiation and upscale feedbacks of the convection. A total of 18 intensive observing periods (IOPs) were completed during the field phase, representing a wide spectrum of synoptic regimes and convective events, including several major severe weather and/or tornado outbreak days. The novel observational strategy employed during MPEX is documented herein, as is the unique role of the ensemble modeling efforts—which included an ensemble sensitivity analysis—to both guide the observational strategies and help address the potential impacts of such enhanced observations on short-term convective forecasting. Preliminary results of retrospective data assimilation experiments are discussed, as are data analyses showing upscale convective feedbacks.

