Assessment of NWS County Warning Area Tornado Risk, Exposure, and Vulnerability

2021 ◽  
Vol 13 (2) ◽  
pp. 189-209
Author(s):  
Stephen M. Strader ◽  
Alex M. Haberlie ◽  
Alexandra G. Loitz

Abstract This study investigates the interrelationships between National Weather Service (NWS) county warning area (CWA) tornado risk, exposure, and societal vulnerability. CWA climatological tornado risk is determined using historical tornado event data, and exposure and vulnerability are assessed by employing present-day population, housing, socioeconomic, and demographic metrics. In addition, tornado watches, warnings, warning lead times, false alarm warnings, and unwarned tornado reports are examined in relation to CWA risk, exposure, and vulnerability. Results indicate that southeastern U.S. CWAs are more susceptible to tornado impacts because of their greater tornado frequencies and larger damage footprints intersecting more vulnerable populations (e.g., poverty and manufactured homes). Midwest CWAs experience fewer tornadoes relative to Southeast and southern plains CWAs but encompass faster tornado translational speeds and greater population densities where higher concentrations of vulnerable individuals often reside. Northern plains CWAs contain longer-tracked tornadoes on average and larger percentages of vulnerable elderly and rural persons. Southern plains CWAs experience the highest tornado frequencies in general and contain larger percentages of minority Latinx populations. Many of the most socially vulnerable CWAs have shorter warning lead times and greater percentages of false alarm warnings and unwarned tornadoes. Study findings provide NWS forecasters with an improved understanding of the relationships between tornado risk, exposure, vulnerability, and warning outcomes within their respective CWAs. Findings may also assist NWS Weather Forecast Offices and the Warning Decision Training Division with developing training materials aimed at increasing NWS forecaster knowledge of how tornado risk, exposure, and vulnerability factors influence local tornado disaster potential.
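As a rough illustration of the exposure component described above, the sketch below intersects a single tornado damage footprint with a gridded population surface; the track, footprint width, and population values are hypothetical and not taken from the study.

```python
# Illustrative sketch only: estimate population exposure by intersecting a
# hypothetical tornado damage footprint with a synthetic population grid.
import numpy as np
from shapely.geometry import LineString, Point

# Synthetic 0.01-degree population grid over a small domain
lons = np.arange(-98.0, -97.0, 0.01)
lats = np.arange(35.0, 36.0, 0.01)
pop = np.random.default_rng(0).integers(0, 50, size=(lats.size, lons.size))

# Hypothetical tornado track buffered into a damage footprint roughly 1 km wide
track = LineString([(-97.9, 35.1), (-97.4, 35.6)])
footprint = track.buffer(0.005)

# Sum the population of grid cells whose centers fall inside the footprint
exposed = sum(
    pop[i, j]
    for i, lat in enumerate(lats)
    for j, lon in enumerate(lons)
    if footprint.contains(Point(lon, lat))
)
print(f"Population within the damage footprint: {exposed}")
```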

2021 ◽  
Author(s):  
Thomas Röösli ◽  
David N. Bresch

Weather extremes can have high socio-economic impacts. Better impact forecasting and preventive action help to reduce these impacts. In Switzerland, winter windstorms have caused extensive building damage, felled trees, and interrupted traffic and power. Events such as Burglind-Eleanor in January 2018 are a learning opportunity for weather warnings, risk modelling, and decision-making.

We have developed and implemented an operational impact forecasting system for building damage due to wind events in Switzerland. We use the ensemble weather forecast of wind gusts produced by the national meteorological agency MeteoSwiss and couple this hazard information with a spatially explicit impact model (CLIMADA) for building damage due to winter windstorms. Each day, the impact forecasting system publishes a probabilistic forecast of the expected building damage on a spatial grid.

The system produces promising results for major historical storms when compared to aggregated daily building insurance claims data from a public building insurer of the canton of Zurich. The daily impact forecasts were qualitatively categorized as (1) successful, (2) miss, or (3) false alarm. The impacts of windstorm Burglind-Eleanor and five other winter windstorms were forecast reasonably well, with four successful forecasts, one miss, and one false alarm.

Building damage due to smaller storm extremes was not forecast as successfully. Thunderstorms are not as well forecast at a lead time of two days, and as a result the impact forecasting system produces more misses and false alarms outside the winter storm season. For the Alpine-specific southerly Foehn winds, the impact forecasts produce many false alarms, probably caused by an overestimation of wind gusts in the weather forecast.

The forecasting system can be used to improve weather warnings and to allocate resources and staff in the claims-handling process of building insurers. This will help to reduce recovery times and costs for institutions and individuals. The open-source code and open meteorological data make this implementation transferable to other hazard types and other geographical regions.
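A minimal numpy sketch of the impact-forecasting idea, not the CLIMADA implementation itself: each ensemble member's wind-gust field is passed through an assumed damage function and combined with building values to give a probabilistic damage forecast. The ensemble size, gust fields, building values, and damage-function parameters are all invented for illustration.

```python
# Minimal sketch of the impact-forecast idea (not the CLIMADA code): ensemble wind
# gusts -> assumed damage function -> damage per member -> forecast percentiles.
import numpy as np

rng = np.random.default_rng(42)
n_members, n_cells = 21, 500                                         # hypothetical sizes
gusts = rng.gamma(shape=9.0, scale=4.0, size=(n_members, n_cells))   # m/s, synthetic
building_value = rng.uniform(1e5, 5e6, size=n_cells)                 # CHF per cell, synthetic

def damage_fraction(v, v_half=60.0, v_thresh=25.0):
    """Assumed sigmoidal wind-damage function with illustrative parameters."""
    vn = np.maximum(v - v_thresh, 0.0) / (v_half - v_thresh)
    return vn**3 / (1.0 + vn**3)

# Damage for each ensemble member, summed over all grid cells
member_damage = (damage_fraction(gusts) * building_value).sum(axis=1)

# Publish the daily probabilistic forecast as percentiles of the ensemble damage
print(f"median damage:   {np.median(member_damage):,.0f} CHF")
print(f"90th percentile: {np.percentile(member_damage, 90):,.0f} CHF")
```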


2009 ◽  
Vol 24 (1) ◽  
pp. 140-154 ◽  
Author(s):  
J. Brotzge ◽  
S. Erickson

Abstract During a 5-yr period of study from 2000 to 2004, slightly more than 10% of all National Weather Service (NWS) tornado warnings were issued either simultaneously as the tornado formed (i.e., with zero lead time) or minutes after initial tornado formation but prior to tornado dissipation (i.e., with “negative” lead time). This study examines why these tornadoes were not warned in advance and what climate, storm morphology, and sociological factors may have played a role in delaying the issuance of the warning. The dataset of zero and negative lead time warnings is sorted by F-scale rating, geographically by region and weather forecast office (WFO), hour of the day, month of the year, tornado-to-radar distance, county population density, and number of tornadoes by day, hour, and order of occurrence. Two key results from this study are (i) providing advance warning on the first tornado of the day remains a difficult challenge and (ii) the more isolated the tornado event, the lower the likelihood that an advance warning is provided. WFOs that experience many large-scale outbreaks have a lower proportion of warnings with negative lead time than WFOs that experience many more isolated, one-tornado or two-tornado warning days. Monthly and geographic trends in lead time are directly impacted by the number of multiple-tornado events. Except for a few isolated cases, tornado-to-radar distance, county population density, and storm morphology did not have a significant impact on negative-lead-time warnings.
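The lead-time bookkeeping used above reduces to a simple calculation; the sketch below (with hypothetical timestamps) shows one way to classify a warning as having positive, zero, or negative lead time, or the tornado as unwarned.

```python
# Lead time = warning issuance minus tornado formation. Zero lead time: issued at
# formation; negative lead time: issued after formation but before dissipation;
# unwarned: no warning issued before dissipation. Timestamps are hypothetical.
from datetime import datetime

def classify_lead_time(warn_issue, tor_start, tor_end):
    if warn_issue is None or warn_issue >= tor_end:
        return None, "unwarned"
    lead_min = (warn_issue - tor_start).total_seconds() / 60.0
    if lead_min > 0:
        return lead_min, "positive"
    if lead_min == 0:
        return 0.0, "zero"
    return lead_min, "negative"

lead, category = classify_lead_time(
    warn_issue=datetime(2003, 5, 8, 22, 12),
    tor_start=datetime(2003, 5, 8, 22, 10),
    tor_end=datetime(2003, 5, 8, 22, 40),
)
print(lead, category)   # 2.0 positive
```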


2019 ◽  
Vol 147 (9) ◽  
pp. 3409-3428 ◽  
Author(s):  
Jan-Huey Chen ◽  
Shian-Jiann Lin ◽  
Linjiong Zhou ◽  
Xi Chen ◽  
Shannon Rees ◽  
...  

Abstract A new global model, named fvGFS, was built at GFDL by coupling the GFDL nonhydrostatic Finite-Volume Cubed-Sphere Dynamical Core (FV3) to physical parameterizations from the National Centers for Environmental Prediction’s Global Forecast System (NCEP/GFS). The modern dynamical core, FV3, was selected for the National Oceanic and Atmospheric Administration’s Next Generation Global Prediction System (NGGPS) because of its accuracy, adaptability, and computational efficiency, a choice that brings a great opportunity for the unification of weather and climate prediction systems. The performance of tropical cyclone (TC) forecasts in the 13-km fvGFS is evaluated globally based on 363 daily cases of 10-day forecasts in 2015. Track and intensity errors of TCs in the fvGFS are compared to those in the operational GFS. The fvGFS outperforms the GFS in TC intensity prediction for all basins. For TC track prediction, the fvGFS forecasts are substantially better over the northern Atlantic basin and the northern Pacific Ocean than the GFS forecasts. An updated version of the fvGFS with the GFDL 6-category cloud microphysics scheme is also investigated based on the same 363 cases. With this upgraded microphysics scheme, the fvGFS shows much improvement in TC intensity prediction over the operational GFS. Besides track and intensity forecasts, the performance of TC genesis forecasts is also compared between the fvGFS and the operational GFS. In addition to evaluating hit/false alarm ratios, a novel method is developed to investigate the lengths of TC genesis lead times in the forecasts. Both versions of the fvGFS show higher hit ratios, lower false alarm ratios, and longer genesis lead times than the GFS model in most TC basins.
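For the genesis verification mentioned above, the standard contingency-table definitions can be written compactly; the counts below are hypothetical and only illustrate the hit ratio and false alarm ratio calculations.

```python
# Standard contingency-table verification scores (hits, misses, false alarms);
# the basin-level counts here are hypothetical.
def hit_ratio(hits, misses):
    return hits / (hits + misses) if (hits + misses) else float("nan")

def false_alarm_ratio(hits, false_alarms):
    return false_alarms / (hits + false_alarms) if (hits + false_alarms) else float("nan")

hits, misses, false_alarms = 42, 18, 25
print(f"hit ratio:         {hit_ratio(hits, misses):.2f}")
print(f"false alarm ratio: {false_alarm_ratio(hits, false_alarms):.2f}")
```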


2016 ◽  
Vol 31 (6) ◽  
pp. 1771-1790 ◽  
Author(s):  
Alexandra K. Anderson-Frey ◽  
Yvette P. Richardson ◽  
Andrew R. Dean ◽  
Richard L. Thompson ◽  
Bryan T. Smith

Abstract In this study, a 13-yr climatology of tornado event and warning environments, including metrics of tornado intensity and storm morphology, is investigated with particular focus on the environments of tornadoes associated with quasi-linear convective systems and right-moving supercells. The regions of the environmental parameter space having poor warning performance in various geographical locations, as well as during different times of the day and year, are highlighted. Kernel density estimations of the tornado report and warning environments are produced for two parameter spaces: mixed-layer convective available potential energy (MLCAPE) versus 0–6-km vector shear magnitude (SHR6), and mixed-layer lifting condensation level (MLLCL) versus 0–1-km storm-relative helicity (SRH1). The warning performance is best in environments characteristic of severe convection (i.e., environments featuring large values of MLCAPE and SHR6). For tornadoes occurring during the early evening transition period, MLCAPE is maximized, MLLCL heights decrease, SHR6 and SRH1 increase, tornadoes rated as 2 or greater on the enhanced Fujita scale (EF2+) are most common, the probability of detection is relatively high, and false alarm ratios are relatively low. Overall, the parameter-space distributions of warnings and events are similar; at least in a broad sense, there is no systematic problem with forecasting that explains the high overall false alarm ratio, which instead seems to stem from the inability to know which storms in a given environment will be tornadic.
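A short sketch of a kernel density estimate over one of the parameter spaces named above (MLCAPE versus SHR6); it follows the general technique rather than the authors' configuration, and the sample values are synthetic.

```python
# Kernel density estimate of a two-parameter environmental space (MLCAPE vs SHR6);
# the sample values are synthetic and the bandwidth is scipy's default.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
mlcape = rng.gamma(shape=2.0, scale=800.0, size=2000)   # J/kg, synthetic
shr6 = rng.normal(loc=18.0, scale=6.0, size=2000)       # m/s, synthetic

kde = gaussian_kde(np.vstack([mlcape, shr6]))

# Evaluate the density on a regular grid of the parameter space (e.g., for contouring)
xx, yy = np.meshgrid(np.linspace(0, 5000, 100), np.linspace(0, 40, 100))
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
print(density.shape)   # (100, 100) density surface
```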


2016 ◽  
Vol 32 (1) ◽  
pp. 47-60 ◽  
Author(s):  
David R. Harrison ◽  
Christopher D. Karstens

Abstract This study provides a quantitative climatological analysis of the fundamental geospatial components of storm-based warnings and offers insight into how the National Weather Service (NWS) uses the current storm-based warning system under the established directives and policies. From October 2007 through May 2016, the NWS issued over 500 000 storm-based warnings and severe weather statements (SVSs), primarily concentrated east of the Rocky Mountains. A geospatial analysis of these warning counts by county warning area (CWA) shows local maxima in the lower Mississippi valley, southern plains, central plains, and the southern Appalachians. Regional uniformity exists in the patterns of average speed and direction provided by the time/motion/location tags, while the mean duration and polygon area vary significantly by CWA and region. These observed consistencies and inconsistencies may be indicative of how local weather forecast office (WFO) policy and end-user needs factor into the warning issuance and update process. This research concludes with a comparison of storm-based warnings to NWS policy and an analysis of CWAs with the greatest number of warnings issued during a single convective day.
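The per-CWA aggregation described above amounts to a grouped summary; the sketch below uses a few hypothetical warning records and field names to show the mean duration and mean polygon area by CWA.

```python
# Grouped summary of warning duration and polygon area by CWA;
# the records and column names are hypothetical.
import pandas as pd

warn_df = pd.DataFrame({
    "cwa": ["OUN", "OUN", "BMX", "BMX", "BMX"],
    "duration_min": [45, 30, 45, 60, 45],              # warning valid period
    "polygon_area_km2": [850.0, 620.0, 1100.0, 930.0, 1010.0],
})

summary = warn_df.groupby("cwa").agg(
    n_warnings=("duration_min", "size"),
    mean_duration_min=("duration_min", "mean"),
    mean_area_km2=("polygon_area_km2", "mean"),
)
print(summary)
```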


2009 ◽  
Vol 1 (1) ◽  
pp. 38-53 ◽  
Author(s):  
Kevin M. Simmons ◽  
Daniel Sutter

Abstract This paper extends prior research on the societal value of tornado warnings to the impact of false alarms. Intuition and theory suggest that false alarms will reduce the response to warnings, yet little evidence of a “false alarm effect” has been unearthed. This paper exploits differences in the false-alarm ratio across the United States to test for a false-alarm effect in a regression model of tornado casualties from 1986 to 2004. A statistically significant and large false-alarm effect is found: tornadoes that occur in an area with a higher false-alarm ratio kill and injure more people, everything else being constant. The effect is consistent across false-alarm ratios defined over different geographies and time intervals. A one-standard-deviation increase in the false-alarm ratio increases expected fatalities by between 12% and 29% and increases expected injuries by between 14% and 32%. The reduction in the national tornado false-alarm ratio over the period reduced fatalities by 4%–11% and injuries by 4%–13%. The casualty effects of false alarms and warning lead times are approximately equal in magnitude, suggesting that the National Weather Service could not reduce casualties by trading off a higher probability of detection for a higher false-alarm ratio, or vice versa.
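A sketch in the spirit of the casualty regression described above, not the authors' specification: expected casualties are modeled with a Poisson GLM that includes the local false-alarm ratio as a covariate, and the FAR coefficient is converted into the effect of a one-standard-deviation increase. All data are synthetic.

```python
# Poisson GLM of casualties on the false-alarm ratio (illustrative, synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
far = rng.uniform(0.4, 0.9, n)                    # false-alarm ratio by warning area
log_pop = rng.normal(4.0, 1.0, n)                 # log population density (control)
casualties = rng.poisson(np.exp(-2.0 + 1.5 * far + 0.5 * log_pop))

X = sm.add_constant(np.column_stack([far, log_pop]))
model = sm.GLM(casualties, X, family=sm.families.Poisson()).fit()

# Multiplicative effect of a one-standard-deviation increase in FAR on expected casualties
print(np.exp(model.params[1] * far.std()))
```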


Forecasting ◽  
2021 ◽  
Vol 3 (3) ◽  
pp. 501-516
Author(s):  
Feifei Yang ◽  
Diego Cerrai ◽  
Emmanouil N. Anagnostou

Weather-related power outages affect millions of utility customers every year. Predicting storm outages with lead times of up to five days could help utilities allocate crews and resources and devise cost-effective restoration plans that meet the strict time and efficiency requirements imposed by regulatory authorities. In this study, we construct a numerical experiment to evaluate how weather parameter uncertainty, based on weather forecasts with one to five days of lead time, propagates into outage prediction error. We apply a machine-learning-based outage prediction model to storm-caused outage events that occurred between 2016 and 2019 in the northeastern United States. The model predictions, fed by weather analysis and other environmental parameters including land cover, tree canopy, vegetation characteristics, and utility infrastructure variables, exhibited a mean absolute percentage error of 38%, a Nash–Sutcliffe efficiency of 0.54, and a normalized centered root-mean-square error of 68%. Our numerical experiment demonstrated that uncertainties in the precipitation and wind-gust variables play a significant role in the outage prediction uncertainty, while the sustained wind and temperature parameters play a less important role. We showed that, while the overall weather forecast uncertainty increases gradually with lead time, the corresponding outage prediction uncertainty shows a weaker dependence on lead time up to three days and a stepwise increase at the four- and five-day lead times.
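The three evaluation metrics quoted above can be computed directly; the sketch below applies common definitions of MAPE, Nash–Sutcliffe efficiency, and normalized centered RMSE (the exact normalization used in the paper may differ) to hypothetical outage counts.

```python
# Common definitions of MAPE, Nash-Sutcliffe efficiency, and normalized centered
# RMSE, evaluated on hypothetical observed and predicted outage counts.
import numpy as np

def mape(obs, pred):
    return 100.0 * np.mean(np.abs((pred - obs) / obs))

def nash_sutcliffe(obs, pred):
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def norm_centered_rmse(obs, pred):
    centered = (pred - pred.mean()) - (obs - obs.mean())
    return 100.0 * np.sqrt(np.mean(centered ** 2)) / obs.std()

obs = np.array([120.0, 85.0, 410.0, 60.0, 230.0])     # observed outages per event
pred = np.array([150.0, 70.0, 360.0, 90.0, 250.0])    # predicted outages per event
print(mape(obs, pred), nash_sutcliffe(obs, pred), norm_centered_rmse(obs, pred))
```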


Author(s):  
Gregory J. Stumpf ◽  
Alan E. Gerard

Abstract Threats-in-Motion (TIM) is a warning generation approach that would enable the NWS to advance severe thunderstorm and tornado warnings from the current static polygon system to continuously updating polygons that move forward with a storm. This concept is proposed as a first stage for implementation of the Forecasting a Continuum of Environmental Threats (FACETs) paradigm, which eventually aims to deliver rapidly updating probabilistic hazard information alongside NWS warnings, watches, and other products.

With TIM, a warning polygon is attached to the threat and moves forward along with it. This provides more uniform, or equitable, lead time for all locations downstream of the event. When forecaster workload is high, storms remain continually tracked and warned. TIM mitigates gaps in warning coverage and improves the handling of storm motion changes. In addition, warnings are automatically cleared from locations where the threat has passed. This all results in greater average lead times and lower average departure times than current NWS warnings, with little to no impact on average false alarm time. This is particularly noteworthy for storms expected to live longer than the average warning duration (30 or 45 minutes), such as long-tracked supercells, which are more prevalent during significant tornado outbreaks.
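A conceptual sketch of a translating warning polygon, not the NWS implementation: the polygon is advected with an assumed storm-motion vector so downstream locations gain lead time and trailing areas clear automatically. The polygon coordinates and motion vector are hypothetical.

```python
# Conceptual sketch: advect a warning polygon with an assumed storm motion and
# check when a downstream point of interest enters the warned area.
from shapely.geometry import Point, Polygon
from shapely.affinity import translate

# Hypothetical initial warning polygon (degrees) and storm motion (degrees per minute)
warning = Polygon([(-98.0, 35.0), (-97.6, 35.0), (-97.6, 35.3), (-98.0, 35.3)])
u_deg_per_min, v_deg_per_min = 0.008, 0.004

def warning_at(minutes_elapsed):
    """Warning polygon translated along the storm motion after `minutes_elapsed`."""
    return translate(
        warning,
        xoff=u_deg_per_min * minutes_elapsed,
        yoff=v_deg_per_min * minutes_elapsed,
    )

poi = Point(-97.3, 35.35)                       # hypothetical downstream location
for t in (0, 15, 30, 45):
    print(t, warning_at(t).contains(poi))       # becomes True once the threat nears
```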


Author(s):  
Charlie Kirkwood ◽  
Theo Economou ◽  
Henry Odbert ◽  
Nicolas Pugeault

Forecasting the weather is an increasingly data-intensive exercise. Numerical weather prediction (NWP) models are becoming more complex, with higher resolutions, and there are increasing numbers of different models in operation. While the forecasting skill of NWP models continues to improve, the number and complexity of these models poses a new challenge for the operational meteorologist: how should the information from all available models, each with its own biases and limitations, be combined in order to provide stakeholders with well-calibrated probabilistic forecasts to use in decision making? In this paper, we use a road surface temperature example to demonstrate a three-stage framework that uses machine learning to bridge the gap between sets of separate forecasts from NWP models and the ‘ideal’ forecast for decision support: probabilities of future weather outcomes. First, we use quantile regression forests to learn the error profile of each numerical model, and use these to apply empirically derived probability distributions to forecasts. Second, we combine these probabilistic forecasts using quantile averaging. Third, we interpolate between the aggregate quantiles in order to generate a full predictive distribution, which we demonstrate has properties suitable for decision support. Our results suggest that this approach provides an effective and operationally viable framework for the cohesive post-processing of weather forecasts across multiple models and lead times to produce a well-calibrated probabilistic output. This article is part of the theme issue ‘Machine learning for weather and climate modelling’.
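A sketch of the first two stages under stated substitutions: the paper uses quantile regression forests, whereas this illustration swaps in scikit-learn's gradient boosting with a quantile loss and then averages the per-model quantiles; the predictors and observations are synthetic.

```python
# Stage 1: per-model conditional quantiles (gradient boosting with quantile loss
# stands in for quantile regression forests). Stage 2: quantile averaging.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))                        # NWP-derived predictors (synthetic)
y = 2.0 * X[:, 0] + rng.normal(scale=1.0, size=1000)  # observed road surface temperature (synthetic)

quantiles = [0.1, 0.5, 0.9]

def quantile_forecasts(X_train, y_train, X_new):
    """Learn conditional quantiles of the target given one model's predictors."""
    preds = []
    for q in quantiles:
        gbr = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=100)
        gbr.fit(X_train, y_train)
        preds.append(gbr.predict(X_new))
    return np.array(preds)                 # shape: (n_quantiles, n_forecasts)

# Pretend two NWP models provide slightly different predictor values
model_a = quantile_forecasts(X, y, X[:5])
model_b = quantile_forecasts(X + 0.1, y, X[:5] + 0.1)

# Combine the probabilistic forecasts by quantile averaging
combined = 0.5 * (model_a + model_b)
print(combined.T)   # per-forecast 10th/50th/90th percentiles
```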


2011 ◽  
Vol 26 (4) ◽  
pp. 534-544 ◽  
Author(s):  
J. Brotzge ◽  
S. Erickson ◽  
H. Brooks

Abstract During 2008 approximately 75% of tornado warnings issued by the National Weather Service (NWS) were false alarms. This study investigates some of the climatological trends in the issuance of false alarms and highlights several factors that impact false-alarm ratio (FAR) statistics. All tornadoes and tornado warnings issued across the continental United States between 2000 and 2004 were analyzed, and the data were sorted by hour of the day, month of the year, geographical region and weather forecast office (WFO), the number of tornadoes observed on a day in which a false alarm was issued, distance of the warned area from the nearest NWS radar, county population density, and county area. Analysis of the tornado false-alarm data identified six specific trends. First, the FAR was highest during nonpeak storm periods, such as during the night and during the winter and late summer. Second, the FAR was strongly tied to the number of tornadoes warned per day. Nearly one-third of all false alarms were issued on days when no tornadoes were confirmed within the WFO’s county warning area. Third, the FAR varied with distance from radar, with significantly lower estimates found beyond 150 km from radar. Fourth, the FAR varied with population density. For warnings within 50 km of an NWS radar, FAR increased with population density; however, for warnings beyond 150 km from radar, FAR decreased regardless of population density. Fifth, the FAR also varied as a function of county size. The FAR was generally highest for the smallest counties; the FAR was ~80% for all counties smaller than 1000 km² regardless of distance from radar. Finally, the combined effects of distance from radar, population density, and county size led to significant variability across geographic regions.
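The FAR stratification described above is a grouped ratio, FAR = unverified warnings / total warnings; the sketch below computes it within distance-from-radar bins using synthetic warning records.

```python
# FAR within distance-from-radar bins; the warning records are synthetic.
import pandas as pd

warn_df = pd.DataFrame({
    "dist_from_radar_km": [20, 45, 80, 120, 160, 190, 30, 95, 170, 60],
    "verified": [False, True, False, False, True, True, False, True, True, False],
})

bins = pd.cut(warn_df["dist_from_radar_km"], bins=[0, 50, 100, 150, 200])
far_by_distance = warn_df.groupby(bins, observed=True)["verified"].apply(
    lambda v: 1.0 - v.mean()          # fraction of warnings with no confirmed tornado
)
print(far_by_distance)
```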

