Tornado Warning Decisions Using Phased-Array Radar Data

2015, Vol 30 (1), pp. 57–78
Author(s):  
Pamela Heinselman ◽  
Daphne LaDue ◽  
Darrel M. Kingfield ◽  
Robert Hoffman

Abstract The 2012 Phased Array Radar Innovative Sensing Experiment identified how rapidly scanned full-volumetric data captured known mesoscale processes and impacted tornado-warning lead time. Twelve forecasters from nine National Weather Service forecast offices used this rapid-scan phased-array radar (PAR) data to issue tornado warnings on two low-end tornadic and two nontornadic supercell cases. Verification of the tornadic cases revealed that forecasters’ use of PAR data provided a median tornado-warning lead time (TLT) of 20 min. This 20-min TLT exceeded by 6.5 and 9 min, respectively, the median spring-season, low-end TLTs (2008–13) of participants’ forecast offices and regions. Furthermore, polygon-based probability of detection ranged from 0.75 to 1.0, and probability of false alarm for all four cases ranged from 0.0 to 0.5. Similar performance was observed regardless of prior warning experience. Use of a cognitive task analysis method called the recent case walk-through showed that this performance was due to forecasters’ use of rapid volumetric updates. Warning decisions were based upon the intensity, persistence, and important changes in features aloft that are precursors to tornadogenesis. Precursors that triggered forecasters’ decisions to warn occurred within one or two typical Weather Surveillance Radar-1988 Doppler (WSR-88D) scans, indicating PAR’s temporal sampling better matches the time scale at which these precursors evolve.
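The polygon-based scores quoted above follow the standard contingency-table definitions. The sketch below is a minimal illustration of those definitions, not code from the study; the counts in the example are invented.

```python
# Standard warning-verification scores from contingency counts
# (hedged sketch; names and example counts are illustrative only).

def pod(hits: int, misses: int) -> float:
    """Probability of detection: fraction of events that were warned."""
    return hits / (hits + misses)

def pofa(hits: int, false_alarms: int) -> float:
    """Probability of false alarm: fraction of warnings with no event."""
    return false_alarms / (hits + false_alarms)

# Example: 3 warned tornadoes, 1 missed tornado, 1 unverified warning.
print(pod(3, 1))   # 0.75
print(pofa(3, 1))  # 0.25
```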

2017, Vol 32 (1), pp. 253–274
Author(s):  
Katie A. Wilson ◽  
Pamela L. Heinselman ◽  
Charles M. Kuster ◽  
Darrel M. Kingfield ◽  
Ziho Kang

Abstract Impacts of radar update time on forecasters’ warning decision processes were analyzed in the 2015 Phased Array Radar Innovative Sensing Experiment. Thirty National Weather Service forecasters worked nine archived phased-array radar (PAR) cases in simulated real time. These cases presented nonsevere, severe hail and/or wind, and tornadic events. Forecasters worked each type of event with approximately 5-min (quarter speed), 2-min (half speed), and 1-min (full speed) PAR updates. Warning performance was analyzed with respect to lead time and verification. Combining all cases, forecasters’ median warning lead times when using full-, half-, and quarter-speed PAR updates were 17, 14.5, and 13.6 min, respectively. The use of faster PAR updates also resulted in higher probability of detection and lower false alarm ratio scores. Radar update speed did not impact warning duration or size. Analysis of forecaster performance on a case-by-case basis showed that the impact of PAR update speed varied depending on the situation. This impact was most noticeable during the tornadic cases, where radar update speed positively impacted tornado warning lead time during two supercell events, but not for a short-lived tornado occurring within a bowing line segment. Forecasters’ improved ability to correctly discriminate the severe weather threat during a nontornadic supercell event with faster PAR updates was also demonstrated. Forecasters provided subjective assessments of their cognitive workload in all nine cases. On average, forecasters were not cognitively overloaded, but some participants did experience higher levels of cognitive workload at times. A qualitative explanation of these particular instances is provided.


Author(s):  
Evan S. Bentley ◽  
Richard L. Thompson ◽  
Barry R. Bowers ◽  
Justin G. Gibbs ◽  
Steven E. Nelson

Abstract Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings with comparisons made to the current Warning Decision Training Division (WDTD) guidance. Combining low-level rotational velocity and the significant tornado parameter (STP), as used in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings on weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, by careful elimination of velocity data artifacts like sidelobe contamination, and by greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
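The example criteria above combine a radar threshold with an environmental one. A minimal sketch of that combined test, assuming the 30-kt rotational velocity and STP > 0 thresholds quoted in the abstract (the function name and inputs are illustrative, not from the paper):

```python
# Hedged sketch: flag a circulation as meeting the example warning
# criteria when low-level rotational velocity exceeds 30 kt in an
# environment that is not prohibitive for tornadoes (STP > 0).

def meets_warning_criteria(v_rot_kt: float, stp: float) -> bool:
    """True when both the radar and environmental thresholds are met."""
    return v_rot_kt > 30.0 and stp > 0.0

print(meets_warning_criteria(35.0, 1.2))  # True
print(meets_warning_criteria(25.0, 1.2))  # False: rotation too weak
print(meets_warning_criteria(35.0, 0.0))  # False: prohibitive environment
```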


2015, Vol 30 (2), pp. 389–404
Author(s):  
Katie A. Bowden ◽  
Pamela L. Heinselman ◽  
Darrel M. Kingfield ◽  
Rick P. Thomas

Abstract The ongoing Phased Array Radar Innovative Sensing Experiment (PARISE) investigates the impacts of higher-temporal-resolution radar data on the warning decision process of NWS forecasters. Twelve NWS forecasters participated in the 2013 PARISE and were assigned to either a control (5-min updates) or an experimental (1-min updates) group. Participants worked two case studies in simulated real time. The first case presented a marginally severe hail event, and the second case presented a severe hail and wind event. While working each event, participants made decisions regarding the detection, identification, and reidentification of severe weather. These three levels compose what has now been termed the compound warning decision process. Decisions were verified with respect to the three levels of the compound warning decision process and the experimental group obtained a lower mean false alarm ratio than the control group throughout both cases. The experimental group also obtained a higher mean probability of detection than the control group throughout the first case and at the detection level in the second case. Statistical significance (p value = 0.0252) was established for the difference in median lead times obtained by the experimental (21.5 min) and control (17.3 min) groups. A confidence-based assessment was used to categorize decisions into four types: doubtful, uninformed, misinformed, and mastery. Although mastery (i.e., confident and correct) decisions formed the largest category in both groups, the experimental group had a larger proportion of mastery decisions, possibly because of their enhanced ability to observe and track individual storm characteristics through the use of 1-min updates.
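The confidence-based assessment above bins each decision by whether the forecaster was confident and whether the decision verified. The abstract states only that mastery pairs confidence with correctness; the pairing of the other three labels below follows the usual confidence-based-assessment convention and is an assumption:

```python
# Hedged sketch of the four decision categories. Only "mastery"
# (confident and correct) is defined in the abstract; the remaining
# label assignments are the conventional ones and are assumed here.

def decision_category(confident: bool, correct: bool) -> str:
    if confident:
        return "mastery" if correct else "misinformed"
    return "doubtful" if correct else "uninformed"

print(decision_category(True, True))    # mastery
print(decision_category(False, True))   # doubtful
```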


Author(s):  
Makenzie J. Krocak ◽  
Harold E. Brooks

Abstract While many studies have looked at the quality of forecast products, few have attempted to understand the relationship between them. We begin to consider whether or not such an influence exists by analyzing storm-based tornado warning product metrics with respect to whether they occurred within a severe weather watch and, if so, what type of watch they occurred within. The probability of detection, false alarm ratio, and lead time all show a general improvement with increasing watch severity. In fact, the probability of detection increased more as a function of watch-type severity than it did over the time period of analysis. False alarm ratio decreased as watch type increased in severity, but with a much smaller magnitude than the difference in probability of detection. Lead time also improved with an increase in watch-type severity: warnings outside of any watch had a mean lead time of 5.5 min, while those inside a particularly dangerous situation tornado watch had a mean lead time of 15.1 min. These results indicate that the existence and type of severe weather watch may have an influence on the quality of tornado warnings. However, it is impossible to separate the influence of weather watches from possible differences in warning strategy or differences in environmental characteristics that make it more or less challenging to warn for tornadoes. Future studies should attempt to disentangle these numerous influences to assess how much influence intermediate products have on downstream products.
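The stratification above groups warning lead times by the watch type in effect at issuance. A minimal sketch of that grouping, with invented data values except for the two means quoted in the abstract:

```python
# Illustrative sketch (not the study's code): mean warning lead time
# stratified by watch type. Record values are invented.

from collections import defaultdict
from statistics import mean

def mean_lead_time_by_watch(records):
    """records: iterable of (watch_type, lead_time_minutes) pairs."""
    groups = defaultdict(list)
    for watch_type, lead_time in records:
        groups[watch_type].append(lead_time)
    return {watch: mean(times) for watch, times in groups.items()}

records = [("none", 5.0), ("none", 6.0),
           ("tornado", 12.0),
           ("PDS tornado", 15.0), ("PDS tornado", 15.2)]
print(mean_lead_time_by_watch(records))
```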


2015, Vol 30 (4), pp. 933–956
Author(s):  
Charles M. Kuster ◽  
Pamela L. Heinselman ◽  
Marcus Austin

Abstract On 31 May 2013, a supercell produced a tornado rated as 3 on the enhanced Fujita scale (EF3) near El Reno, Oklahoma, which was sampled by the S-band phased-array radar (PAR) at the National Weather Radar Testbed in Norman, Oklahoma. Collaboration with the forecaster who issued tornado warnings for the El Reno supercell during real-time operations focused the analysis on critical radar signatures frequently assessed during warning operations. The wealth of real-world experience provided by the forecaster, along with the quantitative analysis, highlighted differences between rapid-scan PAR data and the Weather Surveillance Radar-1988 Doppler located near Oklahoma City, Oklahoma (KTLX), within the context of forecast challenges faced on 31 May 2013. The comparison revealed that the 70-s PAR data proved most advantageous to the forecaster’s situational awareness in instances of rapid storm organization, sudden mesocyclone intensification, and abrupt, short-term changes in tornado motion. Situations where PAR data were most advantageous in the depiction of storm-scale processes included 1) rapid variations in mesocyclone intensity and associated changes in inflow magnitude; 2) imminent radar-indicated development of the short-lived (EF0) Calumet, Oklahoma, and long-lived (EF3) El Reno tornadoes; and 3) precise location and motion of the tornado circulation. As a result, it is surmised that rapid-scan volumetric radar data in cases like this would augment a forecaster’s ability to observe rapidly evolving storm features and deliver timely, life-saving information to the general public.


2011, Vol 3 (2), pp. 128–140
Author(s):  
S. Hoekstra ◽  
K. Klockow ◽  
R. Riley ◽  
J. Brotzge ◽  
H. Brooks ◽  
...  

Abstract Tornado warnings are currently issued an average of 13 min in advance of a tornado and are based on a warn-on-detection paradigm. However, computer model improvements may allow for a new warning paradigm, warn-on-forecast, to be established in the future. This would mean that tornado warnings could be issued one to two hours in advance, prior to storm initiation. In anticipation of this technological innovation, this study inquires whether the warn-on-forecast paradigm for tornado warnings may be preferred by the public (i.e., individuals and households). The authors’ sample was drawn from visitors to the National Weather Center in Norman, Oklahoma. During the summer and fall of 2009, surveys were distributed to 320 participants to assess their understanding and perception of weather risks and preferred tornado warning lead time. Responses were analyzed according to several different parameters including age, region of residency, educational level, number of children, and prior tornado experience. A majority of the respondents answered many of the weather risk questions correctly. They seemed to be familiar with tornado seasons; however, they were unaware of the relative number of fatalities caused by tornadoes and several additional weather phenomena each year in the United States. The preferred lead time was 34.3 min according to average survey responses. This suggests that while the general public may currently prefer a longer average lead time than the present system offers, the preference does not extend to the 1–2-h time frame theoretically offered by the warn-on-forecast system. When asked what they would do if given a 1-h lead time, respondents reported that taking shelter was a lesser priority than when given a 15-min lead time, and fleeing the area became a slightly more popular alternative. A majority of respondents also reported the situation would feel less life threatening if given a 1-h lead time.
These results suggest that how the public responds to longer lead times may be complex and situationally dependent, and further study must be conducted to ascertain the users for whom the longer lead times would carry the most value. These results form the basis of an informative stated-preference approach to predicting public response to long (>1 h) warning lead times, using public understanding of the risks posed by severe weather events to contextualize lead-time demand.


2018, Vol 33 (6), pp. 1501–1511
Author(s):  
Harold E. Brooks ◽  
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
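Signal detection theory, as invoked above, separates the quality of the warning system from the threshold for issuing warnings via the sensitivity d′ and the criterion c. The sketch below is the standard SDT formulation, not code from the paper; note it takes the SDT false alarm rate (false alarms over non-events), which differs from the warning-verification false alarm ratio:

```python
# Hedged sketch of the standard signal detection theory decomposition:
# d' measures system quality, c measures the issuance threshold.

from statistics import NormalDist

_z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def sdt_metrics(hit_rate: float, fa_rate: float):
    """Return (d_prime, criterion_c); rates must be strictly in (0, 1)."""
    d_prime = _z(hit_rate) - _z(fa_rate)
    c = -0.5 * (_z(hit_rate) + _z(fa_rate))
    return d_prime, c

d, c = sdt_metrics(0.8, 0.3)
print(round(d, 3), round(c, 3))
```

Lowering the criterion (more negative c) raises both the hit rate and the false alarm rate while d′ stays fixed, which is the trade-off between false alarms and missed detections described in the abstract.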


2009, Vol 24 (1), pp. 140–154
Author(s):  
J. Brotzge ◽  
S. Erickson

Abstract During a 5-yr period of study from 2000 to 2004, slightly more than 10% of all National Weather Service (NWS) tornado warnings were issued either simultaneously as the tornado formed (i.e., with zero lead time) or minutes after initial tornado formation but prior to tornado dissipation (i.e., with “negative” lead time). This study examines why these tornadoes were not warned in advance, and what climate, storm morphology, and sociological factors may have played a role in delaying the issuance of the warning. This dataset of zero and negative lead time warnings is sorted by F-scale rating, geographically by region and weather forecast office (WFO), hour of the day, month of the year, tornado-to-radar distance, county population density, and number of tornadoes by day, hour, and order of occurrence. Two key results from this study are (i) providing advance warning on the first tornado of the day remains a difficult challenge and (ii) the more isolated the tornado event, the lower the likelihood that an advance warning is provided. WFOs that experience many large-scale outbreaks have a lower proportion of warnings with negative lead time than WFOs that experience many more isolated, one-tornado or two-tornado warning days. Monthly and geographic trends in lead time are directly impacted by the number of multiple tornado events. Except for a few isolated cases, the impacts of tornado-to-radar distance, county population density, and storm morphology did not have a significant impact on negative lead-time warnings.
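The lead-time convention above can be made concrete: lead time is the interval from warning issuance to tornado formation, with zero meaning the warning was issued as the tornado formed and negative meaning it was issued after formation. A minimal sketch, with invented timestamps:

```python
# Hedged sketch of the zero/negative lead-time convention described
# above. Timestamps in the example are invented.

from datetime import datetime

def lead_time_minutes(warning_issued: datetime,
                      tornado_formed: datetime) -> float:
    """Positive when the warning precedes tornado formation."""
    return (tornado_formed - warning_issued).total_seconds() / 60.0

def lead_time_class(lead_min: float) -> str:
    if lead_min > 0:
        return "advance"
    return "zero" if lead_min == 0 else "negative"

issued = datetime(2004, 5, 10, 21, 5)
formed = datetime(2004, 5, 10, 21, 0)  # warning came 5 min too late
print(lead_time_minutes(issued, formed), lead_time_class(-5.0))  # -5.0 negative
```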


2020, Vol 35 (5), pp. 1939–1965
Author(s):  
Dylan Steinkruger ◽  
Paul Markowski ◽  
George Young

Abstract The utility of employing artificial intelligence (AI) to issue tornado warnings is explored using an ensemble of 128 idealized simulations. Over 700 tornadoes develop within the ensemble of simulations, varying in duration, length, and associated storm mode. Machine-learning models are trained to forecast the temporal and spatial probabilities of tornado formation for a specific lead time. The machine-learning probabilities are used to produce tornado warning decisions for each grid point and lead time. An optimization function is defined, such that warning thresholds are modified to optimize the performance of the AI system on a specified metric (e.g., increased lead time, minimized false alarms, etc.). Using genetic algorithms, multiple AI systems are developed with different optimization functions. The different AI systems yield unique warning output depending on the desired attributes of the optimization function. The effects of the different optimization functions on warning performance are explored. Overall, performance is encouraging and suggests that automated tornado warning guidance is worth exploring with real-time data.
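The core idea above is using a genetic algorithm to tune a warning threshold against a chosen metric. A toy sketch of that idea, assuming a single scalar probability threshold and critical success index as the fitness; the synthetic cases are invented, and the paper's actual system optimizes gridpoint decisions, not a single threshold:

```python
# Toy genetic-algorithm sketch (not the paper's system): evolve a
# probability threshold that maximizes critical success index on
# synthetic (probability, tornado-occurred) cases.

import random

random.seed(0)

cases = []
for _ in range(500):
    p = random.random()
    cases.append((p, random.random() < p))  # occurrence correlated with p

def fitness(threshold: float) -> float:
    """Critical success index when warning whenever probability >= threshold."""
    hits = sum(1 for p, occ in cases if p >= threshold and occ)
    misses = sum(1 for p, occ in cases if p < threshold and occ)
    false_alarms = sum(1 for p, occ in cases if p >= threshold and not occ)
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

def evolve(generations=30, pop_size=20, mutation=0.05):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fittest half
        children = [
            # crossover (blend two parents) plus Gaussian mutation
            min(1.0, max(0.0,
                0.5 * (random.choice(parents) + random.choice(parents))
                + random.gauss(0.0, mutation)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(best, 2), round(fitness(best), 3))
```

Swapping the fitness function (e.g., penalizing false alarms more heavily, or rewarding lead time) yields a different optimized system, which is the trade-off the abstract explores.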


2010, Vol 2010, pp. 1–14
Author(s):  
Qin Xu ◽  
Li Wei ◽  
Wei Gu ◽  
Jiandong Gong ◽  
Qingyun Zhao

A 3.5-dimensional variational method is developed for Doppler radar data assimilation. In this method, incremental analyses are performed in three steps to update the model state upon the background state provided by the model prediction. First, radar radial-velocity observations from three consecutive volume scans are analyzed on the model grid. The analyzed radial-velocity fields are then used in step 2 to produce incremental analyses for the vector velocity fields at two time levels between the three volume scans. The analyzed vector velocity fields are used in step 3 to produce incremental analyses for the thermodynamic fields at the central time level, accompanied by adjustments in water vapor and hydrometeor mixing ratios based on radar reflectivity observations. The finite element B-spline representations and recursive filter are used to reduce the dimension of the analysis space and enhance the computational efficiency. The method is applied to a squall line case observed by the phased-array radar with rapid volume scans at the National Weather Radar Testbed and is shown to be effective in assimilating the phased-array radar observations and improving the prediction of the subsequent evolution of the squall line.

