An Artificially Intelligent System for the Automated Issuance of Tornado Warnings in Simulated Convective Storms

2020, Vol. 35 (5), pp. 1939–1965
Author(s): Dylan Steinkruger, Paul Markowski, George Young

Abstract The utility of employing artificial intelligence (AI) to issue tornado warnings is explored using an ensemble of 128 idealized simulations. Over 700 tornadoes develop within the ensemble of simulations, varying in duration, length, and associated storm mode. Machine-learning models are trained to forecast the temporal and spatial probabilities of tornado formation for a specific lead time. The machine-learning probabilities are used to produce tornado warning decisions for each grid point and lead time. An optimization function is defined such that warning thresholds are modified to optimize the performance of the AI system on a specified metric (e.g., increased lead time or minimized false alarms). Using genetic algorithms, multiple AI systems are developed with different optimization functions. The different AI systems yield unique warning output depending on the desired attributes of the optimization function. The effects of the different optimization functions on warning performance are explored. Overall, performance is encouraging and suggests that automated tornado warning guidance is worth exploring with real-time data.
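
To make the threshold-optimization step concrete, below is a stripped-down evolutionary search (selection plus Gaussian mutation, no crossover) that tunes a single warning threshold against a POD/FAR-based score. The toy probability data, the fitness function, and all parameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of evolutionary threshold tuning for automated warning
# issuance. The toy "verification" data and fitness weights are invented.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: ML tornado probabilities and whether a tornado verified.
probs = rng.random(5000)
tornado = rng.random(5000) < probs * 0.3  # higher prob -> more likely event

def fitness(threshold, fa_penalty=0.5):
    """Score a warning threshold: reward detections, penalize false alarms."""
    warned = probs >= threshold
    hits = np.sum(warned & tornado)
    false_alarms = np.sum(warned & ~tornado)
    misses = np.sum(~warned & tornado)
    pod = hits / max(hits + misses, 1)
    far = false_alarms / max(hits + false_alarms, 1)
    return pod - fa_penalty * far  # the "optimization function" to maximize

# Generational search over scalar thresholds in [0, 1].
pop = rng.random(50)
for generation in range(40):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]                        # keep fittest
    children = rng.choice(parents, 40) + rng.normal(0, 0.02, 40)   # mutate
    pop = np.clip(np.concatenate([parents, children]), 0, 1)

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"tuned warning threshold: {best:.3f}")
```

Changing `fa_penalty` reweights the trade-off, mimicking how different optimization functions yield different warning behavior.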

2011, Vol. 3 (2), pp. 128–140
Author(s): S. Hoekstra, K. Klockow, R. Riley, J. Brotzge, H. Brooks, ...

Abstract Tornado warnings are currently issued an average of 13 min in advance of a tornado and are based on a warn-on-detection paradigm. However, computer model improvements may allow a new warning paradigm, warn-on-forecast, to be established in the future. This would mean that tornado warnings could be issued one to two hours in advance, prior to storm initiation. In anticipation of this technological innovation, this study asks whether the warn-on-forecast paradigm for tornado warnings may be preferred by the public (i.e., individuals and households). The authors' sample is drawn from visitors to the National Weather Center in Norman, Oklahoma. During the summer and fall of 2009, surveys were distributed to 320 participants to assess their understanding and perception of weather risks and their preferred tornado warning lead time. Responses were analyzed according to several parameters, including age, region of residency, educational level, number of children, and prior tornado experience. A majority of the respondents answered many of the weather risk questions correctly. They seemed to be familiar with tornado seasons; however, they were unaware of the relative number of fatalities caused by tornadoes and several other weather phenomena each year in the United States. The preferred lead time was 34.3 min according to average survey responses. This suggests that while the general public may currently prefer a longer average lead time than the present system offers, the preference does not extend to the 1–2-h time frame theoretically offered by the warn-on-forecast system. When asked what they would do if given a 1-h lead time, respondents reported that taking shelter was a lesser priority than when given a 15-min lead time, and fleeing the area became a slightly more popular alternative. A majority of respondents also reported that the situation would feel less life threatening given a 1-h lead time. These results suggest that public response to longer lead times may be complex and situationally dependent, and further study must be conducted to ascertain the users for whom longer lead times would carry the most value. These results form the basis of an informative stated-preference approach to predicting public response to long (>1 h) warning lead times, using public understanding of the risks posed by severe weather events to contextualize lead-time demand.


Author(s): Evan S. Bentley, Richard L. Thompson, Barry R. Bowers, Justin G. Gibbs, Steven E. Nelson

Abstract Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings, with comparisons made to the current Warning Decision Training Division (WDTD) guidance. Combining low-level rotational velocity and the significant tornado parameter (STP), as in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings on weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, by carefully eliminating velocity data artifacts such as sidelobe contamination, and through greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
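
As a concrete reading of the two-parameter criterion above, the following sketch encodes the warn/no-warn rule (rotational velocity above 30 kt in an STP > 0 environment). The function name and example values are hypothetical; the thresholds come from the abstract.

```python
# Sketch of the two-parameter warning criterion: warn when low-level
# rotational velocity exceeds 30 kt in an environment with STP > 0.
def warn_decision(v_rot_kt: float, stp: float,
                  v_rot_min: float = 30.0, stp_min: float = 0.0) -> bool:
    """Return True if the radar/environment pair meets the warning criteria."""
    return v_rot_kt > v_rot_min and stp > stp_min

# Example: a 35-kt circulation in a marginal (STP = 0.5) environment.
print(warn_decision(v_rot_kt=35.0, stp=0.5))   # True -> warn
print(warn_decision(v_rot_kt=25.0, stp=0.0))   # False -> withhold warning
```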


2018, Vol. 33 (6), pp. 1501–1511
Author(s): Harold E. Brooks, James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance took place in 2012, when the default warning duration decreased and there was an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decreased in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for shifts in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
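
For reference, the detection metrics tracked in this time series follow from a standard 2 × 2 warning-verification contingency table. A minimal sketch, with invented counts:

```python
# Standard warning-verification metrics computed from a contingency
# table of hits, misses, and false alarms. The counts are illustrative.
def verification_metrics(hits: int, misses: int, false_alarms: int):
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return pod, far

pod, far = verification_metrics(hits=800, misses=250, false_alarms=1900)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```

Raising the warning threshold lowers both FAR and POD at once, which is the trade-off the signal detection framing makes explicit.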


2009, Vol. 24 (1), pp. 140–154
Author(s): J. Brotzge, S. Erickson

Abstract During a 5-yr period of study from 2000 to 2004, slightly more than 10% of all National Weather Service (NWS) tornado warnings were issued either simultaneously as the tornado formed (i.e., with zero lead time) or minutes after initial tornado formation but prior to tornado dissipation (i.e., with “negative” lead time). This study examines why these tornadoes were not warned in advance and what climate, storm morphology, and sociological factors may have played a role in delaying the issuance of the warning. This dataset of zero and negative lead-time warnings is sorted by F-scale rating, geographically by region and weather forecast office (WFO), hour of the day, month of the year, tornado-to-radar distance, county population density, and number of tornadoes by day, hour, and order of occurrence. Two key results from this study are (i) providing advance warning on the first tornado of the day remains a difficult challenge and (ii) the more isolated the tornado event, the lower the likelihood that an advance warning is provided. WFOs that experience many large-scale outbreaks have a lower proportion of warnings with negative lead time than WFOs that experience many more isolated, one- or two-tornado warning days. Monthly and geographic trends in lead time are directly impacted by the number of multiple-tornado events. Except for a few isolated cases, tornado-to-radar distance, county population density, and storm morphology did not have a significant impact on negative lead-time warnings.
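
The zero and negative lead-time categories at the heart of this dataset follow directly from warning issuance and tornado formation times. A small sketch, with hypothetical timestamps:

```python
# Sketch of lead-time binning: warnings issued after tornado formation
# have "negative" lead time; those issued at formation have zero lead time.
from datetime import datetime

def lead_time_minutes(warning_issued: datetime, tornado_formed: datetime) -> float:
    return (tornado_formed - warning_issued).total_seconds() / 60.0

def lead_time_category(lead_min: float) -> str:
    if lead_min < 0:
        return "negative lead time"   # warned after formation
    if lead_min == 0:
        return "zero lead time"       # warned as the tornado formed
    return "advance warning"

lt = lead_time_minutes(warning_issued=datetime(2004, 5, 29, 22, 10),
                       tornado_formed=datetime(2004, 5, 29, 22, 4))
print(lt, lead_time_category(lt))   # -6.0 negative lead time
```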


2015, Vol. 30 (1), pp. 57–78
Author(s): Pamela Heinselman, Daphne LaDue, Darrel M. Kingfield, Robert Hoffman

Abstract The 2012 Phased Array Radar Innovative Sensing Experiment identified how rapidly scanned full-volumetric data captured known mesoscale processes and impacted tornado-warning lead time. Twelve forecasters from nine National Weather Service forecast offices used this rapid-scan phased-array radar (PAR) data to issue tornado warnings on two low-end tornadic and two nontornadic supercell cases. Verification of the tornadic cases revealed that forecasters’ use of PAR data provided a median tornado-warning lead time (TLT) of 20 min, exceeding participants’ forecast office and regional median spring-season, low-end TLTs (2008–13) by 6.5 and 9 min, respectively. Furthermore, polygon-based probability of detection ranged from 0.75 to 1.0, and probability of false alarm for all four cases ranged from 0.0 to 0.5. Similar performance was observed regardless of prior warning experience. Use of a cognitive task analysis method called the recent case walk-through showed that this performance was due to forecasters’ use of rapid volumetric updates. Warning decisions were based upon the intensity, persistence, and important changes in features aloft that are precursors to tornadogenesis. Precursors that triggered forecasters’ decisions to warn occurred within one or two typical Weather Surveillance Radar-1988 Doppler (WSR-88D) scans, indicating that PAR’s temporal sampling better matches the time scale at which these precursors evolve.


Author(s): Makenzie J. Krocak, Harold E. Brooks

Abstract While many studies have looked at the quality of forecast products, few have attempted to understand the relationship between them. We begin to consider whether such an influence exists by analyzing storm-based tornado warning product metrics with respect to whether they occurred within a severe weather watch and, if so, what type of watch they occurred within. The probability of detection, false alarm ratio, and lead time all show a general improvement with increasing watch severity. In fact, the probability of detection increased more as a function of watch-type severity than it did over the time period of analysis. The false alarm ratio decreased as watch type increased in severity, but with a much smaller magnitude than the difference in probability of detection. Lead time also improved with an increase in watch-type severity: warnings outside of any watch had a mean lead time of 5.5 min, while those inside a particularly dangerous situation (PDS) tornado watch had a mean lead time of 15.1 min. These results indicate that the existence and type of severe weather watch may influence the quality of tornado warnings. However, it is impossible to separate the influence of weather watches from possible differences in warning strategy or in environmental characteristics that make it more or less challenging to warn for tornadoes. Future studies should attempt to disentangle these numerous influences to assess how much influence intermediate products have on downstream products.
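
A sketch of the kind of watch-type stratification described above, grouping warning records by the watch in effect and comparing simple quality measures across groups. The records and column names are invented for illustration:

```python
# Group tornado warnings by watch type and compare metrics per group.
# "verified" marks warnings associated with an actual tornado, so its
# mean within a group is 1 minus the false alarm ratio.
import pandas as pd

warn_df = pd.DataFrame({
    "watch_type": ["none", "severe thunderstorm", "tornado", "PDS tornado",
                   "none", "tornado", "PDS tornado", "tornado"],
    "verified": [False, True, True, True, False, True, True, False],
    "lead_time_min": [0.0, 8.0, 12.0, 16.0, 5.0, 11.0, 14.0, 0.0],
})

summary = warn_df.groupby("watch_type").agg(
    n_warnings=("verified", "size"),
    verified_frac=("verified", "mean"),       # 1 - false alarm ratio
    mean_lead_time=("lead_time_min", "mean"),
)
print(summary)
```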


2020, pp. 1–11
Author(s): Jie Liu, Lin Lin, Xiufang Liang

Online English teaching systems place specific demands on intelligent scoring, and the most difficult stage of intelligent scoring in English testing is scoring compositions with an intelligent model. To improve the intelligence of English composition scoring, this study builds on machine learning algorithms and combines them with intelligent image recognition technology, proposing an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. To verify that the proposed algorithm model meets the requirements of the task, that is, to verify its feasibility, the performance of the model is analyzed through designed experiments, with the basic conditions for composition scoring input into the model as constraints. The results show that the proposed algorithm has practical value and can be applied to English assessment systems and online homework evaluation systems.
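
As a rough illustration of the character candidate extraction stage, here is a sketch using OpenCV's stock MSER detector. The image path is hypothetical, and the paper's improved MSER variant and CNN pseudo-character filter are not reproduced; a crude geometric filter stands in for the CNN stage.

```python
# Sketch of MSER-based character-candidate extraction with OpenCV.
import cv2

image = cv2.imread("composition_scan.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("expected a scanned composition image")

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(image)

# Keep roughly character-shaped boxes; a trained CNN would then reject
# the remaining pseudo-character regions.
candidates = [(x, y, w, h) for (x, y, w, h) in bboxes
              if 0.1 < w / h < 10 and h > 8]
print(f"{len(candidates)} character candidate regions")
```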


2021, Vol. 10 (4), pp. 199
Author(s): Francisco M. Bellas Aláez, Jesus M. Torres Palenzuela, Evangelos Spyrakos, Luis González Vilas

This work presents new prediction models based on recent developments in machine learning methods, such as Random Forest (RF) and AdaBoost, and compares them with more classical approaches, i.e., support vector machines (SVMs) and neural networks (NNs). The models predict Pseudo-nitzschia spp. blooms in the Galician Rias Baixas. This work builds on a previous study by the authors (doi.org/10.1016/j.pocean.2014.03.003) but uses an extended database (from 2002 to 2012) and new algorithms. Our results show that RF and AdaBoost provide better prediction results than SVMs and NNs, with improved performance metrics and a better balance between sensitivity and specificity. The classical approaches show higher sensitivities, but at the cost of lower specificity and higher percentages of false alarms (lower precision). These results suggest that the newer algorithms (RF and AdaBoost) adapt better to unbalanced datasets. Our models could be operationally implemented to establish a short-term prediction system.
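
A minimal sketch of this kind of model comparison on synthetic, imbalanced data standing in for the bloom/no-bloom records. Scikit-learn defaults are used, and the feature count and class balance are assumptions:

```python
# Compare RF, AdaBoost, and SVM on imbalanced synthetic data, reporting
# the sensitivity/specificity balance emphasized in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)  # bloom events are the rare class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("AdaBoost", AdaBoostClassifier(random_state=0)),
                    ("SVM", SVC())]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    sens = recall_score(y_te, pred)               # sensitivity (bloom class)
    spec = recall_score(y_te, pred, pos_label=0)  # specificity (no-bloom)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```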

