Object-Based Verification of Short-Term, Storm-Scale Probabilistic Mesocyclone Guidance from an Experimental Warn-on-Forecast System

2019 ◽ Vol 34 (6) ◽ pp. 1721–1739
Author(s): Montgomery L. Flora, Patrick S. Skinner, Corey K. Potvin, Anthony E. Reinhart, Thomas A. Jones, ...

Abstract An object-based verification method for short-term, storm-scale probabilistic forecasts was developed and applied to mesocyclone guidance produced by the experimental Warn-on-Forecast System (WoFS) in 63 cases from 2017 to 2018. The probabilistic mesocyclone guidance was generated by calculating gridscale ensemble probabilities from WoFS forecasts of updraft helicity (UH) in layers 2–5 km (midlevel) and 0–2 km (low-level) above ground level (AGL) aggregated over 60-min periods. The resulting ensemble probability swaths are associated with individual thunderstorms and treated as objects with a single, representative probability value prescribed. A mesocyclone probability object, conceptually, is a region bounded by the ensemble forecast envelope of a mesocyclone track for a given thunderstorm over 1 h. The mesocyclone probability objects were matched against rotation track objects in Multi-Radar Multi-Sensor data using the total interest score, but with the maximum displacement varied between 0, 9, 15, and 30 km. Forecast accuracy and reliability were assessed at four different forecast lead time periods: 0–60, 30–90, 60–120, and 90–150 min. In the 0–60-min forecast period, the low-level UH probabilistic forecasts had a POD, FAR, and CSI of 0.46, 0.45, and 0.31, respectively, with a probability threshold of 22.2% (the threshold of maximum CSI). In the 90–150-min forecast period, the POD and CSI dropped to 0.39 and 0.27 while FAR remained relatively unchanged. Forecast probabilities > 60% overpredicted the likelihood of observed mesocyclones in the 0–60-min period; however, reliability improved when allowing larger maximum displacements for object matching and at longer lead times.
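The accuracy statistics above follow from a standard object-matching contingency table. The sketch below is a minimal illustration of how POD, FAR, and CSI are computed from matched probability and rotation-track objects; the function name and the hit, miss, and false-alarm counts are hypothetical and are not taken from the paper.

```python
# Minimal sketch: contingency-table verification scores for matched
# forecast/observed objects. Counts below are hypothetical, for illustration only.

def contingency_scores(hits: int, misses: int, false_alarms: int):
    """Return POD, FAR, and CSI from object-matching counts."""
    pod = hits / (hits + misses) if (hits + misses) else float("nan")
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else float("nan")
    csi = hits / (hits + misses + false_alarms) if (hits + misses + false_alarms) else float("nan")
    return pod, far, csi

# Hypothetical object counts at a single probability threshold
pod, far, csi = contingency_scores(hits=46, misses=54, false_alarms=38)
print(f"POD={pod:.2f}, FAR={far:.2f}, CSI={csi:.2f}")
```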

2016 ◽ Vol 31 (3) ◽ pp. 957–983
Author(s): Nusrat Yussouf, John S. Kain, Adam J. Clark

Abstract A continuous-update-cycle storm-scale ensemble data assimilation (DA) and prediction system using the ARW model and DART software is used to generate retrospective 0–6-h ensemble forecasts of the 31 May 2013 tornado and flash flood event over central Oklahoma, with a focus on the prediction of heavy rainfall. Results indicate that the model-predicted probabilities of strong low-level mesocyclones correspond well with the locations of observed mesocyclones and with the observed damage track. The ensemble-mean quantitative precipitation forecasts (QPF) from the radar DA experiments match NCEP’s stage IV analyses reasonably well in terms of location and amount of rainfall, particularly during the 0–3-h forecast period. In contrast, significant displacement errors and lower rainfall totals are evident in a control experiment that withholds radar data during the DA. The ensemble-derived probabilistic QPF (PQPF) from the radar DA experiment is more skillful than the PQPF from the no_radar experiment, based on visual inspection and probabilistic verification metrics. A novel object-based storm-tracking algorithm provides additional insight, suggesting that explicit assimilation and 1–2-h prediction of the dominant supercell are remarkably skillful in the radar experiment. The skill in both experiments is substantially higher during the 0–3-h forecast period than in the 3–6-h period. Furthermore, the difference in skill between the two forecasts decreases sharply during the latter period, indicating that the impact of radar DA is greatest during early forecast hours. Overall, the results demonstrate the potential for a frequently updated, high-resolution ensemble system to extend probabilistic low-level mesocyclone and flash flood forecast lead times and improve accuracy of convective precipitation nowcasting.
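As a rough illustration of ensemble-derived PQPF and one common probabilistic verification metric, the sketch below computes gridpoint exceedance probabilities from a synthetic ensemble and scores them with a Brier score against a synthetic analysis grid; the 25-mm threshold, array shapes, and the choice of Brier score are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch: ensemble-derived PQPF and a Brier-score check against an
# analysis grid. Thresholds, shapes, and metric choice are illustrative only.
import numpy as np

def pqpf(ens_qpf: np.ndarray, threshold_mm: float) -> np.ndarray:
    """Gridpoint probability that ensemble QPF exceeds threshold_mm.
    ens_qpf has shape (n_members, ny, nx)."""
    return (ens_qpf >= threshold_mm).mean(axis=0)

def brier_score(prob: np.ndarray, obs_qpf: np.ndarray, threshold_mm: float) -> float:
    """Mean squared error of forecast probabilities against the binary observed event."""
    event = (obs_qpf >= threshold_mm).astype(float)
    return float(np.mean((prob - event) ** 2))

# Toy random fields standing in for the forecast ensemble and the analysis grid
rng = np.random.default_rng(0)
ens = rng.gamma(shape=2.0, scale=5.0, size=(36, 120, 120))   # 36-member ensemble QPF (mm)
obs = rng.gamma(shape=2.0, scale=5.0, size=(120, 120))       # analysis QPF (mm)
p25 = pqpf(ens, threshold_mm=25.0)
print("Brier score at 25 mm:", round(brier_score(p25, obs, 25.0), 4))
```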


2021 ◽ Vol 36 (1) ◽ pp. 21–37
Author(s): Christopher A. Kerr, Louis J. Wicker, Patrick S. Skinner

Abstract The Warn-on-Forecast System (WoFS) provides short-term, probabilistic forecasts of severe convective hazards including tornadoes, hail, and damaging winds. WoFS initial conditions are created through frequent assimilation of radar (reflectivity and radial velocity), satellite, and in situ observations. From 2016 to 2018, 5-km radial velocity Cressman superob analyses were created to reduce the observation counts and subsequent assimilation computational costs. The superobbing procedure smooths the radial velocity and subsequently fails to accurately depict important storm-scale features such as mesocyclones. This study retrospectively assimilates denser, 3-km radial velocity analyses in lieu of the 5-km analyses for eight case studies during the spring of 2018. Although there are forecast improvements during and shortly after convection initiation, 3-km analyses negatively impact forecasts initialized when convection is ongoing, as evidenced by model failure and initiation of spurious convection. Therefore, two additional experiments are performed using adaptive assimilation of 3-km radial velocity observations. In the first, an updraft variance mask is applied that limits radial velocity assimilation to areas where the observations are more likely to be beneficial. This experiment reduces spurious convection as well as the number of observations assimilated, in some cases even below that of the 5-km analysis experiments. The masking, however, eliminates an advantage of 3-km radial velocity assimilation for convection initiation timing. This problem is mitigated by additionally assimilating 3-km radial velocity observations in locations where large differences exist between the observed and ensemble-mean reflectivity fields, which retains the benefits of the denser radial velocity analyses while reducing the number of observations assimilated.
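The adaptive-assimilation idea can be summarized as a simple observation-selection mask. The sketch below keeps 3-km radial velocity observations only where ensemble updraft variance is large or where the observed and ensemble-mean reflectivity differ strongly; the threshold values, array names, and synthetic fields are illustrative assumptions, not actual WoFS settings or MRMS data.

```python
# Minimal sketch of an adaptive observation-selection mask for radial velocity:
# keep observations near active updrafts or where forecast and observed
# reflectivity disagree strongly. Thresholds and fields are illustrative only.
import numpy as np

def adaptive_vr_mask(w_variance, refl_obs, refl_ens_mean,
                     w_var_thresh=1.0, refl_diff_thresh=20.0):
    """Boolean grid: True where radial velocity observations should be assimilated."""
    updraft_mask = w_variance >= w_var_thresh                               # near convective updrafts
    refl_diff_mask = np.abs(refl_obs - refl_ens_mean) >= refl_diff_thresh   # convection-initiation regions
    return updraft_mask | refl_diff_mask

# Toy (ny, nx) fields; in practice these come from the ensemble and observed grids
rng = np.random.default_rng(1)
mask = adaptive_vr_mask(rng.gamma(1.0, 1.0, (100, 100)),
                        rng.uniform(0, 60, (100, 100)),
                        rng.uniform(0, 60, (100, 100)))
print("Fraction of grid where Vr observations are kept:", round(float(mask.mean()), 3))
```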


2020 ◽ Vol 148 (5) ◽ pp. 2135–2161
Author(s): Aaron J. Hill, Gregory R. Herman, Russ S. Schumacher

Abstract Using nine years of historical forecasts spanning April 2003–April 2012 from NOAA’s Second Generation Global Ensemble Forecast System Reforecast (GEFS/R) ensemble, random forest (RF) models are trained to make probabilistic predictions of severe weather across the contiguous United States (CONUS) at Days 1–3, with separate models for tornado, hail, and severe wind prediction at Day 1 in an analogous fashion to the Storm Prediction Center’s (SPC’s) convective outlooks. Separate models are also trained for the western, central, and eastern CONUS. Input predictors include fields associated with severe weather prediction, including CAPE, CIN, wind shear, and numerous other variables. Predictor inputs incorporate the simulated spatiotemporal evolution of these atmospheric fields throughout the forecast period in the vicinity of the forecast point. These trained RF models are applied to unseen inputs from April 2012 to December 2016, and their forecasts are evaluated alongside the equivalent SPC outlooks. The RFs objectively make statistical deductions about the relationships between various simulated atmospheric fields and observations of different severe weather phenomena that accord with the community’s physical understanding of severe weather forecasting. Using these quantified flow-dependent relationships, the RF outlooks are found to produce calibrated probabilistic forecasts that slightly underperform SPC outlooks at Day 1, but significantly outperform the SPC outlooks at Days 2 and 3. In all cases, a blend of the SPC and RF outlooks significantly outperforms the SPC outlooks alone, suggesting that use of RFs can improve operational severe weather forecasting throughout the Day 1–3 period.
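A minimal sketch of the general approach, training a random forest on environment predictors to output event probabilities, is shown below; the synthetic CAPE, CIN, and shear predictors and the synthetic labels are placeholders and do not represent the GEFS/R inputs, the storm-report labels, or the model configuration used in the paper.

```python
# Minimal sketch: a random forest producing probabilistic severe-weather forecasts
# from environment predictors. All data here are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 800.0, n),      # CAPE (J kg-1), synthetic
    -rng.gamma(1.5, 40.0, n),      # CIN (J kg-1), synthetic
    rng.normal(15.0, 8.0, n),      # 0-6-km shear (m s-1), synthetic
])
# Synthetic labels: severe weather more likely with higher CAPE and shear
logit = 0.0008 * X[:, 0] + 0.05 * X[:, 2] - 3.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
rf.fit(X[:4000], y[:4000])                 # train on the first 4000 cases
probs = rf.predict_proba(X[4000:])[:, 1]   # probabilistic forecasts on held-out cases
print("Mean forecast probability:", round(float(probs.mean()), 3))
```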


2012 ◽ Vol 140 (2) ◽ pp. 696–716
Author(s): Daniel T. Dawson II, Louis J. Wicker, Edward R. Mansell, Robin L. Tanamachi

The early tornadic phase of the Greensburg, Kansas, supercell on the evening of 4 May 2007 is simulated using a set of storm-scale (1-km horizontal grid spacing) 30-member ensemble Kalman filter (EnKF) data assimilation and forecast experiments. The Next Generation Weather Radar (NEXRAD) level-II radar data from the Dodge City, Kansas (KDDC), Weather Surveillance Radar-1988 Doppler (WSR-88D) are assimilated into the National Severe Storms Laboratory (NSSL) Collaborative Model for Multiscale Atmospheric Simulation (COMMAS). The initially horizontally homogeneous environments are initialized from one of three reconstructed soundings representative of the early tornadic phase of the storm, when a low-level jet (LLJ) was intensifying. To isolate the impact of the low-level wind profile, 0–3.5-km AGL wind profiles from Vance Air Force Base, Oklahoma (KVNX), WSR-88D velocity-azimuth display (VAD) analyses at 0130, 0200, and 0230 UTC are used. A sophisticated, double-moment bulk ice microphysics scheme is employed. For each of the three soundings, ensemble forecast experiments are initiated from EnKF analyses at various times prior to and shortly after the genesis of the Greensburg tornado (0200 UTC). Probabilistic forecasts of the mesocyclone-scale circulation(s) are generated and compared to the observed Greensburg tornado track. Probabilistic measures of significant rotation and observation-space diagnostic statistics are also calculated. It is shown that, in general, the track of the Greensburg tornado is well predicted, and forecasts improve as forecast lead time decreases. Significant variability is also seen across the experiments using different VAD wind profiles. Implications of these results regarding the choice of initial mesoscale environment, as well as for the “Warn-on-Forecast” paradigm for probabilistic numerical prediction of severe thunderstorms and tornadoes, are discussed.
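As an illustration of how such probabilistic circulation guidance can be derived from an ensemble, the sketch below converts member forecasts of track-maximum vertical vorticity into gridpoint exceedance probabilities; the 0.01 s-1 threshold, 30-member toy ensemble, and array layout are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch: ensemble probability of significant rotation, computed as the
# fraction of members exceeding a vorticity threshold at each grid point.
import numpy as np

def ensemble_rotation_probability(vort: np.ndarray, thresh: float = 0.01) -> np.ndarray:
    """Fraction of members whose track-maximum vorticity exceeds thresh.
    vort has shape (n_members, ny, nx) and holds forecast-period maxima."""
    return (vort >= thresh).mean(axis=0)

# Toy 30-member ensemble of track-maximum vorticity (s-1) on a small grid
rng = np.random.default_rng(7)
vort_max = rng.gamma(shape=2.0, scale=0.004, size=(30, 80, 80))
prob = ensemble_rotation_probability(vort_max)
print("Grid maximum rotation probability:", float(prob.max()))
```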


2021 ◽ Vol 2 (4) ◽ pp. 42–48
Author(s): S. V. Zaytsev

In March 2018, the European Commission presented a proposal to adopt a digital services tax (DST) on certain types of revenues of multinational digital companies. The purpose of the digital services tax is to compensate in the short term for the low level of corporate taxation of these companies in the European Union and thus meet the urgent need of civil society for greater tax fairness. DST is presented as an indirect tax on turnover and is often compared to value-added tax (VAT). In this article, the author seeks to highlight the many differences that exist between the harmonized European Union VAT and the new DST. In addition, the author challenges the idea that the DST will actually be an indirect tax and, most importantly, that it will effectively increase tax justice in the European Union.


2018 ◽  
Vol 36 (6) ◽  
pp. 1114-1134 ◽  
Author(s):  
Xiufeng Cheng ◽  
Jinqing Yang ◽  
Lixin Xia

Purpose: This paper aims to propose an extensible, service-oriented framework for context-aware data acquisition, description, interpretation and reasoning, which facilitates the development of mobile applications that provide a context-awareness service.
Design/methodology/approach: First, the authors propose the context data reasoning framework (CDRFM) for generating service-oriented contextual information. They then use this framework to composite mobile sensor data into low-level contextual information. Finally, the authors derive high-level contextual information from the formatted low-level contextual information using particular inference rules.
Findings: The authors take “user behavior patterns” as an exemplary context-information generation schema in their experimental study. The results reveal that service optimization can be guided by the implicit, high-level context information inside user behavior logs, and they demonstrate the validity of the framework.
Research limitations/implications: Further research will add a greater variety of sensor data. Furthermore, validating the effectiveness of the framework requires evaluating more reasoning rules, so the authors may implement more algorithms in the framework to acquire more comprehensive context information.
Practical implications: CDRFM expands the context-awareness frameworks of previous research and unifies the procedures of acquiring, describing, modeling, reasoning and discovering implicit context information for mobile service providers.
Social implications: The framework supports service-oriented context awareness in application design and related development in the commercial mobile software industry.
Originality/value: Extant research on context awareness has rarely considered generating contextual information for service providers. CDRFM can be used to generate valuable contextual information by implementing more reasoning rules.
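As a loose illustration of rule-based inference from low-level to high-level context, the sketch below applies simple, invented rules to a formatted sensor-context record; the rule conditions, labels, and field names are hypothetical and are not taken from the paper.

```python
# Minimal sketch: deriving high-level context labels from low-level contextual
# information via simple inference rules. Rules and fields are invented examples.
from typing import Callable, Dict, List

LowLevelContext = Dict[str, object]
Rule = Callable[[LowLevelContext], str]

def commuting_rule(ctx: LowLevelContext) -> str:
    if ctx.get("speed_kmh", 0) > 20 and ctx.get("hour") in range(7, 10):
        return "commuting"
    return ""

def resting_rule(ctx: LowLevelContext) -> str:
    if ctx.get("speed_kmh", 0) < 1 and ctx.get("ambient_light") == "dark":
        return "resting"
    return ""

def infer_high_level(ctx: LowLevelContext, rules: List[Rule]) -> List[str]:
    """Apply each inference rule and collect the non-empty high-level labels."""
    return [label for rule in rules if (label := rule(ctx))]

low_level = {"speed_kmh": 35, "hour": 8, "ambient_light": "bright"}
print(infer_high_level(low_level, [commuting_rule, resting_rule]))  # ['commuting']
```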

