Application of Two Spatial Verification Methods to Ensemble Forecasts of Low-Level Rotation

2016 ◽  
Vol 31 (3) ◽  
pp. 713-735 ◽  
Author(s):  
Patrick S. Skinner ◽  
Louis J. Wicker ◽  
Dustan M. Wheatley ◽  
Kent H. Knopfmeier

Abstract Two spatial verification methods are applied to ensemble forecasts of low-level rotation in supercells: a four-dimensional, object-based matching algorithm and the displacement and amplitude score (DAS) based on optical flow. Ensemble forecasts of low-level rotation produced using the National Severe Storms Laboratory (NSSL) Experimental Warn-on-Forecast System are verified against WSR-88D single-Doppler azimuthal wind shear values interpolated to the model grid. Verification techniques are demonstrated using four 60-min forecasts issued at 15-min intervals in the hour preceding development of the 20 May 2013 Moore, Oklahoma, tornado and compared to results from two additional forecasts of tornadic supercells occurring during the springs of 2013 and 2014. The object-based verification technique and displacement component of DAS are found to reproduce subjectively determined forecast characteristics in successive forecasts for the 20 May 2013 event, as well as to discriminate in subjective forecast quality between different events. Ensemble-mean, object-based measures quantify spatial and temporal displacement, as well as storm motion biases in predicted low-level rotation in a manner consistent with subjective interpretation. Neither method produces useful measures of the intensity of low-level rotation, owing to deficiencies in the verification dataset and forecast resolution.

2014 ◽  
Vol 29 (6) ◽  
pp. 1451-1472 ◽  
Author(s):  
Jamie K. Wolff ◽  
Michelle Harrold ◽  
Tressa Fowler ◽  
John Halley Gotway ◽  
Louisa Nance ◽  
...  

Abstract While traditional verification methods are commonly used to assess numerical model quantitative precipitation forecasts (QPFs) using a grid-to-grid approach, they generally offer little diagnostic information or reasoning behind the computed statistic. On the other hand, advanced spatial verification techniques, such as neighborhood and object-based methods, can provide more meaningful insight into differences between forecast and observed features in terms of skill with spatial scale, coverage area, displacement, orientation, and intensity. To demonstrate the utility of applying advanced verification techniques to mid- and coarse-resolution models, the Developmental Testbed Center (DTC) applied several traditional metrics and spatial verification techniques to QPFs provided by the Global Forecast System (GFS) and operational North American Mesoscale Model (NAM). Along with frequency bias and Gilbert skill score (GSS) adjusted for bias, both the fractions skill score (FSS) and Method for Object-Based Diagnostic Evaluation (MODE) were utilized for this study with careful consideration given to how these methods were applied and how the results were interpreted. By illustrating the types of forecast attributes appropriate to assess with the spatial verification techniques, this paper provides examples of how to obtain advanced diagnostic information to help identify what aspects of the forecast are or are not performing well.
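The fractions skill score mentioned above compares neighborhood fractional coverage of threshold exceedance rather than point-by-point matches. A minimal numpy-only sketch (illustrative function names, not the MET implementation; a square box neighborhood of width n is assumed):

```python
import numpy as np

def neighborhood_fraction(binary, n):
    """Fraction of points exceeding the threshold in an n x n box
    centered on each grid point (zero padding at the edges)."""
    r = n // 2
    padded = np.pad(binary.astype(float), r)
    out = np.zeros(binary.shape)
    for di in range(n):
        for dj in range(n):
            out += padded[di:di + binary.shape[0], dj:dj + binary.shape[1]]
    return out / (n * n)

def fss(fcst, obs, threshold, n):
    """Fractions skill score: 1 - MSE of the fraction fields over a
    no-skill reference; 1 is perfect, 0 is no skill."""
    pf = neighborhood_fraction(fcst >= threshold, n)
    po = neighborhood_fraction(obs >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan
```

A displaced but otherwise correct feature scores poorly at n = 1 (grid scale) and improves as the neighborhood widens, which is exactly the scale-dependent skill the text describes.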


2019 ◽  
Vol 34 (3) ◽  
pp. 603-615
Author(s):  
Jun Du ◽  
Binbin Zhou ◽  
Jason Levit

Abstract Responding to the call for new verification methods in a recent editorial in Weather and Forecasting, this study proposed two new verification metrics to quantify the forecast challenges that a user faces in decision-making when using ensemble models. The measure of forecast challenge (MFC) combines forecast error and uncertainty information into a single score. It consists of four elements: ensemble mean error, spread, nonlinearity, and outliers. The cross correlation among the four elements indicates that each element contains independent information. The relative contribution of each element to the MFC is analyzed by calculating the correlation between each element and MFC. The biggest contributor is the ensemble mean error, followed by the ensemble spread, nonlinearity, and outliers. By applying MFC to the predictability horizon diagram of a forecast ensemble, a predictability horizon diagram index (PHDX) is defined to quantify how the ensemble evolves at a specific location as an event approaches. The value of PHDX varies between 1.0 and −1.0. A positive PHDX indicates that the forecast challenge decreases as an event nears (type I), providing credible forecast information to users. A negative PHDX value indicates that the forecast challenge increases as an event nears (type II), providing misleading information to users. A near-zero PHDX value indicates that the forecast challenge remains large as an event nears, providing largely uncertain information to users. Unlike current verification metrics that verify at a particular point in time, PHDX verifies a forecasting process through many forecasting cycles. Forecasting-process-oriented verification could be a new direction in model verification. The sample ensemble forecasts used in this study are produced from the NCEP global and regional ensembles.
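The abstract names the four MFC ingredients but not their exact definitions or weights, so the following is only an illustrative stand-in: plausible proxies for each element and a hypothetical equal-weight combination, not the formula of Du et al.

```python
import numpy as np

def forecast_challenge_elements(members, control, obs):
    """Illustrative proxies for the four MFC elements (the paper's
    actual definitions are not given in the abstract)."""
    members = np.asarray(members, dtype=float)
    ens_mean = members.mean()
    mean_error = abs(ens_mean - obs)            # ensemble mean error
    spread = members.std()                      # ensemble spread
    nonlinearity = abs(ens_mean - control)      # mean vs. control difference
    # outlier proxy: how far the observation falls outside the envelope
    outlier = max(0.0, members.min() - obs, obs - members.max())
    return mean_error, spread, nonlinearity, outlier

def mfc(members, control, obs, weights=(1.0, 1.0, 1.0, 1.0)):
    """Hypothetical equal-weight combination of the four elements."""
    return float(np.dot(weights, forecast_challenge_elements(members, control, obs)))
```

With this sketch, a perfect zero-spread ensemble scores 0, and the score grows with error, spread, and observations escaping the ensemble envelope, matching the intuition of "forecast challenge" described above.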


2021 ◽  
Vol 36 (1) ◽  
pp. 3-19
Author(s):  
Burkely T. Gallo ◽  
Jamie K. Wolff ◽  
Adam J. Clark ◽  
Israel Jirak ◽  
Lindsay R. Blank ◽  
...  

Abstract Verification methods for convection-allowing models (CAMs) should consider the finescale spatial and temporal detail provided by CAMs, and including both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency table–based verification techniques and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Optimizing these parameters for the area under the receiver operating characteristic curve results in very high probabilities, while optimizing them for the reliability component of the Brier score results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the U.S. Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative to maintaining high forecast quality and informing future implementation decisions.
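The surrogate severe procedure described above can be sketched as: threshold the UH field, then smooth the binary exceedance grid with a Gaussian kernel whose width is the smoothing parameter. A numpy-only illustration (operational practice typically regrids UH exceedances to a coarser verification grid first; that step, and the exact kernel, are omitted/assumed here):

```python
import numpy as np

def gaussian_smooth(field, sigma):
    """Separable Gaussian smoothing, kernel truncated at 3 sigma."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, field)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)
    return out

def surrogate_severe(uh, uh_threshold, sigma):
    """Surrogate severe probabilities: binary UH exceedances smoothed
    into a probability-like field. uh_threshold and sigma are the two
    tunable parameters noted in the abstract."""
    exceed = (uh >= uh_threshold).astype(float)
    return gaussian_smooth(exceed, sigma)
```

Raising the UH threshold shrinks the exceedance area and lowers the probabilities, which is the trade-off between ROC-area-optimized (high) and reliability-optimized (low) probabilities discussed above.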


2021 ◽  
Vol 4 ◽  
pp. 30-49
Author(s):  
A.Yu. Bundel ◽  
A.V. Muraviev ◽  
E.D. Olkhovaya ◽  
...  

State-of-the-art high-resolution NWP models simulate mesoscale systems in great detail, with large amplitudes and strong gradients in the fields of weather variables. Higher resolution leads to spatial and temporal error growth and to the well-known double-penalty problem. To address this problem, spatial verification methods have been developed over the last two decades; they forgive moderate errors (especially in position) while still evaluating the useful skill of a high-resolution model. The paper refers to the updated classification of spatial verification methods, briefly describes the main methods, and gives an overview of the international projects for intercomparison of the methods. Special attention is given to the application of the spatial approach to ensemble forecasting. Popular software packages are considered. Russian translations are proposed for the relevant English terms. Keywords: high-resolution models, verification, double penalty, spatial methods, ensemble forecasting, object-based methods


2018 ◽  
Vol 33 (4) ◽  
pp. 1001-1020 ◽  
Author(s):  
Sabine Radanovics ◽  
Jean-Philippe Vidal ◽  
Eric Sauquet

Abstract Spatial verification methods able to handle high-resolution ensemble forecasts and analysis ensembles are increasingly required as such ensembles become more widely developed. An ensemble extension of the structure–amplitude–location (SAL) spatial verification method is proposed here. The ensemble SAL (eSAL) allows for verifying ensemble forecasts against a deterministic or ensemble analysis. The eSAL components are equal to those of SAL in the deterministic case, thus allowing the comparison of deterministic and ensemble forecasts. The Mesoscale Verification Intercomparison over Complex Terrain (MesoVICT) project provides a dataset containing deterministic and ensemble precipitation forecasts as well as a deterministic and ensemble analysis for case studies in summer 2007 over the greater Alpine region. These datasets allow for testing of the sensitivity of SAL and eSAL to analysis uncertainty and their suitability for the verification of ensemble forecasts. Their sensitivity with respect to the main parameter of this feature-based method—the threshold for defining precipitation features—is furthermore tested for both the deterministic and ensemble forecasts. Our results stress the importance of using meaningful thresholds in order to limit any unstable behavior of the threshold-dependent SAL components. The eSAL components are typically close to the median of the distribution of deterministic SAL components calculated for all combinations of ensemble members of the forecast and the analysis, with considerably less computational time. The eSAL ensemble extension of SAL can be considered a relevant summary measure that leads to more easily interpretable SAL diagrams.
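The amplitude component of SAL has a standard closed form: the difference of domain-mean precipitation normalized by its average, bounded in [−2, 2]. The ensemble pooling below is only one plausible reading of "treating the ensemble as a whole" (the actual eSAL definitions are in the paper); the structure and location components, which need feature identification, are omitted.

```python
import numpy as np

def amplitude_component(fcst_mean, obs_mean):
    """SAL amplitude A = (F - O) / (0.5 * (F + O)), bounded in [-2, 2],
    where F and O are domain-mean precipitation."""
    denom = 0.5 * (fcst_mean + obs_mean)
    return (fcst_mean - obs_mean) / denom if denom > 0 else 0.0

def esal_amplitude(ens_fields, analysis_fields):
    """Sketch of an ensemble amplitude: pool domain means over the
    forecast ensemble and the analysis ensemble before forming the
    ratio. With one member on each side this reduces to plain SAL,
    mirroring the deterministic-case equality noted in the abstract."""
    f = np.mean([np.asarray(m).mean() for m in ens_fields])
    o = np.mean([np.asarray(a).mean() for a in analysis_fields])
    return amplitude_component(f, o)
```

Pooling first is far cheaper than evaluating SAL for every forecast-member/analysis-member pair, which is the computational saving the abstract highlights.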


2018 ◽  
Vol 99 (9) ◽  
pp. 1887-1906 ◽  
Author(s):  
Manfred Dorninger ◽  
Eric Gilleland ◽  
Barbara Casati ◽  
Marion P. Mittermaier ◽  
Elizabeth E. Ebert ◽  
...  

Abstract Recent advancements in numerical weather prediction (NWP) and the enhancement of model resolution have created the need for more robust and informative verification methods. In response to these needs, a plethora of spatial verification approaches have been developed in the past two decades. A spatial verification method intercomparison was established in 2007 with the aim of gaining a better understanding of the abilities of the new spatial verification methods to diagnose different types of forecast errors. The project focused on prescribed errors for quantitative precipitation forecasts over the central United States. The intercomparison led to a classification of spatial verification methods and a cataloging of their diagnostic capabilities, providing useful guidance to end users, model developers, and verification scientists. A decade later, NWP systems have continued to increase in resolution, including advances in high-resolution ensembles. This article describes the setup of a second phase of the verification intercomparison, called the Mesoscale Verification Intercomparison over Complex Terrain (MesoVICT). MesoVICT focuses on the application, capability, and enhancement of spatial verification methods to deterministic and ensemble forecasts of precipitation, wind, and temperature over complex terrain. Importantly, this phase also explores the issue of analysis uncertainty through the use of an ensemble of meteorological analyses.


2020 ◽  
Author(s):  
Marion Mittermaier ◽  
Rachel North ◽  
Christine Pequignet ◽  
Jan Maksymczuk

HiVE is a CMEMS-funded collaboration between the atmospheric Numerical Weather Prediction (NWP) verification and ocean communities within the Met Office, aimed at demonstrating the use of spatial verification methods, originally developed for the evaluation of high-resolution NWP forecasts, on CMEMS ocean model forecast products. Spatial verification methods provide more scale-appropriate ways to assess the characteristics and accuracy of km-scale forecasts, where the detail looks realistic but may not be in the right place at the right time. As a result, coarser-resolution forecasts can verify better (e.g., lower root-mean-square error) than higher-resolution forecasts: the smoothness of the coarser forecast is rewarded even though the higher-resolution forecast may be better. The project utilised the open-source code library known as Model Evaluation Tools (MET), developed at the US National Center for Atmospheric Research (NCAR).

This project saw, for the first time, the application of spatial verification methods to sub-10 km resolution ocean model forecasts. The project consisted of two parts. Part 1 is described in the companion poster to this one. Part 2 describes the skill of CMEMS products for forecasting events or features of interest such as algal blooms.

The Method for Object-based Diagnostic Evaluation (MODE) and its time-dimension version, MODE Time Domain (MTD), were applied to daily mean chlorophyll forecasts for the European North West Shelf from the FOAM-ERSEM model on the AMM7 grid. The forecasts are produced from a "cold start", i.e. no data assimilation of biological variables. The entire 2019 algal bloom season was analysed to understand intensity and spatial (area) biases as well as location and timing errors. Forecasts were compared to the CMEMS daily cloud-free (L4) multi-sensor chlorophyll-a product.

Large differences were found between forecast and observed concentrations of chlorophyll, so a quantile mapping approach was needed to remove the bias before analysing the spatial properties of the forecast. Despite this, the model still produces areas of chlorophyll that are too large compared to the observed. The model often produces areas of enhanced chlorophyll in approximately the right locations, but the forecast and observed areas are rarely collocated and/or overlapping. Finally, the temporal analysis shows that the model struggled with the onset of the season (being close to a month too late), but once the model picked up the signal there was better correspondence between the observed and forecast chlorophyll peaks for the remainder of the season. There was very little variation in forecast performance with lead time, suggesting that chlorophyll is a very slowly varying quantity.

Comparing an analysis that included the assimilation of observed chlorophyll shows that it is much closer to the observed L4 product than the non-biological assimilative analysis. It must be concluded that if the forecast were started from a DA analysis that included chlorophyll, it would lead to forecasts with less bias, and possibly better detection of the onset of the bloom.
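The bias-removal step above can be illustrated with generic empirical quantile mapping (a sketch of the standard technique, not necessarily the authors' exact procedure): each forecast value is assigned the observed climatological value at the same quantile.

```python
import numpy as np

def quantile_map(fcst, fcst_clim, obs_clim):
    """Empirical quantile mapping: look up each forecast value's quantile
    in the forecast climatology, then return the observed-climatology
    value at that same quantile."""
    fcst = np.asarray(fcst, dtype=float)
    q = np.interp(fcst, np.sort(fcst_clim), np.linspace(0.0, 1.0, fcst_clim.size))
    return np.interp(q, np.linspace(0.0, 1.0, obs_clim.size), np.sort(obs_clim))
```

Mapping a systematically high forecast climatology onto the observed one removes the mean bias while preserving the ranking of values, so the subsequent MODE/MTD analysis of object areas and locations is not dominated by the amplitude error.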


Author(s):  
Pierre-Loïc Garoche

The verification of control system software is critical to a host of technologies and industries, from aeronautics and medical technology to the cars we drive. The failure of controller software can cost people their lives. This book provides control engineers and computer scientists with an introduction to the formal techniques for analyzing and verifying this important class of software. Too often, control engineers are unaware of the issues surrounding the verification of software, while computer scientists tend to be unfamiliar with the specificities of controller software. The book provides a unified approach that is geared to graduate students in both fields, covering formal verification methods as well as the design and verification of controllers. It presents a wealth of new verification techniques for performing exhaustive analysis of controller software. These include new means to compute nonlinear invariants, the use of convex optimization tools, and methods for dealing with numerical imprecisions such as floating point computations occurring in the analyzed software. As the autonomy of critical systems continues to increase—as evidenced by autonomous cars, drones, and satellites and landers—the numerical functions in these systems are growing ever more advanced. The techniques presented here are essential to support the formal analysis of the controller software being used in these new and emerging technologies.


2017 ◽  
Vol 145 (6) ◽  
pp. 2257-2279 ◽  
Author(s):  
Bryan J. Putnam ◽  
Ming Xue ◽  
Youngsun Jung ◽  
Nathan A. Snook ◽  
Guifu Zhang

Abstract Ensemble-based probabilistic forecasts are performed for a mesoscale convective system (MCS) that occurred over Oklahoma on 8–9 May 2007, initialized from ensemble Kalman filter analyses using multinetwork radar data and different microphysics schemes. Two experiments are conducted, using either a single-moment or double-moment microphysics scheme during the 1-h-long assimilation period and in subsequent 3-h ensemble forecasts. Qualitative and quantitative verifications are performed on the ensemble forecasts, including probabilistic skill scores. The predicted dual-polarization (dual-pol) radar variables and their probabilistic forecasts are also evaluated against available dual-pol radar observations, and discussed in relation to predicted microphysical states and structures. Evaluation of predicted reflectivity (Z) fields shows that the double-moment ensemble predicts the precipitation coverage of the leading convective line and stratiform precipitation regions of the MCS with higher probabilities throughout the forecast period compared to the single-moment ensemble. In terms of the simulated differential reflectivity (ZDR) and specific differential phase (KDP) fields, the double-moment ensemble compares more realistically to the observations and better distinguishes the stratiform and convective precipitation regions. The ZDR from individual ensemble members indicates better raindrop size sorting along the leading convective line in the double-moment ensemble. Various commonly used ensemble forecast verification methods are examined for the prediction of dual-pol variables. The results demonstrate the challenges associated with verifying predicted dual-pol fields that can vary significantly in value over small distances. Several microphysics biases are noted with the help of simulated dual-pol variables, such as substantial overprediction of KDP values in the single-moment ensemble.


2015 ◽  
Vol 57 ◽  
Author(s):  
Andre Kristofer Pattantyus ◽  
Steven Businger

Deterministic model forecasts do not convey to end users the forecast uncertainty the models possess as a result of physics parameterizations, simplifications in model representation of physical processes, and errors in initial conditions. This leaves a level of uncertainty in the forecast value when only a single deterministic model forecast is available. Increasing computational power and parallel software architectures allow multiple simulations to be carried out simultaneously, yielding useful measures of model uncertainty derived from ensemble model results. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model has the ability to generate ensemble forecasts. A meteorological ensemble was formed to create probabilistic forecast products and an ensemble mean forecast for volcanic emissions from the Kilauea volcano that impact the state of Hawai'i. The probabilistic forecast products show uncertainty in pollutant concentrations that is especially useful for decision-making regarding public health. Initial comparison of the ensemble mean forecasts with observations and a single model forecast shows improvements in event timing for both sulfur dioxide and sulfate aerosol forecasts.
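The probabilistic products described above reduce, in their simplest form, to counting ensemble members. A minimal sketch (generic, not the authors' processing chain): the exceedance probability at a point is the fraction of members whose forecast concentration exceeds a health-relevant threshold, alongside the ensemble mean.

```python
import numpy as np

def exceedance_probability(members, threshold):
    """Fraction of ensemble members forecasting a concentration at or
    above the threshold; this is the basic probabilistic product."""
    m = np.asarray(members, dtype=float)
    return float((m >= threshold).mean())

def ensemble_mean(members):
    """Ensemble mean forecast at a point."""
    return float(np.asarray(members, dtype=float).mean())
```

For a public-health application the threshold would be an air-quality standard for SO2 or sulfate aerosol; the probability field then maps directly onto a decision rule ("act if p exceeds 0.5", say).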

