Spatial Verification of Ensemble Precipitation: An Ensemble Version of SAL

2018 ◽  
Vol 33 (4) ◽  
pp. 1001-1020 ◽  
Author(s):  
Sabine Radanovics ◽  
Jean-Philippe Vidal ◽  
Eric Sauquet

Abstract Spatial verification methods able to handle high-resolution ensemble forecasts and analysis ensembles are increasingly required as such ensembles become more widely developed. An ensemble extension of the structure–amplitude–location (SAL) spatial verification method is proposed here. The ensemble SAL (eSAL) allows for verifying ensemble forecasts against a deterministic or ensemble analysis. The eSAL components are equal to those of SAL in the deterministic case, thus allowing the comparison of deterministic and ensemble forecasts. The Mesoscale Verification Intercomparison over Complex Terrain (MesoVICT) project provides a dataset containing deterministic and ensemble precipitation forecasts as well as a deterministic and ensemble analysis for case studies in summer 2007 over the greater Alpine region. These datasets allow for testing of the sensitivity of SAL and eSAL to analysis uncertainty and their suitability for the verification of ensemble forecasts. Their sensitivity with respect to the main parameter of this feature-based method—the threshold for defining precipitation features—is furthermore tested for both the deterministic and ensemble forecasts. Our results stress the importance of using meaningful thresholds in order to limit any unstable behavior of the threshold-dependent SAL components. The eSAL components are typically close to the median of the distribution of deterministic SAL components calculated for all combinations of ensemble members of the forecast and the analysis, at considerably lower computational cost. The eSAL extension can be considered a relevant summary measure that leads to more easily interpretable SAL diagrams.
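The amplitude and location ideas behind SAL can be illustrated with a minimal sketch (the function names and the simple centre-of-mass location term are illustrative; the full SAL method also includes a structure component and a second location term based on distances between precipitation objects):

```python
import numpy as np

def sal_amplitude(forecast, observation):
    """Amplitude component A: normalised difference of domain-mean
    precipitation, bounded in [-2, 2]; 0 means perfect amplitude."""
    d_mod = forecast.mean()
    d_obs = observation.mean()
    return 2.0 * (d_mod - d_obs) / (d_mod + d_obs)

def sal_location_l1(forecast, observation, max_dist):
    """First location term L1: distance between the centres of mass of
    the two fields, normalised by the largest distance in the domain."""
    def centre_of_mass(field):
        total = field.sum()
        ny, nx = field.shape
        ys, xs = np.mgrid[0:ny, 0:nx]
        return np.array([(ys * field).sum() / total,
                         (xs * field).sum() / total])
    dist = np.linalg.norm(centre_of_mass(forecast) - centre_of_mass(observation))
    return dist / max_dist

# Toy fields: the forecast object is displaced and slightly too intense
obs = np.zeros((50, 50)); obs[20:25, 20:25] = 1.0
fcst = np.zeros((50, 50)); fcst[25:30, 25:30] = 1.2
A = sal_amplitude(fcst, obs)            # > 0: forecast too wet
L1 = sal_location_l1(fcst, obs, max_dist=np.hypot(50, 50))
```

A perfect forecast gives A = 0 and L1 = 0; here the amplitude bias and the diagonal displacement both show up as positive components.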

2018 ◽  
Vol 99 (9) ◽  
pp. 1887-1906 ◽  
Author(s):  
Manfred Dorninger ◽  
Eric Gilleland ◽  
Barbara Casati ◽  
Marion P. Mittermaier ◽  
Elizabeth E. Ebert ◽  
...  

Abstract Recent advancements in numerical weather prediction (NWP) and the enhancement of model resolution have created the need for more robust and informative verification methods. In response to these needs, a plethora of spatial verification approaches have been developed in the past two decades. A spatial verification method intercomparison was established in 2007 with the aim of gaining a better understanding of the abilities of the new spatial verification methods to diagnose different types of forecast errors. The project focused on prescribed errors for quantitative precipitation forecasts over the central United States. The intercomparison led to a classification of spatial verification methods and a cataloging of their diagnostic capabilities, providing useful guidance to end users, model developers, and verification scientists. A decade later, NWP systems have continued to increase in resolution, including advances in high-resolution ensembles. This article describes the setup of a second phase of the verification intercomparison, called the Mesoscale Verification Intercomparison over Complex Terrain (MesoVICT). MesoVICT focuses on the application, capability, and enhancement of spatial verification methods to deterministic and ensemble forecasts of precipitation, wind, and temperature over complex terrain. Importantly, this phase also explores the issue of analysis uncertainty through the use of an ensemble of meteorological analyses.


2016 ◽  
Vol 31 (3) ◽  
pp. 713-735 ◽  
Author(s):  
Patrick S. Skinner ◽  
Louis J. Wicker ◽  
Dustan M. Wheatley ◽  
Kent H. Knopfmeier

Abstract Two spatial verification methods are applied to ensemble forecasts of low-level rotation in supercells: a four-dimensional, object-based matching algorithm and the displacement and amplitude score (DAS) based on optical flow. Ensemble forecasts of low-level rotation produced using the National Severe Storms Laboratory (NSSL) Experimental Warn-on-Forecast System are verified against WSR-88D single-Doppler azimuthal wind shear values interpolated to the model grid. Verification techniques are demonstrated using four 60-min forecasts issued at 15-min intervals in the hour preceding development of the 20 May 2013 Moore, Oklahoma, tornado and compared to results from two additional forecasts of tornadic supercells occurring during the springs of 2013 and 2014. The object-based verification technique and displacement component of DAS are found to reproduce subjectively determined forecast characteristics in successive forecasts for the 20 May 2013 event, as well as to discriminate in subjective forecast quality between different events. Ensemble-mean, object-based measures quantify spatial and temporal displacement, as well as storm motion biases in predicted low-level rotation in a manner consistent with subjective interpretation. Neither method produces useful measures of the intensity of low-level rotation, owing to deficiencies in the verification dataset and forecast resolution.
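The paper's four-dimensional object-matching algorithm is considerably more involved, but its basic first step, identifying contiguous objects by threshold exceedance, can be sketched as follows (a hypothetical flood-fill labeller, not the authors' code):

```python
import numpy as np
from collections import deque

def label_objects(field, threshold):
    """Identify contiguous objects by flood fill: grid points above
    `threshold` that touch via a 4-point neighbourhood share a label."""
    mask = field > threshold
    labels = np.zeros(field.shape, dtype=int)
    n_objects = 0
    ny, nx = field.shape
    for sy in range(ny):
        for sx in range(nx):
            if mask[sy, sx] and labels[sy, sx] == 0:
                n_objects += 1
                labels[sy, sx] = n_objects
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        py, px = y + dy, x + dx
                        if (0 <= py < ny and 0 <= px < nx
                                and mask[py, px] and labels[py, px] == 0):
                            labels[py, px] = n_objects
                            queue.append((py, px))
    return labels, n_objects

# Two separate regions of strong azimuthal wind shear in a toy field
field = np.zeros((20, 20))
field[2:5, 2:5] = 10.0
field[10:14, 12:15] = 8.0
labels, n = label_objects(field, threshold=5.0)
```

Matching labelled forecast objects to observed ones (in space and time) is where displacement and timing errors of the kind discussed above are quantified.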


2008 ◽  
Vol 15 (1) ◽  
pp. 125-143 ◽  
Author(s):  
Chiara Marsigli ◽  
Andrea Montani ◽  
Tiziana Paccagnella

2009 ◽  
Vol 24 (6) ◽  
pp. 1472-1484 ◽  
Author(s):  
Heini Wernli ◽  
Christiane Hofmann ◽  
Matthias Zimmer

Abstract In this study, a recently introduced feature-based quality measure called SAL, which provides information about the structure, amplitude, and location of a quantitative precipitation forecast (QPF) in a prespecified domain, is applied to different sets of synthetic and realistic QPFs in the United States. The focus is on a detailed discussion of selected cases and on the comparison of the verification results obtained with SAL and some classical gridpoint-based error measures. For simple geometric precipitation objects it is shown that SAL adequately captures errors in the size and location of the objects, but not in their orientation. The artificially modified (so-called fake) cases illustrate that SAL has the potential to distinguish between forecasts where intense precipitation objects are either isolated or embedded in a larger-scale low-intensity precipitation area. The real cases highlight that a quality assessment with SAL can lead to contrasting results compared to the application of classical error measures and that, overall, SAL provides useful guidance for identifying the specific shortcomings of a particular QPF. It is also discussed that verification results with SAL and other error measures should be interpreted with care if considering large domains, which may contain meteorologically distinct precipitation systems.


2020 ◽  
Author(s):  
Marion Mittermaier ◽  
Rachel North ◽  
Jan Maksymczuk ◽  
Christine Pequignet ◽  
David Ford

Abstract. A feature-based verification method, commonly used for atmospheric model applications, has been applied to Chlorophyll-a (Chl-a) concentration forecasts from the Met Office Atlantic Margin Model at 7 km resolution (AMM7) North West European Shelf Seas model, and compared against gridded satellite observations of Chl-a concentration from the Copernicus Marine Environmental Monitoring Service (CMEMS) catalogue. A significant concentration bias was found between the model and observations. Two variants of quantile mapping were used to mitigate the impact of this bias on feature identification (determined by threshold exceedance). Forecast and observed Chl-a objects for the 2019 bloom season (1 March to 31 July) were analysed, firstly in space only, and secondly as space-time objects, incorporating concepts of onset, duration and demise. It was found that forecast objects tend to be too large spatially, with lower object numbers produced by the forecasts compared to those observed. Based on an analysis of the space-time objects, the onset of Chl-a blooming episodes at the start of the season is almost a month too late in the forecasts, whilst several forecast blooms did not materialise in the observations. Whilst the model does produce blooms in the right places, they may not be at the right time. There was very little variation in forecasts and results as a function of lead time. A pre-operational AMM7 analysis, which assimilates Chl-a concentrations, was also assessed and found to behave more like the observations, suggesting that forecasts driven from these analyses could improve both the timing errors and the bias.
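Quantile mapping of the kind used to mitigate the concentration bias can be sketched as follows (a generic empirical variant with illustrative names; the paper's two variants may differ in detail):

```python
import numpy as np

def empirical_quantile_map(model_sample, obs_sample, values):
    """Map `values` from the model's climatological distribution onto the
    observed one: find each value's quantile within the model sample,
    then read off the same quantile of the observed sample."""
    model_sorted = np.sort(model_sample)
    obs_sorted = np.sort(obs_sample)
    # Empirical quantile of each value within the model distribution
    q = np.searchsorted(model_sorted, values, side="right") / len(model_sorted)
    q = np.clip(q, 0.0, 1.0)
    # Inverse empirical CDF of the observations at those quantiles
    return np.quantile(obs_sorted, q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, size=5000)           # "observed" Chl-a-like sample
model = 2.0 * rng.gamma(2.0, 1.0, size=5000)   # biased model: twice too high
corrected = empirical_quantile_map(model, obs, model)
```

After mapping, the corrected values follow the observed distribution, so a fixed feature-identification threshold exceeds in comparable areas in both fields.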


2011 ◽  
Vol 109 ◽  
pp. 466-470 ◽  
Author(s):  
Wen Ying Wang ◽  
Ru Jiang Guo ◽  
Hui Yu ◽  
Si Rui Tian

Spatial verification for object retrieval is often time-consuming and susceptible to viewpoint changes. In this paper, we propose a novel spatial verification method that is robust to viewpoint changes. Firstly, the affine covariant neighborhoods (ACNs) of corresponding local regions are matched to eliminate possible false matches. Secondly, RANSAC is performed to estimate the affine transformation from each single pair of corresponding local regions, without the gravity vector assumption used in previous spatial verification methods. Experimental results demonstrate that this method is more robust and faster than previous spatial verification methods.
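The proposed method estimates an affine transformation per region pair; a generic point-correspondence RANSAC affine estimator, sketched below for illustration (not the authors' ACN-based formulation), shows the underlying hypothesise-and-verify loop:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src -> dst (n >= 3 points)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M = dst
    return M.T                                     # 2x3: [linear | translation]

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """RANSAC: repeatedly fit an affine to 3 random correspondences and
    keep the hypothesis with the most inliers."""
    rng = np.random.default_rng(seed)
    best_M, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]
        inliers = int((np.linalg.norm(pred - dst, axis=1) < tol).sum())
        if inliers > best_inliers:
            best_M, best_inliers = M, inliers
    return best_M, best_inliers

# Synthetic correspondences: rotation + translation, with two gross outliers
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(30, 2))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R.T + np.array([5.0, -3.0])
dst[:2] += 50.0                                    # corrupt two matches
M, inliers = ransac_affine(src, dst)
```

The recovered linear part matches the true rotation and the two outliers are rejected, which is the behaviour that makes RANSAC-style verification robust to false matches.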


2017 ◽  
Vol 145 (6) ◽  
pp. 2257-2279 ◽  
Author(s):  
Bryan J. Putnam ◽  
Ming Xue ◽  
Youngsun Jung ◽  
Nathan A. Snook ◽  
Guifu Zhang

Abstract Ensemble-based probabilistic forecasts are performed for a mesoscale convective system (MCS) that occurred over Oklahoma on 8–9 May 2007, initialized from ensemble Kalman filter analyses using multinetwork radar data and different microphysics schemes. Two experiments are conducted, using either a single-moment or double-moment microphysics scheme during the 1-h-long assimilation period and in subsequent 3-h ensemble forecasts. Qualitative and quantitative verifications are performed on the ensemble forecasts, including probabilistic skill scores. The predicted dual-polarization (dual-pol) radar variables and their probabilistic forecasts are also evaluated against available dual-pol radar observations, and discussed in relation to predicted microphysical states and structures. Evaluation of predicted reflectivity (Z) fields shows that the double-moment ensemble predicts the precipitation coverage of the leading convective line and stratiform precipitation regions of the MCS with higher probabilities throughout the forecast period compared to the single-moment ensemble. In terms of the simulated differential reflectivity (ZDR) and specific differential phase (KDP) fields, the double-moment ensemble compares more realistically to the observations and better distinguishes the stratiform and convective precipitation regions. The ZDR from individual ensemble members indicates better raindrop size sorting along the leading convective line in the double-moment ensemble. Various commonly used ensemble forecast verification methods are examined for the prediction of dual-pol variables. The results demonstrate the challenges associated with verifying predicted dual-pol fields that can vary significantly in value over small distances. Several microphysics biases are noted with the help of simulated dual-pol variables, such as substantial overprediction of KDP values in the single-moment ensemble.
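The probabilistic verification step can be illustrated with a minimal sketch of ensemble exceedance probabilities and a Brier score (a generic example on synthetic fields, not the paper's exact skill scores or data):

```python
import numpy as np

def exceedance_probability(ensemble, threshold):
    """Fraction of members exceeding a threshold at each grid point.
    `ensemble` has shape (n_members, ny, nx)."""
    return (ensemble > threshold).mean(axis=0)

def brier_score(prob, outcome):
    """Mean squared error of probability forecasts against the binary
    observed outcome (0 or 1); lower is better."""
    return np.mean((prob - outcome) ** 2)

rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 5.0, size=(40, 40))                 # "observed" field
members = truth + rng.normal(0, 3.0, size=(20, 40, 40))    # 20 perturbed members
prob = exceedance_probability(members, threshold=15.0)
bs = brier_score(prob, (truth > 15.0).astype(float))
```

The same machinery applies whether the thresholded variable is reflectivity Z or a dual-pol quantity such as ZDR or KDP, although, as the abstract notes, fields that vary strongly over small distances are harder to verify this way.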


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Ruoshui Liu ◽  
Jianghui Liu ◽  
Jingjie Zhang ◽  
Moli Zhang

Cloud computing offers a new approach to data storage in which users upload video data to cloud servers without keeping redundant local copies. However, this removes the data from the direct control of the users who would conventionally manage it. How to ensure the integrity and reliability of video data stored in the cloud therefore becomes a key issue for the provision of video streaming services to end users. This paper details verification methods for the integrity of video data encrypted using fully homomorphic cryptosystems in the context of cloud computing. Specifically, we apply dynamic operations to video data stored in the cloud using block tags, so that the integrity of the data can be successfully verified. The whole process is based on an analysis of existing Remote Data Integrity Checking (RDIC) methods.
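The block-tag idea can be illustrated with a simplified sketch using plain hashes (the paper's scheme relies on homomorphic cryptosystems so that integrity can be proved over encrypted data; the hash-based version below only shows how per-block tags localise corruption and support block-level updates):

```python
import hashlib

def make_block_tags(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and compute one tag per block,
    so a single-block update only requires recomputing one tag."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def verify_blocks(data: bytes, tags, block_size: int = 4096):
    """Return the indices of blocks whose current tag no longer matches."""
    current = make_block_tags(data, block_size)
    return [i for i, (a, b) in enumerate(zip(current, tags)) if a != b]

video = bytes(range(256)) * 64                      # 16 KiB of toy "video" data
tags = make_block_tags(video)                       # 4 blocks of 4 KiB
corrupted = video[:5000] + b"\x00" + video[5001:]   # flip one byte in block 1
bad = verify_blocks(corrupted, tags)
```

Only the block containing the corrupted byte is flagged, so the client can request retransmission of that block alone rather than the whole file.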


The conventional classification of software verification methods is analyzed. It is concluded that synthetic methods of software verification, which combine several approaches, are the most relevant, useful, and productive. The adoption of formal verification methods for computer-system software, which complement the traditional methods of testing and debugging and make it possible to improve the uptime and security of programs, is therefore timely. Formal verification methods can guarantee that the verified properties hold for the system model. Nowadays, these methods are actively being developed in the direction of reducing the total cost of formal verification, supporting modern programming concepts, and minimizing "manual" work in the transition from the system model to its implementation. Their main feature is the ability to search for errors using a mathematical model, without recourse to an existing implementation of the software, which is both convenient and economical. Several specific techniques are used for the analysis of formal models, such as deductive analysis, model checking, and consistency checking; each verification method is used in particular cases, depending on the goal. Synthetic methods are considered the most relevant, useful, and efficient because they try to combine the advantages of the different verification approaches while avoiding their drawbacks. Significant progress has been made in the development of such methods and in their adoption in industrial software development practice.
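Model checking, one of the formal-model analysis techniques mentioned above, can be sketched as an exhaustive breadth-first search of a system's reachable states for a violation of an invariant (a toy explicit-state checker, for illustration only):

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explicit-state model checking: breadth-first exploration of all
    states reachable from `initial`, returning the first state that
    violates `invariant`, or None if the invariant holds everywhere."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state                      # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                               # invariant verified on the model

# Toy system: a counter modulo 7. The first invariant fails, the second holds.
bad = check_invariant(0, lambda s: [(s + 1) % 7], lambda s: s != 5)
ok = check_invariant(0, lambda s: [(s + 1) % 7], lambda s: 0 <= s < 7)
```

The key property, errors are found on the mathematical model without running the implementation, is exactly what the search over the transition system provides.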


2015 ◽  
Vol 144 (1) ◽  
pp. 213-224 ◽  
Author(s):  
Chiara Piccolo ◽  
Mike Cullen

Abstract A natural way to set up an ensemble forecasting system is to use a model with additional stochastic forcing representing the model error and to derive the initial uncertainty by using an ensemble of analyses generated with this model. Current operational practice has tended to separate the problems of generating initial uncertainty and forecast uncertainty. Thus, in ensemble forecasts, it is normal to use physically based stochastic forcing terms to represent model errors, while in generating analysis uncertainties, artificial inflation methods are used to ensure that the analysis spread is sufficient given the observations. In this paper a more unified approach is tested that uses the same stochastic forcing in the analyses and forecasts and estimates the model error forcing from data assimilation diagnostics. This is shown to be successful if there are sufficient observations. Ensembles used in data assimilation have to be reliable in a broader sense than that captured by the usual forecast verification methods; in particular, they need to have the correct covariance structure, as is demonstrated here.
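The usual, narrower notion of ensemble reliability can be illustrated with a minimal spread-skill consistency check (a generic sketch on synthetic data: for a statistically consistent ensemble, the mean ensemble spread should be comparable to the RMSE of the ensemble mean):

```python
import numpy as np

def spread_and_skill(ensemble, verification):
    """For a reliable ensemble, the mean ensemble standard deviation
    (spread) should be comparable to the RMSE of the ensemble mean."""
    spread = ensemble.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(np.mean((ensemble.mean(axis=0) - verification) ** 2))
    return spread, rmse

rng = np.random.default_rng(3)
n = 10000
state = rng.normal(0.0, 1.0, size=n)                  # unknown true state
truth = state + rng.normal(0.0, 0.5, size=n)          # verifying analysis
members = state + rng.normal(0.0, 0.5, size=(20, n))  # 20-member ensemble
spread, rmse = spread_and_skill(members, truth)       # spread and rmse agree
```

The broader requirement discussed in the abstract, a correct covariance structure, goes beyond such pointwise checks: it constrains how errors at different locations and variables are correlated, which is what data assimilation actually exploits.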

