spatial verification
Recently Published Documents

TOTAL DOCUMENTS: 38 (five years: 10)
H-INDEX: 9 (five years: 1)
2021 ◽ Vol 4 ◽ pp. 30-49
Author(s): A.Yu. Bundel, A.V. Muraviev, E.D. Olkhovaya, ...

State-of-the-art high-resolution NWP models simulate mesoscale systems with a high degree of detail, with large amplitudes and high gradients in the fields of weather variables. Higher resolution leads to spatial and temporal error growth and to the well-known double penalty problem. To address this problem, spatial verification methods have been developed over the last two decades; they ignore moderate errors (especially in position) while still evaluating the useful skill of a high-resolution model. The paper presents an updated classification of spatial verification methods, briefly describes the main methods, and gives an overview of the international projects for intercomparison of these methods. Special attention is given to the application of the spatial approach to ensemble forecasting. Popular software packages are considered. Russian translations are proposed for the relevant English terms. Keywords: high-resolution models, verification, double penalty, spatial methods, ensemble forecasting, object-based methods
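As a toy illustration of the double penalty (not from the paper; a sketch assuming NumPy and synthetic one-dimensional fields), a sharp feature displaced by a few grid points is penalized twice by a pointwise score, so a featureless forecast can win:

```python
import numpy as np

# Synthetic 1-D "rain" field: a single sharp feature.
obs = np.zeros(100)
obs[38:43] = 10.0

# High-resolution forecast: the same feature, displaced by 10 grid points.
fcst_hires = np.roll(obs, 10)

# Smooth, coarse-style forecast: the feature smeared into the domain mean.
fcst_smooth = np.full(100, obs.mean())

def rmse(f, o):
    return np.sqrt(np.mean((f - o) ** 2))

# The displaced feature counts as both a miss (at the true location) and
# a false alarm (at the predicted one), so the realistic forecast scores
# worse: RMSE ~3.2 versus ~2.2 for the featureless field.
print(rmse(fcst_hires, obs), rmse(fcst_smooth, obs))
```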


Author(s): J. Williams, M.C. Hung, Y.H. Wu

Analysis was conducted to verify forecast precipitation against observed precipitation associated with mid-latitude cyclones over the Eastern US in winter and spring 2013 using Geographic Information Systems (GIS). The forecast data are day-two 24-hour Quantitative Precipitation Forecasts (QPF) produced by the Global Forecast System (GFS) model. The analysis methods produced categorical geographic error maps of hits, misses, and false alarms in spatial relation to the mid-latitude cyclones, as well as traditional verification scores for each day. A hypothesis test was also performed to determine whether the GFS mean forecast precipitation over the study area differs significantly from the mean observed precipitation during mid-latitude cyclones. The spatial verification maps, as an analytical and visualization tool, provided evidence of the geographical relationship between correct predictions (hits and correct negatives) and incorrect predictions (misses and false alarms). Together with the quantitative scores and the hypothesis test, the spatial verification maps reveal that the GFS model tends to overforecast precipitation coverage associated with mid-latitude cyclones over the Eastern US and often moves the cyclones too fast.
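A minimal sketch of how such categorical error maps can be derived from gridded fields (assuming NumPy; the function names and the 1 mm threshold are illustrative assumptions, not the study's GIS workflow):

```python
import numpy as np

def categorical_map(fcst, obs, threshold=1.0):
    """Classify each grid cell as a hit, miss, false alarm, or correct
    negative at a given precipitation threshold."""
    f = fcst >= threshold
    o = obs >= threshold
    cat = np.full(fcst.shape, "correct_negative", dtype=object)
    cat[f & o] = "hit"
    cat[~f & o] = "miss"
    cat[f & ~o] = "false_alarm"
    return cat

# Traditional scores follow from the category counts, e.g. the frequency
# bias (forecast event count over observed event count):
def frequency_bias(cat):
    hits = np.sum(cat == "hit")
    misses = np.sum(cat == "miss")
    false_alarms = np.sum(cat == "false_alarm")
    return (hits + false_alarms) / (hits + misses)
```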


Author(s): Joel Brogan, Aparna Bharati, Daniel Moreira, Anderson Rocha, Kevin Bowyer, ...

2020
Author(s): Jan Maksymczuk, Ric Crocker, Marion Mittermaier, Christine Pequignet

HiVE is a CMEMS-funded collaboration between the atmospheric Numerical Weather Prediction (NWP) verification and ocean communities within the Met Office, aimed at demonstrating the use of spatial verification methods, originally developed for the evaluation of high-resolution NWP forecasts, with CMEMS ocean model forecast products. Spatial verification methods provide more scale-appropriate ways to assess the characteristics and accuracy of km-scale forecasts, where the detail looks realistic but may not be in the right place at the right time. As a result, coarser-resolution forecasts can verify better (e.g. with a lower root-mean-square error) than higher-resolution ones: the smoothness of the coarser forecast is rewarded even though the higher-resolution forecast may be better. The project utilised the open-source code library known as Model Evaluation Tools (MET), developed at the US National Center for Atmospheric Research (NCAR).

This project saw, for the first time, the application of spatial verification methods to sub-10 km resolution ocean model forecasts. The project consisted of two parts. Part 1 describes an assessment of the forecast skill for SST of CMEMS model configurations at observing locations using an approach called HiRA (High Resolution Assessment). Part 2 is described in the companion poster to this one.

HiRA is a single-observation, forecast-neighbourhood method which makes use of commonly used ensemble verification metrics such as the Brier Score (BS) and the Continuous Ranked Probability Score (CRPS). All model grid points within a predefined neighbourhood of the observing location are considered equi-probable outcomes (or pseudo-ensemble members) at that location. The technique allows for an intercomparison of models with different grid resolutions, as well as between deterministic and probabilistic forecasts, in an equitable and consistent way. In this work it has been applied to the CMEMS products delivered from the AMM7 (~7 km) and AMM15 (~1.5 km) model configurations for the European North West Shelf that are provided by the Met Office.

It has been found that when neighbourhoods of equivalent extent are compared for both configurations, it is possible to show improved SST forecast skill for the higher-resolution AMM15 both on- and off-shelf, which has been difficult to demonstrate previously using traditional metrics. Forecast skill generally degrades with increasing lead time for both configurations, with the off-shelf results for the higher-resolution model showing increasing benefits over the coarser configuration.
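A minimal sketch of the HiRA idea (assuming NumPy; the function names and neighbourhood indexing are illustrative, and this is not the Met Office or MET implementation). Every grid point in a neighbourhood of the observing location is treated as an equi-probable pseudo-ensemble member, and an ensemble score such as the CRPS is computed from that set:

```python
import numpy as np

def crps_ensemble(members, obs_value):
    """Empirical CRPS of a pseudo-ensemble: E|X - y| - 0.5 * E|X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs_value).mean()
    term2 = np.abs(members[:, None] - members[None, :]).mean()
    return term1 - 0.5 * term2

def hira_crps(field, i, j, half_width, obs_value):
    """HiRA-style score at one observing location: all grid points within
    a square neighbourhood of the nearest grid point (i, j) are treated
    as equi-probable outcomes. Assumes the neighbourhood lies inside the
    grid."""
    nbhd = field[i - half_width:i + half_width + 1,
                 j - half_width:j + half_width + 1]
    return crps_ensemble(nbhd.ravel(), obs_value)
```

Comparing configurations equitably then amounts to choosing neighbourhoods of roughly equivalent physical extent, e.g. 3x3 on a ~7 km grid against 15x15 on a ~1.5 km grid.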


2020
Author(s): Marion Mittermaier, Rachel North, Christine Pequignet, Jan Maksymczuk

HiVE is a CMEMS-funded collaboration between the atmospheric Numerical Weather Prediction (NWP) verification and ocean communities within the Met Office, aimed at demonstrating the use of spatial verification methods, originally developed for the evaluation of high-resolution NWP forecasts, with CMEMS ocean model forecast products. Spatial verification methods provide more scale-appropriate ways to assess the characteristics and accuracy of km-scale forecasts, where the detail looks realistic but may not be in the right place at the right time. As a result, coarser-resolution forecasts can verify better (e.g. with a lower root-mean-square error) than higher-resolution ones: the smoothness of the coarser forecast is rewarded even though the higher-resolution forecast may be better. The project utilised the open-source code library known as Model Evaluation Tools (MET), developed at the US National Center for Atmospheric Research (NCAR).

This project saw, for the first time, the application of spatial verification methods to sub-10 km resolution ocean model forecasts. The project consisted of two parts. Part 1 is described in the companion poster to this one. Part 2 describes the skill of CMEMS products for forecasting events or features of interest, such as algal blooms.

The Method for Object-based Diagnostic Evaluation (MODE) and its time-dimension version, MODE Time Domain (MTD), were applied to daily mean chlorophyll forecasts for the European North West Shelf from the FOAM-ERSEM model on the AMM7 grid. The forecasts are produced from a "cold start", i.e. without data assimilation of biological variables. The entire 2019 algal bloom season was analysed to understand intensity and spatial (area) biases as well as location and timing errors. Forecasts were compared to the CMEMS daily cloud-free (L4) multi-sensor chlorophyll-a product.

It has been found that there are large differences between forecast and observed concentrations of chlorophyll, which meant that a quantile-mapping approach for removing the bias was necessary before analysing the spatial properties of the forecast. Despite this, the model still produces areas of chlorophyll that are too large compared to the observed ones. The model often produces areas of enhanced chlorophyll in approximately the right locations, but the forecast and observed areas are rarely collocated and/or overlapping. Finally, the temporal analysis shows that the model struggled to capture the onset of the season (being close to a month too late), but once the model picked up the signal there was better correspondence between the observed and forecast chlorophyll peaks for the remainder of the season. There was very little variation in forecast performance with lead time, suggesting that chlorophyll is a very slowly varying quantity.

Comparing an analysis which included the assimilation of observed chlorophyll shows that it is much closer to the observed L4 product than the non-biological assimilative analysis. It must be concluded that if the forecast were started from a DA analysis that included chlorophyll, it would lead to forecasts with less bias, and possibly a better detection of the onset of the bloom.
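The quantile-mapping bias removal mentioned above can be sketched as follows (assuming NumPy; the function name, the 101-quantile resolution, and the use of season-long climatology samples are illustrative assumptions, not the HiVE processing):

```python
import numpy as np

def quantile_map(fcst, fcst_clim, obs_clim):
    """Replace each forecast value with the observed value that has the
    same empirical quantile, removing the systematic amplitude bias
    while leaving the spatial pattern in place."""
    q = np.linspace(0.0, 1.0, 101)
    f_q = np.quantile(fcst_clim, q)  # forecast climatology quantiles
    o_q = np.quantile(obs_clim, q)   # observed climatology quantiles
    return np.interp(fcst, f_q, o_q)
```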


2020 ◽ Vol 16 (9) ◽ pp. 1393
Author(s): Chen Ming, Zhang Zhifeng, Gao Tielian, Duan Li, Zhang Junpeng

2019 ◽ Vol 12 (8) ◽ pp. 3401-3418
Author(s): Sebastian Buschow, Jakiw Pidstrigach, Petra Friederichs

Abstract. The quality of precipitation forecasts is difficult to evaluate objectively because images with disjointed features surrounded by zero intensities cannot easily be compared pixel by pixel: any displacement between observed and predicted fields is punished twice, generally leading to better marks for coarser models. To answer the question of whether a highly resolved model truly delivers an improved representation of precipitation processes, alternative tools are thus needed. Wavelet transformations can be used to summarize high-dimensional data in a few numbers which characterize the field's texture. A comparison of the transformed fields judges models solely based on their ability to predict spatial structures. The fidelity of the forecast's overall pattern is thus investigated separately from potential errors in feature location. This study introduces several new wavelet-based structure scores for the verification of deterministic as well as ensemble predictions. Their properties are rigorously tested in an idealized setting: a recently developed stochastic model for precipitation extremes generates realistic pairs of synthetic observations and forecasts with prespecified spatial correlations. The wavelet scores are found to react sensitively to differences in structural properties, meaning that the objectively best forecast can be determined even in cases where this task is difficult to accomplish by naked eye. Random rain fields prove to be a useful test bed for any verification tool that aims for an assessment of structure.
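A toy version of the wavelet idea (assuming NumPy and the PyWavelets package; the paper's structure scores are more elaborate than this energy-per-scale summary, and the function names are illustrative):

```python
import numpy as np
import pywt  # PyWavelets

def scale_energy_spectrum(field, wavelet="haar", levels=4):
    """Mean squared detail-coefficient energy at each decomposition
    scale: a compact summary of the field's texture that is insensitive
    to where features sit."""
    coeffs = pywt.wavedec2(field, wavelet, level=levels)
    # coeffs[0] is the coarse approximation; each later entry holds the
    # (horizontal, vertical, diagonal) details of one scale.
    return np.array([np.mean([np.mean(c ** 2) for c in details])
                     for details in coeffs[1:]])

def structure_distance(fcst, obs):
    """Compare normalized scale spectra, ignoring feature location."""
    sf = scale_energy_spectrum(fcst)
    so = scale_energy_spectrum(obs)
    return np.abs(sf / sf.sum() - so / so.sum()).sum()
```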


2019
Author(s): Sebastian Buschow, Jakiw Pidstrigach, Petra Friederichs

Abstract. The quality of precipitation forecasts is difficult to evaluate objectively because images with disjoint features surrounded by zero intensities cannot easily be compared pixel by pixel: any displacement between observed and predicted fields is punished twice, generally leading to better marks for coarser models. To answer the question of whether a highly resolved model truly delivers an improved representation of precipitation processes, alternative tools are thus needed. Wavelet transformations can be used to summarize high-dimensional data in a few numbers which characterize the field's texture. A comparison of the transformed fields judges models solely based on their ability to predict spatial correlations. The fidelity of the forecast's overall structure is thus investigated separately from potential errors in feature location. This study introduces several new wavelet-based structure scores for the verification of deterministic as well as ensemble predictions. Their properties are rigorously tested in an idealized setting: a recently developed stochastic model for precipitation extremes generates realistic pairs of synthetic observations and forecasts with prespecified spatial correlations. The wavelet scores are found to react sensitively to differences in structural properties, meaning that the objectively best forecast can be determined even in cases where this task is difficult to accomplish by naked eye. Random rain fields prove to be a useful test bed for any verification tool that aims for an assessment of structure.


Author(s): M.P. Mittermaier

Abstract. The Fractions Skill Score (FSS) is arguably one of the most popular spatial verification metrics in use today. The fractions of grid points exceeding a threshold within forecast and observed field neighbourhoods are examined to compute the score. By definition a perfect forecast has an FSS of 1, and a "no skill" forecast has a score of 0. It is shown that the denominator defines the score's characteristics. The FSS is undefined for instances where neither the forecast nor the observed field exceeds the threshold. In the limiting case, the FSS for a perfect null (zero) forecast is also undefined, unless a threshold of ≥ 0 is used, in which case it would be 1 (i.e. perfect). Furthermore, the FSS is 0 if either the forecast or the observed field does not exceed the threshold. This symmetry means it cannot differentiate between what are traditionally referred to as false alarms and misses; additional supplementary information is required. The FSS is greater than 0 if and only if there are values exceeding a given threshold in both the forecast and the observed field. The magnitude of an overall score computed over many forecasts is sensitive to the pooling method. Zero scores are non-trivial: excluding them implies excluding all situations associated with false alarms or misses. Omitting near-zero scores is a more credible decision, but only if it can be proven that these are related to spurious artefacts in the observed field. To avoid ambiguity, the components of the FSS should be aggregated separately when computing an overall score for most applications and purposes.
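A minimal sketch of the FSS computation (the standard Roberts and Lean 2008 form, assuming NumPy and SciPy; the function and variable names are illustrative). Returning the numerator and denominator separately allows them to be aggregated over many forecasts before the score is formed, in line with the recommendation above:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss_components(fcst, obs, threshold, n):
    """Numerator and denominator of the FSS for an n x n neighbourhood;
    FSS = 1 - num / den, and the score is undefined when den == 0
    (neither field exceeds the threshold anywhere)."""
    # Fraction of points exceeding the threshold in each neighbourhood.
    f = uniform_filter((fcst >= threshold).astype(float), size=n, mode="constant")
    o = uniform_filter((obs >= threshold).astype(float), size=n, mode="constant")
    num = np.mean((f - o) ** 2)
    den = np.mean(f ** 2) + np.mean(o ** 2)
    return num, den

# Aggregate over cases by summing numerators and denominators first:
# FSS_overall = 1 - sum(nums) / sum(dens), rather than averaging
# per-case scores, some of which may be undefined or exactly zero.
```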

