Toward Better Understanding of the Contiguous Rain Area (CRA) Method for Spatial Forecast Verification

2009 ◽  
Vol 24 (5) ◽  
pp. 1401-1415 ◽  
Author(s):  
Elizabeth E. Ebert ◽  
William A. Gallus

Abstract The contiguous rain area (CRA) method for spatial forecast verification is a features-based approach that evaluates the properties of forecast rain systems, namely, their location, size, intensity, and finescale pattern. It is one of many recently developed spatial verification approaches that are being evaluated as part of a Spatial Forecast Verification Methods Intercomparison Project. To better understand the strengths and weaknesses of the CRA method, it has been tested here on a set of idealized geometric and perturbed forecasts with known errors, as well as nine precipitation forecasts from three high-resolution numerical weather prediction models. The CRA method was able to identify the known errors for the geometric forecasts, but only after a modification was introduced to allow nonoverlapping forecast and observed features to be matched. For the perturbed cases in which a radar rain field was spatially translated and amplified to simulate forecast errors, the CRA method also reproduced the known errors except when a high-intensity threshold was used to define the CRA (≥10 mm h⁻¹) and a large translation error was imposed (>200 km). The decomposition of total error into displacement, volume, and pattern components reflected the source of the error almost all of the time when a mean squared error formulation was used, but not necessarily when a correlation-based formulation was used. When applied to real forecasts, the CRA method gave similar results when either of the two best-fit criteria, minimization of the mean squared error or maximization of the correlation coefficient, was chosen for matching forecast and observed features. The diagnosed displacement error was somewhat sensitive to the choice of search distance. Of the many diagnostics produced by this method, the errors in the mean and peak rain rate between the forecast and observed features showed the best correspondence with subjective evaluations of the forecasts, while the spatial correlation coefficient (after matching) did not reflect the subjective judgments.
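For illustration, a minimal sketch of the MSE-based error decomposition referred to above is given below, assuming the best-fit displacement has already been found by the CRA search; the function name and the simple edge handling are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cra_mse_decomposition(forecast, observed, best_shift):
    """Hedged sketch of the CRA decomposition of total MSE into
    displacement, volume, and pattern components. `forecast` and
    `observed` are 2-D rain fields over the CRA; `best_shift` is the
    (rows, cols) translation that minimizes the MSE, assumed known."""
    mse_total = np.mean((forecast - observed) ** 2)

    # Shift the forecast by the best-fit displacement (np.roll is used
    # for brevity; a real implementation would treat the domain edges
    # explicitly rather than wrapping them around).
    shifted = np.roll(forecast, shift=best_shift, axis=(0, 1))
    mse_shifted = np.mean((shifted - observed) ** 2)

    mse_displacement = mse_total - mse_shifted             # error removed by the shift
    mse_volume = (shifted.mean() - observed.mean()) ** 2   # mean (volume) bias
    mse_pattern = mse_shifted - mse_volume                 # remaining finescale error
    return mse_displacement, mse_volume, mse_pattern
```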

2020 ◽  
Author(s):  
Rafael Massahiro Yassue ◽  
José Felipe Gonzaga Sabadin ◽  
Giovanni Galli ◽  
Filipe Couto Alves ◽  
Roberto Fritsche-Neto

Abstract Usually, the comparison among genomic prediction models is based on validation schemes such as repeated random subsampling (RRS) or K-fold cross-validation. Nevertheless, the design of the training and validation sets strongly affects how, and how subjectively, models are compared. The procedures cited above overlap across replicates, which can inflate estimates and break the independence of residuals through resampling, leading to less accurate results. Furthermore, post hoc tests such as ANOVA are not recommended because the assumption of residual independence is not fulfilled. Thus, we propose a new way to sample observations for training and validation sets based on a cross-validation alpha-based design (CV-α). CV-α is designed to create several validation scenarios (replicates × folds) regardless of the number of treatments. Using CV-α, the number of genotypes placed in the same fold across replicates was much lower than with K-fold, indicating greater residual independence. Therefore, as a proof of concept, we could use ANOVA to compare the proposed methodology to RRS and K-fold, applying four genomic prediction models to a simulated and a real dataset. Concerning predictive ability and bias, all validation methods performed similarly. However, regarding the mean squared error and the coefficient of variation, CV-α performed best under the evaluated scenarios. Moreover, as it adds no cost or complexity, it is more reliable and allows non-subjective methods to be used to compare models and factors. Therefore, CV-α can be considered a more precise validation methodology for model selection.
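As an illustration of the overlap diagnostic mentioned above (how often genotypes share a fold across replicates), a minimal sketch follows; the function name and the random 5-fold example are assumptions for illustration, not the authors' implementation of CV-α.

```python
import itertools
import numpy as np

def mean_fold_cooccurrence(fold_assignments):
    """For a (replicates x genotypes) array of fold labels, return the
    average fraction of replicates in which a pair of genotypes lands
    in the same fold. Lower values indicate more independent
    validation sets, the property claimed for CV-α over K-fold."""
    n_rep, n_gen = fold_assignments.shape
    fractions = []
    for i, j in itertools.combinations(range(n_gen), 2):
        same = np.sum(fold_assignments[:, i] == fold_assignments[:, j])
        fractions.append(same / n_rep)
    return float(np.mean(fractions))

# Example: 5 replicates of a random 5-fold split of 100 genotypes.
rng = np.random.default_rng(0)
kfold_like = np.stack([rng.permutation(np.repeat(np.arange(5), 20))
                       for _ in range(5)])
print(mean_fold_cooccurrence(kfold_like))
```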


2015 ◽  
Vol 143 (12) ◽  
pp. 5115-5133 ◽  
Author(s):  
Michael A. Hollan ◽  
Brian C. Ancell

Abstract The use of ensembles in numerical weather prediction models is becoming an increasingly effective method of forecasting. Many studies have shown that using the mean of an ensemble as a deterministic solution produces the most accurate forecasts. However, the mean will eventually lose its usefulness as a deterministic forecast in the presence of nonlinearity. At synoptic scales, this appears to occur between 12- and 24-h forecast time, and on storm scales it may occur significantly faster due to stronger nonlinearity. When this does occur, the question then becomes the following: Should the mean still be adhered to, or would a different approach produce better results? This paper will investigate the usefulness of the mean within a WRF Model utilizing an ensemble Kalman filter for severe convective events. To determine when the mean becomes unrealistic, the divergence of the mean of the ensemble (“mean”) and a deterministic forecast initialized from a set of mean initial conditions (“control”) are examined. It is found that significant divergence between the mean and control emerges no later than 6 h into a convective event. The mean and control are each compared to observations, with the control being more accurate for nearly all forecasts studied. For the case where the mean provides a better forecast than the control, an approach is offered to identify the member or group of members that is closest to the mean. Such a forecast will contain similar forecast errors as the mean, but unlike the mean, will be on an actual forecast trajectory.
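Two of the diagnostics described above, the divergence of the ensemble mean from the control and the selection of the member closest to the mean, can be sketched as follows; the array shapes and the RMS metric are assumptions for illustration, not the study's verification code.

```python
import numpy as np

def mean_vs_control(members, control):
    """`members`: (n_members, n_times, ny, nx) forecast fields;
    `control`: (n_times, ny, nx) forecast run from the mean initial
    conditions. Returns the RMS mean-control difference at each lead
    time and the index of the member closest (in RMS) to the ensemble
    mean, i.e. an actual trajectory resembling the mean."""
    ens_mean = members.mean(axis=0)
    divergence = np.sqrt(((ens_mean - control) ** 2).mean(axis=(1, 2)))
    dist_to_mean = np.sqrt(((members - ens_mean) ** 2).mean(axis=(1, 2, 3)))
    return divergence, int(np.argmin(dist_to_mean))
```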


2017 ◽  
Vol 32 (2) ◽  
pp. 733-741 ◽  
Author(s):  
Craig S. Schwartz

Abstract As high-resolution numerical weather prediction models are now commonplace, “neighborhood” verification metrics are regularly employed to evaluate forecast quality. These neighborhood approaches relax the requirement that perfect forecasts must match observations at the grid scale, contrasting traditional point-by-point verification methods. One recently proposed metric, the neighborhood equitable threat score, is calculated from 2 × 2 contingency tables that are populated within a neighborhood framework. However, the literature suggests three subtly different methods of populating neighborhood-based contingency tables. Thus, this work compares and contrasts these three variants and shows they yield statistically significantly different conclusions regarding forecast performance, illustrating that neighborhood-based contingency tables should be constructed carefully and transparently. Furthermore, this paper shows how two of the methods use inconsistent event definitions and suggests a “neighborhood maximum” approach be used to fill neighborhood-based contingency tables.
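One way to populate a neighborhood-based 2 × 2 contingency table and compute the equitable threat score is sketched below. The event definition used, an event occurs at a grid point if the field exceeds the threshold anywhere in the surrounding square neighborhood, is one reading of the "neighborhood maximum" approach, not a line-by-line reproduction of the paper's method.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def neighborhood_ets(fcst, obs, threshold, width):
    """Fill a 2x2 contingency table from neighborhood-maximum events on
    2-D forecast and observation grids, then return the equitable
    threat score (ETS)."""
    f_event = maximum_filter(fcst, size=width) >= threshold
    o_event = maximum_filter(obs, size=width) >= threshold

    hits = np.sum(f_event & o_event)
    false_alarms = np.sum(f_event & ~o_event)
    misses = np.sum(~f_event & o_event)
    total = f_event.size

    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)
```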


Atmosphere ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 88
Author(s):  
Wei He ◽  
Taisong Xiong ◽  
Hao Wang ◽  
Jianxin He ◽  
Xinyue Ren ◽  
...  

Precipitation nowcasting is extremely important in disaster prevention and mitigation, and can improve the quality of meteorological forecasts. In recent years, deep learning-based spatiotemporal sequence prediction models have been widely used in precipitation nowcasting, obtaining better prediction results than numerical weather prediction models and traditional radar echo extrapolation methods. Because existing deep learning models rarely consider the inherent interactions between the model input data and the previous output, their prediction results do not sufficiently meet actual forecast requirements. We propose a Modified Convolutional Gated Recurrent Unit (M-ConvGRU) model that performs convolution operations on the input data and the previous output of a GRU network. Moreover, the model adopts an encoder–forecaster structure to better capture the spatiotemporal correlations in radar echo maps. The results of multiple experiments demonstrate the effectiveness of the proposed model. The balanced mean absolute error (B-MAE) and balanced mean squared error (B-MSE) of M-ConvGRU are slightly lower than those of Convolutional Long Short-Term Memory (ConvLSTM), but its mean absolute error (MAE) and mean squared error (MSE) are 6.29% and 10.25% lower than those of ConvLSTM, respectively, and the prediction accuracy and performance for strong echo regions are also improved.
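The core idea of M-ConvGRU, applying convolutions jointly to the current input frame and the previous hidden state inside a GRU cell, can be roughly sketched in PyTorch as follows; channel sizes, kernel size, and gate conventions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU cell: the GRU's matrix products are
    replaced by 2-D convolutions over stacked (input, hidden) maps."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               2 * hidden_channels, kernel_size, padding=padding)
        self.candidate = nn.Conv2d(in_channels + hidden_channels,
                                   hidden_channels, kernel_size, padding=padding)

    def forward(self, x, h_prev):
        # Update (z) and reset (r) gates from the stacked input and state.
        zr = torch.sigmoid(self.gates(torch.cat([x, h_prev], dim=1)))
        z, r = torch.chunk(zr, 2, dim=1)
        # Candidate state uses the reset-gated previous hidden state.
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h_prev], dim=1)))
        return (1 - z) * h_prev + z * h_tilde
```

A full nowcasting model would stack such cells inside the encoder–forecaster framework described in the abstract.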


2011 ◽  
Vol 60 (2) ◽  
pp. 248-255 ◽  
Author(s):  
Sangmun Shin ◽  
Funda Samanlioglu ◽  
Byung Rae Cho ◽  
Margaret M. Wiecek

2018 ◽  
Vol 10 (12) ◽  
pp. 4863 ◽  
Author(s):  
Chao Huang ◽  
Longpeng Cao ◽  
Nanxin Peng ◽  
Sijia Li ◽  
Jing Zhang ◽  
...  

Photovoltaic (PV) modules convert renewable and sustainable solar energy into electricity. However, the uncertainty of PV power production brings challenges for grid operation. To facilitate the management and scheduling of PV power plants, forecasting is an essential technique. In this paper, a robust multilayer perceptron (MLP) neural network was developed for day-ahead forecasting of hourly PV power. A generic MLP is usually trained by minimizing the mean squared loss, but the mean squared error is sensitive to a few particularly large errors, which can lead to a poor estimator. To tackle this problem, the pseudo-Huber loss function, which combines the best properties of squared loss and absolute loss, was adopted in this paper. The effectiveness and efficiency of the proposed method were verified by benchmarking against a generic MLP network on real PV data. Numerical experiments illustrated that the proposed method performed better than the generic MLP network in terms of root mean squared error (RMSE) and mean absolute error (MAE).
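The pseudo-Huber loss mentioned above is quadratic for small errors and nearly linear for large ones, which is what limits the influence of a few very large errors; a minimal sketch follows (the transition parameter delta is a tuning choice, not a value reported by the paper).

```python
import numpy as np

def pseudo_huber_loss(y_true, y_pred, delta=1.0):
    """Mean pseudo-Huber loss: ~0.5*err**2 for |err| << delta and
    ~delta*|err| for |err| >> delta, so large errors are penalized
    linearly rather than quadratically."""
    err = y_pred - y_true
    return np.mean(delta ** 2 * (np.sqrt(1.0 + (err / delta) ** 2) - 1.0))
```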


2016 ◽  
Vol 5 (1) ◽  
pp. 39 ◽  
Author(s):  
Abbas Najim Salman ◽  
Maymona Ameen

This paper is concerned with a minimax shrinkage estimator that uses a double-stage shrinkage technique to lower the mean squared error when estimating the shape parameter (a) of the generalized Rayleigh distribution, in a region (R) around available prior knowledge (a₀) about the actual value of (a) taken as an initial estimate, for the case in which the scale parameter (l) is known.

In situations where experimentation is time consuming or very costly, a double-stage procedure can be used to reduce the expected sample size needed to obtain the estimator.

The proposed estimator is shown to have smaller mean squared error for certain choices of the shrinkage weight factor ψ(·) and a suitable region R.

Expressions for the bias, mean squared error (MSE), expected sample size [E(n/a, R)], expected sample size proportion [E(n/a, R)/n], probability of avoiding the second sample, and percentage of overall sample saved are derived for the proposed estimator.

Numerical results and conclusions for the expressions above are displayed when the considered estimator is a testimator at level of significance Δ.

Comparisons with the minimax estimator and with the most recent studies were made to show the effectiveness of the proposed estimator.
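As background, a double-stage shrinkage testimator of this general type is often written in the form sketched below; this is a generic form under assumed notation (first-stage estimator â₁ from n₁ observations, pooled estimator â₂, weight ψ, prior guess a₀, region R), and conventions for the second stage vary between papers, so it should not be read as the paper's exact estimator.

```latex
\hat{a} =
\begin{cases}
\psi(\hat{a}_1)\,\hat{a}_1 + \bigl(1-\psi(\hat{a}_1)\bigr)a_0, & \hat{a}_1 \in R
  \quad\text{(stop after the first sample of size } n_1\text{)},\\[4pt]
\hat{a}_2, & \hat{a}_1 \notin R
  \quad\text{(take the second sample; } \hat{a}_2 \text{ uses all } n_1+n_2 \text{ observations)}.
\end{cases}
```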


2020 ◽  
Vol 2020 ◽  
pp. 1-22
Author(s):  
Byung-Kwon Son ◽  
Do-Jin An ◽  
Joon-Ho Lee

In this paper, passive localization of an emitter from noisy angle-of-arrival (AOA) measurements using the Brown DWLS (distance-weighted least squares) algorithm is considered. The accuracy of AOA-based localization is quantified by the mean squared error. Various estimates for AOA localization have been derived previously (Doğançay and Hmam, 2008). The explicit expression for the location estimate in that study is used to obtain an analytic expression of the mean squared error (MSE) of one of those estimates. To validate the derived expression, the MSE obtained from Monte Carlo simulation is compared with the analytically derived MSE.
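A simple way to produce the Monte Carlo side of such a comparison is sketched below using a plain pseudolinear least-squares AOA fix (each bearing defines a line through its sensor, and the emitter estimate is the least-squares intersection); this is a generic baseline with an assumed geometry and noise level, not the Brown DWLS algorithm or the paper's analytic MSE.

```python
import numpy as np

rng = np.random.default_rng(1)

def aoa_ls_estimate(sensors, bearings):
    """Pseudolinear least-squares emitter fix from sensor positions
    (N x 2) and measured bearings (N,), in radians."""
    A = np.column_stack([-np.sin(bearings), np.cos(bearings)])
    b = A[:, 0] * sensors[:, 0] + A[:, 1] * sensors[:, 1]
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Monte Carlo MSE for an assumed four-sensor geometry and 1-degree AOA noise.
sensors = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
emitter = np.array([400.0, 700.0])
true_bearings = np.arctan2(emitter[1] - sensors[:, 1], emitter[0] - sensors[:, 0])
sigma = np.deg2rad(1.0)

sq_errors = []
for _ in range(10000):
    noisy = true_bearings + sigma * rng.standard_normal(len(sensors))
    sq_errors.append(np.sum((aoa_ls_estimate(sensors, noisy) - emitter) ** 2))
print("Monte Carlo MSE:", np.mean(sq_errors))
```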

