A Variable-Correlation Model to Characterize Asymmetric Dependence for Postprocessing Short-Term Precipitation Forecasts

2019 ◽  
Vol 148 (1) ◽  
pp. 241-257 ◽  
Author(s):  
Wentao Li ◽  
Quan J. Wang ◽  
Qingyun Duan

Abstract Statistical postprocessing methods can be used to correct bias and dispersion errors in raw ensemble forecasts from numerical weather prediction models. Existing postprocessing models generally perform well when they are assessed on all events, but their performance for extreme events still needs to be investigated. Commonly used joint probability postprocessing models are based on the correlation between forecasts and observations. Because the correlation may be lower for extreme events as a result of larger forecast uncertainty, the dependence between forecasts and observations can be asymmetric with respect to the magnitude of the precipitation. However, the constant correlation coefficient in the traditional joint probability model lacks the flexibility to model asymmetric dependence. In this study, we formulated a new postprocessing model with a decreasing correlation coefficient to characterize asymmetric dependence. We carried out experiments using Global Ensemble Forecast System reforecasts for daily precipitation in the Huai River basin in China. The results show that, although it performs well in terms of continuous ranked probability score and reliability for all events, the traditional joint probability model suffers from overestimation for extreme events defined by the largest 2.5% or 5% of raw forecasts. In contrast, the proposed variable-correlation model is able to alleviate the overestimation and achieves better reliability for extreme events than the traditional model. The proposed variable-correlation model can be seen as a flexible extension of the traditional joint probability model to improve the performance for extreme events.
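The decreasing-correlation idea can be sketched numerically. In a joint probability model, forecast and observation are transformed to standard normal variates and linked through a bivariate normal with correlation ρ; making ρ a decreasing function of the forecast magnitude widens the conditional distribution for large forecasts. The functional form and parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def variable_rho(f_std, rho0=0.8, decay=0.3):
    """Hypothetical decreasing correlation: high for small standardized
    forecasts, lower for large (more uncertain) ones."""
    return rho0 * np.exp(-decay * np.maximum(f_std, 0.0))

def conditional_obs_dist(f_std, rho_fn=variable_rho):
    """Conditional N(mean, sd) of the standardized observation given the
    standardized forecast under a bivariate-normal joint model."""
    rho = rho_fn(f_std)
    mean = rho * f_std
    sd = np.sqrt(1.0 - rho**2)
    return mean, sd

# A large forecast gets a weaker pull toward its own value and a wider
# conditional spread, which counteracts overestimation of extremes.
m_small, s_small = conditional_obs_dist(0.5)
m_large, s_large = conditional_obs_dist(3.0)
```

With a constant ρ the conditional spread would be the same at both forecast magnitudes; the variable ρ reproduces the asymmetric dependence the abstract describes.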

2019 ◽  
Vol 34 (6) ◽  
pp. 2067-2084
Author(s):  
Wentao Li ◽  
Qingyun Duan ◽  
Quan J. Wang

Abstract Statistical postprocessing models can be used to correct bias and dispersion errors in raw precipitation forecasts from numerical weather prediction models. In this study, we conducted experiments to investigate four factors that influence the performance of regression-based postprocessing models with normalization transformations for short-term precipitation forecasts. The factors are 1) normalization transformations, 2) incorporation of ensemble spread as a predictor in the model, 3) objective function for parameter inference, and 4) two postprocessing schemes, including distributional regression and joint probability models. The experiments on the first three factors are based on variants of a censored regression model with conditional heteroscedasticity (CRCH). For the fourth factor, we compared CRCH as an example of distributional regression with a joint probability model. The results show that the CRCH with normal quantile transformation (NQT) or power transformation performs better than the CRCH with log–sinh transformation for most of the subbasins in the Huai River basin with a subhumid climate. The incorporation of ensemble spread as a predictor in CRCH models can improve forecast skill in our research region at short lead times. The influence of different objective functions (minimum continuous ranked probability score or maximum likelihood) on postprocessed results is limited to a few relatively dry subbasins in the research region. Both the distributional regression and the joint probability models have their advantages, and both are able to achieve reliable and skillful forecasts.
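The normal quantile transformation (NQT) mentioned above can be sketched in a few lines: data ranks are converted to plotting-position probabilities and mapped through the standard normal inverse CDF. This is a generic rank-based NQT, not the paper's exact implementation (tie handling and the choice of plotting positions vary between studies).

```python
import numpy as np
from statistics import NormalDist

def nqt(x):
    """Normal quantile transform (a sketch): Weibull plotting positions
    rank/(n+1), mapped through the standard normal inverse CDF."""
    x = np.asarray(x, dtype=float)
    ranks = x.argsort().argsort() + 1      # 1..n; ties broken arbitrarily
    p = ranks / (len(x) + 1.0)             # strictly inside (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(pi) for pi in p])

# A skewed, precipitation-like sample becomes symmetric in NQT space.
z = nqt([0.0, 0.0, 1.2, 5.3, 20.1])
```

The division by n + 1 rather than n keeps the probabilities away from 0 and 1, where the inverse normal CDF diverges.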


2007 ◽  
Vol 22 (5) ◽  
pp. 1089-1100 ◽  
Author(s):  
Christopher A. T. Ferro

Abstract This article proposes a method for verifying deterministic forecasts of rare, extreme events defined by exceedance above a high threshold. A probability model for the joint distribution of forecasts and observations, based on extreme-value theory, characterizes the quality of forecasting systems with two key parameters. This enables verification measures to be estimated for any event rarity and helps to reduce the uncertainty associated with direct estimation. Confidence regions are obtained, and the method is used to compare daily precipitation forecasts from two operational numerical weather prediction models.
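For context, the direct (empirical) estimation that such model-based approaches aim to improve upon works from the 2×2 contingency table of threshold exceedances; as the threshold rises, the counts thin out and these estimates become noisy. A minimal sketch of that direct estimation:

```python
import numpy as np

def exceedance_contingency(fcst, obs, threshold):
    """Direct verification of a high-threshold exceedance event:
    2x2 contingency counts plus hit rate and false-alarm rate, whose
    sampling noise grows as the event gets rarer."""
    f = np.asarray(fcst) > threshold
    o = np.asarray(obs) > threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    corr_neg = np.sum(~f & ~o)
    hit_rate = hits / (hits + misses) if hits + misses else float("nan")
    far = (false_alarms / (false_alarms + corr_neg)
           if false_alarms + corr_neg else float("nan"))
    return hits, misses, false_alarms, corr_neg, hit_rate, far
```

The paper's extreme-value model replaces these raw counts with a parametric fit, so that verification measures can be extrapolated to rarities where direct counts are too sparse.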


2021 ◽  
Vol 112 ◽  
pp. 102710
Author(s):  
Xiaoyu Bai ◽  
Hui Jiang ◽  
Xiaoyu Huang ◽  
Guangsong Song ◽  
Xinyi Ma

2015 ◽  
Vol 528 ◽  
pp. 329-340 ◽  
Author(s):  
Tongtiegang Zhao ◽  
Q.J. Wang ◽  
James C. Bennett ◽  
David E. Robertson ◽  
Quanxi Shao ◽  
...  

2020 ◽  
Vol 10 (8) ◽  
pp. 2919
Author(s):  
Jian Li ◽  
Mengmin He ◽  
Gaofeng Cui ◽  
Xiaoming Wang ◽  
Weidong Wang ◽  
...  

The detection of seismic signals is vital in seismic data processing and analysis. Many algorithms have been proposed to address this problem, such as the ratio of short-term to long-term power averages (STA/LTA), the F detector, and the generalized F detector. However, detection performance is severely degraded by noise. In this paper, we propose a novel seismic signal detection method based on historical waveform features to improve detection performance and reduce the influence of noise. We use the locations of historical events in a specific area, together with their waveform features, to build a joint probability model. For a new signal from this area, we can determine whether it is a seismic signal according to the value of the joint probability. The waveform features used to construct the model include the average spectral energy in a specific frequency band, the energy of the components obtained by decomposing the signal through empirical mode decomposition (EMD), and the peak of the STA/LTA ratio trace. We use a Gaussian process (GP) to build each feature model and finally obtain a multi-feature joint probability model. The locations of the historical events are used in the kernel of the GP, and the historical waveform features are used to train the GP hyperparameters. Beamformed data from the KSRS seismic array of the International Monitoring System are used to train and test the model. The testing results show the effectiveness of the proposed method.
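The STA/LTA trace used as one of the waveform features is straightforward to compute: it is the ratio of a short-term to a long-term moving average of signal power, and it spikes at signal onsets. A minimal sketch (window lengths and the demo signal are illustrative, not the paper's settings):

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    """STA/LTA trace: ratio of short-term to long-term moving averages
    of signal power, computed with cumulative sums."""
    power = np.asarray(signal, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # windows ending at each sample
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    # Align the two traces on windows ending at the same samples.
    m = min(len(sta), len(lta))
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)

# Demo: low-level noise followed by a burst; the ratio jumps at the onset.
sig = np.concatenate([np.full(100, 0.1), np.full(20, 5.0)])
r = sta_lta(sig, n_sta=5, n_lta=50)
```

During steady noise the ratio sits near 1; at the burst onset the short window fills with high-power samples well before the long window does, so the ratio rises sharply, which is the trigger behavior detectors threshold on.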


2020 ◽  
Vol 35 (3) ◽  
pp. 1067-1080
Author(s):  
Michael Foley ◽  
Nicholas Loveday

Abstract We compare single-valued forecasts from a consensus of numerical weather prediction models to forecasts from a single model across a range of user decision thresholds and sensitivities, using the relative economic value framework, and present this comparison in a new graphical format. With the help of a simple linear error model, we obtain theoretical results and perform synthetic calculations to gain insights into how the results relate to the characteristics of the different forecast systems. We find that multimodel consensus forecasts are more beneficial for users interested in decisions near the climatological mean, due to their reduced spread of errors compared to the constituent models. Single model forecasts may present greater benefit for users sensitive to extreme events if the forecasts have smaller conditional biases than the consensus forecasts and hence better resolution of such events. The results support use of consensus averaging approaches for single-valued forecast services in typical conditions. However, it is hard to cater for all user sensitivities in more extreme conditions. This underscores the importance of providing probability-based services for unusual conditions.
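The relative economic value framework mentioned above scores a forecast system against climatology for a user who pays cost C to protect against a potential loss L. A minimal sketch of the standard static cost-loss formulation (the notation is the usual one from that literature, not necessarily the paper's):

```python
def relative_value(hit_rate, false_alarm_rate, base_rate, cost_loss):
    """Relative economic value V = (E_clim - E_fcst) / (E_clim - E_perf):
    1 for a perfect forecast, 0 when no better than climatology."""
    s, a = base_rate, cost_loss
    # Expected expense per unit loss when acting on the forecast.
    e_forecast = a * (hit_rate * s + false_alarm_rate * (1 - s)) + (1 - hit_rate) * s
    e_climate = min(a, s)      # always protect, or never protect
    e_perfect = a * s          # protect exactly when the event occurs
    denom = e_climate - e_perfect
    return (e_climate - e_forecast) / denom if denom else float("nan")

v_perfect = relative_value(1.0, 0.0, base_rate=0.2, cost_loss=0.1)
```

Plotting V against the cost-loss ratio for the consensus and single-model systems is what exposes the trade-off the abstract describes: consensus forecasts win near the climatological mean, while a sharper single model can win for users sensitive to extremes.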


2009 ◽  
Vol 24 (5) ◽  
pp. 1401-1415 ◽  
Author(s):  
Elizabeth E. Ebert ◽  
William A. Gallus

Abstract The contiguous rain area (CRA) method for spatial forecast verification is a features-based approach that evaluates the properties of forecast rain systems, namely, their location, size, intensity, and finescale pattern. It is one of many recently developed spatial verification approaches that are being evaluated as part of a Spatial Forecast Verification Methods Intercomparison Project. To better understand the strengths and weaknesses of the CRA method, it has been tested here on a set of idealized geometric and perturbed forecasts with known errors, as well as nine precipitation forecasts from three high-resolution numerical weather prediction models. The CRA method was able to identify the known errors for the geometric forecasts, but only after a modification was introduced to allow nonoverlapping forecast and observed features to be matched. For the perturbed cases in which a radar rain field was spatially translated and amplified to simulate forecast errors, the CRA method also reproduced the known errors except when a high-intensity threshold was used to define the CRA (≥10 mm h⁻¹) and a large translation error was imposed (>200 km). The decomposition of total error into displacement, volume, and pattern components reflected the source of the error almost all of the time when a mean squared error formulation was used, but not necessarily when a correlation-based formulation was used. When applied to real forecasts, the CRA method gave similar results whether the best-fit criterion for matching forecast and observed features was minimization of the mean squared error or maximization of the correlation coefficient. The diagnosed displacement error was somewhat sensitive to the choice of search distance.
Of the many diagnostics produced by this method, the errors in the mean and peak rain rate between the forecast and observed features showed the best correspondence with subjective evaluations of the forecasts, while the spatial correlation coefficient (after matching) did not reflect the subjective judgments.
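The mean-squared-error decomposition described above can be sketched directly from its definition: displacement error is the MSE reduction achieved by the best-fit shift, volume error is the squared bias of the shifted field, and pattern error is the remainder. The best-fit shifted field is taken as given here; finding it is a separate search step over candidate translations.

```python
import numpy as np

def cra_decomposition(forecast, observed, shifted_forecast):
    """CRA-style MSE decomposition (a sketch): total error split into
    displacement, volume, and pattern components, given the best-fit
    shifted forecast field over the matched feature."""
    mse_total = np.mean((forecast - observed) ** 2)
    mse_shifted = np.mean((shifted_forecast - observed) ** 2)
    mse_displacement = mse_total - mse_shifted   # removed by shifting
    mse_volume = (shifted_forecast.mean() - observed.mean()) ** 2
    mse_pattern = mse_shifted - mse_volume       # residual after bias removal
    return mse_displacement, mse_volume, mse_pattern

# Demo: a displaced forecast whose best-fit shift leaves a pure bias,
# so the pattern component vanishes by construction.
obs = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
fc_shifted = obs + 0.5
fc = np.array([0.5, 0.5, 1.5, 2.5, 1.5])
d, v, p = cra_decomposition(fc, obs, fc_shifted)
```

The three components sum to the total MSE by construction, which is what makes the decomposition useful for attributing error to displacement versus amplitude versus structure.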

