Probabilistic Precipitation Forecast Postprocessing Using Quantile Mapping and Rank-Weighted Best-Member Dressing

2018 ◽  
Vol 146 (12) ◽  
pp. 4079-4098 ◽  
Author(s):  
Thomas M. Hamill ◽  
Michael Scheuerer

Abstract. Hamill et al. described a multimodel ensemble precipitation postprocessing algorithm that is used operationally by the U.S. National Weather Service (NWS). This article describes further changes that produce improved, reliable, and skillful probabilistic quantitative precipitation forecasts (PQPFs) for single or multimodel prediction systems. For multimodel systems, final probabilities are produced through the linear combination of PQPFs from the constituent models. The new methodology is applied to each prediction system. Prior to adjustment of the forecasts, parametric cumulative distribution functions (CDFs) of model and analyzed climatologies are generated using the previous 60 days’ forecasts and analyses and supplemental locations. The CDFs, which can be stored with minimal disk space, are then used for quantile mapping to correct state-dependent bias for each member. In this stage, the ensemble is also enlarged using a stencil of forecast values from the 5 × 5 surrounding grid points. Different weights and dressing distributions are assigned to the sorted, quantile-mapped members, with generally larger weights for outlying members and broader dressing distributions for members with heavier precipitation. Probability distributions are generated from the weighted sum of the dressing distributions. The NWS Global Ensemble Forecast System (GEFS), the Canadian Meteorological Centre (CMC) global ensemble, and the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast data are postprocessed for April–June 2016. Postprocessed forecasts from single prediction systems are generally reliable and skillful. Multimodel PQPFs are roughly as skillful as those from the ECMWF system alone. Postprocessed guidance was generally more skillful than guidance using the gamma distribution approach of Scheuerer and Hamill, with coefficients generated from data pooled across the United States.
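The core quantile-mapping step described above can be sketched in a few lines: each forecast value is passed through the forecast-climatology CDF and then through the inverse of the analyzed-climatology CDF, so that the forecast inherits the analyzed climate at the same cumulative probability. The gamma shape and scale values below are illustrative placeholders, not the paper's fitted climatologies.

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical parametric climatologies; shape/scale are made-up values,
# not the CDFs fitted from 60 days of forecasts and analyses.
fcst_clim = gamma(a=0.6, scale=8.0)   # forecast (model) climatology
anal_clim = gamma(a=0.5, scale=10.0)  # analyzed climatology

def quantile_map(x):
    """Replace each forecast value with the analyzed-climatology value
    at the same cumulative probability (quantile mapping)."""
    return anal_clim.ppf(fcst_clim.cdf(x))

members = np.array([0.0, 1.2, 3.5, 7.8, 15.0])  # raw ensemble precip (mm)
corrected = quantile_map(members)
```

Because both CDFs are monotone, the mapping preserves the rank order of the sorted members that the dressing step then weights.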


Author(s):  
Q. Liu ◽  
L. S Chiu ◽  
X. Hao

The abundance or lack of rainfall affects people’s lives and activities. As a major component of the global hydrological cycle (Chokngamwong & Chiu, 2007), accurate representations of rainfall at various spatial and temporal scales are crucial for many decision-making processes. Climate models show a warmer and wetter climate due to increases in greenhouse gases (GHGs). However, the models’ resolutions are often too coarse to be directly applicable at local scales that are useful for mitigation purposes. Hence disaggregation (downscaling) procedures are needed to transfer the coarse-scale products to higher spatial and temporal resolutions. The aim of this paper is to examine the changes in the statistical parameters of rainfall at various spatial and temporal resolutions. The TRMM Multi-satellite Precipitation Analysis (TMPA) 0.25-degree, 3-hourly gridded rainfall data for one summer are aggregated to 0.5-, 1.0-, 2.0-, and 2.5-degree and to 6-, 12-, and 24-hourly, pentad (five day), and monthly resolutions. The probability density functions (PDFs) and cumulative distribution functions (CDFs) of rain amount at these resolutions are computed and modeled as a mixed distribution. Parameters of the PDFs are compared using the Kolmogorov–Smirnov (KS) test, both for the mixed and the marginal distributions. These distributions are shown to be distinct. The marginal distributions are fitted with lognormal and gamma distributions, and the gamma distributions are found to fit much better than the lognormal.
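The mixed-distribution model and the gamma-versus-lognormal comparison can be sketched as follows on synthetic data (the dry fraction and gamma parameters are invented stand-ins for TMPA grid-box rainfall, not values from the paper):

```python
import numpy as np
from scipy import stats

# Synthetic grid-box rain amounts: a point mass at zero (dry occurrences)
# mixed with a gamma-distributed wet component.
rng = np.random.default_rng(42)
n = 5000
dry = rng.random(n) < 0.6
rain = np.where(dry, 0.0, rng.gamma(shape=0.7, scale=5.0, size=n))

p_dry = np.mean(rain == 0.0)   # estimated point mass of the mixed distribution
wet = rain[rain > 0.0]         # marginal (conditional wet-amount) sample

# Fit the marginal distribution with gamma and lognormal candidates
a, loc_g, scale_g = stats.gamma.fit(wet, floc=0)
s, loc_l, scale_l = stats.lognorm.fit(wet, floc=0)

# Kolmogorov-Smirnov statistic: smaller means a closer fit
ks_gamma = stats.kstest(wet, stats.gamma(a, loc_g, scale_g).cdf).statistic
ks_logn = stats.kstest(wet, stats.lognorm(s, loc_l, scale_l).cdf).statistic
```

With gamma-generated wet amounts the gamma fit yields the smaller KS statistic, mirroring the paper's finding that the gamma marginal fits better than the lognormal.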



2020 ◽  
Vol 12 (13) ◽  
pp. 2102
Author(s):  
Pari-Sima Katiraie-Boroujerdy ◽  
Matin Rahnamay Naeini ◽  
Ata Akbari Asanjan ◽  
Ali Chavoshian ◽  
Kuo-lin Hsu ◽  
...  

High-resolution real-time satellite-based precipitation estimation datasets can play an essential role in flood forecasting and risk analysis of infrastructure. This is particularly true for extensive deserts or mountainous areas with sparse rain gauges, such as Iran. However, there are discrepancies between these satellite-based estimates and ground measurements, and it is necessary to apply adjustment methods to reduce systematic bias in these products. In this study, we apply a quantile mapping method with gauge information to reduce the systematic error of the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). Given the availability and quality of the ground-based measurements, we divide Iran into seven climate regions to increase the sample size for generating cumulative probability distributions within each region. The cumulative distribution functions (CDFs) are then employed with a quantile mapping 0.6° × 0.6° filter to adjust the values of PERSIANN-CCS. We use eight years (2009–2016) of historical data to calibrate our method, generating nonparametric cumulative distribution functions of ground-based measurements and satellite estimates for each climate region, and two years (2017–2018) of additional data to validate our approach. The results show that the bias correction approach improves PERSIANN-CCS data aggregated to monthly, seasonal, and annual scales for both the calibration and validation periods. The areal averages of the annual bias and annual root mean square error are reduced by 98% and 56% during the calibration and validation periods, respectively. Furthermore, the averages of the bias and root mean square error of the monthly time series decrease by 96% and 26% during the calibration and validation periods, respectively. Bias correction remains limited in the region south of the Caspian Sea because of shortcomings of the satellite-based products in recognizing orographic clouds.
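Unlike the parametric variant, the nonparametric quantile mapping used here works directly from empirical CDFs pooled within a climate region. A minimal sketch, with invented gamma-distributed "gauge" and "satellite" training samples (the 30% overestimation factor is illustrative only):

```python
import numpy as np

def empirical_qm(x, sat_train, gauge_train):
    """Nonparametric quantile mapping: look up each satellite value's
    empirical quantile in the satellite training CDF, then read off the
    gauge value at that same quantile."""
    q = np.searchsorted(np.sort(sat_train), x, side="right") / len(sat_train)
    return np.quantile(np.sort(gauge_train), np.clip(q, 0.0, 1.0))

# Synthetic region-pooled training data: the "satellite" systematically
# overestimates the "gauge" amounts by 30%.
rng = np.random.default_rng(0)
gauge_train = rng.gamma(0.5, 6.0, size=3000)
sat_train = 1.3 * rng.gamma(0.5, 6.0, size=3000)

sat_new = 1.3 * rng.gamma(0.5, 6.0, size=1000)  # independent "validation" data
adjusted = empirical_qm(sat_new, sat_train, gauge_train)

bias_before = sat_new.mean() - gauge_train.mean()
bias_after = adjusted.mean() - gauge_train.mean()
```

Mapping through the two empirical CDFs removes most of the systematic overestimation, which is the mechanism behind the bias reductions reported in the abstract.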



2014 ◽  
Vol 29 (2) ◽  
Author(s):  
Amrutha Buddana ◽  
Tomasz J. Kozubowski

Abstract. We review several common discretization schemes and study a particular class of power-tail probability distributions on the integers, obtained by discretizing the continuous Pareto II (Lomax) distribution through one of them. Our results include expressions for the density and cumulative distribution functions, probability generating function, moments and related parameters, stability and divisibility properties, stochastic representations, and limiting distributions of random sums with a discrete-Pareto number of terms. We also briefly discuss issues of simulation and estimation and extensions to the multivariate setting.
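One of the common discretization schemes reviewed here assigns to each integer the probability mass the continuous CDF places on the unit interval above it, P(X = k) = F(k + 1) - F(k). A sketch for the Lomax case (the shape value 2.5 is arbitrary; the survival function is used for numerical stability, which is an implementation choice, not the paper's):

```python
import numpy as np
from scipy.stats import lomax

def discrete_pareto_pmf(k, shape, scale=1.0):
    """PMF obtained by discretizing the continuous Pareto II (Lomax) CDF F
    via P(X = k) = F(k + 1) - F(k), k = 0, 1, 2, ...  Computed from the
    survival function sf = 1 - F to avoid cancellation near F = 1."""
    k = np.asarray(k, dtype=float)
    return lomax.sf(k, shape, scale=scale) - lomax.sf(k + 1.0, shape, scale=scale)

k = np.arange(0, 10000)
pmf = discrete_pareto_pmf(k, shape=2.5)
```

The resulting pmf is strictly decreasing and inherits the power tail of the parent Lomax distribution, pmf(k) ~ k^-(shape + 1).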



2018 ◽  
Author(s):  
David M. Hyman ◽  
Andrea Bevilacqua ◽  
Marcus I. Bursik

Abstract. The study of volcanic mass flow hazards in a probabilistic framework centers around systematic experimental numerical modelling of the hazardous phenomenon and the subsequent generation and interpretation of a probabilistic hazard map (PHM). For a given volcanic flow (e.g., lava flow, lahar, pyroclastic flow, etc.), the PHM is typically interpreted as the point-wise probability of flow material inundation. In the current work, we present new methods for calculating spatial representations of the mean, standard deviation, median, and modal locations of the hazard's boundary as ensembles of many deterministic runs of a physical model. By formalizing its generation and properties, we show that a PHM may be used to construct these statistical measures of the hazard boundary which have been unrecognized in previous probabilistic hazard analyses. Our formalism shows that a typical PHM for a volcanic mass flow not only gives the point-wise inundation probability, but also represents a set of cumulative distribution functions for the location of the inundation boundary with a corresponding set of probability density functions. These distributions run over curves of steepest ascent on the PHM. Consequently, 2D space curves can be constructed on the map which represent the mean, median and modal locations of the likely inundation boundary. These curves give well-defined answers to the question of the likely boundary location of the area impacted by the hazard. Additionally, methods of calculation for higher moments including the standard deviation are presented which take the form of map regions surrounding the mean boundary location. These measures of central tendency and variance add significant value to spatial probabilistic hazard analyses, giving a new statistical description of the probability distributions underlying PHMs. 
The theory presented here may be used to construct improved hazard maps, which could prove useful for planning and emergency management purposes. This formalism also allows for application to simplified processes describable by analytic solutions. In that context, the connection between the PHM, its moments, and the underlying parameter variation is explicit, allowing for better source parameter estimation from natural data, yielding insights about natural controls on those parameters.
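The key observation above, that the PHM profile along a curve of steepest ascent is the survival function of the boundary location, can be illustrated in one dimension. The Gaussian PHM profile below is entirely hypothetical; it stands in for sampled inundation probabilities along one transect:

```python
import numpy as np

# Hypothetical PHM values along one curve of steepest ascent (a transect
# running from the far field toward the source).
s = np.linspace(0.0, 10.0, 1001)          # distance along the curve (km)
phm = np.exp(-0.5 * (s / 3.0) ** 2)       # P(flow material inundates point s)

# If inundation reaching s means the boundary lies at or beyond s, the PHM
# profile is the boundary's survival function, so its CDF is 1 - PHM.
cdf = 1.0 - phm
pdf = np.gradient(cdf, s)                 # boundary-location density

ds = s[1] - s[0]
mean_boundary = np.sum(0.5 * (phm[1:] + phm[:-1])) * ds  # E[B] = integral of survival fn
median_boundary = s[np.searchsorted(cdf, 0.5)]           # where PHM crosses 0.5
mode_boundary = s[np.argmax(pdf)]                        # most likely boundary location
```

Repeating this along every curve of steepest ascent traces out the 2D mean, median, and modal boundary curves described in the abstract.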



2017 ◽  
Vol 32 (1) ◽  
pp. 117-139 ◽  
Author(s):  
Sanjib Sharma ◽  
Ridwan Siddique ◽  
Nicholas Balderas ◽  
Jose D. Fuentes ◽  
Seann Reed ◽  
...  

Abstract. The quality of ensemble precipitation forecasts across the eastern United States is investigated, specifically, version 2 of the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System Reforecast (GEFSRv2) and the Short Range Ensemble Forecast (SREF) system, as well as NCEP’s Weather Prediction Center probabilistic quantitative precipitation forecast (WPC-PQPF) guidance. The forecasts are verified using multisensor precipitation estimates and various metrics conditioned upon seasonality, precipitation threshold, lead time, and spatial aggregation scale. The forecasts are verified over the geographic domain of each of the four eastern River Forecast Centers (RFCs) in the United States by considering first 1) all three forecast sources, using a common period of analysis (2012–13) for lead times from 1 to 3 days, and then 2) GEFSRv2 alone, using a longer period (2004–13) and lead times from 1 to 16 days. The verification results indicate that, across the eastern United States, precipitation forecast bias decreases and the skill and reliability improve as the spatial aggregation scale increases; however, all the forecasts exhibit some underforecasting bias. The skill of the forecasts is appreciably better in the cool season than in the warm season. The WPC-PQPFs tend to be superior, in terms of the correlation coefficient, relative mean error, reliability, and forecast skill scores, to both GEFSRv2 and SREF, but the performance varies with the RFC and lead time. Based on GEFSRv2, medium-range precipitation forecasts tend to have skill up to approximately day 7 relative to sampled climatology.
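Skill "relative to sampled climatology," as used in the last sentence, is conventionally measured with the Brier skill score for threshold-exceedance probabilities. A self-contained sketch on synthetic data (the event frequency and forecast model are invented, not the study's verification data):

```python
import numpy as np

def brier_score(p, o):
    """Brier score of probability forecasts p against binary outcomes o
    (lower is better; 0 is perfect)."""
    return np.mean((p - o) ** 2)

def brier_skill_score(p, o):
    """Skill relative to the sampled-climatology forecast: 1 = perfect,
    0 = no better than climatology, negative = worse."""
    clim = np.full_like(p, o.mean())
    return 1.0 - brier_score(p, o) / brier_score(clim, o)

# Synthetic exceedance verification: the event occurs ~20% of the time and
# the forecast probabilities carry real information about it.
rng = np.random.default_rng(1)
obs = (rng.random(2000) < 0.2).astype(float)
fcst = np.clip(0.7 * obs + 0.3 * rng.random(2000), 0.0, 1.0)

bss = brier_skill_score(fcst, obs)
```

A forecast "has skill up to day 7" in this sense when its Brier skill score at that lead time remains above zero.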



Author(s):  
Michael T. Tong

A probabilistic approach is described for aeropropulsion system assessment. To demonstrate this approach, the technical performance of a wave rotor-enhanced gas turbine engine (i.e. engine net thrust, specific fuel consumption, and engine weight) is assessed. The assessment accounts for the uncertainties in component efficiencies/flows and mechanical design variables, using probability distributions. The results are presented in the form of cumulative distribution functions (CDFs) and sensitivity analyses, and are compared with those from the traditional deterministic approach. The comparison shows that the probabilistic approach provides a more realistic and systematic way to assess an aeropropulsion system.
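The probabilistic assessment pattern described here, propagating input uncertainties through the performance model and reading results off CDFs, can be sketched with a Monte Carlo stand-in. The input distributions and the toy thrust expression below are illustrative only, not the wave-rotor engine model from the article:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Hypothetical uncertain design inputs, each given a probability distribution
eta_comp = rng.normal(0.88, 0.01, n)    # compressor efficiency
eta_turb = rng.normal(0.90, 0.01, n)    # turbine efficiency
mass_flow = rng.normal(100.0, 2.0, n)   # core mass flow (kg/s)

# Toy response surface standing in for the engine cycle code
net_thrust = 2.5 * mass_flow * eta_comp * eta_turb   # kN

# Empirical CDF of net thrust, and a probability read directly off it
thrust_sorted = np.sort(net_thrust)
cdf = np.arange(1, n + 1) / n
p_shortfall = np.mean(net_thrust < 190.0)   # P(thrust below a 190 kN target)
```

Where the deterministic approach returns a single thrust number, the CDF yields statements like "the probability of falling short of the target is p_shortfall," which is the added realism the comparison highlights.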



2019 ◽  
Vol 19 (7) ◽  
pp. 1347-1363 ◽  
Author(s):  
David M. Hyman ◽  
Andrea Bevilacqua ◽  
Marcus I. Bursik

Abstract. The study of volcanic flow hazards in a probabilistic framework centers around systematic experimental numerical modeling of the hazardous phenomenon and the subsequent generation and interpretation of a probabilistic hazard map (PHM). For a given volcanic flow (e.g., lava flow, lahar, pyroclastic flow, ash cloud), the PHM is typically interpreted as the point-wise probability of inundation by flow material. In the current work, we present new methods for calculating spatial representations of the mean, standard deviation, median, and modal locations of the hazard's boundary as ensembles of many deterministic runs of a physical model. By formalizing its generation and properties, we show that a PHM may be used to construct these statistical measures of the hazard boundary which have been unrecognized in previous probabilistic hazard analyses. Our formalism shows that a typical PHM for a volcanic flow not only gives the point-wise inundation probability, but also represents a set of cumulative distribution functions for the location of the inundation boundary with a corresponding set of probability density functions. These distributions run over curves of steepest probability gradient ascent on the PHM. Consequently, 2-D space curves can be constructed on the map which represent the mean, median, and modal locations of the likely inundation boundary. These curves give well-defined answers to the question of the likely boundary location of the area impacted by the hazard. Additionally, methods of calculation for higher moments including the standard deviation are presented, which take the form of map regions surrounding the mean boundary location. These measures of central tendency and variance add significant value to spatial probabilistic hazard analyses, giving a new statistical description of the probability distributions underlying PHMs. 
The theory presented here may be used to aid construction of improved hazard maps, which could prove useful for planning and emergency management purposes. This formalism also allows for application to simplified processes describable by analytic solutions. In that context, the connection between the PHM, its moments, and the underlying parameter variation is explicit, allowing for better source parameter estimation from natural data, yielding insights about natural controls on those parameters.



2020 ◽  
Vol 27 (3) ◽  
pp. 411-427
Author(s):  
Josh Jacobson ◽  
William Kleiber ◽  
Michael Scheuerer ◽  
Joseph Bellier

Abstract. Most available verification metrics for ensemble forecasts focus on univariate quantities. That is, they assess whether the ensemble provides an adequate representation of the forecast uncertainty about the quantity of interest at a particular location and time. For spatially indexed ensemble forecasts, however, it is also important that forecast fields reproduce the spatial structure of the observed field and represent the uncertainty about spatial properties such as the size of the area for which heavy precipitation, high winds, critical fire weather conditions, etc., are expected. In this article we study the properties of the fraction of threshold exceedance (FTE) histogram, a new diagnostic tool designed for spatially indexed ensemble forecast fields. Defined as the fraction of grid points where a prescribed threshold is exceeded, the FTE is calculated for the verification field and separately for each ensemble member. It yields a projection of a – possibly high-dimensional – multivariate quantity onto a univariate quantity that can be studied with standard tools like verification rank histograms. This projection is appealing since it reflects a spatial property that is intuitive and directly relevant in applications, though it is not obvious whether the FTE is sufficiently sensitive to misrepresentation of spatial structure in the ensemble. In a comprehensive simulation study we find that departures from uniformity of the FTE histograms can indeed be related to forecast ensembles with biased spatial variability and that these histograms detect shortcomings in the spatial structure of ensemble forecast fields that are not obvious by eye. For demonstration, FTE histograms are applied in the context of spatially downscaled ensemble precipitation forecast fields from NOAA's Global Ensemble Forecast System.
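The FTE diagnostic itself is simple to compute: one exceedance fraction per field, then a verification rank for the observed fraction among the member fractions. A sketch on synthetic precipitation fields (the grid size, gamma parameters, and threshold are illustrative, not the NOAA GEFS application):

```python
import numpy as np

def fte(field, threshold):
    """Fraction of threshold exceedance: share of grid points above threshold."""
    return np.mean(field > threshold)

def fte_rank(obs_field, member_fields, threshold):
    """Rank of the observed FTE among the member FTEs (1 .. n_members + 1);
    pooling these ranks over many cases yields the FTE histogram."""
    obs_fte = fte(obs_field, threshold)
    member_ftes = np.array([fte(m, threshold) for m in member_fields])
    return 1 + int(np.sum(member_ftes < obs_fte))

# Synthetic fields on a 50 x 50 grid: the ensemble is statistically
# consistent with the "observation," so pooled ranks should be near-uniform.
rng = np.random.default_rng(3)
obs = rng.gamma(0.6, 4.0, size=(50, 50))
members = [rng.gamma(0.6, 4.0, size=(50, 50)) for _ in range(10)]
rank = fte_rank(obs, members, threshold=5.0)
```

Each rank is the univariate projection of a 2500-dimensional field, which is exactly what makes the FTE histogram tractable with standard rank-histogram tools.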



2015 ◽  
Vol 28 (8) ◽  
pp. 3289-3310 ◽  
Author(s):  
Liang Ning ◽  
Emily E. Riddle ◽  
Raymond S. Bradley

Historical and projected future changes in climate extremes are examined by applying the bias-correction spatial disaggregation (BCSD) statistical downscaling method to five general circulation models (GCMs) from phase 5 of the Coupled Model Intercomparison Project (CMIP5). For this analysis, 11 extreme temperature and precipitation indices that are relevant across multiple disciplines (e.g., agriculture and conservation) are chosen. Over the historical period, the simulated means, variances, and cumulative distribution functions (CDFs) of each of the 11 indices are first compared with observations, and the performance of the downscaling method is quantitatively evaluated. For the future period, the ensemble average of the five GCM simulations points to more warm extremes, fewer cold extremes, and more precipitation extremes with greater intensities under all three scenarios. The changes are larger under higher emissions scenarios. The inter-GCM uncertainties and changes in probability distributions are also assessed. Changes in the probability distributions indicate an increase in both the number and interannual variability of future climate extreme events. The potential deficiencies of the method in projecting future extremes are also discussed.



Atmosphere ◽  
2019 ◽  
Vol 10 (2) ◽  
pp. 87
Author(s):  
Hanbin Zhang ◽  
Min Chen ◽  
Shuiyong Fan

The regional ensemble prediction system (REPS) of North China is currently under development at the Institute of Urban Meteorology, China Meteorological Administration, with initial condition perturbations provided by global ensemble dynamical downscaling. To improve the performance of the REPS, a comparison of two initial condition perturbation methods is conducted in this paper: (i) Breeding, which was specifically designed for the REPS, and (ii) Dynamical downscaling. Consecutive tests were implemented to evaluate the performances of both methods in the operational REPS environment. The perturbation characteristics were analyzed, and ensemble forecast verifications were conducted. Furthermore, a heavy precipitation case was investigated. The main conclusions are as follows: the Breeding perturbations were more powerful at small scales, while the downscaling perturbations were more powerful at large scales; the difference between the two perturbation types gradually decreased with the forecast lead time. The downscaling perturbation growth was more remarkable than that of the Breeding perturbations at short forecast lead times, while the perturbation magnitudes of both schemes were similar for long-range forecasts. However, the Breeding perturbations contained more abundant small-scale components than downscaling for the short-range forecasts. The ensemble forecast verification indicated a slightly better downscaling ensemble performance than that of the Breeding ensemble. A precipitation case study indicated that the Breeding ensemble performance was better than that of downscaling, particularly in terms of location and strength of the precipitation forecast.
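The breeding method compared above follows a standard cycle: run the model from a control and a perturbed state, then rescale the grown difference (the bred vector) back to a prescribed size before the next cycle. A toy sketch with a linear map standing in for the forecast model (all names and numbers below are illustrative, not the REPS configuration):

```python
import numpy as np

def breed_step(model, control, perturbed, target_norm):
    """One breeding cycle: evolve the control and perturbed states, then
    rescale the grown difference back to the target perturbation size."""
    ctrl_next = model(control)
    diff = model(perturbed) - ctrl_next
    diff *= target_norm / np.linalg.norm(diff)   # rescale the bred vector
    return ctrl_next, ctrl_next + diff

# Toy linear "model": amplifies some directions more than others, which is
# the structure breeding is designed to pick out.
rng = np.random.default_rng(5)
A = np.eye(8) + 0.3 * rng.standard_normal((8, 8))
model = lambda x: A @ x

ctrl = rng.standard_normal(8)
pert = ctrl + 0.01 * rng.standard_normal(8)
for _ in range(10):
    ctrl, pert = breed_step(model, ctrl, pert, target_norm=0.01)
bred_vector = pert - ctrl
```

Because rescaling preserves direction while capping amplitude, repeated cycles rotate the bred vector toward the fastest-growing (small-scale, flow-dependent) errors, consistent with the small-scale advantage of the breeding perturbations reported here.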


