On the Uncertainty and Changeability of the Estimates of Seasonal Maximum Flows

Water, 2020, Vol. 12 (3), pp. 704. Author(s): Iwona Markiewicz, Ewa Bogdanowicz, Krzysztof Kochanek

A classical approach to flood frequency modeling is based on choosing the probability distribution that best describes the analyzed series of annual or seasonal maximum flows. In the paper, we discuss the two main problems, the uncertainty and instability of the upper quantile estimates, which serve as the design values. Ways to mitigate these problems are proposed and illustrated with seasonal maximum flows at the Proszówki gauging station on the Raba River. The inverse Gaussian and generalized exponential distributions, which are not commonly used for flood frequency modeling, were found to be suitable for Polish data of seasonal peak flows. At the same time, the heavy-tailed distributions, which are currently recommended for modeling extreme hydrological phenomena, were found to be inappropriate. Applying the classical approach of selecting the model best fitted to the peak flow data, significant shifts in the upper quantile estimates were often observed when a new observation was added to the data series. The method of aggregation proposed by the authors mitigates this problem: eliminating distributions that are poorly fitted to the data series increases the stability of the upper quantile estimates over time.
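
A minimal sketch of such a screen-and-aggregate scheme, assuming maximum-likelihood fits, a Kolmogorov–Smirnov screen for eliminating poorly fitted models, and a plain average as the aggregation rule (the candidate set and weights here are illustrative, not the authors' exact choices):

```python
import numpy as np
from scipy import stats

def aggregated_upper_quantile(peaks, p=0.99, alpha=0.05):
    # Candidate models; the paper's own set (e.g. its three-parameter
    # inverse Gaussian and generalized exponential) is only partly
    # available in scipy, so these stand in for illustration.
    candidates = [stats.invgauss, stats.gamma, stats.lognorm, stats.pearson3]
    quantiles = []
    for dist in candidates:
        params = dist.fit(peaks)                   # maximum-likelihood fit
        ks = stats.kstest(peaks, dist.cdf, args=params)
        if ks.pvalue >= alpha:                     # keep adequately fitted models only
            quantiles.append(dist.ppf(p, *params))
    return np.mean(quantiles) if quantiles else np.nan

# Example: a synthetic 60-year record and its aggregated 1% AEP estimate
flows = stats.gamma.rvs(3.0, scale=50.0, size=60, random_state=0)
print(aggregated_upper_quantile(flows))
```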

2011, Vol. 15 (3), pp. 819-830. Author(s): S. Das, C. Cunnane

Abstract. Flood frequency analysis is a necessary and important part of flood risk assessment and management studies. Regional flood frequency methods, in which flood data from groups of catchments are pooled together in order to enhance the precision of flood estimates at project locations, are an accepted part of such studies. This enhancement of precision rests on the assumption that the catchments pooled together are homogeneous in their flood-producing properties. If homogeneity is assured, a homogeneous pooling group of sites leads to a reduction in the error of quantile estimates relative to estimators based on a single at-site data series alone. Homogeneous pooling groups are selected by using a previously nominated rule, and this paper examines how effective one such rule is in selecting homogeneous groups. A study based on annual maximum series from 85 Irish gauging stations examines how successful a common method of identifying pooling group membership is in selecting groups that actually are homogeneous. Each station has its own unique pooling group, selected by use of a Euclidean distance measure in catchment descriptor space, commonly denoted dij, with a minimum of 500 station-years of data in the pooling group. It was found that dij could be effectively defined in terms of catchment area, mean rainfall and baseflow index. The study then investigated how effective this selection method is in forming groups of catchments that are actually homogeneous, as indicated by their L-CV values. The sampling distribution of L-CV (t2) in each pooling group and the 95% confidence limits about the pooled estimate of t2 are obtained by simulation. The t2 values of the selected group members are compared with these confidence limits both graphically and numerically. Of the 85 stations, only 1 station's pooling group members have all their t2 values within the confidence limits, while 7, 33 and 44 of them have 1, 2, or 3 or more t2 values outside the confidence limits, respectively. The outcomes are also compared with the heterogeneity measures H1 and H2. The H1 values show an upward trend with the range of t2 values in the pooling group, whereas the H2 values do not show any such dependency. A selection of 27 pooling groups found to be heterogeneous was further examined with the help of box plots of catchment descriptor values, and one particular case is considered in detail. Overall, the results show that even with a carefully considered selection procedure, it is not certain that perfectly homogeneous pooling groups will be identified.
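
The selection rule described above can be sketched as follows, assuming standardized descriptors (catchment area, mean rainfall, baseflow index) and growing the group outward from the target site until 500 station-years are accumulated; variable names are illustrative:

```python
import numpy as np

def pooling_group(descriptors, record_years, target, min_station_years=500):
    """descriptors: (n_sites, 3) array of area, mean rainfall, baseflow index;
    record_years: record length per site; target: index of the subject site."""
    # Standardize so no single descriptor scale dominates d_ij
    z = (descriptors - descriptors.mean(axis=0)) / descriptors.std(axis=0)
    d = np.linalg.norm(z - z[target], axis=1)      # Euclidean d_ij to the target
    group, years = [], 0
    for j in np.argsort(d):                        # nearest sites first
        group.append(j)
        years += record_years[j]
        if years >= min_station_years:             # stop once 500 station-years reached
            return group
    return group                                   # all sites used, target not met
```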


2011, Vol. 42 (2-3), pp. 171-192. Author(s): Witold G. Strupczewski, Krzysztof Kochanek, Iwona Markiewicz, Ewa Bogdanowicz, Stanislaw Weglarczyk, et al.

This study discusses the application of heavy-tailed distributions to the modelling of annual peak flows in general, and of Polish data sets in particular. One- and two-shape-parameter heavy-tailed distributions are obtained by transformations of random variables. The correct selection of a flood frequency model, with emphasis on discriminating heavy-tailed distributions, is then discussed. If a distribution is wrongly assumed, the resulting error in the upper quantile depends on the method of parameter estimation, and is shown analytically for three methods. Asymptotic and sampling values (obtained by simulation) were assessed for the pair log-Gumbel (LG) as a false distribution and log-normal (LN) as the true distribution. Comparing the upper quantiles of various distributions with the same values of moments, it is found that heavy-tailed distributions do not consistently provide higher flood frequency estimates than soft-tailed distributions do. Based on L-moment ratio diagrams and the test of linearity on log–log plots, it is concluded that Polish datasets of annual peak flows should be modelled using soft-tailed distributions, such as the three-parameter inverse Gaussian, rather than heavy-tailed distributions.
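
The log–log linearity test mentioned above can be sketched as follows; for a Pareto-like heavy tail, the log survival probability is roughly linear in the log flow. The plotting position and tail fraction are illustrative choices:

```python
import numpy as np

def loglog_tail_linearity(peaks, tail_fraction=0.3):
    x = np.sort(np.asarray(peaks, dtype=float))
    n = len(x)
    surv = 1.0 - np.arange(1, n + 1) / (n + 1.0)     # Weibull plotting position
    k = max(int(tail_fraction * n), 3)               # largest observations only
    lx, ls = np.log(x[-k:]), np.log(surv[-k:])
    slope, intercept = np.polyfit(lx, ls, 1)         # straight-line fit in log-log space
    resid = ls - (slope * lx + intercept)
    r2 = 1.0 - resid.var() / ls.var()
    return slope, r2        # near-linear (high r2) suggests a heavy, power-law tail
```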


1994, Vol. 21 (6), pp. 1074-1080. Author(s): J. Llamas, C. Diaz Delgado, M.-L. Lavertu

In this paper, an improved probabilistic method for flood analysis using the probable maximum flood, the beta function, and orthogonal Jacobi polynomials is proposed. The shape of the beta function depends on the sample's characteristics and the bounds of the phenomenon. In addition, a series of Jacobi polynomials has been used to improve the beta function, increasing its degree of convergence toward the true flood probability density function. This mathematical model has been tested using a sample of 1000 generated beta-distributed random values. Finally, some practical applications with real data series from major rivers in Quebec have been performed; the model solutions for these rivers showed the accuracy of this new method in flood frequency estimation. Key words: probable maximum flood, beta function, orthogonal polynomials, distribution function, flood frequency estimation, data generation, convergence.
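
A compact sketch of the construction under simplifying assumptions: a beta density is fitted on known bounds and then corrected by a short Jacobi-polynomial series orthogonal with respect to the fitted beta weight. The fitting method, bounds handling and series length are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.special import eval_jacobi

def beta_jacobi_density(data, lower, upper, n_terms=4):
    t = (np.asarray(data, dtype=float) - lower) / (upper - lower)  # map to (0, 1)
    p, q, _, _ = stats.beta.fit(t, floc=0, fscale=1)       # base beta model
    x = 2.0 * t - 1.0                                      # Jacobi polynomials live on (-1, 1)
    a, b = q - 1.0, p - 1.0                                # Jacobi weight matching the beta
    grid = np.linspace(-1 + 1e-6, 1 - 1e-6, 4001)
    w = stats.beta.pdf((grid + 1) / 2, p, q) / 2           # fitted beta density on (-1, 1)
    coef = []
    for n in range(n_terms + 1):
        pn = eval_jacobi(n, a, b, grid)
        hn = np.trapz(pn * pn * w, grid)                   # squared norm under the weight
        coef.append(eval_jacobi(n, a, b, x).mean() / hn)   # moment-matched coefficient

    def pdf(y):
        t_y = (np.asarray(y) - lower) / (upper - lower)
        z = 2.0 * t_y - 1.0
        series = sum(c * eval_jacobi(n, a, b, z) for n, c in enumerate(coef))
        return stats.beta.pdf(t_y, p, q) * series / (upper - lower)
    return pdf
```

The leading coefficient equals one (the base beta density integrates to unity), so the series acts purely as a correction that raises the convergence of the estimate toward the empirical density.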


2016, Vol. 20 (12), pp. 4717-4729. Author(s): Martin Durocher, Fateh Chebana, Taha B. M. J. Ouarda

Abstract. This study investigates the use of hydrological information in regional flood frequency analysis (RFFA) to enforce desired properties for a group of gauged stations. Neighbourhoods are particular types of regions that are centred on target locations. A challenge for using neighbourhoods in RFFA is that hydrological information is not available at target locations and cannot be completely replaced by the available physiographical information. Instead of using the available physiographic characteristics to define the centre of a target location, this study proposes to introduce estimates of reference hydrological variables to ensure better homogeneity. These reference variables are related to the site characteristics through nonlinear relations obtained by projection pursuit regression, a nonparametric regression method. The resulting neighbourhoods are investigated in combination with commonly used regional models: the index-flood model and regression-based models. The complete approach is illustrated in a real-world case study with gauged sites from the southern part of the province of Québec, Canada, and is compared with traditional approaches such as the region of influence and canonical correlation analysis. The evaluation focuses on the neighbourhood properties as well as on prediction performance, with special attention devoted to problematic stations. Results show clear improvements in neighbourhood definitions and quantile estimates.
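
A schematic version of the neighbourhood construction follows, with a generic nonparametric regressor standing in for projection pursuit regression (scikit-learn offers no PPR implementation); names and the choice of k are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

def reference_variable_neighbourhood(X_gauged, y_ref, x_target, k=15):
    """X_gauged: site characteristics at gauged sites; y_ref: reference
    hydrological variables (e.g. at-site quantiles) at gauged sites;
    x_target: characteristics of the ungauged target site."""
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_gauged, y_ref)                 # characteristics -> reference variables
    # Position every site, gauged and target alike, in reference-variable space
    ref_gauged = model.predict(X_gauged).reshape(len(X_gauged), -1)
    ref_target = model.predict(np.atleast_2d(x_target)).reshape(1, -1)
    nn = NearestNeighbors(n_neighbors=k).fit(ref_gauged)
    _, idx = nn.kneighbors(ref_target)         # neighbourhood centred on the target
    return idx.ravel()
```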


2014, Vol. 18 (1), pp. 353-365. Author(s): U. Haberlandt, I. Radtke

Abstract. Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins, considering change and taking flood protection measures into account. There are several possible choices regarding precipitation input, discharge output and, consequently, the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets, and to propose the most suitable approach. Event-based and continuous observed hourly rainfall data, as well as disaggregated daily rainfall and stochastically generated hourly rainfall data, are used as input for the model. As output, short hourly and longer daily continuous flow time series, as well as probability distributions of annual maximum peak flow series, are employed. The performance of the strategies is evaluated using the resulting model parameter sets for continuous simulation of discharge in an independent validation period, and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated on a small sample of extreme values works quite well for the simulation of continuous time series of moderate length, but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is application to derived flood frequency analysis.
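
Strategy (III) can be sketched as an objective function that scores a parameter set by comparing simulated annual maxima, driven by stochastic rainfall, against the observed peak-flow distribution at matching plotting positions; the `run_model` callable is a hypothetical stand-in for an HEC-HMS run:

```python
import numpy as np

def flood_frequency_objective(params, stochastic_rainfall, obs_annual_max, run_model):
    """run_model: callable standing in for the hydrological simulation;
    it returns the simulated annual-maximum series for `params`."""
    sim_annual_max = run_model(params, stochastic_rainfall)
    # Weibull plotting positions for the observed flood frequency distribution
    obs = np.sort(np.asarray(obs_annual_max, dtype=float))
    prob = np.arange(1, len(obs) + 1) / (len(obs) + 1.0)
    # Empirical quantiles of the (typically longer) simulated series
    # read at the same non-exceedance probabilities
    sim_q = np.quantile(sim_annual_max, prob)
    return np.sqrt(np.mean((sim_q - obs) ** 2))    # RMSE between the two distributions
```

Minimizing this objective over the model parameters calibrates directly on the probability distribution of peak flows rather than on a flow time series.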


Biometrika, 2020, Vol. 107 (3), pp. 647-660. Author(s): H. Dehling, R. Fried, M. Wendler

Summary. We present a robust and nonparametric test for the presence of a changepoint in a time series, based on the two-sample Hodges–Lehmann estimator. We develop new limit theory for a class of statistics based on two-sample U-quantile processes in the case of short-range dependent observations. Using this theory, we derive the asymptotic distribution of our test statistic under the null hypothesis of a constant level. The proposed test shows better overall performance under normal, heavy-tailed and skewed distributions than several other modifications of the popular cumulative sums test based on U-statistics, one-sample U-quantiles or M-estimation. The new theory does not involve moment conditions, so any transform of the observed process can be used to test the stability of higher-order characteristics such as variability, skewness and kurtosis.
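
A bare-bones sketch of the scan statistic behind such a test: for each candidate split point, the two-sample Hodges–Lehmann estimator is the median of all pairwise differences between the two segments. The CUSUM-style weighting below is illustrative, and the long-run variance normalization needed for critical values is omitted:

```python
import numpy as np

def hodges_lehmann_scan(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_stat = None, -np.inf
    for k in range(1, n):
        # All pairwise differences x_j - x_i with i <= k < j
        diffs = np.subtract.outer(x[k:], x[:k]).ravel()
        hl = np.median(diffs)                      # two-sample Hodges-Lehmann estimator
        stat = (k * (n - k) / n ** 1.5) * abs(hl)  # CUSUM-style weighting (illustrative)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat                       # candidate changepoint and statistic
```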


Water, 2020, Vol. 12 (7), pp. 1867. Author(s): Chunlai Qu, Jing Li, Lei Yan, Pengtao Yan, Fang Cheng, et al.

Under changing environments, the most widely used non-stationary flood frequency analysis (NFFA) method is the generalized additive models for location, scale and shape (GAMLSS) model. However, the structure of the GAMLSS model is relatively complex owing to its large number of statistical parameters, and the relationship between statistical parameters and covariates is assumed to remain unchanged in the future, which may be unreasonable. In recent years, nonparametric methods have received increasing attention in the field of NFFA. Among them, the linear quantile regression (QR-L) model and the non-linear quantile regression model of cubic B-splines (QR-CB) have been introduced into NFFA studies because they need not specify statistical parameters or the relationship between statistical parameters and covariates. However, these two quantile regression models have difficulty estimating non-stationary design floods, since the trend of the established model must be extrapolated indefinitely to estimate the design flood. Moreover, the number of available observations becomes scarcer when estimating design values corresponding to higher return periods, leading to unreasonable and inaccurate design values. In this study, we propose a cubic B-spline-based GAMLSS model (GAMLSS-CB) for NFFA. In the GAMLSS-CB model, the relationship between statistical parameters and covariates is fitted by cubic B-splines within the GAMLSS framework. We also compare the performance of different non-stationary models, namely the QR-L, QR-CB, and GAMLSS-CB models. Finally, based on the optimal non-stationary model, non-stationary design flood values are estimated using the average design life level (ADLL) method. The annual maximum flood series of four stations in the Weihe River basin and the Pearl River basin are taken as examples. The results show that the GAMLSS-CB model displays the best performance compared with the QR-L and QR-CB models. Moreover, it is feasible to estimate design flood values based on the GAMLSS-CB model using the ADLL method, while the estimation of design floods based on the quantile regression models requires further study.
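
The GAMLSS-CB idea can be sketched for a single parameter: the location of a Gumbel model varies as a cubic B-spline of a covariate (here, time) and is fitted by maximum likelihood. The knot placement and the fixed-scale simplification are assumptions of this sketch, not the paper's full model:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def bspline_basis(x, knots, k=3):
    t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]   # clamped knot vector
    n_basis = len(t) - k - 1
    return np.column_stack([BSpline(t, np.eye(n_basis)[i], k)(x)
                            for i in range(n_basis)])

def fit_nonstationary_gumbel(years, ams, n_knots=4):
    B = bspline_basis(years, np.linspace(years.min(), years.max(), n_knots))

    def nll(theta):
        beta, log_scale = theta[:-1], theta[-1]
        mu, scale = B @ beta, np.exp(log_scale)
        z = (ams - mu) / scale
        return np.sum(np.log(scale) + z + np.exp(-z))   # Gumbel negative log-likelihood

    theta0 = np.r_[np.full(B.shape[1], ams.mean()), np.log(ams.std())]
    res = minimize(nll, theta0, method="Nelder-Mead", options={"maxiter": 5000})
    return B @ res.x[:-1], np.exp(res.x[-1])            # fitted mu(t) series and scale
```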


2016, Vol. 11 (2), pp. 373-383. Author(s): Majid Mirzaei, Mina Faghih, Tan Pei Ying, Ahmed El-Shafie, Yuk Feng Huang, et al.

Rapid growth in recent decades has changed engineering concepts about the approach to controlling storm water in cities. Over the past years, flood events have occurred more frequently in several tropical countries. In this study, the behavior of the Langat River in Malaysia was analyzed using the hydrodynamic modeling software HEC-RAS, developed by the Hydrologic Engineering Center of the U.S. Army Corps of Engineers, to simulate different water levels and flow rates corresponding to different return periods from the available database. The aim was to forecast peak flows based on rainfall data and the maximum rate of precipitation for different return periods in storms of different durations. The maximum flows for the different return periods were obtained from the Automated Geospatial Watershed Assessment tool, and the peak flows from extreme rainfall were applied to HEC-RAS to simulate different water levels and flow rates corresponding to different return periods. The water level along the river and its tributaries could then be analyzed for different flow conditions.
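
The return-period calculation implied here reduces to reading off the quantile with annual exceedance probability 1/T from a distribution fitted to annual maxima; the Gumbel choice below is illustrative:

```python
import numpy as np
from scipy import stats

def return_period_flow(annual_max, T):
    # Fit a Gumbel (EV1) model and invert it at non-exceedance probability 1 - 1/T
    loc, scale = stats.gumbel_r.fit(annual_max)
    return stats.gumbel_r.ppf(1.0 - 1.0 / np.asarray(T, dtype=float), loc, scale)

# e.g. design flows for the 10-, 50- and 100-year events:
# return_period_flow(flows, [10, 50, 100])
```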


2012. Author(s): Ani Shabri

Annual maximum (AM) flood series remain the most popular approach to flood frequency analysis. Peaks-over-threshold (POT) series have been used as an alternative to annual maximum series. One specific difficulty of the POT approach is the selection of the threshold level. In this study, the effect of raising the threshold of the POT series on the resulting estimates is investigated. The POT model, in which the number of peaks above the threshold follows a Poisson process and the peak magnitudes follow the generalized Pareto distribution (GPD), is discussed. Estimation of the GPD parameters by the method of probability-weighted moments (PWM) is formulated for known thresholds. A comparison of the efficiencies of the POT and AM models in estimating the upper tail of the distribution is made. The results show that when the threshold of the POT series can be fitted by the GPD with a Poisson process, the POT model is more efficient than the AM model in estimating the highest extreme values. Key words: peaks over threshold, Poisson process, generalized Pareto distribution, GEV, heavy-tailed distributions
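
A minimal sketch of the POT machinery described above: GPD parameters from probability-weighted moments (the Hosking–Wallis estimators) and a T-year return level using the Poisson rate of exceedances. The k > 0 case corresponds to a bounded (soft) upper tail:

```python
import numpy as np

def gpd_pwm_return_level(exceedances, threshold, peaks_per_year, T):
    x = np.sort(np.asarray(exceedances, dtype=float))  # excesses above the threshold
    n = len(x)
    b0 = x.mean()                                      # PWM a0 = E[X]
    b1 = np.sum(x * (n - np.arange(1, n + 1)) / (n - 1.0)) / n   # PWM a1 = E[X(1-F)]
    k = b0 / (b0 - 2.0 * b1) - 2.0                     # GPD shape (Hosking's convention)
    alpha = 2.0 * b0 * b1 / (b0 - 2.0 * b1)            # GPD scale
    # With lambda*T expected peaks over T years, one peak exceeds the T-year level
    return threshold + alpha / k * (1.0 - (peaks_per_year * T) ** (-k))
```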


1990, Vol. 17 (4), pp. 597-609. Author(s): K. C. Ander Chow, W. E. Watt

Single-station flood frequency analysis is an important element in hydrotechnical planning and design. In Canada, no single statistical distribution has been specified for floods; hence, the conventional approach is to select a distribution based on its fit to the observed sample. This selection is not straightforward owing to typically short record lengths and attendant sampling error, magnified influence of apparent outliers, and limited evidence of two populations. Nevertheless, experienced analysts confidently select a distribution for a station based only on a few heuristics. A knowledge-based expert system has been developed to emulate these expert heuristics. It can perform data analyses, suggest an appropriate distribution, detect outliers, and provide means to justify a design flood on physical grounds. If the sample is too small to give reliable quantile estimates, the system performs a Bayesian analysis to combine regional information with station-specific data. The system was calibrated and tested for 52 stations across Canada. Its performance was evaluated by comparing the distributions selected by experts with those given by the developed system. The results indicated that the system can perform at an expert level in the task of selecting distributions. Key words: flood frequency, expert system, single-station, fuzzy logic, inductive reasoning, production system.
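
A toy version of one heuristic such a system might encode: compute sample L-moment ratios and suggest the distribution whose theoretical (tau3, tau4) point lies nearest. The real system's rule base (outlier handling, two-population checks, Bayesian pooling of regional information) is far richer than this sketch:

```python
import numpy as np

def l_moment_ratios(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # Sample probability-weighted moments b0..b3
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l3 / l2, l4 / l2                       # (tau3, tau4)

# Theoretical (tau3, tau4) for a few two-parameter candidates
KNOWN = {"normal": (0.0, 0.1226), "Gumbel": (0.1699, 0.1504),
         "exponential": (1.0 / 3.0, 1.0 / 6.0)}

def suggest_distribution(x):
    t3, t4 = l_moment_ratios(x)
    return min(KNOWN, key=lambda d: np.hypot(t3 - KNOWN[d][0], t4 - KNOWN[d][1]))
```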

