A comparison of regional flood frequency estimation methods using a resampling method

1990 ◽  
Vol 26 (3) ◽  
pp. 415-424 ◽  
Author(s):  
Kenneth W. Potter ◽  
Dennis P. Lettenmaier

2008 ◽  
Vol 348 (1-2) ◽  
pp. 40-58 ◽  
Author(s):  
T.B.M.J. Ouarda ◽  
K.M. Bâ ◽  
C. Diaz-Delgado ◽  
A. Cârsteanu ◽  
K. Chokmani ◽  
...  

2001 ◽  
Vol 254 (1-4) ◽  
pp. 157-173 ◽  
Author(s):  
Taha B.M.J. Ouarda ◽  
Claude Girard ◽  
George S. Cavadias ◽  
Bernard Bobée

2012 ◽  
Vol 44 (1) ◽  
pp. 180-197 ◽  
Author(s):  
J. D. Miller ◽  
T. R. Kjeldsen ◽  
J. Hannaford ◽  
D. G. Morris

In November 2009, record-breaking rainfall resulted in severe, damaging flooding in Cumbria, in the north-west of England. This paper presents an analysis of the river flows and lake levels experienced during the event. Comparison with previous maxima shows the exceptional nature of this event, with new maximum flows being established at 17 river flow gauging stations, particularly on catchments influenced by lakes. The return periods of the flood peaks are estimated using the latest Flood Estimation Handbook statistical procedures. Results demonstrate that the event has had a considerable impact on estimates of flood frequency and associated uncertainty. Analysis of lake levels suggests that their record high levels reduced their attenuating effect, significantly affecting the timing and magnitude of downstream peaks. The peak flow estimate of 700 m³ s⁻¹ at Workington, the lowest station on the Derwent, was examined in the context of upstream inputs and was found to be plausible. The results of this study have important implications for the future development of flood frequency estimation methods for the UK. It is recommended that further research be undertaken on the role of abnormally elevated lake levels and that flood frequency estimation procedures in lake-influenced catchments be reviewed.
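To make the return-period estimation step concrete, the sketch below fits an extreme value distribution to an annual-maximum series and converts an observed peak into an annual exceedance probability. It is only an illustration, not the FEH procedure itself: the FEH statistical method pools data across sites and fits a generalised logistic distribution by L-moments, whereas here a single-site Gumbel (EV1) is fitted with scipy, and the AMAX values and the 450 m³ s⁻¹ peak are invented.

```python
import numpy as np
from scipy.stats import gumbel_r

# Synthetic annual-maximum (AMAX) series in m3/s -- invented for illustration.
amax = np.array([210., 180., 340., 265., 150., 300., 410., 230.,
                 190., 360., 280., 245., 175., 390., 320.])

loc, scale = gumbel_r.fit(amax)  # Gumbel (EV1) fitted by maximum likelihood

peak = 450.0                                    # an observed event peak, m3/s
aep = gumbel_r.sf(peak, loc=loc, scale=scale)   # annual exceedance probability
print(f"Return period of {peak:.0f} m3/s: {1.0 / aep:.0f} years")
```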


2016 ◽  
Vol 64 (4) ◽  
pp. 426-437 ◽  
Author(s):  
Mojca Šraj ◽  
Alberto Viglione ◽  
Juraj Parajka ◽  
Günter Blöschl

Abstract Substantial evidence shows that the frequency of hydrological extremes has been changing and is likely to continue to change in the near future. Non-stationary models for flood frequency analyses are one method of accounting for these changes in estimating design values. The objective of the present study is to compare four models in terms of goodness of fit, their uncertainties, the parameter estimation methods and the implications for estimating flood quantiles. Stationary and non-stationary models using the GEV distribution were considered, with parameters dependent on time and on annual precipitation. Furthermore, in order to study the influence of the parameter estimation approach on the results, the maximum likelihood estimation (MLE) and Bayesian Markov chain Monte Carlo (MCMC) methods were compared. The methods were tested for two gauging stations in Slovenia that exhibit significantly increasing trends in annual maximum (AM) discharge series. The comparison of the models suggests that the stationary model tends to underestimate flood quantiles relative to the non-stationary models in recent years. The model with annual precipitation as a covariate exhibits the best goodness-of-fit performance. For a 10% increase in annual precipitation, the 10-year flood increases by 8%. Use of the model for design purposes requires scenarios of future annual precipitation. It is argued that these may be obtained more reliably than scenarios of extreme event precipitation, which makes the proposed model more practically useful than alternative models.
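The sketch below illustrates the type of covariate model compared in this study: a GEV whose location parameter depends linearly on annual precipitation, fitted by maximum likelihood, with the 10-year flood then evaluated conditional on a given precipitation. The data, coefficient values and variable names are assumptions for demonstration, not the authors' code or results.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(0)
precip = rng.normal(1400, 150, size=40)              # annual precipitation, mm
q_amax = 50 + 0.1 * precip + rng.gumbel(0, 20, 40)   # synthetic AMAX discharge

def neg_log_lik(theta):
    """Negative log-likelihood of a GEV with location mu = b0 + b1 * precip."""
    b0, b1, log_scale, shape = theta
    loc = b0 + b1 * precip                # location varies with the covariate
    return -np.sum(genextreme.logpdf(q_amax, shape, loc=loc,
                                     scale=np.exp(log_scale)))

res = minimize(neg_log_lik, x0=[50.0, 0.1, np.log(20.0), 0.0],
               method="Nelder-Mead")
b0, b1, log_scale, shape = res.x

# 10-year flood (non-exceedance probability 0.9) for a given annual precipitation
q10 = genextreme.ppf(0.9, shape, loc=b0 + b1 * 1400.0, scale=np.exp(log_scale))
print(f"10-year flood at 1400 mm annual precipitation: {q10:.0f} m3/s")
```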


2010 ◽  
Vol 7 (4) ◽  
pp. 5099-5130 ◽  
Author(s):  
S. Das ◽  
C. Cunnane

Abstract. In regional flood frequency estimation, a homogeneous pooling group of sites leads to a reduction in the error of quantile estimators, which is the main aim of a regional flood frequency analysis. Examination of the homogeneity of regions/pooling groups is usually based on a statistic that relates to the formulation of a frequency distribution model, e.g. the coefficient of variation (Wiltshire, 1986; Fill and Stedinger, 1995) and/or skew coefficient, their L-moment equivalents (Chowdhury et al., 1991; Hosking and Wallis, 1997), or dimensionless quantiles such as the 10-yr event (Dalrymple, 1960; Lu and Stedinger, 1992). Hosking and Wallis (1993, 1997) proposed homogeneity tests based on L-moment ratios such as L-CV alone (H1) and L-CV & L-skewness jointly (H2), which were also recently investigated by Viglione et al. (2007). In this paper a study, based on annual maximum series obtained from 85 Irish gauging stations, examines how successful a common method of identifying pooling group membership is in selecting groups that actually are homogeneous. Each station has its own unique pooling group, selected by use of a Euclidean distance measure in catchment descriptor space, commonly denoted dij, with a minimum of 500 station-years of data in the pooling group, which satisfies the 5T rule (FEH, 1999, 3, p. 169) for the 100-yr quantile. It was found that dij could be effectively defined in terms of catchment area, mean rainfall and baseflow index. The sampling distribution of L-CV (t2) in each pooling group and the 95% confidence limits about the pooled estimate of t2 are obtained by simulation. The t2 values of the selected group members are compared with these confidence limits both graphically and numerically. Of the 85 stations, only one station's pooling group has all of its members' t2 values within the confidence limits, while 7, 33 and 44 of them have one, two, or three or more t2 values, respectively, outside the confidence limits. The outcomes are also compared with the heterogeneity measures H1 and H2. The H1 values show an upward trend with the range of t2 values in the pooling group, whereas the H2 values do not show any such dependency. A selection of 27 pooling groups found to be heterogeneous were further examined with the help of box-plots of catchment descriptor values, and one particular case is considered in detail. Overall the results show that even with a carefully considered selection procedure, it is not certain that perfectly homogeneous pooling groups are identified. As a compromise it is recommended that a group containing more than two values of t2 outside the confidence limits should not be considered homogeneous.
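The sketch below illustrates the two ingredients of the selection procedure described above: the sample L-CV (t2) computed from probability weighted moments, and a Euclidean distance dij in standardized catchment-descriptor space used to grow a pooling group until it holds at least 500 station-years (the 5T rule for the 100-yr quantile). The descriptor names and the synthetic data are illustrative assumptions, not the study's dataset.

```python
import numpy as np

def lcv(amax):
    """Sample L-CV, t2 = l2/l1, via probability weighted moments b0 and b1."""
    x = np.sort(np.asarray(amax, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n  # weights (i-1)/(n-1), i=1..n
    l1, l2 = b0, 2.0 * b1 - b0
    return l2 / l1

def pooling_group(subject, descriptors, record_lengths, min_years=500):
    """Rank sites by Euclidean distance dij in standardized descriptor space
    (e.g. ln AREA, mean rainfall, baseflow index) and accumulate the nearest
    sites until at least min_years station-years are reached."""
    z = (descriptors - descriptors.mean(axis=0)) / descriptors.std(axis=0)
    d = np.sqrt(((z - z[subject]) ** 2).sum(axis=1))
    order = np.argsort(d)                 # subject site itself is nearest
    years = np.cumsum(record_lengths[order])
    k = np.searchsorted(years, min_years) + 1
    return order[:k]

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(85, 3))          # synthetic 3-descriptor space
record_lengths = rng.integers(20, 60, size=85)  # years of AMAX data per site
group = pooling_group(0, descriptors, record_lengths)
print(f"Pooling group of {len(group)} sites, "
      f"{record_lengths[group].sum()} station-years")
```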


2020 ◽  
Author(s):  
Valeriya Fillipova ◽  
David Leedal ◽  
Anthony Hammond

We have recently demonstrated the utility of a machine learning-based regional peak flow quantile regression model that is currently providing flood frequency estimation for the re/insurance industry across the contiguous US river network. The scheme uses an artificial neural network (ANN) regression model to estimate flood frequency quantiles from physical catchment descriptors. This circumvents the difficult-to-justify assumption of homogeneity required by alternative ‘region of hydrological similarity’ approaches. The structure of the model is as follows: the output (dependent) variable is a set of peak flow quantiles, where the distributions used to derive the quantiles were parameterised from observations at 4,079 gauge sites using the USGS Bulletin 17C extreme value estimation method (notable for its inclusion of pre-instrumental flood events). The features (regressors) for the model were formed from 25 catchment descriptors covering geometry, elevation, land cover, soil type and climate type, for both the gauged sites and the catchments of a further 906,000 ungauged sites where peak flow quantile estimation was undertaken. The feature collection requires massive computational resources to achieve catchment delineation and GIS processing of land-use, soil-type and precipitation data.

This project integrates many modelling and computational science elements. Here we focus attention on the ANN modelling component, as this is of interest to the wider hydrology research community. We pass on our experience of working with this modelling approach and the unique challenges of working on a problem of this scale.

A baseline multiple linear regression model was generated, as were several non-linear alternative formulations. The ANN model was chosen as the best approach according to a root mean square error (RMSE) criterion. Alternative ANN formulations were evaluated. The RMSE indicated that a single hidden layer performed better than more complex multiple-hidden-layer models. Variable importance algorithms were used to assess the mechanistic credibility of the ANN model and showed that catchment area and mean annual rainfall were consistently identified as dominant features, in agreement with the expectations of domain experts, together with more subtle region-specific factors.

The results of this study show that ANN models, used as part of a carefully configured large-scale computational hydrology project, produce very useful regional flood frequency estimates that can be used to inform flood risk management decision-making or drive further 2D hydrodynamic modelling, and are appropriate to the ever-increasing scale of contemporary hydrological modelling problems.
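A minimal sketch of the regression setup described above: a single-hidden-layer ANN mapping catchment descriptors to a peak-flow quantile, compared against a linear baseline by RMSE. The data, feature values, and layer width are illustrative assumptions (the operational model uses 25 real descriptors and Bulletin 17C quantiles at 4,079 gauges); scikit-learn's MLPRegressor stands in for whatever ANN implementation the authors used.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.normal(size=(4079, 25))   # synthetic standardized catchment descriptors
y = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.3, 4079)  # log-quantile

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32,),   # a single hidden layer
                   max_iter=2000, random_state=0).fit(X_tr, y_tr)

# Compare models on held-out sites by root mean square error.
for name, model in [("linear baseline", baseline), ("ANN", ann)]:
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name} RMSE: {rmse:.3f}")
```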

