An Investigation of Modelling Accuracy Needs for Urban Design Flood Estimation

Author(s):  
James E. Ball

Flood management remains a major problem in many urban environments. Commonly, catchment models are used to generate the data needed for estimating flood risk; both event-based and continuous models have been used for this purpose. Use of a catchment model requires calibration and validation, with a calibration metric used to assess the predicted catchment response against the recorded response. In this study, a continuous model based on SWMM is investigated using the Powells Creek catchment as a case study. The model was calibrated using 25 selected events from the monitored data for the catchment, with a normalised peak flow error used to assess the calibration. Using alternative sets of parameter values to estimate the peak flow for each of the selected events, and applying different accuracy criteria, the best parameter sets for each accuracy criterion were identified. These parameter sets were then used with SWMM in continuous simulation mode to predict flow sequences from which Annual Maxima Series were extracted for At-Site Flood Frequency Analysis. From these analyses, it was concluded that the normalised peak flow error needed to be less than 10% if reliable design flood quantile estimates were to be obtained.
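The abstract does not give the exact form of its normalised peak flow error; a common definition, sketched below as an assumption, is the absolute difference between the simulated and observed event peaks divided by the observed peak.

```python
def normalised_peak_flow_error(q_sim_peak, q_obs_peak):
    """Absolute difference between simulated and observed peak flows,
    normalised by the observed peak (dimensionless)."""
    return abs(q_sim_peak - q_obs_peak) / q_obs_peak

# Example: a simulated peak of 108 m^3/s against an observed 120 m^3/s
# gives a 10% error, exactly at the accuracy threshold found in the study.
error = normalised_peak_flow_error(108.0, 120.0)
print(f"{error:.2%}")  # 10.00%
```

With this definition the metric is symmetric in over- and under-prediction, which suits a single pass/fail accuracy criterion per event.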

2014
Vol 14 (5)
pp. 1283-1298
Author(s):  
D. Lawrence ◽  
E. Paquet ◽  
J. Gailhard ◽  
A. K. Fleig

Abstract. Simulation methods for extreme flood estimation represent an important complement to statistical flood frequency analysis because a spectrum of catchment conditions potentially leading to extreme flows can be assessed. In this paper, stochastic, semi-continuous simulation is used to estimate extreme floods in three catchments located in Norway, all of which are characterised by flood regimes in which snowmelt often has a significant role. The simulations are based on SCHADEX, which couples a precipitation probabilistic model with a hydrological simulation such that an exhaustive set of catchment conditions and responses is simulated. The precipitation probabilistic model is conditioned by regional weather patterns, and a bottom–up classification procedure was used to define a set of weather patterns producing extreme precipitation in Norway. SCHADEX estimates for the 1000-year (Q1000) discharge are compared with those of several standard methods, including event-based and long-term simulations which use a single extreme precipitation sequence as input to a hydrological model, statistical flood frequency analysis based on the annual maximum series, and the GRADEX method. The comparison suggests that the combination of a precipitation probabilistic model with a long-term simulation of catchment conditions, including snowmelt, produces estimates for given return periods which are more in line with those based on statistical flood frequency analysis, as compared with the standard simulation methods, in two of the catchments. In the third case, the SCHADEX method gives higher estimates than statistical flood frequency analysis and further suggests that the seasonality of the most likely Q1000 events differs from that of the annual maximum flows. 
The semi-continuous stochastic simulation method highlights the importance of considering the joint probability of extreme precipitation, snowmelt rates and catchment saturation states when assigning return periods to floods estimated by precipitation-runoff methods. The SCHADEX methodology, as applied here, is dependent on observed discharge data for calibration of a hydrological model, and further study to extend its application to ungauged catchments would significantly enhance its versatility.


1990
Vol 17 (4)
pp. 597-609
Author(s):  
K. C. Ander Chow ◽  
W. E. Watt

Single-station flood frequency analysis is an important element in hydrotechnical planning and design. In Canada, no single statistical distribution has been specified for floods; hence, the conventional approach is to select a distribution based on its fit to the observed sample. This selection is not straightforward owing to typically short record lengths and attendant sampling error, magnified influence of apparent outliers, and limited evidence of two populations. Nevertheless, experienced analysts confidently select a distribution for a station based only on a few heuristics. A knowledge-based expert system has been developed to emulate these expert heuristics. It can perform data analyses, suggest an appropriate distribution, detect outliers, and provide means to justify a design flood on physical grounds. If the sample is too small to give reliable quantile estimates, the system performs a Bayesian analysis to combine regional information with station-specific data. The system was calibrated and tested for 52 stations across Canada. Its performance was evaluated by comparing the distributions selected by experts with those given by the developed system. The results indicated that the system can perform at an expert level in the task of selecting distributions. Key words: flood frequency, expert system, single-station, fuzzy logic, inductive reasoning, production system.
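As a rough illustration of the distribution-selection task that the expert system automates, a naive non-expert baseline simply fits several candidate distributions by maximum likelihood and picks the best by an information criterion. This is not the paper's heuristic, knowledge-based approach; the candidate set and the use of AIC here are assumptions for the sketch.

```python
import numpy as np
from scipy import stats

def select_distribution(sample, candidates=("gumbel_r", "genextreme", "lognorm")):
    """Fit each candidate distribution by maximum likelihood and
    return the name of the one with the lowest AIC."""
    best_name, best_aic = None, np.inf
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(sample)
        loglik = np.sum(dist.logpdf(sample, *params))
        aic = 2 * len(params) - 2 * loglik
        if aic < best_aic:
            best_name, best_aic = name, aic
    return best_name

# Synthetic annual maxima for demonstration only
rng = np.random.default_rng(1)
annual_maxima = stats.gumbel_r.rvs(loc=100, scale=30, size=60, random_state=rng)
print(select_distribution(annual_maxima))
```

With short records, such purely statistical selection is exactly where sampling error and apparent outliers mislead, which is the gap the expert heuristics are meant to fill.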


2017
Vol 49 (2)
pp. 466-486
Author(s):  
Kolbjørn Engeland ◽  
Donna Wilson ◽  
Péter Borsányi ◽  
Lars Roald ◽  
Erik Holmqvist

Abstract Design flood estimates are needed for land-use planning and the design of important infrastructure. A major challenge is the mismatch between the length of available flood records and the required return periods: a majority of flood time series are shorter than 50 years, whereas the required return periods might be 200, 500, or 1,000 years. Consequently, the estimation uncertainty is large. In this paper, we investigated how the use of historical information might improve design flood estimation. We used annual maximum data from four selected Norwegian catchments, together with historical flood information indicating the water levels of the largest floods in the last two to three hundred years. We assessed the added value of using historical information and demonstrated that both reliability and stability improve, especially for short record lengths and long return periods. Because the historical information in this study consists of water levels, the stability of river profiles proved to be a major challenge.


2021
Author(s):  
Lei Yan ◽  
Lihua Xiong ◽  
Gusong Ruan ◽  
Chong-Yu Xu ◽  
Mengjie Zhang

Abstract In traditional flood frequency analysis, a minimum of 30 observations is required to guarantee the accuracy of design results within an allowable uncertainty; however, no corresponding recommendation exists for the record length required in nonstationary flood frequency analysis (NFFA). Therefore, this study was carried out with three aims: (i) to evaluate the predictive capabilities of nonstationary (NS) and stationary (ST) models with varying flood record lengths; (ii) to examine the impacts of flood record length on the NS and ST design floods and associated uncertainties; and (iii) to recommend a probable requirement for flood record length in NFFA. To achieve these objectives, 20 stations in Norway with record lengths longer than 100 years were selected and investigated using both GEV (generalized extreme value)-ST models and GEV-NS models with a linearly varying location parameter (denoted GEV-NS0). The results indicate that the fitting quality and predictive capabilities of GEV-NS0 outperform those of GEV-ST models when the record length exceeds approximately 60 years for most stations, and that the stability of both GEV-ST and GEV-NS0 improves as record lengths increase. Therefore, a minimum of 60 years of flood observations is recommended for NFFA for the selected basins in Norway.
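A minimal sketch of fitting the GEV-NS0 form, a GEV whose location varies linearly with time while scale and shape stay constant, by maximising the log-likelihood. The synthetic data, starting values, and optimiser settings are illustrative assumptions, not the paper's configuration; note that scipy's `genextreme` shape parameter `c` equals minus the shape ξ in the usual hydrological convention.

```python
import numpy as np
from scipy import stats, optimize

def fit_gev_ns0(flows):
    """Fit a GEV model with linearly varying location, mu_t = mu0 + mu1 * t,
    by maximum likelihood; scale and shape are held constant (GEV-NS0)."""
    t = np.arange(len(flows), dtype=float)

    def nll(theta):
        mu0, mu1, log_sigma, xi = theta
        mu = mu0 + mu1 * t
        sigma = np.exp(log_sigma)  # keep the scale positive
        # scipy's genextreme uses c = -xi relative to the usual convention
        ll = stats.genextreme.logpdf(flows, c=-xi, loc=mu, scale=sigma)
        return -np.sum(ll)

    x0 = np.array([np.mean(flows), 0.0, np.log(np.std(flows)), 0.1])
    res = optimize.minimize(nll, x0, method="Nelder-Mead",
                            options={"maxiter": 2000})
    return res.x  # mu0, mu1, log_sigma, xi

# Synthetic 100-year record with a true trend of 0.3 per year in the location
rng = np.random.default_rng(7)
t = np.arange(100)
sample = stats.genextreme.rvs(c=-0.1, loc=50 + 0.3 * t, scale=10,
                              random_state=rng)
mu0, mu1, log_sigma, xi = fit_gev_ns0(sample)
print(round(mu1, 2))
```

Comparing this fit against a stationary GEV (drop the `mu1 * t` term) by AIC or a likelihood-ratio test is the kind of model comparison the study performs across record lengths.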


2013
Vol 1 (6)
pp. 6785-6828
Author(s):  
D. Lawrence ◽  
E. Paquet ◽  
J. Gailhard ◽  
A. K. Fleig

Abstract. Simulation methods for extreme flood estimation represent an important complement to statistical flood frequency analysis because a spectrum of catchment conditions potentially leading to extreme flows can be assessed. In this paper, stochastic, semi-continuous simulation is used to estimate extreme floods in three catchments located in Norway, all of which are characterised by flood regimes in which snowmelt often has a significant role. The simulations are based on SCHADEX, which couples a precipitation probabilistic model with a hydrological simulation such that an exhaustive set of catchment conditions and responses is simulated. The precipitation probabilistic model is conditioned by regional weather patterns, and a "bottom-up" classification procedure was used to define a set of weather patterns producing extreme precipitation in Norway. SCHADEX estimates for the 1000 yr (Q1000) discharge are compared with those of several standard methods, including event-based and long-term simulations which use a single extreme precipitation sequence as input to a hydrological model, with statistical flood frequency analysis based on the annual maximum series, and with the GRADEX method. The comparison suggests that the combination of a precipitation probabilistic model with a long-term simulation of catchment conditions, including snowmelt, produces estimates for given return periods which are more in line with those based on statistical flood frequency analysis, as compared with the standard simulation methods, in two of the catchments. In the third case, the SCHADEX method gives higher estimates than statistical flood frequency analysis and further suggests that the seasonality of the most likely Q1000 events differs from that of the annual maximum flows.
The semi-continuous stochastic simulation method highlights the importance of considering the joint probability of extreme precipitation, snowmelt rates and catchment saturation states when assigning return periods to floods estimated by precipitation-runoff methods. The SCHADEX methodology, as applied here, is dependent on observed discharge data for calibration of a hydrological model, and further study to extend its application to ungauged catchments would significantly enhance its versatility.


2019
Vol 11 (4)
pp. 966-979
Author(s):  
Nur Amalina Mat Jan ◽  
Ani Shabri ◽  
Ruhaidah Samsudin

Abstract Non-stationary flood frequency analysis (NFFA) plays an important role in addressing the stationarity assumption (independent and identically distributed flood series) that is no longer valid for many infrastructure design methods. This motivates the development of new statistical models that identify changes in the probability distribution over time and yield a consistent flood estimation method for NFFA. The method of trimmed L-moments (TL-moments) with a time covariate is compared with the L-moment method for stationary and non-stationary generalized extreme value (GEV) models. The aims of the study are to investigate the behaviour of the proposed TL-moments method in the non-stationary setting and to apply the method with the GEV distribution. The methods are compared using Monte Carlo simulation and a bootstrap-based method. The simulation study showed that most trimming levels of the TL-moments method, TL(η,0) with η = 2, 3, 4, performed better than the L-moment method for all models (GEV1, GEV2, and GEV3). The TL-moments method also provides more efficient estimates of flood quantiles at higher return periods, since it trims the lowest values and gives more weight to the largest values, which carry the most important information.
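A sketch of the commonly used direct sample estimator for TL-moments, on which the abstract's TL(η,0) trimming levels build; with zero trimming it reduces to the ordinary sample L-moment. This illustrates the estimator itself, not the paper's time-covariate extension.

```python
import numpy as np
from math import comb

def tl_moment(x, r, t1=0, t2=0):
    """Sample TL-moment of order r with lower/upper trimming (t1, t2),
    computed from the order statistics; (t1, t2) = (0, 0) gives the
    ordinary sample L-moment."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    total = 0.0
    for i in range(t1 + 1, n - t2 + 1):  # 1-based index of order statistics
        w = sum((-1) ** k * comb(r - 1, k)
                * comb(i - 1, r + t1 - 1 - k) * comb(n - i, t2 + k)
                for k in range(r))
        total += w * xs[i - 1]
    return total / (r * comb(n, r + t1 + t2))

sample = [1.0, 2.0, 3.0, 4.0]
print(tl_moment(sample, 1))        # 2.5  (ordinary l1 = sample mean)
print(tl_moment(sample, 2))        # ~0.833 (ordinary l2 = half the Gini mean difference)
print(tl_moment(sample, 1, 1, 1))  # first TL(1,1) moment, a trimmed-mean analogue
```

Because the weights zero out the t1 smallest and t2 largest order statistics, trimming from below (TL(η,0)) down-weights low outliers while retaining the information in the largest floods.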


2013
Vol 663
pp. 768-772
Author(s):  
Li Jie Zhang

The evaluation and reduction of uncertainty is central to hydrological frequency analysis. In this paper, a Bayesian Markov chain Monte Carlo (MCMC) method is employed to infer the parameter values of candidate probability distributions and to evaluate the uncertainties of the design flood. Compared with the three-parameter log-normal (LN3) and three-parameter generalized extreme value (GEV) distributions, the Pearson Type 3 (PIII) distribution provides a good approximation to the flood-flow data. Choosing an appropriate probabilistic model can thus reduce the uncertainty of design flood estimation, and uncertainty might be further reduced by incorporating historical extreme flood data into the flood frequency analysis.
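The abstract does not detail the MCMC scheme; a minimal random-walk Metropolis sampler for a simpler two-parameter Gumbel model (not the LN3, GEV, or PIII fits used in the paper) illustrates how posterior samples of distribution parameters, and hence design-flood uncertainty, are obtained.

```python
import numpy as np

def log_posterior(theta, data):
    """Log-posterior of a Gumbel(loc, scale) model with flat priors
    (scale restricted to be positive)."""
    mu, beta = theta
    if beta <= 0:
        return -np.inf
    z = (data - mu) / beta
    return np.sum(-np.log(beta) - z - np.exp(-z))

def metropolis(data, n_iter=5000, step=(2.0, 1.0), seed=0):
    """Random-walk Metropolis sampler over (loc, scale)."""
    rng = np.random.default_rng(seed)
    theta = np.array([np.mean(data), np.std(data)])  # crude starting point
    lp = log_posterior(theta, data)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_posterior(prop, data)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Synthetic annual maxima, for demonstration only
rng = np.random.default_rng(42)
data = rng.gumbel(loc=100.0, scale=25.0, size=80)
chain = metropolis(data)
mu_hat, beta_hat = chain[1000:].mean(axis=0)  # discard burn-in
print(round(mu_hat), round(beta_hat))
```

Design-flood uncertainty then follows by evaluating the quantile function at each retained (mu, beta) pair and summarising the resulting quantile distribution.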


Water
2019
Vol 11 (8)
pp. 1717
Author(s):  
Do-Hun Lee ◽  
Nam Won Kim

The design of hydraulic structures and the assessment of flood control measures require the estimation of flood quantiles. Since observed flood data are rarely available at the location of interest, flood estimation in ungauged or poorly gauged basins is a common problem in engineering hydrology. We investigated a flood estimation method for a poorly gauged basin that combines rainfall-runoff model simulation with regional flood frequency analysis (RFFA). The L-moment-based index flood method was applied to the annual maximum flood (AMF) data simulated by the rainfall-runoff model. The regional flood frequency distribution with 90% error bounds was derived for the Chungju dam basin of Korea, which has a drainage area of 6648 km2. Flood quantile estimates based on the simulated AMF data were consistent with those based on the observed AMF data. The widths of the error bounds of the regional flood frequency distribution increased sharply with return period. The results suggest that the flood estimation approach applied in this study has the potential to estimate flood quantiles where hourly rainfall measurements during major storms are widely available but observed flood data are limited.
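The final step of the index flood method can be sketched as scaling a dimensionless regional growth factor by the site's index flood. The growth-curve values and site figures below are hypothetical illustrations, not taken from the Chungju dam analysis.

```python
def index_flood_quantile(index_flood, growth_factor):
    """Design flood for a site: the site's index flood (e.g. the mean or
    median annual flood) scaled by the dimensionless regional growth
    factor for the target return period."""
    return index_flood * growth_factor

# Hypothetical regional growth curve: growth factor keyed by return period (years)
growth_curve = {10: 1.45, 50: 1.92, 100: 2.13}

# Hypothetical site with a mean annual flood of 850 m^3/s
q100 = index_flood_quantile(850.0, growth_curve[100])
print(q100)  # 1810.5 m^3/s
```

In the paper's setting the index flood comes from simulated AMF data, so the growth curve (fitted regionally by L-moments) carries the frequency information while the rainfall-runoff model supplies the site scale.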


2020
Author(s):  
Gang Zhao ◽  
Paul Bates ◽  
Jeff Neal ◽  
Bo Pang

<p>Design flood estimation in data-poor regions is a fundamental task in hydrology. In this paper, we propose a regional flood frequency analysis approach to estimate design floods anywhere on the global river network. This approach involves two stages: (i) clustering global gauging stations into subareas by a K-means model based on twelve globally available catchment descriptors and (ii) developing a regression model in each subarea for design flood estimation using the same descriptors. Nearly 12,000 discharge stations globally were selected for model development and a benchmark global index-flood method was adopted for comparison. The results showed that: (1) the proposed approach achieved the highest accuracy for design flood estimation when using all catchment descriptors for clustering, and the regression model accuracy improved by considering more descriptors in model development; (2) a support vector machine regression showed the highest accuracy among all regression models tested, with a relative root mean squared error of 0.67 for mean flood and 0.83 for 100-year return period flood estimations; (3) 100-year return period flood magnitudes in tropical, arid, temperate, continental and polar climate zones could be reliably estimated, with relative mean biases of -0.18, -0.23, -0.18, 0.17 and -0.11 respectively under a 5-fold cross-validation procedure; (4) the proposed approach outperformed the benchmark index-flood method for 10-, 50- and 100-year return period estimates. We conclude that the proposed RFFA approach is a valid way to generate design floods globally, improving our understanding of flood hazard, especially in ungauged areas.</p>
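A toy version of the two-stage approach: clustering stations on catchment descriptors, then fitting a regression within each cluster. Ordinary least squares stands in for the paper's support vector machine, and the data are entirely synthetic; three descriptors replace the twelve used in the study.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

rng = np.random.default_rng(3)

# Hypothetical catchment descriptors (e.g. area, mean rainfall, slope)
# and log-transformed 100-year floods for 300 synthetic stations
descriptors = rng.normal(size=(300, 3))
log_q100 = descriptors @ np.array([0.8, 0.5, -0.3]) + rng.normal(0.0, 0.1, 300)

# Stage 1: cluster stations into subareas on standardised descriptors
centroids, labels = kmeans2(whiten(descriptors), k=4, seed=3, minit="++")

# Stage 2: fit an ordinary least-squares regression within each subarea
models = {}
for c in range(4):
    mask = labels == c
    X = np.column_stack([np.ones(mask.sum()), descriptors[mask]])
    coef, *_ = np.linalg.lstsq(X, log_q100[mask], rcond=None)
    models[c] = coef

# Predict for a station: use its cluster label (a truly new station would be
# assigned to the nearest centroid first), then apply that cluster's model
x_new, c_new = descriptors[0], labels[0]
pred = models[c_new] @ np.concatenate([[1.0], x_new])
print(round(float(pred), 2))
```

The per-cluster regression lets each subarea have its own descriptor-to-flood relationship, which is the motivation for clustering before regressing rather than fitting one global model.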


2016
Vol 20 (12)
pp. 4717-4729
Author(s):  
Martin Durocher ◽  
Fateh Chebana ◽  
Taha B. M. J. Ouarda

Abstract. This study investigates the utilization of hydrological information in regional flood frequency analysis (RFFA) to enforce desired properties for a group of gauged stations. Neighbourhoods are particular types of regions that are centred on target locations. A challenge for using neighbourhoods in RFFA is that hydrological information is not available at target locations and cannot be completely replaced by the available physiographical information. Instead of using the available physiographic characteristics to define the centre of a target location, this study proposes to introduce estimates of reference hydrological variables to ensure a better homogeneity. These reference variables represent nonlinear relations with the site characteristics obtained by projection pursuit regression, a nonparametric regression method. The resulting neighbourhoods are investigated in combination with commonly used regional models: the index-flood model and regression-based models. The complete approach is illustrated in a real-world case study with gauged sites from the southern part of the province of Québec, Canada, and is compared with the traditional approaches such as region of influence and canonical correlation analysis. The evaluation focuses on the neighbourhood properties as well as prediction performances, with special attention devoted to problematic stations. Results show clear improvements in neighbourhood definitions and quantile estimates.

