Flood simulation using the gauge-adjusted radar rainfall and physics-based distributed hydrologic model

2008 ◽  
Vol 22 (22) ◽  
pp. 4400-4414 ◽  
Author(s):  
Byung Sik Kim ◽  
Bo Kyung Kim ◽  
Hung Soo Kim

2011 ◽  
Vol 15 (12) ◽  
pp. 3809-3827 ◽  
Author(s):  
A. Atencia ◽  
L. Mediero ◽  
M. C. Llasat ◽  
L. Garrote

Abstract. The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to perform a sensitivity analysis of the rainfall time resolution on the results of a distributed hydrologic model in a flash-flood prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective is to propose a methodology that improves radar rainfall estimation at higher spatial and temporal resolution. Composite radar data from a network of three C-band radars with 6-min temporal and 2 × 2 km2 spatial resolution were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (gauge-adjustment method) was applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological basin behaviour.
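The Z/R conversion at the heart of the gauge-adjustment step can be sketched as follows. The coefficient defaults (Marshall-Palmer), the 38 dBZ convective threshold, and all function names here are illustrative assumptions, not the case-specific values derived in the paper:

```python
import numpy as np

def reflectivity_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b.

    a and b default to the classic Marshall-Palmer values; the paper instead
    derives event-specific coefficients via a gauge-adjustment
    (probability matching) procedure.
    """
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity Z (mm^6 m^-3)
    return (z / a) ** (1.0 / b)    # invert Z = a * R**b for R

def rain_field(dbz_field, conv_threshold=38.0,
               conv=(300.0, 1.4), strat=(200.0, 1.6)):
    """Apply separate Z/R relations to convective and stratiform pixels,
    split by a simple (assumed) reflectivity threshold."""
    dbz_field = np.asarray(dbz_field, dtype=float)
    convective = dbz_field >= conv_threshold
    return np.where(convective,
                    reflectivity_to_rain_rate(dbz_field, *conv),
                    reflectivity_to_rain_rate(dbz_field, *strat))
```

With the Marshall-Palmer defaults, 23 dBZ corresponds to roughly 1 mm/h, which is a convenient sanity check for any fitted coefficient pair.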


2010 ◽  
Vol 11 (2) ◽  
pp. 520-532 ◽  
Author(s):  
Efthymios I. Nikolopoulos ◽  
Emmanouil N. Anagnostou ◽  
Faisal Hossain ◽  
Mekonnen Gebremichael ◽  
Marco Borga

Abstract The study presents a data-based numerical experiment performed to understand the scale relationships of the error propagation of satellite rainfall for flood evaluation applications in complex terrain basins. A satellite rainfall error model is devised to generate rainfall ensembles based on two satellite products with different retrieval accuracies and space–time resolutions. The generated ensembles are propagated through a distributed physics-based hydrologic model to simulate the rainfall–runoff processes at different basin scales. The resulting hydrographs are compared against the hydrograph obtained by using high-resolution radar rainfall as the “reference” rainfall input. The error propagation of rainfall to stream runoff is evaluated for a number of basin scales ranging between 100 and 1200 km2. The results from this study show that (i) use of satellite rainfall for flood simulation depends strongly on the scale of application (catchment area) and the satellite product resolution, (ii) different satellite products perform differently in terms of hydrologic error propagation, and (iii) the propagation of error depends on the basin size; for example, this study shows that small watersheds (<400 km2) exhibit a higher ability in dampening the error from rainfall to runoff than larger-sized watersheds, although the actual error increases as drainage area decreases.
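The ensemble-generation idea can be sketched minimally with a mean-one multiplicative lognormal error model standing in for the paper's satellite rainfall error model; the noise parameters, member count, and function names are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def rainfall_ensemble(reference, n_members=50, sigma=0.5):
    """Perturb a reference rainfall field with multiplicative lognormal
    error to produce 'satellite-like' ensemble members (a simplified
    stand-in for a full satellite rainfall error model)."""
    reference = np.asarray(reference, dtype=float)
    # mean = -sigma**2/2 makes the lognormal noise mean-one,
    # so the ensemble is unbiased with respect to the reference
    noise = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma,
                          size=(n_members,) + reference.shape)
    return reference * noise

def error_dampening(rain_err_cv, runoff_err_cv):
    """Ratio of runoff-error spread to rainfall-error spread after routing
    the ensemble through a hydrologic model; < 1 means the basin dampens
    the rainfall error in the runoff response."""
    return runoff_err_cv / rain_err_cv
```

Each member would then be routed through the distributed model, and the dampening ratio compared across the 100–1200 km2 basin scales.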


2010 ◽  
Vol 7 (5) ◽  
pp. 7995-8043 ◽  
Author(s):  
A. Atencia ◽  
M. C. Llasat ◽  
L. Garrote ◽  
L. Mediero

Abstract. The performance of distributed hydrological models depends on the resolution, both spatial and temporal, of the rainfall surface data introduced. The estimation of quantitative precipitation from meteorological radar or satellite can improve hydrological model results, thanks to an indirect estimation at higher spatial and temporal resolution. In this work, composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km2 spatial resolution, provided by the Catalan Meteorological Service, are used to feed the RIBS distributed hydrological model. A Window Probability Matching Method (gauge-adjustment method) is applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by the convective and stratiform Z/R relations used over Catalonia. Once the rainfall field has been adequately obtained, an advection correction, based on cross-correlation between two consecutive images, was introduced to obtain several time resolutions from 1 min to 30 min. Each different resolution is treated as an independent event, resulting in a probable range of input rainfall data. This ensemble of rainfall data is used, together with other sources of uncertainty, such as the initial basin state or the accuracy of discharge measurements, to calibrate the RIBS model using probabilistic methodology. A sensitivity analysis of time resolutions was implemented by comparing the various results with real values from stream-flow measurement stations.
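The advection correction described above, estimating a displacement vector from the cross-correlation of two consecutive radar scans and using it to interpolate intermediate frames, can be sketched with a brute-force integer-shift search. The shift range, linear blending scheme, and function names are assumptions, not the paper's implementation:

```python
import numpy as np

def displacement_by_cross_correlation(img0, img1, max_shift=5):
    """Estimate the advection vector between two consecutive radar images
    by maximising the spatial cross-correlation over integer pixel shifts
    (a simplified, brute-force variant of the advection correction)."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img0, dy, axis=0), dx, axis=1)
            c = np.corrcoef(shifted.ravel(), img1.ravel())[0, 1]
            if c > best:
                best, best_shift = c, (dy, dx)
    return best_shift

def interpolate_frames(img0, img1, n_steps, shift):
    """Advect img0 toward img1 in n_steps sub-intervals (e.g. splitting
    a 6-min scan interval into 1-min frames), blending linearly in
    intensity; returns the n_steps - 1 intermediate frames."""
    dy, dx = shift
    frames = []
    for k in range(1, n_steps):
        f = k / n_steps
        moved = np.roll(np.roll(img0, round(f * dy), axis=0),
                        round(f * dx), axis=1)
        frames.append((1 - f) * moved + f * img1)
    return frames
```

A real implementation would work blockwise (local advection vectors) and avoid the wrap-around of `np.roll` at the domain edges; this sketch only shows the cross-correlation principle.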


Water ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 1279 ◽  
Author(s):  
Tyler Madsen ◽  
Kristie Franz ◽  
Terri Hogue

Demand for reliable estimates of streamflow has increased as society becomes more susceptible to climatic extremes such as droughts and flooding, especially at small scales where local population centers and infrastructure can be affected by rapidly occurring events. In the current study, the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) (NOAA/NWS, Silver Spring, MD, USA) was used to explore the accuracy of a distributed hydrologic model to simulate discharge at watershed scales ranging from 20 to 2500 km2. The model was calibrated and validated using observed discharge data at the basin outlets, and discharge at uncalibrated sub-basin locations was evaluated. Two precipitation products with nominal spatial resolutions of 12.5 km and 4 km were tested to characterize the role of input resolution on the discharge simulations. In general, model performance decreased as basin size decreased. When sub-basin area was less than 250 km2 or 20–40% of the total watershed area, model performance dropped below the defined acceptable levels. Simulations forced with the lower resolution precipitation product had better model evaluation statistics; for example, the Nash–Sutcliffe efficiency (NSE) scores ranged from 0.50 to 0.67 for the verification period for basin outlets, compared to scores that ranged from 0.33 to 0.52 for the higher spatial resolution forcing.
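NSE scores like those quoted above can be computed for any pair of observed and simulated hydrographs with the standard formula; this is a straightforward sketch, not the HL-RDHM evaluation code:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than predicting the observed mean, negative is worse than
    the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - observed.mean()) ** 2))
```

Because the denominator is the variance of the observations about their mean, a constant simulation equal to that mean scores exactly 0, which is why NSE values in the 0.5–0.67 range are commonly read as acceptable-to-good model performance.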


2004 ◽  
Vol 298 (1-4) ◽  
pp. 61-79 ◽  
Author(s):  
Theresa M. Carpenter ◽  
Konstantine P. Georgakakos

Author(s):  
Zhengtao Cui ◽  
Baxter E. Vieux ◽  
Henry Neeman ◽  
Fekadu Moreda
