Impacts of Polarimetric Radar Observations on Hydrologic Simulation

2010 ◽  
Vol 11 (3) ◽  
pp. 781-796 ◽  
Author(s):  
Jonathan J. Gourley ◽  
Scott E. Giangrande ◽  
Yang Hong ◽  
Zachary L. Flamig ◽  
Terry Schuur ◽  
...  

Abstract Rainfall estimated from the polarimetric prototype of the Weather Surveillance Radar-1988 Doppler [WSR-88D (KOUN)] was evaluated using a dense Micronet rain gauge network for nine events on the Ft. Cobb research watershed in Oklahoma. KOUN was operated, and upgraded to dual polarization, by the National Severe Storms Laboratory. Storm events included an extreme rainfall case from Tropical Storm Erin that had a 100-yr return interval. Comparisons with collocated Micronet rain gauge measurements indicated that all six rainfall algorithms using polarimetric observations had lower root-mean-squared errors and higher Pearson correlation coefficients than the conventional algorithm using reflectivity factor alone when all events were considered together. The reflectivity-based relation R(Z) was the least biased, with an event-combined normalized bias of −9%. The bias for R(Z), however, varied significantly from case to case and as a function of rainfall intensity. This variability was attributed to differing drop size distributions (DSDs) and the presence of hail. The synthetic polarimetric algorithm R(syn) had a large normalized bias of −31%, but this bias was found to be stationary. To evaluate whether polarimetric radar observations improve discharge simulation, recent advances in Markov chain Monte Carlo simulation using the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) were used. This Bayesian approach infers the posterior probability density function of model parameters and output predictions, allowing HL-RDHM uncertainty to be quantified. Hydrologic simulations were compared to observed streamflow and to simulations forced by rain gauge inputs. The hydrologic evaluation indicated that all polarimetric rainfall estimators outperformed the conventional R(Z) algorithm, but only after their long-term biases were identified and corrected.
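The gauge-comparison statistics this abstract reports (root-mean-squared error, Pearson correlation, normalized bias) can be sketched as follows; the function and variable names are illustrative, not taken from the study:

```python
import math

def rainfall_stats(radar, gauge):
    """Compare radar rainfall estimates against collocated gauge totals.

    Returns RMSE, Pearson correlation, and normalized bias
    (total radar minus total gauge, as a percentage of total gauge).
    """
    n = len(radar)
    rmse = math.sqrt(sum((r - g) ** 2 for r, g in zip(radar, gauge)) / n)
    mr, mg = sum(radar) / n, sum(gauge) / n
    cov = sum((r - mr) * (g - mg) for r, g in zip(radar, gauge))
    sr = math.sqrt(sum((r - mr) ** 2 for r in radar))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gauge))
    corr = cov / (sr * sg)
    nbias = 100.0 * (sum(radar) - sum(gauge)) / sum(gauge)
    return rmse, corr, nbias
```

A radar that reports 90% of each gauge total would, under this definition, come out with a normalized bias of −10%, comparable in magnitude to the −9% reported for R(Z).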

2011 ◽  
Vol 8 (6) ◽  
pp. 10739-10780
Author(s):  
V. Ruiz-Villanueva ◽  
M. Borga ◽  
D. Zoccatelli ◽  
L. Marchi ◽  
E. Gaume ◽  
...  

Abstract. The 2 June 2008 flood-producing storm on the Starzel river basin in South-West Germany is examined as a prototype for organized convective systems that dominate the upper tail of the precipitation frequency distribution and are likely responsible for the flash flood peaks in this region. The availability of high-resolution rainfall estimates from radar observations and a rain gauge network, together with indirect peak discharge estimates from a detailed post-event survey, provides the opportunity to study the hydrometeorological and hydrological mechanisms associated with this extreme storm and the ensuing flood. Radar-derived rainfall, streamgauge data and indirect estimates of peak discharges are used along with a distributed hydrologic model to reconstruct hydrographs at multiple locations. The influence of storm structure, evolution and motion on the modeled flood hydrograph is examined by using the "spatial moments of catchment rainfall" (Zoccatelli et al., 2011). It is shown that downbasin storm motion had a noticeable impact on flood peak magnitude. Small runoff ratios (less than 20%) characterized the runoff response. The flood response can be reasonably well reproduced with the distributed hydrological model, using high resolution rainfall observations and model parameters calibrated at a river section which includes most of the area impacted by the storm.
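The sub-20% runoff ratios noted above come from a simple event water balance: runoff depth over the catchment divided by event rainfall depth. A minimal sketch (function name, argument names and unit choices are assumptions):

```python
def runoff_ratio(rainfall_mm, discharge_m3s, area_km2, dt_s=3600):
    """Event runoff ratio for a catchment.

    rainfall_mm: event rainfall depth (mm)
    discharge_m3s: discharge series (m^3/s) at fixed time step dt_s (s)
    area_km2: catchment area (km^2)
    """
    runoff_volume = sum(q * dt_s for q in discharge_m3s)       # m^3
    runoff_depth = 1000.0 * runoff_volume / (area_km2 * 1e6)   # mm
    return runoff_depth / rainfall_mm
```

In practice a baseflow separation would be applied to the discharge series first; that step is omitted here.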


2015 ◽  
Vol 72 (9) ◽  
pp. 1524-1533 ◽  
Author(s):  
A. Roy-Poirier ◽  
Y. Filion ◽  
P. Champagne

Bioretention systems are designed to treat stormwater and provide attenuated drainage between storms. Bioretention has shown great potential for reducing the volume and improving the quality of stormwater. This study introduces the bioretention hydrologic model (BHM), a one-dimensional model that simulates the hydrologic response of a bioretention system over the duration of a storm event. BHM is based on the RECARGA model, but has been adapted for improved accuracy and for integration of pollutant transport models. BHM contains four completely mixed layers and accounts for evapotranspiration, overflow, exfiltration to native soils and underdrain discharge. Model results were evaluated against field data collected over 10 storm events. Simulated flows were particularly sensitive to the antecedent water content and drainage parameters of bioretention soils, which were calibrated through an optimisation algorithm. Temporal disparity was observed between simulated and measured flows, attributed to preferential flow paths formed within the soil matrix of the field system. Modelling results suggest that soil water storage is the most important short-term hydrologic process in bioretention, with exfiltration potentially significant in native soils of sufficient permeability.
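BHM itself is not reproduced here, but the kind of completely mixed layer water balance it describes can be sketched as a single bucket update per time step; the names, units and ordering of processes below are assumptions, not the published model:

```python
def layer_step(storage, inflow, et, capacity, ksat, dt):
    """One time step of a completely mixed storage layer.

    storage, capacity: water in / maximum water in the layer (mm)
    inflow, et: inflow and evapotranspiration rates (mm per unit time)
    ksat: maximum drainage (exfiltration/underdrain) rate (mm per unit time)
    Returns updated storage plus drainage and overflow volumes (mm).
    """
    storage = max(storage + (inflow - et) * dt, 0.0)  # add inflow, remove ET
    drainage = min(ksat * dt, storage)                # drain at up to ksat
    storage -= drainage
    overflow = max(storage - capacity, 0.0)           # spill any excess
    storage -= overflow
    return storage, drainage, overflow
```

A full model would chain four such layers, passing each layer's drainage to the next as inflow.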


2011 ◽  
Vol 12 (5) ◽  
pp. 973-988 ◽  
Author(s):  
Jonathan J. Gourley ◽  
Yang Hong ◽  
Zachary L. Flamig ◽  
Jiahu Wang ◽  
Humberto Vergara ◽  
...  

Abstract This study evaluates rainfall estimates from the Next Generation Weather Radar (NEXRAD), operational rain gauges, the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) as inputs to a calibrated, distributed hydrologic model. A high-density Micronet of rain gauges on the 342-km2 Ft. Cobb basin in Oklahoma was used as reference rainfall to calibrate the National Weather Service’s (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) at 4-km/1-h and 0.25°/3-h resolutions. The unadjusted radar product was the overall worst product, while the stage IV radar product with hourly rain gauge adjustment had the best hydrologic skill, with a Micronet relative efficiency score of −0.5, only slightly worse than the reference simulation forced by Micronet rainfall. Simulations from TRMM-3B42RT were better than those from PERSIANN-CCS-RT (a real-time version of PERSIANN-CCS) and equivalent to those from the operational rain gauge network. The high degree of hydrologic skill with TRMM-3B42RT forcing was only achievable when the model was calibrated at TRMM’s 0.25°/3-h resolution, highlighting the importance of considering rainfall product resolution during model calibration.
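A relative efficiency score of this kind benchmarks each simulation against the Micronet-forced reference rather than against the observed mean (as standard Nash–Sutcliffe does). One plausible formulation, offered as an assumption rather than the paper's exact definition:

```python
def relative_efficiency(sim, obs, benchmark):
    """Efficiency of a simulation relative to a benchmark simulation:
    1 - SSE(sim) / SSE(benchmark).

    A score of 0 means 'as good as the benchmark'; negative values mean
    worse than the benchmark; 1 means a perfect fit to observations.
    """
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    sse_bench = sum((b - o) ** 2 for b, o in zip(benchmark, obs))
    return 1.0 - sse / sse_bench
```

Under this definition the reported score of −0.5 would mean the stage IV forcing produced 50% more squared error than the Micronet-forced reference run.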


2015 ◽  
Vol 16 (1) ◽  
pp. 129-146 ◽  
Author(s):  
Ryan R. Spies ◽  
Kristie J. Franz ◽  
Terri S. Hogue ◽  
Angela L. Bowman

Abstract Satellite-derived potential evapotranspiration (PET) estimates computed from Moderate Resolution Imaging Spectroradiometer (MODIS) observations and the Priestley–Taylor formula (M-PET) are evaluated as input to the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM). The HL-RDHM is run at a 4-km spatial and 6-h temporal resolution for 13 watersheds in the upper Mississippi and Red River basins for 2003–10. Simulated discharge using inputs of daily M-PET is evaluated for all watersheds, and simulated evapotranspiration (ET) is evaluated at two watersheds using nearby latent heat flux observations. M-PET–derived model simulations are compared to output using the long-term average PET values (default-PET) provided as part of the HL-RDHM application. In addition, uncalibrated and calibrated simulations are evaluated for both PET data sources. Calibrating select model parameters is found to substantially improve simulated discharge for both datasets. Overall average percent bias (PBias) and Nash–Sutcliffe efficiency (NSE) values for simulated discharge are better from the default-PET than the M-PET for the calibrated models during the verification period, indicating that the time-varying M-PET input did not improve the discharge simulation in the HL-RDHM. M-PET tends to produce higher NSE values than the default-PET for the Wisconsin and Minnesota basins, but lower NSE values for the Iowa basins. M-PET–simulated ET matches the range and variability of observed ET better than the default-PET at two sites studied and may provide potential model improvements in that regard.
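M-PET pairs MODIS-derived inputs with the Priestley–Taylor formula. A minimal daily sketch using standard textbook constants (the constant values and the simplifications, such as neglecting ground heat flux by default, are assumptions, not the paper's configuration):

```python
import math

def priestley_taylor_pet(t_air_c, rnet_mj, ground_flux_mj=0.0, alpha=1.26):
    """Daily potential evapotranspiration (mm/day), Priestley-Taylor.

    t_air_c: mean air temperature (deg C)
    rnet_mj, ground_flux_mj: net radiation and ground heat flux
                             (MJ m^-2 day^-1)
    """
    # Slope of the saturation vapour pressure curve (kPa/deg C)
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    gamma = 0.066  # psychrometric constant (kPa/deg C), assumed
    lam = 2.45     # latent heat of vaporization (MJ/kg)
    return alpha * (delta / (delta + gamma)) * (rnet_mj - ground_flux_mj) / lam
```

For a warm day (20 °C, net radiation 10 MJ m⁻² day⁻¹) this gives roughly 3–4 mm/day, a plausible mid-latitude summer PET.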


2018 ◽  
Vol 10 (8) ◽  
pp. 1258 ◽  
Author(s):  
Marios Anagnostou ◽  
Efthymios Nikolopoulos ◽  
John Kalogiros ◽  
Emmanouil Anagnostou ◽  
Francesco Marra ◽  
...  

In mountain basins, the use of long-range operational weather radars is often associated with poor quantitative precipitation estimation due to a number of challenges posed by the complexity of terrain. As a result, the applicability of radar-based precipitation estimates for hydrological studies is often limited over areas that are in close proximity to the radar. This study evaluates the advantages of using X-band polarimetric (XPOL) radar as a means to fill the coverage gaps and improve complex terrain precipitation estimation and associated hydrological applications based on a field experiment conducted in an area of Northeast Italian Alps characterized by large elevation differences. The corresponding rainfall estimates from two operational C-band weather radar observations are compared to the XPOL rainfall estimates for a near-range (10–35 km) mountainous basin (64 km2). In situ rainfall observations from a dense rain gauge network and two disdrometers (a 2D-video and a Parsivel) are used for ground validation of the radar-rainfall estimates. Ten storm events over a period of two years are used to explore the differences between the locally deployed XPOL vs. longer-range operational radar-rainfall error statistics. Hourly aggregate rainfall estimates by XPOL, corrected for rain-path attenuation and vertical reflectivity profile, exhibited correlations between 0.70 and 0.99 against reference rainfall data and 21% mean relative error for rainfall rates above 0.2 mm h−1. The corresponding metrics from the operational radar-network rainfall products gave a strong underestimation (50–70%) and lower correlations (0.48–0.81). 
For the two highest flow-peak events, a hydrological model (Kinematic Local Excess Model) was forced with the different radar-rainfall estimations and in situ rain gauge precipitation data at hourly resolution, exhibiting close agreement between the XPOL and gauge-based driven runoff simulations, while the simulations obtained by the operational radar rainfall products resulted in a greatly underestimated runoff response.


2013 ◽  
Vol 17 (12) ◽  
pp. 5109-5125 ◽  
Author(s):  
J. D. Herman ◽  
J. B. Kollat ◽  
P. M. Reed ◽  
T. Wagener

Abstract. Distributed watershed models are now widely used in practice to simulate runoff responses at high spatial and temporal resolutions. Counter to this purpose, diagnostic analyses of distributed models currently aggregate performance measures in space and/or time and are thus disconnected from the models' operational and scientific goals. To address this disconnect, this study contributes a novel approach for computing and visualizing time-varying global sensitivity indices for spatially distributed model parameters. The high-resolution model diagnostics employ the method of Morris to identify evolving patterns in dominant model processes at sub-daily timescales over a six-month period. The method is demonstrated on the United States National Weather Service's Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) in the Blue River watershed, Oklahoma, USA. Three hydrologic events are selected from within the six-month period to investigate the patterns in spatiotemporal sensitivities that emerge as a function of forcing patterns as well as wet-to-dry transitions. Events with similar magnitudes and durations exhibit significantly different performance controls in space and time, indicating that the diagnostic inferences drawn from representative events will be heavily biased by the a priori selection of those events. By contrast, this study demonstrates high-resolution time-varying sensitivity analysis, requiring no assumptions regarding representative events and allowing modelers to identify transitions between sets of dominant parameters or processes a posteriori. The proposed approach details the dynamics of parameter sensitivity in nearly continuous time, providing critical diagnostic insights into the underlying model processes driving predictions. Furthermore, the approach offers the potential to identify transition points between dominant parameters and processes in the absence of observations, such as under nonstationarity.
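The method of Morris underlying these diagnostics screens parameters by "elementary effects": one-at-a-time perturbations of each parameter around random base points, summarized per parameter by the mean absolute effect. A minimal sketch (the sampling below is simplified relative to true Morris trajectories, and all names are illustrative):

```python
import random

def morris_mu_star(model, n_params, delta=0.25, n_base=10, seed=0):
    """Mean absolute elementary effect (mu*) for each parameter of a
    model defined on the unit hypercube.

    For each random base point x, each parameter i is perturbed by
    delta and the effect (f(x + delta*e_i) - f(x)) / delta recorded.
    """
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_base):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        f0 = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta
            effects[i].append((model(xp) - f0) / delta)
    return [sum(abs(e) for e in es) / len(es) for es in effects]
```

For a linear model the elementary effects are exact, so mu* recovers the coefficients; the time-varying analysis in the paper amounts to recomputing such indices over a sliding window of the simulation period.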


2016 ◽  
Vol 18 (1) ◽  
pp. 25-47 ◽  
Author(s):  
Younghyun Cho ◽  
Bernard A. Engel

Abstract A hybrid hydrologic model (lumped conceptual and distributed feature model), Distributed-Clark, is introduced to perform hydrologic simulations using spatially distributed NEXRAD quantitative precipitation estimations (QPEs). In Distributed-Clark, spatially distributed excess rainfall estimated with the Soil Conservation Service (SCS) curve number method and a GIS-based set of separated unit hydrographs are utilized to calculate a direct runoff hydrograph. This simple approach with few modeling parameters reduces calibration complexity relative to physically based distributed (PBD) models by focusing only on integrated flow estimation at watershed outlets. Case studies assessed the quality of NEXRAD stage IV QPEs for hydrologic simulation compared to gauge-only analyses. NEXRAD data were validated against rain gauge observations, and model performance was evaluated by comparing simulations forced by spatially distributed stage IV inputs against those forced by spatially averaged gauged data for four study watersheds. Results show significant differences between NEXRAD QPEs and gauged rainfall amounts, with NEXRAD data overestimating by 7.5% and 9.1% and underestimating by 15.0% and 11.4%, accompanied by spatial variability. These differences affect model performance in hydrologic applications. Rainfall–runoff simulations using spatially distributed NEXRAD stage IV QPEs demonstrate a relatively good fit [direct runoff: Nash–Sutcliffe efficiency ENS = 0.85, coefficient of determination R2 = 0.89, and percent bias (PBIAS) = 3.92%; streamflow: ENS = 0.91, R2 = 0.93, and PBIAS = 1.87%] against observed flow, as well as a better fit (increases of 3.7% in ENS and 6.0% in R2 for direct runoff) than simulations forced by spatially averaged gauged rainfall under the same calibration approach, enabling improved estimates of flow volumes and peak rates that can be underestimated when rainfall is spatially averaged.
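The SCS curve number step used to generate the spatially distributed excess rainfall has a standard closed form; a sketch in millimetre units, with the conventional initial abstraction ratio of 0.2:

```python
def scs_runoff(p_mm, cn):
    """Direct runoff depth (mm) from event rainfall p_mm by the
    SCS curve number method.

    S is the potential maximum retention (mm) implied by the curve
    number CN; Ia = 0.2 * S is the standard initial abstraction.
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0  # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

In Distributed-Clark this relation would be applied cell by cell on a CN grid; here it is shown for a single value.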


2020 ◽  
Author(s):  
Anatolii Anisimov ◽  
Vladimir Efimov ◽  
Margarita Lvova ◽  
Viktor Popov ◽  
Suleiman Mostamandi

We present a case study of an extreme rainfall event in Crimea in September 2018. The event was caused by an extratropical cyclone forming above the Black Sea. The cyclone approached the Crimean Mountains from the south, producing over 100 mm of rainfall in Yalta on September 6 and causing a flash flood. In the mountains, about 140 mm of rainfall was reported.

To study this extreme event, we use the WRF model v.4.0.1 forced by boundary conditions from the ECMWF operational analysis with a spatial resolution of approximately 10 × 10 km. The model was run for the 8 days of September 1–8, and 5 microphysical schemes were tested (WDM6, Morrison, Milbrandt, NSSL, and Thompson). Other model parameters were set identical to the CONUS configuration suite. The simulations were done for two one-way nested convection-resolving domains with spatial resolutions of 2.7 × 2.7 km and 0.9 × 0.9 km, and were verified using meteorological radar observations at Simferopol airport and GPM measurements.

All of the microphysical schemes substantially underestimate the amount of rainfall reaching the ground compared to observations. However, several schemes (Milbrandt, Morrison, and WDM6) do add value to the forecasts, producing significantly larger amounts of rainfall than the driving model, which almost completely missed the event on the local scale. WDM6 performs best at capturing the location of the squall line and reproducing the orographic enhancement of rainfall in the mountains. The amount of rainfall in the child domain was also slightly larger than in the parent one. Despite the rainfall underestimation, the simulated reflectivity patterns are in good agreement with observations, although the convective cores are wider and less intense than those observed by the radar.


2021 ◽  
Vol 13 (9) ◽  
pp. 5207
Author(s):  
Zed Zulkafli ◽  
Farrah Melissa Muharam ◽  
Nurfarhana Raffar ◽  
Amirparsa Jajarmizadeh ◽  
Mukhtar Jibril Abdi ◽  
...  

Good index selection is key to minimising basis risk in weather index insurance design. However, interannual, seasonal, and intra-seasonal hydroclimatic variabilities pose challenges in identifying robust proxies for crop losses. In this study, we systematically investigated 574 hydroclimatic indices for their relationships with yield in Malaysia’s irrigated double planting system, using the Muda rice granary as a case study. The responses of seasonal rice yields to seasonal and monthly averages and to extreme rainfall, temperature, and streamflow statistics from 16 years’ observations were examined using correlation analysis and linear regression. We found that the minimum temperature during the crop flowering to maturity phase governed yield in the drier off-season (season 1, March to July; Pearson correlation r = +0.87; coefficient of determination R2 = 74%). In contrast, the average streamflow during the crop maturity phase regulated yield in the main planting season (season 2, September to January; r = +0.82, R2 = 67%). In each case, the governing index reached its seasonal minimum during the corresponding period. Based on these findings, we recommend temperature- and water-supply-based indices as the foundations for developing insurance contracts for the rice system in northern Peninsular Malaysia.


Author(s):  
Daniel Bittner ◽  
Beatrice Richieri ◽  
Gabriele Chiogna

Abstract Uncertainties in hydrologic model outputs can arise for many reasons, such as structural, parametric and input uncertainty. Identifying the sources of uncertainty and quantifying their impacts on model results are important for appropriately reproducing hydrodynamic processes in karst aquifers and for supporting decision-making. The present study investigates the time-dependent relevance of model input uncertainties, defined as the conceptual uncertainties affecting the representation and parameterization of processes relevant for groundwater recharge, i.e. interception, evapotranspiration and snow dynamics, on the lumped karst model LuKARS. A total of nine different models are applied: three to compute interception (DVWK, Gash and Liu), three to compute evapotranspiration (Thornthwaite, Hamon and Oudin) and three to compute snow processes (Martinec, Girons Lopez and Magnusson). All input model combinations are tested for the case study of the Kerschbaum spring in Austria. The model parameters are kept constant across all combinations. While parametric uncertainties computed for the same model in previous studies do not show pronounced temporal variations, the results of the present work show that input uncertainties vary seasonally. Moreover, the input uncertainties of evapotranspiration and snowmelt are higher than the interception uncertainties. The results show that the importance of a specific process for groundwater recharge can be estimated from the respective input uncertainties. These findings have practical implications, as they can guide researchers in obtaining relevant field data to improve the representation of different processes in lumped parameter models and to support model calibration.
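The study's three-schemes-per-process design yields 27 forcing combinations (3 × 3 × 3) to run against the fixed LuKARS parameter set; the enumeration can be sketched as:

```python
from itertools import product

# The three schemes named in the abstract for each recharge-relevant process.
interception = ["DVWK", "Gash", "Liu"]
evapotranspiration = ["Thornthwaite", "Hamon", "Oudin"]
snow = ["Martinec", "Girons Lopez", "Magnusson"]

# Every combination of one scheme per process; the karst-model parameters
# are held constant across all runs, as in the study design.
combinations = list(product(interception, evapotranspiration, snow))
print(len(combinations))  # 27
```

Each tuple in `combinations` identifies one forcing configuration, so the seasonal spread of model outputs across the 27 runs is attributable to input (conceptual) uncertainty alone.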

