Spatial bootstrapping for model-free estimation of subcatchment parameter uncertainty for a semi-distributed rainfall runoff model

Author(s):  
Everett Snieder ◽  
Usman Khan

<p>Semi-distributed rainfall runoff models are widely used in hydrology, offering a compromise between the computational efficiency of lumped models and the representation of spatial heterogeneity offered by fully distributed models. In semi-distributed models, the catchment is divided into subcatchments, which serve as the basis for aggregating spatial characteristics. During model development, uncertainty is usually estimated from the literature; however, subcatchment uncertainty is closely related to subcatchment size and the level of spatial heterogeneity. Currently, there is no widely accepted, systematic method for determining subcatchment size; typically, subcatchment discretisation is a function of the spatiotemporal resolution of the available data. In our research, we evaluate the relationship between lumped parameter uncertainty and subcatchment size. Models with small subcatchments are expected to have low spatial uncertainty, as the spatial heterogeneity per subcatchment is also low; as subcatchment size increases, so does spatial uncertainty. Our objectives are to study the trade-off between subcatchment size, parameter uncertainty, and computational expense, and to outline a systematic and precise framework for subcatchment discretisation. A proof of concept is presented using the Stormwater Management Model (EPA-SWMM) platform to study a semi-urban catchment in Southwestern Ontario, Canada. Automated model creation is used to build catchment models with varying subcatchment sizes. For each model variation, uncertainty is estimated using spatial statistical bootstrapping. Applying bootstrapping to the spatial parameters directly provides a model-free method for calculating the uncertainty of sample estimates. 
A Monte Carlo simulation is used to propagate uncertainty through the model, and spatial resolution is assessed using performance criteria including the percentage of observations captured by the uncertainty envelope, the mean uncertainty envelope width, and rank histograms. The computational expense of simulations is tracked across the varying spatial resolutions achieved through subcatchment discretisation. Initial results suggest that uncertainty estimates often disagree with typical values listed in the literature and vary significantly with subcatchment size; this has significant implications for model calibration.</p>
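The spatial bootstrapping step can be sketched in a minimal, model-free form. The sketch below is illustrative only: the parcel values are synthetic and `bootstrap_parameter_se` is a hypothetical helper, not code from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_parameter_se(values, n_boot=1000):
    """Bootstrap standard error of an areal-mean subcatchment parameter.

    `values` holds the spatial samples (e.g. per-parcel imperviousness)
    falling inside one subcatchment; resampling them with replacement
    approximates the sampling distribution of the subcatchment-mean
    estimate without assuming any parametric model.
    """
    values = np.asarray(values, dtype=float)
    boot_means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return values.mean(), boot_means.std(ddof=1)

# Hypothetical subcatchment: 50 parcels with fractional imperviousness
parcels = rng.uniform(0.2, 0.8, size=50)
mean, se = bootstrap_parameter_se(parcels)
```

Repeating this resampling per subcatchment as discretisation coarsens exposes how the standard error of the areal-mean parameter grows with subcatchment size.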

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Mojtaba Sadeghi ◽  
Phu Nguyen ◽  
Matin Rahnamay Naeini ◽  
Kuolin Hsu ◽  
Dan Braithwaite ◽  
...  

Abstract. Accurate long-term global precipitation estimates, especially for heavy precipitation rates, at fine spatial and temporal resolutions are vital for a wide variety of climatological studies. Most of the available operational precipitation estimation datasets provide either high spatial resolution with short-term duration estimates or lower spatial resolution with long-term duration estimates. Furthermore, previous research has stressed that most of the available satellite-based precipitation products show poor performance in capturing extreme events at high temporal resolution. Therefore, there is a need for a precipitation product that reliably detects heavy precipitation rates at fine spatiotemporal resolution over a longer period of record. Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System-Climate Data Record (PERSIANN-CCS-CDR) is designed to address these limitations. This dataset provides precipitation estimates at 0.04° spatial and 3-hourly temporal resolution from 1983 to the present over the global domain of 60°S to 60°N. Evaluations of PERSIANN-CCS-CDR and PERSIANN-CDR against gauge and radar observations show the better performance of PERSIANN-CCS-CDR in representing the spatiotemporal resolution, magnitude, and spatial distribution patterns of precipitation, especially for extreme events.


Water ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1456
Author(s):  
Kee-Won Seong ◽  
Jang Hyun Sung

An oscillatory S-curve causes unexpected fluctuations in a unit hydrograph (UH) of desired duration or an instantaneous UH (IUH), which may affect the constraints for hydrologic stability. The Savitzky–Golay smoothing and differentiation filter (SG filter), on the other hand, is a digital filter known to smooth data without distorting the signal tendency. The present study proposes a method based on the SG filter to cope with oscillatory S-curves. Compared to previous conventional methods, applying the SG filter to an S-curve was shown to drastically reduce the oscillation problems in the UH and IUH. In this method, the SG filter parameters are selected to give the minimum influence on smoothing and differentiation. Based on runoff reproduction results and performance criteria, it appears that the SG filter performed both smoothing and differentiation without remarkable variation in hydrograph properties such as peak or time-to-peak. The IUH, UH, and S-curve were estimated using storm data from two watersheds. The reproduced runoffs showed high levels of model performance criteria. In addition, the analyses of two other watersheds revealed that small watershed areas may experience scale problems. The proposed method is believed to be valuable when error-prone data are involved in analyzing the linear rainfall–runoff relationship.
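The core move, smoothing an oscillatory S-curve and differentiating it in the same least-squares fit, can be illustrated with SciPy's `savgol_filter` on a synthetic sigmoid S-curve. The window and polynomial order below are illustrative choices, not the values selected in the study:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic oscillatory S-curve: a smooth sigmoid plus small noise
t = np.arange(0.0, 48.0, 1.0)                      # time (h)
s_true = 1.0 / (1.0 + np.exp(-(t - 12.0) / 3.0))
rng = np.random.default_rng(0)
s_obs = s_true + 0.02 * rng.standard_normal(t.size)

# Smooth the S-curve; then, with deriv=1, the same local polynomial
# fit returns the smoothed first derivative, i.e. an IUH estimate.
window, order = 11, 3
s_smooth = savgol_filter(s_obs, window, order)
iuh = savgol_filter(s_obs, window, order, deriv=1, delta=1.0)
```

Because smoothing and differentiation come from one polynomial fit, the derivative does not re-amplify the noise the way finite differencing of the raw S-curve would.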


2020 ◽  
Vol 24 (4) ◽  
pp. 2061-2081 ◽  
Author(s):  
Xudong Zhou ◽  
Jan Polcher ◽  
Tao Yang ◽  
Ching-Sheng Huang

Abstract. Ensemble estimates based on multiple datasets are frequently applied once many datasets are available for the same climatic variable. An uncertainty estimate based on the difference between the ensemble datasets is usually provided along with the ensemble mean estimate to show to what extent the ensemble members are consistent with each other. However, one fundamental flaw of classic uncertainty estimates is that only the uncertainty in one dimension (either the temporal variability or the spatial heterogeneity) can be considered, whereas the variation along the other dimension is dismissed due to limitations in the algorithms for classic uncertainty estimates, resulting in an incomplete assessment of the uncertainties. This study introduces a three-dimensional variance partitioning approach and proposes a new uncertainty estimate (Ue) that includes the data uncertainties at both spatial and temporal scales. The new approach avoids pre-averaging in either of the spatiotemporal dimensions and, as a result, the Ue estimate is around 20 % higher than the classic uncertainty metrics. The deviation of Ue from the classic metrics is apparent for regions with strong spatial heterogeneity and where the variations differ significantly between temporal and spatial scales. This shows that classic metrics underestimate the uncertainty through averaging, which means a loss of information about the variations across spatiotemporal scales. Decomposing the formula for Ue shows that Ue integrates four different variations across the ensemble dataset members, while only two of these components are represented in the classic uncertainty estimates. This decomposition explains the correlation as well as the differences between the newly proposed Ue and the two classic uncertainty metrics. The new approach is implemented and analysed with multiple precipitation products of different types (e.g. gauge-based products, merged products and GCMs) which contain different sources of uncertainty with different magnitudes. Ue of the gauge-based precipitation products is the smallest, while Ue of the other products is generally larger because other uncertainty sources are included and the constraints of the observations are not as strong as in gauge-based products. This new three-dimensional approach is flexible in its structure and particularly suitable for a comprehensive assessment of multiple datasets over large regions within any given period.
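The contrast between a pre-averaged classic metric and the no-pre-averaging idea behind Ue can be sketched on a toy ensemble. This mirrors the principle only, not the paper's exact four-component decomposition; the array shape and distribution are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical precipitation ensemble: (member, time step, grid cell)
ens = rng.gamma(2.0, 2.0, size=(5, 120, 40))

# Classic metric: pre-average over space, then take the member spread
classic = ens.mean(axis=2).std(axis=0, ddof=1).mean()

# Ue-style metric: member spread computed point by point in both time
# and space, with no pre-averaging, aggregated only at the very end
ue = np.sqrt((ens.std(axis=0, ddof=1) ** 2).mean())
```

Spatial averaging cancels spatially uncorrelated disagreement between members before the spread is ever measured, which is why the pre-averaged metric comes out smaller than the pointwise one.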


2020 ◽  
Author(s):  
Jimmy C. Yang ◽  
Angelique C. Paulk ◽  
Sang Heon Lee ◽  
Mehran Ganji ◽  
Daniel J. Soper ◽  
...  

Abstract. Objective: Interictal discharges (IIDs) and high frequency oscillations (HFOs) are neurophysiologic biomarkers of epilepsy. In this study, we use custom poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) microelectrodes to better understand their microscale dynamics. Methods: Electrodes with spatial resolution down to 50 µm were used to record intraoperatively in 30 subjects. For IIDs, putative spatiotemporal paths were generated by peak-tracking, followed by clustering. For HFOs, repeating patterns were elucidated by clustering similar time windows. Fast events, consistent with multi-unit activity (MUA), were covaried with either IIDs or HFOs. Results: IIDs seen across the entire array were detected in 93% of subjects. Local IIDs, observed across <50% of the array, were seen in 53% of subjects. IIDs appeared to travel across the array in specific paths, and HFOs appeared in similar repeated spatial patterns. Finally, microseizure events were identified spanning 50–100 µm. HFOs covaried with MUA, but not with IIDs. Conclusions: Overall, these data suggest micro-domains of irritable cortex that form part of an underlying pathologic architecture that contributes to the seizure network. Significance: Microelectrodes in cases of human epilepsy can reveal dynamics that are not seen by conventional electrocorticography and point to new possibilities for their use in the diagnosis and treatment of epilepsy. Highlights: PEDOT:PSS microelectrodes with at least 50 µm spatial resolution uniquely reveal spatiotemporal patterns of markers of epilepsy; high spatiotemporal resolution allows interictal discharges to be tracked and reveals cortical domains involved in microseizures; high frequency oscillations detected by microelectrodes demonstrate localized clustering on the cortical surface.


Author(s):  
Robert A. Lazenby ◽  
Ryan J. White

This review discusses a broad range of recent advances (2013-2017) in chemical imaging using electrochemical methods, with a particular focus on techniques that have been applied to study cellular processes, or techniques that show promise for use in this field in the future. Non-scanning techniques such as microelectrode arrays (MEAs) offer high time-resolution (< 10 ms) imaging, however at reduced spatial resolution. In contrast, scanning electrochemical probe microscopies (SEPMs) offer higher spatial resolution (as low as a few nm per pixel) imaging, with images collected typically over many minutes. Recent significant research efforts to improve the spatial resolution of SEPMs using nanoscale probes, and to improve the temporal resolution using fast scanning, have resulted in movie (multiple frame) imaging with frame rates as low as a few seconds per image. Many SEPM techniques lack chemical specificity or have poor selectivity (defined by the choice of applied potential for redox-active species). This can be improved using multifunctional probes, ion-selective electrodes and tip-integrated biosensors, although additional effort may be required to preserve sensor performance after miniaturization of these probes. We discuss advances to the field of electrochemical imaging, and technological developments which are anticipated to extend the range of processes that can be studied. This includes imaging cellular processes with increased sensor selectivity and at a much higher spatiotemporal resolution than has previously been customary.


2020 ◽  
Vol 21 (9) ◽  
pp. 2023-2039
Author(s):  
Dikra Khedhaouiria ◽  
Stéphane Bélair ◽  
Vincent Fortin ◽  
Guy Roy ◽  
Franck Lespinas

Abstract. Consistent and continuous fields provided by precipitation analyses are valuable for hydrometeorological applications and land data assimilation modeling, among others. Providing uncertainty estimates is a logical step in the analysis development, and a consistent approach to reach this objective is the production of an ensemble analysis. In the present study, a 6-h High-Resolution Ensemble Precipitation Analysis (HREPA) was developed for the domain covering Canada and the northern part of the contiguous United States. The data assimilation system is the same as the Canadian Precipitation Analysis (CaPA) and is based on optimal interpolation (OI). Precipitation from the Canadian national 2.5-km atmospheric prediction system constitutes the background field of the analysis, while at-site records and radar quantitative precipitation estimates (QPE) compose the observation datasets. By using stochastic perturbations, multiple random realizations of the observations and background field were generated to subsequently feed the data assimilation system and provide 24 HREPA members plus one control run. Based on one summer and one winter experiment, HREPA capabilities in terms of bias and skill were verified against at-site observations for different climatic regions. The results indicated HREPA's reliability and skill for almost all types of precipitation events in winter, and for precipitation of medium intensity in summer. For both seasons, HREPA displayed resolution and sharpness. The overall good performance of HREPA and the lack of ensemble precipitation analyses (PA) at such spatiotemporal resolution in the literature motivate further investigation of transitional seasons and more advanced perturbation approaches.
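The optimal-interpolation update at the heart of CaPA-style analyses takes the generic form x_a = x_b + K(y − H x_b) with gain K = B Hᵀ(H B Hᵀ + R)⁻¹. The sketch below uses invented covariances and a tiny 1-D grid, not CaPA's operational settings:

```python
import numpy as np

# Generic OI update on a toy 1-D grid with two gauges
n_grid, n_obs = 6, 2
x_b = np.full(n_grid, 5.0)                    # background precip (mm)
H = np.zeros((n_obs, n_grid))
H[0, 1] = 1.0                                 # gauge at grid cell 1
H[1, 4] = 1.0                                 # gauge at grid cell 4
y = np.array([7.0, 4.0])                      # gauge observations (mm)

# Distance-based background-error covariance; uncorrelated obs errors
idx = np.arange(n_grid)
B = 1.5 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
R = 0.5 * np.eye(n_obs)

# Gain and analysis: each cell is nudged toward nearby innovations
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
```

An ensemble in the HREPA spirit would repeat this update with stochastically perturbed `y` and `x_b`, one draw per member.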


Water ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 2324
Author(s):  
Peng Lin ◽  
Pengfei Shi ◽  
Tao Yang ◽  
Chong-Yu Xu ◽  
Zhenya Li ◽  
...  

Hydrological models for regions characterized by complex runoff generation processes suffer from a serious weakness: a delicate hydrological balance, triggered by prolonged wet or dry underlying conditions and variable extreme rainfall, makes the rainfall-runoff process difficult to simulate with traditional models. To this end, this study develops a novel vertically mixed model for complex runoff estimation that considers both runoff generation in excess of infiltration at the soil surface and runoff in excess of storage capacity in the subsurface. Different from traditional models, the two mechanisms are coupled through a statistical approach proposed in this study, which considers the spatial heterogeneity of water transport and runoff generation. The model combines the advantage of distributed models in describing spatial heterogeneity with the merits of lumped conceptual models for convenient and accurate flood forecasting. The model is tested through comparison with four other models in three catchments in China. The Nash–Sutcliffe efficiency coefficient and the ratio of qualified results increase markedly. Results show that the model performs well in simulating various floods, providing a beneficial means of simulating floods in regions with complex runoff generation processes.
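A single time step of the vertically mixed idea, infiltration excess at the surface combined with storage excess below, might look as follows. This is a heavily simplified sketch with hypothetical parameter names; it omits the statistical coupling for spatial heterogeneity that the study actually proposes:

```python
def mixed_runoff(p, f_cap, w, wm):
    """Toy vertically mixed runoff step (illustrative, not the paper's model).

    p     -- rainfall depth this step (mm)
    f_cap -- infiltration capacity (mm); excess rainfall runs off the surface
    w     -- current soil-moisture storage (mm)
    wm    -- storage capacity (mm); excess storage becomes subsurface runoff
    """
    rs = max(0.0, p - f_cap)       # infiltration-excess surface runoff
    infil = p - rs                 # water entering the soil column
    w_new = w + infil
    rg = max(0.0, w_new - wm)      # storage-excess (saturation) runoff
    w_new = min(w_new, wm)
    return rs, rg, w_new

# Intense burst on dry-ish soil: surface mechanism dominates
rs, rg, w = mixed_runoff(p=30.0, f_cap=10.0, w=90.0, wm=100.0)
```

The point of mixing the two mechanisms is that either one can dominate: intense bursts trigger the surface term even on unsaturated soil, while prolonged wet spells fill storage and trigger the subsurface term under gentle rain.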

