Large-sample based evaluation of the spatial resolution discretization of the wflow_sbm model for the CAMELS dataset

Author(s):  
Jerom Aerts ◽  
Albrecht Weerts ◽  
Willem van Verseveld ◽  
Pieter Hazenberg ◽  
Niels Drost ◽  
...  

In this study, we investigate the effect of spatial resolution discretization at 3 km, 1 km, and 200 m by evaluating the streamflow estimates of the model. A hypothesis-driven approach is used to investigate why changes in states and fluxes take place at different spatial resolutions and how they relate to model performance. These changes are evaluated in the context of landscape and climate characteristics as well as hydrological signatures, answering the research question: can landscape, climate, and hydrological characteristics dictate the appropriate spatial modelling resolution a priori?

We use the spatially distributed wflow_sbm model (Imhoff et al., 2020; code: https://zenodo.org/record/4291730) together with the CAMELS dataset (Addor et al., 2017), covering the Continental United States. The wflow_sbm model is chosen for its flexibility in the spatial resolution of the watershed discretization while maintaining run-time performance suitable for large-sample studies. This flexibility is achieved through point-scale (pedo)transfer functions (PTFs) combined with upscaling rules applied to global datasets to ensure flux matching across scales (Imhoff et al., 2020; Samaniego et al., 2010, 2017). The model relies on open datasets for parameter estimation and requires minimal calibration effort, as it is most sensitive to two model parameters: rooting depth and horizontal conductivity.

This study is carried out within the eWaterCycle framework, allowing for a FAIR-by-design research setup that is scalable in terms of case study areas and hydrological models.
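As a concrete illustration of the PTF-plus-upscaling idea referenced above (Imhoff et al., 2020; Samaniego et al., 2010), the sketch below applies a hypothetical log-linear pedotransfer function at a fine grid and aggregates the result to coarser model resolutions with a geometric mean. The coefficients, grid sizes, and choice of upscaling operator are illustrative assumptions, not the ones used in wflow_sbm.

```python
# Illustrative sketch of the MPR idea (Samaniego et al., 2010): apply a
# pedotransfer function at the fine (point) scale, then upscale the resulting
# parameter field to the model resolution with a scale-consistent operator.
# The PTF coefficients and the geometric-mean upscaling are illustrative only.
import numpy as np

def ptf_ksat(sand, clay, a=-0.6, b=0.0126, c=-0.0064):
    """Hypothetical log-linear PTF: Ksat [cm/d] from sand/clay fractions [%]."""
    return 10.0 ** (a + b * sand + c * clay)

def upscale_geometric(fine_field, block):
    """Upscale a fine-resolution field by block-wise geometric mean."""
    ny, nx = fine_field.shape
    trimmed = fine_field[: ny - ny % block, : nx - nx % block]
    blocks = trimmed.reshape(trimmed.shape[0] // block, block,
                             trimmed.shape[1] // block, block)
    return np.exp(np.log(blocks).mean(axis=(1, 3)))

rng = np.random.default_rng(0)
sand = rng.uniform(10, 80, size=(600, 600))   # ~200 m grid (illustrative)
clay = rng.uniform(5, 40, size=(600, 600))
ksat_200m = ptf_ksat(sand, clay)
ksat_1km = upscale_geometric(ksat_200m, block=5)    # 5 x 200 m = 1 km
ksat_3km = upscale_geometric(ksat_200m, block=15)   # 15 x 200 m = 3 km
print(ksat_200m.mean(), ksat_1km.mean(), ksat_3km.mean())
```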

2017 ◽  
Vol 18 (8) ◽  
pp. 2215-2225 ◽  
Author(s):  
Andrew J. Newman ◽  
Naoki Mizukami ◽  
Martyn P. Clark ◽  
Andrew W. Wood ◽  
Bart Nijssen ◽  
...  

Abstract The concepts of model benchmarking, model agility, and large-sample hydrology are becoming more prevalent in hydrologic and land surface modeling. As modeling systems become more sophisticated, these concepts can help improve modeling capabilities and understanding. In this paper, their utility is demonstrated with an application of the physically based Variable Infiltration Capacity model (VIC). The authors implement VIC for a sample of 531 basins across the contiguous United States, incrementally increase model agility, and perform comparisons to a benchmark. The use of a large-sample set allows for statistically robust comparisons and subcategorization across hydroclimate conditions. Our benchmark is a calibrated, time-stepping, conceptual hydrologic model. This model is constrained by physical relationships such as the water balance; its increased physical realism complements purely statistical benchmarks and permits physically motivated benchmarking using metrics that relate one variable to another (e.g., runoff ratio). The authors find that increasing model agility along the parameter dimension, as measured by the number of model parameters available for calibration, does increase model performance for calibration and validation periods relative to less agile implementations. However, as agility increases, transferability decreases, even for a complex model such as VIC. The benchmark outperforms VIC in even the most agile case when evaluated across the entire basin set. However, VIC meets or exceeds benchmark performance in basins with high runoff ratios (greater than ~0.8), highlighting the ability of large-sample comparative hydrology to identify hydroclimatic performance variations.
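A minimal sketch of the subcategorization step described above: score each basin for both VIC and the benchmark and split the comparison by runoff ratio. Nash-Sutcliffe efficiency is used as a placeholder skill score, and the per-basin data layout (`obs`, `sim_vic`, `sim_benchmark`, `precip`) is an assumption for illustration.

```python
# Sketch of a large-sample comparison stratified by runoff ratio.
# Fields and the 0.8 threshold follow the abstract's description loosely.
import numpy as np

def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def runoff_ratio(q, p):
    """Long-term runoff divided by precipitation (both in mm over the period)."""
    return np.sum(q) / np.sum(p)

def compare_by_runoff_ratio(basins, threshold=0.8):
    """basins: list of dicts with 'obs', 'sim_vic', 'sim_benchmark', 'precip'."""
    wet, dry = [], []
    for b in basins:
        delta = nse(b["sim_vic"], b["obs"]) - nse(b["sim_benchmark"], b["obs"])
        (wet if runoff_ratio(b["obs"], b["precip"]) > threshold else dry).append(delta)
    # Median NSE difference (VIC minus benchmark) in each hydroclimate class.
    return (np.median(wet) if wet else np.nan,
            np.median(dry) if dry else np.nan)
```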


2020 ◽  
Author(s):  
Jerom Aerts ◽  
Albrecht Weerts ◽  
Willem van Verseveld ◽  
Niels Drost ◽  
Rolf Hut ◽  
...  

Large-scale or global hydrological models (GHMs) show promise in enabling us to accurately predict floods, droughts, navigation hazards, reservoir operations, and many more water-related issues. In contrast to regional hydrological models, which have many parameters that need to be calibrated or estimated using local observation data (Sood and Smakhtin 2015), GHMs are able to simulate regions that lack observation data while applying a uniform approach for parameter estimation (Döll, Kaspar, and Lehner 2003; Widén‐Nilsson et al. 2009). Until recently, GHMs used coarse modelling grids of around 0.5 to 1 degree spatial resolution. However, due to advances in satellite data, climate data, and computational resources, GHMs are now run at higher resolutions (down to 200 metres), which raises the question of how these models can be adjusted to take advantage of the finer modelling grid.

In this study, we carry out an extensive assessment of how changes in spatial resolution affect the simulations of the Wflow SBM model for 8 basins in the Continental United States. This is done by comparing the model states and fluxes at three spatial resolutions, namely 3 km, 1 km, and 200 m. A hypothesis-driven approach is used to investigate why changes in states and fluxes take place at different spatial resolutions and how they relate to model performance. The latter is determined by validating river discharge, snow extent, soil moisture, and actual evaporation. In addition, we make use of two sets of parameters that rely on different pedotransfer functions, further investigating the role of parameterization in conjunction with changes in spatial resolution.

By carrying out this study within the eWaterCycle II framework we showcase our ability to handle large datasets (forcing and validation) while always complying with the FAIR principles. Furthermore, the study is set up such that it is scalable in terms of case study areas and hydrological models.
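The discharge part of the validation could look like the hedged sketch below, which scores each resolution's simulated discharge with the Kling-Gupta efficiency (one common choice; the abstract does not name a metric). The dictionary layout and variable names are assumptions, and the full evaluation also covers snow extent, soil moisture, and actual evaporation.

```python
# Hedged sketch: compare simulated discharge against observations at the
# three spatial resolutions using the Kling-Gupta efficiency (KGE).
import numpy as np

def kge(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def score_resolutions(obs_q, sims_by_resolution):
    """sims_by_resolution: e.g. {'3km': q3, '1km': q1, '200m': q02} (assumed)."""
    return {res: kge(sim, obs_q) for res, sim in sims_by_resolution.items()}
```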


2011 ◽  
Vol 8 (4) ◽  
pp. 6385-6417 ◽  
Author(s):  
R. Singh ◽  
T. Wagener ◽  
K. van Werkhoven ◽  
M. Mann ◽  
R. Crane

Abstract. Understanding the implications of potential future climatic conditions for hydrologic services and hazards is a crucial and current science question. The common approach to this problem is to force a hydrologic model, calibrated on historical data or using a priori parameter estimates, with future scenarios of precipitation and temperature. Recent studies suggest that the climatic regime of the calibration period is reflected in the resulting parameter estimates and that the model performance can be negatively impacted if the climate for which projections are made is significantly different from that during calibration. We address this issue by introducing a framework for probabilistic streamflow predictions in a changing climate wherein we quantify the impact of climate on model parameters. The strategy extends a regionalization approach (used for predictions in ungauged basins) by trading space-for-time to account for potential parameter variability in a future climate that is beyond the historically observed one. The developed methodology was tested in five US watersheds located in dry to wet climates using synthetic climate scenarios generated by increasing the historical mean temperature from 0 to 8 °C and by changing historical mean precipitation from −30 % to +40 % of the historical values. Validation on historical data shows that changed parameters perform better if future streamflow differs from historical by more than 25 %. We found that the thresholds of climate change after which the streamflow projections using adjusted parameters were significantly different from those using fixed parameters were 0 to 2 °C for temperature change and −10 % to 20 % for precipitation change depending upon the aridity of the watershed. Adjusted parameter sets simulate a more extreme watershed response for both high and low flows.
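The synthetic scenarios described above can be illustrated with a simple delta-change perturbation of the historical forcing; only the ranges (0 to 8 °C for temperature, −30 % to +40 % for precipitation) come from the abstract, while the step sizes below are assumptions.

```python
# Sketch of synthetic climate scenario generation by delta-change perturbation:
# additive temperature offsets and multiplicative precipitation factors applied
# to the historical forcing series.
import numpy as np

def delta_change(precip, temp, dp_frac, dt_deg):
    """Return perturbed (precip, temp) series for one scenario."""
    return precip * (1.0 + dp_frac), temp + dt_deg

dp_grid = np.arange(-0.30, 0.401, 0.10)   # -30% ... +40% (assumed 10% steps)
dt_grid = np.arange(0.0, 8.1, 2.0)        # 0 ... 8 degC (assumed 2 degC steps)
scenarios = [(dp, dt) for dp in dp_grid for dt in dt_grid]
```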


Author(s):  
Michael Withnall ◽  
Edvard Lindelöf ◽  
Ola Engkvist ◽  
Hongming Chen

We introduce Attention and Edge Memory schemes to the existing Message Passing Neural Network framework for graph convolution, and benchmark our approaches against eight different physical-chemical and bioactivity datasets from the literature. We remove the need to introduce a priori knowledge of the task and chemical descriptor calculation by using only fundamental graph-derived properties. Our approaches consistently perform on par with other state-of-the-art machine learning approaches, and set a new standard on sparse multi-task virtual screening targets. We also investigate model performance as a function of dataset preprocessing, and make some suggestions regarding hyperparameter selection.
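For readers unfamiliar with the mechanism, the following numpy sketch shows one attention-weighted message-passing step on a graph. It is a generic illustration, not the paper's exact Attention or Edge Memory formulation, and the weight matrices `w_att` and `w_msg` are assumed to be learned elsewhere.

```python
# Minimal sketch of one attention-weighted message-passing step on a graph.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_message_pass(node_feats, edge_feats, edges, w_att, w_msg):
    """node_feats: (n, d); edge_feats: (m, de); edges: list of (src, dst);
    w_att: (d+de,) attention weights; w_msg: (d+de, d) message weights."""
    n, _ = node_feats.shape
    new_feats = node_feats.copy()
    for v in range(n):
        incoming = [(u, i) for i, (u, dst) in enumerate(edges) if dst == v]
        if not incoming:
            continue
        pairs = [np.concatenate([node_feats[u], edge_feats[e]]) for u, e in incoming]
        alpha = softmax(np.array([p @ w_att for p in pairs]))     # attention weights
        messages = np.stack([p @ w_msg for p in pairs])           # per-edge messages
        new_feats[v] = np.tanh(node_feats[v] + alpha @ messages)  # weighted update
    return new_feats
```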


1982 ◽  
Vol 47 (10) ◽  
pp. 2639-2653 ◽  
Author(s):  
Pavel Moravec ◽  
Vladimír Staněk

Expressions have been derived for four possible transfer functions of a model of physical absorption of a poorly soluble gas in a packed bed column. The model is based on axially dispersed flow of gas, plug flow of liquid through stagnant and dynamic regions, and interfacial transport of the absorbed component. The obtained transfer functions have been transformed into the frequency domain, and their amplitude ratios and phase lags have been evaluated using the complex arithmetic feature of the EC-1033 computer. Two of the derived transfer functions have been found directly applicable for processing of experimental data. Of the remaining two, one is usable only with the limitation to absorption on a shallow layer of packing; the other is entirely worthless for the case of poorly soluble gases.
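The frequency-domain evaluation amounts to substituting s = iω into a transfer function and reading off the amplitude ratio |G(iω)| and the phase lag −arg G(iω). The sketch below does this for a textbook axially dispersed plug-flow transfer function as a stand-in; it is not one of the four transfer functions derived in the paper, and the Péclet number and residence time are arbitrary.

```python
# Frequency response of an illustrative transfer function via complex arithmetic.
import numpy as np

def dispersed_plug_flow_tf(s, peclet, tau):
    """Textbook form G(s) = exp( (Pe/2) * (1 - sqrt(1 + 4*s*tau/Pe)) )."""
    return np.exp(0.5 * peclet * (1.0 - np.sqrt(1.0 + 4.0 * s * tau / peclet)))

omega = np.logspace(-2, 2, 200)            # angular frequency [rad/s]
g = dispersed_plug_flow_tf(1j * omega, peclet=10.0, tau=30.0)
amplitude_ratio = np.abs(g)
phase_lag = -np.unwrap(np.angle(g))        # [rad]
```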


2021 ◽  
Vol 13 (12) ◽  
pp. 2405
Author(s):  
Fengyang Long ◽  
Chengfa Gao ◽  
Yuxiang Yan ◽  
Jinling Wang

Precise modeling of the weighted mean temperature (Tm) is critical for real-time conversion from zenith wet delay (ZWD) to precipitable water vapor (PWV) in Global Navigation Satellite System (GNSS) meteorology applications. Empirical Tm models developed with neural network techniques have been shown to perform better on the global scale; they also have fewer model parameters and are thus easy to operate. This paper aims to deepen the research on Tm modeling with neural networks, expand the application scope of Tm models, and provide global users with more solutions for the real-time acquisition of Tm. An enhanced neural network Tm model (ENNTm) was developed with globally distributed radiosonde data. Compared with other empirical models, the ENNTm has several advanced features in both model design and model performance. Firstly, the data used for modeling cover the whole troposphere rather than just the region near the Earth's surface. Secondly, ensemble learning was employed to weaken the impact of sample disturbance on model performance, and elaborate data preprocessing, including up-sampling and down-sampling, was adopted to achieve better model performance on the global scale. Furthermore, the ENNTm was designed to meet the requirements of three different application conditions by providing three sets of model parameters: Tm estimation without measured meteorological elements, Tm estimation with only measured temperature, and Tm estimation with both measured temperature and water vapor pressure. Validation against globally distributed radiosonde data shows that the ENNTm outperforms competing models from different perspectives under the same application conditions; the proposed model expands the application scope of Tm estimation and provides global users with more choices for real-time GNSS-PWV retrieval.
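For context, the ZWD-to-PWV conversion that makes Tm so important can be sketched as below, using the commonly cited Bevis et al. (1992) refraction constants; the ENNTm itself (the neural-network Tm predictor) is not reproduced here.

```python
# Hedged sketch of the real-time ZWD -> PWV conversion that motivates Tm modelling.
def pwv_from_zwd(zwd_m, tm_k, k2_prime=22.1, k3=3.739e5, rv=461.5, rho_w=1000.0):
    """Convert zenith wet delay [m] to precipitable water vapour [m].
    k2_prime [K/hPa], k3 [K^2/hPa], rv [J kg^-1 K^-1], rho_w [kg m^-3]."""
    iwv = 1.0e8 * zwd_m / (rv * (k2_prime + k3 / tm_k))   # integrated water vapour [kg m^-2]
    return iwv / rho_w                                     # height of liquid water [m]

print(pwv_from_zwd(zwd_m=0.20, tm_k=270.0) * 1000.0, "mm")  # roughly 31 mm
```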


2002 ◽  
Vol 59 (6) ◽  
pp. 938-951 ◽  
Author(s):  
Aline Philibert ◽  
Yves T Prairie

Despite the overwhelming tendency in paleolimnology to use both planktonic and benthic diatoms when inferring open-water chemical conditions, it remains questionable whether all taxa are appropriate and necessary to construct useful inference models. We examined this question using a 75-lake training set from Quebec (Canada) to assess whether model performance is affected by the deletion of benthic species. Because benthic species are known to experience very different chemical conditions than their planktonic counterparts, we hypothesized that they would introduce undesirable noise into the calibration. Surprisingly, such important variables as pH, total phosphorus (TP), total nitrogen (TN), and dissolved organic carbon (DOC) were well predicted from weighted-averaging partial least squares (WA-PLS) models based solely on benthic species. Similar results were obtained regardless of the depth of the lakes. Although the effective number of occurrences (N2) and the tolerance of species influenced the stability of the model residual error (jackknife), the number of species was the major factor responsible for the weaker inference models when based on planktonic diatoms alone. Indeed, when controlled for the number of species in WA-PLS models, individual planktonic diatom species showed superior predictive power over individual benthic species in inferring open-water chemical conditions.
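A stripped-down sketch of plain weighted averaging, the simplest relative of the WA-PLS models used above, shows how taxon optima estimated from a training set turn an assemblage into an inferred chemical value; deshrinking and the PLS components are omitted, and the array shapes are assumptions.

```python
# Sketch of weighted-averaging (WA) inference from a diatom training set.
import numpy as np

def wa_optima(abundances, env):
    """Taxon optima as abundance-weighted means of the environmental variable.
    abundances: (n_lakes, n_taxa); env: (n_lakes,)."""
    return (abundances.T @ env) / abundances.sum(axis=0)

def wa_infer(sample_abundances, optima):
    """Inferred value for one assemblage: abundance-weighted mean of the optima."""
    return (sample_abundances @ optima) / sample_abundances.sum()
```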


2011 ◽  
Vol 64 (S1) ◽  
pp. S3-S18 ◽  
Author(s):  
Yuanxi Yang ◽  
Jinlong Li ◽  
Junyi Xu ◽  
Jing Tang

Integrated navigation using multiple Global Navigation Satellite Systems (GNSS) is beneficial to increase the number of observable satellites, alleviate the effects of systematic errors and improve the accuracy of positioning, navigation and timing (PNT). When multiple constellations and multiple frequency measurements are employed, the functional and stochastic models as well as the estimation principle for PNT may differ. Therefore, the commonly used definition of "dilution of precision (DOP)", based on least squares (LS) estimation and unified functional and stochastic models, is no longer applicable. In this paper, three types of generalised DOPs are defined. The first type of generalised DOP is based on the error influence function (IF) of pseudo-ranges that reflects the geometry strength of the measurements, the error magnitude and the estimation risk criteria. When least squares estimation is used, the first type of generalised DOP is identical to the one commonly used. In order to define the first type of generalised DOP, an IF of signal-in-space (SIS) errors on the parameter estimates of PNT is derived. The second type of generalised DOP is defined based on the functional model with additional systematic parameters induced by the compatibility and interoperability problems among different GNSS systems. The third type of generalised DOP is defined based on Bayesian estimation, in which the a priori information of the model parameters is taken into account. This is suitable for evaluating the precision of kinematic positioning or navigation. Different types of generalised DOPs are suitable for different PNT scenarios, and an example of the calculation of these DOPs for multi-GNSS systems including GPS, GLONASS, Compass and Galileo is given. New observation equations of Compass and GLONASS that may contain additional parameters for interoperability are specifically investigated. The example shows that if the interoperability of multi-GNSS is not fulfilled, the increased number of satellites will not significantly reduce the generalised DOP value. Furthermore, outlying measurements will not change the original DOP, but will change the first type of generalised DOP, which includes a robust error IF. A priori information of the model parameters will also reduce the DOP.
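The structural difference between the classical LS-based DOP and the Bayesian (third) type can be sketched as follows; the generalised definitions in the paper are richer (influence functions, inter-system parameters), so this only shows where observation weights and a priori parameter information enter the normal matrix.

```python
# Sketch contrasting classical DOP with a Bayesian-style DOP that includes
# a priori parameter weights. Matrices are assumed to be well conditioned.
import numpy as np

def classical_dop(a):
    """a: design matrix (unit line-of-sight vectors plus a receiver-clock column)."""
    q = np.linalg.inv(a.T @ a)
    return np.sqrt(np.trace(q))

def bayesian_dop(a, p_obs, p_prior):
    """DOP with observation weight matrix p_obs and a priori parameter weights p_prior."""
    q = np.linalg.inv(a.T @ p_obs @ a + p_prior)
    return np.sqrt(np.trace(q))
```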


Author(s):  
Stephen A Solovitz

Abstract Following volcanic eruptions, forecasters need accurate estimates of mass eruption rate (MER) to appropriately predict the downstream effects. Most analyses use simple correlations or models based on large eruptions at steady conditions, even though many volcanoes feature significant unsteadiness. To address this, a superposition model is developed based on a technique used for spray injection applications, which predicts plume height as a function of the time-varying exit velocity. This model can be inverted, providing estimates of MER using field observations of a plume. The model parameters are optimized using laboratory data for plumes with physically-relevant exit profiles and Reynolds numbers, resulting in predictions that agree to within 10% of measured exit velocities. The model performance is examined using a historic eruption from Stromboli with well-documented unsteadiness, again providing MER estimates of the correct order of magnitude. This method can provide a rapid alternative for real-time forecasting of small, unsteady eruptions.
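As a rough illustration of the inversion idea, the sketch below inverts a steady power-law rise relation of the kind the superposition model generalises, using the commonly cited Mastin et al. (2009) coefficients and an assumed dense-rock density; the paper's model additionally accounts for time-varying exit velocity, which this sketch does not capture.

```python
# Hedged sketch: estimate mass eruption rate (MER) from an observed plume height
# by inverting a steady power-law rise relation (H_km ~ 2.00 * V**0.241, with V
# the dense-rock-equivalent volume flux in m^3/s). Constants are commonly cited
# values, not the paper's superposition model.
def volume_flux_from_height(h_km, k=2.00, a=0.241):
    """Invert H = k * V**a for the DRE volume flux V [m^3/s]."""
    return (h_km / k) ** (1.0 / a)

def mer_from_height(h_km, magma_density=2500.0):
    """Mass eruption rate [kg/s]; the DRE density value is an assumption."""
    return magma_density * volume_flux_from_height(h_km)

print(f"{mer_from_height(10.0):.2e} kg/s")  # order-of-magnitude estimate for a 10 km plume
```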


2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modeling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, where some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash–Sutcliffe; percentage bias, PBIAS; and the ratio between the root mean square error and the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows constructing a more streamlined, subsequent 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 and the final Model 15 with HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.
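The three objective functions named above can be written compactly as follows; `eps` in the logarithmic Nash-Sutcliffe is an assumed small offset to guard against zero flows.

```python
# Sketch of the three objective functions used to rank the model variants:
# logarithmic Nash-Sutcliffe efficiency, percentage bias (PBIAS), and RSR
# (RMSE divided by the standard deviation of the observations).
import numpy as np

def log_nse(sim, obs, eps=1e-6):
    ls, lo = np.log(np.asarray(sim, float) + eps), np.log(np.asarray(obs, float) + eps)
    return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

def pbias(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def rsr(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2)) / obs.std()
```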

