Spatio-Temporal Hydrological Model Structure and Parametrization Analysis

2021 ◽  
Vol 9 (5) ◽  
pp. 467
Author(s):  
Mostafa Farrag ◽  
Gerald Corzo Perez ◽  
Dimitri Solomatine

Many grid-based spatial hydrological models suffer from the complexity of setting up a coherent spatial structure to calibrate such a complex, highly parameterized system. Essential aspects of model-building, such as the spatial resolution, the limitations of the routing equation, and the calibration of spatial parameters and their influence on modeling results, are decisions that are often made without adequate analysis. In this research, we experimentally analyze the grid discretization level, the integration of processes, and the routing concepts. The HBV-96 model is set up for each cell, and the cells are later integrated into an interlinked modeling system (Hapi). The Jiboa River Basin in El Salvador is used as a case study. The first concept tested is the model structure's temporal response, which is highly linked to the runoff dynamics. By changing the runoff generation model description, we explore the responses to events. Two routing models are considered: Muskingum, which routes the runoff from each cell along the river network, and Maxbas, which routes the runoff directly to the outlet. The second concept is the spatial representation, for which the model is built and tested at different spatial resolutions (500 m, 1 km, 2 km, and 4 km). The results show that the sensitivity to spatial resolution is highly linked to the routing method; routing sensitivity influenced model performance more than the spatial discretization did, and allowing for coarser discretization makes the model simpler and computationally faster. A slight performance improvement is gained by using different parameter values for each cell. The 2 km cell size was found to yield the lowest model error. The proposed hydrological modeling codes have been published as open-source.
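The Muskingum scheme mentioned above can be sketched in a few lines. This is a generic illustration, not the Hapi implementation; the storage constant K, weighting factor X, and time step dt are invented example values, not parameters calibrated for the Jiboa basin:

```python
def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    """Route an inflow hydrograph through a reach with the classic
    Muskingum scheme.  K (storage constant, h), X (weighting factor,
    dimensionless) and dt (time step, h) are illustrative values."""
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom
    # The three coefficients sum to 1, so steady inflow passes unchanged.
    outflow = [inflow[0]]  # assume initial outflow equals initial inflow
    for t in range(1, len(inflow)):
        outflow.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1])
    return outflow
```

Because the coefficients sum to one, the scheme conserves a steady flow and attenuates and delays a flood peak, which is exactly the behavior contrasted here with Maxbas's direct routing to the outlet.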

Water ◽  
2018 ◽  
Vol 10 (9) ◽  
pp. 1169 ◽  
Author(s):  
Adrián Sucozhañay ◽  
Rolando Célleri

In places with high spatiotemporal rainfall variability, such as mountain regions, input data can be a large source of uncertainty in hydrological modeling. Here we evaluate the impact of rainfall estimation on runoff modeling in a small páramo catchment located in the Zhurucay Ecohydrological Observatory (7.53 km2) in the Ecuadorian Andes, using a network of 12 rain gauges. First, the HBV-light semidistributed model was analyzed in order to select the best model structure to represent the observed runoff and its subflow components. Then, we developed six rainfall monitoring scenarios to evaluate the impact of spatial rainfall estimation on model performance and parameters. Finally, we explored how a model calibrated with far-from-perfect rainfall estimation would perform using new, improved rainfall data. Results show that while all model structures were able to represent the overall runoff, the standard model structure outperformed the others in simulating subflow components. Model performance (NSeff) improved with the quality of the spatial rainfall estimation, from 0.31 to 0.80 and from 0.14 to 0.73 for the calibration and validation periods, respectively. Finally, improved rainfall data enhanced the runoff simulation of a model calibrated with scarce rainfall data (NSeff 0.14) from 0.49 to 0.60. These results confirm that in mountain regions model uncertainty is highly related to spatial rainfall and, therefore, to the number and location of rain gauges.
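The NSeff values quoted above are the Nash–Sutcliffe efficiency, the standard goodness-of-fit metric for runoff simulations. A minimal implementation reads:

```python
def nseff(observed, simulated):
    """Nash–Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    performs no better than the mean of the observations, and negative
    values mean it performs worse than that mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den
```

With this convention, the jump from NSeff 0.14 to 0.73 reported for the validation period corresponds to a large reduction in squared simulation error relative to the variance of the observed flows.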


2014 ◽  
Vol 11 (10) ◽  
pp. 12137-12186 ◽  
Author(s):  
P. Hublart ◽  
D. Ruelland ◽  
A. Dezetter ◽  
H. Jourde

Abstract. The use of lumped, conceptual models in hydrological impact studies requires placing more emphasis on the uncertainty arising from deficiencies and/or ambiguities in the model structure. This study provides an opportunity to combine a multiple-hypothesis framework with a multi-criteria assessment scheme to reduce structural uncertainty in the conceptual modeling of a meso-scale Andean catchment (1515 km2) over a 30-year period (1982–2011). The modeling process was decomposed into six model-building decisions related to the following aspects of the system behavior: snow accumulation and melt, runoff generation, redistribution and delay of water fluxes, and natural storage effects. Each of these decisions was provided with a set of alternative modeling options, resulting in a total of 72 competing model structures. These structures were calibrated using the concept of Pareto optimality with three criteria pertaining to streamflow simulations and one to the seasonal dynamics of snow processes. The results were analyzed in the four-dimensional space of performance measures using a fuzzy c-means clustering technique and a differential split sample test, leading to the identification of 14 equally acceptable model hypotheses. A filtering approach was then applied to these best-performing structures in order to minimize the overall uncertainty envelope while maximizing the number of enclosed observations. This led to retaining 8 model hypotheses as a representation of the minimum structural uncertainty that could be obtained with this modeling framework. Future work to better consider model predictive uncertainty should include a proper assessment of parameter equifinality and data errors, as well as the testing of new or refined hypotheses to allow for the use of additional auxiliary observations.
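The Pareto-optimality concept used in the calibration can be illustrated with a small non-dominated filter. This is a generic sketch, not the study's calibration code; each point is a tuple of error criteria (all to be minimised) for one candidate model structure:

```python
def pareto_front(points):
    """Return the non-dominated points: a point is dominated if some
    other point is at least as good on every criterion and strictly
    better on at least one (all criteria to be minimised)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

Applied to the four-dimensional performance space described above, such a filter yields the set of structures among which no single one is best on all criteria simultaneously.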


2020 ◽  
Author(s):  
Axel Bronstert ◽  
Tobias Pilz ◽  
Till Francke ◽  
Gabriele Baroni

In the field of hydrological modeling, many alternative mathematical representations of natural processes exist. To choose specific process formulations when building a hydrological model is therefore associated with a high degree of ambiguity and subjectivity. Identifiability analysis may provide guidance by constraining the a priori range of alternatives based on observations. In this work, a flexible simulation environment is used to build a process-based hydrological model with alternative process representations, numerical integration schemes, and model parametrizations in an integrated manner. The flexible simulation environment is coupled with an approach for dynamic identifiability analysis. The objective is to investigate the applicability of the coupled framework to identify the most adequate model structure. It turned out that identifiability of model structure varies in space and time, driven by the meteorological and hydrological characteristics of the study area. Moreover, the most accurate numerical solver is often not the best performing solution. This is possibly influenced by correlation and compensation effects among process representation, numerical solver, and parametrization. Overall, the proposed coupled framework proved to be applicable for the identification of adequate process-based model structures and is therefore a useful diagnostic tool for model building and hypotheses testing.


2021 ◽  
Author(s):  
Daniela Peredo Ramirez ◽  
Maria-Helena Ramos ◽  
Vazken Andréassian ◽  
Ludovic Oudin

High-impact flood events in the Mediterranean region are often the result of a combination of local climate and topographic characteristics of the region. Therefore, the way runoff generation processes are represented in hydrological models is a key factor in simulating and forecasting floods. In this study, we adapt an existing model in order to increase its versatility to simulate flood events occurring under different conditions: during or after wet periods and after long and dry summer periods. The model adaptation introduces a dependency on rainfall intensity in the production function. The impact of this adaptation is analysed considering model performance over selected flood events and also over a continuous 10-year period of flows. The event-based assessment showed that the adapted model structure performs better than or equal to the original model structure in terms of differences in the timing of peak discharges, regardless of the season of the year when the flood occurs. The most important improvement was observed in the simulation of the magnitude of the flood peaks. A visualisation of model versatility is proposed, which allows detecting the time steps when the new model structure tends to behave more similarly to or differently from the original model structure in terms of runoff production. Overall, the results show the potential of the proposed model adaptation to simulate floods originating from different hydrological processes and the value of increasing hydrological model versatility to simulate extreme events.
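Purely as an illustration of the kind of adaptation described, a hypothetical production function whose runoff coefficient depends on both store filling and rainfall intensity might look as follows. The functional form and the parameter k are invented for this sketch and are not taken from the study:

```python
import math

def effective_rainfall(p, s, s_max, k=0.1):
    """Hypothetical production function: the runoff coefficient grows
    with store filling (s / s_max) and, per the adaptation described
    above, also with rainfall intensity p.  Both the functional form
    and k are invented for this sketch."""
    base = (s / s_max) ** 2                 # classic soil-moisture control
    boost = 1.0 - math.exp(-k * p)          # intensity-dependent term
    coeff = min(1.0, base + (1.0 - base) * boost)
    return coeff * p                        # effective (runoff-producing) rain
```

A function of this shape produces more runoff per millimetre during intense Mediterranean downpours even when the soil store is still dry after summer, which is the behaviour the adaptation targets.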


1999 ◽  
Vol 39 (9) ◽  
pp. 1-8 ◽  
Author(s):  
P. Harremoës ◽  
H. Madsen

Where is the balance between simplicity and complexity in model prediction for urban drainage structures? The calibration/verification approach to testing model performance gives an exaggerated sense of certainty. Frequently, the model structure and the parameters are not identifiable by calibration/verification on the basis of the available data series, which introduces elements of sheer guessing, unless the universality of the model is based on induction, i.e., experience from the sum of all previous investigations. There is a need to deal more explicitly with uncertainty and to incorporate it in the design, operation and control of urban drainage structures.


1997 ◽  
Vol 36 (5) ◽  
pp. 177-184
Author(s):  
Lennart Heip ◽  
Johan Van Assel ◽  
Patrick Swartenbroekx

Within the framework of an EC-funded SPRINT project, a sewer flow quality model of a typical rural Flemish catchment was set up. The applicability of such a model is demonstrated. Furthermore, a methodology for model building, data collection, and model calibration and verification is proposed. To this end, an intensive nine-month measuring campaign was undertaken. The hydraulic behaviour of the sewer network was continuously monitored during those nine months. During both dry weather flow (DWF) and wet weather flow (WWF), a number of sewage samples were taken and analysed for BOD, COD, TKN, TP, and TSS. This resulted in 286 WWF and 269 DWF samples. The model was calibrated and verified with these data. Finally, a software-independent methodology for interpreting the model results is proposed.


2020 ◽  
Vol 41 (S1) ◽  
pp. s521-s522
Author(s):  
Debarka Sengupta ◽  
Vaibhav Singh ◽  
Seema Singh ◽  
Dinesh Tewari ◽  
Mudit Kapoor ◽  
...  

Background: The rising trend of antibiotic resistance imposes a heavy burden on healthcare, both clinically and economically (US$55 billion), with 23,000 estimated annual deaths in the United States as well as increased length of stay and morbidity. Machine-learning–based methods have, of late, been used to leverage patients’ clinical histories and demographic information to predict antimicrobial resistance. We developed a machine-learning model ensemble that maximizes the accuracy of such a drug-sensitivity versus resistivity classification system compared with existing best-practice methods. Methods: We first performed a comprehensive analysis of the association between infecting bacterial species and patient factors, including patient demographics, comorbidities, and certain healthcare-specific features. We leveraged the predictable nature of these complex associations to infer patient-specific antibiotic sensitivities. Various base learners, including k-NN (k-nearest neighbors) and gradient boosting machine (GBM), were used to train an ensemble model for confident prediction of antimicrobial susceptibilities. Base-learner selection and model performance evaluation were performed carefully using a variety of standard metrics, namely accuracy, precision, recall, F1 score, and Cohen κ. Results: We validated performance on the MIMIC-III database, which harbors deidentified clinical data of 53,423 distinct patient admissions to the intensive care units (ICUs) of the Beth Israel Deaconess Medical Center in Boston, Massachusetts, between 2001 and 2012. From ~11,000 positive cultures, we used 4 major specimen types, namely urine, sputum, blood, and pus swab, to evaluate model performance. Figure 1 shows the receiver operating characteristic (ROC) curves obtained for bloodstream infection cases upon model building and prediction on a 70:30 split of the data. We obtained area under the curve (AUC) values of 0.88, 0.92, 0.92, and 0.94 for urine, sputum, blood, and pus swab samples, respectively. Figure 2 shows the comparative performance of our proposed method as well as some off-the-shelf classification algorithms. Conclusions: Highly accurate, patient-specific predictive antibiogram (PSPA) data can aid clinicians significantly in antibiotic recommendation in the ICU, thereby accelerating patient recovery and curbing antimicrobial resistance. Funding: This study was supported by Circle of Life Healthcare Pvt. Ltd. Disclosures: None
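One common way to combine base learners such as k-NN and GBM into an ensemble is to average their predicted class probabilities (soft voting). A minimal sketch of that combination step, not the authors' ensemble code:

```python
def soft_vote(prob_lists, weights=None):
    """Combine the positive-class probabilities predicted by several
    base learners (e.g. k-NN and GBM) into one ensemble score per
    sample by (weighted) averaging.  prob_lists is one probability
    list per model, all of equal length."""
    n_models = len(prob_lists)
    weights = weights or [1.0 / n_models] * n_models
    n_samples = len(prob_lists[0])
    return [
        sum(w * probs[i] for w, probs in zip(weights, prob_lists))
        for i in range(n_samples)
    ]
```

The averaged scores can then be thresholded for a sensitivity-versus-resistivity call, or fed directly into an ROC/AUC evaluation as in Figure 1.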


Atmosphere ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 238
Author(s):  
Pablo Contreras ◽  
Johanna Orellana-Alvear ◽  
Paul Muñoz ◽  
Jörg Bendix ◽  
Rolando Célleri

The Random Forest (RF) algorithm, a decision-tree-based technique, has become a promising approach for runoff forecasting in remote areas. This machine learning approach can overcome the limitations of the scarce spatio-temporal data and physical parameters needed for process-based hydrological models. However, the influence of the RF hyperparameters is still uncertain and needs to be explored. Therefore, the aim of this study is to analyze the sensitivity of RF runoff forecasting models of varying lead time to the hyperparameters of the algorithm. For this, models were trained using (a) default and (b) extensive hyperparameter combinations through a grid-search approach that allows the optimal set to be reached. Model performances were assessed with the R2, %Bias, and RMSE metrics. We found that: (i) the most influential hyperparameter is the number of trees in the forest; however, the combination of the tree-depth and number-of-features hyperparameters produced the highest variability and instability in the models. (ii) Hyperparameter optimization significantly improved model performance for longer lead times (12 and 24 h). For instance, the performance of the 12 h forecasting model under default RF hyperparameters improved to R2 = 0.41 after optimization (a gain of 0.17). However, for short lead times (4 h) there was no significant improvement (0.69 < R2 < 0.70). (iii) There is a range of values for each hyperparameter within which model performance is not significantly affected and remains close to the optimal. Thus, a compromise between hyperparameter interactions (i.e., their values) can produce similarly high model performance. The improvements after optimization can be explained from a hydrological point of view: for lead times longer than the concentration time of the catchment, the generalization ability tends to rely more on the hyperparameterization than on what the models can learn from the input data. This insight can help in the development of operational early warning systems.
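The grid-search procedure over RF hyperparameters can be sketched generically. The grid values and the evaluate callback below are placeholders for illustration (evaluate would train a forest and return, e.g., validation R2), not the study's configuration:

```python
from itertools import product

# Hypothetical grid over the RF hyperparameters discussed above.
grid = {
    "n_estimators": [100, 300, 500],   # number of trees in the forest
    "max_depth": [5, 10, None],        # depth of each tree
    "max_features": [0.3, 0.6, 1.0],   # share of features tried per split
}

def grid_search(evaluate, grid):
    """Exhaustively evaluate every hyperparameter combination and
    return the best-scoring one (higher score = better)."""
    names = list(grid)
    best_score, best_params = float("-inf"), None
    for values in product(*grid.values()):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

Finding (iii) above corresponds to the score surface being flat near the optimum: several neighbouring combinations in this grid would return nearly identical scores.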


2019 ◽  
Vol 20 (4) ◽  
pp. 386-409
Author(s):  
Elmar Spiegel ◽  
Thomas Kneib ◽  
Fabian Otto-Sobotka

Spatio-temporal models have become increasingly popular in regression research. However, they usually rely on the assumption of a specific parametric distribution for the response and/or homoscedastic error terms. In this article, we propose applying semiparametric expectile regression to model spatio-temporal effects beyond the mean. Besides removing the assumptions of a specific distribution and of homoscedasticity, expectile regression allows the whole distribution of the response to be estimated. We interpret expectiles as weighted means and estimate them with established tools of (penalized) least squares regression. The spatio-temporal effect is set up as an interaction between time and space, based either on trivariate tensor product P-splines or on the tensor product of a Gaussian Markov random field and a univariate P-spline. Importantly, the model can easily be split into main effects and interactions to facilitate interpretation. The method is presented along with an analysis of the spatio-temporal variation of temperatures in Germany from 1980 to 2014.
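The interpretation of expectiles as weighted means leads directly to an iteratively reweighted least-squares estimate. A minimal intercept-only sketch (the full semiparametric model adds spline design matrices and penalties on top of the same weighting idea):

```python
def expectile(y, tau=0.5, tol=1e-9, max_iter=100):
    """tau-expectile of a sample via iteratively reweighted least
    squares: mu minimises the asymmetric squared loss with weight tau
    on residuals above mu and (1 - tau) below, i.e. a weighted mean.
    tau = 0.5 recovers the ordinary mean."""
    mu = sum(y) / len(y)
    for _ in range(max_iter):
        w = [tau if yi > mu else 1.0 - tau for yi in y]
        new_mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu
```

Because each iteration is a weighted least-squares fit, the same machinery extends to penalized spline bases, which is what makes the semiparametric spatio-temporal formulation computationally convenient.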


Author(s):  
Cinzia Giannetti ◽  
Aniekan Essien

Abstract. Smart factories are intelligent, fully connected and flexible systems that can continuously monitor and analyse data streams from interconnected systems to make decisions and dynamically adapt to new circumstances. The implementation of smart factories represents a leap forward compared to traditional automation. It is underpinned by the deployment of cyberphysical systems that, through the application of Artificial Intelligence, integrate predictive capabilities and foster rapid decision-making. Deep Learning (DL) is a key enabler for the development of smart factories. However, the implementation of DL in smart factories is hindered by its reliance on large amounts of data and extreme computational demand. To address this challenge, Transfer Learning (TL) has been proposed to promote the efficient training of models by enabling the reuse of previously trained models. In this paper, by means of a specific example in aluminium can manufacturing, an empirical study is presented, which demonstrates the potential of TL to achieve fast deployment of scalable and reusable predictive models for Cyber Manufacturing Systems. Through extensive experiments, the value of TL is demonstrated to achieve better generalisation and model performance, especially with limited datasets. This research provides a pragmatic approach towards predictive model building for cyber twins, paving the way towards the realisation of smart factories.
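The core idea of transfer learning, reusing a previously trained model to warm-start training on a new task with little data, can be illustrated with a toy gradient-descent regression. All data and settings here are invented for the sketch; the paper's deep-learning setting replaces the linear model with pretrained network layers:

```python
def train_linear(xs, ys, w0=0.0, b0=0.0, lr=0.01, epochs=200):
    """Fit y = w*x + b by gradient descent on the mean squared error.
    (w0, b0) allow warm-starting from a previously trained model,
    which is the essence of transfer learning."""
    w, b = w0, b0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2.0 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2.0 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Source task: abundant data from a well-instrumented process.
src_x = [i / 10.0 for i in range(100)]
src_y = [2.0 * x + 1.0 for x in src_x]
w_src, b_src = train_linear(src_x, src_y, epochs=2000)

# Target task: a similar process with very little data.  Reusing the
# source weights as the starting point lets a short training run suffice.
tgt_x = [0.0, 1.0, 2.0]
tgt_y = [1.1, 3.2, 5.1]
w_tl, b_tl = train_linear(tgt_x, tgt_y, w0=w_src, b0=b_src, epochs=50)
```

Starting near a good solution is what lets TL cut both the data and the compute needed on the target task, the benefit the study demonstrates empirically for limited manufacturing datasets.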

