Influence of rainfall observation network on model calibration and application

2006 ◽  
Vol 3 (6) ◽  
pp. 3691-3726 ◽  
Author(s):  
A. Bárdossy ◽  
T. Das

Abstract. The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. The semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. Aggregated Nash-Sutcliffe coefficients at different temporal scales are adopted as the objective function to estimate the model parameters. The performance of the hydrological model is analyzed as a function of the raingauge density. The calibrated model is validated using the same precipitation used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. The effect of missing rainfall data is investigated by using a multiple linear regression approach for filling the missing values. The model, calibrated with the complete set of observed data, is then run in the validation period using the precipitation fields described above. The simulated hydrographs obtained in the three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model driven by different raingauge networks might need recalibration of its parameters: a model calibrated on sparse information might perform well with dense information, while a model calibrated on dense information fails with sparse information. In addition, the model calibrated with the complete set of observed precipitation performs well when run with incomplete observed data supplemented, at the locations treated as missing measurements, by estimates from multiple linear regression. A meso-scale catchment located in the south-west of Germany has been selected for this study.
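The aggregated Nash-Sutcliffe objective can be made concrete with a short sketch. The Python below is illustrative only: the temporal scales and the block-averaging rule are assumptions for the example, not the paper's exact formulation.

```python
# A minimal sketch of an aggregated Nash-Sutcliffe objective: NSE is computed
# on flows block-averaged to several temporal scales and the scores are averaged.
import numpy as np

def nse(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is no better than the mean."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def aggregated_nse(q_obs, q_sim, scales=(1, 24, 168)):
    """Average NSE over several aggregation scales (hypothetical scales, in time steps)."""
    scores = []
    for s in scales:
        n = len(q_obs) // s * s  # trim so the series divides evenly into blocks
        obs = np.asarray(q_obs[:n], float).reshape(-1, s).mean(axis=1)
        sim = np.asarray(q_sim[:n], float).reshape(-1, s).mean(axis=1)
        scores.append(nse(obs, sim))
    return float(np.mean(scores))
```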

2008 ◽  
Vol 12 (1) ◽  
pp. 77-89 ◽  
Author(s):  
A. Bárdossy ◽  
T. Das

Abstract. The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the precipitation fields described above. The simulated hydrographs obtained in these three sets of experiments are analyzed through comparisons of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model driven by different raingauge networks might need re-calibration of its parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information fails with sparse precipitation information. In addition, the model calibrated with the complete set of observed precipitation performs well when run with incomplete observed data supplemented, at the locations treated as missing measurements, by estimates from multiple linear regression.
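The regression-based infilling step can be sketched briefly. The following is a minimal illustration, assuming ordinary least squares on simultaneously observed neighbour gauges; the design matrix and the non-negativity clip are illustrative choices, not the paper's implementation.

```python
# A minimal sketch of filling a gauge's missing rainfall from neighbour gauges
# with multiple linear regression (ordinary least squares).
import numpy as np

def fill_missing(target, neighbours):
    """target: 1-D array with np.nan at missing days; neighbours: 2-D array (days x gauges)."""
    target = np.asarray(target, float)
    X = np.column_stack([np.ones(len(target)), neighbours])  # intercept + neighbour rainfall
    known = ~np.isnan(target)
    beta, *_ = np.linalg.lstsq(X[known], target[known], rcond=None)  # fit on observed days
    filled = target.copy()
    filled[~known] = np.clip(X[~known] @ beta, 0.0, None)  # rainfall cannot be negative
    return filled
```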


2017 ◽  
Vol 12 (4) ◽  
Author(s):  
Yousheng Chen ◽  
Andreas Linderholt ◽  
Thomas J. S. Abrahamsson

Correlation and calibration using test data are natural ingredients in the process of validating computational models. Model calibration for the important subclass of nonlinear systems which consists of structures dominated by linear behavior with the presence of local nonlinear effects is studied in this work. The experimental validation of a nonlinear model calibration method is conducted using a replica of the École Centrale de Lyon (ECL) nonlinear benchmark test setup. The calibration method is based on the selection of uncertain model parameters and the data that form the calibration metric together with an efficient optimization routine. The parameterization is chosen so that the expected covariances of the parameter estimates are made small. To obtain informative data, the excitation force is designed to be multisinusoidal and the resulting steady-state multiharmonic frequency response data are measured. To shorten the optimization time, plausible starting seed candidates are selected using the Latin hypercube sampling method. The candidate parameter set giving the smallest deviation to the test data is used as a starting point for an iterative search for a calibration solution. The model calibration is conducted by minimizing the deviations between the measured steady-state multiharmonic frequency response data and the analytical counterparts that are calculated using the multiharmonic balance method. The resulting calibrated model's output corresponds well with the measured responses.
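The seed-selection step described above can be sketched as follows. Here `deviation` stands for a user-supplied metric of mismatch against the measured multiharmonic response, and the bounds and candidate count are assumptions for the example, not the authors' settings.

```python
# A minimal sketch of selecting a starting seed for calibration via Latin
# hypercube sampling over the uncertain-parameter bounds.
import numpy as np
from scipy.stats import qmc

def best_starting_seed(deviation, lower, upper, n_candidates=100, seed=0):
    """Draw Latin hypercube candidates within parameter bounds and return the
    candidate with the smallest deviation from the test data."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    candidates = qmc.scale(sampler.random(n_candidates), lower, upper)
    scores = np.array([deviation(theta) for theta in candidates])
    return candidates[np.argmin(scores)]  # starting point for the iterative search
```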


2012 ◽  
Vol 20 (4) ◽  
pp. 35-43 ◽  
Author(s):  
Peter Valent ◽  
Ján Szolgay ◽  
Carlo Riverso

Abstract. Most of the studies that assess the performance of various calibration techniques have to deal with a certain amount of uncertainty in the calibration data. In this study we tested HBV model calibration procedures under hypothetically ideal conditions, assuming no errors in the measured data. This was achieved by creating an artificial time series of flows generated by the HBV model using the parameters obtained from calibrating against the measured flows. The artificial flows then replaced the original flows in the calibration data, which was used to test how well the calibration procedures could reproduce known model parameters. The results showed that in one hundred independent calibration runs of the HBV model, we did not manage to obtain parameters identical to those used to create the artificial flow data; a certain degree of uncertainty always remained. Although the calibration procedure of the model works properly from a practical point of view, this can be regarded as a demonstration of the equifinality principle, since several parameter sets were obtained that led to equally acceptable or behavioural representations of the observed flows. The study demonstrated how artificially generated data can be used to assess the uncertainty of hydrological predictions and to support the further development of a model or the choice of a calibration method.
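The experimental design translates naturally into code. The toy below substitutes a two-parameter linear reservoir for HBV (an assumption purely for brevity) but follows the same logic: generate artificial flows from known parameters, recalibrate repeatedly from random starting points, and inspect the spread of the recovered parameter sets.

```python
# A toy illustration (not the HBV model) of calibrating against artificial,
# error-free flows generated with known parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 3.0, size=365)  # synthetic daily rainfall forcing

def simulate(params, rain):
    """Two-parameter linear reservoir: storage coefficient k, runoff fraction c."""
    k, c = params
    k = max(k, 1e-6)  # guard against degenerate values explored by the optimizer
    storage, flows = 0.0, []
    for p in rain:
        storage += c * p
        q = storage / k
        storage -= q
        flows.append(q)
    return np.array(flows)

true_params = (5.0, 0.6)
q_artificial = simulate(true_params, rain)  # "observed" flows with no measurement error

recovered = []
for _ in range(100):  # one hundred independent calibration runs
    x0 = rng.uniform([1.0, 0.1], [20.0, 1.0])
    res = minimize(lambda th: np.sum((simulate(th, rain) - q_artificial) ** 2),
                   x0, method="Nelder-Mead")
    recovered.append(res.x)
print(np.std(recovered, axis=0))  # spread of the recovered parameter sets across runs
```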


Water ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 970
Author(s):  
Safa A. Mohammed ◽  
Dimitri P. Solomatine ◽  
Markus Hrachowitz ◽  
Mohamed A. Hamouda

Many calibrated hydrological models are inconsistent with the behavioral functions of catchments and do not fully represent the catchments’ underlying processes despite their seemingly adequate performance, if measured by traditional statistical error metrics. Using such metrics for calibration is hindered if only short-term data are available. This study investigated the influence of varying lengths of streamflow observation records on model calibration and evaluated the usefulness of a signature-based calibration approach in conceptual rainfall-runoff model calibration. Scenarios of continuous short-period observations were used to emulate poorly gauged catchments. Two approaches were employed to calibrate the HBV model for the Brue catchment in the UK. The first approach used single-objective optimization to maximize Nash–Sutcliffe efficiency (NSE) as a goodness-of-fit measure. The second approach involved multiobjective optimization based on maximizing the scores of 11 signature indices, as well as maximizing NSE. In addition, a diagnostic model evaluation approach was used to evaluate both model performance and behavioral consistency. The results showed that the HBV model was successfully calibrated using short-term datasets with a lower limit of approximately four months of data (10% FRD model). One formulation of the multiobjective signature-based optimization approach yielded the highest performance and hydrological consistency among all parameterization algorithms. The diagnostic model evaluation enabled the selection of consistent models reflecting catchment behavior and allowed an accurate detection of deficiencies in other models. It can be argued that signature-based calibration can be employed for building adequate models even in data-poor situations.
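A minimal sketch of scoring a simulation on hydrological signatures alongside NSE follows. The two signatures below are illustrative stand-ins, not the study's eleven indices, and no multiobjective optimizer is shown.

```python
# A minimal sketch of signature-based scoring: each signature is scored as
# 1 - relative error against the observed value, so 1 means a perfect match.
import numpy as np

def nse(q_obs, q_sim):
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - np.mean(q_obs)) ** 2)

def signature_scores(q_obs, q_sim):
    """Score two illustrative signatures (assumed definitions, not the paper's)."""
    def score(sig):
        o, s = sig(q_obs), sig(q_sim)
        return 1.0 - abs(s - o) / (abs(o) + 1e-12)
    mean_flow = lambda q: np.mean(q)
    q95_over_q5 = lambda q: np.percentile(q, 95) / (np.percentile(q, 5) + 1e-12)  # flashiness proxy
    return {"mean_flow": score(mean_flow), "q95_over_q5": score(q95_over_q5)}

# Objective vector for a multiobjective optimizer: maximize NSE and each signature score.
# objectives = [nse(q_obs, q_sim)] + list(signature_scores(q_obs, q_sim).values())
```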


Water ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1484
Author(s):  
Dagmar Dlouhá ◽  
Viktor Dubovský ◽  
Lukáš Pospíšil

We present an approach for calibrating the parameters of simplified evaporation models by optimizing them against the most complex model for evaporation estimation, the Penman–Monteith equation. This model computes the evaporation from several input quantities, such as air temperature, wind speed, heat storage, and net radiation. However, not all of these values are always available, so simplified models must be used. Our interest in free-water-surface evaporation stems from the ongoing hydric reclamation of the former Ležáky–Most quarry, i.e., the restoration of the mined land to a natural and economically usable state. For emerging pit lakes, the prediction of evaporation and of the water level plays a crucial role. We examine the methodology on several popular models and standard statistical measures. The presented approach can be applied in a general model calibration process against any theoretical or measured evaporation series.
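The calibration idea can be sketched as a least-squares fit of a simplified model to a Penman–Monteith reference series. The linear temperature formula below is a hypothetical placeholder, not one of the models the paper evaluates.

```python
# A minimal sketch of fitting a simplified evaporation formula so that its
# output matches a Penman-Monteith reference series.
import numpy as np
from scipy.optimize import least_squares

def simple_evap(params, t_air):
    """Hypothetical simplified model: evaporation as a linear function of air temperature."""
    a, b = params
    return np.clip(a * t_air + b, 0.0, None)  # evaporation cannot be negative

def calibrate(t_air, e_penman_monteith, x0=(0.1, 0.0)):
    residual = lambda p: simple_evap(p, t_air) - e_penman_monteith
    return least_squares(residual, x0).x  # parameters minimizing the squared deviation
```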


Author(s):  
J. Sebastian Hernandez-Suarez ◽  
A. Pouyan Nejadhashemi ◽  
Kalyanmoy Deb

2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
M F Kragh ◽  
J T Lassen ◽  
J Rimestad ◽  
J Berntsen

Abstract Study question Do AI models for embryo selection provide actual implantation probabilities that generalise across clinics and patient demographics? Summary answer AI models need to be calibrated on representative data before providing reasonable agreement between predicted scores and actual implantation probabilities. What is known already AI models have been shown to perform well at discriminating embryos according to implantation likelihood, as measured by the area under the curve (AUC). However, discrimination performance does not indicate how well models predict actual implantation likelihood, especially across clinics and patient demographics. In general, prediction models must be calibrated on representative data to provide meaningful probabilities. Calibration can be evaluated and summarised by the “expected calibration error” (ECE) on score deciles and tested for significant lack of calibration using the Hosmer-Lemeshow goodness-of-fit test. The ECE describes the average deviation between predicted probabilities and observed implantation rates and is 0 for perfect calibration. Study design, size, duration Time-lapse embryo videos from 18 clinics were used to develop AI models for the prediction of fetal heartbeat (FHB). Model generalisation was evaluated on clinic hold-out models for the three largest clinics. Calibration curves were used to evaluate the agreement between AI-predicted scores and observed FHB outcomes, summarised by the ECE. Models were evaluated 1) without calibration, 2) with calibration (Platt scaling) on other clinics’ data, and 3) with calibration on the clinic’s own data (30%/70% for calibration/evaluation). Participants/materials, setting, methods A previously described AI algorithm, iDAScore, based on 115,842 time-lapse sequences of embryos, including 14,644 transferred embryos with known implantation data (KID), was used as the foundation for training hold-out AI models for the three largest clinics (n = 2,829; 2,673; 1,327 KID embryos), such that their data were not included during model training. ECEs across the three clinics (mean±SD) were compared for models with and without calibration using KID embryos only, both overall and within subgroups of patient age (<36, 36-40, >40 years). Main results and the role of chance The AUC across the three clinics was 0.675±0.041 (mean±SD) and unaffected by calibration. Without calibration, the overall ECE was 0.223±0.057, indicating weak agreement between scores and actual implantation rates. With calibration on other clinics’ data, the overall ECE was 0.040±0.013, indicating considerable improvement with moderate clinical variation. As implantation probabilities are affected by both clinical practice and patient demographics, a subgroup analysis was conducted on patient age (<36, 36-40, >40 years). With calibration on other clinics’ data, the age-group ECEs were 0.129±0.055 vs. 0.078±0.033 vs. 0.072±0.015. These calibration errors were thus larger than the overall average ECE of 0.040, indicating poor generalisation across age. Including age as an input to the calibration, the age-group ECEs were 0.088±0.042 vs. 0.075±0.046 vs. 0.051±0.025, indicating improved agreement between scores and implantation rates across both clinics and age groups. With calibration including age on the clinic’s own data, however, the best calibrations were obtained, with ECEs of 0.060±0.017 vs. 0.040±0.010 vs. 0.039±0.009. The results indicate that both clinical practice and patient demographics influence calibration and thus should ideally be adjusted for.
Testing for lack of calibration using the Hosmer-Lemeshow goodness-of-fit test, only one age group from one clinic appeared miscalibrated (P = 0.02), whereas all other age groups from the three clinics were appropriately calibrated (P > 0.10). Limitations, reasons for caution In this study, AI model calibration was conducted based on clinic and age. Other patient metadata, such as BMI and patient diagnosis, may be relevant to calibrate for as well. However, for both calibration and evaluation on a clinic’s own data, a substantial amount of data for each subgroup is needed. Wider implications of the findings With calibrated scores, AI models can predict the actual implantation likelihood of each embryo. Probability estimates are a strong tool for patient communication and for clinical decisions such as when to discard or freeze embryos. Model calibration may thus be the next step in improving clinical outcomes and shortening time to live birth. Trial registration number This work is partly funded by the Innovation Fund Denmark (IFD) under File No. 7039-00068B and partly funded by Vitrolife A/S
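The two calibration ingredients named above, Platt scaling and decile-based ECE, can be sketched as follows. This is an illustration of the general techniques, not the study's pipeline.

```python
# A minimal sketch: Platt scaling of raw AI scores on a calibration split,
# then expected calibration error (ECE) on equal-count score bins (deciles).
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_calibrate(scores_cal, outcomes_cal):
    """Fit a one-dimensional logistic model mapping raw scores to probabilities."""
    lr = LogisticRegression()
    lr.fit(np.asarray(scores_cal).reshape(-1, 1), outcomes_cal)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

def ece_deciles(probs, outcomes, n_bins=10):
    """Average |predicted probability - observed rate| over equal-count score bins."""
    order = np.argsort(probs)
    probs, outcomes = np.asarray(probs)[order], np.asarray(outcomes)[order]
    err, bins = 0.0, np.array_split(np.arange(len(probs)), n_bins)
    for idx in bins:
        err += len(idx) / len(probs) * abs(probs[idx].mean() - outcomes[idx].mean())
    return err  # 0 means perfect calibration
```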


1993 ◽  
Vol 28 (11-12) ◽  
pp. 163-171 ◽  
Author(s):  
Weibo (Weber) Yuan ◽  
David Okrent ◽  
Michael K. Stenstrom

A model calibration algorithm is developed for the high-purity oxygen activated sludge process (HPO-ASP). The algorithm is evaluated under different conditions to determine the effect of the following factors on its performance: data quality, the number of observations, and the number of parameters to be estimated. The process model used in this investigation is the first HPO-ASP model based upon the IAWQ (formerly IAWPRC) Activated Sludge Model No. 1. The objective function is formulated as a relative least-squares function, and the non-linear, constrained minimization problem is solved by the Complex method. The stoichiometric and kinetic coefficients of the IAWQ activated sludge model are the parameters focused on in this investigation. The observations used are generated numerically but are made close to the observations from a full-scale high-purity oxygen treatment plant. The calibration algorithm is capable of correctly estimating model parameters even if the observations are severely noise-corrupted. The accuracy of estimation deteriorates gradually as observation errors increase. The accuracy of calibration improves as the number of observations (n) increases, but the improvement becomes insignificant for n > 96. It is also found that there exists an optimal number of parameters that can be rigorously estimated from a given set of information/data. A sensitivity analysis is conducted to determine which parameters to estimate and to evaluate the potential benefits resulting from collecting additional measurements.
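A minimal sketch of a relative least-squares objective of the kind described follows; the `eps` guard is an added assumption to avoid division by zero, and the Complex-method optimizer itself is not reproduced here.

```python
# A minimal sketch of a relative least-squares objective: squared relative
# deviations let variables of very different magnitude contribute comparably.
import numpy as np

def relative_least_squares(observed, simulated, eps=1e-12):
    """Sum of squared relative deviations between simulated and observed values."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return float(np.sum(((simulated - observed) / (observed + eps)) ** 2))
```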

