Uncertainty analysis of the HORTSYST model applied to fertigated tomatoes cultivated in a hydroponic greenhouse system

2021, Vol 19 (3), pp. e0802
Author(s): Antonio Martinez-Ruiz, Irineo L. López-Cruz, Agustín Ruiz-García, Joel Pineda-Pineda, Prometeo Sánchez-García, ...

Aim of study: To perform an uncertainty analysis (UA) of the dynamic HORTSYST model applied to a greenhouse-grown hydroponic tomato crop, using a frequentist method based on Monte Carlo simulation and the Generalized Likelihood Uncertainty Estimation (GLUE) procedure.
Area of study: Two tomato cultivation experiments were carried out, during the autumn-winter and spring-summer crop seasons, in a research greenhouse located at the University of Chapingo, Chapingo, Mexico.
Material and methods: The uncertainties of the HORTSYST model predictions (PTI, LAI, DMP, ETc, and Nup, Pup, Kup, Caup, and Mgup uptake) were calculated by specifying the uncertainty of the model parameters as 10% and 20% around their nominal values. Uniform PDFs were specified for all model parameters and Latin hypercube sampling (LHS) was applied. The Monte Carlo and GLUE methods used 10,000 and 2,000 simulations, respectively. The frequentist method reported the statistical measures minimum, maximum, average, CV, skewness, and kurtosis, whilst GLUE used confidence intervals (CI), RMSE, and scatter plots.
Main results: When parameters were varied by 10%, the CVs of all outputs were lower than 15%. The smallest values were for LAI (10.75%) and DMP (11.14%), and the largest was for ETc (14.47%). The CVs for Caup (12.15%) and Pup (12.27%) were lower than those for Nup and Kup. Kurtosis and skewness values were close to those expected for a normal distribution. According to GLUE, crop density was the most relevant parameter, given that it yielded the lowest RMSE between simulated and measured values.
Research highlights: Acceptable fitting of HORTSYST was achieved, since its predictions fell inside the 95% CI obtained with the GLUE procedure.
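The frequentist Monte-Carlo/LHS procedure described above can be sketched in a few lines. The model, nominal parameter values, and output function below are toy stand-ins (HORTSYST itself is a dynamic crop model), chosen only to illustrate LHS sampling of uniform ±10% parameter ranges and the summary statistics used in the study (CV, skewness, kurtosis):

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, n_params, rng):
    """Stratified LHS on [0, 1): one point per stratum in each column."""
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])  # column slices are views, so this shuffles in place
    return u

# Hypothetical nominal parameters standing in for HORTSYST's (illustration only).
nominal = np.array([2.5, 0.8, 1.2])
spread = 0.10  # uniform +/-10 % uncertainty around nominal values

u = latin_hypercube(10_000, nominal.size, rng)
params = nominal * (1 - spread) + u * (2 * spread * nominal)  # uniform PDFs

def toy_model(p):
    """Stand-in scalar output (e.g. DMP); the real model is a crop simulator."""
    return p[:, 0] * p[:, 1] ** 2 + np.sqrt(p[:, 2])

y = toy_model(params)

# Frequentist summary used in the study: min, max, mean, CV, skewness, kurtosis.
mean = y.mean()
cv = 100 * y.std(ddof=1) / mean
z = (y - mean) / y.std(ddof=1)
skewness = np.mean(z ** 3)
kurtosis = np.mean(z ** 4) - 3  # excess kurtosis; ~0 for a normal distribution
```

Reporting `cv` per output and checking `skewness`/`kurtosis` against normal-distribution values mirrors the statistical screening described in the abstract.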

1997, Vol 36 (5), pp. 141-148
Author(s): A. Mailhot, É. Gaume, J.-P. Villeneuve

The Storm Water Management Model's quality module is calibrated for a section of Québec City's sewer system using data collected during five rain events. It is shown that even for this simple model, calibration can fail: similarly good fits between recorded data and simulation results can be obtained with quite different sets of model parameters, leading to great uncertainty in the calibrated parameter values. To further investigate the impacts of limited data and data uncertainty on calibration, we used a new methodology based on the Metropolis Monte Carlo algorithm. This analysis shows that, even for a large amount of calibration data generated by the model itself, small data uncertainties are necessary to significantly decrease the uncertainty of the calibrated parameters. It also confirms the usefulness of the Metropolis algorithm as a tool for uncertainty analysis in the context of model calibration.
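The Metropolis-based methodology can be illustrated with a minimal random-walk Metropolis sampler. The two-parameter linear model, the assumed observation error, and the tuning constants below are assumptions for the sketch, not the paper's SWMM quality module; the point is how the accepted chain quantifies calibrated-parameter uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Calibration data" generated by the model itself (as in the study), here a
# hypothetical two-parameter model y = a*x + b with known observation error.
a_true, b_true = 1.5, 0.3
x = np.linspace(0, 1, 50)
sigma_obs = 0.05  # assumed measurement uncertainty
y_obs = a_true * x + b_true + rng.normal(0, sigma_obs, x.size)

def log_likelihood(theta):
    a, b = theta
    resid = y_obs - (a * x + b)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Random-walk Metropolis: propose a jump, accept with probability min(1, L'/L).
theta = np.array([1.2, 0.2])
logL = log_likelihood(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.02, 2)
    logL_prop = log_likelihood(prop)
    if np.log(rng.random()) < logL_prop - logL:
        theta, logL = prop, logL_prop
    chain.append(theta.copy())
chain = np.array(chain)
burned = chain[5_000:]  # discard burn-in before summarizing uncertainty
```

The spread of `burned` around its mean is the calibrated-parameter uncertainty; rerunning with a larger `sigma_obs` widens that spread, which is the data-uncertainty effect the abstract describes.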


2018, Vol 22 (9), pp. 5021-5039
Author(s): Aynom T. Teweldebrhan, John F. Burkhart, Thomas V. Schuler

Abstract. Parameter uncertainty estimation is one of the major challenges in hydrological modeling. Here we present parameter uncertainty analysis of a recently released distributed conceptual hydrological model applied in the Nea catchment, Norway. Two variants of the generalized likelihood uncertainty estimation (GLUE) methodologies, one based on the residuals and the other on the limits of acceptability, were employed. Streamflow and remotely sensed snow cover data were used in conditioning model parameters and in model validation. When using the GLUE limit of acceptability (GLUE LOA) approach, a streamflow observation error of 25 % was assumed. Neither the original limits nor relaxing the limits up to a physically meaningful value yielded a behavioral model capable of predicting streamflow within the limits in 100 % of the observations. As an alternative to relaxing the limits, the requirement for the percentage of model predictions falling within the original limits was relaxed. An empirical approach was introduced to define the degree of relaxation. The results show that snow- and water-balance-related parameters induce relatively higher streamflow uncertainty than catchment response parameters. Comparable results were obtained from behavioral models selected using the two GLUE methodologies.
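The limits-of-acceptability bookkeeping described above (a strict 100 % requirement versus a relaxed percentage of bracketed observations) can be sketched as follows. The synthetic flows, the 25 % limits, the toy ensemble, and the 95 % relaxation threshold are illustrative assumptions, not the Nea-catchment setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily streamflow "observations"; an assumed 25 % observation error
# gives per-time-step limits of acceptability (hypothetical numbers).
q_obs = 5 + 4 * np.abs(np.sin(np.linspace(0, 6, 365)))
lower, upper = q_obs * 0.75, q_obs * 1.25

def within_limits_fraction(q_sim):
    """Share of time steps where the simulation falls inside the limits."""
    return np.mean((q_sim >= lower) & (q_sim <= upper))

# Monte Carlo ensemble of candidate model runs (toy perturbations of q_obs).
n_runs = 2_000
sims = q_obs * rng.lognormal(0, 0.08, (n_runs, q_obs.size))
scores = np.array([within_limits_fraction(s) for s in sims])

# Strict LOA: require 100 % of observations inside the limits.
strict_behavioral = scores == 1.0
# Relaxed requirement: accept runs bracketing at least, e.g., 95 % of steps.
relaxed_behavioral = scores >= 0.95
```

Every strictly behavioral run is also relaxed-behavioral, so relaxing the percentage requirement can only enlarge the behavioral set; choosing the threshold empirically is the "degree of relaxation" step in the abstract.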


2009, Vol 13 (3), pp. 395-409
Author(s): M. Herbst, H. V. Gupta, M. C. Casper

Abstract. Hydrological model evaluation and identification essentially involves extracting and processing information from model time series. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by the distributed conceptual watershed model NASIM. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.
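A minimal SOM over signature indices can be sketched as below. The signature matrix, grid size, and training schedule are toy assumptions (the study extracted Signatures from NASIM Monte-Carlo runs), but the mapping and the BMU-based selection of realizations follow the idea described above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "signature" matrix: each row holds diagnostic indices (e.g. runoff ratio,
# flow-duration-curve slope, baseflow index) for one Monte-Carlo model run.
n_runs, n_signatures = 500, 3
signatures = rng.random((n_runs, n_signatures))

# Minimal rectangular SOM trained with the classic online update rule.
grid_h, grid_w = 6, 6
weights = rng.random((grid_h * grid_w, n_signatures))
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)

n_iter = 3_000
for t in range(n_iter):
    xs = signatures[rng.integers(n_runs)]
    bmu = np.argmin(((weights - xs) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                          # decaying learning rate
    radius = 3.0 * (1 - t / n_iter) + 0.5                # decaying neighbourhood
    dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * radius ** 2))               # Gaussian neighbourhood
    weights += lr * h[:, None] * (xs - weights)

# Map the observed signatures onto the SOM and collect the model runs sharing
# that best-matching unit: the realizations closest in signature space.
obs_signature = np.array([0.4, 0.6, 0.5])
obs_bmu = np.argmin(((weights - obs_signature) ** 2).sum(axis=1))
run_bmus = np.array([np.argmin(((weights - s) ** 2).sum(axis=1)) for s in signatures])
closest_runs = np.where(run_bmus == obs_bmu)[0]
```

Because neighbouring units learn together, the trained map preserves topology, so inspecting which units attract which parameter sets gives the kind of interpretable behaviour map the abstract describes.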


2007
Author(s): Phil-Shik Kim, Puneet Srivastava, Kyung H. Yoo, Sun Joo Kim, Yaoqi Zhang

Water, 2019, Vol 11 (3), pp. 447
Author(s): Huidae Cho, Jeongha Park, Dongkyun Kim

We tested four likelihood measures, including two limits-of-acceptability methods and two absolute-model-residual methods, within the generalized likelihood uncertainty estimation (GLUE) framework using the topography model (TOPMODEL). All of these methods take the worst performance over all time steps as the likelihood of a model, and none of them succeeded in finding any behavioral models. We believe that reporting this failure is important because it shifted our attention from which likelihood measure to choose to why these four methods failed and how to improve them. We also observed how large parameter samples impact the performance of a hybrid uncertainty estimation method, isolated-speciation-based particle swarm optimization (ISPSO)-GLUE, using the Nash–Sutcliffe (NS) coefficient. Unlike GLUE with random sampling, ISPSO-GLUE provides traditionally calibrated parameters as well as an uncertainty analysis, so over-conditioning the model parameters on the calibration data can affect its uncertainty analysis results. ISPSO-GLUE showed performance similar to GLUE with far fewer model runs, but its uncertainty bounds enclosed fewer observed flows. However, both methods failed in validation. These findings suggest that ISPSO-GLUE can be affected by over-calibration after a long evolution of samples, and imply a need for a likelihood measure that can better explain uncertainties from different sources without making statistical assumptions.
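The worst-time-step construction the paper examines can be sketched directly. The flows, ensemble, and tolerance below are hypothetical (the study used TOPMODEL), but the measure illustrates why a single bad time step disqualifies an entire run:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy observed flows and a Monte-Carlo ensemble of simulated flows.
q_obs = 2 + np.abs(np.sin(np.linspace(0, 10, 200))) * 3
sims = q_obs * rng.lognormal(0, 0.15, (1000, q_obs.size))

def worst_step_likelihood(q_sim, tol=0.5):
    """Score each time step by its residual and keep the WORST step.

    Taking the minimum over time steps mirrors the limits-of-acceptability
    style measures tested in the paper: one bad step fails the whole run.
    """
    step_scores = 1 - np.abs(q_sim - q_obs) / tol  # 1 = perfect, <0 = outside
    return step_scores.min()

likelihoods = np.array([worst_step_likelihood(s) for s in sims])
behavioral = likelihoods >= 0  # runs acceptable at EVERY time step
# With a strict worst-step criterion, few or no runs tend to qualify,
# which is exactly the failure mode the paper reports.
```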


1996, Vol 33 (2), pp. 79-90
Author(s): Jian Hua Lei, Wolfgang Schilling

Physically based urban rainfall-runoff models are mostly applied without parameter calibration. Given some preliminary estimates of the uncertainty of the model parameters, the associated model output uncertainty can be calculated. Monte Carlo simulation followed by multi-linear regression is used for this analysis. The calculated model output uncertainty can be compared to the uncertainty estimated by comparing model output and observed data. Based on this comparison, systematic or spurious errors can be detected in the observation data, the validity of the model structure can be confirmed, and the most sensitive parameters can be identified. If the calculated model output uncertainty is unacceptably large, the most sensitive parameters should be calibrated to reduce it. Observation data for which systematic and/or spurious errors have been detected should be discarded from the calibration data. This procedure, referred to as preliminary uncertainty analysis, is illustrated with an example. The HYSTEM program is applied to predict the runoff volume from an experimental catchment with a total area of 68 ha and an impervious area of 20 ha. Based on the preliminary uncertainty analysis, for 7 of 10 events the measured runoff volume is within the calculated uncertainty range, i.e. less than or equal to the calculated model predictive uncertainty. The remaining 3 events most likely include systematic or spurious errors in the observation data (in either the rainfall or the runoff measurements); these events are then discarded from further analysis. After calibrating the model, its predictive uncertainty is estimated.
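The Monte-Carlo-plus-regression step of the preliminary uncertainty analysis can be sketched as below. The three parameters and the linear toy output are assumptions standing in for HYSTEM; the point is how standardized regression coefficients from the multi-linear fit rank parameter sensitivity:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte-Carlo sample of three hypothetical runoff-model parameters
# (e.g. imperviousness, depression storage, roughness) over assumed ranges.
n = 5_000
params = rng.uniform([0.2, 1.0, 0.01], [0.4, 3.0, 0.03], (n, 3))

def toy_runoff_volume(p):
    """Stand-in for the HYSTEM runoff computation (illustration only)."""
    imperv, storage, rough = p[:, 0], p[:, 1], p[:, 2]
    return 100 * imperv - 5 * storage + 200 * rough + rng.normal(0, 0.5, len(p))

volume = toy_runoff_volume(params)

# Multi-linear regression of output on parameters; standardized coefficients
# rank parameter sensitivity for the subsequent calibration decision.
X = np.column_stack([np.ones(n), params])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
src = beta[1:] * params.std(axis=0) / volume.std()  # standardized coefficients
ranking = np.argsort(-np.abs(src))                  # most sensitive first
```

The parameters at the front of `ranking` are the ones worth calibrating if the calculated output uncertainty is unacceptably large, as the abstract prescribes.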

