Incremental model breakdown to assess the multi-hypotheses problem

2017 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modelling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or on building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, where some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash-Sutcliffe, percentage bias and the ratio between the root mean square error and the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows constructing a more streamlined, subsequent 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 against the final Model 15 and find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.

2018 ◽  
Vol 22 (8) ◽  
pp. 4565-4581 ◽  
Author(s):  
Florian U. Jehn ◽  
Lutz Breuer ◽  
Tobias Houska ◽  
Konrad Bestian ◽  
Philipp Kraft

Abstract. The ambiguous representation of hydrological processes has led to the formulation of the multiple hypotheses approach in hydrological modeling, which requires new ways of model construction. However, most recent studies focus only on the comparison of predefined model structures or building a model step by step. This study tackles the problem the other way around: we start with one complex model structure, which includes all processes deemed to be important for the catchment. Next, we create 13 additional simplified models, where some of the processes from the starting structure are disabled. The performance of those models is evaluated using three objective functions (logarithmic Nash–Sutcliffe; percentage bias, PBIAS; and the ratio between the root mean square error and the standard deviation of the measured data). Through this incremental breakdown, we identify the most important processes and detect the restraining ones. This procedure allows constructing a more streamlined, subsequent 15th model with improved model performance, less uncertainty and higher model efficiency. We benchmark the original Model 1 and the final Model 15 with HBV Light. The final model is not able to outperform HBV Light, but we find that the incremental model breakdown leads to a structure with good model performance, fewer but more relevant processes and fewer model parameters.
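The three objective functions named above can be sketched directly; the following is a minimal NumPy version, where the PBIAS sign convention and the small epsilon guarding the log transform are assumptions rather than details taken from the paper:

```python
import numpy as np

def log_nse(obs, sim, eps=1e-6):
    """Nash-Sutcliffe efficiency of log-transformed flows (emphasises low flows)."""
    lo, ls = np.log(obs + eps), np.log(sim + eps)
    return 1.0 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)

def pbias(obs, sim):
    """Percentage bias; 0 means no systematic over- or underestimation."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """Ratio of the RMSE to the standard deviation of the observations."""
    return np.sqrt(np.mean((obs - sim) ** 2)) / np.std(obs)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # observed discharge (toy values)
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.1])  # simulated discharge (toy values)
print(log_nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```

A better-performing structure moves logNSE towards 1 and both |PBIAS| and RSR towards 0.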


2008 ◽  
Vol 18 ◽  
pp. 31-35 ◽  
Author(s):  
X. Zhang ◽  
G. Hörmann ◽  
N. Fohrer

Abstract. This paper investigates variations in model performance caused by differences in model structure, both in the represented flow processes and in the level of model complexity. Two case studies indicate that model efficiency is strongly dependent on model structure. The resulting substantial variation in both model efficiency and the hydrographs from different model structures is used to estimate the structural uncertainty. The results help to select the most appropriate model for local conditions, one that shows close conformity with the actual hydrological patterns in both study basins.


Author(s):  
Guanlin Wang ◽  
Jihong Zhu ◽  
Hui Xia

Accurately modeling the dynamic characteristics of a helicopter is difficult and time-consuming. This paper presents a new identification approach which applies the modes partition method and structure traversal (MPM/ST) algorithm. The dynamic modes, instead of model parameters of each model structure, are sequentially identified through MPM. The model with the minimum cost function (CF) is chosen from the best model set and is defined as the final model. Real flight tests of an unmanned helicopter are carried out to verify the identification approach. Time- and frequency-domain results of the identified models clearly demonstrate the potential of MPM/ST in modeling such complex systems.
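The MPM/ST algorithm itself cannot be reconstructed from the abstract, but its final step, traversing a set of candidate structures and keeping the one with the minimum cost function, can be illustrated with a hypothetical quadratic CF and made-up candidate models:

```python
import numpy as np

def cost_function(y, y_hat):
    # quadratic output-error cost; the actual CF used in the paper is not specified here
    return float(np.mean((y - y_hat) ** 2))

def select_model(candidates, t, y):
    """Traverse the candidate structures and keep the one with the minimum cost."""
    scored = [(cost_function(y, f(t)), name) for name, f in candidates.items()]
    return min(scored)  # (best_cost, best_name)

t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 0.5  # 'measured' response, generated from a first-order trend

candidates = {
    "first_order": lambda t: 2.0 * t + 0.5,
    "constant": lambda t: np.full_like(t, y.mean()),
}
best_cost, best_name = select_model(candidates, t, y)
print(best_name, best_cost)
```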


2015 ◽  
Vol 12 (4) ◽  
pp. 3945-4004 ◽  
Author(s):  
S. Pande ◽  
L. Arkesteijn ◽  
H. Savenije ◽  
L. A. Bastidas

Abstract. This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality alone is an incomplete indicator of the stability of hydrological model selection and prediction problems.
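One way to make this complexity notion concrete is to measure how much a model's output spreads when its input forcing is resampled. The sketch below uses a toy linear reservoir (not SIXPAR or SAC-SMA) and bootstrap-style resampling as a stand-in proxy; consistent with the abstract, a higher recession coefficient yields a larger output spread and hence higher complexity under this proxy:

```python
import numpy as np

rng = np.random.default_rng(42)

def linear_reservoir(rain, k):
    """One-bucket model: storage update s += p - k*s, discharge q = k*s (dt = 1)."""
    s, q = 0.0, []
    for p in rain:
        s += p - k * s
        q.append(k * s)
    return np.array(q)

def complexity(model, forcing, n_resamples=200):
    """Mean per-time-step spread of outputs across bootstrap-resampled forcings —
    one simple proxy for the sensitivity notion described above."""
    sims = np.array([model(rng.choice(forcing, size=forcing.size, replace=True))
                     for _ in range(n_resamples)])
    return float(np.mean(np.std(sims, axis=0)))

rain = rng.exponential(2.0, size=100)  # synthetic rainfall forcing
# a higher recession coefficient k makes discharge track the forcing more closely,
# so its outputs spread more under resampling
print(complexity(lambda r: linear_reservoir(r, 0.1), rain))
print(complexity(lambda r: linear_reservoir(r, 0.9), rain))
```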


2017 ◽  
Author(s):  
Iris Kriest

Abstract. The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Examining the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types – a complex seven-component model (MOPS) and a very simple two-component model (RetroMOPS) – for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters was optimised against annual mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model, which contains only nutrients and dissolved organic phosphorus (DOP), is sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and the processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, which suggests that observations of this tracer – although difficult to measure – may be an important asset for model calibration.
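The misfit-plus-global-bias idea can be illustrated with a volume-weighted comparison of modelled and observed tracer fields; the numbers below are made up and merely stand in for annual-mean climatologies:

```python
import numpy as np

# hypothetical modelled vs observed annual-mean tracer fields (e.g. phosphate),
# flattened to 1-D, with relative grid-box volumes as weights for global integrals
obs = np.array([2.1, 1.8, 0.4, 2.5])
mod = np.array([2.0, 1.9, 0.6, 2.3])
vol = np.array([1.0, 2.0, 1.0, 2.0])

# volume-weighted misfit measures the fit to the climatology ...
rmse = float(np.sqrt(np.average((mod - obs) ** 2, weights=vol)))
# ... while the volume-weighted mean error is the global bias, which only adds
# information for tracers that are not globally conserved
global_bias = float(np.average(mod - obs, weights=vol))
print(rmse, global_bias)
```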


2017 ◽  
Vol 18 (8) ◽  
pp. 2215-2225 ◽  
Author(s):  
Andrew J. Newman ◽  
Naoki Mizukami ◽  
Martyn P. Clark ◽  
Andrew W. Wood ◽  
Bart Nijssen ◽  
...  

Abstract The concepts of model benchmarking, model agility, and large-sample hydrology are becoming more prevalent in hydrologic and land surface modeling. As modeling systems become more sophisticated, these concepts have the ability to help improve modeling capabilities and understanding. In this paper, their utility is demonstrated with an application of the physically based Variable Infiltration Capacity model (VIC). The authors implement VIC for a sample of 531 basins across the contiguous United States, incrementally increase model agility, and perform comparisons to a benchmark. The use of a large-sample set allows for statistically robust comparisons and subcategorization across hydroclimate conditions. Our benchmark is a calibrated, time-stepping, conceptual hydrologic model. This model is constrained by physical relationships such as the water balance, and it complements purely statistical benchmarks due to the increased physical realism and permits physically motivated benchmarking using metrics that relate one variable to another (e.g., runoff ratio). The authors find that increasing model agility along the parameter dimension, as measured by the number of model parameters available for calibration, does increase model performance for calibration and validation periods relative to less agile implementations. However, as agility increases, transferability decreases, even for a complex model such as VIC. The benchmark outperforms VIC in even the most agile case when evaluated across the entire basin set. However, VIC meets or exceeds benchmark performance in basins with high runoff ratios (greater than ~0.8), highlighting the ability of large-sample comparative hydrology to identify hydroclimatic performance variations.
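Subcategorizing a model comparison by a physically motivated metric such as the runoff ratio can be sketched as follows; the basin records and skill scores below are invented purely to show the mechanics of splitting at a runoff ratio of ~0.8:

```python
from statistics import mean

# toy per-basin records: (annual runoff, annual precipitation, benchmark skill, VIC skill)
basins = [
    (900.0, 1000.0, 0.70, 0.75),   # runoff ratio 0.9
    (850.0, 1000.0, 0.68, 0.72),   # runoff ratio 0.85
    (300.0, 1000.0, 0.80, 0.65),   # runoff ratio 0.3
    (200.0, 1000.0, 0.78, 0.60),   # runoff ratio 0.2
]

def runoff_ratio(q, p):
    return q / p

high = [b for b in basins if runoff_ratio(b[0], b[1]) > 0.8]
low = [b for b in basins if runoff_ratio(b[0], b[1]) <= 0.8]

for name, group in (("high runoff ratio", high), ("low runoff ratio", low)):
    print(name, "benchmark:", mean(b[2] for b in group), "VIC:", mean(b[3] for b in group))
```

With these made-up scores the per-group means reproduce the direction reported above: VIC leads in the high-runoff-ratio group, the benchmark elsewhere.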


2018 ◽  
Author(s):  
Francesco Reggiani ◽  
Marco Carraro ◽  
Anna Belligoli ◽  
Marta Sanna ◽  
Chiara dal Pra' ◽  
...  

In this work we present a framework for predicting blood cholesterol levels from genotype data. The predictor is based on an algorithm for cholesterol metabolism simulation available in the literature, implemented and optimized by our group in the R language. The main weakness of the former simulation algorithm was its need for experimental data to simulate mutations in genes altering cholesterol metabolism. This caveat strongly limited the application of the model in clinical practice. In this work we show how this limitation can be bypassed through an optimization of model parameters based on patients' cholesterol levels retrieved from the literature. Prediction performance has been assessed taking into consideration several scoring indices currently used for the performance evaluation of machine learning methods. Our assessment shows how the optimization phase improved model performance compared to the original version available in the literature.
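The parameter-optimisation step can be shown schematically: fit a scaling parameter so that simulated cholesterol levels match observed ones. The actual framework is in R and uses a full metabolism simulation; this Python sketch with made-up values only illustrates the least-squares calibration idea:

```python
import numpy as np

# hypothetical observed cholesterol levels (mg/dL) and uncalibrated model output
observed = np.array([180.0, 210.0, 195.0])
baseline = np.array([160.0, 190.0, 170.0])

def predict(scale):
    # one-parameter stand-in for the cholesterol-metabolism simulation
    return baseline * scale

# grid search over the scale parameter, minimising the squared prediction error
grid = np.linspace(0.8, 1.4, 601)
errors = [float(np.sum((predict(s) - observed) ** 2)) for s in grid]
best = float(grid[int(np.argmin(errors))])
print(best)
```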


Author(s):  
Antonio Hurtado-Beltran ◽  
Laurence R. Rilett

In the current version of the Highway Capacity Manual (HCM-6), equal-capacity passenger car equivalencies (EC-PCEs) are used to account for the effect of trucks in capacity analyses. The EC-PCEs for freeway segments were estimated using a microsimulation-based methodology in which the capacities of the mixed-traffic and car-only flow scenarios were modeled. A nonlinear regression (NLR) model was used to develop capacity adjustment factor (CAF) models using the microsimulation data as input. The NLR model has a complex model structure and includes 15 model parameters. It is argued in this paper that simpler regression models could provide comparable results. This would allow CAF and EC-PCE equations to be used directly in the HCM-6 rather than tables. It would also allow for the development of new regression models for exploring new technologies such as connected and automated vehicles (CAVs). The objective of this paper was to develop alternative and simpler regression models of CAFs needed to derive the EC-PCE values in the HCM-6 methodology for freeway and multilane highway segments. It was found that simpler regression models provided results similar to those obtained with the current NLR model. Additionally, it was found that the current NLR model may not be adequate for analyzing CAV traffic conditions. If the HCM-6 EC-PCE methodology is expected to be used to analyze traffic conditions beyond the scope of the HCM-6, it is important to perform a deeper assessment of the form and error of the regression models used in fitting the simulated and estimated data.
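As a sketch of the "simpler regression" argument, an ordinary least-squares fit of a linear-in-parameters CAF model can be set against hypothetical microsimulation output (the values below are invented; the 15-parameter HCM-6 NLR form is not reproduced):

```python
import numpy as np

# hypothetical microsimulation output: capacity adjustment factor (CAF)
# as a function of truck percentage
truck_pct = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
caf = np.array([1.00, 0.95, 0.905, 0.862, 0.82, 0.78])

# simple alternative model: ordinary least squares on [intercept, truck_pct]
X = np.column_stack([np.ones_like(truck_pct), truck_pct])
beta, *_ = np.linalg.lstsq(X, caf, rcond=None)
pred = X @ beta
rmse = float(np.sqrt(np.mean((caf - pred) ** 2)))
print(beta, rmse)
```

With two parameters instead of fifteen, the fitted line already tracks this (made-up) data to within a few thousandths, which is the flavour of the paper's argument.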


2012 ◽  
Vol 9 (10) ◽  
pp. 11437-11485 ◽  
Author(s):  
S. Van Hoey ◽  
P. Seuntjens ◽  
J. van der Kwast ◽  
I. Nopens

Abstract. When applying hydrological models, different sources of uncertainty are present, and the incorporation of these uncertainties into evaluations of model performance is needed to assess model outcomes correctly. Moreover, uncertainty in the discharge observations complicates model identification, both in terms of model structure and parameterization. In this paper, two different lumped model structures (PDM and NAM) are compared taking into account the uncertainty coming from the rating curve. The derived uncertainty bounds of the observations are used to derive limits of acceptance for the model simulations. The DYNamic Identifiability Approach (DYNIA) is applied to identify structural failure of both models and to evaluate the configuration of their structures. The analysis focuses on different parts of the hydrograph and evaluates the seasonal performance. In general, similar model performance is observed. However, the model structures tend to behave differently over time. Based on our analyses, the probability-based soil storage representation of the PDM model outperformed the NAM structure. The incorporation of the observation error did not prevent the DYNIA analysis from identifying potential model structural deficiencies that limit the representation of the seasonal variation.
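Deriving limits of acceptance from observation uncertainty can be sketched with a constant relative error band (a simplification; the paper derives the band from the rating-curve analysis itself):

```python
import numpy as np

def limits_of_acceptance(q_obs, rel_err=0.2):
    """Acceptance bounds from a +/- rel_err rating-curve uncertainty band."""
    return q_obs * (1 - rel_err), q_obs * (1 + rel_err)

def fraction_within_limits(q_sim, lower, upper):
    """Share of simulated time steps falling inside the observation bounds."""
    return float(np.mean((q_sim >= lower) & (q_sim <= upper)))

q_obs = np.array([10.0, 12.0, 8.0, 20.0, 15.0])  # toy observed discharge
q_sim = np.array([11.0, 11.0, 9.0, 30.0, 14.0])  # toy simulated discharge
lo, hi = limits_of_acceptance(q_obs)
print(fraction_within_limits(q_sim, lo, hi))
```

A simulation is then judged acceptable time step by time step rather than against the observed value directly, so observation error is carried into the model evaluation.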


2016 ◽  
Author(s):  
L. Menichetti ◽  
T. Kätterer ◽  
J. Leifeld

Abstract. Soil organic carbon (SOC) dynamics result from different interacting processes and controls on spatial scales from sub-aggregate to pedon to the whole ecosystem. These complex dynamics are translated into models as abundant degrees of freedom. This high number of variables that are not directly measurable, combined with the very limited data at disposal, results in equifinality and parameter uncertainty. Carbon radioisotope measurements are a proxy for SOC age both at annual to decadal (bomb peak based) and centennial to millennial timescales (radio decay based), and thus can be used in addition to total organic C for constraining SOC models. By considering this additional information, uncertainties in model structure and parameters may be reduced. To test this hypothesis we studied SOC dynamics and their defining kinetic parameters in the ZOFE experiment, a >60-year-old controlled cropland experiment in Switzerland, by utilising SOC and SO14C time series. To represent different processes we applied five model structures, all stemming from a simple mother model (ICBM): I) two decomposing pools; II) an inert pool added; III) three decomposing pools; IV) two decomposing pools with a substrate control feedback on decomposition; V) as IV, but with an inert pool added. These structures were extended to explicitly represent total SOC and 14C pools. The use of different model structures allowed us to explore model structural uncertainty and the impact of 14C on kinetic parameters. We considered parameter uncertainty by calibrating in a formal Bayesian framework. By varying the relative importance of total SOC and SO14C data in the calibration, we could quantify the effect of the information from these two data streams on the estimated model parameters. The weighting of the two data streams was crucial for determining model outcomes, and we suggest including it in future modelling efforts whenever SO14C data are available.
The measurements and all model structures indicated a dramatic decline in SOC in the ZOFE experiment after the initial land use change in 1949 from grass- to cropland, followed by a constant but smaller decline. According to all structures, the three treatments (control, mineral fertilizer, farmyard manure) we considered were still far from equilibrium. The estimates of the mean residence time (MRT) of the C pools defined by our models were sensitive to the inclusion of the SO14C data stream. Model structure had a smaller effect on the estimated MRTs, which ranged between 4.22 and 5.91 years for the young pool and between 78.93 and 98.85 years for the old pool, for structures without substrate interactions. The simplest model structure performed best according to information criteria, supporting the idea that we still lack the data needed for mechanistic SOC models. Although we could not exclude any of the considered processes possibly involved in SOC decomposition, it was not possible to discriminate their relative importance.
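The weighting of the two calibration data streams can be written as a convex combination of per-stream Gaussian log-likelihoods; the residuals and standard deviations below are placeholders, not values from the study:

```python
import numpy as np

def weighted_loglik(resid_soc, resid_c14, w, sigma_soc=1.0, sigma_c14=1.0):
    """Gaussian log-likelihood kernel combining the SOC and 14C residual streams,
    with w in [0, 1] setting the relative weight of the SOC stream."""
    ll_soc = -0.5 * np.sum((resid_soc / sigma_soc) ** 2)
    ll_c14 = -0.5 * np.sum((resid_c14 / sigma_c14) ** 2)
    return w * ll_soc + (1.0 - w) * ll_c14

# placeholder model-minus-observation residuals for the two data streams
resid_soc = np.array([0.5, -0.3, 0.1])
resid_c14 = np.array([2.0, -1.5])
for w in (0.2, 0.5, 0.8):
    print(w, weighted_loglik(resid_soc, resid_c14, w))
```

In a Bayesian calibration this combined term replaces a single-stream likelihood, so varying w shifts which data stream dominates the posterior, which is the sensitivity the abstract reports.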

