The evolution of process-based hydrologic models: Historical challenges and the collective quest for physical realism

2017 ◽  
Vol 21 (7) ◽  
pp. 3427-3440 ◽  
Author(s):  
Martyn P. Clark ◽  
Marc F. P. Bierkens ◽  
Luis Samaniego ◽  
Ross A. Woods ◽  
Remko Uijlenhoet ◽  
...  

Abstract. The diversity in hydrologic models has historically led to great controversy on the “correct” approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges, outlined by Freeze and Harlan nearly 50 years ago, on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline these historical modeling challenges, provide examples of modeling advances that address them, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to use our diversity of modeling approaches more effectively in order to advance our collective quest for physically realistic hydrologic models.


2017 ◽  
Vol 52 (14) ◽  
pp. 1947-1958 ◽  
Author(s):  
Sergio González ◽  
Gianluca Laera ◽  
Sotiris Koussios ◽  
Jaime Domínguez ◽  
Fernando A Lasagni

The simulation of long-life behavior and environmental aging effects on composite materials is a subject of investigation for future aerospace applications (e.g. supersonic commercial aircraft). Temperature variation, combined with matrix oxidation, leads to material degradation and loss of mechanical properties, with crack initiation and growth as the main damage mechanism. In this paper, an extended finite element analysis is proposed to simulate damage in carbon fiber reinforced polymer caused by thermal fatigue between −50℃ and 150℃ under atmospheres with different oxygen contents. The effect of the interphase on the degradation process is analyzed at the microscale. Finally, the results are correlated with experimental data in terms of material stiffness and, on that basis, the most suitable model parameters are selected.


2018 ◽  
Vol 612 ◽  
pp. A70 ◽  
Author(s):  
J. Olivares ◽  
E. Moraux ◽  
L. M. Sarro ◽  
H. Bouy ◽  
A. Berihuete ◽  
...  

Context. Membership analyses of the DANCe and Tycho + DANCe data sets provide the largest and least contaminated sample of Pleiades candidate members to date. Aims. We aim to reassess the different proposals for the number surface density of the Pleiades in the light of this new and most complete list of candidate members, and to infer the parameters of the most adequate model. Methods. We compute the Bayesian evidence and Bayes factors for variations of the classical radial models. These include elliptical symmetry and luminosity segregation. As a by-product of the model comparison, we obtain posterior distributions for each set of model parameters. Results. We find that the model comparison results depend on the spatial extent of the region used for the analysis. For a circle of 11.5 parsecs around the cluster centre (the most homogeneous and complete region), we find no compelling reason to abandon King’s model, although the Generalised King model introduced here has slightly better fitting properties. Furthermore, we find strong evidence against radially symmetric models when compared to their elliptic extensions. Finally, we find that including mass segregation in the form of luminosity segregation in the J band is strongly supported in all our models. Conclusions. We have put the question of the projected spatial distribution of the Pleiades cluster on a solid probabilistic footing, and inferred its properties using the most exhaustive and least contaminated list of Pleiades candidate members available to date. Our results suggest, however, that this sample may still lack about 20% of the expected number of cluster members. This study should therefore be revised once the completeness and homogeneity of the data can be extended beyond the 11.5 parsec limit. Such a study will allow for a more precise determination of the Pleiades spatial distribution, its tidal radius, ellipticity, number of objects, and total mass.
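For readers less familiar with the Methods step above, the model comparison reduces to a ratio of model evidences. A minimal sketch in Python, with hypothetical log-evidence values that are not taken from the paper:

```python
# Hedged sketch of Bayes-factor model comparison (not the authors' pipeline):
# given the log-evidence log Z of two competing models, the Bayes factor is
# their evidence ratio. The numbers below are illustrative assumptions.
import math

log_Z_king = -1042.7    # hypothetical log-evidence, King's model
log_Z_gking = -1040.1   # hypothetical log-evidence, Generalised King model

log_bf = log_Z_gking - log_Z_king
print(math.exp(log_bf))  # ~13.5 > 1, i.e. the data favour the second model
```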


Author(s):  
Suryanarayana R. Pakalapati ◽  
Hayri Sezer ◽  
Ismail B. Celik

Dual number arithmetic is a well-known strategy for automatic differentiation of computer codes which gives exact derivatives, to machine accuracy, of the computed quantities with respect to any of the involved variables. A common application of this concept in computational fluid dynamics, and in numerical modeling in general, is to assess the sensitivity of mathematical models to the model parameters. However, dual number arithmetic, in theory, finds the derivatives of the actual mathematical expressions evaluated by the computer code. Thus the sensitivity to a model parameter found by dual number automatic differentiation is essentially that of the combination of the actual mathematical equations, the numerical scheme, and the grid used to solve the equations, not just that of the model equations alone, as implied by some studies. This aspect of the sensitivity analysis of numerical simulations using dual number automatic differentiation is explored in the current study. A simple one-dimensional advection-diffusion equation is discretized using different finite volume schemes and the resulting systems of equations are solved numerically. Derivatives of the numerical solutions with respect to parameters are evaluated automatically using dual number automatic differentiation; for comparison, the derivatives are also estimated using finite differencing. The analytical solution to the original PDE is also derived, and its derivatives are computed analytically. It is shown that a mathematical model can show different sensitivity to a model parameter depending on the numerical method employed to solve the equations and the grid resolution used. This distinction is important, since such inter-dependence needs to be carefully addressed to avoid confusion when reporting the sensitivity of predictions to a model parameter using a computer code. A systematic assessment of numerical uncertainty in the sensitivities computed using automatic differentiation is presented.
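As a concrete illustration of the technique the abstract builds on, here is a minimal dual-number class in Python. This is a generic sketch of dual number arithmetic, not the authors' code; the function and evaluation point are illustrative:

```python
# Minimal sketch of dual-number automatic differentiation: a dual number
# a + b*eps, with eps**2 == 0, carries the value a and the derivative b
# through arithmetic exactly, to machine precision.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # f(x)
        self.deriv = deriv   # f'(x)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule emerges from (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__

# Derivative of f(x) = 3*x*x + 2*x at x = 1.5, exact to machine accuracy:
x = Dual(1.5, 1.0)        # seed dx/dx = 1
f = 3 * x * x + 2 * x
print(f.value, f.deriv)   # 9.75, 11.0  (f'(x) = 6x + 2)
```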


2018 ◽  
Vol 7 (5) ◽  
pp. 120
Author(s):  
T. H. M. Abouelmagd

A new version of the Lomax model is introduced and studied. The major justification for the practicality of the new model is the wide use of the Lomax model. We are also motivated to introduce the new model because its density exhibits various important shapes, such as unimodal, right-skewed, and left-skewed. The new model can be viewed as a mixture of exponentiated Lomax distributions. It can also be considered a suitable model for fitting symmetric, left-skewed, right-skewed, and unimodal data sets. The maximum likelihood estimation method is used to estimate the model parameters. We demonstrate empirically the importance and flexibility of the new model by fitting two aircraft windshield lifetime data sets. The proposed lifetime model performs much better than the gamma Lomax, exponentiated Lomax, Lomax, and beta Lomax models, so the new distribution is a good alternative to these models for modeling aircraft windshield data.
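The MLE step mentioned above can be illustrated for the classical Lomax distribution with SciPy. The paper's extended model is not available in SciPy, so this sketch only shows the generic fitting procedure on simulated data:

```python
# Hedged illustration of maximum-likelihood fitting for the classical Lomax
# (Pareto Type II) distribution; the paper's new extension is not in SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = stats.lomax.rvs(c=2.5, scale=1.0, size=500, random_state=rng)

# Fit shape c and scale by MLE, pinning loc at 0 as is usual for lifetimes.
c_hat, loc_hat, scale_hat = stats.lomax.fit(data, floc=0)
print(f"c = {c_hat:.3f}, scale = {scale_hat:.3f}")
```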


Author(s):  
Alberto Godio ◽  
Francesca Pace ◽  
Andrea Vergnano

We applied a generalized SEIR epidemiological model to the recent global SARS-CoV-2 outbreak, with a focus on Italy and its Lombardy, Piedmont, and Veneto regions. We focused on a stochastic approach to fitting the model parameters using a Particle Swarm Optimization (PSO) solver, to improve the reliability of predictions in the medium term (30 days). We analyzed the official data and the predicted evolution of the epidemic in the Italian regions, and we compared the results with the data and predictions for Spain and South Korea. We linked the model equations to changes in people’s mobility, with reference to Google’s COVID-19 Community Mobility Reports. We discussed the effectiveness of the policies adopted by different regions and countries and their impact on past and future infection scenarios.
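For orientation, a basic (non-generalized) SEIR system can be integrated in a few lines. This is a minimal sketch, not the authors' generalized model; the parameter values and population size are illustrative assumptions that a PSO solver would instead tune against reported case data:

```python
# Minimal SEIR sketch: beta, sigma, gamma are the kind of parameters a PSO
# solver would calibrate against official case counts. Values are illustrative.
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N          # new exposures
    dE = beta * S * I / N - sigma * E  # incubation
    dI = sigma * E - gamma * I      # onset minus recovery/removal
    dR = gamma * I
    return [dS, dE, dI, dR]

N = 10_000_000                       # population (illustrative)
y0 = [N - 100, 50, 50, 0]            # initial S, E, I, R
sol = solve_ivp(seir, (0, 30), y0,   # 30-day horizon, as in the abstract
                args=(0.6, 1 / 5.2, 1 / 10, N))
print(sol.y[2, -1])                  # infectious compartment at day 30
```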


1989 ◽  
Vol 111 (3) ◽  
pp. 233-240 ◽  
Author(s):  
E. Belardinelli ◽  
M. Ursino ◽  
E. Iemmi

The arteriovenous system is often stressed by accelerative perturbations, not only during exceptional performances but also in normal life. For example, when the body is subject to fast pressure changes, accelerative perturbations combined with a change in hydrostatic pressure can have severe effects on the circulation. In such cases a preliminary mathematical inquiry, whose results allow a qualitative evaluation of the perturbation produced, is useful. In this work, pressure variations are studied when the body is subjected to rectilinear and rotational movements as well as posture changes. The dominant modes of the hemodynamic oscillations are emphasized and the numerical simulation results are presented. The artery model used for simulation is obviously simplified with respect to the anatomical structure of an artery. Nevertheless, the behavior of the main arteries (such as the common carotid and the aorta) can be approximately described by choosing suitable model parameters. The frequency of blood oscillations depends strictly on the Young’s modulus of the arterial wall. This connection could be exploited for new clinical tests on the state of the arteries.


2020 ◽  
Author(s):  
Diana Spieler ◽  
Juliane Mai ◽  
Bryan Tolson ◽  
James Craig ◽  
Niels Schütze

A recently introduced framework for Automatic Model Structure Identification (AMSI) makes it possible to simultaneously optimize model structure choices (integer decision variables) and parameter values (continuous decision variables) in hydrologic modelling. By combining the mixed-integer optimization algorithm DDS with the flexible hydrologic modelling framework RAVEN, AMSI can test a vast number of model structure and parameter combinations in order to identify the most suitable model structure for representing the rainfall-runoff behavior of a catchment. The model structure and all potentially active model parameters are calibrated simultaneously. This causes a certain degree of inefficiency during the calibration process, as variables may be perturbed that are not relevant for the model structure currently being tested. To avoid this, we propose an adaptation of the current DDS algorithm that allows conditional parameter estimation: parameters are only perturbed during the calibration process if they are relevant for the model structure currently being tested (see the sketch below). The conditional parameter estimation setup will be compared to the standard DDS algorithm for multiple AMSI test cases. We will show whether and how conditional parameter estimation increases the efficiency of AMSI.
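A minimal sketch of the conditioning idea, not the RAVEN/AMSI implementation; the parameter names, bounds, and the active-parameter set are illustrative assumptions:

```python
# Hedged sketch: in a DDS-style step, only parameters tied to the currently
# selected model structure are eligible for perturbation.
import math
import random

def dds_step(x, bounds, active, i, max_iter, r=0.2):
    """One conditional DDS perturbation of the parameter dict x."""
    p_select = 1.0 - math.log(i) / math.log(max_iter)  # standard DDS schedule
    candidate = dict(x)
    selected = [k for k in active if random.random() < p_select]
    if not selected:                 # DDS always perturbs at least one variable
        selected = [random.choice(list(active))]
    for k in selected:
        lo, hi = bounds[k]
        candidate[k] += random.gauss(0.0, r * (hi - lo))
        candidate[k] = min(max(candidate[k], lo), hi)  # clip to bounds
    return candidate

# Only 'k_soil' and 'melt_rate' belong to the structure under test, so
# 'lapse_rate' is never perturbed here.
x = {"k_soil": 0.5, "melt_rate": 2.0, "lapse_rate": 6.0}
bounds = {k: (0.0, 10.0) for k in x}
print(dds_step(x, bounds, active=["k_soil", "melt_rate"], i=2, max_iter=100))
```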


2017 ◽  
Vol 26 (08) ◽  
pp. 1740010
Author(s):  
Thomas Polzer ◽  
Andreas Steininger

It is well known that every sequential element may become metastable when provided with marginal inputs, such as input transitions occurring too close together or an input voltage not reaching a defined HI or LO level. In this case the sequential element requires extra time to decide which digital output level to finally present, which is perceived as an output delay. The amount of this delay depends on how close the element’s state is to the balance point, at which the delay may, theoretically, become infinite. While metastability can be safely avoided within a closed timing domain, it cannot be completely ruled out at timing domain boundaries. It is therefore important to quantify its effect. Traditionally this is done by means of a “mean time between upsets” (MTBU), which gives the expected interval between two metastable upsets, an upset being the event of latching the still-undecided output of one sequential element by a subsequent one. However, such a definition only makes sense in a time-safe environment like a synchronous design. In this paper we extend the scope to so-called value-safe environments, in which a sequential element can safely finalize its decision, since the subsequent element waits for completion before capturing its output. Here metastability is not a matter of “failure” but a performance issue, and hence characterization by MTBU is not intuitive. Therefore we put the focus on the delay aspect and derive a suitable model. This model extends existing approaches by also including the region of very weak metastability, thus providing complete coverage. We show its validity through comparison with transistor-level simulation results for the most popular sequential elements in different implementations, point out its relation to the traditional MTBU model parameters, namely τ and T0, and show how to use it for calculating the performance penalty in a value-safe environment.
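For context, the traditional MTBU model the authors relate their delay model to has the textbook form MTBU = exp(t_r/τ) / (T0 · f_c · f_d). A worked example with illustrative values that are not taken from the paper:

```python
# Hedged worked example of the textbook MTBU model: t_r is the resolution
# time allowed, tau the regeneration time constant, T0 the metastability
# window, and f_c, f_d the clock and data rates. All values are illustrative.
import math

tau = 25e-12    # s, regeneration time constant
T0 = 20e-12     # s, metastability window
f_c = 500e6     # Hz, clock frequency
f_d = 100e6     # Hz, data toggle rate
t_r = 1.5e-9    # s, time allowed for resolution

mtbu = math.exp(t_r / tau) / (T0 * f_c * f_d)
print(f"MTBU = {mtbu:.3e} s")  # expected seconds between metastable upsets
```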


2017 ◽  
Vol 2017 ◽  
pp. 1-26 ◽  
Author(s):  
Tingting Liu ◽  
Jan Lemeire

The predominant learning algorithms for Hidden Markov Models (HMMs) are local search heuristics, of which the Baum-Welch (BW) algorithm is the most widely used. It is an iterative learning procedure that starts with a predefined size of the state space and randomly chosen initial parameters. However, poorly chosen initial parameters carry the risk of convergence to a local optimum and of slow convergence. To overcome these drawbacks, we propose a more suitable model initialization approach, a Segmentation-Clustering and Transient analysis (SCT) framework, to estimate the number of states and the model parameters directly from the input data. Based on an analysis of the information flow through HMMs, we demystify the structure of the models and show that high-impact states are directly identifiable from the properties of the observation sequences. States having a high impact on the log-likelihood make HMMs highly specific. Experimental results show that, even though the identification accuracy drops to 87.9% when random models are considered, the SCT method is around 50 to 260 times faster than the BW algorithm, with 100% correct identification for highly specific models whose specificity is greater than 0.06.
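The initialization sensitivity described above is easy to reproduce. A minimal sketch using the third-party hmmlearn package (not the authors' SCT code), where Baum-Welch runs started from different random seeds can end at different log-likelihoods:

```python
# Hedged illustration: fit the same 2-state Gaussian HMM from several random
# initializations and compare the resulting log-likelihoods; a spread across
# seeds indicates convergence to different local optima.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Synthetic 1-D observations from two well-separated regimes.
X = rng.normal(loc=np.repeat([0.0, 5.0], 200), scale=1.0).reshape(-1, 1)

scores = []
for seed in range(5):
    model = hmm.GaussianHMM(n_components=2, n_iter=50, random_state=seed)
    model.fit(X)                   # EM (Baum-Welch) from a random start
    scores.append(model.score(X))  # log-likelihood reached from this start
print(scores)
```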

