Impact of modellers' decisions on hydrological a priori predictions

2014
Vol 18 (6)
pp. 2065-2085
Author(s):
H. M. Holländer
H. Bormann
T. Blume
W. Buytaert
G. B. Chirico
...  

Abstract. In practice, the catchment hydrologist is often confronted with the task of predicting discharge without the records needed for calibration. Here, we report the discharge predictions of 10 modellers – using the model of their choice – for the man-made Chicken Creek catchment (6 ha, northeast Germany; Gerwin et al., 2009b) and analyse how they improved their predictions over three steps, with information added before each subsequent step. The modellers predicted the catchment's hydrological response in its initial phase without access to the observed records. They used conceptually different physically based models, and their modelling experience differed widely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data, being charged pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and the costs of added information. From this qualitative analysis of a statistically small number of predictions we learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as added data for improving the parameters needed to satisfy model requirements.
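
The abstract does not name the performance measures used to score the predictions; as a minimal, hypothetical illustration of the kind of evaluation involved, the Nash–Sutcliffe efficiency is a common choice for comparing predicted and observed discharge (the series below are invented):

import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    simulation is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Invented daily discharge series (m^3/s) for two of the prediction steps
obs = np.array([0.8, 1.2, 2.5, 1.9, 1.1])
step1 = np.array([0.2, 0.5, 1.0, 0.9, 0.6])   # a priori guess
step3 = np.array([0.7, 1.1, 2.2, 1.8, 1.0])   # after added information
print(nash_sutcliffe(obs, step1), nash_sutcliffe(obs, step3))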

2013
Vol 10 (7)
pp. 8875-8944
Author(s):
H. M. Holländer
H. Bormann
T. Blume
W. Buytaert
G. B. Chirico
...  

Abstract. The purpose of this paper is to stimulate a re-thinking of how we, the catchment hydrologists, could become reliable forecasters. A group of catchment modellers predicted the hydrological response of a man-made 6 ha catchment in its initial phase (Chicken Creek) without access to the observed records. They used conceptually different model families, and their modelling experience differed widely. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the internal structure of the catchment (somewhat more complete than usually available for a priori predictions in ungauged catchments); they did not obtain time series of stream flow, soil moisture or groundwater response. (2) Before the second, improved prediction they inspected the catchment on-site and attended a workshop where they presented and discussed their first attempts. (3) For their improved third prediction they were offered additional data, being charged pro forma with the costs of obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step 1. Here, we detail the modellers' decisions in accounting for the various processes based on what they learned during the field visit (step 2) and add the final outcome of step 3, when the modellers made use of additional data. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality could be evaluated in relation to individual modelling experience and the costs of added information. We learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as added data for improving the parameters needed to satisfy model requirements.


2003
Vol 5 (4)
pp. 233-244
Author(s):
Vincent Guinot
Philippe Gourbesville

The modelling of extreme hydrological events often suffers from a lack of available data. Physically based models are the best available modelling option in such situations, as they can in principle provide answers about the behaviour of ungauged catchments, provided that the geometry and the forcings are known with sufficient accuracy. The need for calibration is therefore limited. In some situations, calibration (understood as adjusting the model parameters so that the calculation fits the measurements as closely as possible) is impossible. This paper presents such a situation. The MIKE SHE physically based hydrological model is used to model a flash flood over a medium-sized catchment of the Mediterranean Alps (2820 km²). An examination of a number of modelling alternatives shows that the main factor of uncertainty in the model response is the model structure (i.e. which processes are dominant). The second most important factor is the accuracy with which the catchment geometry is represented in the model. The model results exhibit very little sensitivity to the model parameters, and calibration of these parameters is therefore found to be useless.
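
The claim that model structure dominates parameter uncertainty can be probed with a simple one-at-a-time sensitivity screen. The sketch below is purely illustrative: the model function and parameter names are hypothetical placeholders, not the MIKE SHE model or its interface:

def peak_discharge(params):
    """Stand-in for a full model run; returns a scalar response (a
    hypothetical flood peak in m^3/s as a function of two parameters)."""
    k = params["hydraulic_conductivity"]
    n = params["manning_n"]
    return 1500.0 / (1.0 + 0.05 * k) / (n ** 0.3)

base = {"hydraulic_conductivity": 2.0, "manning_n": 0.05}

# Perturb each parameter by +/-20 % and report the relative output span
for name in base:
    lo, hi = dict(base), dict(base)
    lo[name] *= 0.8
    hi[name] *= 1.2
    span = abs(peak_discharge(hi) - peak_discharge(lo)) / peak_discharge(base)
    print(f"{name}: relative output span = {span:.2%}")

A small output span for every parameter, compared with the spread between alternative model structures, is the signature the paper reports.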


2017
Vol 21 (2)
pp. 1225-1249
Author(s):
Ralf Loritz
Sibylle K. Hassler
Conrad Jackisch
Niklas Allroggen
Loes van Schaik
...  

Abstract. This study explores the suitability of a single hillslope as a parsimonious representation of a catchment in a physically based model. We test this hypothesis by picturing two distinctly different catchments in perceptual models and translating these pictures into parametric setups of 2-D physically based hillslope models. The model parametrizations are based on a comprehensive field data set, expert knowledge and process-based reasoning. Evaluation against streamflow data highlights that both models predicted the annual pattern of streamflow generation as well as the hydrographs acceptably. However, a look beyond performance measures revealed deficiencies in streamflow simulations during the summer season and during individual rainfall–runoff events as well as a mismatch between observed and simulated soil water dynamics. Some of these shortcomings can be related to our perception of the systems and to the chosen hydrological model, while others point to limitations of the representative hillslope concept itself. Nevertheless, our results confirm that representative hillslope models are a suitable tool to assess the importance of different data sources as well as to challenge our perception of the dominant hydrological processes we want to represent therein. Consequently, these models are a promising step forward in the search for the optimal representation of catchments in physically based models.


2017
Vol 17 (20)
pp. 12697-12708
Author(s):
Guadalupe Sanchez
Antonio Serrano
María Luisa Cancillo

Abstract. Despite its important role in human health and numerous biological processes, the diffuse component of the erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but, in this study, they are applied to the UVER case and tested against experimental measurements. In addition to adapting the various independent variables involved in these models to the UVER range, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows an r² of 0.91 and a relative root-mean-square error (rRMSE) of 6.1 %. Its performance is better than that obtained by previous semi-empirical approaches; being entirely empirical, the model needs no additional information from physically based models. This study expands previous research into the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.
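
The exact functional form of RAU3, including its ozone term, is not given in the abstract; the sketch below therefore fits an illustrative sigmoid of the clearness index, in the spirit of the Ruiz-Arias et al. (2010) model family, to synthetic data and reports the same two scores used in the study:

import numpy as np
from scipy.optimize import curve_fit

def diffuse_fraction(kt, a, b, c, d):
    """Illustrative sigmoid of the clearness index kt; the published RAU3
    model also includes a total-ozone-column term not reproduced here."""
    return a + b / (1.0 + np.exp(c + d * kt))

# Synthetic stand-ins for measured clearness index and UVER diffuse fraction
rng = np.random.default_rng(0)
kt = rng.uniform(0.1, 0.8, 200)
kd = diffuse_fraction(kt, 0.2, 0.8, -4.0, 10.0) + rng.normal(0.0, 0.02, 200)

popt, _ = curve_fit(diffuse_fraction, kt, kd, p0=[0.2, 0.8, -4.0, 10.0])
pred = diffuse_fraction(kt, *popt)

r2 = 1.0 - np.sum((kd - pred) ** 2) / np.sum((kd - kd.mean()) ** 2)
rrmse = 100.0 * np.sqrt(np.mean((kd - pred) ** 2)) / kd.mean()
print(f"r2 = {r2:.2f}, rRMSE = {rrmse:.1f} %")

In an application such as the study's, the fit would of course use the training measurements and the scores would be computed on the independent validation subset.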


2018
Vol 616
pp. A13
Author(s):
F. Spoto
P. Tanga
F. Mignard
J. Berthier
...  

Context. The Gaia spacecraft of the European Space Agency (ESA) has been securing observations of solar system objects (SSOs) since the beginning of its operations. Data Release 2 (DR2) contains the observations of a selected sample of 14,099 SSOs. These asteroids have already been identified and numbered by the Minor Planet Center. Positions are provided for each Gaia observation at CCD level. As additional information, complementary to astrometry, the apparent brightness of SSOs in the unfiltered G band is also provided for selected observations. Aims. We explain the processing of SSO data and describe the criteria we used to select the sample published in Gaia DR2. We then explore the data set to assess its quality. Methods. To exploit the main solar system data product in Gaia DR2, the epoch astrometry of asteroids, it is necessary to take into account the unusual properties of the uncertainty, as the position information is nearly one-dimensional. When this aspect is handled appropriately, an orbit fit can be obtained with post-fit residuals that are overall consistent with the a priori error model used to define the individual values of the astrometric uncertainty. The role of both random and systematic errors is described. The distribution of residuals allowed us to identify possible contaminants in the data set (such as stars). Photometry in the G band was compared to values computed from reference asteroid shapes and to the flux registered at the corresponding epochs by the red and blue photometers (RP and BP). Results. The overall astrometric performance is close to expectations, with an optimal brightness range of G ~ 12–17. In this range, the typical transit-level accuracy is well below 1 mas. For fainter asteroids, growing photon noise degrades the performance. Asteroids brighter than G ~ 12 are affected by a lower performance of the signal processing. The dramatic improvement brought by Gaia DR2 astrometry of SSOs is demonstrated by comparisons with archive data and by preliminary tests on the detection of subtle non-gravitational effects.
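
A hedged sketch of how such nearly one-dimensional uncertainty enters an orbit fit (the notation here is assumed, not taken from the paper): each observation contributes a chi-square term weighted by a covariance that is tight along the scan (AL) direction and very loose across it (AC),

\chi^2_i \;=\; \mathbf{r}_i^{\mathsf T} C_i^{-1} \mathbf{r}_i,
\qquad
C_i \;=\; R(\theta_i)
\begin{pmatrix} \sigma_{\mathrm{AL}}^2 & 0 \\ 0 & \sigma_{\mathrm{AC}}^2 \end{pmatrix}
R(\theta_i)^{\mathsf T},
\qquad
\sigma_{\mathrm{AL}} \ll \sigma_{\mathrm{AC}},

where \mathbf{r}_i is the observed-minus-computed position and R(\theta_i) rotates the scan frame onto the sky; the fit is thus effectively constrained only along the scan direction of each transit.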


2015
Vol 12 (4)
pp. 4081-4155
Author(s):
A. Gallice
B. Schaefli
M. Lehning
M. P. Parlange
H. Huwald

Abstract. The development of stream temperature regression models at regional scales has regained some popularity over the past years. These models are used to predict stream temperature in ungauged catchments in order to assess the impact of human activities or climate change on riverine fauna over large spatial areas. A comprehensive literature review presented in this study shows that the temperature metrics predicted by the majority of models correspond to yearly aggregates, such as the popular annual maximum weekly mean temperature (MWMT). As a consequence, current models are often unable to predict the annual cycle of stream temperature, nor can the majority of them forecast the interannual variation of stream temperature. This study presents a new model to estimate the monthly mean stream temperature of ungauged rivers over multiple years in an Alpine country (Switzerland). Contrary to the models developed to date, which mostly rely upon statistical regression to express stream temperature as a function of physiographic and climatic variables, this one rests upon the analytical solution to a simplified version of the energy-balance equation over an entire stream network. This physically based approach presents several advantages: (1) the functional form linking stream temperature to the predictor variables is obtained directly from first principles, (2) the spatial extent over which the predictor variables are averaged arises naturally during model development, and (3) the regression coefficients can be interpreted from a physical point of view, so their values can be constrained to remain within plausible bounds. The evaluation of the model over a new, freely available data set shows that the monthly mean stream temperature curve can be reproduced with a root mean square error of 1.3 °C, similar in precision to the predictions obtained with a multi-linear regression model. We illustrate through a simple example how the physical basis of the model can be used to gain more insight into stream temperature dynamics at regional scales.
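
A hedged sketch of the kind of simplification the abstract alludes to (the notation is assumed, not the paper's): linearizing the net surface energy exchange around an equilibrium temperature T_e turns the energy balance into a first-order relaxation along the stream network,

\frac{\mathrm{d}T_w}{\mathrm{d}x} \;=\; \frac{T_e - T_w}{L},
\qquad
T_w(x) \;=\; T_e + \bigl(T_{w,0} - T_e\bigr)\, e^{-x/L},

whose closed-form solution makes stream temperature an explicit, physically interpretable function of the predictors entering T_e and of the relaxation length L, which is the advantage claimed over purely statistical regressions.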


2021
Author(s):
Vincent Schmitz
Grégoire Wylock
Kamal El Kadi Abderrezzak
Ismail Rifai
Michel Pirotton
...  

Failure of fluvial dykes often leads to devastating consequences in the protected areas. Overtopping flow is, by far, the most frequent cause of failure of fluvial dykes. Numerical modeling of the breaching mechanisms and of the induced flow is crucial to assess the risk and guide emergency plans.

Various types of numerical models have been developed for dam and dyke breach simulations, including 2D and 3D morphodynamic models (e.g., Voltz et al., 2017; Dazzi et al., 2019; Onda et al., 2019). Nevertheless, simpler models are a valuable complement to the detailed models, since they enable fast multiple model runs, e.g. to test a broad range of possible breach locations or to perform uncertainty analyses. Moreover, unlike statistical formulae, physically based lumped models are reasonably accurate and remain valuable for process understanding (Wu, 2013; Zhong et al., 2017; Yanlong, 2020).

Nonetheless, existing lumped physically based models were developed and tested mostly in frontal configurations, i.e. for the breaching of an embankment dam rather than a fluvial dyke. Despite similarities in the processes, the breaching mechanisms involved in the case of fluvial dykes differ due to several factors, such as a loss of symmetry and flow momentum parallel to the breach (Rifai et al., 2017). There is therefore a need to assess the transfer of existing lumped physically based models to configurations involving fluvial dyke breaching.

Here, we have developed a modular computational modeling framework in which various physically based lumped models of dyke breaching can be implemented. Within this framework, we started with our own implementation of the model presented by Wu (2013) and incorporated a number of changes to it. We then evaluated the model performance on a number of laboratory and field tests covering both frontal (Frank, 2016; Hassan and Morris, 2008) and fluvial (Rifai et al., 2017, 2018; Kakinuma and Shimizu, 2014) configurations. The modular framework also proves particularly suitable for testing the sensitivity to, and uncertainties arising from, assumptions in the model structure and parameters.
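
As a hedged illustration of what a lumped, physically based breach model computes (a generic sketch with invented constants, not our implementation of Wu, 2013): breach outflow follows a weir-type relation, and the breach widens at a rate proportional to the excess of the mean breach velocity over a critical value.

# Illustrative constants, not calibrated values
CD = 1.7     # broad-crested weir coefficient (m^0.5/s)
KD = 1e-4    # soil erodibility (s/m), hypothetical
VC = 0.5     # critical velocity for erosion onset (m/s)
DT = 1.0     # time step (s)

def simulate_breach(h_river, z_crest, b0=0.5, t_end=3600.0):
    """Time-march the breach width b under a constant river level h_river.
    Outflow: broad-crested weir law. Widening: rate proportional to the
    excess of the mean breach velocity over a critical value."""
    b, t, q = b0, 0.0, 0.0
    while t < t_end:
        head = max(h_river - z_crest, 0.0)      # flow depth over the crest
        q = CD * b * head ** 1.5                # breach discharge (m^3/s)
        v = q / (b * head) if head > 0.0 else 0.0
        b += KD * max(v - VC, 0.0) * DT         # lateral erosion of the sides
        t += DT
    return b, q

print(simulate_breach(h_river=3.0, z_crest=2.0))

Each time step costs a handful of arithmetic operations, so thousands of runs can be afforded for sensitivity and uncertainty analysis, which is precisely the use case for lumped models.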


2017
Author(s):
Guadalupe Sanchez Hernandez
Antonio Serrano
Maria Luisa Cancillo

Abstract. Although of great interest, the diffuse component of the erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares ten empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but, in this study, they are applied to the UVER case and tested against experimental measurements. In addition to adapting the various independent variables involved in these models to the UVER range, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. Six models perform notably well, with the best-performing model, RAU3, showing an r² of 0.91 and an rRMSE of 6.1 %. The performance achieved by this model is better than that obtained by previous semi-empirical approaches, with the advantage of being entirely empirical and therefore needing no additional information from physically based models. This study expands previous research into the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.


1994
Vol 363
Author(s):
Robert J. Kee
Aili Ting
Paul A. Spence

Abstract. Most physically based modeling software accepts input in the form of geometry definitions, physical parameters, initial conditions, and boundary conditions, and then, by solving the physical conservation equations, predicts the steady-state or transient behavior of a system or process. There is a growing need for software tools that can themselves control or manipulate the physically based models in ways that enhance the usability of the models for equipment design and process optimization. These tools fall broadly into the following categories: sensitivity analysis, parameter estimation, inverse problems, dynamic optimization, and real-time control. This paper discusses the development and application of such modeling tools generally, drawing examples from a specific RTP (rapid thermal processing) reactor design. These techniques significantly accelerate the optimal design of processes and the concurrent engineering of real-time process-control algorithms.
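
Of the categories listed, parameter estimation is perhaps the easiest to sketch: the physical model is wrapped in a least-squares objective so that an optimizer adjusts uncertain physical parameters until predictions match measurements. The example below is minimal and hypothetical (a toy thermal model, not the paper's RTP code):

import numpy as np
from scipy.optimize import least_squares

def model_temperature(t, h, eps):
    """Toy lumped thermal model: exponential approach to a setpoint, with
    heat-transfer coefficient h and emissivity eps (both hypothetical)."""
    return 300.0 + 600.0 * (1.0 - np.exp(-(h + 50.0 * eps) * t / 100.0))

# Synthetic "measurements" generated with known parameters plus noise
t_meas = np.linspace(0.0, 10.0, 20)
T_meas = model_temperature(t_meas, 8.0, 0.4) \
    + np.random.default_rng(1).normal(0.0, 2.0, 20)

# Recover (h, eps) by minimizing the residual between model and data
fit = least_squares(lambda p: model_temperature(t_meas, *p) - T_meas,
                    x0=[5.0, 0.5])
print(fit.x)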


This review article discusses the possibilities of using fractal mathematical analysis to solve scientific and applied problems in modern biology and medicine. The authors show that only such an approach, rooted in nonlinear mechanics, makes it possible to quantify the chaotic component of the structure and function of living systems, which constitutes a priori important additional information and expands, in particular, the possibilities of diagnostics, differential diagnosis, and prediction of the course of physiological and pathological processes. A number of examples demonstrate the specific advantages of using fractal analysis for these purposes. The authors conclude that wider use of fractal analysis methods in medical and biological research is promising.
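
As a concrete illustration of the kind of quantification the review describes, the box-counting estimate of fractal dimension counts the boxes N(ε) occupied by a structure at shrinking scales ε and takes the slope of log N versus log(1/ε); the sketch below is a generic implementation, not taken from the review:

import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a 2-D point set: normalize to
    the unit square, count occupied boxes N at each grid scale, and fit the
    slope of log N against log(scale)."""
    pts = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
    counts = []
    for n in scales:
        idx = np.clip(np.floor(pts * n).astype(int), 0, n - 1)
        counts.append(len(np.unique(idx, axis=0)))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Sanity check: points along a straight line should give a dimension near 1
x = np.linspace(0.0, 1.0, 5000)
print(box_counting_dimension(np.column_stack([x, 0.5 * x])))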

