Patterns in time-dependent parameters reveal deficits of a catchment-scale herbicide transport model

Author(s):  
Lorenz Ammann ◽  
Fabrizio Fenicia ◽  
Tobias Doppler ◽  
Christian Stamm ◽  
Peter Reichert

<p>Many hydrological systems behave stochastically at the spatiotemporal scales at which we observe them. The reasons are insufficient quantity or quality of input observations and model structural errors, the effects of which vary over time. Treating model parameters as time-dependent stochastic processes can account for such effects. This approach differs from using deterministic models in combination with a stochastic error term on output and/or input. We start from an existing deterministic conceptual bucket model, which was developed and calibrated to jointly predict streamflow and herbicide pollution observed in a small stream draining an agricultural catchment in the Swiss midlands. The model considers sorption and degradation of herbicides, as well as fast transport processes such as overland flow to shortcuts and macropore flow to tile drains. Subsequently, the model is made stochastic by replacing selected constant parameter values with time-varying stochastic processes. We perform parameter inference according to the Bayesian approach, using a Gibbs sampler to combine Metropolis sampling of the remaining constant parameters with sampling from an Ornstein-Uhlenbeck process for the time-dependent parameter. A preliminary analysis of the resulting parameter time series reveals, for example, model deficits with respect to baseflow, in particular during dry conditions. We show that the resulting patterns can inspire model improvements by providing information that can be interpreted by the modeler. These findings indicate that stochastic models with time-dependent parameters are a promising tool for uncertainty quantification of water quality models and for facilitating the scientific learning process, which may ultimately lead to better predictions.</p>
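The time-dependent parameter prior described in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' code: it simulates an Ornstein-Uhlenbeck process with a simple Euler-Maruyama scheme, and all parameter values are chosen arbitrarily for demonstration.

```python
import numpy as np

def simulate_ou(theta0, mu, gamma, sigma, dt, n_steps, rng=None):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
    d(theta) = -gamma * (theta - mu) dt + sigma dW,
    used as a stochastic prior for a time-varying model parameter."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.empty(n_steps)
    theta[0] = theta0
    for i in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        theta[i] = theta[i - 1] - gamma * (theta[i - 1] - mu) * dt + sigma * dw
    return theta
```

The process reverts toward its long-term mean `mu` with rate `gamma`, so a calibrated trajectory that drifts persistently away from `mu` during, say, dry periods is exactly the kind of pattern the abstract interprets as a model deficit.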

2020 ◽  
Author(s):  
Andrea Bottacin-Busolin

<p>Inverse modeling approaches based on tracer data are often used to characterize transport processes in streams and rivers. This generally involves the calibration of a one-dimensional transport model using concentrations measured in the surface water at one or multiple locations along a stream reach. A major concern is whether the calibrated model parameters are representative of the physical transport processes occurring in the water column and the underlying sediment bed. This study examines the identifiability of the parameters of a physically based one-dimensional stream transport model that represents hyporheic exchange as a vertically attenuated mixing process in accordance with recent experimental evidence. It is shown that, if the average flow velocity and hydraulic radius are not predetermined, there are infinitely many sets of parameter values that generate the same space-time concentration distributions in the water column. The result implies that in-stream transport and hyporheic exchange parameters cannot be determined from measurements of solute breakthrough curves in the surface water alone unless stream discharge and average cross-sectional geometry can be independently estimated.</p>


Soil Research ◽  
2012 ◽  
Vol 50 (2) ◽  
pp. 145 ◽  
Author(s):  
G. M. Lucci ◽  
R. W. McDowell ◽  
L. M. Condron

It is important to recognise source areas of phosphorus (P) in agricultural catchments and to understand how they contribute to catchment losses of P in order to effectively target mitigation strategies to decrease losses to surface waters. In a small dairy catchment (4.1 ha), soil physical properties and overland flow from pasture, a laneway, and around a watering trough were measured, together with subsurface flows from pasture and catchment discharge. Soil around the trough and in the laneway was enriched in Olsen P (56 and 201 mg P/kg, respectively) compared with the pasture (24 mg P/kg), and had a greater bulk density resulting from more frequent use by animals. Dissolved P losses from lane and trough plots were greatly enhanced by dung. At the catchment scale, sources and transport processes resulted in losses mainly in the particulate P form (0.21 mg/L), while dissolved reactive P (DRP) concentrations were enriched during storm events (0.08 mg/L). Subsurface flow was found to be an important contributor of discharge and likely P losses, and this warrants further investigation. The scaling up of overland-flow plot data suggested that the laneway contributed up to 89% of the DRP load when surface overland flow was likely. This represents a substantial source of P loss on dairy farms. Additionally, the variation of sources and transport processes with season adds another aspect to the critical source area concept and suggests that, given the losses during summer and the high algal availability of dissolved P, mitigation strategies should target decreasing dissolved P loss from the laneway.


2009 ◽  
Vol 13 (4) ◽  
pp. 503-517 ◽  
Author(s):  
W. Castaings ◽  
D. Dartus ◽  
F.-X. Le Dimet ◽  
G.-M. Saulnier

Abstract. Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that variational methods hold considerable potential for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green–Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found to be highly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the initial condition of the optimization when the very common dimension-reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found to be very promising but should be combined with another regularization strategy in order to prevent overfitting.
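As a toy illustration of the sensitivity-analysis idea, the sketch below builds the Jacobian of an invented three-parameter model by finite differences (a simple stand-in for the adjoint-based derivatives discussed in the abstract) and inspects its singular values; the model, parameter values, and scaling are fabricated for illustration only.

```python
import numpy as np

def jacobian_fd(model, params, eps=1e-6):
    """Forward finite-difference Jacobian of a vector-valued model
    with respect to its parameters."""
    base = model(params)
    J = np.empty((base.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += eps
        J[:, j] = (model(p) - base) / eps
    return J

def toy_model(p):
    """Made-up 'hydrograph': the response is dominated by two
    parameter directions; the third contributes very little."""
    t = np.linspace(0.0, 1.0, 50)
    return p[0] * np.exp(-t) + p[1] * t + 0.01 * p[2] * t**2

J = jacobian_fd(toy_model, np.array([1.0, 0.5, 0.3]))
s = np.linalg.svd(J, compute_uv=False)  # singular values, descending
```

A rapidly decaying singular-value spectrum is the numerical signature of the abstract's conclusion: most of the variability in a high-dimensional parameter space is captured by a few orthogonal directions.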


2020 ◽  
Author(s):  
Sina Khatami

Catchment models are conventionally evaluated in terms of their response surface or likelihood surface constructed from model runs using different sets of model parameters. Model evaluation methods are mainly based upon the concept of the equifinality of model structures or parameter sets. The operational definition of equifinality is that multiple model structures/parameters are equally capable of producing acceptable simulations of catchment processes such as runoff. Examining various aspects of this convention, in this thesis I demonstrate its shortcomings and introduce improvements, including new approaches and insights for evaluating catchment models as multiple working hypotheses (MWH). First (Chapter 2), arguing that there is more to equifinality than just model structures/parameters, I propose a theoretical framework to conceptualise various facets of equifinality, based on a meta-synthesis of a broad range of literature across geosciences, system theory, and philosophy of science. I distinguish between process-equifinality (equifinality within the real-world systems/processes) and model-equifinality (equifinality within models of real-world systems), explain various aspects of each of these two facets, and discuss their implications for hypothesis testing and modelling of hydrological systems under uncertainty. Second (Chapter 3), building on this theoretical framework, I propose that characterising model-equifinality based on model internal fluxes — instead of model parameters, which is the current approach to account for model-equifinality — provides valuable insights for evaluating catchment models. I develop a new method for model evaluation — called flux mapping — based on the equifinality of runoff generating fluxes of large ensembles of catchment model simulations (1 million model runs for each catchment).
Evaluating the model behaviour within the flux space is a powerful approach, beyond the convention, to formulate testable hypotheses for runoff generation processes at the catchment scale. Third (Chapter 4), I further explore the dependency of the flux map of a catchment model upon the choice of model structure and parameterisation, error metric, and data information content. I compare two catchment models (SIMHYD and SACRAMENTO) across 221 Australian catchments (known as Hydrologic Reference Stations, HRS) using multiple error metrics. I particularly demonstrate the fundamental shortcomings of two widely used error metrics — i.e. Nash–Sutcliffe efficiency and Willmott’s refined index of agreement — in model evaluation. I develop the skill score version of Kling–Gupta efficiency (KGEss), and argue that it is a more reliable error metric than the other metrics. I also compare two strategies of random sampling (Latin Hypercube Sampling) and guided search (Shuffled Complex Evolution) for model parameterisation, and discuss their implications in evaluating catchment models as MWH. Finally (Chapter 5), I explore how catchment characteristics (physiographic, climatic, and streamflow response characteristics) control the flux map of catchment models (i.e. runoff generation hypotheses). To this end, I formulate runoff generating hypotheses from a large ensemble of SIMHYD simulations (1 million model runs in each catchment). These hypotheses are based on the internal runoff fluxes of SIMHYD — namely infiltration excess overland flow, interflow and saturation excess overland flow, and baseflow — which represent runoff generation at catchment scale. I examine the dependency of these hypotheses on 22 different catchment attributes across 186 of the HRS catchments with acceptable model performance and sufficient parameter sampling.
The model performance of each simulation is evaluated using the KGEss metric benchmarked against the catchment-specific calendar-day average observed flow model, which is more informative than the conventional benchmark of overall average observed flow. I identify catchment attributes that control the degree of equifinality of model runoff fluxes. A higher degree of flux equifinality implies larger uncertainties associated with the representation of runoff processes at catchment scale, and hence poses a greater challenge for reliable and realistic simulation and prediction of streamflow. The findings of this chapter provide insights into the functional connectivity of catchment attributes and the internal dynamics of model runoff fluxes.
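The skill-score metric described above combines the Kling–Gupta efficiency with a benchmark transformation. A minimal sketch, assuming the standard KGE decomposition into correlation, variability ratio, and bias ratio; the benchmark series passed in would be, e.g., a calendar-day average flow model:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2),
    with correlation r, variability ratio a = sd(sim)/sd(obs),
    and bias ratio b = mean(sim)/mean(obs)."""
    r = np.corrcoef(sim, obs)[0, 1]
    a = np.std(sim) / np.std(obs)
    b = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1.0)**2 + (a - 1.0)**2 + (b - 1.0)**2)

def kge_skill_score(sim, obs, bench):
    """Skill-score form: (KGE_sim - KGE_bench) / (1 - KGE_bench).
    Positive values mean the model beats the benchmark."""
    k_sim, k_bench = kge(sim, obs), kge(bench, obs)
    return (k_sim - k_bench) / (1.0 - k_bench)
```

Because the score is normalised by the benchmark's headroom, a KGEss of 0 means "no better than the benchmark" regardless of how demanding that benchmark is.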


1992 ◽  
Vol 23 (2) ◽  
pp. 89-104 ◽  
Author(s):  
Ole H. Jacobsen ◽  
Feike J. Leij ◽  
Martinus Th. van Genuchten

Breakthrough curves of Cl and ³H₂O were obtained during steady unsaturated flow in five lysimeters containing an undisturbed coarse sand (Orthic Haplohumod). The experimental data were analyzed in terms of the classical two-parameter convection-dispersion equation and a four-parameter two-region type physical nonequilibrium solute transport model. Model parameters were obtained by both curve fitting and time moment analysis. The four-parameter model provided a much better fit to the data for three soil columns, but performed only slightly better for the two remaining columns. The retardation factor for Cl was about 10% less than for ³H₂O, indicating some anion exclusion. For the four-parameter model the average immobile water fraction was 0.14 and the Peclet numbers of the mobile region varied between 50 and 200. Time moment analysis proved to be a useful tool for quantifying the breakthrough curve (BTC), although the moments were found to be sensitive to scatter in the measured data at later times. Also, fitted parameters described the experimental data better than moment-generated parameter values.
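Time moment analysis of the kind used here reduces a BTC to its area (zeroth moment), mean arrival time, and temporal variance. A minimal sketch on synthetic data (not the lysimeter measurements), using the trapezoidal rule:

```python
import numpy as np

def btc_moments(t, c):
    """Temporal moments of a breakthrough curve c(t): area m0,
    mean arrival time m1/m0, and variance m2/m0 - (m1/m0)^2,
    from which retardation and dispersion can be estimated."""
    dt = np.diff(t)

    def trap(y):  # trapezoidal-rule integral of y over t
        return np.sum(0.5 * (y[1:] + y[:-1]) * dt)

    m0 = trap(c)
    m1 = trap(t * c)
    m2 = trap(t**2 * c)
    mean_t = m1 / m0
    var_t = m2 / m0 - mean_t**2
    return m0, mean_t, var_t
```

The abstract's caveat shows up directly here: the `t**2` weighting in the second moment amplifies any measurement scatter in the late-time tail of the curve.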


Author(s):  
Daniel Bittner ◽  
Beatrice Richieri ◽  
Gabriele Chiogna

Abstract. Uncertainties in hydrologic model outputs can arise for many reasons, such as structural, parametric and input uncertainty. Identification of the sources of uncertainties and the quantification of their impacts on model results are important to appropriately reproduce hydrodynamic processes in karst aquifers and to support decision-making. The present study investigates the time-dependent relevance of model input uncertainties, defined as the conceptual uncertainties affecting the representation and parameterization of processes relevant for groundwater recharge, i.e. interception, evapotranspiration and snow dynamics, for the lumped karst model LuKARS. A total of nine different models are applied, three to compute interception (DVWK, Gash and Liu), three to compute evapotranspiration (Thornthwaite, Hamon and Oudin) and three to compute snow processes (Martinec, Girons Lopez and Magnusson). All the input model combinations are tested for the case study of the Kerschbaum spring in Austria. The model parameters are kept constant for all combinations. While parametric uncertainties computed for the same model in previous studies do not show pronounced temporal variations, the results of the present work show that input uncertainties are seasonally varying. Moreover, the input uncertainties of evapotranspiration and snowmelt are higher than the interception uncertainties. The results show that the importance of a specific process for groundwater recharge can be estimated from the respective input uncertainties. These findings have practical implications as they can guide researchers to obtain relevant field data to improve the representation of different processes in lumped parameter models and to support model calibration.
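The experimental design above is a full factorial over the three process representations: with three routines each for interception, evapotranspiration and snow, there are 3 × 3 × 3 = 27 input-model combinations to run with fixed LuKARS parameters. A sketch of the enumeration; the strings are labels only, standing in for the actual model routines:

```python
from itertools import product

# Labels for the routines named in the abstract (stand-ins, not code
# from the study itself).
interception = ["DVWK", "Gash", "Liu"]
evapotranspiration = ["Thornthwaite", "Hamon", "Oudin"]
snow = ["Martinec", "GironsLopez", "Magnusson"]

# Full factorial design: every combination of one routine per process.
combinations = list(product(interception, evapotranspiration, snow))
```

Holding the model parameters constant across all 27 runs is what lets the spread among combinations be read as input (conceptual) uncertainty rather than parametric uncertainty.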


Atmosphere ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 272
Author(s):  
Ning Li ◽  
Junli Xu ◽  
Xianqing Lv

Numerous studies have revealed that the sparse spatiotemporal distributions of ground-level PM2.5 measurements affect the accuracy of PM2.5 simulation, especially in large geographical regions. However, the high precision and stability of ground-level PM2.5 measurements make their role irreplaceable in PM2.5 simulations. This article applies a dynamically constrained interpolation methodology (DCIM) to evaluate sparse PM2.5 measurements captured at scattered monitoring sites for national-scale PM2.5 simulations and spatial distributions. The DCIM takes a PM2.5 transport model as a dynamic constraint and characterizes the spatiotemporal variations of key model parameters using the adjoint method to improve the accuracy of PM2.5 simulations. For a comparative assessment of DCIM performance and accuracy, kriging interpolation and orthogonal polynomial fitting using Chebyshev basis functions (COPF), both of which have been shown to achieve high PM2.5 simulation accuracy, were adopted. Results of the cross validation confirm the feasibility of the DCIM. A comparison between the final interpolated values and observations shows that the DCIM is better suited for national-scale simulations than kriging or COPF. Furthermore, the DCIM presents smoother spatially interpolated distributions of the PM2.5 simulations with smaller simulation errors than the other two methods. Admittedly, sparse PM2.5 measurements in highly polluted regions affect the accuracy and plausibility of the interpolated distributions. Adding observations around existing monitoring sites can, to some extent, improve the effectiveness of the DCIM. Compared with kriging interpolation and COPF, the results show that the DCIM used in this study would be more helpful for providing reasonable information for monitoring PM2.5 pollution in China.
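One of the comparison methods, COPF, rests on least-squares fitting in a Chebyshev basis. A one-dimensional sketch with synthetic data (the paper's fitting is spatial and two-dimensional), using NumPy's Chebyshev module:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic "measurements" on [-1, 1]; the signal is invented.
x = np.linspace(-1.0, 1.0, 25)
y = np.sin(2.5 * x) + 0.3 * x**2

coef = C.chebfit(x, y, deg=6)   # least-squares fit in the Chebyshev basis
y_hat = C.chebval(x, coef)      # evaluate the fitted expansion
```

Because Chebyshev polynomials are well conditioned on [-1, 1], the fit stays stable at degrees where an ordinary power-basis polynomial fit would already degrade; this is part of the appeal of such basis functions for spatial PM2.5 fields.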


2017 ◽  
Author(s):  
M. Victoria Carpio-Bernido ◽  
Wilson I. Barredo ◽  
Christopher C. Bernido

2013 ◽  
Vol 17 (2) ◽  
pp. 817-828 ◽  
Author(s):  
M. Stoelzle ◽  
K. Stahl ◽  
M. Weiler

Abstract. Streamflow recession has been investigated by a variety of methods, often involving the fit of a model to empirical recession plots to parameterize a non-linear storage–outflow relationship based on the dQ/dt−Q method. Such recession analysis methods (RAMs) are used to estimate hydraulic conductivity, storage capacity, or aquifer thickness and to model streamflow recession curves for regionalization and prediction at the catchment scale. Numerous RAMs have been published, but little is known about how comparably the resulting recession models distinguish characteristic catchment behavior. In this study we combined three established recession extraction methods with three different methods for fitting the power-law storage–outflow model, to compare the range of recession characteristics that result from these different RAMs. The resulting recession characteristics, including recession time and corresponding storage depletion, were evaluated for 20 meso-scale catchments in Germany. We found plausible ranges for model parameterization; however, calculated recession characteristics varied over two orders of magnitude. While recession characteristics of the 20 catchments derived with the different methods correlate strongly, particularly for the RAMs that use the same extraction method, not all rank the catchments consistently, and the differences among some of the methods are larger than among the catchments. To elucidate this variability we discuss the ambiguous roles of recession extraction procedures, the parameterization of the storage–outflow model, and the limitations of the presented recession plots. The results suggest strong limitations to the comparability of recession characteristics derived with different methods, not only in the model parameters but also in the relative characterization of different catchments.
A multiple-methods approach to investigating streamflow recession characteristics should be considered for applications whenever possible.
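One common strategy for fitting the power-law storage–outflow model is linear regression in log–log space on the dQ/dt−Q point cloud. A minimal sketch on a synthetic linear-reservoir recession; this is one illustrative fitting variant, not a reproduction of any of the specific RAMs compared in the study:

```python
import numpy as np

def fit_storage_outflow(q, dt=1.0):
    """Fit the power-law recession model -dQ/dt = a * Q^b by linear
    regression in log-log space: log(-dQ/dt) = log(a) + b * log(Q)."""
    dqdt = np.diff(q) / dt
    qm = 0.5 * (q[1:] + q[:-1])   # midpoint discharge
    mask = dqdt < 0               # keep recession points only
    x, y = np.log(qm[mask]), np.log(-dqdt[mask])
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b
```

How the recession points are extracted before this step (the `mask` here is deliberately naive) is exactly the "ambiguous role of recession extraction procedures" the abstract discusses: different extraction rules feed different point clouds into the same regression.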

