MFIT 1.0.0: Multi-Flow Inversion of Tracer breakthrough curves in fractured and karst aquifers

2020 ◽  
Vol 13 (6) ◽  
pp. 2905-2924
Author(s):  
Jacques Bodin

Abstract. More than half of the Earth's population depends largely or entirely on fractured or karst aquifers for their drinking water supply. Both the characterization and modeling of these groundwater reservoirs are therefore of worldwide concern. Artificial tracer testing is a widely used method for the characterization of solute (including contaminant) transport in groundwater. Tracer experiments consist of a two-step procedure: (1) introducing a conservative tracer-labeled solution into an aquifer, usually through a sinkhole or a well, and (2) measuring the concentration breakthrough curve (BTC) response(s) at one or several downstream monitoring locations, usually spring(s) or pumping well(s). However, the modeling and interpretation of tracer test responses can be a challenging task in some cases, notably when the BTCs exhibit multiple local peaks and/or extensive backward tailing. MFIT (Multi-Flow Inversion of Tracer breakthrough curves) is a new open-source Windows-based computer package for the analytical modeling of tracer BTCs. This software integrates four transport models that are all capable of simulating single- or multiple-peak and/or heavy-tailed BTCs. The four transport models are encapsulated in a general multiflow modeling framework, which assumes that the spatial heterogeneity of an aquifer can be approximated by a combination of independent one-dimensional channels. Two of the MFIT transport models are believed to be new, as they combine the multiflow approach and the double-porosity concept, which is applied at the scale of the individual channels. Another salient feature of MFIT is its compatibility and interface with the advanced optimization tools of the PEST suite of programs. Hence, MFIT is the first BTC fitting tool that allows for regularized inversion and nonlinear analysis of the postcalibration uncertainty of model parameters.
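The multiflow framework described above treats the observed BTC as a weighted superposition of independent one-dimensional channel responses. A minimal sketch of that idea, using the classical advection-dispersion residence-time density for each channel (parameter values and function names are illustrative assumptions, not MFIT's actual implementation or API):

```python
import numpy as np

def channel_btc(t, x, v, D):
    """Residence-time density for a Dirac tracer pulse in a single 1-D
    advection-dispersion channel of length x (velocity v, dispersion D)."""
    t = np.asarray(t, dtype=float)
    f = np.zeros_like(t)
    pos = t > 0
    f[pos] = (x / (2.0 * np.sqrt(np.pi * D * t[pos] ** 3))
              * np.exp(-(x - v * t[pos]) ** 2 / (4.0 * D * t[pos])))
    return f

def multiflow_btc(t, x, channels):
    """Weighted sum of independent channel responses; channels is a list of
    (weight, velocity, dispersion) tuples with weights summing to 1."""
    return sum(w * channel_btc(t, x, v, D) for w, v, D in channels)

t = np.linspace(0.01, 48.0, 500)  # hours since injection
# two flow paths (e.g. a fast conduit and a slower fracture route)
btc = multiflow_btc(t, x=1000.0, channels=[(0.6, 120.0, 900.0),
                                           (0.4, 45.0, 400.0)])
```

With distinct channel velocities the superposition produces a double-peaked BTC; adding more channels, or swapping `channel_btc` for a double-porosity kernel, extends the same structure without changing the framework.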



2013 ◽  
Vol 13 (6) ◽  
pp. 15907-15947 ◽  
Author(s):  
K. C. Barsanti ◽  
A. G. Carlton ◽  
S. H. Chung

Abstract. Despite the critical importance for air quality and climate predictions, accurate representation of secondary organic aerosol (SOA) formation remains elusive. An essential addition to the ongoing discussion of improving model predictions is an acknowledgement of the linkages between experimental conditions, parameter optimization and model output, as well as the linkage between empirically derived partitioning parameters and the physicochemical properties of SOA they represent in models. In this work, advantages of the volatility basis set (VBS) modeling approach are exploited to develop parameters for use in the computationally efficient and widely used two-product (2p) SOA modeling framework, standard in chemical transport models such as CMAQ (Community Multiscale Air Quality) and GEOS-Chem (Goddard Earth Observing System–Chemistry). Calculated SOA yields and mass loadings obtained using the newly developed 2p-VBS parameters and existing 2p and VBS parameters are compared with observed yields and mass loadings from a comprehensive list of published smog chamber studies to determine a "best available" set of SOA modeling parameters. SOA and PM2.5 levels are simulated using CMAQ v.4.7.1; results are compared for a base case (with default 2p CMAQ parameters) and two "best available" parameter cases chosen to illustrate the high- and low-NOx limits of biogenic SOA formation from monoterpenes. Comparisons of published smog chamber data with SOA yield predictions illustrate that: (1) SOA yields for naphthalene and for cyclic and >C5 alkanes are not well represented using either newly developed (2p-VBS) or existing (2p and VBS) parameters for low-yield aromatics and lumped alkanes, respectively; and (2) for 4 of the 7 volatile organic compound + oxidant systems, the 2p-VBS parameters better represent existing data.
Using the "best available" parameters (a combination of published 2p and newly derived 2p-VBS values), predicted SOA mass and PM2.5 concentrations increase by up to 10–15% and 7%, respectively, for the high-NOx case and up to 215% (~3 μg m−3) and 55%, respectively, for the low-NOx case. The ability to robustly assign "best available" parameters, however, is limited due to insufficient data for photo-oxidation of diverse monoterpenes and sesquiterpenes under a variety of atmospherically relevant NOx conditions. These results are discussed in terms of implications for current chemical transport model simulations, and recommendations are provided for future measurement and modeling efforts.
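The two-product (2p) framework discussed above follows the Odum partitioning formulation, in which SOA yield grows with the absorbing organic mass loading. A minimal sketch with illustrative (not fitted) parameter values:

```python
def soa_yield_2p(M_o, alphas, Ks):
    """Odum two-product SOA yield:
    Y = M_o * sum_i alpha_i * K_i / (1 + K_i * M_o),
    with M_o the absorbing organic mass loading (ug m-3), alpha_i the
    mass-based stoichiometric product yields, and K_i the gas-particle
    partitioning coefficients (m3 ug-1)."""
    return M_o * sum(a * K / (1.0 + K * M_o) for a, K in zip(alphas, Ks))

# yield rises with loading as more of the semivolatile products partition
# to the particle phase (parameter values are illustrative, not fitted)
low = soa_yield_2p(1.0, alphas=(0.10, 0.25), Ks=(0.1, 0.004))
high = soa_yield_2p(50.0, alphas=(0.10, 0.25), Ks=(0.1, 0.004))
```

Fitting `alphas` and `Ks` to chamber yield data at several loadings is what produces the empirically derived 2p parameters the abstract refers to.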


1992 ◽  
Vol 23 (2) ◽  
pp. 89-104 ◽  
Author(s):  
Ole H. Jacobsen ◽  
Feike J. Leij ◽  
Martinus Th. van Genuchten

Breakthrough curves of Cl and 3H2O were obtained during steady unsaturated flow in five lysimeters containing an undisturbed coarse sand (Orthic Haplohumod). The experimental data were analyzed in terms of the classical two-parameter convection-dispersion equation and a four-parameter two-region type physical nonequilibrium solute transport model. Model parameters were obtained by both curve fitting and time moment analysis. The four-parameter model provided a much better fit to the data for three soil columns, but performed only slightly better for the two remaining columns. The retardation factor for Cl was about 10% less than for 3H2O, indicating some anion exclusion. For the four-parameter model, the average immobile water fraction was 0.14 and the Peclet numbers of the mobile region varied between 50 and 200. Time moment analysis proved to be a useful tool for quantifying the breakthrough curve (BTC), although the moments were found to be sensitive to experimental scatter in the measured data at larger times. Also, fitted parameters described the experimental data better than moment-generated parameter values.
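The time moment analysis mentioned above estimates transport parameters directly from the temporal moments of the BTC. A sketch under the common leading-order relations for a Dirac input (mean arrival time mu1 = x/v, temporal variance 2Dx/v^3), which is an assumption here; the exact relations depend on the boundary conditions used in the original study:

```python
import numpy as np

def temporal_moments(t, c):
    """Zeroth moment, mean arrival time, and central second moment of a BTC."""
    m0 = np.trapz(c, t)                        # tracer mass recovery (area)
    mu1 = np.trapz(t * c, t) / m0              # mean arrival time
    mu2c = np.trapz((t - mu1) ** 2 * c, t) / m0  # temporal variance
    return m0, mu1, mu2c

def cde_params_from_moments(x, mu1, mu2c):
    """Leading-order CDE parameter estimates from moments:
    v = x / mu1,  Pe = 2 * mu1**2 / mu2c,  D = v * x / Pe."""
    v = x / mu1
    Pe = 2.0 * mu1 ** 2 / mu2c
    return v, v * x / Pe
```

The sensitivity to late-time scatter noted in the abstract comes from the `t**n` weighting: small concentration errors at large `t` are amplified in the higher moments.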


2021 ◽  
Vol 17 (9) ◽  
pp. e1009332
Author(s):  
Fredrik Allenmark ◽  
Ahu Gokce ◽  
Thomas Geyer ◽  
Artyom Zinchenko ◽  
Hermann J. Müller ◽  
...  

In visual search tasks, repeating features or the position of the target results in faster response times. Such inter-trial ‘priming’ effects occur not just for repetitions from the immediately preceding trial but also from trials further back. A paradigm known to produce particularly long-lasting inter-trial effects (of the target-defining feature, target position, and response feature) is the ‘priming of pop-out’ (PoP) paradigm, which typically uses sparse search displays and random swapping across trials of target- and distractor-defining features. However, the mechanisms underlying these inter-trial effects are still not well understood. To address this, we applied a modeling framework combining an evidence accumulation (EA) model with different computational updating rules of the model parameters (i.e., the drift rate and starting point of EA) for different aspects of stimulus history, to data from a (previously published) PoP study that had revealed significant inter-trial effects from several trials back for repetitions of the target color, the target position, and the (response-critical) target feature. By performing a systematic model comparison, we aimed to determine which EA model parameter and which updating rule for that parameter best accounts for each inter-trial effect and the associated n-back temporal profile. We found that, in general, our modeling framework could accurately predict the n-back temporal profiles. Further, target color- and position-based inter-trial effects were best understood as arising from the redistribution of a limited-capacity weight resource that determines the EA rate. In contrast, response-based inter-trial effects were best explained by a bias of the starting point towards the response associated with a previous target; this bias appeared largely tied to the position of the target.
These findings elucidate how our cognitive system continually tracks, and updates an internal predictive model of, a number of separable stimulus and response parameters in order to optimize task performance.
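The two updating mechanisms identified above (a limited-capacity weight determining the drift rate, and a starting-point bias toward the previous response) can be illustrated with a toy trial loop. This is a stylized sketch of the kind of rules compared in the study; the actual models, parameters, and update equations differ:

```python
import random

def simulate_pop_series(n_trials=200, lr_w=0.3, lr_z=0.4, seed=1):
    """Toy trial-by-trial sketch: a limited-capacity attentional weight over
    two target colors sets the EA drift rate, and the starting point is
    nudged toward the previous response. Returns per-trial (drift, start)."""
    rng = random.Random(seed)
    w = {'red': 0.5, 'green': 0.5}   # weights redistribute but always sum to 1
    z = 0.5                          # starting point in [0, 1]; 0.5 = unbiased
    history = []
    for _ in range(n_trials):
        color = rng.choice(['red', 'green'])
        response = rng.choice([0, 1])
        drift = w[color]             # color repetition -> higher drift -> faster RT
        history.append((drift, z))
        # redistribute the limited weight resource toward the current color
        other = 'green' if color == 'red' else 'red'
        w[color] += lr_w * w[other]
        w[other] -= lr_w * w[other]
        # bias the next trial's starting point toward the current response
        z += lr_z * (response - z)
    return history

hist = simulate_pop_series()
```

Because the weight is redistributed rather than freely incremented, a gain for the repeated color is necessarily a loss for the alternative, which is what gives the n-back profiles their gradual decay in this account.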


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10975
Author(s):  
Nicos Haralabidis ◽  
Gil Serrancolí ◽  
Steffi Colyer ◽  
Ian Bezodis ◽  
Aki Salo ◽  
...  

Biomechanical simulation and modelling approaches have the potential to make a meaningful impact within applied sports settings, such as sprinting. However, for this to be realised, such approaches must first undergo a thorough quantitative evaluation against experimental data. We developed a musculoskeletal modelling and simulation framework for sprinting, with the objective of evaluating its ability to reproduce experimental kinematics and kinetics data for different sprinting phases. This was achieved by performing a series of data-tracking calibration (individual and simultaneous) and validation simulations that also featured the generation of dynamically consistent simulated outputs and the determination of foot-ground contact model parameters. The simulated values from the calibration simulations were found to be in close agreement with the corresponding experimental data, particularly for the kinematics (average root mean squared differences (RMSDs) less than 1.0° and 0.2 cm for the rotational and translational kinematics, respectively) and ground reaction force (highest average percentage RMSD of 8.1%). Minimal differences in tracking performance were observed when concurrently determining the foot-ground contact model parameters from each of the individual or simultaneous calibration simulations. The validation simulation yielded results that were comparable (RMSDs less than 1.0° and 0.3 cm for the rotational and translational kinematics, respectively) to those obtained from the calibration simulations. This study demonstrated the suitability of the proposed framework for performing future predictive simulations of sprinting, and gives confidence in its use to assess the cause-effect relationships of technique modification in relation to performance. Furthermore, this is the first study to provide dynamically consistent three-dimensional muscle-driven simulations of sprinting across different phases.
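The percentage RMSD metric quoted above can be normalized in several ways; the sketch below uses normalization by the range of the experimental signal, which is one common convention and an assumption here (the study may normalize differently):

```python
import numpy as np

def percentage_rmsd(sim, exp):
    """RMSD between a simulated and an experimental trace, expressed as a
    percentage of the experimental signal's range."""
    sim, exp = np.asarray(sim, dtype=float), np.asarray(exp, dtype=float)
    rmsd = np.sqrt(np.mean((sim - exp) ** 2))
    return 100.0 * rmsd / (exp.max() - exp.min())
```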


2021 ◽  
Vol 28 (4) ◽  
Author(s):  
Selwin Hageraats ◽  
Mathieu Thoury ◽  
Stefan Stanescu ◽  
Katrien Keune

X-ray linear dichroism (XLD) is a fundamental property of many ordered materials that can for instance provide information on the origin of magnetic properties and the existence of differently ordered domains. Conventionally, measurements of XLD are performed on single crystals, crystalline thin films, or highly ordered nanostructure arrays. Here, it is demonstrated how quantitative measurements of XLD can be performed on powders, relying on the random orientation of many particles instead of the controlled orientation of a single ordered structure. The technique is based on a scanning X-ray transmission microscope operated in the soft X-ray regime. The use of a Fresnel zone plate allows X-ray absorption features to be probed at ∼40 nm lateral resolution – a scale small enough to probe the individual crystallites in most powders. Quantitative XLD parameters were then retrieved by determining the intensity distributions of certain diagnostic dichroic absorption features, estimating the angle between their transition dipole moments, and fitting the distributions with four-parameter dichroic models. Analysis of several differently produced ZnO powders shows that the experimentally obtained distributions indeed follow the theoretical model for XLD. Making use of Monte Carlo simulations to estimate uncertainties in the calculated dichroic model parameters, it was established that longer X-ray exposure times lead to a decrease in the amplitude of the XLD effect of ZnO.
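The Monte Carlo uncertainty estimation step described above can be sketched generically: refit noise-perturbed replicates of the data and take the spread of the fitted dichroic amplitude. The study fits four-parameter dichroic models; the sketch below substitutes a simpler two-parameter cos² model (an assumption for illustration), since it is linear in cos²θ and can be fitted with a plain least-squares line fit:

```python
import numpy as np

def fit_dichroic(theta, intensity):
    """Fit I(theta) = a + b * cos^2(theta); b is the dichroic amplitude."""
    b, a = np.polyfit(np.cos(theta) ** 2, intensity, 1)
    return a, b

def mc_amplitude_std(theta, intensity, sigma, n=2000, seed=0):
    """Monte Carlo uncertainty of the amplitude b: refit n replicates of the
    data perturbed with Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    bs = [fit_dichroic(theta, intensity + rng.normal(0.0, sigma, intensity.size))[1]
          for _ in range(n)]
    return float(np.std(bs))
```

The same resampling loop applies unchanged to a four-parameter model; only the fitting routine inside it changes.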


2010 ◽  
Vol 21 (4-5) ◽  
pp. 421-440 ◽  
Author(s):  
J.-P. NADAL ◽  
M. B. GORDON ◽  
J. R. IGLESIAS ◽  
V. SEMESHENKO

We introduce a general framework for modelling the dynamics of the propensity to offend in a population of (possibly interacting) agents. We consider that each agent has an ‘honesty index’ which parameterizes his probability of abiding by the law. This probability also depends on a composite parameter associated with the attractiveness of the crime outcome and of the crime setting (the context which makes a crime more or less likely to occur, such as the presence or absence of a guardian). Within this framework we explore some consequences of the working hypothesis that punishment has a deterrent effect, assuming that, after a criminal act, an agent's honesty index may increase if he is caught and decrease otherwise. We provide both analytical and numerical results. We show that in the space of parameters characterizing the probability of punishment, there are two ‘phases’: one corresponding to a population with a low crime rate and the other to a population with a large crime rate. We speculate on the possible existence of a self-organized state in which, due to society's reaction against criminal activity, the population dynamics would be stabilized on the critical line, leading to a wide distribution of propensities to offend in the population. In view of empirical work on the causes of the recent evolution of crime rates in developed countries, we discuss how changes in socio-economic conditions may affect the model parameters, and hence the crime rate in the population. We suggest possible extensions of the model that will allow us to take into account more realistic features.
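The honesty-index dynamics described above can be sketched as a simple agent loop. This is a stylized toy version of the mechanism, with illustrative parameter names and values rather than the paper's actual equations; it reproduces only the qualitative two-phase behavior:

```python
import random

def simulate_population(n=1000, steps=200, p_catch=0.3, gain=0.05, seed=7):
    """Toy agent dynamics: each agent's honesty index h in [0, 1] is its
    probability of abiding by the law on a given step. After committing a
    crime, h increases by `gain` if the agent is caught (deterrence) and
    decreases by `gain` otherwise. Returns the indices and the mean
    propensity to offend (1 - h)."""
    rng = random.Random(seed)
    h = [rng.random() for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            if rng.random() >= h[i]:            # agent commits a crime
                if rng.random() < p_catch:      # caught and punished
                    h[i] = min(1.0, h[i] + gain)
                else:                           # crime pays, honesty erodes
                    h[i] = max(0.0, h[i] - gain)
    crime_rate = sum(1.0 - hi for hi in h) / n
    return h, crime_rate
```

Running this with a high versus a low probability of punishment separates the population into the low- and high-crime phases the abstract describes.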


1985 ◽  
Vol 17 (9) ◽  
pp. 13-21 ◽  
Author(s):  
W K. H. Kinzelbach

At present, chlorinated hydrocarbon solvents rank among the major pollutants found in groundwater. In the interpretation of field data and the planning of decontamination measures, numerical transport models may be a valuable tool for the environmental engineer. The applicability of one such model is tested on a case of groundwater pollution by 1,1,1-trichloroethane. The model is composed of a horizontally 2-D flow model and a 3-D ‘random-walk' transport model. It takes into account convective and dispersive transport as well as linear adsorption and a first-order decay reaction. Under certain simplifying assumptions the model allows an adequate reproduction of observed concentrations. Due to uncertainty in the data and the limited comparability of simulated and measured concentrations, the model parameters can only be estimated within bounds. The decay rate of 1,1,1-trichloroethane is estimated to lie between 0 and 0.0005 1/d.
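The random-walk transport concept combines a deterministic advective step with a random dispersive step for each particle; linear adsorption enters as a retardation factor and first-order decay as a particle survival probability. A minimal 1-D sketch with illustrative parameter values (the decay rate is taken within the 0–0.0005 1/d range estimated above; everything else is assumed for illustration):

```python
import math
import random

def random_walk_step(p, v, D, R, lam, dt, rng):
    """Advance one particle one time step: retarded advection plus a Gaussian
    dispersive step, with first-order decay as a survival probability."""
    x, alive = p
    if not alive:
        return p
    x += (v / R) * dt + rng.gauss(0.0, math.sqrt(2.0 * D / R * dt))
    alive = rng.random() < math.exp(-lam * dt)
    return (x, alive)

def simulate_plume(n=5000, steps=100, v=0.5, D=0.05, R=1.2,
                   lam=0.0003, dt=1.0, seed=3):
    """Track n particles released at x = 0; return surviving positions."""
    rng = random.Random(seed)
    parts = [(0.0, True) for _ in range(n)]
    for _ in range(steps):
        parts = [random_walk_step(p, v, D, R, lam, dt, rng) for p in parts]
    return [x for x, alive in parts if alive]
```

The plume's centre of mass travels at the retarded velocity v/R, its spread grows with the dispersion coefficient, and the surviving particle fraction decays as exp(-lam*t).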


2020 ◽  
pp. 107699862094120
Author(s):  
Jean-Paul Fox ◽  
Jeremias Wenzel ◽  
Konrad Klotzke

Standard item response theory (IRT) models have been extended with testlet effects to account for the nesting of items; these are well known as (Bayesian) testlet models or random effect models for testlets. The testlet modeling framework has several disadvantages. A sufficient number of testlet items are needed to estimate testlet effects, and a sufficient number of individuals are needed to estimate testlet variance. The prior for the testlet variance parameter can only represent a positive association among testlet items. The inclusion of testlet parameters significantly increases the number of model parameters, which can lead to computational problems. To avoid these problems, a Bayesian covariance structure model (BCSM) for testlets is proposed, where standard IRT models are extended with a covariance structure model to account for dependences among testlet items. In the BCSM, the dependence among testlet items is modeled without using testlet effects. This approach does not imply any sample size restrictions and is very efficient in terms of the number of parameters needed to describe testlet dependences. The BCSM is compared to the well-known Bayesian random effects model for testlets using a simulation study. Specifically for testlets with a few items, a small number of test takers, or weak associations among testlet items, the BCSM shows more accurate estimation results than the random effects model.
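The key structural difference described above can be illustrated with the implied covariance of one testlet. The sketch below uses a compound-symmetry form, Sigma = sigma2*I + delta*J, as a simplified stand-in for the BCSM's covariance structure (the form and parameter names are illustrative assumptions): unlike a random-effects testlet variance tau^2, which can only produce positive within-testlet association, the additive parameter delta may be negative:

```python
import numpy as np

def bcsm_testlet_cov(k, sigma2, delta):
    """Implied covariance of k testlet items under an additive covariance
    structure Sigma = sigma2 * I + delta * J (J = all-ones matrix).
    delta may be negative provided Sigma remains positive definite,
    i.e. sigma2 > 0 and sigma2 + k * delta > 0."""
    assert sigma2 > 0 and sigma2 + k * delta > 0
    return sigma2 * np.eye(k) + delta * np.ones((k, k))
```

Note that only one extra parameter (delta) is needed per testlet, versus a latent effect per person per testlet in the random-effects formulation, which is the parameter-count advantage the abstract points to.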

