The Impact of Global Sensitivities and Design Measures in Model-Based Optimal Experimental Design

Processes, 2018, Vol 6 (4), pp. 27
Author(s): René Schenkendorf, Xiangzhong Xie, Moritz Rehbein, Stephan Scholl, Ulrike Krewer

In the field of chemical engineering, mathematical models have proven to be an indispensable tool for process analysis, process design, and condition monitoring. To gain the most benefit from model-based approaches, the implemented mathematical models have to be based on sound principles, and they need to be calibrated to the process under study with suitable model parameter estimates. Often, however, the model parameters identified from experimental data are subject to severe uncertainties, leading to incorrect or biased inferences. This applies in particular to pharmaceutical manufacturing, where measurement data are usually limited in quantity and quality when novel active pharmaceutical ingredients are analyzed. Optimally designed experiments, in turn, aim to increase the quality of the gathered data in the most efficient way. Any improvement in data quality results in more precise parameter estimates and more reliable model candidates. The applied methods for parameter sensitivity analysis and the chosen design criteria are crucial for the effectiveness of optimal experimental design. In this work, different design measures based on global parameter sensitivities are critically compared with state-of-the-art concepts that follow simplifying linearization principles. The efficient implementation of the proposed sensitivity measures is explicitly addressed so that they are applicable to complex chemical engineering problems of practical relevance. As a case study, the homogeneous synthesis of 3,4-dihydro-1H-1-benzazepine-2,5-dione, a scaffold for the preparation of various protein kinase inhibitors, is analyzed, followed by a more complex model of biochemical reactions. In both studies, model-based optimal experimental design benefits from global parameter sensitivities combined with proper design measures.
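To make the contrast between linearization-based and global design measures concrete, the following is a minimal Python sketch: a local D-criterion computed from a finite-difference Jacobian versus a crude variance-based (Sobol-like) sensitivity score. The two-parameter kinetic model, parameter ranges, and sampling times are illustrative assumptions, not taken from the paper.

import numpy as np

def model(t, k1, k2):
    # hypothetical two-parameter kinetic response (illustration only)
    return np.exp(-k1 * t) + k2 * t * np.exp(-k1 * t)

def local_d_criterion(t_grid, k1=0.5, k2=0.2, h=1e-6):
    # D-criterion from a finite-difference Jacobian at a nominal parameter point
    J = np.column_stack([
        (model(t_grid, k1 + h, k2) - model(t_grid, k1 - h, k2)) / (2 * h),
        (model(t_grid, k1, k2 + h) - model(t_grid, k1, k2 - h)) / (2 * h),
    ])
    return np.linalg.det(J.T @ J)  # larger means more informative (locally)

def global_sensitivity_score(t_grid, n=2000, seed=0):
    # crude variance-based main-effect proxy, averaged over the sampling grid
    rng = np.random.default_rng(seed)
    k1 = rng.uniform(0.2, 0.8, n)   # assumed parameter ranges
    k2 = rng.uniform(0.05, 0.4, n)
    Y = np.array([model(t_grid, a, b) for a, b in zip(k1, k2)])

    def main_effect(p):
        edges = np.quantile(p, np.linspace(0, 1, 11)[1:-1])
        bins = np.digitize(p, edges)
        cond_means = np.array([Y[bins == b].mean(axis=0) for b in range(10)])
        return (cond_means.var(axis=0) / Y.var(axis=0)).mean()

    return main_effect(k1) + main_effect(k2)

t_candidates = np.linspace(0.5, 10.0, 8)
print("local D-criterion:", local_d_criterion(t_candidates))
print("global sensitivity score:", global_sensitivity_score(t_candidates))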

1985, Vol 248 (3), pp. R378-R386
Author(s): M. H. Nathanson, G. M. Saidel

Optimal experimental design is used to predict the experimental conditions that will allow the "best" estimates of model parameters. A variety of criteria must be considered before an optimal design is chosen. Maximizing the determinant of the information matrix (D optimality), which tends to produce the most precise simultaneous estimates of all parameters, is commonly considered the primary criterion. To complement this criterion, we present another whose effect is to reduce the interaction among the parameter estimates so that changes in any one parameter can be more distinct. This new criterion consists of maximizing the determinant of an appropriately scaled information matrix (M optimality). These criteria are applied jointly in a multiple-objective function. To illustrate the use of these concepts, we develop an optimal experimental design of blood sampling schedules using a detailed ferrokinetic model.
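A small illustration of the two criteria, assuming additive Gaussian measurement noise and scaling of the information matrix to unit diagonal; the exponential response and the sampling schedule are hypothetical, and the exact scaling used by the authors may differ.

import numpy as np

def d_and_m_criteria(J, sigma=1.0):
    # J: (n_samples x n_params) sensitivity matrix of the measured outputs
    F = (J.T @ J) / sigma**2          # Fisher information under i.i.d. Gaussian noise
    d_crit = np.linalg.det(F)         # D-optimality: overall estimation precision
    s = np.sqrt(np.diag(F))
    M = F / np.outer(s, s)            # information matrix scaled to unit diagonal
    m_crit = np.linalg.det(M)         # M-criterion: equals 1 only if estimates are uncorrelated
    return d_crit, m_crit

# toy sampling schedule for y(t) = a*exp(-b*t), sensitivities evaluated at a=1, b=0.3
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
a, b = 1.0, 0.3
J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
print(d_and_m_criteria(J))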


Hydrology, 2021, Vol 8 (3), pp. 102
Author(s): Frauke Kachholz, Jens Tränckner

Land use changes influence the water balance and often increase surface runoff. The resulting impacts on river flow, water level, and flooding should be identified beforehand during spatial planning. In two consecutive papers, we develop a model-based decision support system for quantifying the hydrological and stream hydraulic impacts of land use changes. Part 1 presents the semi-automatic set-up of physically based hydrological and hydraulic models on the basis of geodata analysis for the current state. Appropriate hydrological model parameters for ungauged catchments are derived by a transfer from a calibrated model. In the lowland river basins considered, parameters of surface and groundwater inflow turned out to be particularly important. While the calibration delivers very good to good model results for flow (Evol = 2.4%, R = 0.84, NSE = 0.84), the model performance is good to satisfactory (Evol = −9.6%, R = 0.88, NSE = 0.59) in a different river system parametrized with the transfer procedure. After transferring the concept to a larger area with various small rivers, the current state is analyzed by running simulations based on statistical rainfall scenarios. Results include watercourse section-specific capacities and excess volumes in case of flooding. The developed approach can relatively quickly generate physically reliable and spatially high-resolution results. Part 2 builds on the data generated in part 1 and presents the subsequent approach to assess hydrologic/hydrodynamic impacts of potential land use changes.
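The goodness-of-fit measures quoted above follow standard definitions; the sketch below uses the generic formulas and toy flow series, not the authors' data or code.

import numpy as np

def fit_metrics(q_obs, q_sim):
    # standard formulas for volume error (%), Pearson correlation R and
    # Nash-Sutcliffe efficiency NSE between observed and simulated flows
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    e_vol = 100.0 * (q_sim.sum() - q_obs.sum()) / q_obs.sum()
    r = np.corrcoef(q_obs, q_sim)[0, 1]
    nse = 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
    return {"Evol_%": e_vol, "R": r, "NSE": nse}

print(fit_metrics([1.2, 1.5, 2.0, 1.8], [1.1, 1.6, 1.9, 1.7]))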


1995, Vol 46 (1), pp. 359
Author(s): J Persson, L Hakanson

Bottom dynamic conditions (areas of accumulation, erosion or transportation) in aquatic ecosystems influence the dispersal, sedimentation and recirculation of most substances, such as metals, organic toxins and nutrients. The aim of the present work was to establish a simple and general method to predict sediment types/bottom dynamic conditions in Baltic coastal areas. As a working hypothesis, it is proposed that the morphometry and the absence or presence of an archipelago outside a given coastal area regulate what factors determine the prevailing bottom dynamic conditions. Empirical data on the proportion of accumulation bottoms (BA) were collected from 38 relatively small (1-14 km²) and enclosed coastal areas in the Baltic Sea. Morphometric data were obtained by using a digital technique to transfer information from standard bathymetric maps into a computer. Data were processed by means of multivariate statistical methods. In the first model, based on data from all 38 areas, 55% of the variation in BA among the areas was statistically explained by five morphometric parameters. The data set was then divided into two parts: areas in direct connection with the open sea, and areas inside an archipelago. In the second model, based on data from 15 areas in direct connection with the open sea, 77% of the variation in BA was statistically explained by the mean depth of the deep water (the water mass below 10 m) and the mean slope. In the third model, based on data from 23 areas inside an archipelago, 70% of the variation in BA was statistically explained by the mean slope, the topographic form factor, the proportion of islands and the mean filter factor (which is a relative measure of the impact of winds and waves from outside the area). The model parameters describe the sediment trapping capacity of the areas investigated.
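As an indication of the kind of statistical model described above (a multiple regression explaining the variance in BA), here is a generic sketch on synthetic data; the predictors, coefficients, and noise level are invented for demonstration and are not the study's measurements.

import numpy as np

rng = np.random.default_rng(1)
n = 38                                   # number of coastal areas in the study
X = rng.normal(size=(n, 3))              # stand-ins for morphometric predictors
ba = 0.5 + 0.2 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(scale=0.1, size=n)

A = np.column_stack([np.ones(n), X])     # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, ba, rcond=None)
r2 = 1 - np.sum((ba - A @ coef) ** 2) / np.sum((ba - ba.mean()) ** 2)
print("coefficients:", coef.round(3), "explained variance R^2:", round(r2, 2))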


2017, Vol 33 (5), pp. 1278-1293
Author(s): Timothy Van Daele, Krist V. Gernaey, Rolf H. Ringborg, Tim Börner, Søren Heintz, ...

2020
Author(s): Gabriel Weindel, Royce Anders, F.-Xavier Alario, Boris Burle

Decision-making models based on evidence accumulation processes (the most prolific one being the drift-diffusion model – DDM) are widely used to draw inferences about latent psychological processes from chronometric data. While the observed goodness of fit in a wide range of tasks supports the model’s validity, the derived interpretations have yet to be sufficiently cross-validated with other measures that also reflect cognitive processing. To do so, we recorded electromyographic (EMG) activity along with response times (RT), and used it to decompose every RT into two components: a pre-motor time (PMT) and a motor time (MT). These measures were mapped to the DDM's parameters, thus allowing a test, beyond quality of fit, of the validity of the model’s assumptions and their usual interpretation. In two perceptual decision tasks, performed within a canonical task setting, we manipulated stimulus contrast, speed-accuracy trade-off, and response force, and assessed their effects on PMT, MT, and RT. Contrary to common assumptions, these three factors consistently affected MT. DDM parameter estimates of non-decision processes are thought to include motor execution processes, and they were globally linked to the recorded response execution MT. However, in the fastest trials, where the assumption of independence between decision and non-decision processes was not met, the link was weaker. Overall, the results show a fair concordance between model-based and EMG-based decompositions of RTs, but also establish some limits on the interpretability of decision model parameters linked to response execution.
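To make the RT = PMT + MT decomposition concrete, here is a toy simulation of a drift-diffusion decision time with an added non-decision component; the drift, bound, and non-decision values are arbitrary, and the split of non-decision time into pre- and post-decision parts is a simplifying assumption, not the authors' estimate.

import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(n, drift=1.0, bound=1.0, ndt=0.30, motor=0.12, dt=1e-3):
    # Euler simulation of a symmetric drift-diffusion process plus a non-decision
    # component; non-decision time is split into a pre-decision part and a motor part
    rts, pmts, mts = [], [], []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + rng.normal(scale=np.sqrt(dt))
            t += dt
        mt = motor + rng.normal(scale=0.02)      # motor time: EMG onset -> response
        pmt = t + (ndt - motor)                  # pre-motor time: stimulus -> EMG onset
        rts.append(pmt + mt); pmts.append(pmt); mts.append(mt)
    return np.array(rts), np.array(pmts), np.array(mts)

rt, pmt, mt = simulate_trials(500)
print(f"mean RT = {rt.mean():.3f} s, PMT = {pmt.mean():.3f} s, MT = {mt.mean():.3f} s")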


2018, Vol 5 (8), pp. 180384
Author(s): Andrew Parker, Matthew J. Simpson, Ruth E. Baker

To better understand development, repair and disease progression, it is useful to quantify the behaviour of proliferative and motile cell populations as they grow and expand to fill their local environment. Inferring parameters associated with mechanistic models of cell colony growth using quantitative data collected from carefully designed experiments provides a natural means to elucidate the relative contributions of various processes to the growth of the colony. In this work, we explore how experimental design impacts our ability to infer parameters for simple models of the growth of proliferative and motile cell populations. We adopt a Bayesian approach, which allows us to characterize the uncertainty associated with estimates of the model parameters. Our results suggest that experimental designs that incorporate initial spatial heterogeneities in cell positions facilitate parameter inference without the requirement of cell tracking, while designs that involve uniform initial placement of cells require cell tracking for accurate parameter inference. As cell tracking is an experimental bottleneck in many studies of this type, our recommendations for experimental design provide for significant potential time and cost savings in the analysis of cell colony growth.
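A minimal sketch of Bayesian parameter inference for colony growth, using rejection ABC on a logistic model with hypothetical observation times; the paper's mechanistic models and inference scheme are more elaborate than this illustration.

import numpy as np

rng = np.random.default_rng(2)

def logistic_growth(t, r, n0=50, k=1000):
    # deterministic logistic colony size (carrying capacity k, initial size n0)
    return k / (1 + (k / n0 - 1) * np.exp(-r * t))

t_obs = np.array([6.0, 12.0, 24.0, 48.0])      # hypothetical observation times (h)
true_r = 0.15
data = logistic_growth(t_obs, true_r) + rng.normal(scale=20, size=t_obs.size)

# rejection ABC: keep the prior draws whose simulated data are closest to the observations
prior_r = rng.uniform(0.01, 0.5, 20000)
sims = np.array([logistic_growth(t_obs, r) for r in prior_r])
dist = np.sqrt(((sims - data) ** 2).mean(axis=1))
posterior_r = prior_r[dist < np.quantile(dist, 0.01)]
print(f"posterior mean r = {posterior_r.mean():.3f} (true value {true_r})")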


2019, Vol 68 (5), pp. 730-743
Author(s): Kris V Parag, Oliver G Pybus

Abstract The coalescent process describes how changes in the size or structure of a population influence the genealogical patterns of sequences sampled from that population. The estimation of (effective) population size changes from genealogies that are reconstructed from these sampled sequences is an important problem in many biological fields. Often, population size is characterized by a piecewise-constant function, with each piece serving as a population size parameter to be estimated. Estimation quality depends on both the statistical coalescent inference method employed, and on the experimental protocol, which controls variables such as the sampling of sequences through time and space, or the transformation of model parameters. While there is an extensive literature on coalescent inference methodology, there is comparatively little work on experimental design. The research that does exist is largely simulation-based, precluding the development of provable or general design theorems. We examine three key design problems: temporal sampling of sequences under the skyline demographic coalescent model, spatio-temporal sampling under the structured coalescent model, and time discretization for sequentially Markovian coalescent models. In all cases, we prove that 1) working in the logarithm of the parameters to be inferred (e.g., population size) and 2) distributing informative coalescent events uniformly among these log-parameters, is uniquely robust. “Robust” means that the total and maximum uncertainty of our parameter estimates are minimized, and made insensitive to their unknown (true) values. This robust design theorem provides rigorous justification for several existing coalescent experimental design decisions and leads to usable guidelines for future empirical or simulation-based investigations. Given its persistence among models, this theorem may form the basis of an experimental design paradigm for coalescent inference.
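The intuition behind the robust-design result can be checked numerically: under a piecewise-constant coalescent, each coalescent event carries roughly one unit of Fisher information about log N, so allocating events uniformly among log-parameters equalizes uncertainty. The following Monte Carlo sketch is an illustration under arbitrary lineage counts and population sizes, not the paper's proof.

import numpy as np

rng = np.random.default_rng(3)

def fisher_info_logN(n_events, N=500.0, lineages=20, n_mc=200000):
    # Monte Carlo estimate of the Fisher information about log(N) carried by
    # n_events inter-coalescent waiting times under Kingman coalescent rates
    info, l = 0.0, lineages
    for _ in range(n_events):
        c = l * (l - 1) / 2.0                        # pairwise coalescent rate factor
        w = rng.exponential(scale=N / c, size=n_mc)  # waiting time ~ Exp(c / N)
        score = c * w / N - 1.0                      # d/d(log N) of the log-likelihood
        info += np.mean(score ** 2)                  # expected squared score, ~1 per event
        l -= 1
    return info

print(fisher_info_logN(5))              # ~5, i.e., one unit per event
print(fisher_info_logN(5, N=5000.0))    # ~5 again: insensitive to the value of N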


2011, Vol 15 (11), pp. 3591-3603
Author(s): R. Singh, T. Wagener, K. van Werkhoven, M. E. Mann, R. Crane

Abstract. Projecting how future climatic change might impact streamflow is an important challenge for hydrologic science. The common approach to solve this problem is by forcing a hydrologic model, calibrated on historical data or using a priori parameter estimates, with future scenarios of precipitation and temperature. However, several recent studies suggest that the climatic regime of the calibration period is reflected in the resulting parameter estimates and model performance can be negatively impacted if the climate for which projections are made is significantly different from that during calibration. So how can we calibrate a hydrologic model for historically unobserved climatic conditions? To address this issue, we propose a new trading-space-for-time framework that utilizes the similarity between the predictions under change (PUC) and predictions in ungauged basins (PUB) problems. In this new framework we first regionalize climate dependent streamflow characteristics using 394 US watersheds. We then assume that this spatial relationship between climate and streamflow characteristics is similar to the one we would observe between climate and streamflow over long time periods at a single location. This assumption is what we refer to as trading-space-for-time. Therefore, we change the limits for extrapolation to future climatic situations from the restricted locally observed historical variability to the variability observed across all watersheds used to derive the regression relationships. A typical watershed model is subsequently calibrated (conditioned) on the predicted signatures for any future climate scenario to account for the impact of climate on model parameters within a Bayesian framework. As a result, we can obtain ensemble predictions of continuous streamflow at both gauged and ungauged locations. The new method is tested in five US watersheds located in historically different climates using synthetic climate scenarios generated by increasing mean temperature by up to 8 °C and changing mean precipitation by −30% to +40% from their historical values. Depending on the aridity of the watershed, streamflow projections using adjusted parameters became significantly different from those using historically calibrated parameters if precipitation change exceeded −10% or +20%. In general, the trading-space-for-time approach resulted in a stronger watershed response to climate change for both high and low flow conditions.
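A highly simplified sketch of the trading-space-for-time idea on synthetic data: regress a streamflow signature on a climate index across basins, then use that spatial relationship to predict the signature, and hence a calibration target, for a changed climate at one basin. All numbers are invented; the study regionalizes several signatures and conditions a full watershed model in a Bayesian framework.

import numpy as np

rng = np.random.default_rng(4)
n_basins = 394
aridity = rng.uniform(0.3, 3.0, n_basins)         # stand-in climate index (PET/P)
runoff_ratio = np.clip(0.8 - 0.25 * np.log(aridity), 0.05, 0.95) \
    + rng.normal(scale=0.05, size=n_basins)        # synthetic streamflow signature

# spatial (space-for-time) regression: signature as a function of climate
A = np.column_stack([np.ones(n_basins), np.log(aridity)])
coef, *_ = np.linalg.lstsq(A, runoff_ratio, rcond=None)

# predict the signature for a hypothetical drier/warmer future climate at one basin
# and treat the prediction as a calibration target for the watershed model
aridity_future = 1.8
target = coef[0] + coef[1] * np.log(aridity_future)
print(f"calibration target (runoff ratio) under the future climate: {target:.2f}")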


2015, Vol 8 (3), pp. 791-804
Author(s): J. Reimer, M. Schuerch, T. Slawig

Abstract. The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, especially because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental design, is described together with a lesser-known approach which takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models of different complexity, for sediment concentration in seawater and for sediment accretion on salt marshes, served as application examples. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played only a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.
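To indicate what the two ingredients look like in code, here is a generic sketch of a weighted least-squares (Gauss-Newton) fit and a greedy, locally D-optimal choice of measurement times for an exponential model; this is not the Optimal Experimental Design Toolbox API, and the model, parameter values, and candidate times are illustrative.

import numpy as np

def model(t, p):                         # y = p0 * exp(-p1 * t), a toy stand-in
    return p[0] * np.exp(-p[1] * t)

def jacobian(t, p):
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

def wls_fit(t, y, w, p0, n_iter=20):
    # Gauss-Newton weighted least squares: minimize sum_i w_i * (y_i - f(t_i, p))^2
    p, W = np.array(p0, float), np.diag(w)
    for _ in range(n_iter):
        J, r = jacobian(t, p), y - model(t, p)
        p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return p

def greedy_d_optimal(candidates, p, n_pick):
    # pick measurement times one at a time, maximizing det(J^T J); the first
    # pick is near-arbitrary because the information matrix is still singular
    chosen = []
    for _ in range(n_pick):
        def crit(t):
            J = jacobian(np.array(chosen + [t]), p)
            return np.linalg.det(J.T @ J)
        chosen.append(max(candidates, key=crit))
    return sorted(chosen)

p_true = [2.0, 0.4]
design = greedy_d_optimal(list(np.linspace(0.2, 10.0, 50)), p_true, n_pick=4)
y_obs = model(np.array(design), p_true) + np.random.default_rng(5).normal(scale=0.05, size=4)
p_hat = wls_fit(np.array(design), y_obs, np.ones(4), p0=[1.5, 0.5])
print("design:", np.round(design, 2), "estimate:", np.round(p_hat, 3))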

