GIS-based applications of sensitivity analysis for sewer models

2012 ◽  
Vol 65 (7) ◽  
pp. 1215-1222 ◽  
Author(s):  
M. Mair ◽  
R. Sitzenfrei ◽  
M. Kleidorfer ◽  
M. Möderl ◽  
W. Rauch

Sensitivity analysis (SA) evaluates the impact of changes in model parameters on model predictions. Such an analysis is commonly used when developing or applying environmental models to improve the understanding of underlying system behaviours and the impact and interactions of model parameters. The novelty of this paper is a geo-referenced visualization of sensitivity indices for model parameters in a combined sewer model using geographic information system (GIS) software. The result is a collection of maps for each analysis, where sensitivity indices (calculated for model parameters of interest) are illustrated according to a predefined symbology. In this paper, four types of maps (an uncertainty map, calibration map, vulnerability map, and design map) are created for an example case study. This article highlights the advantages and limitations of GIS-based SA of sewer models. The conclusion shows that for all analyzed applications, GIS-based SA is useful for analyzing, discussing and interpreting the model parameter sensitivity and its spatial dimension. The method can lead to a comprehensive view of the sewer system.
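As a hedged illustration of the mapping step described above, the sketch below classifies per-conduit sensitivity indices into symbology bins for a GIS layer. The conduit IDs, index values, and class thresholds are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: map sensitivity indices (in [0, 1]) computed for
# model parameters of individual conduits to predefined symbology classes.
def symbology_class(si, bins=(0.05, 0.2, 0.5)):
    """Assign a display class to a sensitivity index (thresholds assumed)."""
    labels = ("negligible", "low", "medium", "high")
    for label, upper in zip(labels, bins):
        if si < upper:
            return label
    return labels[-1]

# Invented per-conduit first-order indices for one model parameter.
conduit_si = {"C01": 0.02, "C02": 0.31, "C03": 0.74}

# The resulting attribute table column would drive the map symbology.
layer = {cid: symbology_class(si) for cid, si in conduit_si.items()}
```

In a real workflow the `layer` dictionary would be joined to the sewer-network geometry as an attribute column and rendered with graduated symbols in the GIS software.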

Author(s):  
Sarah C. Baxter ◽  
Philip A. Voglewede

Mathematical modeling is an important part of the engineering design cycle. Most models require application-specific input parameters that are established by calculation or experiment. The accuracy of model predictions depends on underlying model assumptions as well as on how uncertainty in knowledge of the parameters is transmitted through the mathematical structure of the model. Knowledge about the relative impact of individual parameters can help establish priorities in developing/choosing specific parameters and provide insight into the range of parameters that produce ‘equally good’ designs. In this work, Global Sensitivity Analysis (GSA) is examined as a technique that can contribute to this insight by developing Sensitivity Indices, a measure of relative importance, for each parameter. The approach is illustrated on a kinematic model of a metamorphic 4-bar mechanism. The model parameters are the lengths of the four links. The results of this probabilistic analysis highlight the synergy that must exist between all four link lengths to create a design that can follow the desired motion path. The impact of individual link lengths, however, rises and falls depending on where the mechanism is along its motion path.
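A minimal sketch of how a first-order Sensitivity Index can be estimated by Monte Carlo, using the pick-freeze (Saltelli-style) estimator on a toy additive test function rather than the 4-bar kinematic model itself:

```python
import random

def model(x1, x2):
    # Toy test function on [0,1]^2; x1 deliberately dominates the output.
    return 4.0 * x1 + x2

def first_order_sobol(i, n=100_000, seed=42):
    """Pick-freeze estimate of S_i: Cov(f(A), f(AB_i)) / Var(f(A))."""
    rng = random.Random(seed)
    A = [(rng.random(), rng.random()) for _ in range(n)]
    B = [(rng.random(), rng.random()) for _ in range(n)]
    yA = [model(*a) for a in A]
    # AB_i shares coordinate i with A, all other coordinates come from B.
    yAB = [model(*(a[j] if j == i else b[j] for j in range(2)))
           for a, b in zip(A, B)]
    mA = sum(yA) / n
    mAB = sum(yAB) / n
    var = sum((y - mA) ** 2 for y in yA) / n
    cov = sum(ya * yab for ya, yab in zip(yA, yAB)) / n - mA * mAB
    return cov / var

s1 = first_order_sobol(0)   # analytically 16/17 ~ 0.94
s2 = first_order_sobol(1)   # analytically 1/17  ~ 0.06
```

For this additive function the variance decomposes exactly (Var = 16/12 + 1/12), so the estimates can be checked against 16/17 and 1/17.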


2021 ◽  
Author(s):  
Sabine Bauer ◽  
Ivanna Kramer

Knowledge of the impact of structure-specific parameters on the biomechanical behavior of a computer model is essential for realistic modeling and model improvement. In particular, the biomechanical parameters of the intervertebral discs, the ligamentous structures, and the facet joints are regarded in the literature as significant components of a spine model that define the quality of the model. It is therefore important to understand how variations of the input parameters for these components affect the entire model and its individual structures. Sensitivity analysis can be used to gain the required knowledge about the correlation of the input and output variables in a complex spinal model. The present study analyses the influence of the biomechanical parameters of the intervertebral disc using different sensitivity analysis methods to optimize the spine model parameters. The analysis is performed with a multi-body simulation model of the cervical functional spinal unit C6-C7.
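One simple way to rank such input parameters is a one-at-a-time (OAT) local sensitivity sweep. The sketch below uses an invented stand-in response function and parameter names, not the C6-C7 multi-body model:

```python
# Hedged sketch: OAT relative sensitivity of a surrogate output to
# disc parameters. The response function and names are illustrative
# assumptions standing in for the multi-body simulation.
def response(params):
    # Stand-in for a simulation output (e.g., segmental rotation).
    return 10.0 / params["disc_stiffness"] + 0.1 * params["disc_height"]

base = {"disc_stiffness": 2.0, "disc_height": 5.0}

def oat_sensitivity(name, delta=0.01):
    """Normalized (elasticity-style) sensitivity: (dy/y) / (dx/x)."""
    y0 = response(base)
    perturbed = dict(base, **{name: base[name] * (1.0 + delta)})
    y1 = response(perturbed)
    return ((y1 - y0) / y0) / delta

# Rank parameters by absolute normalized sensitivity.
ranking = sorted(base, key=lambda p: -abs(oat_sensitivity(p)))
```

OAT sweeps ignore parameter interactions, which is exactly why the study compares several sensitivity analysis methods.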


Author(s):  
Souransu Nandi ◽  
Tarunraj Singh

The focus of this paper is on the global sensitivity analysis (GSA) of linear systems with time-invariant model parameter uncertainties and driven by stochastic inputs. The Sobol' indices of the evolving mean and variance estimates of states are used to assess the impact of the time-invariant uncertain model parameters and the statistics of the stochastic input on the uncertainty of the output. Numerical results on two benchmark problems illustrate that parameters which contribute little to the uncertainty of the mean can be extremely significant in contributing to the uncertainty of the variances. The paper uses a polynomial chaos (PC) approach to synthesize a surrogate probabilistic model of the stochastic system after using Lagrange interpolation polynomials (LIPs) as PC bases. The Sobol' indices are then directly evaluated from the PC coefficients. Although this concept is not new, a novel interpretation of stochastic collocation-based PC and intrusive PC is presented where they are shown to represent identical probabilistic models when the system under consideration is linear. This result now permits treating linear models as black boxes to develop intrusive PC surrogates.
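For an orthonormal PC basis, Sobol' indices do follow directly from the PC coefficients: the total variance is the sum of squared non-constant coefficients, and each index sums the squared coefficients of the relevant multi-indices. A sketch with made-up coefficients for a two-parameter expansion:

```python
# Sketch: Sobol' indices read off PC coefficients for an orthonormal
# basis. Multi-index (degree in x1, degree in x2) -> coefficient.
# The coefficient values are invented for illustration.
pce = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 1.0, (1, 1): 0.5}

# Var(Y) = sum of squared coefficients, excluding the constant term.
total_var = sum(c * c for a, c in pce.items() if any(a))

def first_order(i):
    # Terms whose multi-index involves ONLY variable i.
    num = sum(c * c for a, c in pce.items()
              if a[i] > 0 and all(d == 0 for j, d in enumerate(a) if j != i))
    return num / total_var

def total_order(i):
    # Terms whose multi-index involves variable i at all.
    num = sum(c * c for a, c in pce.items() if a[i] > 0)
    return num / total_var
```

With these coefficients, Var(Y) = 9 + 1 + 0.25 = 10.25, and the first-order and interaction indices sum to one, as the ANOVA decomposition requires.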


2021 ◽  
Author(s):  
Sabine M. Spiessl ◽  
Dirk-A. Becker ◽  
Sergei Kucherenko

<p>Due to their highly nonlinear, non-monotonic or even discontinuous behavior, sensitivity analysis of final repository models can be a demanding task. The output of repository models is typically distributed over several orders of magnitude and highly skewed, and many values of a probabilistic investigation are very low or even zero. Although this is desirable in view of repository safety, it can distort the findings of a sensitivity analysis. For the safety assessment of the system, the highest output values matter most, and if there are only a few of them, their dependence on specific parameters may appear insignificant. Applying a transformation weighs different model output values differently in the sensitivity analysis, according to their magnitude. Probabilistic methods of higher-order sensitivity analysis, applied to appropriately transformed model output values, provide a possibility for more robust identification of relevant parameters and their interactions. This type of sensitivity analysis is typically done by decomposing the total unconditional variance of the model output into partial variances corresponding to different terms in the ANOVA decomposition. From this, sensitivity indices of increasing order can be computed. The key indices used most often are the first-order index (SI1) and the total-order index (SIT). SI1 refers to the individual impact of one parameter on the model output, and SIT represents the total effect of one parameter on the output in interactions with all other parameters. The second-order sensitivity indices (SI2) describe the interactions between two model parameters.</p><p>In this work, global sensitivity analysis has been performed with three different kinds of output transformations (log, shifted and Box-Cox transformation) and two metamodeling approaches, namely the Random-Sampling High Dimensional Model Representation (RS-HDMR) [1] and the Bayesian Sparse PCE (BSPCE) [2] approaches. 
Both approaches are implemented in the SobolGSA software [3, 4] which was used in this work. We analyzed the time-dependent output with two approaches for sensitivity analysis, i.e., the pointwise and generalized approaches. With the pointwise approach, the output at each time step is analyzed independently. The generalized approach considers averaged output contributions at all previous time steps in the analysis of the current step. Obtained results indicate that robustness can be improved by using appropriate transformations and choice of coefficients for the transformation and the metamodel.</p><p>[1] M. Zuniga, S. Kucherenko, N. Shah (2013). Metamodelling with independent and dependent inputs. Computer Physics Communications, 184 (6): 1570-1580.</p><p>[2] Q. Shao, A. Younes, M. Fahs, T.A. Mara (2017). Bayesian sparse polynomial chaos expansion for global sensitivity analysis. Computer Methods in Applied Mechanics and Engineering, 318: 474-496.</p><p>[3] S. M. Spiessl, S. Kucherenko, D.-A. Becker, O. Zaccheus (2018). Higher-order sensitivity analysis of a final repository model with discontinuous behaviour. Reliability Engineering and System Safety, doi: https://doi.org/10.1016/j.ress.2018.12.004.</p><p>[4] SobolGSA software (2021). User manual https://www.imperial.ac.uk/process-systems-engineering/research/free-software/sobolgsa-software/.</p>
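As a side note to the output transformations discussed above, the Box-Cox transformation (with the log transform as its λ = 0 limiting case) compresses outputs that span several orders of magnitude. The λ value and output values below are illustrative; zero outputs would require the shifted variant.

```python
import math

# Sketch of the Box-Cox output transformation used to re-weight a
# highly skewed model output before sensitivity analysis.
def box_cox(y, lam):
    if lam == 0.0:
        return math.log(y)              # limiting case: log transform
    return (y ** lam - 1.0) / lam

# Outputs spanning nine orders of magnitude (invented values).
outputs = [1e-9, 1e-6, 1e-3, 1.0]
transformed = [box_cox(y, 0.0) for y in outputs]
```

After the log (λ = 0) transform, equal ratios in the raw output become equal distances, so low-probability high-consequence values no longer vanish next to the bulk of near-zero results.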


2011 ◽  
Vol 11 (9) ◽  
pp. 2567-2582 ◽  
Author(s):  
H. Roux ◽  
D. Labat ◽  
P.-A. Garambois ◽  
M.-M. Maubourguet ◽  
J. Chorda ◽  
...  

Abstract. A spatially distributed hydrological model, dedicated to flood simulation, is developed on the basis of physical process representation (infiltration, overland flow, channel routing). Estimation of model parameters requires data concerning topography, soil properties, vegetation and land use. Four parameters are calibrated for the entire catchment using one flood event. Model sensitivity to individual parameters is assessed using Monte-Carlo simulations. Results of this sensitivity analysis, with a criterion based on the Nash efficiency coefficient and the errors in peak time and runoff, are used to calibrate the model. This procedure is tested on the Gardon d'Anduze catchment, located in the Mediterranean zone of southern France. A first validation is conducted using three flood events with different hydrometeorological characteristics. This sensitivity analysis, along with the validation tests, illustrates the predictive capability of the model and points out possible improvements to the model's structure and parameterization for flash flood forecasting, especially in ungauged basins. Concerning the model structure, results show that water transfer through the subsurface zone also contributes to the hydrograph response to an extreme event, especially during the recession period. Maps of soil saturation emphasize the impact of rainfall and soil property variability on these dynamics. Adding a subsurface flow component in the simulation also greatly impacts the spatial distribution of soil saturation and shows the importance of the drainage network. Measurements of such distributed variables would help discriminate between different possible model structures.
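The Nash efficiency criterion mentioned above (the Nash-Sutcliffe efficiency, NSE) compares simulated and observed discharge against the observed mean. A minimal sketch with invented discharge series:

```python
# Sketch of the Nash-Sutcliffe efficiency used as a calibration
# criterion: NSE = 1 - SSE / variance-about-observed-mean.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Illustrative hydrograph values (m^3/s), not from the Gardon d'Anduze study.
obs = [5.0, 20.0, 80.0, 35.0, 10.0]
sim = [6.0, 18.0, 75.0, 38.0, 12.0]
score = nse(obs, sim)   # 1.0 would be a perfect fit
```

NSE equals 1 for a perfect fit, 0 for a simulation no better than the observed mean, and can be arbitrarily negative for poor fits, which is why it suits Monte-Carlo screening of parameter sets.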


2020 ◽  
Vol 34 (11) ◽  
pp. 1813-1830
Author(s):  
Daniel Erdal ◽  
Sinan Xiao ◽  
Wolfgang Nowak ◽  
Olaf A. Cirpka

Abstract Ensemble-based uncertainty quantification and global sensitivity analysis of environmental models require generating large ensembles of parameter-sets. This can already be difficult when analyzing moderately complex models based on partial differential equations because many parameter combinations cause an implausible model behavior even though the individual parameters are within plausible ranges. In this work, we apply Gaussian Process Emulators (GPE) as surrogate models in a sampling scheme. In an active-training phase of the surrogate model, we target the behavioral boundary of the parameter space before sampling this behavioral part of the parameter space more evenly by passive sampling. Active learning increases the subsequent sampling efficiency, but its additional costs pay off only for a sufficiently large sample size. We exemplify our idea with a catchment-scale subsurface flow model with uncertain material properties, boundary conditions, and geometric descriptors of the geological structure. We then perform a global sensitivity analysis of the resulting behavioral dataset using the active-subspace method, which requires approximating the local sensitivities of the target quantity with respect to all parameters at all sampled locations in parameter space. The Gaussian Process Emulator implicitly provides an analytical expression for this gradient, thus improving the accuracy of the active-subspace construction. When applying the GPE-based preselection, 70–90% of the samples were confirmed to be behavioral by running the full model, whereas only 0.5% of the samples were behavioral in standard Monte-Carlo sampling without preselection. The GPE method also provided local sensitivities at minimal additional costs.
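The preselection idea can be sketched as a screening loop: a cheap surrogate filters candidate parameter draws, and only predicted-behavioral candidates are passed to the expensive full model. Both functions below are invented stand-ins, not a trained Gaussian Process Emulator or the subsurface flow model:

```python
import random

# Toy sketch of surrogate-based preselection. The "behavioral" region
# and its surrogate approximation are illustrative assumptions.
def full_model_behavioral(x1, x2):
    return x1 + x2 < 0.5       # expensive plausibility check (stand-in)

def surrogate_behavioral(x1, x2):
    return x1 + x2 < 0.55      # cheap, slightly conservative screen

rng = random.Random(0)
candidates = [(rng.random(), rng.random()) for _ in range(10_000)]

# Only surrogate-approved candidates reach the full model.
pre = [c for c in candidates if surrogate_behavioral(*c)]
confirmed = [c for c in pre if full_model_behavioral(*c)]
hit_rate = len(confirmed) / len(pre)   # fraction confirmed behavioral
```

With the invented regions above the confirmation rate lands near the 70–90% range reported in the abstract, while plain Monte-Carlo sampling would waste most full-model runs on non-behavioral draws.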


Energies ◽  
2019 ◽  
Vol 12 (17) ◽  
pp. 3322 ◽  
Author(s):  
Marieline Senave ◽  
Staf Roels ◽  
Stijn Verbeke ◽  
Evi Lambie ◽  
Dirk Saelens

Recently, there has been an increasing interest in the development of an approach to characterize the as-built heat loss coefficient (HLC) of buildings based on a combination of on-board monitoring (OBM) and data-driven modeling. OBM is hereby defined as the monitoring of the energy consumption and interior climate of in-use buildings via non-intrusive sensors. The main challenge faced by researchers is the identification of the required input data and the appropriate data analysis techniques to assess the HLC of specific building types, with a certain degree of accuracy and/or within a budget constraint. A wide range of characterization techniques can be imagined, going from simplified steady-state models applied to smart energy meter data, to advanced dynamic analysis models identified on full OBM data sets that are further enriched with geometric info, survey results, or on-site inspections. This paper evaluates the extent to which these techniques result in different HLC estimates. To this end, it performs a sensitivity analysis of the characterization outcome for a case study dwelling. Thirty-five unique input data packages are defined using a tree structure. Subsequently, four different data analysis methods are applied on these sets: the steady-state average, Linear Regression and Energy Signature method, and the dynamic AutoRegressive with eXogenous input model (ARX). In addition to the sensitivity analysis, the paper compares the HLC values determined via OBM characterization to the theoretically calculated value, and explores the factors contributing to the observed discrepancies. The results demonstrate that deviations up to 26.9% can occur on the characterized as-built HLC, depending on the amount of monitoring data and prior information used to establish the interior temperature of the dwelling. The approach used to represent the internal and solar heat gains also proves to have a significant influence on the HLC estimate. The impact of the selected input data is higher than that of the applied data analysis method.
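The simplest of the four methods, the steady-state average, estimates the HLC as total heat input divided by the accumulated indoor-outdoor temperature difference. A minimal sketch with invented daily series:

```python
# Sketch of the steady-state "average" method for the heat loss
# coefficient: HLC = sum(heat input) / sum(interior - exterior temp).
# The daily values below are invented, not from the case study dwelling.
heat_input_w = [2400.0, 2600.0, 2500.0]   # mean heating + internal gains, W
t_interior = [20.5, 20.8, 20.2]           # deg C
t_exterior = [5.0, 4.0, 6.2]              # deg C

num = sum(heat_input_w)
den = sum(ti - te for ti, te in zip(t_interior, t_exterior))
hlc = num / den                           # W/K
```

How the interior temperature and the internal/solar gains entering `heat_input_w` are established is exactly the input-data choice the sensitivity analysis shows to dominate the HLC estimate.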


2019 ◽  
Vol 69 (1) ◽  
pp. 39-54 ◽  
Author(s):  
Mohammad Nazari-Sharabian ◽  
Masoud Taheriyoun ◽  
Moses Karakouzian

Abstract This study investigates the impact of different digital elevation model (DEM) resolutions on the topological attributes and simulated runoff, as well as the sensitivity of runoff parameters in the Mahabad Dam watershed in Iran. The watershed and streamlines were delineated in ArcGIS, and the hydrologic analyses were performed using the Soil and Water Assessment Tool (SWAT). The sensitivity analysis on runoff parameters was performed using the Sequential Uncertainty Fitting Ver. 2 (SUFI-2) algorithm in the SWAT Calibration and Uncertainty Procedures (SWAT-CUP) program. The results indicated that the sensitivity of runoff parameters, watershed surface area, and elevations changed under different DEM resolutions. As the distribution of slopes changed using different DEMs, surface parameters were most affected. Furthermore, higher amounts of runoff were generated when DEMs with finer resolutions were implemented. In comparison with the observed value of 8 m3/s at the watershed outlet, the 12.5 m DEM showed more realistic results (6.77 m3/s). Comparatively, the 12.5 m DEM generated 0.74% and 2.73% more runoff compared with the 30 and 90 m DEMs, respectively. The findings of this study indicate that in order to reduce computation time, researchers may use DEMs with coarser resolutions at the expense of minor decreases in accuracy.


2020 ◽  
Author(s):  
Joanna Doummar ◽  
Assaad H. Kassem

<p>Qualitative vulnerability assessment methods applied in karst aquifers rely on key factors in the hydrological compartments, usually assigned different weights according to their estimated impact on groundwater vulnerability. Based on an integrated numerical groundwater model of a snow-governed karst catchment area (Assal Spring, Lebanon), the aim of this work is to quantify the importance of the most influential parameters on recharge and spring discharge and to outline potential parameters that are not accounted for in standard methods, when in fact they do play a role in the intrinsic vulnerability of a system. The assessment of the model sensitivity and the ranking of parameters are conducted using an automatic calibration tool for local sensitivity analysis, in addition to a variance-based local sensitivity assessment of model output time series (recharge and discharge) for two consecutive years (2016-2017) with respect to various model parameters. The impact of each parameter was normalized to estimate standardized weights for each of the process-based key controlling parameters. Parameters to which the model was sensitive were 1) factors related to soil, 2) fast infiltration (bypass function) typical of karst aquifers, 3) climatic parameters (melting temperature and degree-day coefficient) and 4) aquifer hydraulic properties that play a major role in groundwater vulnerability, inducing a temporal effect and varied recession. Other less important parameters play different roles according to different assigned weights proportional to their ranking. Additionally, the effect of slope/geomorphology (e.g., dolines) was further investigated. In general, this study shows that the weighting coefficients assigned to key vulnerability factors in the qualitative assessment methods can be reevaluated based on this process-based approach.</p>
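The normalization step described above, turning ranked sensitivity indices into standardized weights for vulnerability factors, can be sketched as follows; the factor names and index values are invented for illustration:

```python
# Sketch: normalize sensitivity indices into weights that sum to 1,
# analogous to the weighting coefficients of qualitative vulnerability
# assessment methods. Values below are illustrative assumptions.
raw_si = {
    "soil_factors": 0.42,
    "bypass_infiltration": 0.30,
    "melting_temperature": 0.15,
    "hydraulic_properties": 0.08,
}

total = sum(raw_si.values())
weights = {name: si / total for name, si in raw_si.items()}
```

The resulting weights preserve the parameter ranking from the sensitivity analysis while being directly comparable to the weighting coefficients used in standard qualitative methods.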

