Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

2011 ◽  
Vol 11 (23) ◽  
pp. 12253-12273 ◽  
Author(s):  
L. A. Lee ◽  
K. S. Carslaw ◽  
K. J. Pringle ◽  
G. W. Mann ◽  
D. V. Spracklen

Abstract. Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process emulation is shown to be an efficient and useful technique for quantifying parameter sensitivity in complex global atmospheric models.
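Below is a minimal sketch, in Python, of the workflow this abstract describes: a Gaussian process emulator is trained on a small Latin hypercube design and then used, in place of the expensive 3-D model, to estimate variance-based (first-order and total) sensitivity indices. The toy_model() stand-in, the parameter count and all numerical choices are hypothetical and are not taken from the paper.

```python
# Sketch: GP emulator on a Latin hypercube design, then Sobol-type indices from the emulator.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

rng = np.random.default_rng(0)
d = 8  # number of uncertain parameters (as in the CCN example)

def toy_model(x):
    """Hypothetical stand-in for one expensive model output (e.g. CCN at one location)."""
    return 5.0 * x[:, 0] + x[:, 1] ** 2 + 2.0 * x[:, 2] * x[:, 3] + 0.1 * x[:, 4:].sum(axis=1)

# 1. Latin hypercube space-filling design and a small number of "model runs"
design = qmc.LatinHypercube(d=d, seed=1).random(n=80)
y_train = toy_model(design)

# 2. Gaussian process emulator of the model output over parameter space
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * Matern(length_scale=np.ones(d), nu=2.5),
    normalize_y=True,
)
gp.fit(design, y_train)

# 3. Variance-based indices from the emulator (standard Saltelli/Jansen estimators)
N = 10_000
A, B = rng.random((N, d)), rng.random((N, d))
fA, fB = gp.predict(A), gp.predict(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                               # A with column i taken from B
    fABi = gp.predict(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var_y            # first-order index
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var_y      # total-order index
    print(f"parameter {i}: S1 = {S1:.2f}, ST = {ST:.2f}")
```

The estimators in the loop require many evaluations, which is affordable only because the cheap emulator, not the full aerosol model, is evaluated at the Monte Carlo points.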

2011 ◽  
Vol 11 (7) ◽  
pp. 20433-20485 ◽  
Author(s):  
L. A. Lee ◽  
K. S. Carslaw ◽  
K. Pringle ◽  
G. W. Mann ◽  
D. V. Spracklen

Abstract. Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process emulation is shown to be an efficient and useful technique for quantifying parameter sensitivity in complex global atmospheric models.


2021 ◽  
Author(s):  
Tamsin Edwards

The land ice contribution to global mean sea level rise has not yet been predicted with ice sheet and glacier models for the latest set of socio-economic scenarios (SSPs), nor with coordinated exploration of uncertainties arising from the various computer models involved. Two recent international projects (ISMIP6 and GlacierMIP) generated a large suite of projections using multiple models, but mostly used previous generation scenarios and climate models, and could not fully explore known uncertainties.

Here we estimate probability distributions for these projections for the SSPs using Gaussian Process emulation of the ice sheet and glacier model ensembles. We model the sea level contribution as a function of global mean surface air temperature forcing and (for the ice sheets) model parameters, with the 'nugget' allowing for multi-model structural uncertainty. Approximate independence of ice sheet and glacier models is assumed, because a given model responds very differently under different setups (such as initialisation).

We find that limiting global warming to 1.5 °C would halve the land ice contribution to 21st century sea level rise, relative to current emissions pledges: the median decreases from 25 to 13 cm sea level equivalent (SLE) by 2100. However, the Antarctic contribution does not show a clear response to emissions scenario, due to competing processes of increasing ice loss and snowfall accumulation in a warming climate.

However, under risk-averse (pessimistic) assumptions for climate and Antarctic ice sheet model selection and ice sheet model parameter values, Antarctic ice loss could be five times higher, increasing the median land ice contribution to 42 cm SLE under current policies and pledges, with the 95th percentile exceeding half a metre even under 1.5 °C warming.

Gaussian Process emulation can therefore be a powerful tool for estimating probability density functions from multi-model ensembles and testing the sensitivity of the results to assumptions.
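A minimal sketch (not the ISMIP6/GlacierMIP emulator itself) of the central idea above: a Gaussian process with a white-noise "nugget" term is fitted to an ensemble of (temperature forcing, sea-level contribution) pairs, and the predictive distribution at a chosen warming level yields a probability distribution for the contribution. All ensemble numbers below are synthetic.

```python
# Sketch: GP emulation of a multi-model ensemble with a nugget for structural uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical ensemble: each entry is one (model, scenario) projection at 2100
gsat = rng.uniform(1.0, 5.0, size=60)                    # warming above pre-industrial (deg C)
sle = 2.0 + 6.0 * gsat + rng.normal(0.0, 3.0, size=60)   # sea level equivalent (cm), synthetic

# The WhiteKernel is the 'nugget': it absorbs multi-model structural scatter
kernel = ConstantKernel() * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(gsat.reshape(-1, 1), sle)

# Probability distribution of the 2100 contribution at a given warming level;
# the predictive spread returned by sklearn includes the fitted nugget variance.
for warming in (1.5, 2.0, 4.0):
    mean, std = gp.predict(np.array([[warming]]), return_std=True)
    draws = rng.normal(mean[0], std[0], size=10_000)
    p5, p50, p95 = np.percentile(draws, [5, 50, 95])
    print(f"{warming:.1f} °C: median {p50:.1f} cm SLE (5-95 %: {p5:.1f}-{p95:.1f})")
```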


2021 ◽  
Author(s):  
Sabine M. Spiessl ◽  
Dirk-A. Becker ◽  
Sergei Kucherenko

Due to their highly nonlinear, non-monotonic or even discontinuous behavior, sensitivity analysis of final repository models can be a demanding task. Most of the output of repository models is typically distributed over several orders of magnitude and highly skewed. In a probabilistic investigation, many output values are very low or even zero. Although this is desirable in view of repository safety, it can distort the evidence of sensitivity analysis. For the safety assessment of the system, the highest output values are the most relevant, and if there are only a few of them, their dependence on specific parameters may appear insignificant. Applying a transformation weighs model output values differently in the sensitivity analysis, according to their magnitude. Probabilistic methods of higher-order sensitivity analysis, applied to appropriately transformed model output values, provide a possibility for more robust identification of relevant parameters and their interactions. This type of sensitivity analysis is typically done by decomposing the total unconditional variance of the model output into partial variances corresponding to different terms in the ANOVA decomposition. From this, sensitivity indices of increasing order can be computed. The key indices used most often are the first-order index (SI1) and the total-order index (SIT). SI1 refers to the individual impact of one parameter on the model output and SIT represents the total effect of one parameter on the output in interactions with all other parameters. The second-order sensitivity indices (SI2) describe the interactions between two model parameters.

In this work, global sensitivity analysis has been performed with three different kinds of output transformation (log, shifted and Box-Cox transformation) and two metamodeling approaches, namely the Random-Sampling High Dimensional Model Representation (RS-HDMR) [1] and the Bayesian Sparse PCE (BSPCE) [2] approaches. Both approaches are implemented in the SobolGSA software [3, 4], which was used in this work. We analyzed the time-dependent output with two approaches for sensitivity analysis, i.e., the pointwise and generalized approaches. With the pointwise approach, the output at each time step is analyzed independently. The generalized approach considers averaged output contributions at all previous time steps in the analysis of the current step. The results obtained indicate that robustness can be improved by an appropriate choice of transformation, transformation coefficients and metamodel.

[1] M. Zuniga, S. Kucherenko, N. Shah (2013). Metamodelling with independent and dependent inputs. Computer Physics Communications, 184 (6): 1570-1580.

[2] Q. Shao, A. Younes, M. Fahs, T.A. Mara (2017). Bayesian sparse polynomial chaos expansion for global sensitivity analysis. Computer Methods in Applied Mechanics and Engineering, 318: 474-496.

[3] S. M. Spiessl, S. Kucherenko, D.-A. Becker, O. Zaccheus (2018). Higher-order sensitivity analysis of a final repository model with discontinuous behaviour. Reliability Engineering and System Safety, doi: https://doi.org/10.1016/j.ress.2018.12.004.

[4] SobolGSA software (2021). User manual: https://www.imperial.ac.uk/process-systems-engineering/research/free-software/sobolgsa-software/.
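The sketch below illustrates, on a synthetic zero-inflated output, why the transformation step discussed above matters: a crude binning estimator of the first-order index is computed on the raw, shifted-log and shifted Box-Cox transformed output, and the apparent importance of the inputs changes with the weighting. The toy model, the shift constant and the estimator are hypothetical illustrations and do not reproduce the RS-HDMR/BSPCE analyses in SobolGSA.

```python
# Sketch: output transformations re-weight a skewed output before variance-based GSA.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(0)
n = 20_000
x1, x2 = rng.random(n), rng.random(n)

# Toy "repository-like" output: mostly tiny values, a few spanning many orders of magnitude
y = np.where(x1 > 0.95, 10.0 ** (6.0 * x2), 1e-12 * rng.random(n))

def first_order_index(x, y, bins=20):
    """Crude first-order index: Var over bins of E[Y | X in bin], divided by Var(Y)."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

transforms = {
    "raw": y,
    "shifted log10": np.log10(y + 1e-12),     # shift handles zero / near-zero outputs
    "shifted Box-Cox": boxcox(y + 1e-12)[0],
}
for name, yt in transforms.items():
    print(f"{name:>16}: S1(x1) = {first_order_index(x1, yt):.2f}, "
          f"S1(x2) = {first_order_index(x2, yt):.2f}")
```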


2020 ◽  
Vol 34 (11) ◽  
pp. 1813-1830 ◽  
Author(s):  
Daniel Erdal ◽  
Sinan Xiao ◽  
Wolfgang Nowak ◽  
Olaf A. Cirpka

Abstract Ensemble-based uncertainty quantification and global sensitivity analysis of environmental models require generating large ensembles of parameter sets. This can already be difficult when analyzing moderately complex models based on partial differential equations, because many parameter combinations cause implausible model behavior even though the individual parameters are within plausible ranges. In this work, we apply Gaussian Process Emulators (GPE) as surrogate models in a sampling scheme. In an active-training phase of the surrogate model, we target the behavioral boundary of the parameter space before sampling this behavioral part of the parameter space more evenly by passive sampling. Active learning increases the subsequent sampling efficiency, but its additional costs pay off only for a sufficiently large sample size. We exemplify our idea with a catchment-scale subsurface flow model with uncertain material properties, boundary conditions, and geometric descriptors of the geological structure. We then perform a global sensitivity analysis of the resulting behavioral dataset using the active-subspace method, which requires approximating the local sensitivities of the target quantity with respect to all parameters at all sampled locations in parameter space. The Gaussian Process Emulator implicitly provides an analytical expression for this gradient, thus improving the accuracy of the active-subspace construction. When applying the GPE-based preselection, 70–90% of the samples were confirmed to be behavioral by running the full model, whereas only 0.5% of the samples were behavioral in standard Monte Carlo sampling without preselection. The GPE method also provided local sensitivities at minimal additional costs.
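A minimal sketch, under simplifying assumptions, of the two steps described above: (1) a Gaussian process emulator preselects parameter sets whose predicted diagnostic falls in a behavioral band, and (2) the analytical gradient of the emulator's posterior mean (written here for an RBF kernel) feeds the active-subspace matrix. The behavioral_metric() function, the behavioral band and all numbers are hypothetical.

```python
# Sketch: GPE-based behavioral preselection plus active-subspace construction from GP gradients.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
d = 5

def behavioral_metric(x):
    """Toy stand-in for an expensive model diagnostic (e.g. simulated outflow)."""
    return x[:, 0] + 0.5 * x[:, 1] * x[:, 2] - 0.2 * x[:, 3] ** 2

# Train the emulator on a small space-filling design (the active-training phase is omitted here)
X_train = qmc.LatinHypercube(d=d, seed=1).random(100)
gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=0.5))
gp.fit(X_train, behavioral_metric(X_train))

# (1) Preselect candidates whose emulated diagnostic lies in a hypothetical behavioral band
candidates = rng.random((50_000, d))
pred = gp.predict(candidates)
behavioral = candidates[(pred > 0.3) & (pred < 1.0)]

# (2) Analytical gradient of the GP posterior mean for an RBF kernel:
#     m(x) = sum_i alpha_i k(x, x_i),  grad m(x) = sum_i alpha_i k(x, x_i) (x_i - x) / l^2
c = gp.kernel_.k1.constant_value
l = gp.kernel_.k2.length_scale
alpha = np.ravel(gp.alpha_)

def gp_mean_gradient(x):
    diff = gp.X_train_ - x                                       # (n_train, d)
    k = c * np.exp(-0.5 * np.sum(diff ** 2, axis=1) / l ** 2)    # (n_train,)
    return (alpha * k) @ diff / l ** 2                           # (d,)

grads = np.array([gp_mean_gradient(x) for x in behavioral[:2000]])
C = grads.T @ grads / len(grads)             # active-subspace matrix (mean outer product)
eigvals, eigvecs = np.linalg.eigh(C)
print("leading active direction:", eigvecs[:, -1].round(2))
```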


2020 ◽  
Author(s):  
Joanna Doummar ◽  
Assaad H. Kassem

Qualitative vulnerability assessment methods applied in karst aquifers rely on key factors in the hydrological compartments, usually assigned different weights according to their estimated impact on groundwater vulnerability. Based on an integrated numerical groundwater model of a snow-governed karst catchment area (Assal Spring, Lebanon), the aim of this work is to quantify the importance of the most influential parameters on recharge and spring discharge and to outline potential parameters that are not accounted for in standard methods, when in fact they do play a role in the intrinsic vulnerability of a system. The assessment of model sensitivity and the ranking of parameters are conducted using an automatic calibration tool for local sensitivity analysis, in addition to a variance-based local sensitivity assessment of model output time series (recharge and discharge) for two consecutive years (2016-2017) with respect to various model parameters. The impact of each parameter was normalized to estimate standardized weights for each of the process-based key controlling parameters. The parameters to which the model was sensitive were 1) factors related to soil, 2) fast infiltration (bypass function) typical of karst aquifers, 3) climatic parameters (melting temperature and degree-day coefficient), and 4) aquifer hydraulic properties, which play a major role in groundwater vulnerability by inducing a temporal effect and varied recession. Other, less important parameters play different roles according to different assigned weights proportional to their ranking. Additionally, the effect of slope/geomorphology (e.g., dolines) was further investigated. In general, this study shows that the weighting coefficients assigned to key vulnerability factors in the qualitative assessment methods can be reevaluated based on this process-based approach.
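A minimal sketch of how standardized weights can be derived from local sensitivities of a simulated time series, as described above: each parameter is perturbed one at a time, the relative change of the output series is averaged, and the impacts are normalized to weights. The spring_discharge() toy function, parameter names and values are hypothetical and unrelated to the actual Assal Spring model.

```python
# Sketch: one-at-a-time local sensitivity of a discharge time series, normalized to weights.
import numpy as np

def spring_discharge(params, t):
    """Toy stand-in for the simulated spring discharge time series."""
    soil, bypass, ddf, k_aquifer = params
    recharge = ddf * np.maximum(np.sin(2 * np.pi * t / 365.0), 0.0) * (1 - soil) + bypass
    return recharge * np.exp(-k_aquifer * (t % 365) / 365.0)

t = np.arange(730)                                # two years, daily steps
base = np.array([0.3, 0.1, 4.0, 1.5])             # hypothetical calibrated values
names = ["soil storage", "bypass (fast infiltration)", "degree-day factor", "aquifer K"]

q0 = spring_discharge(base, t)
impacts = []
for i in range(len(base)):
    pert = base.copy()
    pert[i] *= 1.10                               # +10 % one-at-a-time perturbation
    qi = spring_discharge(pert, t)
    impacts.append(np.mean(np.abs(qi - q0) / (np.abs(q0) + 1e-12)))

weights = np.array(impacts) / np.sum(impacts)     # standardized weights summing to one
for name, w in zip(names, weights):
    print(f"{name:>28}: weight {w:.2f}")
```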


2020 ◽  
Author(s):  
Monica Riva ◽  
Aronne Dell'Oca ◽  
Alberto Guadagnini

Modern models of environmental and industrial systems have reached a relatively high level of complexity. This complexity can hamper an unambiguous understanding of the functioning of a model, i.e., how it drives relationships and dependencies among inputs and outputs of interest. Sensitivity analysis tools can be employed to examine this issue.

Global sensitivity analysis (GSA) approaches rest on the evaluation of sensitivity across the entire support within which system model parameters are supposed to vary. In this broad context, it is important to note that the definition of a sensitivity metric must be linked to the nature of the question(s) the GSA is meant to address. These include, for example: (i) which are the most important model parameters with respect to given model output(s)?; (ii) could we set some parameter(s) (thus assisting model calibration) at prescribed value(s) without significantly affecting model results?; (iii) at which space/time locations can one expect the highest sensitivity of model output(s) to model parameters, and/or knowledge of which parameter(s) could be most beneficial for model calibration?

The variance-based Sobol' indices (e.g., Sobol, 2001) represent one of the most widespread GSA metrics, quantifying the average reduction in the variance of a model output stemming from knowledge of the input. Amongst other techniques, Dell'Oca et al. (2017) proposed a moment-based GSA approach which enables one to quantify the influence of uncertain model parameters on the (statistical) moments of a target model output.

Here, we embed in these sensitivity indices the effect of uncertainties both in the system model conceptualization and in the ensuing model(s) parameters. The study is grounded on the observation that physical processes and the natural systems within which they take place are complex, rendering target state variables amenable to multiple interpretations and mathematical descriptions. As such, predictions and uncertainty analyses based on a single model formulation can result in statistical bias and possible misrepresentation of the total uncertainty, thus justifying the assessment of multiple model system conceptualizations. We then introduce copula-based sensitivity metrics which allow characterizing the global (with respect to the input) value of the sensitivity and the degree of variability (across the whole range of the input values) of the sensitivity for each value that the prescribed model output can possibly take, as driven by a governing model. In this sense, such an approach to sensitivity is global with respect to model input(s) and local with respect to model output, thus enabling one to discriminate the relevance of an input across the entire range of values of the modeling goal of interest. The methodology is demonstrated in the context of flow and reactive transport scenarios.

References

Sobol, I. M., 2001. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math. Comput. Sim., 55, 271-280.

Dell'Oca, A., Riva, M., Guadagnini, A., 2017. Moment-based metrics for global sensitivity analysis of hydrological systems. Hydr. Earth Syst. Sci., 21, 6219-6234.
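For reference, the two families of metrics mentioned above can be written compactly as follows: the first line gives the standard first-order and total Sobol' indices, and the second gives one common form of a moment-based index written for the mean, in the spirit of Dell'Oca et al. (2017); the exact notation used by the authors may differ.

```latex
% First-order and total Sobol' indices (standard definitions)
\[
  S_i \;=\; \frac{\operatorname{Var}\!\big[\operatorname{E}(Y \mid X_i)\big]}{\operatorname{Var}(Y)},
  \qquad
  S_{T_i} \;=\; 1 - \frac{\operatorname{Var}\!\big[\operatorname{E}(Y \mid \mathbf{X}_{\sim i})\big]}{\operatorname{Var}(Y)}.
\]
% A moment-based index for the mean: average absolute shift of the conditional mean,
% normalized by the unconditional mean
\[
  \mathrm{AMAE}_i \;=\; \frac{1}{\lvert \operatorname{E}(Y) \rvert}\,
  \operatorname{E}_{X_i}\!\Big[\big\lvert \operatorname{E}(Y) - \operatorname{E}(Y \mid X_i)\big\rvert\Big].
\]
```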


PLoS ONE ◽  
2015 ◽  
Vol 10 (6) ◽  
pp. e0130252 ◽  
Author(s):  
Eugene T Y Chang ◽  
Mark Strong ◽  
Richard H Clayton

2021 ◽  
Author(s):  
Leonardo Sandoval ◽  
Monica Riva ◽  
Ivo Colombo ◽  
Alberto Guadagnini

Methane is recognized as a potential energy source in the transition to carbon-free energies. Appropriate modeling approaches to quantify methane migration in low-permeability geomaterials can assist the appraisal of the feasibility of a methane recovery project. Wu et al. (2016) proposed a model enabling one to estimate the total mass flow rate of the gas as the sum of key processes, including (i) a surface diffusion component and two weighted bulk diffusion components, (ii) slip flow, and (iii) Knudsen diffusion. In its isothermal form, and taking the pressure gradient as boundary condition, the model relies on 10 parameters. These are typically estimated through laboratory-scale experiments. Considering the mechanisms involved, such experiments are costly and time demanding, and their results are prone to uncertainty. The latter is also related to the intrinsic difficulties linked to replicating operational field conditions at the laboratory scale, as well as to the desired transferability of results to heterogeneous field-scale settings. Due to our still incomplete knowledge of the key mechanisms driving gas movement in low-permeability geomaterials and the complexities associated with the estimation of model parameters, model outputs should be carefully analyzed considering all possible sources of uncertainty. In this sense, sensitivity analysis approaches may be used to enhance the quality of parameter estimation workflows, by focusing efforts on the parameters with the highest influence on target model outputs. We rely on two typical global sensitivity analysis approaches (the variance-based Sobol' approach and the Morris method) to analyze the behavior of the aforementioned gas migration model targeting low-permeability media. Because of the complexity of the physical processes represented in the model and the typical frequency distributions of pore size in caprocks, the sensitivity analysis is performed in two differing settings, each corresponding to a given range of variability of characteristic pore sizes. When considering porous systems with pore sizes ranging between 2 and 100 nanometers, results based on the Sobol' indices identify (in decreasing order of importance) pore radius, porosity, pore pressure, and tortuosity as the parameters whose uncertainty significantly imprints model output uncertainty. Similar results are obtained through the analysis of the Morris indices, which identify the pore radius as the parameter with the highest contribution to non-linear (or interaction) effects on the model output. For tighter porous media (i.e., with pore sizes between 2 and 10 nanometers), the Sobol' indices identify (in decreasing order of importance) pore pressure, porosity, blockage/migration ratio of adsorbed molecules, and pore radius as the most influential model parameters. The role of the blockage/migration ratio of adsorbed molecules suggests that surface diffusion is a dominant gas transport mechanism in these scenarios. The Morris approach identifies the same parameters as important, albeit in a different order of importance.

References

Wu, K., Chen, Z., Li, X., Guo, C., Wei, M., 2016. A model for multiple transport mechanisms through nanopores of shale gas reservoirs with real gas effect-adsorption-mechanic coupling. International Journal of Heat and Mass Transfer 93, 408-426. doi: 10.1016/j.ijheatmasstransfer.2015.10.003
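A minimal sketch of a Morris-type elementary-effects screening, the second of the two GSA approaches mentioned above, using a toy stand-in for the gas-transport model rather than the Wu et al. (2016) equations; the parameter names, ranges and the simple radial one-at-a-time design are hypothetical choices.

```python
# Sketch: Morris-style elementary effects (mu* for importance, sigma for non-linearity/interactions).
import numpy as np

rng = np.random.default_rng(0)
names = ["pore_radius", "porosity", "pore_pressure", "tortuosity",
         "blockage_migration_ratio"]                      # hypothetical subset of the 10 parameters
d = len(names)

def gas_flow(x):
    """Toy stand-in for the nanopore gas-transport model (not the Wu et al. formulation)."""
    r, phi, p, tau, kappa = x
    return phi * p * r ** 2 / tau + kappa * np.sqrt(p) * r

# Radial one-at-a-time design: base points, each perturbed in one parameter at a time
n_base, delta = 200, 0.2
ee = np.zeros((n_base, d))
for j in range(n_base):
    base = rng.uniform(0.1, 0.9, size=d)
    f0 = gas_flow(base)
    for i in range(d):
        x = base.copy()
        x[i] += delta
        ee[j, i] = (gas_flow(x) - f0) / delta             # elementary effect of parameter i

mu_star = np.abs(ee).mean(axis=0)    # mean absolute elementary effect (overall importance)
sigma = ee.std(axis=0)               # spread of effects: non-linearity / interactions
for n, m, s in sorted(zip(names, mu_star, sigma), key=lambda item: -item[1]):
    print(f"{n:>26}: mu* = {m:.2f}, sigma = {s:.2f}")
```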


2018 ◽  
Author(s):  
Ksenia Aleksankina ◽  
Stefan Reis ◽  
Massimo Vieno ◽  
Mathew R. Heal

Abstract. Atmospheric chemistry transport models (ACTMs) are extensively used to provide scientific support for the development of policies to mitigate the detrimental effects of air pollution on human health and ecosystems. Therefore, it is essential to quantitatively assess the level of model uncertainty and to identify the model input parameters that contribute the most to the uncertainty. For complex process-based models, such as ACTMs, uncertainty and global sensitivity analyses are still challenging and are often limited by computational constraints due to the requirement of a large number of model runs. In this work, we demonstrate an emulator-based approach to uncertainty quantification and variance-based sensitivity analysis for the EMEP4UK model (regional application of the European Monitoring and Evaluation Programme Meteorological Synthesizing Centre-West). A separate Gaussian process emulator was used to estimate model predictions at unsampled points in the space of the uncertain model inputs for every modelled grid cell. The training points for the emulator were chosen using an optimised Latin hypercube sampling design. The uncertainties in surface concentrations of O3, NO2, and PM2.5 were propagated from the uncertainties in the anthropogenic emissions of NOx, SO2, NH3, VOC, and primary PM2.5 reported by the UK National Atmospheric Emissions Inventory. The results of the EMEP4UK uncertainty analysis for the annually averaged model predictions indicate that modelled surface concentrations of O3, NO2, and PM2.5 have the highest level of uncertainty in the grid cells comprising urban areas (up to ±7 %, ±9 %, and ±9 %, respectively). The uncertainty in the surface concentrations of O3 and NO2 were dominated by uncertainties in NOx emissions combined from non-dominant sectors (i.e. all sectors excluding energy production and road transport) and shipping emissions. Additionally, uncertainty in O3 was driven by uncertainty in VOC emissions combined from sectors excluding solvent use. Uncertainties in the modelled PM2.5 concentrations were mainly driven by uncertainties in primary PM2.5 emissions and NH3 emissions from the agricultural sector. Uncertainty and sensitivity analyses were also performed for five selected grid cells for monthly averaged model predictions to illustrate the seasonal change in the magnitude of uncertainty and change in the contribution of different model inputs to the overall uncertainty. Our study demonstrates the viability of a Gaussian process emulator-based approach for uncertainty and global sensitivity analyses, which can be applied to other ACTMs. Conducting these analyses helps to increase the confidence in model predictions. Additionally, the emulators created for these analyses can be used to predict the ACTM response for any other combination of perturbed input emissions within the ranges set for the original Latin hypercube sampling design without the need to re-run the ACTM, thus allowing fast exploratory assessments at significantly reduced computational costs.


2019 ◽  
Vol 19 (5) ◽  
pp. 2881-2898 ◽  
Author(s):  
Ksenia Aleksankina ◽  
Stefan Reis ◽  
Massimo Vieno ◽  
Mathew R. Heal

Abstract. Atmospheric chemistry transport models (ACTMs) are extensively used to provide scientific support for the development of policies to mitigate the detrimental effects of air pollution on human health and ecosystems. Therefore, it is essential to quantitatively assess the level of model uncertainty and to identify the model input parameters that contribute the most to the uncertainty. For complex process-based models, such as ACTMs, uncertainty and global sensitivity analyses are still challenging and are often limited by computational constraints due to the requirement of a large number of model runs. In this work, we demonstrate an emulator-based approach to uncertainty quantification and variance-based sensitivity analysis for the EMEP4UK model (regional application of the European Monitoring and Evaluation Programme Meteorological Synthesizing Centre-West). A separate Gaussian process emulator was used to estimate model predictions at unsampled points in the space of the uncertain model inputs for every modelled grid cell. The training points for the emulator were chosen using an optimised Latin hypercube sampling design. The uncertainties in surface concentrations of O3, NO2, and PM2.5 were propagated from the uncertainties in the anthropogenic emissions of NOx, SO2, NH3, VOC, and primary PM2.5 reported by the UK National Atmospheric Emissions Inventory. The results of the EMEP4UK uncertainty analysis for the annually averaged model predictions indicate that modelled surface concentrations of O3, NO2, and PM2.5 have the highest level of uncertainty in the grid cells comprising urban areas (up to ±7 %, ±9 %, and ±9 %, respectively). The uncertainty in the surface concentrations of O3 and NO2 were dominated by uncertainties in NOx emissions combined from non-dominant sectors (i.e. all sectors excluding energy production and road transport) and shipping emissions. Additionally, uncertainty in O3 was driven by uncertainty in VOC emissions combined from sectors excluding solvent use. Uncertainties in the modelled PM2.5 concentrations were mainly driven by uncertainties in primary PM2.5 emissions and NH3 emissions from the agricultural sector. Uncertainty and sensitivity analyses were also performed for five selected grid cells for monthly averaged model predictions to illustrate the seasonal change in the magnitude of uncertainty and change in the contribution of different model inputs to the overall uncertainty. Our study demonstrates the viability of a Gaussian process emulator-based approach for uncertainty and global sensitivity analyses, which can be applied to other ACTMs. Conducting these analyses helps to increase the confidence in model predictions. Additionally, the emulators created for these analyses can be used to predict the ACTM response for any other combination of perturbed input emissions within the ranges set for the original Latin hypercube sampling design without the need to rerun the ACTM, thus allowing for fast exploratory assessments at significantly reduced computational costs.
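A minimal sketch of the per-grid-cell emulation and uncertainty propagation strategy described above: one Gaussian process emulator per grid cell maps perturbed emission scalings to a surface concentration, and Monte Carlo sampling of hypothetical inventory uncertainties through the emulators gives cell-by-cell uncertainty estimates. The response surface, scaling ranges and per-cell weights below are synthetic, not EMEP4UK output.

```python
# Sketch: one GP emulator per grid cell, then Monte Carlo propagation of emission uncertainties.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
emissions = ["NOx", "SO2", "NH3", "VOC", "PM2.5_primary"]
d, n_cells = len(emissions), 4

def actm_response(scalings, cell):
    """Toy stand-in for an annual-mean surface concentration response in one grid cell."""
    w = np.array([[0.4, 0.2, 0.3, 0.1, 0.8],
                  [0.2, 0.1, 0.5, 0.1, 0.6],
                  [0.6, 0.3, 0.2, 0.2, 0.9],
                  [0.1, 0.1, 0.1, 0.1, 0.4]])    # synthetic per-cell sensitivities
    return 5.0 + scalings @ w[cell]

# Training runs at Latin hypercube points (a scaling of 1.0 = unperturbed inventory)
design = 0.7 + 0.6 * qmc.LatinHypercube(d=d, seed=1).random(60)   # scalings in 0.7-1.3
emulators = []
for cell in range(n_cells):
    gp = GaussianProcessRegressor(Matern(length_scale=np.ones(d), nu=2.5),
                                  normalize_y=True).fit(design, actm_response(design, cell))
    emulators.append(gp)

# Propagate hypothetical inventory uncertainties (here +/-20 %) through the emulators
samples = rng.uniform(0.8, 1.2, size=(5_000, d))
for cell, gp in enumerate(emulators):
    conc = gp.predict(samples)
    half_range = 0.5 * (np.percentile(conc, 97.5) - np.percentile(conc, 2.5))
    print(f"cell {cell}: annual mean {conc.mean():.2f}, "
          f"uncertainty +/- {100 * half_range / conc.mean():.1f} %")
```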

