On the Need to Determine Accurately the Impact of Higher-Order Sensitivities on Model Sensitivity Analysis, Uncertainty Quantification and Best-Estimate Predictions

Energies ◽  
2021 ◽  
Vol 14 (19) ◽  
pp. 6318
Author(s):  
Dan Gabriel Cacuci

This work aims at underscoring the need for the accurate quantification of the sensitivities (i.e., functional derivatives) of the results (a.k.a. “responses”) produced by large-scale computational models with respect to the models’ parameters, which are seldom known perfectly in practice. The large impact that can arise from sensitivities of order higher than first has been highlighted by the results of a third-order sensitivity and uncertainty analysis of an OECD/NEA reactor physics benchmark, which is briefly reviewed in this work to underscore that neglecting the higher-order sensitivities causes substantial errors in predicting the expectation and variance of model responses. The importance of accurately computing the higher-order sensitivities is further highlighted by presenting a textbook analytical example from the field of neutron transport, which demonstrates that neglecting the higher-order response sensitivities would lead to substantial errors in predicting the moments (expectation, variance, skewness, kurtosis) of the model response’s distribution in the phase space of model parameters. The incorporation of response sensitivities in methodologies for uncertainty quantification, data adjustment and predictive modeling currently available for nuclear engineering systems is also reviewed. The fundamental conclusion highlighted by this work is that confidence intervals and tolerance limits on results predicted by models that only employ first-order sensitivities are likely to provide a false sense of confidence, unless such models also demonstrate quantitatively that the second- and higher-order sensitivities provide negligibly small contributions to the respective tolerance limits and confidence intervals. The high-order response sensitivities to parameters underlying large-scale models can be computed most accurately and most efficiently by employing the high-order comprehensive adjoint sensitivity analysis methodology, which overcomes the curse of dimensionality that hampers other methods when applied to large-scale models involving many parameters.
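As an illustration of the point made above, the following sketch (not from the paper; the toy response, nominal values, and covariance are invented) propagates Gaussian parameter uncertainty through a nonlinear response using first-order and second-order Taylor expansions and compares both against brute-force Monte Carlo. The second-order (Hessian) terms visibly shift the predicted expectation and variance.

```python
import numpy as np

# Toy nonlinear "response" standing in for a large-scale model output; the
# model, nominal values, and covariance below are illustrative only.
def response(p):
    return np.exp(-p[0]) * (1.0 + 0.5 * p[1] ** 2)

p0 = np.array([1.0, 0.8])        # nominal parameter values
cov = np.diag([0.04, 0.09])      # assumed (uncorrelated) parameter covariance

# First- and second-order sensitivities by central finite differences; for
# many-parameter models the adjoint methodology referenced above computes
# these far more efficiently.
eps = 1e-4
n = p0.size
grad = np.zeros(n)
hess = np.zeros((n, n))
for i in range(n):
    ei = np.zeros(n); ei[i] = eps
    grad[i] = (response(p0 + ei) - response(p0 - ei)) / (2 * eps)
    for j in range(n):
        ej = np.zeros(n); ej[j] = eps
        hess[i, j] = (response(p0 + ei + ej) - response(p0 + ei - ej)
                      - response(p0 - ei + ej) + response(p0 - ei - ej)) / (4 * eps ** 2)

r0 = response(p0)

# First-order propagation: E[R] ~ R(p0), Var[R] ~ g^T C g
mean_1st, var_1st = r0, grad @ cov @ grad

# Second-order corrections for Gaussian parameters:
# E[R] += 1/2 tr(H C),  Var[R] += 1/2 tr(H C H C)
mean_2nd = r0 + 0.5 * np.trace(hess @ cov)
var_2nd = var_1st + 0.5 * np.trace(hess @ cov @ hess @ cov)

# Brute-force Monte Carlo reference
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(p0, cov, size=200_000)
vals = response(samples.T)       # response() works column-wise on the samples
print(f"Monte Carlo: mean={vals.mean():.4f}  var={vals.var():.5f}")
print(f"1st order:   mean={mean_1st:.4f}  var={var_1st:.5f}")
print(f"2nd order:   mean={mean_2nd:.4f}  var={var_2nd:.5f}")
```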

2010 ◽  
Vol 67 (3) ◽  
pp. 834-850 ◽  
Author(s):  
Cara-Lyn Lappen ◽  
David Randall ◽  
Takanobu Yamaguchi

Abstract In 2001, the authors presented a higher-order mass-flux model called “assumed distributions with higher-order closure” (ADHOC 1), which represents the large eddies of the planetary boundary layer (PBL) in terms of an assumed joint distribution of the vertical velocity and scalars. In a subsequent version (ADHOC 2) the authors incorporated vertical momentum fluxes and second moments involving pressure perturbations into the framework. These versions of ADHOC, as well as all other higher-order closure models, are not suitable for use in large-scale models because of the high vertical and temporal resolution that is required. This high resolution is needed mainly because higher-order closure (HOC) models must resolve discontinuities at the PBL top, which can occur anywhere on a model’s Eulerian vertical grid. This paper reports the development of ADHOC 3, in which the computational cost of the model is reduced by introducing the PBL depth as an explicit prognostic variable. ADHOC 3 uses a stretched vertical coordinate that is attached to the PBL top. The discontinuous jumps at the PBL top are “hidden” in the layer edge that represents the PBL top. This new HOC model can use much coarser vertical resolution and a longer time step and is thus suitable for use in large-scale models. To predict the PBL depth, an entrainment parameterization is needed. In the development of the model, the authors have been led to a new view of the old problem of entrainment parameterization. The relatively detailed information available in the HOC model is used to parameterize the entrainment rate. The present approach thus borrows ideas from mixed-layer modeling to create a new, more economical type of HOC model that is better suited for use as a parameterization in large-scale models.
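The following toy sketch uses invented numbers and a generic mixed-layer entrainment closure, not the ADHOC 3 equations, to illustrate the two ingredients described above: a prognostic PBL depth driven by an entrainment rate, and a vertical grid whose layer edges follow the PBL top so that the inversion jump always sits on an edge.

```python
import numpy as np

# Hypothetical illustration only; closure constants and forcing are invented.

def entrainment_rate(w_star, h, delta_b, A=0.2):
    # Generic convective mixed-layer closure: w_e = A * w*^3 / (h * delta_b)
    return A * w_star**3 / (h * delta_b)

def step_pbl_depth(h, w_star, delta_b, w_subsidence, dt):
    # Prognostic PBL depth: dh/dt = entrainment + large-scale vertical motion
    return h + dt * (entrainment_rate(w_star, h, delta_b) + w_subsidence)

def stretched_edges(h, n_pbl=8, n_free=6, z_top=3000.0):
    # n_pbl layers stretched between the surface and the current PBL top,
    # n_free layers above it, so the discontinuity at z = h is always "hidden"
    # on a layer edge.
    below = np.linspace(0.0, h, n_pbl + 1)
    above = np.linspace(h, z_top, n_free + 1)[1:]
    return np.concatenate([below, above])

h = 800.0                      # initial PBL depth [m]
for _ in range(24):            # one day with a 1-hour step
    h = step_pbl_depth(h, w_star=1.2, delta_b=0.02, w_subsidence=-0.002, dt=3600.0)
edges = stretched_edges(h)
print(f"PBL depth after 24 h: {h:.1f} m; layer edge at PBL top: {edges[8]:.1f} m")
```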


2020 ◽  
Author(s):  
Lucie Pheulpin ◽  
Vito Bacchi

Hydraulic models are increasingly used to assess flooding hazard. However, all numerical models are affected by uncertainties related to model parameters, which can be quantified through Uncertainty Quantification (UQ) and Global Sensitivity Analysis (GSA). In traditional UQ and GSA methods, the input parameters of the numerical models are considered independent, which is rarely the case in practice. The objective of this work is to carry out UQ and GSA with dependent inputs and to compare different methodologies. To our knowledge, there is no such application in the field of 2D hydraulic modelling.

First, the uncertain parameters of the hydraulic model are classified into groups of dependent parameters. It is then necessary to define the copulas that best represent these groups. Finally, UQ and GSA based on copulas are performed. The proposed methodology is applied to a large-scale 2D hydraulic model of the Loire River. However, as the model is computationally expensive, we used a meta-model instead of the initial model. We compared the results obtained with the traditional UQ and GSA methods (i.e., without taking the dependencies between inputs into account) with those obtained with the new copula-based methods. The results show that the dependence between inputs should not always be neglected in UQ and GSA.
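A minimal sketch of the copula idea, under purely illustrative assumptions (the marginals, the rank correlation, and the toy water-level surrogate below are invented, not the Loire model): dependent inputs are sampled through a Gaussian copula, pushed through a surrogate, and the resulting statistics are compared with the independent-inputs assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000

# Assumed correlation between a friction coefficient and an inflow discharge
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Gaussian copula: correlated standard normals -> uniforms -> chosen marginals
z = rng.multivariate_normal(np.zeros(2), corr, size=n)
u = stats.norm.cdf(z)
strickler = stats.norm.ppf(u[:, 0], loc=30.0, scale=3.0)            # friction
discharge = stats.gumbel_r.ppf(u[:, 1], loc=2000.0, scale=400.0)    # inflow [m^3/s]

# Toy surrogate standing in for the 2D hydraulic model / meta-model
water_level = 2.0 + 0.002 * discharge - 0.03 * strickler + rng.normal(0, 0.05, n)

# Same surrogate under the "independent inputs" assumption (one margin permuted)
wl_indep = 2.0 + 0.002 * discharge - 0.03 * rng.permutation(strickler) + rng.normal(0, 0.05, n)

print(f"dependent inputs:   mean={water_level.mean():.2f}  95th pct={np.percentile(water_level, 95):.2f}")
print(f"independent inputs: mean={wl_indep.mean():.2f}  95th pct={np.percentile(wl_indep, 95):.2f}")
```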


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Raju Pathak ◽  
Sandeep Sahany ◽  
Saroj K. Mishra

Abstract Using uncertainty quantification techniques, we carry out a sensitivity analysis of a large number (17) of parameters used in the NCAR CAM5 cloud parameterization schemes. The LLNL PSUADE software is used to identify the most sensitive parameters. Using the Morris One-At-a-Time (MOAT) method, we find that the simulations of global annual mean total precipitation, convective and large-scale precipitation, cloud fractions (total, low, mid, and high), shortwave cloud forcing, longwave cloud forcing, sensible heat flux, and latent heat flux are very sensitive to the threshold relative humidity for stratiform low clouds (rhminl) and the auto-conversion size threshold for ice to snow (dcs). A seasonal and regime-specific dependence of some parameters in the simulation of precipitation is also found for the global monsoons and storm-track regions. Through sensitivity analysis, we find that the Somali jet strength and the tropical easterly jet associated with the South Asian summer monsoon (SASM) show a systematic dependence on dcs and rhminl. The timing of the withdrawal of the SASM over India shows a monotonic increase (delayed withdrawal) with an increase in dcs. Overall, we find that rhminl, dcs, ai, and as are the most sensitive cloud parameters and thus are of high priority in the model tuning process, in order to reduce uncertainty in the simulation of past, present, and future climate.
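For readers unfamiliar with the screening method used here, the sketch below applies Morris One-At-a-Time elementary effects to a toy three-parameter function; the actual study perturbs 17 CAM5 cloud parameters through LLNL PSUADE rather than this closed-form stand-in.

```python
import numpy as np

def model(x):
    # Toy response standing in for, e.g., global-mean precipitation; x in [0, 1]^k
    return 3.0 * x[0] + 10.0 * x[1] ** 2 + 0.1 * x[2] + x[0] * x[1]

k = 3                          # number of parameters (17 in the study)
p = 4                          # number of grid levels
delta = p / (2.0 * (p - 1))    # standard Morris step
r = 50                         # number of OAT trajectories
rng = np.random.default_rng(0)

effects = [[] for _ in range(k)]
for _ in range(r):
    # Random base point on the level grid, then perturb one factor at a time
    x = rng.integers(0, p - 1, size=k) / (p - 1)
    y0 = model(x)
    for i in rng.permutation(k):
        x_new = x.copy()
        x_new[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
        y1 = model(x_new)
        effects[i].append((y1 - y0) / (x_new[i] - x[i]))   # elementary effect
        x, y0 = x_new, y1

# mu* (mean absolute elementary effect) ranks parameter importance;
# sigma flags nonlinearity and interactions.
for i in range(k):
    ee = np.array(effects[i])
    print(f"x{i}: mu*={np.abs(ee).mean():6.2f}  sigma={ee.std():6.2f}")
```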


2017 ◽  
Vol 27 (2) ◽  
pp. 239-250 ◽  
Author(s):  
Malgorzata Kardynska ◽  
Jaroslaw Smieja

Abstract The paper focuses on sensitivity analysis of large-scale models of biological systems that describe the dynamics of so-called signaling pathways. These systems are continuous in time, but their models are based on discrete-time measurements. Therefore, if sensitivity analysis is used as a tool supporting model development and the evaluation of model quality, it should take this fact into account. Such models are usually very complex and include many parameters that are difficult to estimate experimentally. Changes in many of those parameters have little effect on model dynamics, and they are therefore called sloppy. In contrast, other parameters, when changed, lead to substantial changes in model responses, and these are called stiff parameters. While this is a well-known fact, and there are methods to discern sloppy parameters from stiff ones, they have not so far been used to create parameter rankings or to quantify the influence of single-parameter changes on system time responses. Single-parameter changes are particularly important in the analysis of signaling pathways, because they may pinpoint parameters associated with processes to be targeted at the molecular level in laboratory experiments. In the paper we present a new, original method of creating parameter rankings, based on the Hessian of a cost function that describes the fit of the model to discrete experimental data. Its application is illustrated with simple dynamical systems representing two typical dynamics exhibited by signaling pathways.
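A minimal sketch of the general idea, assuming a toy two-parameter production-degradation model and a simple least-squares cost over discrete sampling times (the authors' actual method and models may differ): the finite-difference Hessian of the cost at the fitted parameters is used to rank parameters from stiff to sloppy.

```python
import numpy as np
from scipy.integrate import solve_ivp

t_meas = np.linspace(0.0, 10.0, 21)          # discrete sampling times
theta_true = np.array([0.7, 0.15])           # (production rate, degradation rate)

def simulate(theta):
    rhs = lambda t, x: [theta[0] - theta[1] * x[0]]   # dx/dt = k_prod - k_deg * x
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0], t_eval=t_meas, rtol=1e-9, atol=1e-12)
    return sol.y[0]

data = simulate(theta_true)                  # noiseless pseudo-measurements

def cost(theta):
    return 0.5 * np.sum((simulate(theta) - data) ** 2)

# Finite-difference Hessian of the cost at the fitted parameters
eps = 1e-3
n = theta_true.size
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        ei = np.zeros(n); ei[i] = eps
        ej = np.zeros(n); ej[j] = eps
        H[i, j] = (cost(theta_true + ei + ej) - cost(theta_true + ei - ej)
                   - cost(theta_true - ei + ej) + cost(theta_true - ei - ej)) / (4 * eps**2)

# Large curvature = stiff parameter, small curvature = sloppy parameter
eigvals, _ = np.linalg.eigh(H)
ranking = np.argsort(-np.diag(H))
print("diagonal curvature:", np.diag(H))
print("stiff -> sloppy parameter order:", ranking)
print("eigenvalue spread (stiff/sloppy):", eigvals.max() / eigvals.min())
```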


Author(s):  
Yufeng Xia ◽  
Jun Zhang ◽  
Tingsong Jiang ◽  
Zhiqiang Gong ◽  
Wen Yao ◽  
...  

Abstract Quantifying predictive uncertainty in deep neural networks is a challenging and as yet unsolved problem. Existing quantification approaches fall into two lines. Bayesian methods provide a complete uncertainty quantification theory but are often not scalable to large-scale models. Along another line, non-Bayesian methods have good scalability and can quantify uncertainty with high quality. The most remarkable idea in this line is Deep Ensemble, but it is limited in practice by its expensive computational cost. We therefore propose HatchEnsemble to improve the efficiency and practicality of Deep Ensemble. The main idea is to use function-preserving transformations, ensuring that the HatchNets inherit the knowledge learned by a single model called the SeedNet. This process is called hatching, and a HatchNet is obtained by progressively widening the SeedNet. Based on our method, two different hatches are proposed, for ensembling networks with the same architecture and with different architectures, respectively. To ensure the diversity of the models, we also add random noise to the parameters during hatching. Experiments on both clean and corrupted datasets show that HatchEnsemble gives competitive prediction performance and better-calibrated uncertainty quantification in a shorter time than the baselines.
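The sketch below illustrates a function-preserving widening step in the spirit of Net2WiderNet, which is the kind of transformation the abstract describes for hatching a wider HatchNet from a SeedNet; the paper's exact procedure may differ, and all sizes and weights here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# SeedNet: x -> relu(x @ W1 + b1) @ W2 + b2, hidden width 4
d_in, d_hidden, d_out = 3, 4, 2
W1 = rng.normal(size=(d_in, d_hidden)); b1 = rng.normal(size=d_hidden)
W2 = rng.normal(size=(d_hidden, d_out)); b2 = rng.normal(size=d_out)

def widen(W1, b1, W2, new_width, noise=0.0):
    # Choose which existing hidden unit each new unit replicates
    old_width = W1.shape[1]
    mapping = np.concatenate([np.arange(old_width),
                              rng.integers(0, old_width, new_width - old_width)])
    counts = np.bincount(mapping, minlength=old_width)
    # Incoming weights/biases are copied; outgoing weights are split among the
    # replicas so the layer output is unchanged. Small noise breaks the symmetry
    # between replicas (used for ensemble diversity in the abstract).
    W1_new = W1[:, mapping] + noise * rng.normal(size=(W1.shape[0], new_width))
    b1_new = b1[mapping]
    W2_new = W2[mapping, :] / counts[mapping][:, None]
    return W1_new, b1_new, W2_new

def forward(x, W1, b1, W2, b2):
    return relu(x @ W1 + b1) @ W2 + b2

x = rng.normal(size=(5, d_in))
y_seed = forward(x, W1, b1, W2, b2)
W1w, b1w, W2w = widen(W1, b1, W2, new_width=8, noise=0.0)
y_hatch = forward(x, W1w, b1w, W2w, b2)
print("max |SeedNet - widened HatchNet| =", np.abs(y_seed - y_hatch).max())  # ~0
```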


2016 ◽  
Vol 136 (5) ◽  
pp. 484-496 ◽  
Author(s):  
Yusuke Udagawa ◽  
Kazuhiko Ogimoto ◽  
Takashi Oozeki ◽  
Hideaki Ohtake ◽  
Takashi Ikegami ◽  
...  

2020 ◽  
Vol 31 (6) ◽  
pp. 681-689
Author(s):  
Jalal Mirakhorli ◽  
Hamidreza Amindavar ◽  
Mojgan Mirakhorli

Abstract Functional magnetic resonance imaging, a neuroimaging technique used in studies of brain disorders and dysfunction, has been improved in recent years by mapping the topology of brain connections, known as connectopic mapping. Because healthy and unhealthy brain regions and functions differ only slightly, studying the complex topology of the functional and structural networks in the human brain is complicated, given the growing number of evaluation measures. One application of irregular-graph deep learning is the analysis of human cognitive functions related to gene expression and the associated distributed spatial patterns. Since a variety of brain solutions can be dynamically held in the neuronal networks of the brain with different activity patterns and functional connectivity, both node-centric and graph-centric tasks are involved in this application. In this study, we used an individual generative model and high-order graph analysis to recognize regions of interest with abnormal connections during the performance of certain tasks and in the resting state, and to decompose irregular observations. Accordingly, a high-order Variational Graph Autoencoder framework with a Gaussian distribution is proposed, in which a Generative Adversarial Network is employed to optimize the latent space while learning strong non-rigid graphs from large-scale data. Furthermore, the possible modes of correlation in abnormal brain connections are distinguished. Our goal was to find the degree of correlation between the affected regions and their simultaneous occurrence over time. This can be exploited to diagnose brain diseases, or to characterize the ability of the nervous system to modify brain topology and exhibit plasticity in response to input stimuli. In this study, we focused in particular on Alzheimer’s disease.
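As a concrete reference for the core building block named above, the following minimal Variational Graph Autoencoder with a Gaussian latent space (dense toy adjacency, illustrative sizes and training loop; the paper's high-order variant and GAN-based latent optimization are not reproduced) encodes a small graph and reconstructs its adjacency through an inner-product decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    def __init__(self, n_feat, n_hid, n_lat):
        super().__init__()
        self.lin0 = nn.Linear(n_feat, n_hid)
        self.lin_mu = nn.Linear(n_hid, n_lat)
        self.lin_logvar = nn.Linear(n_hid, n_lat)

    def encode(self, x, a_norm):
        h = F.relu(a_norm @ self.lin0(x))          # one GCN-style propagation
        return a_norm @ self.lin_mu(h), a_norm @ self.lin_logvar(h)

    def forward(self, x, a_norm):
        mu, logvar = self.encode(x, a_norm)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        adj_logits = z @ z.t()                     # inner-product decoder
        return adj_logits, mu, logvar

def normalize_adj(adj):
    # Symmetric normalization of A + I, as in a standard GCN
    a = adj + torch.eye(adj.shape[0])
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

# Toy functional-connectivity graph: 20 "regions", random features and edges
torch.manual_seed(0)
n = 20
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0)
x = torch.randn(n, 8)
a_norm = normalize_adj(adj)

model = VGAE(n_feat=8, n_hid=16, n_lat=4)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(200):
    opt.zero_grad()
    logits, mu, logvar = model(x, a_norm)
    recon = F.binary_cross_entropy_with_logits(logits, adj)   # adjacency reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + kl
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```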

