A Hybrid Qualitative/Quantitative Uncertainty Importance Assessment Approach: Applications to Thermal-Hydraulics System Codes Calculations

Author(s):  
Mohammad Pourgol-Mohammad ◽  
Kamran Sepanloo

Uncertainty importance is used to rank sources of uncertainty in the input variables by their degree of contribution to the uncertainty of the output variable(s). Such a ranking is used to plan the reduction of epistemic uncertainty in the output variable(s). In Thermal-Hydraulic (TH) calculations involving uncertainty with the RELAP5, TRAC, and TRACE codes, uncertainty ranking can be used to confirm the results of the Phenomena Identification and Ranking Table (PIRT) utilized in the available uncertainty quantification methodologies. Several methodologies have been developed to address uncertainty importance assessment, but the existing ones are impractical for some applications, such as certain Thermal-Hydraulic calculations, because of the computational time and resources they require. A new, efficient uncertainty importance ranking method is proposed as part of a broader research effort by the authors toward comprehensive TH code uncertainty assessment. Given the computational complexity of the TH codes, the proposed uncertainty importance measure is defined as the change in the standard deviation of an output variable (such as a Figure of Merit) resulting from a change of a given input parameter or variable by multiples of its standard deviation (xσ). The total uncertainty range resulting from the propagation of uncertainties is obtained from one of several available methodologies, e.g., CSAU, GRS, UMAE, and the recently proposed integrated methodology IMTHUA, proposed by the author. Assessing the non-linearity of the output response to some input changes presents difficulties that require special treatment; different levels of input change (multiples of the standard deviation) are therefore devised for accurate ranking of the uncertainty contributors. Expressing the resulting output change as a fraction of the overall uncertainty range yields a ranking index that shows the contribution of each uncertainty source. In this paper, a brief overview of importance analysis, as well as of the difference between uncertainty importance and importance uncertainty, is given first. Current methodologies for uncertainty importance are then reviewed and their applicability to TH analysis is discussed. Finally, the proposed methodology is described, along with an example of its application to the uncertain parameters of the LOFT LBLOCA.
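To make the ranking index concrete, the following minimal Python sketch illustrates one way the measure described above could be computed; the function name run_th_code and the choice of perturbation levels are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch of the ranking index: perturb each uncertain input by
    # multiples of its standard deviation, record the shift in the output figure
    # of merit (FOM), and normalize by the overall uncertainty range obtained
    # from a separate propagation study (e.g., CSAU, GRS, UMAE, or IMTHUA).
    def importance_ranking(base_inputs, sigmas, overall_range, run_th_code,
                           multiples=(1.0, 2.0, 3.0)):
        """Return a {parameter: ranking index} map, sorted by importance."""
        base_fom = run_th_code(base_inputs)
        index = {}
        for name, sigma in sigmas.items():
            shifts = []
            for k in multiples:                    # several xσ levels to expose
                perturbed = dict(base_inputs)      # non-linear output response
                perturbed[name] = base_inputs[name] + k * sigma
                shifts.append(abs(run_th_code(perturbed) - base_fom))
            # fraction of the total uncertainty range attributable to this input
            index[name] = max(shifts) / overall_range
        return dict(sorted(index.items(), key=lambda kv: -kv[1]))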

2021 ◽  
Author(s):  
Grigorios Lavrentiadis ◽  
Norman A. Abrahamson ◽  
Nicolas M. Kuehn

Abstract A new non-ergodic ground-motion model (GMM) for effective amplitude spectrum (EAS) values for California is presented in this study. The EAS, defined in Goulet et al. (2018), is a smoothed, rotation-independent Fourier amplitude spectrum of the two horizontal components of an acceleration time history. The main motivation for developing a non-ergodic EAS GMM, rather than a spectral-acceleration GMM, is that the scaling of EAS does not depend on spectral shape, and therefore the more frequent small-magnitude events can be used in the estimation of the non-ergodic terms. The model is developed using the California subset of the NGAWest2 dataset (Ancheta et al., 2013). The Bayless and Abrahamson (2019b) (BA18) ergodic EAS GMM was used as a backbone to constrain the average source, path, and site scaling. The non-ergodic GMM is formulated as a Bayesian hierarchical model: the non-ergodic source and site terms are modeled as spatially varying coefficients following the approach of Landwehr et al. (2016), and the non-ergodic path effects are captured by cell-specific anelastic attenuation following the approach of Dawood and Rodriguez-Marek (2013). Close to stations and past events, the mean values of the non-ergodic terms deviate from zero to capture the systematic effects, and their epistemic uncertainty is small. In areas with sparse data, the epistemic uncertainty of the non-ergodic terms is large, as the systematic effects cannot be determined. The non-ergodic total aleatory standard deviation is approximately 30 to 40% smaller than the total aleatory standard deviation of BA18. This reduction in the aleatory variability has a significant impact on hazard calculations at large return periods. The epistemic uncertainty of the ground-motion predictions is small in areas close to stations and past events.
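As a rough illustration of the cell-specific anelastic attenuation idea referred to above, the Python sketch below sums each cell's attenuation coefficient over the path length traversed in that cell; the variable names and the commented decomposition are assumptions for illustration, not the study's code.

    import numpy as np

    # Minimal sketch of cell-specific anelastic attenuation: the source-to-site
    # path is broken into cells, and the non-ergodic path term is the sum of each
    # cell's attenuation coefficient times the path length inside that cell. In
    # the study the coefficients are estimated within the Bayesian hierarchy.
    def cell_path_term(cell_lengths_km, cell_coeffs):
        """Non-ergodic anelastic path term (in ln units) for one record."""
        lengths = np.asarray(cell_lengths_km, dtype=float)  # km traversed per cell
        coeffs = np.asarray(cell_coeffs, dtype=float)       # ln-unit attenuation per km
        return float(np.sum(lengths * coeffs))

    # Schematic decomposition (assumed notation):
    # ln_EAS = ln_EAS_backbone + dSource(x_event) + dSite(x_station) + cell_path_term(L, c)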


2020 ◽  
Vol 30 (1) ◽  
pp. 5-17
Author(s):  
Abdurrahman Coskun ◽  
Wytze P. Oosterhuis

Uncertainty is an inseparable part of all types of measurement. Recently, the International Organization for Standardization (ISO) released a new standard (ISO 20914) on how to calculate measurement uncertainty (MU) in laboratory medicine. This standard can be regarded as the beginning of a new era in laboratory medicine. Measurement uncertainty comprises various components that are combined to obtain the total uncertainty. All components must be expressed as standard deviations (SD) before they are combined. However, the characteristics of these components are not the same; some are already expressed as an SD, while others are expressed as a ± b, such as the purity of reagents. All non-SD variables must be transformed into SDs, which requires detailed knowledge of the common statistical distributions used in the calculation of MU. Here, the main statistical distributions used in MU calculation are briefly summarized.
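As an illustration of turning "± a" specifications into standard uncertainties and combining them, the short Python sketch below applies the usual GUM-style divisors; the example values and the choice of a rectangular distribution for the purity term are assumptions for illustration.

    import math

    # Convert common non-SD specifications into standard uncertainties and
    # combine uncorrelated components as the root sum of squares.
    def u_rectangular(a):      # "x ± a" with no stated distribution (e.g., purity)
        return a / math.sqrt(3)

    def u_triangular(a):       # "x ± a" when values near the centre are more likely
        return a / math.sqrt(6)

    def combined_standard_uncertainty(components):
        return math.sqrt(sum(u ** 2 for u in components))

    # Example: an imprecision component already given as an SD (0.8) combined
    # with a purity specification of ± 0.5 treated as rectangular.
    u_total = combined_standard_uncertainty([0.8, u_rectangular(0.5)])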


2011 ◽  
Vol 11 (3) ◽  
pp. 9607-9633
Author(s):  
J. Tonttila ◽  
E. J. O'Connor ◽  
S. Niemelä ◽  
P. Räisänen ◽  
H. Järvinen

Abstract. The statistics of cloud-base vertical velocity simulated by the non-hydrostatic mesoscale model AROME are compared with Cloudnet remote sensing observations at two locations: the ARM SGP site in Central Oklahoma, and the DWD observatory at Lindenberg, Germany. The results show that, as expected, AROME significantly underestimates the variability of vertical velocity at cloud base compared with the observations at their nominal resolution; the standard deviation of vertical velocity in the model is typically 4–6 times smaller than observed, and even more so during winter at Lindenberg. Averaging the observations to the horizontal scale corresponding to the physical grid spacing of AROME (2.5 km) explains 70–80% of the underestimation by the model. Further horizontal averaging of the observations is required to match the model values for the standard deviation of vertical velocity. This indicates an effective horizontal resolution for the AROME model of at least 4 times the physically defined grid spacing. The results illustrate the need for special treatment of sub-grid-scale variability of vertical velocities in kilometer-scale atmospheric models if processes such as aerosol-cloud interactions are to be included in the future.
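As a rough sketch of the averaging step described above (assumed variable names, not the study's processing chain), vertically pointing observations can be block-averaged over the time it takes the wind to advect one model grid length past the site before the standard deviations are compared:

    import numpy as np

    # Block-average a cloud-base vertical-velocity time series to an effective
    # horizontal scale and compare its variability with the native resolution.
    def grid_scale_std(w, dt_s, wind_speed_ms, grid_length_m=2500.0):
        w = np.asarray(w, dtype=float)
        window = max(1, int(round(grid_length_m / (wind_speed_ms * dt_s))))  # samples per grid box
        n_blocks = len(w) // window
        w_avg = w[:n_blocks * window].reshape(n_blocks, window).mean(axis=1)
        return w.std(), w_avg.std()   # native-resolution vs. grid-scale standard deviation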


Author(s):  
Mohit Kumar

Recently, a new fuzzy fault tree analysis (FFTA) approach has been developed to propagate and quantify the epistemic uncertainties that occur in qualitative data such as expert opinions or judgments. It is well known that fuzzy arithmetic operations based on the weakest triangular norm (Tw) preserve the shape of fuzzy numbers, provide more exact fuzzy results, and effectively reduce the uncertainty range. The objective of this paper is to develop a novel Tw-based fuzzy importance measure to identify the critical basic events in FFTA. The proposed approach is demonstrated by applying it to a case study to identify the critical components of Group 1 of the U.S. Combustion Engineering Reactor Protection System (CERPS). The obtained results are then compared with the results computed by well-known existing importance measures of both conventional fault tree analysis and FFTA. The computed results confirm that the proposed Tw-based importance measure is able to identify the critical basic events in FFTA in a more exact way.
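To illustrate why Tw-based arithmetic keeps the uncertainty range narrow, the small Python sketch below compares weakest-t-norm addition of triangular fuzzy numbers, written as (mode, left spread, right spread), with ordinary fuzzy addition; the numerical values are hypothetical and the sketch is not the paper's implementation.

    # Under weakest t-norm (Tw) arithmetic, the spreads of a sum are commonly
    # taken as the maxima of the operands' spreads rather than their sums, so the
    # triangular shape is preserved and the result stays narrow.
    def tw_add(A, B):
        (a, la, ra), (b, lb, rb) = A, B
        return (a + b, max(la, lb), max(ra, rb))

    def standard_add(A, B):    # ordinary (min-norm) fuzzy addition, for comparison
        (a, la, ra), (b, lb, rb) = A, B
        return (a + b, la + lb, ra + rb)

    p1 = (1e-3, 2e-4, 2e-4)    # hypothetical basic-event possibilities
    p2 = (5e-4, 1e-4, 3e-4)
    print(tw_add(p1, p2))        # (0.0015, 0.0002, 0.0003)
    print(standard_add(p1, p2))  # (0.0015, 0.0003, 0.0005)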


1988 ◽  
Vol 78 (2) ◽  
pp. 855-862
Author(s):  
K. L. McLaughlin

Abstract The bootstrap procedure of Efron may be used with maximum-likelihood magnitude estimation to estimate the standard deviation of the event magnitude and the distribution standard deviation. The procedure resamples the observations randomly with replacement. The event magnitude, m, and the station magnitude distribution, σ, are then estimated for each random sampling of the observations. This generates a sequence of event magnitude estimates that are used to estimate the event magnitude standard error, σm, and the uncertainty in the distribution, σσ. Maximum-likelihood mb event magnitudes with uncertainties are provided for events at NTS, Amchitka, Tuamotu, Novaya Zemlya, and Eastern Kazakh. These magnitudes, based on WWSSN film-chip readings, illustrate the importance of nondetection for events with magnitudes mb < 5 and of clipping for events with magnitudes mb > 6.5. The uncertainties in the event magnitudes are found to be close to the uncertainty in the mean of the observed signals. The introduction of the maximum-likelihood procedure does not significantly improve the precision of the event magnitude estimate, and it may actually increase the estimated uncertainty with the introduction of censoring information. However, the maximum-likelihood bootstrap estimate is a more accurate estimate of the total uncertainty in the event magnitude.
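The resampling loop can be sketched in a few lines of Python; for brevity, a plain mean of station magnitudes stands in for the censored maximum-likelihood estimator used in the paper, so only the bootstrap logic itself is illustrated.

    import numpy as np

    # Bootstrap the event magnitude: resample the station readings with
    # replacement, re-estimate m for each resample, and take the spread of the
    # replicates as the standard error σm.
    def bootstrap_magnitude(station_mb, n_boot=1000, seed=None):
        rng = np.random.default_rng(seed)
        station_mb = np.asarray(station_mb, dtype=float)
        replicates = np.empty(n_boot)
        for i in range(n_boot):
            sample = rng.choice(station_mb, size=station_mb.size, replace=True)
            replicates[i] = sample.mean()        # placeholder for the ML estimate
        return replicates.mean(), replicates.std(ddof=1)   # m and σm

    m_hat, sigma_m = bootstrap_magnitude([5.4, 5.6, 5.5, 5.7, 5.3])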


Author(s):  
Bin Zhou ◽  
Bin Zi ◽  
Yishang Zeng ◽  
Weidong Zhu

Abstract An evidence-theory-based interval perturbation method (ETIPM) and an evidence-theory-based subinterval perturbation method (ETSPM) are presented for the kinematic uncertainty analysis of a dual cranes system (DCS) with epistemic uncertainty. A multiple evidence variable (MEV) model that consists of evidence variables with focal elements (FEs) and basic probability assignments (BPAs) is constructed. Based on evidence theory, an evidence-based kinematic equilibrium equation with the MEV model is equivalently transformed into several interval equations. In the ETIPM, the bounds of the luffing angular vector (LAV) with respect to every joint FE are calculated by integrating the first-order Taylor series expansion and the interval algorithm. The bounds of the expectation and variance of the LAV and the corresponding BPAs are calculated by using the evidence-based uncertainty quantification method. In the ETSPM, the subinterval perturbation method is introduced to decompose each original FE into several small subintervals. By comparing the results yielded by the ETIPM and ETSPM with those of the evidence-theory-based Monte Carlo method, numerical examples show that the accuracy and computational time of the ETSPM are higher than those of the ETIPM, and that the accuracy of the ETIPM and ETSPM can be significantly improved as the number of FEs and subintervals increases.
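As a schematic illustration of the two building blocks named above (a first-order Taylor interval bound over one focal element, and its subinterval refinement), the Python sketch below uses a generic response function f; the function names and the one-dimensional refinement are simplifications for illustration, not the authors' kinematic model.

    import numpy as np

    # First-order Taylor (interval perturbation) bounds of f over a box
    # x_mid +/- half_widths, and a subinterval refinement of a 1-D focal element.
    def interval_bounds(f, grad_f, x_mid, half_widths):
        x_mid = np.asarray(x_mid, dtype=float)
        half_widths = np.asarray(half_widths, dtype=float)
        spread = float(np.abs(np.asarray(grad_f(x_mid))) @ half_widths)
        y0 = f(x_mid)
        return y0 - spread, y0 + spread

    def subinterval_bounds(f, grad_f, lo, hi, n_sub=4):
        edges = np.linspace(lo, hi, n_sub + 1)
        pieces = [interval_bounds(f, grad_f, [(a + b) / 2.0], [(b - a) / 2.0])
                  for a, b in zip(edges[:-1], edges[1:])]
        return min(p[0] for p in pieces), max(p[1] for p in pieces)

    # Example with a nonlinear response: the subinterval split tightens the bounds.
    f = lambda x: float(np.sin(x[0]))
    g = lambda x: [np.cos(x[0])]
    print(interval_bounds(f, g, [0.75], [0.75]))   # single-interval bounds over [0, 1.5]
    print(subinterval_bounds(f, g, 0.0, 1.5))      # refined bounds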

