Probability Bounds Analysis Applied to the Sandia Verification and Validation Challenge Problem

Author(s):  
Aniruddha Choudhary
Ian T. Voyles
Christopher J. Roy
William L. Oberkampf
Mayuresh Patil

Our approach to the Sandia Verification and Validation Challenge Problem is to use probability bounds analysis (PBA), with probabilistic representations for aleatory uncertainties and interval representations for (most) epistemic uncertainties. The nondeterministic model predictions thus take the form of p-boxes: bounding cumulative distribution functions (CDFs) that contain all CDFs that could exist within the uncertainty bounds. The scarcity of experimental data provides little support for treating all uncertain inputs as purely aleatory and also precludes significant calibration of the models. We instead seek to estimate the model form uncertainty at the conditions where experimental data are available and then extrapolate this uncertainty to conditions where no data exist. The modified area validation metric (MAVM) is employed to estimate the model form uncertainty, which is important because the model involves significant simplifications (both geometric and physical in nature) of the true system. The results of the verification and validation processes are treated as additional interval-based uncertainties applied to the nondeterministic model predictions, from which the failure prediction is made. Based on the method employed, we estimate the probability of failure to be as large as 0.0034 and conclude that the tanks are unsafe.
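
As an illustration of the general workflow (not the authors' implementation), the following sketch builds a p-box by sweeping an interval-valued epistemic input over samples of an aleatory input and computes an area-type validation metric split into the regions where the data CDF lies above and below the model CDF, in the spirit of the MAVM. The response model, input distributions, and data values are all hypothetical.

```python
# Minimal sketch of a p-box plus an area-type validation metric (hypothetical values).
import numpy as np

rng = np.random.default_rng(0)

n_samples = 2000
aleatory = rng.normal(10.0, 1.0, n_samples)      # hypothetical aleatory input
epistemic_interval = (0.8, 1.2)                  # hypothetical epistemic coefficient

def model(a, e):
    # Placeholder system response quantity, not the challenge-problem model.
    return e * a + 2.0

# p-box: lower and upper bounding CDFs obtained by sweeping the epistemic interval
grid = np.linspace(5.0, 20.0, 400)
dx = grid[1] - grid[0]
cdfs = []
for e in np.linspace(*epistemic_interval, 11):
    y = np.sort(model(aleatory, e))
    cdfs.append(np.searchsorted(y, grid, side="right") / n_samples)
cdfs = np.asarray(cdfs)
p_lower, p_upper = cdfs.min(axis=0), cdfs.max(axis=0)

# Area-type metric at a validation condition, split into the regions where the
# empirical data CDF lies above (d_plus) and below (d_minus) the model CDF.
data = np.array([12.1, 12.9, 13.4])              # hypothetical sparse experiments
y_mid = np.sort(model(aleatory, np.mean(epistemic_interval)))
F_model = np.searchsorted(y_mid, grid, side="right") / n_samples
F_data = np.searchsorted(np.sort(data), grid, side="right") / data.size
diff = F_data - F_model
d_plus = np.sum(np.clip(diff, 0.0, None)) * dx
d_minus = np.sum(np.clip(-diff, 0.0, None)) * dx
width = np.interp(0.5, p_lower, grid) - np.interp(0.5, p_upper, grid)
print(f"p-box width at the median: {width:.2f}")
print(f"d+ = {d_plus:.3f}, d- = {d_minus:.3f}")
```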

2018
Vol 140 (6)
Author(s):
Ning Wang
Wen Yao
Yong Zhao
Xiaoqian Chen
Xiang Zhang
...  

Various stochastic validation metrics have been developed for validating models, among which the area metric is frequently used in practical problems. However, the existing area metric does not consider the experimental epistemic uncertainty caused by a lack of sufficient physical observations. Therefore, it cannot provide a confidence level associated with the amount of experimental data, which is a desired characteristic of a validation metric. In this paper, the concept of the area metric is extended to a new metric, namely the interval area metric, for single-site model validation with limited experimental data. The kernel of the proposed metric is the definition of two boundary distribution functions based on the Dvoretzky–Kiefer–Wolfowitz inequality, which provide an interval, at a given confidence level, that covers the true cumulative distribution function (CDF) of the physical observations. Based on this interval area metric, the validity of a model can be quantitatively measured at a specified confidence level that accounts for the lack of experimental information. The new metric is examined and compared with existing metrics through numerical case studies to demonstrate its validity and explore its properties. Furthermore, an engineering example illustrates the effectiveness of the proposed metric in a practical satellite structure application.
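
A minimal sketch of the construction described above, assuming the boundary distributions are built directly from the Dvoretzky–Kiefer–Wolfowitz inequality and that the metric is reported as lower and upper bounds on the area between a model CDF and that band; the observations and the model below are hypothetical.

```python
# Minimal sketch: DKW confidence band around an empirical CDF and an
# interval-valued area metric against a model CDF (hypothetical example).
import numpy as np
from scipy import stats

def dkw_band(observations, alpha=0.05, n_grid=500):
    """Lower/upper bounding CDFs containing the true CDF with prob. >= 1 - alpha."""
    x = np.sort(np.asarray(observations))
    n = x.size
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))        # DKW inequality
    grid = np.linspace(x[0] - 1.0, x[-1] + 1.0, n_grid)
    F_emp = np.searchsorted(x, grid, side="right") / n
    return grid, np.clip(F_emp - eps, 0.0, 1.0), np.clip(F_emp + eps, 0.0, 1.0)

def interval_area_metric(model_cdf, observations, alpha=0.05):
    """Return (lower, upper) bounds on the area between the model CDF and the true CDF."""
    grid, lo, hi = dkw_band(observations, alpha)
    dx = grid[1] - grid[0]
    Fm = model_cdf(grid)
    # Lower bound: distance from the model CDF to the band (zero inside the band).
    outside = np.maximum(lo - Fm, 0.0) + np.maximum(Fm - hi, 0.0)
    # Upper bound: distance to the farther edge of the band.
    farthest = np.maximum(np.abs(Fm - lo), np.abs(Fm - hi))
    return outside.sum() * dx, farthest.sum() * dx

# Hypothetical example: 15 observations vs. a normal model prediction
rng = np.random.default_rng(1)
obs = rng.normal(1.1, 0.5, 15)
lower, upper = interval_area_metric(stats.norm(loc=1.0, scale=0.4).cdf, obs)
print(f"area metric interval at 95% confidence: [{lower:.3f}, {upper:.3f}]")
```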


2020
Vol 501 (1)
pp. 994-1001
Author(s):
Suman Sarkar
Biswajit Pandey
Snehasish Bhattacharjee

ABSTRACT We use an information-theoretic framework to analyse data from the Galaxy Zoo 2 project and study whether there are any statistically significant correlations between the presence of bars in spiral galaxies and their environment. We measure the mutual information between the barredness of galaxies and their environments in a volume-limited sample (Mr ≤ −21) and compare it with the same quantity in data sets where (i) the bar/unbar classifications are randomized and (ii) the spatial distribution of galaxies is shuffled on different length scales. We assess the statistical significance of the differences in the mutual information using a t-test and find that neither randomization of the morphological classifications nor shuffling of the spatial distribution alters the mutual information in a statistically significant way. The non-zero mutual information between barredness and environment arises from the finite and discrete nature of the data set and can be entirely explained by mock Poisson distributions. We also separately compare the cumulative distribution functions of the barred and unbarred galaxies as a function of their local density. Using a Kolmogorov–Smirnov test, we find that the null hypothesis cannot be rejected even at the 75 per cent confidence level. Our analysis indicates that environment does not play a significant role in the formation of a bar, which is largely determined by the internal processes of the host galaxy.
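
The following sketch on purely synthetic data illustrates the kind of computation involved: mutual information between a binary bar/unbar label and a binned local density, a shuffled-label null distribution assessed with a t-test, and a two-sample Kolmogorov–Smirnov test comparing the density distributions of the two classes. It is not the Galaxy Zoo 2 analysis pipeline.

```python
# Minimal sketch: mutual information between a binary label and a binned
# environment variable, with shuffled-label null and a KS test (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5000
local_density = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # stand-in environment
barred = rng.random(n) < 0.3                                  # labels independent of density

def mutual_information(labels, values, n_bins=10):
    """MI (in nats) between a binary label and a quantile-binned continuous variable."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    joint, _, _ = np.histogram2d(labels.astype(int), values, bins=[2, edges])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

mi_data = mutual_information(barred, local_density)
# Null distribution: recompute MI after shuffling the bar/unbar labels
mi_null = [mutual_information(rng.permutation(barred), local_density) for _ in range(200)]
t_stat, p_val = stats.ttest_1samp(mi_null, mi_data)           # observed vs. shuffled null
ks_stat, ks_p = stats.ks_2samp(local_density[barred], local_density[~barred])
print(f"MI = {mi_data:.4f}, shuffled-label t-test p = {p_val:.2f}, KS p = {ks_p:.2f}")
```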


2021
Vol 13 (6)
pp. 1096
Author(s):
Soi Ahn
Sung-Rae Chung
Hyun-Jong Oh
Chu-Yong Chung

This study aimed to generate a near-real-time composite of aerosol optical depth (AOD) to improve predictive model ability and to provide the current state of aerosol spatial distribution and transport across Northeast Asia. AOD, a proxy for aerosol loading, is estimated remotely by various spaceborne imaging sensors measuring visible and infrared spectra. Nevertheless, differences in satellite-based retrieval algorithms, spatiotemporal resolution, sampling, radiometric calibration, and cloud-screening procedures create significant variability among AOD products. Satellite products, however, can be complementary in terms of their accuracy and spatiotemporal coverage. Thus, composite AOD products were derived for Northeast Asia based on data from four sensors: the Advanced Himawari Imager (AHI), the Geostationary Ocean Color Imager (GOCI), the Moderate Resolution Imaging Spectroradiometer (MODIS), and the Visible Infrared Imaging Radiometer Suite (VIIRS). Cumulative distribution functions were employed to estimate error statistics against measurements from the Aerosol Robotic Network (AERONET). To apply the AERONET point-specific errors across the domain, coefficients for each satellite were calculated using inverse distance weighting. Finally, the root mean square error (RMSE) of each satellite AOD product was calculated and used in the inverse composite weighting (ICW). Hourly AOD composites were generated (00:00–09:00 UTC, 2017) using the regression equation derived from comparing the composite AOD error statistics with the AERONET measurements, and the results showed that the correlation coefficient and RMSE of the composite were close to those of the low Earth orbit satellite products (MODIS and VIIRS). The methodology and the resulting dataset demonstrate the successful merging of multi-sensor retrievals to produce long-term satellite-based climate data records.
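
A minimal sketch of the merging step, assuming station errors are spread to grid cells by inverse distance weighting and that sensors are combined with weights inversely proportional to their RMSE; the numbers and the simple weighting below are illustrative stand-ins for the operational ICW procedure.

```python
# Minimal sketch: inverse-distance spreading of station errors and an
# inverse-RMSE weighted composite of several AOD products (hypothetical values).
import numpy as np

def idw(station_xy, station_err, grid_xy, power=2.0):
    """Interpolate per-station error coefficients onto grid points."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * station_err).sum(axis=1) / w.sum(axis=1)

def inverse_rmse_composite(aod_stack, rmse):
    """Weighted mean of sensor AODs; weight_i proportional to 1 / RMSE_i, gaps ignored."""
    w = 1.0 / np.asarray(rmse)
    w = np.where(np.isnan(aod_stack), 0.0, w[:, None])
    return np.nansum(np.nan_to_num(aod_stack) * w, axis=0) / w.sum(axis=0)

# Spread two hypothetical AERONET-site errors onto two grid points
stations = np.array([[0.0, 0.0], [1.0, 0.0]])
errors = np.array([0.02, -0.01])
grid = np.array([[0.25, 0.0], [0.75, 0.0]])
print("interpolated error coefficients:", idw(stations, errors, grid))

# Four products (AHI, GOCI, MODIS, VIIRS) on three grid cells, with one gap
aod = np.array([[0.31, 0.28, np.nan],
                [0.29, 0.27, 0.35],
                [0.33, 0.30, 0.37],
                [0.30, 0.26, 0.34]])
rmse = [0.08, 0.07, 0.05, 0.06]          # hypothetical per-sensor RMSE vs. AERONET
print("composite AOD:", inverse_rmse_composite(aod, rmse))
```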


2021
Vol 11 (8)
pp. 3310
Author(s):
Marzio Invernizzi
Federica Capra
Roberto Sozzi
Laura Capelli
Selena Sironi

For environmental odor nuisance, it is extremely important to characterize the statistics of instantaneous concentrations. In this work, a Fluctuating Plume Model for different statistical moments is proposed. It provides results in terms of mean concentration, variance, and concentration intensity. The 90th-percentile peak-to-mean factor, R90, was tested here by comparing it with experimental results (the Uttenweiler field experiment), considering different Probability Distribution Functions (PDFs): the Gamma and the Modified Weibull. Seventy-two percent of the simulated mean concentration values fell within a factor of 2 of the experimental ones, so the model was judged acceptable. The modelled results for both the standard deviation, σC, and the concentration intensity, Ic, overestimate the experimental data; this may be due to the non-ideality of the measurement system. The propagation of these errors to the estimation of R90 is complex, but the ranges covered are quite consistent: the obtained values are 1–3 for the Gamma PDF and 1.5–4 for the Modified Weibull PDF, while the experimental values range from 1.4 to 3.6.
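
As a rough illustration of how a peak-to-mean factor follows from an assumed concentration PDF, the sketch below moment-matches a Gamma and a standard (not modified) Weibull distribution to a given mean and concentration intensity Ic and reads off R90 as the ratio of the 90th percentile to the mean. It is not the fluctuating plume model itself.

```python
# Minimal sketch: R90 = C90 / Cmean for Gamma and (standard) Weibull PDFs
# moment-matched to a mean concentration and a concentration intensity Ic.
import numpy as np
from scipy import stats, optimize, special

def r90_gamma(mean, ic):
    k = 1.0 / ic**2                         # shape from moment matching
    theta = mean * ic**2                    # scale
    return stats.gamma.ppf(0.9, k, scale=theta) / mean

def r90_weibull(mean, ic):
    # Solve CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1 for the shape k
    f = lambda k: special.gamma(1 + 2 / k) / special.gamma(1 + 1 / k) ** 2 - 1 - ic**2
    k = optimize.brentq(f, 0.2, 50.0)
    lam = mean / special.gamma(1 + 1 / k)   # scale from the mean
    return stats.weibull_min.ppf(0.9, k, scale=lam) / mean

for ic in (0.5, 1.0, 1.5):
    print(f"Ic = {ic}: R90 Gamma = {r90_gamma(1.0, ic):.2f}, "
          f"Weibull = {r90_weibull(1.0, ic):.2f}")
```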


Author(s):  
Rama Subba Reddy Gorla

Heat transfer from a nuclear fuel rod bumper support was computationally simulated by a finite element method and probabilistically evaluated in view of the several uncertainties in the performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall heat transfer rates due to the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost-effective. The analysis leads to the selection of the appropriate measurements to be used in heat transfer analysis and to the identification of the most critical measurements and parameters.
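
A minimal Monte Carlo sketch of this kind of probabilistic evaluation, with a simple convective heat transfer relation standing in for the finite element model; the input distributions and the correlation-based sensitivity factors are assumptions for demonstration only.

```python
# Minimal sketch: Monte Carlo propagation of thermodynamic random variables
# through an illustrative convective relation, with an empirical CDF and
# simple correlation-based sensitivity factors.
import numpy as np

rng = np.random.default_rng(3)
n = 20000
h = rng.normal(150.0, 15.0, n)        # convective coefficient, W/m^2-K (assumed)
area = rng.normal(0.02, 0.001, n)     # exposed area, m^2 (assumed)
dT = rng.normal(80.0, 8.0, n)         # surface-to-coolant temperature difference, K (assumed)
q = h * area * dT                     # overall heat transfer rate, W

# Empirical CDF of the response
q_sorted = np.sort(q)
cdf = np.arange(1, n + 1) / n

# Sensitivity factors: correlation of each input with the response
inputs = {"h": h, "area": area, "dT": dT}
sens = {k: np.corrcoef(v, q)[0, 1] for k, v in inputs.items()}
print("sensitivity factors:", {k: round(s, 3) for k, s in sens.items()})
print("median q =", round(np.median(q), 1), "W;  P99 =", round(np.quantile(q, 0.99), 1), "W")
```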


2020
Vol 2020 (1)
Author(s):
Thabet Abdeljawad
Saima Rashid
Zakia Hammouch
İmdat İşcan
Yu-Ming Chu

Abstract The present article addresses the concept of p-convex functions on fractal sets. We prove a novel auxiliary result. On the application side, local fractional calculus is used to establish generalized Simpson-type inequalities for the class of functions whose local fractional derivatives, in absolute value at certain powers, are p-convex. The method we present provides an alternative route to the classical variants associated with generalized p-convex functions. Some of our results cover classical convex functions and classical harmonically convex functions. Novel applications to random variables, cumulative distribution functions, and generalized bivariate means are given to confirm the correctness of the present results. The approach is efficient and reliable, and it can be used as an alternative for establishing new solutions for different types of fractals in computer graphics.
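
For reference, the classical definition of a p-convex function is recalled below; the fractal-set generalization used in the article modifies this inequality within local fractional calculus, which is noted here only as an assumption about the construction.

```latex
% Classical p-convexity: for an interval I \subset (0, \infty) and p \neq 0,
% f : I \to \mathbb{R} is p-convex if
\[
  f\!\left(\bigl[t x^{p} + (1 - t)\, y^{p}\bigr]^{1/p}\right)
  \;\le\; t\, f(x) + (1 - t)\, f(y),
  \qquad x, y \in I,\; t \in [0, 1].
\]
% p = 1 recovers ordinary convexity and p = -1 recovers harmonic convexity,
% consistent with the special cases mentioned in the abstract.
```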


Mathematics
2021
Vol 9 (10)
pp. 1085
Author(s):  
Ilya E. Tarasov

This article discusses a method for approximating experimental data by functional dependencies that uses a probabilistic assessment of the deviation of the assumed dependence from the experimental data. The method introduces an independent parameter, the scale of the error probability distribution function, and allows one to synthesize deviation functions that form spaces with a nonlinear metric, based on existing assumptions about the sources of errors and noise. Classical regression analysis is recovered from the considered method as a special case. The article examines examples of experimental data analysis and shows that the method is highly resistant to single outliers in the sample under study. Since the introduction of the independent parameter increases the amount of computation, the architecture of a specialized "system on a chip" computing device and practical approaches to its implementation on programmable logic integrated circuits are considered for applying the method in measurement and information systems.
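
A minimal sketch of the idea, with a heavy-tailed (Cauchy) error PDF as a concrete stand-in for the article's deviation functions: the fit maximizes a likelihood containing an explicit error-scale parameter and stays stable under a single gross outlier, unlike ordinary least squares.

```python
# Minimal sketch: fitting y = a*x + b by maximizing a likelihood with an
# explicit error-scale parameter under a heavy-tailed error PDF (stand-in
# for the article's approach), compared with ordinary least squares.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)
y[7] += 15.0                                   # one gross outlier

def neg_log_likelihood(params):
    a, b, log_s = params
    s = np.exp(log_s)                          # error-scale parameter, kept positive
    r = (y - (a * x + b)) / s
    return np.sum(np.log(np.pi * s * (1.0 + r**2)))   # Cauchy error model

robust = optimize.minimize(neg_log_likelihood, x0=[1.0, 0.0, 0.0]).x
lsq = np.polyfit(x, y, 1)
print(f"robust fit:    a = {robust[0]:.2f}, b = {robust[1]:.2f}")
print(f"least squares: a = {lsq[0]:.2f}, b = {lsq[1]:.2f}")
```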


2011
Vol 18 (2)
pp. 223-234
Author(s):
R. Haas
K. Born

Abstract. In this study, a two-step probabilistic downscaling approach is introduced and evaluated. The method is applied, as an example, to precipitation observations in the subtropical mountain environment of the High Atlas in Morocco. The challenge is to deal with complex terrain, heavily skewed precipitation distributions, and sparse data, both spatially and temporally. In the first step of the approach, a transfer function between distributions of large-scale predictors and of local observations is derived; the aim is to forecast cumulative distribution functions whose parameters are estimated from known data. In order to interpolate between sites, the second step applies multiple linear regression to the distribution parameters of the observed data using local topographic information. By combining both steps, a prediction at every point of the investigation area is achieved. Both steps and their combination are assessed by cross-validation and by splitting the available data into a training subset and a validation subset. Based on the estimated quantiles and probabilities of zero daily precipitation, the approach is found to be adequate even in areas with difficult topographic circumstances and low data availability.
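
A minimal sketch of the two steps on synthetic data, assuming a Gamma fit to wet-day amounts plus a probability of zero precipitation in step 1 and a linear regression of those distribution parameters on elevation in step 2; the real predictors, stations, and distributions of the study are not reproduced.

```python
# Minimal sketch: per-station precipitation CDF parameters (step 1) regressed on
# a topographic predictor (step 2) to predict the CDF at an ungauged site.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_stations, n_days = 12, 1000
elevation = rng.uniform(400.0, 3000.0, n_stations)        # topographic predictor

params = []
for z in elevation:
    p_wet_true = 0.15 + 1e-4 * z + rng.normal(0.0, 0.02)  # synthetic station behaviour
    wet = rng.random(n_days) < p_wet_true
    amounts = rng.gamma(shape=0.8, scale=2.0 + 1e-3 * z, size=wet.sum())
    shape, _, scale = stats.gamma.fit(amounts, floc=0)     # step 1: per-station CDF fit
    params.append([wet.mean(), shape, scale])
params = np.array(params)

# Step 2: multiple linear regression of each distribution parameter on topography
X = np.column_stack([np.ones(n_stations), elevation])
coef, *_ = np.linalg.lstsq(X, params, rcond=None)

# Predict the precipitation CDF at an ungauged site (elevation 1800 m)
p_wet, shape, scale = np.array([1.0, 1800.0]) @ coef
q90 = stats.gamma.ppf((0.9 - (1.0 - p_wet)) / p_wet, shape, scale=scale)
print(f"predicted wet-day fraction {p_wet:.2f}, 90th-percentile daily total {q90:.1f} mm")
```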

