Pilot Study for Uncertainty Analysis in Probabilistic Fitness-for-Service Evaluations of Zr-2.5Nb Pressure Tubes: Uncertainty Characterization

Author(s):  
Leonid Gutkin
Suresh Datla
Christopher Manu

Canadian Nuclear Standard CSA N285.8, “Technical requirements for in-service evaluation of zirconium alloy pressure tubes in CANDU® reactors”(1), permits the use of probabilistic methods when assessments of the reactor core are performed. A non-mandatory annex has been proposed for inclusion in the CSA Standard N285.8 to provide guidelines for performing uncertainty analysis in probabilistic fitness-for-service evaluations within the scope of this Standard, such as the probabilistic evaluation of leak-before-break. The proposed annex outlines the general approach to uncertainty analysis as comprising the following major activities: identification of influential variables, characterization of uncertainties in influential variables, and subsequent propagation of these uncertainties through the evaluation framework or code. The proposed methodology distinguishes between two types of non-deterministic variables by the method used to obtain their best estimate. Uncertainties are classified by their source, and different uncertainty components are considered depending on whether the best estimates for the variables of interest are obtained using calibrated parametric models or analyses or using statistical models or analyses. The application of the proposed guidelines for uncertainty analysis was exercised by performing a pilot study for one of the evaluations within the scope of the CSA Standard N285.8, the probabilistic evaluation of leak-before-break based on a postulated through-wall crack. The pilot study was performed for a representative CANDU reactor unit using the recently developed computer code P-LBB, which complies with the requirements of Canadian Nuclear Standard CSA N286.7 for quality assurance of analytical, scientific, and design computer programs for nuclear power plants. This paper discusses the approaches used and the results obtained in the second stage of this pilot study, the uncertainty characterization of the influential variables identified in the first stage, which is discussed in the companion paper presented at the PVP 2018 Conference (PVP2018-85010). In the proposed methodology, statistical assessment and expert judgment are recognized as two complementary approaches to uncertainty characterization. In this pilot study, the uncertainty characterization was limited to cases where statistical assessment could be used as the primary approach. Parametric uncertainty and uncertainty due to numerical solutions were considered as the uncertainty components for variables represented by parametric models. Residual uncertainty and uncertainty due to imbalances in the model-basis data set were considered as the uncertainty components for variables represented by statistical models. In general, the uncertainty due to numerical solutions was found to be substantially smaller than the parametric uncertainty for variables represented by parametric models, and the uncertainty due to imbalances in the model-basis data set was found to be substantially smaller than the residual uncertainty for variables represented by statistical models.
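To make the distinction between uncertainty components concrete, the following minimal sketch fits a generic statistical model to placeholder data and separates a residual-uncertainty estimate (the scatter about the fitted model) from a parametric-uncertainty estimate (the uncertainty in the fitted coefficients). The model form, variable names, and data are assumptions made for illustration only; they are not taken from P-LBB or the proposed CSA N285.8 annex.

```python
# Minimal sketch: separating residual and parametric uncertainty for a
# variable represented by a statistical (regression-type) model.
# All data and the model form are hypothetical; they do not come from P-LBB.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model-basis data set: predictor x, response y
x = rng.uniform(0.0, 10.0, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.8, size=x.size)  # scatter of 0.8 "built in"

# Fit y = b0 + b1 * x by ordinary least squares
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Residual uncertainty: standard deviation of the scatter about the model
n, p = X.shape
sigma_res = np.sqrt(residuals @ residuals / (n - p))

# Parametric uncertainty: covariance matrix of the fitted coefficients
cov_beta = sigma_res**2 * np.linalg.inv(X.T @ X)
se_beta = np.sqrt(np.diag(cov_beta))

print(f"coefficients      : {beta}")
print(f"residual std. dev.: {sigma_res:.3f}")
print(f"coefficient s.e.  : {se_beta}")
```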

Author(s):  
Christopher Manu
Suresh Datla
Leonid Gutkin

Canadian Nuclear Standard CSA N285.8, “Technical requirements for in-service evaluation of zirconium alloy pressure tubes in CANDU® reactors”(1), permits the use of probabilistic methods when performing assessments of the reactor core. A non-mandatory annex has been proposed for inclusion in the CSA Standard N285.8 to provide guidelines for performing uncertainty analysis in probabilistic fitness-for-service evaluations within the scope of this Standard, such as the probabilistic evaluation of leak-before-break. The proposed annex outlines the general approach to uncertainty analysis as comprising the following major activities: identification of influential variables, characterization of uncertainties in influential variables, and subsequent propagation of these uncertainties through the evaluation framework or code. The application of the proposed guidelines for uncertainty analysis was exercised by performing a pilot study for one of the evaluations within the scope of the CSA Standard N285.8, the probabilistic evaluation of leak-before-break based on a postulated through-wall crack. The pilot study was performed for a representative CANDU reactor unit using the recently developed computer code P-LBB, which complies with the requirements of Canadian Nuclear Standard CSA N286.7 for quality assurance of analytical, scientific, and design computer programs for nuclear power plants. This paper discusses the approach used and the results obtained in the first stage of this pilot study, the identification of influential variables. The proposed annex considers three approaches for identifying influential variables, which may be used separately or in combination: analysis of probabilistic evaluation outputs, sensitivity analysis, and expert judgment. In this pilot study, local sensitivity analysis was used to identify and rank the influential variables. For each input variable in the probabilistic evaluation of leak-before-break, the local sensitivity coefficient was determined as the relative change in the output variable associated with a small relative change in the input variable. Each input variable was also varied across a large range to assess the linearity of the relationship between the input variable and the output variable. All relevant input variables were ranked according to the absolute values of their sensitivity coefficients to identify the influential variables. On the basis of the results obtained, the pressure tube wall thickness was found to be the most influential variable in the probabilistic evaluation of leak-before-break based on a postulated through-wall crack, followed by the fracture toughness of the Zr-2.5Nb pressure tube material and the pressure tube inner diameter. The results obtained at this stage were then used in the second stage of this pilot study, the uncertainty characterization of influential variables, which is discussed in the companion paper PVP2018-85011.
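The local sensitivity procedure described above can be sketched briefly: each input is perturbed by a small relative amount, the relative change in the output is recorded, and the inputs are ranked by the absolute value of the resulting coefficient. The evaluation function, input names, and values below are placeholders, not the P-LBB leak-before-break model.

```python
# Minimal sketch of local sensitivity analysis by one-at-a-time perturbation.
# 'evaluate' is a placeholder for the probabilistic evaluation; the real
# P-LBB model and its inputs are not reproduced here.
from typing import Callable, Dict


def local_sensitivities(evaluate: Callable[[Dict[str, float]], float],
                        inputs: Dict[str, float],
                        rel_step: float = 0.01) -> Dict[str, float]:
    """Return S_i = (dY / Y) / (dX_i / X_i) for each input X_i."""
    y0 = evaluate(inputs)
    sensitivities = {}
    for name, x0 in inputs.items():
        perturbed = dict(inputs)
        perturbed[name] = x0 * (1.0 + rel_step)
        y1 = evaluate(perturbed)
        sensitivities[name] = ((y1 - y0) / y0) / rel_step
    return sensitivities


# Hypothetical stand-in for the output of interest
def evaluate(v: Dict[str, float]) -> float:
    return v["wall_thickness"] ** 2.0 * v["fracture_toughness"] / v["inner_diameter"]


inputs = {"wall_thickness": 4.2, "fracture_toughness": 55.0, "inner_diameter": 103.4}
ranked = sorted(local_sensitivities(evaluate, inputs).items(),
                key=lambda kv: abs(kv[1]), reverse=True)
for name, s in ranked:
    print(f"{name:20s} S = {s:+.3f}")
```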


1998
Vol 14 (3)
pp. 202-210
Author(s):
Suzanne Skiffington
Ephrem Fernandez
Ken McFarland

This study extends previous attempts to assess emotion with single adjective descriptors by examining semantic as well as cognitive, motivational, and intensity features of emotions. The focus was on seven negative emotions common to several emotion typologies: anger, fear, sadness, shame, pity, jealousy, and contempt. For each of these emotions, seven items were generated corresponding to cognitive appraisal about the self, cognitive appraisal about the environment, action tendency, action fantasy, synonym, antonym, and intensity range of the emotion, respectively. A pilot study established that 48 of the 49 items were linked predominantly to the specific emotions as predicted. The main data set, comprising 700 subjects' ratings of relatedness between items and emotions, was subjected to a series of factor analyses, which revealed that 44 of the 49 items loaded on the emotion constructs as predicted. A final factor analysis of these items uncovered seven factors accounting for 39% of the variance. These emergent factors corresponded to the hypothesized emotion constructs, with the exception of anger and fear, which were somewhat confounded. These findings lay the groundwork for the construction of an instrument to assess emotions multicomponentially.
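As an illustration of the analysis pipeline described (a seven-factor analysis of a subjects-by-items ratings matrix), the sketch below fits such a model and reports the proportion of variance captured by the factors. The data, library choice, and variance calculation are assumptions for illustration; the original study's ratings are not reproduced.

```python
# Minimal sketch of a factor analysis of item ratings, assuming a
# subjects x items ratings matrix; the data here are random placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_subjects, n_items, n_factors = 700, 49, 7

# Placeholder ratings on a 1-7 scale (the real study used relatedness ratings)
ratings = rng.integers(1, 8, size=(n_subjects, n_items)).astype(float)

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(ratings)

loadings = fa.components_.T                 # items x factors
explained = (loadings ** 2).sum(axis=0)     # variance captured per factor
total_var = ratings.var(axis=0, ddof=1).sum()
print("proportion of variance per factor:", explained / total_var)
print("total proportion explained       :", explained.sum() / total_var)
```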


Author(s):  
Michael S. Danielson

The first empirical task is to identify the characteristics of the municipalities that US-based migrants have come together to support financially. Using a nationwide, municipal-level data set compiled by the author, the chapter estimates several multivariate statistical models to compare municipalities that did not benefit from the 3x1 Program for Migrants with those that did, and seeks to explain variation in the number and value of 3x1 projects. The analysis shows that migrants are more likely to contribute where migrant civil society has become more deeply institutionalized at the state level and in places with longer histories as migrant-sending communities. Furthermore, the results suggest that political factors are at play, as projects have disproportionately benefited states and municipalities where the PAN had a stronger presence, with fewer occurring elsewhere.
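A hedged sketch of the kind of models described might pair a logistic regression for whether a municipality received any 3x1 project with a Poisson regression for the number of projects. The chapter's actual specifications, covariates, and data are not public, so every variable name and value below is hypothetical.

```python
# Illustrative sketch only: hypothetical covariates and simulated data,
# not the chapter's actual models or data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2400  # roughly the order of magnitude of Mexican municipalities

df = pd.DataFrame({
    "civil_society_index": rng.normal(size=n),    # state-level institutionalization
    "migration_history":   rng.normal(size=n),    # history as a sending municipality
    "pan_vote_share":      rng.uniform(0, 1, n),  # PAN presence
})
logit_p = -1.0 + 0.8 * df.civil_society_index + 0.5 * df.migration_history
p = 1.0 / (1.0 + np.exp(-logit_p))
df["any_project"] = rng.binomial(1, p.to_numpy())
df["n_projects"] = rng.poisson(np.exp(0.2 + 0.4 * df.civil_society_index).to_numpy()) * df.any_project

# Participation: which municipalities receive any 3x1 project
participation = smf.logit(
    "any_project ~ civil_society_index + migration_history + pan_vote_share", df
).fit(disp=False)

# Intensity: number of projects across municipalities
intensity = smf.poisson(
    "n_projects ~ civil_society_index + migration_history + pan_vote_share", df
).fit(disp=False)

print(participation.summary())
print(intensity.summary())
```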


2018
Vol 27 (4)
pp. 191-198
Author(s):
Karen Van den Bussche
Sofie Verhaeghe
Ann Van Hecke
Dimitri Beeckman

2017
Vol 7 (1)
Author(s):
G. Panou
R. Korakitis

The direct geodesic problem on an oblate spheroid is described as an initial value problem and is solved numerically using both geodetic and Cartesian coordinates. The geodesic equations are formulated by means of the theory of differential geometry. The initial value problem under consideration is reduced to a system of first-order ordinary differential equations, which is solved using a numerical method. The solution provides the coordinates and the azimuths at any point along the geodesic. The Clairaut constant is not used in the solution, but it is computed as a check on the precision of the method. An extensive data set of geodesics is used in order to evaluate the performance of the method in each coordinate system. The results for the direct geodesic problem are validated by comparison to Karney’s method. We conclude that a complete, stable, precise, accurate, and fast solution of the problem in Cartesian coordinates is accomplished.
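The Cartesian formulation can be sketched generically: on the surface phi(x, y, z) = x^2/a^2 + y^2/a^2 + z^2/b^2 - 1 = 0, a unit-speed geodesic has acceleration purely along the surface normal, which gives a first-order system that a standard integrator can solve, and the Clairaut constant (x*vy - y*vx for an arc-length parameterization) can be monitored as a precision check. This is an illustration under those assumptions, not the authors' implementation; the semi-axes, starting point, and azimuth are arbitrary.

```python
# Sketch: direct geodesic problem on an oblate spheroid as an initial value
# problem in Cartesian coordinates, integrated with a general-purpose solver.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 6378137.0, 6356752.314245  # WGS84-like semi-major / semi-minor axes (m)
D = np.array([1 / a**2, 1 / a**2, 1 / b**2])


def rhs(s, state):
    """Unit-speed geodesic: acceleration lies along the surface normal."""
    x, v = state[:3], state[3:]
    grad = 2.0 * D * x                          # gradient of the constraint
    lam = -(v @ (2.0 * D * v)) / (grad @ grad)  # from d^2(phi)/ds^2 = 0
    return np.concatenate([v, lam * grad])


# Start on the equator, heading at 45 degrees azimuth (east = +y, north = +z)
x0 = np.array([a, 0.0, 0.0])
v0 = np.array([0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
grad0 = 2.0 * D * x0
v0 -= (v0 @ grad0) / (grad0 @ grad0) * grad0    # project onto tangent plane
v0 /= np.linalg.norm(v0)

arc_length = 5.0e6  # metres along the geodesic
sol = solve_ivp(rhs, (0.0, arc_length), np.concatenate([x0, v0]),
                method="DOP853", rtol=1e-12, atol=1e-9)

# Clairaut constant r*sin(azimuth) = x*vy - y*vx should stay constant
x, y, vx, vy = sol.y[0], sol.y[1], sol.y[3], sol.y[4]
clairaut = x * vy - y * vx
print("endpoint (m):", sol.y[:3, -1])
print("Clairaut constant drift:", clairaut.max() - clairaut.min())
```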


2021
Author(s):
Zaynab Shaik
Nicola Georgina Bergh
Bengt Oxelman
Anthony George Verboom

We applied species delimitation methods based on the Multi-Species Coalescent (MSC) model to 500+ loci derived from genotyping-by-sequencing of the South African Seriphium plumosum (Asteraceae) species complex. The loci were represented either as multiple sequence alignments or as single nucleotide polymorphisms (SNPs) and analysed by the STACEY and Bayes Factor Delimitation (BFD)/SNAPP methods, respectively. Both methods supported species taxonomies in which virtually all of the 32 sampled individuals, each representing its own geographical population, were identified as separate species. The computational effort required to achieve adequate mixing of the MCMC chains was considerable, and the species/minimal cluster trees identified similar strongly supported clades in replicate runs. The resolution was, however, higher in the STACEY trees than in the SNAPP trees, which is consistent with the higher information content of full sequences. The computational efficiency, measured as effective sample sizes of likelihood and posterior estimates per time unit, was consistently higher for STACEY. A random subset of 56 alignments had similar resolution to the 524-locus SNP data set. The STRUCTURE-like sparse Non-negative Matrix Factorisation (sNMF) method was applied to six individuals from each of 48 geographical populations and 28,023 SNPs. Significantly fewer clusters (13) were identified as optimal by this analysis than by the MSC methods. The sNMF clusters correspond closely to clades consistently supported by the MSC methods and show evidence of admixture, especially in the western Cape Floristic Region. We discuss the significance of these findings and conclude that, when using genome-scale data, it is important to consider a priori the kind of species one wants to identify, the assumptions behind the parametric models applied, and the potential consequences of model violations.


2021
pp. 1-22
Author(s):
Xu Guo
Zongliang Du
Chang Liu
Shan Tang

In the present paper, a new uncertainty analysis-based framework for data-driven computational mechanics (DDCM) is established. Compared with its classical counterpart, the distinctive feature of this framework is that uncertainty analysis is introduced explicitly into the problem formulation. Instead of focusing only on a single solution in phase space, a solution set is sought in order to account for the influence of the multi-source uncertainties associated with the data set on the data-driven solutions. An illustrative example shows that the proposed framework is not only conceptually new but also has the potential to circumvent the intrinsic numerical difficulties of the classical DDCM framework.


2021
Vol 27 (3)
pp. 8-34
Author(s):
Tatyana Cherkashina

The article presents the experience of converting non-targeted administrative data into research data, using as an example data on the income and property of deputies of local legislative bodies of the Russian Federation for 2019, collected as part of anti-corruption operations. This particular empirical fragment was selected for a pilot study of administrative data, which includes assessing the feasibility of integrating scattered fragments of information into a single database, assessing the quality of the data, and assessing their relevance for research problems, particularly the analysis of high-income strata and the apparent trend towards individualization of private property. The system of indicators for assessing data quality includes timeliness, availability, interpretability, reliability, comparability, coherence, errors of representation and measurement, and relevance. In the data set in question, measurement errors are more common than representation errors. Overall, the article emphasizes that introducing new non-targeted data into circulation requires preliminary testing, while data quality assessment becomes distributed both in time and between different actors. The transition from created data to "obtained" data shifts the function of evaluating data quality from the researcher-creator to the researcher-user. Although in this case data quality is partly ensured by the legal support for their production, the transformation of administrative data into research data involves assessing a variety of quality dimensions, ranging from availability to uniformity and accuracy.


HLA
2020
Vol 96 (2)
pp. 192-193
Author(s):
Edwina Sutton
Dianne De Santis
Louise Hay
Elizabeth McKinnon
Lloyd D'Orsogna
...

2009
Vol 54 (No. 5)
pp. 217-228
Author(s):
J. Kvapilík
J. Přibyl
Z. Růžička
D. Řehák

Analysis of data on 7,571,883 pig carcasses slaughtered from 2004 to 2007 yielded the following means: quality class (QC) 2.32, lean meat percentage (LM) 55.83%, carcass weight (CW) 87.21 kg, muscle thickness (MT) 61.95 mm, and fat thickness (FT) 15.95 mm. The highest correlation coefficients are between QC and LM (r = –0.920), LM and FT (–0.900), and QC and FT (0.828); the lowest is between FT and MT (r = –0.084). Quality class, as the dominant indicator, is influenced mainly by LM, which explains from 77% to 89% of its variability in the case of linear regression. Among the eight methods of pig carcass classification, the FOM apparatus was used most frequently (46.5% of carcasses), followed by the ULTRA-FOM 300 apparatus (15.6%), other apparatus (13.2%), and the IS-D-05 unit (9.8%). In the statistical models used, all effects (differences) are statistically significant because of the large size of the data set. Separate evaluation of each cross-classified effect shows that EV has the largest influence, while year-season and classification method have smaller influences. The time trend (42 months) documents stable CW and MT, a slight increase in LM, and an improvement in QC. The estimated results indicate the successful introduction of pig carcass classification in the CR after accession to the EU.
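As a quick check of the reported relationship, a correlation of r = –0.920 between QC and LM corresponds to R² ≈ 0.85 under simple linear regression, consistent with the stated 77% to 89% range. The sketch below shows the computation on simulated placeholder data; the original 7.5-million-record data set is not reproduced.

```python
# Sketch: share of quality-class variability explained by lean meat percentage
# under simple linear regression; the data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
lm = rng.normal(55.83, 3.0, size=10000)                       # lean meat percentage
qc = 2.32 - 0.25 * (lm - 55.83) + rng.normal(0, 0.3, 10000)   # quality class

r = np.corrcoef(qc, lm)[0, 1]
slope, intercept = np.polyfit(lm, qc, 1)
pred = intercept + slope * lm
r_squared = 1.0 - np.sum((qc - pred) ** 2) / np.sum((qc - np.mean(qc)) ** 2)

print(f"r = {r:.3f}, R^2 = {r_squared:.3f}")  # for r = -0.920, R^2 = 0.920**2 = 0.846
```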

