Assessment of the Predictive Capability of VERA-CS for CASL Challenge Problems

Author(s):  
Paridhi Athe ◽  
Christopher Jones ◽  
Nam Dinh

Abstract This paper describes the process for assessing the predictive capability of the Consortium for Advanced Simulation of Light Water Reactors (CASL) Virtual Environment for Reactor Applications code suite (VERA-CS) for different challenge problems. The assessment process is guided by two qualitative frameworks: the phenomena identification and ranking table (PIRT) and the predictive capability maturity model (PCMM). The capability and credibility of the VERA codes (individual and coupled simulation codes) are evaluated. Capability refers to evidence of the functionality required to capture the phenomena of interest, while credibility refers to evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements (based on the PIRT) against which the VERA software is evaluated. This approach, in turn, enables a focused assessment of only those capabilities that are relevant to the challenge problem. The credibility assessment using PCMM is based on decision attributes that encompass verification, validation, and uncertainty quantification (VVUQ) of the CASL codes. For each attribute, a maturity score from zero to three is assigned to ascertain the maturity level the VERA codes have attained with respect to the challenge problem. Credibility in the assessment is established by mapping relevant evidence obtained from VVUQ of the codes to the corresponding PCMM attribute. The proposed approach is illustrated using one of the CASL challenge problems, Chalk River Unidentified Deposit (CRUD) Induced Power Shift (CIPS). The assessment framework described in this paper is applicable to other M&S code development efforts.
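The PCMM-based scoring described in the abstract (a zero-to-three maturity score per VVUQ attribute, backed by mapped evidence) can be sketched as a simple scorecard. This is a hypothetical illustration, not the paper's actual scoring for CIPS; the attribute names follow the common PCMM literature, and the example scores are invented.

```python
# Hypothetical PCMM-style credibility scorecard: each VVUQ attribute
# receives a maturity score from 0 to 3; the summary reports the
# per-attribute scores, the weakest-link (minimum) level, and the mean.
# Attribute names follow the PCMM literature; values are illustrative.

PCMM_ATTRIBUTES = [
    "Representation and geometric fidelity",
    "Physics and material model fidelity",
    "Code verification",
    "Solution verification",
    "Model validation",
    "Uncertainty quantification and sensitivity analysis",
]

def assess_maturity(scores: dict) -> dict:
    """Validate scores (0-3 per attribute) and summarize maturity."""
    for attr in PCMM_ATTRIBUTES:
        if scores.get(attr) not in (0, 1, 2, 3):
            raise ValueError(f"Missing or invalid score for: {attr}")
    return {
        "per_attribute": dict(scores),
        "weakest_link": min(scores.values()),
        "mean_level": sum(scores.values()) / len(scores),
    }

# Illustrative scores for a single challenge problem (not actual CIPS results)
example = {attr: 2 for attr in PCMM_ATTRIBUTES}
example["Uncertainty quantification and sensitivity analysis"] = 1
summary = assess_maturity(example)
```

Reporting the minimum alongside the mean reflects the weakest-link reading of maturity assessments: one immature attribute limits overall credibility even when the rest score well.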

2018 ◽  
Vol 13 (4) ◽  
pp. 31-49
Author(s):  
Simon Hart ◽  
Howard Amos

Abstract Objective – This paper presents a Library Assessment Capability Maturity Model (LACMM) that can assist library managers to improve assessment. The process of developing the LACMM is detailed to provide an evidence trail that fosters confidence in its utility and value. Methods – The LACMM was developed during a series of library benchmarking activities across an international network of universities. Its utility and value were tested by the benchmarking libraries and other practitioners, and feedback from this testing was applied to improve it. Guidance was taken from a procedures model for developing maturity models that draws on design science research methodology, in which an iterative and reflective approach is taken. Results – The benchmarking activities, their decision-making junctures, and the LACMM as an artifact make up the results of this research. The LACMM has five levels. Each level represents a measure of the effectiveness of any assessment process or program, from ad hoc processes to mature and continuously improving processes. At each level there are criteria and characteristics that must be fulfilled to reach that maturity level. Corresponding to each level of maturity, four stages of the assessment cycle were identified as further elements of the LACMM template: (1) Objectives, (2) Methods and data collection, (3) Analysis and interpretation, and (4) Use of results. Several attempts were needed to determine the criteria for each maturity level corresponding to the stages of the assessment cycle. Three versions of the LACMM were developed to introduce managers to using it, each corresponding to a different kind of assessment activity: data, discussion, and comparison. A generic version was developed for those who have become more familiar with the model.
Through a process of review, capability maturity levels can be identified for each stage in the assessment cycle, as can plans to move processes toward continuous improvement. Conclusion – The LACMM will add to the plethora of resources already available. However, it is hoped that the simplicity of the tool as a means of assessing assessment and identifying an improvement path will be its strength. It can act as a quick aide-mémoire or form the basis of a comprehensive self-review or an inter-institutional benchmarking project. It is expected that the tool will be adapted and improved upon as library managers apply it.
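The level-by-stage structure described above (five maturity levels crossed with four assessment-cycle stages) can be sketched as a small rating matrix. This is a hypothetical sketch: the level names and the weakest-stage rule are assumptions for illustration, and the real criteria come from the published LACMM templates.

```python
# Hypothetical LACMM rating sketch: rate each of the four assessment-cycle
# stages named in the abstract on a 0-4 maturity scale, then read the
# overall program maturity as the weakest stage. Level names are assumed.

STAGES = [
    "Objectives",
    "Methods and data collection",
    "Analysis and interpretation",
    "Use of results",
]

LEVELS = ["Ad hoc", "Repeatable", "Defined", "Managed", "Continuously improving"]

def overall_maturity(stage_levels: dict) -> int:
    """An assessment program is only as mature as its weakest stage."""
    if set(stage_levels) != set(STAGES):
        raise ValueError("Every stage of the assessment cycle must be rated")
    if not all(level in range(len(LEVELS)) for level in stage_levels.values()):
        raise ValueError("Ratings must be level indices 0-4")
    return min(stage_levels.values())

# Illustrative self-review ratings for one library (invented values)
ratings = {
    "Objectives": 3,
    "Methods and data collection": 2,
    "Analysis and interpretation": 2,
    "Use of results": 1,
}
level_index = overall_maturity(ratings)
level_name = LEVELS[level_index]
```

A review like this also yields an improvement path directly: the lowest-rated stage ("Use of results" in the invented example) is where effort moves the overall level.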


2013 ◽  
Author(s):  
Richard Hills ◽  
Walter Witkowski ◽  
Angel Urbina ◽  
William Rider ◽  
Timothy Trucano

Author(s):  
Linyu Lin ◽  
Nam Dinh

Abstract In nuclear engineering, modeling and simulation (M&S) is widely applied to support risk-informed safety analysis. Because nuclear safety analysis has important implications, a convincing validation process is needed to assess simulation adequacy, i.e., the degree to which M&S tools can adequately represent the system quantities of interest. However, due to data gaps, validation becomes a decision-making process under uncertainty. Expert knowledge and judgment are required to collect, choose, characterize, and integrate evidence toward the final adequacy decision. In existing validation frameworks, such as Code Scaling, Applicability, and Uncertainty (CSAU; NUREG/CR-5249) and the Evaluation Model Development and Assessment Process (EMDAP; Regulatory Guide 1.203), this decision-making process is largely implicit and obscure. When scenarios are complex, knowledge biases and unreliable judgments can be overlooked, which could increase uncertainty in the simulation adequacy result and the corresponding risks. Therefore, a framework is required to formalize the decision-making process for simulation adequacy in a practical, transparent, and consistent manner. This paper suggests such a framework, "Predictive capability maturity quantification using Bayesian network (PCMQBN)," for assessing simulation adequacy based on information collected from validation activities. A case study evaluates the adequacy of a smoothed particle hydrodynamics (SPH) simulation in predicting the hydrodynamic forces on static structures during an external flooding scenario. Compared to qualitative and implicit adequacy assessment, PCMQBN is able to improve confidence in the simulation adequacy result and to reduce expected loss in risk-informed safety analysis.
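The core idea of quantifying adequacy from validation evidence can be sketched with a sequential Bayesian update. This is a hypothetical, heavily simplified illustration: PCMQBN uses a full Bayesian network, whereas the sketch below treats each evidence item as conditionally independent and uses invented likelihood values.

```python
# Hypothetical sketch of the PCMQBN idea: validation evidence items update
# a belief that the simulation is adequate. Each item is a pair
# (P(evidence | adequate), P(evidence | inadequate)); values are invented.

def update_adequacy(prior: float, evidence: list) -> float:
    """Sequentially apply Bayes' rule to P(adequate) for each evidence item."""
    p = prior
    for p_e_adequate, p_e_inadequate in evidence:
        numerator = p_e_adequate * p
        p = numerator / (numerator + p_e_inadequate * (1.0 - p))
    return p

# Three favorable validation outcomes (likelihood ratio > 1) and one
# uninformative outcome (ratio = 1), starting from an indifferent prior.
evidence = [(0.9, 0.3), (0.8, 0.4), (0.7, 0.7), (0.85, 0.25)]
posterior = update_adequacy(prior=0.5, evidence=evidence)
```

Making the update explicit is what the abstract argues for: each piece of evidence and its assumed strength is visible in the calculation, rather than folded implicitly into an expert's overall judgment.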


2020 ◽  
Vol 2020 (1) ◽  
pp. 67-77
Author(s):  
Nikita Vladimirivich Kovalyov ◽  
Boris Yakovlevich Zilberman ◽  
Nikolay Dmitrievich Goletskiy ◽  
Andrey Borisovich Sinyukhin
