Towards Development of Uncertainty Library for Nuclear Reactor Core Simulation

Author(s):  
Hany S. Abdel-Khalik ◽  
Dongli Huang ◽  
Ondrej Chvala ◽  
G. Ivan Maldonado

Uncertainty quantification is an indispensable analysis for nuclear reactor simulation as it provides a rigorous approach by which the credibility of the predictions can be assessed. Focusing on propagation of multi-group cross-sections, the major challenge lies in the enormous size of the uncertainty space. Earlier work has explored the use of the physics-guided coverage mapping (PCM) methodology to assess the quality of the assumptions typically employed to reduce the size of the uncertainty space. A reduced order modeling (ROM) approach has been further developed to identify the active degrees of freedom (DOFs) of the uncertainty space, comprising all the cross-section few-group parameters required in core-wide simulation. In the current work, a sensitivity study, based on the PCM and ROM results, is applied to identify a suitable compressed representation of the uncertainty space to render feasible the quantification and prioritization of the various sources of uncertainties. While the proposed developments are general to any reactor physics computational sequence, the proposed approach is customized to the TRITON-NESTLE computational sequence, simulating the BWR lattice model and the core model, which will serve as a demonstrative tool for the implementation of the algorithms.
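The compressed representation of the uncertainty space described above can be sketched in a few lines. This is an illustrative toy (all sizes and tolerances invented, not the PCM/ROM implementation of the paper): a covariance matrix over many few-group parameters is truncated to its dominant eigen-directions, and perturbation samples are then drawn in that reduced space only.

```python
import numpy as np

# Toy sketch of uncertainty-space compression: truncate the covariance
# eigendecomposition, then sample in the reduced (active) subspace.
# All dimensions and data below are invented for illustration.

rng = np.random.default_rng(42)

# Synthetic symmetric positive semi-definite covariance with low
# intrinsic rank, mimicking strongly correlated cross-section parameters.
n = 200                                  # number of few-group parameters
A = rng.standard_normal((n, 20))         # intrinsic rank ~20
cov = A @ A.T / n

# Eigendecomposition; keep components explaining 99.9% of the variance.
vals, vecs = np.linalg.eigh(cov)
vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
frac = np.cumsum(vals) / np.sum(vals)
r = int(np.searchsorted(frac, 0.999)) + 1

# Draw perturbations in the r-dimensional active subspace only.
U_r = vecs[:, :r] * np.sqrt(np.clip(vals[:r], 0.0, None))
xi = rng.standard_normal((r, 1000))      # reduced-space samples
dx = U_r @ xi                            # full-space perturbations
```

Sampling then costs O(r) random draws per realization instead of O(n), which is what makes sampling-based quantification feasible for large uncertainty spaces.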

Energies ◽  
2018 ◽  
Vol 11 (12) ◽  
pp. 3509 ◽  
Author(s):  
Bruno Merk ◽  
Mark Bankhead ◽  
Dzianis Litskevich ◽  
Robert Gregg ◽  
Aiden Peakman ◽  
...  

The U.K. has initiated the nuclear renaissance by contracting for the first two new plants and announcing further new-build projects. The U.K. government has recently started to support this development by announcing a national programme of nuclear innovation. With respect to modelling and simulation, the programme aims to meet the demand for education and the build-up of a suitably qualified workforce, as well as the development and application of a new state-of-the-art software environment for improved economics and safety. This document supports the ambition to define a new approach to the structured development of nuclear reactor core simulation, one based on oversight rather than on isolated detail problems and the development of single tools for each of them. It is based on studying the industrial demand in order to bridge the gap in technical innovation that can be derived from basic research, and thereby to create a tailored industry solution that sets the new standard for reactor core modelling and simulation in the U.K. Finally, a technical requirements specification has to be developed alongside the strategic approach to give code developers a functional specification they can use to develop the tools of the future. Key points for a culture change in the application of modern technologies are identified in the use of DevOps in a double-strata approach to academic and industrial code development. The document provides a novel, strategic approach to achieving the most promising final product for industry and to identifying the most important points for improvement.


Author(s):  
Antonio Carlos Marques Alvim ◽  
Fernando Carvalho da Silva ◽  
Aquilino Senra Martinez

This paper deals with an alternative numerical method for calculating the depletion and production chains of the main isotopes found in a pressurized water reactor. It is based on an exponentiation procedure coupled to an orthogonal polynomial expansion to compute the transition matrix associated with the solution of the differential equations describing isotope concentrations in the nuclear reactor. The method was implemented in an automated nuclear reactor core design system that uses a quick and accurate 3D nodal method, the Nodal Expansion Method (NEM), to solve the diffusion equation describing the spatial neutron distribution in the reactor. Besides solving the diffusion equation, this computational system also solves the depletion equations governing the gradual changes in the material composition of the core due to fuel depletion. The depletion calculation is the most time-consuming part of the nuclear reactor design code, and it has to be done very precisely in order to obtain a correct evaluation of the economic performance of the nuclear reactor. In this sense, the proposed method was applied to estimate the critical boron concentration at the end of the cycle. Results were compared to measured values and confirm the effectiveness of the method for practical purposes.
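The transition-matrix idea can be illustrated with a minimal sketch (not the authors' algorithm): a two-isotope decay chain dN/dt = A N solved by exponentiating A, here with a plain truncated Taylor series in place of the orthogonal polynomial expansion, and cross-checked against the analytic Bateman solution. Decay constants and times are invented.

```python
import numpy as np

# Two-isotope chain: parent (lam1) decays into daughter (lam2).
lam1, lam2 = 0.3, 0.1                # decay constants (1/s), invented
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])       # transition matrix

def expm_taylor(M, terms=40):
    """exp(M) via truncated Taylor series (fine for small, well-scaled M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

t = 5.0
N0 = np.array([1.0, 0.0])            # start with pure parent
N = expm_taylor(A * t) @ N0          # concentrations at time t

# Analytic Bateman solution for cross-check.
N1 = np.exp(-lam1 * t)
N2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
```

A production code would use a better-conditioned expansion (as the paper does), but the structure is the same: one matrix function evaluation advances all chain concentrations simultaneously.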


Author(s):  
Mancang Li ◽  
Kan Wang ◽  
Dong Yao

The general equivalence theory (GET) and the superhomogenization (SPH) method are widely used for equivalence in the standard two-step reactor physics calculation. GET has performed well in light water reactor calculations via nodal reactor analysis methods, while SPH has recently been revisited to meet the need for accurate pin-by-pin core calculations. However, both classical methods have their limitations. The super equivalence method (SPE) is proposed in this paper as an attempt to preserve the surface current, the reaction rates, and the reactivity. It enhances the good properties of the SPH method through reaction-rate-based normalization. The concept of pin discontinuity factors, the basic idea of the GET technique, is utilized to preserve the surface current. However, the pin discontinuity factors are merged into the homogenized cross sections and diffusion coefficients, so no additional homogenization parameters are needed in the subsequent reactor core calculation. The eigenvalue preservation is performed after the reaction rates and surface current have been preserved, resulting in reduced reactivity errors. The SPE has been implemented in the Monte Carlo based homogenization code MCMC, part of the RMC program under development at Tsinghua University. The C5G7 benchmark problem has been used to test the SPE. The results show that the SPE method is not only suitable for equivalence in Monte Carlo based homogenization but also provides improved accuracy compared to the traditional GET or SPH methods.
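The reaction-rate-preserving normalization at the heart of SPH (which SPE builds on) can be sketched as a fixed-point iteration. This toy uses a two-group infinite-medium fixed-source solve with invented cross sections and reference fluxes; it is illustrative only, not the SPE method of the paper.

```python
import numpy as np

# SPH-style iteration: find factors mu_g so that the homogeneous solve
# with corrected cross sections mu_g * sigma_g reproduces the reference
# "heterogeneous" fluxes, thereby preserving reaction rates.
# All numbers below are invented.

S = 1.0                              # fixed source in group 1
sig_a = np.array([0.01, 0.10])       # absorption per group
sig_s12 = 0.02                       # group 1 -> 2 scattering
phi_ref = np.array([9.0, 1.6])       # reference heterogeneous fluxes

mu = np.ones(2)
for _ in range(100):
    # homogeneous infinite-medium solve with SPH-corrected cross sections
    phi1 = S / (mu[0] * (sig_a[0] + sig_s12))
    phi2 = mu[0] * sig_s12 * phi1 / (mu[1] * sig_a[1])
    phi_hom = np.array([phi1, phi2])
    mu_new = mu * phi_hom / phi_ref  # classic SPH update
    if np.max(np.abs(mu_new - mu)) < 1e-12:
        mu = mu_new
        break
    mu = mu_new
```

At convergence the corrected reaction rates mu_g * sigma_g * phi_hom_g equal the heterogeneous ones; SPE additionally folds in discontinuity factors and an eigenvalue correction, which this sketch omits.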


2021 ◽  
Vol 2048 (1) ◽  
pp. 012024
Author(s):  
H Ardiansyah ◽  
V Seker ◽  
T Downar ◽  
S Skutnik ◽  
W Wieselquist

Abstract The significant recent advances in computer speed and memory have made possible increasing fidelity and accuracy in reactor core simulation with minimal increase in the computational burden. This has been important for modeling some of the smaller advanced reactor designs, for which simplified approximations such as few-group homogenized diffusion theory are not as accurate as they were for large light water reactor cores. For narrow cylindrical cores with large surface-to-volume ratios such as the Pebble Bed Modular Reactor (PBMR), neutron leakage from the core can be significant, particularly with the harder neutron spectrum and longer mean free path than in a light water reactor. In this paper the core from the OECD PBMR-400 benchmark was analyzed using multigroup Monte Carlo cross sections in the HTR reactor core simulation code AGREE. Homogenized cross sections were generated for each of the discrete regions of the AGREE model using a full-core SERPENT Monte Carlo model. The cross sections were generated for a variety of group structures in AGREE to assess the importance of finer group discretization on the accuracy of the core eigenvalue and flux predictions compared to the SERPENT full-core Monte Carlo solution. A significant increase in accuracy was observed with an increasing number of energy groups, with as much as a 530 pcm improvement in the eigenvalue calculation when increasing the number of energy groups from 2 to 14. Significant improvements were also observed in the AGREE neutron flux distributions compared to the SERPENT full-core calculation.


Author(s):  
Luca Ratti ◽  
Guido Mazzini ◽  
Marek Ruščák ◽  
Valerio Giusti

The Czech Republic National Radiation Protection Institute (SURO) provides technical support to the Czech Republic State Office for Nuclear Safety, performing safety analyses and reviewing the technical documentation for Nuclear Power Plants (NPPs). For this reason, several computational models created in SURO were prepared using different codes as tools to simulate and investigate design-basis and beyond-design-basis accident scenarios. This paper focuses on the creation of SCALE and PARCS neutronic models for the analysis of the VVER-440 reactor. In particular, SCALE models of the VVER-440 fuel assemblies have been created in order to produce the collapsed and homogenized cross sections necessary for the study of the whole VVER-440 reactor core with PARCS. A sensitivity study of the energy threshold suitable for the preparation with SCALE of collapsed two-energy-group homogenized cross sections is also discussed. Finally, the results obtained with the PARCS core model are compared with those reported in the VVER-440 Final Safety Report.
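The two-group collapsing whose threshold the sensitivity study examines is a flux-weighted condensation. A minimal sketch with an invented fine-group structure and spectrum (not the SCALE implementation):

```python
import numpy as np

# Flux-weighted collapse of fine-group cross sections into two groups
# (fast, thermal) split at a chosen energy threshold. Data are invented.

# Fine-group boundaries in eV, descending (fast -> thermal): 5 groups.
bounds = np.array([2.0e7, 1.0e5, 1.0e3, 4.0, 0.625, 1.0e-5])
sig_fine = np.array([1.2, 2.5, 6.0, 20.0, 45.0])    # XS (barns), invented
phi_fine = np.array([0.30, 0.25, 0.15, 0.20, 0.10]) # flux spectrum, invented

def collapse_two_group(threshold_ev):
    """Flux-weighted condensation into (fast, thermal) at the threshold."""
    # a fine group is 'fast' if its lower boundary is >= the threshold
    fast = bounds[1:] >= threshold_ev
    sig = []
    for mask in (fast, ~fast):
        sig.append(np.sum(sig_fine[mask] * phi_fine[mask])
                   / np.sum(phi_fine[mask]))
    return tuple(sig)

sig_fast, sig_thermal = collapse_two_group(0.625)
```

By construction the collapse preserves total reaction rate for the given spectrum; moving the threshold redistributes fine groups between the two macro groups, which is exactly the sensitivity being studied.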


Author(s):  
Wenping Hu ◽  
Shengyao Jiang ◽  
Xingtuan Yang

Pebble-bed nuclear reactor technology, with a reactor core typically composed of spherical pebbles draining very slowly in a continuous refueling process, is currently being revived around the world. But the dense, slow pebble flow in the reactor, which has an important impact on reactor physics, is still poorly understood. Under such circumstances, this article studies mathematical models with the potential to describe pebble motion in the pebble-bed reactor, including the void model, the spot model, and the DEM model. The fundamental principles of these models are introduced, and the successes and deficiencies of each model are briefly analyzed. Theoretically, the spot model and the DEM model are expected to be the more practical choices for studying pebble dynamics. However, the spot model still needs to be refined through further experimentation, and more research is necessary to reduce the huge computational cost before DEM simulation becomes a really practical option.
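The computational cost of DEM comes from resolving every pebble-pebble contact at a tiny time step. A minimal one-dimensional sketch (two pebbles, linear spring-dashpot contact, all parameters invented) shows the basic update loop that a reactor-scale model must repeat for hundreds of thousands of pebbles:

```python
import numpy as np

# Minimal DEM sketch: two pebbles approach head-on in 1D and interact
# through a linear spring-dashpot normal contact. Parameters are invented.
r = 0.03                 # pebble radius (m)
m = 0.2                  # pebble mass (kg)
k = 1.0e5                # contact stiffness (N/m)
c = 5.0                  # contact damping (N*s/m)
dt = 1.0e-5              # time step, small vs. contact duration

x = np.array([0.0, 0.07])        # center positions
v = np.array([1.0, -1.0])        # approaching velocities

for _ in range(20000):
    overlap = 2 * r - (x[1] - x[0])
    if overlap > 0:              # in contact: repulsive + dissipative force
        f = k * overlap + c * (v[0] - v[1])
        a = np.array([-f, f]) / m
    else:
        a = np.zeros(2)
    v += a * dt                  # explicit time integration
    x += v * dt
```

The damping dissipates kinetic energy, so the pebbles rebound more slowly than they approached; scaling this contact resolution to a full core is what drives the computational cost discussed above.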


Energies ◽  
2021 ◽  
Vol 14 (16) ◽  
pp. 5060
Author(s):  
Sebastian Davies ◽  
Dzianis Litskevich ◽  
Ulrich Rohde ◽  
Anna Detkina ◽  
Bruno Merk ◽  
...  

Understanding and optimizing the relation between nuclear reactor components or physical phenomena allows us to improve the economics and safety of nuclear reactors, deliver new nuclear reactor designs, and educate nuclear staff. In the case of the reactor core, such relations are described by coupled reactor physics: heat transfer depends on energy production, while energy production depends on heat transfer, and almost none of the available codes provide fully coupled reactor physics at the fuel pin level. A Multiscale and Multiphysics nuclear software development between NURESIM and CASL for LWRs has been proposed for the UK. Improved coupled reactor physics at the fuel pin level can be simulated by coupling nodal codes such as DYN3D with subchannel codes such as CTF. In this journal article, the first part of the DYN3D and CTF coupling within the Multiscale and Multiphysics software development is presented: all inner iterations within one outer iteration are evaluated to provide partially verified, improved coupled reactor physics at the fuel pin level. This verification has shown that the DYN3D and CTF coupling provides improved feedback distributions over the DYN3D coupling, as crossflow and turbulent mixing are present in the former.
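The inner/outer iteration structure of such a coupling can be sketched as a Picard (fixed-point) loop between two solvers. The scalar "neutronics" and "thermal-hydraulics" models below are toys with invented feedback coefficients, standing in for DYN3D and CTF:

```python
# Picard coupling sketch: alternate a neutronics solve and a
# thermal-hydraulics solve until the exchanged fields stop changing.
# Both models and all coefficients are invented for illustration.

def neutronics(T_fuel):
    """Toy power model: Doppler feedback lowers power as fuel heats up."""
    return 100.0 - 0.05 * (T_fuel - 600.0)   # power, arbitrary units

def thermal_hydraulics(power):
    """Toy fuel-temperature model: temperature rises with power."""
    return 500.0 + 2.0 * power               # temperature in K

T, P = 600.0, 0.0
for outer in range(100):                     # outer (coupled) iterations
    P_new = neutronics(T)                    # inner solve 1
    T_new = thermal_hydraulics(P_new)        # inner solve 2
    if abs(T_new - T) < 1e-10 and abs(P_new - P) < 1e-10:
        T, P = T_new, P_new
        break
    T, P = T_new, P_new
```

The loop converges because the combined feedback is a contraction; in the real coupling, each "inner solve" is itself an iterative field solution over every fuel pin and subchannel.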


2014 ◽  
Vol 2014 ◽  
pp. 1-14
Author(s):  
M. R. Ball ◽  
C. McEwan ◽  
D. R. Novog ◽  
J. C. Luxat

The propagation of nuclear data uncertainties through reactor physics calculations has received attention through the Organization for Economic Cooperation and Development—Nuclear Energy Agency's Uncertainty Analysis in Modelling (UAM) benchmark. A common strategy for performing lattice physics uncertainty analysis involves starting with nuclear data and a covariance matrix, which are typically available at infinite dilution. To describe the uncertainty of all multigroup physics parameters—including those at finite dilution—additional calculations must be performed that relate uncertainties in an infinite-dilution cross-section to those at the problem dilution. Two potential methods for propagating dilution-related uncertainties were studied in this work. The first assumed a correlation between continuous-energy and multigroup cross-section data and uncertainties, which is convenient for direct implementation in lattice physics codes. The second is based on a more rigorous approach involving Monte Carlo sampling of resonance parameters in evaluated nuclear data using the TALYS software. When applied to a light water fuel cell, the two approaches show significant differences, indicating that the assumption of the first method did not capture the complexity of physics parameter data uncertainties. It was found that the covariance of problem-dilution multigroup parameters for selected neutron cross-sections can vary significantly from their infinite-dilution counterparts.
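The sampling-based propagation underlying both approaches can be illustrated with a one-group toy (invented covariance and cross sections, not the UAM data): correlated relative perturbations are drawn from a covariance matrix via its Cholesky factor and pushed through a response, here k-infinity.

```python
import numpy as np

# Monte Carlo propagation of cross-section uncertainty to k-infinity.
# Nominal values and covariance below are invented for illustration.
rng = np.random.default_rng(0)

nu_sig_f0, sig_a0 = 0.14, 0.10           # nominal one-group values
rel_cov = np.array([[4.00e-4, 1.00e-4],  # relative covariance:
                    [1.00e-4, 2.25e-4]]) # 2% and 1.5% std devs, correlated

L = np.linalg.cholesky(rel_cov)
samples = rng.standard_normal((2, 50000))
pert = L @ samples                        # correlated relative perturbations

# Propagate each sample to k-infinity = nu*sig_f / sig_a.
k = (nu_sig_f0 * (1.0 + pert[0])) / (sig_a0 * (1.0 + pert[1]))
rel_std_k = np.std(k) / np.mean(k)
```

For this ratio response, first-order propagation predicts a relative standard deviation of sqrt(var_f + var_a - 2 cov) ≈ 2.1%, which the sampled estimate reproduces; the dilution question in the paper is about which covariance matrix (infinite- or problem-dilution) to feed into such a calculation.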


Author(s):  
Dongli Huang ◽  
Hany S. Abdel-Khalik

This work aims to develop an uncertainty analysis methodology for the propagation and quantification of the effects of nuclear cross-section uncertainties on important core-wide attributes, such as power distribution and core critical eigenvalue. Given the computationally taxing nature of this endeavor, our goal is to develop a methodology capable of preserving the accuracy of brute force sampling techniques for uncertainty quantification while realizing the efficiency of deterministic techniques. To achieve that, a reduced order modeling (ROM) approach is proposed to deal with the enormous size of the uncertainty space, comprising all the cross-section few-group parameters required in core-wide simulation. The idea is to generate a compressed representation of the uncertainty space, as represented by a covariance matrix, that renders sampling techniques a computationally feasible option for quantifying and prioritizing the various sources of uncertainties. While the proposed developments are general to any reactor physics computational sequence, we customize our approach to the NESTLE [1]-TRITON [2] computational sequence, which will serve as a demonstrative tool for the implementation of our approach. NESTLE is a core-wide simulation code that relies on few-group cross-sections to calculate core-wide attributes over multiple cycles of depletion. Its input cross-sections are generated using a matrix of conditions evaluated with a lattice physics code, which in our implementation is the TRITON software of ORNL's SCALE suite. This manuscript presents one of the early steps towards this goal. Specifically, we focus here on the development of the algorithms for determining the reduced dimension of the covariance matrix. A numerical experiment using the TRITON software is employed to demonstrate how the reduction is achieved.
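The step this manuscript focuses on, determining the reduced dimension of the covariance matrix, amounts to choosing a spectral truncation rank. A toy sketch with an invented covariance (not TRITON output): truncate the spectrum at a user-set relative tolerance and check the reconstruction error.

```python
import numpy as np

# Determine the reduced dimension of a covariance matrix by truncating
# its eigenvalue spectrum at a relative tolerance. Data are synthetic.
rng = np.random.default_rng(7)
n = 300

# Synthetic covariance with an exponentially decaying spectrum.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
spectrum = np.exp(-0.25 * np.arange(n))
cov = (Q * spectrum) @ Q.T

vals, vecs = np.linalg.eigh(cov)
vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending

tol = 1e-6                                # relative truncation tolerance
r = int(np.sum(vals > tol * vals[0]))     # retained (reduced) dimension
cov_r = (vecs[:, :r] * vals[:r]) @ vecs[:, :r].T
err = np.linalg.norm(cov - cov_r) / np.linalg.norm(cov)
```

The rank r, not the nominal dimension n, then sets the cost of the downstream sampling, which is why a rapidly decaying spectrum makes the overall methodology feasible.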

