CANDU Reactor Physics Analysis Methods and Computer Codes

2021 ◽  
pp. 113-131
Author(s):  
Wei Shen ◽  
Benjamin Rouben

Reactor physics aims to understand accurately the reactivity and the distribution of all the reaction rates (most importantly of the power), and their rate of change in time, for any reactor configuration. To do this, the multiplication factor (or, equivalently, reactivity) and the neutron-flux distribution under various operating conditions and at different times need to be calculated repeatedly. Most of the other parameters of interest (such as neutron reaction rates, power, heat deposition, etc.) are derived from them. They are governed by the geometry, the material composition and the nuclear data (i.e., the neutron cross sections, their energy dependence, the energy spectra and the angular distributions of secondary particles, etc.). For radiation-shielding calculations, additional photon interactions and coupled neutron-photon interaction data are required.
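The relationship between the multiplication factor and reactivity mentioned above is a one-line formula; a minimal sketch (the `mk` unit is the milli-k commonly used in CANDU practice):

```python
def reactivity(k_eff: float) -> float:
    """Reactivity from the effective multiplication factor: rho = (k - 1) / k."""
    return (k_eff - 1.0) / k_eff

def to_mk(rho: float) -> float:
    """Express reactivity in milli-k (mk), i.e. rho * 1000."""
    return rho * 1000.0

# A slightly supercritical configuration:
k_eff = 1.002
rho = reactivity(k_eff)        # ~0.001996, i.e. about +2 mk
```

A critical reactor (k = 1) has zero reactivity by construction, which is why the two quantities are used interchangeably in the text.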

2020 ◽  
Vol 239 ◽  
pp. 19001
Author(s):  
Tim Ware ◽  
David Hanlon ◽  
Glynn Hosking ◽  
Ray Perry ◽  
Simon Richards

The JEFF-3.3 and ENDF/B-VIII.0 evaluated nuclear data libraries were released in December 2017 and February 2018, respectively. Both evaluations represent a comprehensive update to their predecessor evaluations. The ANSWERS Software Service produces the MONK® and MCBEND Monte Carlo codes, and the WIMS deterministic code, for nuclear criticality, shielding and reactor physics applications. MONK and MCBEND can utilise continuous-energy nuclear data provided by the BINGO nuclear data library, and MONK and WIMS can utilise broad energy-group data (172-group XMAS scheme) via the WIMS nuclear data library. To produce the BINGO library, the BINGO Pre-Processor code is used to process ENDF-6 format evaluations. This utilises the RECONR-BROADR-PURR sequence of NJOY2016 to reconstruct and Doppler broaden the free-gas neutron cross sections, together with bespoke routines to generate cumulative distributions for the S(α,β) tabulations and equi-probable bins or probability functions for the secondary angle and energy data. To produce the WIMS library, NJOY2016 is again used to reconstruct and Doppler broaden the cross sections. The THERMR module is used to process the thermal scattering data. Preparation of data for system-dependent resonance shielding of some nuclides is also performed. GROUPR is then used to produce the group-averaged data before all the data are transformed into the specific WIMS library format. The MONK validation includes analyses based on around 800 configurations for a range of fuel and moderator types. The WIMS validation includes analyses of zero-energy critical and sub-critical, commissioning, operational and post-irradiation experiments for a range of fuel and moderator types. This paper presents and discusses the results of MONK and WIMS validation benchmark calculations using the JEFF-3.3 and ENDF/B-VIII.0 based BINGO and WIMS nuclear data libraries.
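The group-averaging step performed by GROUPR collapses pointwise cross sections with a weighting flux. A minimal numerical sketch of that flux-weighted collapse, using illustrative functional shapes rather than evaluated data:

```python
import numpy as np

# Pointwise cross section sigma(E) and weighting flux phi(E) on a fine grid
# (illustrative shapes only, not evaluated nuclear data).
E = np.logspace(-5, 7, 20000)           # energy grid, eV
sigma = 10.0 / np.sqrt(E) + 2.0         # 1/v capture plus a constant term (barns)
phi = 1.0 / E                           # idealized 1/E slowing-down weight

def collapse(E, sigma, phi, e_lo, e_hi):
    """Flux-weighted group average: sigma_g = int(sigma*phi dE) / int(phi dE)."""
    m = (E >= e_lo) & (E <= e_hi)
    num = np.trapz(sigma[m] * phi[m], E[m])
    den = np.trapz(phi[m], E[m])
    return num / den

# One epithermal group of the kind found in the 172-group XMAS structure:
sigma_g = collapse(E, sigma, phi, 0.625, 1.0e4)
```

With these shapes the analytic group average is about 4.6 barns; the grid-based quadrature reproduces it closely, and a constant cross section collapses to itself regardless of the weight.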


2010 ◽  
Vol 2 ◽  
pp. 12001 ◽  
Author(s):  
J.N. Wilson ◽  
S. Siem ◽  
S.J. Rose ◽  
A. Georgen ◽  
F. Gunsing ◽  
...  

2018 ◽  
Vol 170 ◽  
pp. 04009
Author(s):  
Benoit Geslot ◽  
Adrien Gruel ◽  
Stéphane Bréaud ◽  
Pierre Leconte ◽  
Patrick Blaise

Pile oscillator techniques are powerful methods to measure the small reactivity worth of isotopes of interest for nuclear data improvement. This kind of experiment has long been implemented in the MINERVE experimental reactor, operated by CEA Cadarache. A hybrid technique, mixing reactivity worth estimation and measurement of small flux changes around test samples, is presented here. It was made possible by the development of high-sensitivity miniature fission chambers introduced next to the irradiation channel. A test campaign, called MAESTRO-SL, took place in 2015. Its objective was to assess the feasibility of the hybrid method and investigate the possibility of separating mixed neutron effects, such as fission/capture or scattering/capture. Experimental results are presented and discussed in this paper, which focuses on comparing two measurement setups: one using a power control system (closed loop) and another in which the power is free to drift (open loop). First, it is demonstrated that the open loop is equivalent to the closed loop; uncertainty management and the reproducibility of the methods are discussed. Second, the results show that measuring the flux depression around oscillated samples provides valuable information regarding partial neutron cross sections. The technique is found to be very sensitive to the capture cross section at the expense of scattering, making it very useful for measuring small capture effects of highly scattering samples.
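The signal processing behind such oscillator measurements can be pictured as first-harmonic (lock-in) detection: the sample is moved periodically, and the modulation amplitude recovered at the oscillation frequency is proportional to the sample worth in the linear regime. A minimal sketch with entirely synthetic numbers (the amplitude, noise level and frequencies are assumptions, not values from the MAESTRO-SL campaign):

```python
import numpy as np

# Hypothetical oscillator signal: detector current with a small periodic
# modulation at the sample oscillation frequency f0, plus noise.
rng = np.random.default_rng(0)
f0, fs, T = 0.1, 50.0, 600.0                 # modulation (Hz), sampling (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
amplitude = 2.0e-3                           # assumed fractional flux modulation
signal = (1.0 + amplitude * np.sin(2*np.pi*f0*t)
          + 1.0e-3 * rng.standard_normal(t.size))

# Lock-in detection: project onto in-phase and quadrature references at f0.
i_comp = 2.0 * np.mean(signal * np.sin(2*np.pi*f0*t))
q_comp = 2.0 * np.mean(signal * np.cos(2*np.pi*f0*t))
recovered = np.hypot(i_comp, q_comp)         # estimate of the modulation amplitude
```

Averaging over many whole periods suppresses both the DC level and the broadband noise, which is why small worths remain measurable.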


Author(s):  
Mancang Li ◽  
Kan Wang ◽  
Dong Yao

The general equivalence theory (GET) and the superhomogenization method (SPH) are widely used for equivalence in the standard two-step reactor physics calculation. GET has performed well in light water reactor calculations via nodal reactor analysis methods. The SPH method has recently been revisited to satisfy the need for accurate pin-by-pin core calculations. However, both of these classical methods have their limitations. The super equivalence method (SPE) is proposed in this paper as an attempt to preserve the surface current, the reaction rates and the reactivity. It enhances the good properties of the SPH method through reaction-rate-based normalization. The concept of pin discontinuity factors is utilized to preserve the surface current, which is the basic idea of the GET technique. However, the pin discontinuity factors are merged into the homogenized cross sections and diffusion coefficients, so no additional homogenization parameters are needed in the subsequent reactor core calculation. The eigenvalue preservation is performed after the reaction rate and surface current have been preserved, resulting in reduced errors in reactivity. The SPE has been implemented into the Monte Carlo based homogenization code MCMC, part of the RMC Program under development at Tsinghua University. The C5G7 benchmark problem has been used to test the SPE. The results show that the SPE method is not only suitable for equivalence in Monte Carlo based homogenization but also provides improved accuracy compared to the traditional GET or SPH method.
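All of these equivalence schemes start from the same reaction-rate preservation that defines flux-volume-weighted homogenization. A minimal sketch with made-up two-region numbers (fuel and moderator values are illustrative, not from the C5G7 benchmark):

```python
# Two sub-regions of a pin cell: illustrative absorption cross sections,
# reference (heterogeneous) fluxes, and volumes.
sigma = [0.60, 0.05]      # macroscopic absorption XS (cm^-1)
phi   = [0.8, 1.2]        # reference fluxes from a fine transport solution
vol   = [1.0, 3.0]        # sub-region volumes (cm^3)

# Reference reaction rate over the whole cell:
rr_het = sum(s * f * v for s, f, v in zip(sigma, phi, vol))

# Flux-volume-weighted homogenization:
phi_bar = sum(f * v for f, v in zip(phi, vol)) / sum(vol)
sigma_hom = rr_het / (phi_bar * sum(vol))

# By construction the homogenized reaction rate equals the reference one:
rr_hom = sigma_hom * phi_bar * sum(vol)
```

SPH and SPE go further: they iterate correction factors so that the *homogenized diffusion solution*, not just the weighting flux above, reproduces these reference reaction rates.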


2009 ◽  
Vol 26 (3) ◽  
pp. 250-254 ◽  
Author(s):  
A. Mengoni ◽  
M. Mosconi ◽  
K. Fujii ◽  
F. Käppeler ◽  

The neutron-capture cross sections of 186,187Os have recently been measured at the CERN neutron time-of-flight facility n_TOF for an improved evaluation of the Re/Os cosmo-chronometer. This experimental information was complemented by nuclear model calculations to obtain the proper astrophysical reaction rates at s-process temperatures. The calculated results and their implications for the determination of the time duration of nucleosynthesis during galactic chemical evolution are discussed.
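The astrophysical reaction rates mentioned here are conventionally expressed through the Maxwellian-averaged cross section (MACS). A minimal numerical sketch of that average, checked against the exact result for a constant cross section:

```python
import numpy as np

def macs(sigma_of_E, kT):
    """Maxwellian-averaged cross section:
       MACS = (2/sqrt(pi)) * int sigma(E) * E * exp(-E/kT) dE / (kT)^2,
    with E and kT in the same (arbitrary) energy units."""
    E = np.linspace(1e-6, 40.0 * kT, 400_000)
    weight = E * np.exp(-E / kT)
    return (2.0 / np.sqrt(np.pi)) * np.trapz(sigma_of_E(E) * weight, E) / kT**2

# For a constant cross section the integral is analytic:
# MACS = (2/sqrt(pi)) * sigma, since int E exp(-E/kT) dE = (kT)^2.
kT = 30.0   # keV, a typical s-process thermal energy
macs_const = macs(lambda E: np.ones_like(E), kT)   # ~1.128
```

Stellar enhancement and model corrections of the kind described in the abstract are then applied on top of such laboratory averages.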


2021 ◽  
Vol 247 ◽  
pp. 09007
Author(s):  
Isabelle Duhamel ◽  
Nicolas Leclaire ◽  
Luiz Leal ◽  
Atsushi Kimura ◽  
Shoji Nakamura

Available nuclear data for molybdenum included in the nuclear data libraries are not of sufficient quality for reactor physics or criticality safety applications; information about uncertainties and covariances is either missing or leaves much to be desired. Therefore, IRSN and JAEA performed experimental measurements on molybdenum at the J-PARC (Japan Proton Accelerator Research Complex) facility in Japan. The aim was to measure the capture cross section and transmission of natural molybdenum at the ANNRI (Accurate Neutron-Nucleus Reaction measurement Instrument) in the MLF (Materials and Life Science Experimental Facility) of J-PARC. The measurements were performed on metallic natural molybdenum samples of various thicknesses. A NaI detector, placed at a flight-path length of about 28 m, was used for the capture measurements, and a Li-glass detector (flight-path length of about 28.7 m) for the transmission measurements. Following the data reduction process, the measured data are being analyzed and evaluated to produce more accurate cross sections and associated uncertainties.
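Transmission measurements of this kind determine the total cross section directly through the attenuation law T = exp(−n·σ_tot), with n the areal density of the sample. A minimal sketch with hypothetical numbers (the areal density and transmission value below are illustrative, not measured J-PARC data):

```python
import math

def sigma_tot_from_transmission(T: float, n_areal: float) -> float:
    """Total cross section (barns) from a measured transmission
    T = exp(-n * sigma), where n_areal is in atoms/barn."""
    return -math.log(T) / n_areal

# Hypothetical numbers for a natural Mo sample:
n_areal = 0.02     # atoms/barn (assumed sample areal density)
T_meas = 0.85      # assumed measured transmission at some energy
sigma = sigma_tot_from_transmission(T_meas, n_areal)   # ~8.13 barns
```

Using several sample thicknesses, as in the experiment, lets the analysis check this exponential law and control systematic effects such as self-shielding.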


2020 ◽  
Vol 239 ◽  
pp. 01036
Author(s):  
WANG Zhaohui ◽  
REN Jie ◽  
WU Hongyi ◽  
QIAN Jing ◽  
HUANG Hanxiong ◽  
...  

In nuclear reactors, inelastic neutron scattering is a significant energy-loss mechanism with a strong impact on the design of reactor cores and radiation shielding. Iron is an important structural material in reactors. However, the existing nuclear data for iron show obvious discrepancies in the inelastic scattering cross sections and the related gamma-production cross sections. Precise measurements are therefore urgently needed to meet the demands of designing new nuclear reactors (fast reactors), Accelerator Driven Subcritical Systems (ADS), and other nuclear apparatus. In this paper, we report a new measurement system comprising an array of HPGe detectors together with its electronics and data acquisition system. Experiments have been carried out at three neutron facilities.
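Such (n,n′γ) measurements only see a level once the neutron energy exceeds the kinematic threshold for exciting it. A minimal sketch of that threshold formula, using the well-known first excited state of 56Fe as the worked example:

```python
def inelastic_threshold(E_level_keV: float, A: int) -> float:
    """Lab-frame neutron energy threshold for exciting a level at E_level
    in a target of mass number A (non-relativistic kinematics):
       E_th = E_level * (A + 1) / A."""
    return E_level_keV * (A + 1) / A

# First excited state of 56Fe lies at 846.8 keV, so the inelastic
# threshold for the dominant iron isotope is slightly higher:
E_th = inelastic_threshold(846.8, 56)   # ~861.9 keV
```

Above threshold, the 846.8 keV gamma line observed in the HPGe array is the standard signature used to extract the gamma-production cross section.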


2020 ◽  
Vol 239 ◽  
pp. 11007
Author(s):  
Aloys Nizigama ◽  
Olivier Bouland ◽  
Pierre Tamagno

The traditional methodology of nuclear data evaluation is showing its limitations in significantly reducing the uncertainties in neutron cross sections below their current level. This suggests that a new approach should be considered. This work aims at establishing that a major qualitative improvement is possible by changing the reference framework historically used for evaluating nuclear model data. The central idea is to move from the restrictive framework of the incident neutron and target nucleus to the more general framework of the excited compound system. Such a change, which implies the simultaneous modeling of all the reactions leading to the same compound system, opens up the possibility of direct comparisons between nuclear model parameters, whether those are derived for reactor physics applications, astrophysics or basic nuclear spectroscopy studies. This would have the double advantage of bringing together evaluation activities performed separately, and of pooling experimental databases and basic theoretical nuclear parameter files. A consistent multichannel modeling methodology using the TORA module of the CONRAD code is demonstrated through the evaluation of differential and angle-integrated neutron cross sections of 16O, by fitting simultaneously incident-neutron direct kinematic reactions and incident-alpha inverse kinematic reactions without converting the alpha data into the neutron laboratory system. The modeling is performed within the Reich-Moore formalism, with a unique set of fitted resonance parameters related to the 17O* compound system.
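What makes the compound-system framework natural is that different entrance channels populate the same excitation energies of the compound nucleus. A minimal sketch of that bookkeeping for n + 16O → 17O* (the separation energy below is an assumed literature value, quoted here only for illustration):

```python
def excitation_energy(E_n_MeV: float, S_n_MeV: float, A_target: int) -> float:
    """Excitation energy of the compound system formed by a neutron
    incident on a target of mass number A (non-relativistic):
       E* = S_n + E_n * A / (A + 1),
    where S_n is the neutron separation energy of the compound nucleus."""
    return S_n_MeV + E_n_MeV * A_target / (A_target + 1)

# n + 16O -> 17O*, with S_n(17O) ~ 4.143 MeV (assumed value):
E_star = excitation_energy(1.0, 4.143, 16)   # ~5.084 MeV for a 1 MeV neutron
```

Resonances seen in the alpha + 13C entrance channel map onto the same E* grid of 17O*, which is why a single Reich-Moore parameter set can fit both data types without converting the alpha data into the neutron laboratory frame.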


2020 ◽  
Vol 6 (3) ◽  
Author(s):  
M. D. Tucker ◽  
D. R. Novog

Within emerging best-estimate-plus-uncertainty (BEPU) approaches, code output uncertainties can be inferred from the propagation of fundamental or microscopic uncertainties. This paper examines the propagation of fundamental nuclear data uncertainties through the entire analysis framework to predict macroscopic reactor physics phenomena, which can be measured in Canada Deuterium Uranium (CANDU) reactors. In this work, 151 perturbed multigroup cross section libraries, each based on a set of perturbed microscopic nuclear data, were generated. Subsequently, these data were processed into few-group cross sections and used to generate full-core diffusion models in PARCS. The impact of these nuclear data perturbations leads to changes in core reactivity, for a fixed set of fuel compositions, with a standard deviation of 4.5 mk. The impact of online fueling operations was simulated using a series of fueling rules, which attempted to mimic operator actions during CANDU operations, such as studying the assembly powers and selecting fueling sites that would minimize the deviation in power from some desirable reference condition, or increasing or decreasing the fueling frequency to manage reactivity. An important feature of this analysis was to perform long transients (1–3 years) starting with each one of the 151 perturbed full-core models. It was found that the operational feedback reduced the standard deviation in core reactivity by 99%, from 0.0045 to 2.8 × 10−5. Overall, the conclusions demonstrate that while microscopic nuclear data uncertainties may give rise to large macroscopic variability during simple propagation, when important macro-level feedbacks are considered the variability is significantly reduced.
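The statistical post-processing in such a propagation study is simple: each perturbed library yields one core result, and the ensemble spread is the propagated uncertainty. A minimal sketch with synthetic numbers chosen to mirror the magnitudes quoted above (the distributions are illustrative, not the paper's actual results):

```python
import numpy as np

# One reactivity result per perturbed library; spreads chosen to mirror the
# abstract's quoted magnitudes (~4.5 mk before feedback, ~0.028 mk after).
rng = np.random.default_rng(42)
N = 151
rho_fixed_fuel = rng.normal(0.0, 4.5e-3, N)     # fixed fuel composition
rho_with_fueling = rng.normal(0.0, 2.8e-5, N)   # with simulated fueling feedback

sd_before = rho_fixed_fuel.std(ddof=1)          # sample standard deviation
sd_after = rho_with_fueling.std(ddof=1)
reduction = 1.0 - sd_after / sd_before          # fractional reduction, ~0.99
```

With 151 samples the standard deviation itself carries a sampling uncertainty of roughly 6%, which is worth quoting alongside any BEPU result of this kind.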

