Production and use of nuclear parameter covariance data: an overview of challenging cross-cutting scientific issues

2018, Vol 4, pp. 20
Author(s): Massimo Salvatores, Giuseppe Palmiotti

Nuclear data users’ requirements for uncertainty data date back to the 1970s, when several fast reactor projects made extensive use of “statistical data adjustments” to improve data for core and shielding design. However, it was only some 20–30 years later, and in particular since ∼2005, that a major effort began to produce scientifically based covariance data. Most of the work has been done since then, with spectacular achievements and an enhanced understanding both of the uncertainty evaluation process and of the use of the data in V&V. This paper summarizes some key developments and still-open challenges.
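The “statistical data adjustment” referred to above is, in its classical form, a generalized least-squares (GLS) update of nuclear parameters against integral experiments. Below is a minimal illustrative sketch of that update; all matrices and values are invented toy numbers, not evaluated data.

```python
import numpy as np

# Toy generalized least-squares (GLS) nuclear data adjustment.
# Prior parameters p with covariance M (2 illustrative parameters).
p = np.array([1.00, 1.00])
M = np.array([[0.04, 0.01],
              [0.01, 0.09]])

# Sensitivities S of 2 integral responses to the parameters,
# measured integral values E with covariance V, calculated values C.
S = np.array([[0.8, 0.2],
              [0.3, 0.7]])
E = np.array([1.05, 0.98])
V = np.diag([0.02**2, 0.02**2])
C = S @ p  # linear model around the prior

# GLS update: p' = p + M S^T (S M S^T + V)^-1 (E - C)
K = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)
p_adj = p + K @ (E - C)
M_adj = M - K @ S @ M  # reduced (posterior) covariance

print("adjusted parameters:", p_adj)
print("posterior covariance:\n", M_adj)
```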

2021, Vol 247, pp. 09026
Author(s): A.G. Nelson, K.M. Ramey, F. Heidet

The nuclear data evaluation process inherently yields a nuclear data set tuned to produce accurate results for the neutron energy spectra represented by a specific benchmark suite of experiments. When studying reactors whose spectral conditions fall outside of, or are not well represented by, the experimental database used to evaluate the nuclear data, care should be taken regarding the relevance of the nuclear data used: larger biases or uncertainties may be present than in a reactor with well-represented spectra. The motivation of this work is to quantify the differences between recent nuclear data libraries and thereby estimate the expected variability in criticality and power distribution results for sodium-cooled, steel-reflected, metal-fueled fast reactor designs. Specifically, a 3D OpenMC model of a sodium-cooled, steel-reflected, metal-fueled fast reactor similar to the FASTER design, but without a thermal test region, was created. This model was used to compare eigenvalues, reactivity coefficients, and the spatial and energetic effects on flux and power distributions between the ENDF/B-VII.0, ENDF/B-VII.1, ENDF/B-VIII.0, JEFF-3.2, and JEFF-3.3 nuclear data libraries. These investigations revealed that reactivity differences between the above libraries can reach nearly 900 pcm and that fine-group fluxes can vary by up to 18% in individual groups. Results also show a strong variation in the flux and power distributions near the fuel/reflector interface, driven by the high variability of the 56Fe cross sections across the libraries examined. This indicates that core design efforts for a sodium-cooled, steel-reflected, metal-fueled reactor will require the application of relatively large nuclear data uncertainties and/or the development of a representative benchmark-quality experiment.
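For reference, a library-to-library spread such as the ∼900 pcm quoted above is a difference of reactivities ρ = (k − 1)/k expressed in pcm (10⁻⁵). A small illustrative helper, with hypothetical eigenvalues rather than values from the paper:

```python
# Reactivity difference between two libraries in pcm.
# The keff values below are hypothetical, not results from the paper.

def reactivity_pcm(keff: float) -> float:
    """rho = (k - 1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return (keff - 1.0) / keff * 1.0e5

k_lib_a = 1.00250   # illustrative eigenvalue with library A
k_lib_b = 1.01150   # illustrative eigenvalue with library B

delta_rho = reactivity_pcm(k_lib_b) - reactivity_pcm(k_lib_a)
print(f"library-to-library reactivity difference: {delta_rho:.0f} pcm")
```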


Author(s): Antonio Jiménez-Carrascosa, Nuria Garcia Herranz, Jiri Krepel, Marat Margulis, Una Baker, ...

In this work, a detailed assessment of the decay heat power of the commercial-size European Sodium-cooled Fast Reactor (ESFR) at the end of its equilibrium cycle has been performed. The summation method has been used to compute accurate space- and time-dependent decay heat by employing state-of-the-art coupled transport-depletion computational codes and nuclear data. The resulting detailed map provides basic input for subsequent transient calculations of the ESFR. A comprehensive analysis of the decay heat has been carried out, and interdependencies between the decay heat and the parameters characterizing the core state prior to shutdown, such as discharge burnup or fuel material type, have been identified. That analysis has served as a basis for developing analytic functions that reconstruct the space-dependent decay heat power of the ESFR for cooling times within the first day after shutdown.
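The summation method mentioned above computes decay heat as a sum over nuclide inventories, P(t) = Σᵢ λᵢ Nᵢ(t) Ēᵢ. The following is a toy sketch under strong simplifications (three invented nuclides, no decay chains); a real calculation sums thousands of fission products from a depletion solution and an evaluated decay data file.

```python
import numpy as np

# Toy summation-method decay heat:
#   P(t) = sum_i lambda_i * N_i(t) * E_i
# The three "nuclides" below are invented placeholders.

half_lives_s = np.array([30.0, 3.6e3, 8.6e4])   # hypothetical half-lives [s]
N0 = np.array([1.0e18, 5.0e17, 2.0e17])         # inventories at shutdown
E_per_decay_MeV = np.array([1.2, 0.8, 0.5])     # beta + gamma per decay [MeV]

lam = np.log(2.0) / half_lives_s                # decay constants [1/s]
MEV_TO_J = 1.602176634e-13

def decay_heat_W(t_s: float) -> float:
    """Decay heat at t seconds after shutdown (no decay chains)."""
    N_t = N0 * np.exp(-lam * t_s)
    return float(np.sum(lam * N_t * E_per_decay_MeV) * MEV_TO_J)

for t in (0.0, 60.0, 3600.0, 86400.0):
    print(f"t = {t:8.0f} s : P = {decay_heat_W(t):.3e} W")
```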


2011, Vol 59 (2(3)), pp. 1191-1194
Author(s): D. Rochman, A. J. Koning, D. F. Dacruz, S. C. van der Marck

2018, Vol 170, pp. 04009
Author(s): Benoit Geslot, Adrien Gruel, Stéphane Bréaud, Pierre Leconte, Patrick Blaise

Pile oscillator techniques are powerful methods for measuring the small reactivity worths of isotopes of interest for nuclear data improvement. Experiments of this kind have long been performed in the Minerve experimental reactor, operated by CEA Cadarache. A hybrid technique, combining reactivity worth estimation with the measurement of small flux changes around test samples, is presented here. It was made possible by the development of high-sensitivity miniature fission chambers placed next to the irradiation channel. A test campaign, called MAESTRO-SL, took place in 2015. Its objective was to assess the feasibility of the hybrid method and to investigate the possibility of separating mixed neutron effects, such as fission/capture or scattering/capture. Experimental results are presented and discussed in this paper, with a focus on comparing two measurement setups: one using a power control system (closed loop) and one in which the power is free to drift (open loop). First, it is demonstrated that the open-loop measurement is equivalent to the closed-loop one; uncertainty management and the reproducibility of the methods are discussed. Second, the results show that measuring the flux depression around oscillated samples provides valuable information on partial neutron cross sections. The technique is found to be very sensitive to the capture cross section at the expense of scattering, making it well suited to measuring small capture effects in highly scattering samples.
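As a rough illustration of the open-loop configuration, a small sample worth ρ ≪ β produces a slow power drift whose asymptotic period follows the inhour relation. The sketch below uses a one-delayed-group approximation with generic, illustrative kinetics parameters, not MAESTRO-SL values:

```python
# One-delayed-group inhour relation for a small reactivity insertion,
# neglecting the prompt-generation term: rho ~ beta / (1 + lambda * T).
# All values are generic illustrations, not measured data.

beta = 0.0072        # effective delayed neutron fraction (illustrative)
lam = 0.0767         # one-group precursor decay constant [1/s]
rho = 5.0e-5         # 5 pcm sample worth (hypothetical)

# Asymptotic period produced by the sample in open loop:
T = (beta - rho) / (lam * rho)
print(f"asymptotic period: {T:.0f} s")

# Conversely, a measured drift period T gives back the sample worth:
rho_from_T = beta / (1.0 + lam * T)
print(f"recovered worth: {rho_from_T * 1e5:.1f} pcm")
```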


2020, Vol 128, pp. 103450
Author(s): Filip Osuský, Štefan Čerba, Jakub Lüley, Branislav Vrban, Ján Haščík, ...

2013, Vol 684, pp. 429-433
Author(s): Hong Li Li, Xiao Huai Chen, Hong Tao Wang

A complete uncertainty evaluation process for end-distance measurement by coordinate measuring machine (CMM) is presented. First, the major sources of uncertainty influencing the measurement result are identified; then a general mathematical model of end-distance measurement is established. The Monte Carlo method (MCM) is then used to obtain the uncertainty of the measured quantity, and the complete results are given, enhancing the value of the CMM. Moreover, in the evaluation example, the uncertainty results obtained from the MCM are compared with those from the GUM method. The comparison indicates that the mathematical model is feasible and that evaluating uncertainty with the MCM is straightforward, efficient, and of practical value.
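A minimal sketch of the MCM-versus-GUM comparison described above, using an assumed end-distance model L = (x₂ − x₁)(1 − αΔT) with invented input uncertainties, not those of the paper:

```python
import numpy as np

# Monte Carlo (MCM) vs first-order GUM uncertainty for a CMM
# end-distance measurement. Model and inputs are illustrative.

rng = np.random.default_rng(42)
N = 200_000

x1 = rng.normal(0.000, 0.0008, N)    # probing at first face [mm]
x2 = rng.normal(100.000, 0.0008, N)  # probing at second face [mm]
dT = rng.uniform(-0.5, 0.5, N)       # temperature offset from 20 C [K]
alpha = 11.5e-6                      # steel expansion coefficient [1/K]

L = (x2 - x1) * (1.0 - alpha * dT)   # temperature-corrected distance [mm]
print(f"MCM:  L = {L.mean():.4f} mm, u(L) = {L.std(ddof=1) * 1000:.2f} um")

# GUM, first order: u^2 = u_x1^2 + u_x2^2 + (L * alpha * u_dT)^2,
# with u_dT = half-width / sqrt(3) for the rectangular distribution.
u_gum = np.sqrt(2 * 0.0008**2 + (100.0 * alpha * 0.5 / np.sqrt(3))**2)
print(f"GUM:  u(L) = {u_gum * 1000:.2f} um")
```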


2014, Vol 118, pp. 535-537
Author(s): J.J. Herrero, R. Ochoa, J.S. Martínez, C.J. Díez, N. García-Herranz, ...

2018, Vol 4, pp. 14
Author(s): James Dyrda, Ian Hill, Luca Fiorito, Oscar Cabellos, Nicolas Soppera

Uncertainty propagation to keff using a Total Monte Carlo sampling process is commonly used to resolve the issues associated with non-linear dependencies and non-Gaussian nuclear parameter distributions. We suggest that, in general, keff sensitivities to nuclear data perturbations are not problematic and remain linear over a large range; the same cannot be said definitively of nuclear data parameters and their impact on final cross sections and distributions. Instead of running hundreds or thousands of neutronics calculations, we therefore investigate the possibility of taking those many cross-section file samples and performing ‘cheap’ sensitivity perturbation calculations. This is efficiently possible with the NEA Nuclear Data Sensitivity Tool (NDaST), and we name this process the half Monte Carlo method (HMM). We demonstrate that this is indeed possible with a test example, JEZEBEL (PMF001) drawn from the ICSBEP handbook, comparing keff values calculated directly with SERPENT to those predicted with NDaST. Furthermore, we show that the normal NDaST benefits are retained: a deeper analysis of the resulting effects in terms of reaction and energy breakdown, without the usual computational burden of Monte Carlo (results within minutes rather than days). Finally, we assess the rationale for using either the full or the half Monte Carlo method by also using the covariance data to perform simple linear ‘sandwich formula’ propagation of uncertainty onto the selected benchmarks. This allows us to draw some broad conclusions about the relative merits of techniques with a full, half, or zero degree of Monte Carlo simulation.
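The ‘sandwich formula’ propagation mentioned above is var(keff) = SᵀCS, with S the vector of keff sensitivities and C the nuclear data covariance matrix. A minimal sketch with invented numbers, not JEZEBEL data:

```python
import numpy as np

# Sandwich-formula uncertainty propagation: var(keff) = S^T C S.
# S and C below are small invented examples (3 energy groups).

S = np.array([0.35, -0.12, 0.08])        # keff sensitivities per group (%/%)
C = np.array([[0.0025, 0.0010, 0.0000],  # relative covariance matrix
              [0.0010, 0.0040, 0.0005],
              [0.0000, 0.0005, 0.0016]])

var_k = S @ C @ S
print(f"u(keff)/keff = {np.sqrt(var_k) * 100:.2f} %")
```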

