Uncertainty propagation for the design study of the PETALE experimental programme in the CROCUS reactor

2020 · Vol. 6 · pp. 9 · Author(s): Axel Laureau, Vincent Lamirand, Dimitri Rochman, Andreas Pautz

The PETALE experimental programme in the CROCUS reactor intends to provide integral measurements to constrain stainless steel nuclear data. This article presents the tools and methodology developed to design and optimize the experiments, together with their operating principle. Two acceleration techniques have been implemented in the Serpent2 code to perform a Total Monte Carlo uncertainty propagation: variance reduction and the correlated sampling technique. Their application to the estimation of the expected reaction rates in the dosimeters is also discussed, together with the estimation of the impact of the nuisance parameters of the aluminium used in the experiment structures.
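The Total Monte Carlo principle invoked here can be sketched with a toy stand-in for the transport calculation. The one-group k-infinity formula and all numerical values below are illustrative assumptions, not PETALE or Serpent2 quantities:

```python
import random
import statistics

def k_inf(nu, sigma_f, sigma_c):
    """Toy one-group infinite-medium multiplication factor."""
    return nu * sigma_f / (sigma_f + sigma_c)

def total_monte_carlo(n_files, rel_unc=0.02, seed=0):
    """TMC-style propagation: each 'random nuclear data file' is a
    perturbed (sigma_f, sigma_c) pair; the spread of k_inf over all
    files is the nuclear-data uncertainty on the output."""
    rng = random.Random(seed)
    nu, sigma_f0, sigma_c0 = 2.43, 0.05, 0.01  # illustrative values
    ks = []
    for _ in range(n_files):
        sigma_f = sigma_f0 * rng.gauss(1.0, rel_unc)
        sigma_c = sigma_c0 * rng.gauss(1.0, rel_unc)
        ks.append(k_inf(nu, sigma_f, sigma_c))
    return statistics.mean(ks), statistics.stdev(ks)

mean_k, sigma_k = total_monte_carlo(1000)
```

In a real TMC run each sample is a full random evaluation of the nuclear data file and each evaluation of the model is a complete transport calculation, which is exactly why the acceleration techniques discussed in the abstract matter.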

2020 · Vol. 6 · pp. 8 · Author(s): Axel Laureau, Vincent Lamirand, Dimitri Rochman, Andreas Pautz

A correlated sampling technique has been implemented to estimate the impact of cross section modifications on neutron transport in Monte Carlo simulations within a single calculation. This implementation has been coupled to a Total Monte Carlo approach, which consists of propagating nuclear data uncertainties with random cross section files. The TMC-CS (Total Monte Carlo with Correlated Sampling) approach offers a substantial speed-up of the associated computation time. This methodology is detailed in this paper, together with two application cases that validate and illustrate the gain provided by the technique: the HMI-001 benchmark, a highly enriched uranium/iron metal core reflected by a stainless-steel reflector, and the PETALE experimental programme in the CROCUS zero-power light water reactor.
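The core idea of correlated sampling is to reuse the histories of a single nominal simulation to score a perturbed case, weighting each history by the ratio of perturbed to nominal sampling densities. A minimal sketch on a toy expectation (nothing here comes from the Serpent2 implementation):

```python
import math
import random

def correlated_sampling(n, lam0=1.0, lam1=1.1, seed=1):
    """Score a nominal and a perturbed case from the SAME histories:
    each sample is drawn from the nominal pdf p0 (exponential, rate
    lam0) and reused for the perturbed pdf p1 (rate lam1) via the
    likelihood-ratio weight p1(x)/p0(x)."""
    rng = random.Random(seed)
    f = lambda x: x * x                    # toy 'response' per history
    s0 = s1 = 0.0
    for _ in range(n):
        x = rng.expovariate(lam0)          # one sample, used twice
        w = (lam1 * math.exp(-lam1 * x)) / (lam0 * math.exp(-lam0 * x))
        s0 += f(x)
        s1 += w * f(x)
    return s0 / n, s1 / n

nominal, perturbed = correlated_sampling(200_000)
```

Because both estimates share the same random histories, their difference has much lower variance than two independent runs would give, which is the source of the speed-up claimed for TMC-CS.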


2019 · Vol. 211 · pp. 03003 · Author(s): Vincent Lamirand, Axel Laureau, Dimitri Rochman, Gregory Perret, Adrien Gruel, et al.

The PETALE experimental programme in the CROCUS reactor at EPFL intends to contribute to the validation and improvement of neutron nuclear data in the MeV energy range for stainless steel, particularly with a view to the heavy reflector elements of pressurized water reactors. It mainly consists of several transmission experiments: first, through metallic sheets of nuclear-grade stainless steel interleaved with dosimeter foils, and, successively, through its elemental components of interest, namely iron, nickel, and chromium. The present article describes the study for optimizing the response of the dosimetry experiments to the nuclear data of interest.


2020 · Vol. 239 · pp. 18004 · Author(s): Axel Laureau, Vincent Lamirand, Dimitri Rochman, Andreas Pautz

This article presents the methodology developed to generate and use dosimeter covariances and to estimate nuisance parameters for the PETALE experimental programme. In anticipation of the final experimental results, this work investigates how these experimental correlations enter the Bayesian assimilation of nuclear data. Results show that the assimilation of a given set of dosimeters provides a strong constraint on some of the posterior reaction rate predictions for the other dosimeters. This confirms that, with respect to the assimilation process, the different sets of dosimeters are correlated.
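The effect described, where assimilating one set of dosimeters constrains the posterior prediction of another through their correlations, can be sketched with a generalized least-squares (linear Bayesian) update on two made-up correlated reaction rates; none of the numbers below come from PETALE:

```python
def gls_update_scalar(x, Cx, h, y, var_y):
    """Linear Bayesian (GLS) update for a single measurement y = h.x:
    gain K = Cx h / (h.Cx h + var_y); posterior mean x + K*(y - h.x)
    and posterior covariance (I - K h) Cx."""
    n = len(x)
    Cxh = [sum(Cx[i][j] * h[j] for j in range(n)) for i in range(n)]
    s = sum(h[i] * Cxh[i] for i in range(n)) + var_y
    K = [c / s for c in Cxh]
    resid = y - sum(h[i] * x[i] for i in range(n))
    post_x = [x[i] + K[i] * resid for i in range(n)]
    hCx = [sum(h[i] * Cx[i][j] for i in range(n)) for j in range(n)]
    post_C = [[Cx[i][j] - K[i] * hCx[j] for j in range(n)] for i in range(n)]
    return post_x, post_C

# two correlated dosimeter reaction rates (made-up prior, correlation 0.75);
# measuring the first rate tightens the prediction of the second
prior_x = [1.0, 1.0]
prior_C = [[0.04, 0.03], [0.03, 0.04]]
post_x, post_C = gls_update_scalar(prior_x, prior_C, [1.0, 0.0], 1.1, 1e-4)
```

Even though only the first rate is measured, the posterior variance of the second rate drops well below its prior value, purely through the off-diagonal covariance term.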


2019 · Vol. 211 · pp. 07008 · Author(s): Oscar Cabellos, Luca Fiorito

The aim of this work is to review different Monte Carlo techniques used to propagate nuclear data uncertainties. First, the Monte Carlo technique applied to Uncertainty Quantification studies in safety calculations of large-scale systems is introduced. As an example, the impact of the JEFF-3.3 nuclear data uncertainties of 235U, 238U and 239Pu on the main design parameters of a typical 3-loop Westinghouse PWR unit is demonstrated. Second, the Bayesian Monte Carlo technique for data adjustment is presented. An example of 235U adjustment using criticality and shielding integral benchmarks shows the importance of performing a joint adjustment based on different sets of integral benchmarks.
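The Bayesian Monte Carlo adjustment mentioned above can be sketched as follows. The benchmark value is fictitious, the "model" is a trivial identity, and the weighting scheme w_i = exp(-chi2_i/2) is the standard BMC choice, stated here as an assumption rather than taken from this paper:

```python
import math
import random

def bayesian_mc_adjust(n_samples, seed=2):
    """Bayesian Monte Carlo sketch: sample a cross-section scaling
    factor from its prior, weight each sample by exp(-chi2/2) against
    a fictitious integral benchmark, and form the weighted posterior."""
    rng = random.Random(seed)
    prior_mu, prior_sigma = 1.00, 0.05      # prior on the scaling factor
    benchmark, benchmark_unc = 1.02, 0.01   # fictitious measured value
    samples, weights = [], []
    for _ in range(n_samples):
        p = rng.gauss(prior_mu, prior_sigma)
        predicted = p                        # toy model: prediction equals parameter
        chi2 = ((predicted - benchmark) / benchmark_unc) ** 2
        samples.append(p)
        weights.append(math.exp(-0.5 * chi2))
    wsum = sum(weights)
    post_mean = sum(w * s for w, s in zip(weights, samples)) / wsum
    post_var = sum(w * (s - post_mean) ** 2 for w, s in zip(weights, samples)) / wsum
    return post_mean, math.sqrt(post_var)

post_mean, post_std = bayesian_mc_adjust(100_000)
```

For this Gaussian toy case the weighted posterior reproduces the analytic conjugate result (mean about 1.019, standard deviation about 0.0098), which makes it easy to verify the sampler.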


2013 · Vol. 2013 · pp. 1-12 · Author(s): O. Cabellos

The aim of this work is to present Exercise I-1b, the "pin-cell burn-up benchmark" proposed in the framework of the OECD LWR UAM. Its objective is to address the uncertainty due to the basic nuclear data as well as the impact of processing the nuclear and covariance data in a pin-cell depletion calculation. Four different sensitivity/uncertainty propagation methodologies participate in this benchmark (GRS, NRG, UPM, and SNU&KAERI). The paper describes the main features of the UPM model (a hybrid method) compared with the other methodologies. The requested output provided by UPM is presented and discussed in comparison with the results of the other methodologies.


2012 · Vol. 2012 · pp. 1-6 · Author(s): Ho Jin Park, Hyung Jin Shim, Chang Hyo Kim

In Monte Carlo (MC) burnup analyses, the uncertainty of a tally estimate at a burnup step may be induced from four sources: the statistical uncertainty caused by a finite number of simulations, the nuclear covariance data, uncertainties of number densities, and cross-correlations between the nuclear data and the number densities. In this paper, the uncertainties of kinf, reaction rates, and number densities for a PWR pin-cell benchmark problem are quantified by an uncertainty propagation formulation in the MC burnup calculations. The required sensitivities of the tallied parameters to the microscopic cross sections and the number densities are estimated by the MC differential operator sampling method accompanied by fission source perturbation. The uncertainty propagation analyses are conducted with two nuclear covariance data libraries, ENDF/B-VII.1 and SCALE6.1/COVA, and the numerical results are compared with each other.
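A propagation formulation of this kind ultimately combines sensitivities with a covariance matrix through the standard "sandwich rule", var(R) = S M S^T. The sensitivities and covariances below are made-up illustrations, not the benchmark's actual values:

```python
import math

def sandwich_rule(S, M):
    """var(R) = S M S^T for a scalar response R, with S the vector of
    relative sensitivities and M the relative covariance matrix."""
    n = len(S)
    MS = [sum(M[i][j] * S[j] for j in range(n)) for i in range(n)]
    return sum(S[i] * MS[i] for i in range(n))

# illustrative sensitivities of k_inf to two cross sections, and a
# made-up 2x2 relative covariance (1% and 2% std devs, correlation 0.3)
S = [0.9, -0.4]
M = [[0.01 ** 2, 0.3 * 0.01 * 0.02],
     [0.3 * 0.01 * 0.02, 0.02 ** 2]]
rel_std = math.sqrt(sandwich_rule(S, M))  # relative std dev of the response
```

Note how the opposite signs of the two sensitivities combined with the positive correlation partially cancel, so the propagated uncertainty is smaller than the uncorrelated sum would suggest.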


2018 · Vol. 3 (3) · pp. 261 · Author(s): V.V. Kolesov, A.V. Novichkov, E.E. Voznyakevich, A.M. Terekhova

Only a minority of papers is devoted to studying the impact of nuclear data uncertainties on the nuclide concentrations obtained when solving the fuel burn-up problem for a reactor facility. On the other hand, uncertainties in the underlying reaction rates, neutron fluxes, etc. can lead to considerable distortions of the results obtained, so it is important to be able to assess the impact of such uncertainties on the nuclide concentrations. In this paper we consider the assessment of the impact of nuclear data uncertainties on reactor characteristics as applied to isotope kinetics, which constitutes the well-known linear Cauchy problem. The most exact approach is the statistical one: randomized sampling of the input parameters using different distribution laws. The simplest method for analyzing the sensitivity of a model to perturbed parameters, however, is the following (it has several names in the literature: one-at-a-time sensitivity measures, or the 1% sensitivity method): one input parameter of the problem is varied by a small amount (for example, 1%) while the other parameters are held constant, and the corresponding response of the output parameters is recorded (the variation approach). Our results show that in burn-up calculations the mean square deviations of the nuclide concentrations obtained with the statistical approach coincide with the variations of the nuclide concentrations obtained with the variation approach.
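The one-at-a-time 1% method described above can be sketched generically. The "depletion model" below is a trivial one-nuclide decay expression standing in for the Cauchy-problem solver, with made-up parameter values:

```python
import math

def one_at_a_time(model, params, delta=0.01):
    """One-at-a-time sensitivity: perturb each input by a relative
    amount delta (1% by default) with the others held fixed, and
    record the relative response of the output."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)
        sens[name] = (model(perturbed) - base) / (base * delta)
    return sens

def toy_depletion(p):
    """Stand-in for a depletion solver: nuclide concentration after
    irradiation time t under a one-group reaction rate sigma_phi."""
    return p["N0"] * math.exp(-p["sigma_phi"] * p["t"])

s = one_at_a_time(toy_depletion, {"N0": 1.0, "sigma_phi": 0.5, "t": 2.0})
```

For this model the analytic relative sensitivities are 1 for N0 and -sigma_phi*t = -1 for the other two parameters, so the 1% finite differences land close to those values.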


2018 · Vol. 4 · pp. 15 · Author(s): Henrik Sjöstrand, Nicola Asquith, Petter Helgesson, Dimitri Rochman, Steven van der Marck

Random sampling methods are used for nuclear data (ND) uncertainty propagation, often in combination with Monte Carlo codes (e.g., MCNP). One example is the Total Monte Carlo (TMC) method. The standard way to visualize and interpret ND covariances is the Pearson correlation coefficient, ρ = cov(x, y) / (σx σy), where x and y can be any parameters dependent on ND. The spread in the output, σ, has both an ND component, σND, and a statistical component, σstat. The contribution from σstat decreases the value of ρ, and hence ρ underestimates the impact of the correlation. One way to address this is to minimize σstat by using longer simulation run-times. Alternatively, as proposed here, a so-called fast correlation coefficient is used, ρfast = [cov(x, y) − cov(xstat, ystat)] / (σx,ND σy,ND). In many cases, cov(xstat, ystat) can be assumed to be zero. The paper explores three examples: a synthetic data study, correlations in the NRG High Flux Reactor spectrum, and the correlations between integral criticality experiments. It is concluded that the use of ρ underestimates the correlation. The impact of the use of ρfast is quantified, and the implications of the results are discussed.
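The two estimators can be compared on synthetic data. Here the statistical (tally) variances are assumed known exactly, as they would be from the MC code's reported uncertainties, and cov(xstat, ystat) is taken as zero:

```python
import math
import random
import statistics

def pearson(xs, ys):
    """Standard Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

def fast_correlation(xs, ys, var_x_stat, var_y_stat):
    """'Fast' correlation: subtract the known statistical (tally)
    variance from each spread before normalising, assuming
    cov(x_stat, y_stat) = 0."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    sx_nd = math.sqrt(statistics.variance(xs) - var_x_stat)
    sy_nd = math.sqrt(statistics.variance(ys) - var_y_stat)
    return cov / (sx_nd * sy_nd)

# synthetic TMC-like data: a fully ND-correlated signal plus MC noise,
# so the true ND correlation is 1.0 while plain Pearson sees only ~0.8
rng = random.Random(3)
nd = [rng.gauss(0.0, 1.0) for _ in range(20_000)]
xs = [v + rng.gauss(0.0, 0.5) for v in nd]  # statistical variance 0.25
ys = [v + rng.gauss(0.0, 0.5) for v in nd]
rho = pearson(xs, ys)
rho_fast = fast_correlation(xs, ys, 0.25, 0.25)
```

The plain Pearson estimate is pulled towards zero by the statistical noise, while the fast coefficient recovers the underlying ND correlation of one.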


2013 · Vol. 11 (8) · Author(s): Ivan Dimov, Raya Georgieva, Tzvetan Ostromsky, Zahari Zlatev

The influence of emission levels on the concentrations of four important air pollutants (ammonia, ozone, ammonium sulphate, and ammonium nitrate) over three European cities with different geographical locations (Milan, Manchester, and Edinburgh) is considered. A sensitivity analysis of the output of the Unified Danish Eulerian Model with respect to emission levels is provided. The Sobol' variance-based approach for global sensitivity analysis has been applied to compute the corresponding sensitivity measures. To measure the influence of the variation of emission levels on the pollutant concentrations, the Sobol' global sensitivity indices are estimated using efficient techniques for small sensitivity indices, in order to avoid loss of accuracy. Theoretical studies, as well as practical computations, are performed to analyze the efficiency of various variance reduction techniques for computing small indices. The importance of accurate estimation of small sensitivity indices is analyzed. It is shown that the correlated sampling technique for small sensitivity indices gives reliable results for the full set of indices; its superior efficiency is studied in detail.
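First-order Sobol' indices are commonly estimated with a pick-freeze scheme, itself a form of correlated sampling since the paired model runs share input components. The additive test function below is a generic illustration, not the Unified Danish Eulerian Model:

```python
import random

def sobol_first_order(model, n_inputs, n, seed=4):
    """Pick-freeze estimator of first-order Sobol' indices S_i.
    A and B are independent input samples; for input i, a hybrid row
    takes component i from A and the rest from B, so comparing the
    model outputs isolates the variance share of input i."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    yA = [model(row) for row in A]
    yB = [model(row) for row in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / (n - 1)
    indices = []
    for i in range(n_inputs):
        y_hyb = [model([a[j] if j == i else b[j] for j in range(n_inputs)])
                 for a, b in zip(A, B)]
        # Saltelli-style estimate of V_i = Var(E[Y | X_i])
        vi = sum(ya * (yh - yb) for ya, yh, yb in zip(yA, y_hyb, yB)) / n
        indices.append(vi / var)
    return indices

# additive toy model Y = X1 + 2*X2 with X_j ~ U(0,1):
# analytic first-order indices are S1 = 1/5 and S2 = 4/5
S = sobol_first_order(lambda x: x[0] + 2.0 * x[1], 2, 50_000)
```

Small indices are exactly where this plain estimator struggles, because the difference y_hyb - yB is dominated by sampling noise; that is the regime the variance reduction techniques in the paper target.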


2020 · Vol. 239 · pp. 22003 · Author(s): Alexander Vasiliev, Marco Pecchia, Dimitri Rochman, Hakim Ferroukhi, Erwin Alhassan

In this work, an overview of the relevance of nuclear data (ND) uncertainties to Light Water Reactor (LWR) neutron dosimetry is presented. The paper summarizes the results of several studies carried out at the LRT laboratory of the Paul Scherrer Institute over the past decade. The studies used the baseline LRT calculation methodology for dosimetry assessments, which involves a neutron source distribution representation obtained from validated CASMO/SIMULATE core follow calculation models, followed by neutron transport simulations with the MCNP® software. The methodology was validated using as reference data the results of numerous measurement programs performed at Swiss NPPs. Namely, the following experimental programs are considered in the given overview: PWR "gradient probes" and post-irradiation examination (PIE) of BWR fast neutron fluence (FNF) monitors. For both cases, assessments of the nuclear data related uncertainties were performed. Where appropriate, a cross-verification of the deterministic and stochastic uncertainty propagation techniques is provided. Furthermore, it is shown which particular neutron-induced reactions contribute dominantly to the overall ND-related uncertainties. The presented results should help in assessing the overall impact of the various nuclear data uncertainties with respect to dosimetry applications and provide relevant feedback to nuclear data evaluators.

