Assessment and Propagation of the 237Np Nuclear Data Uncertainties in Integral Calculations by Monte Carlo Techniques

2008 ◽  
Vol 160 (1) ◽  
pp. 108-122 ◽  
Author(s):  
Gilles Noguere ◽  
David Bernard ◽  
Cyrille De Saint Jean ◽  
Bertrand Iooss ◽  
Frank Gunsing ◽  
...


2019 ◽
Vol 211 ◽  
pp. 07008 ◽  
Author(s):  
Oscar Cabellos ◽  
Luca Fiorito

The aim of this work is to review different Monte Carlo techniques used to propagate nuclear data uncertainties. First, we introduce the Monte Carlo technique applied to uncertainty quantification studies in safety calculations of large-scale systems. As an example, the impact of the JEFF-3.3 nuclear data uncertainties of 235U, 238U and 239Pu is demonstrated for the main design parameters of a typical three-loop Westinghouse PWR unit. Second, the Bayesian Monte Carlo technique for data adjustment is presented. An example of 235U adjustment using criticality and shielding integral benchmarks shows the importance of performing a joint adjustment based on different sets of integral benchmarks.
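
A minimal sketch of the Bayesian Monte Carlo adjustment step described above, assuming a toy one-parameter model and illustrative benchmark numbers (none of these values come from the paper): prior samples of a nuclear data parameter are weighted by exp(-chi^2/2) against an integral benchmark, and the weighted sample gives the adjusted parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: a benchmark k_eff responds linearly to one nuclear data
# parameter p with a Gaussian prior. All numbers are illustrative.
K = 100_000
p_prior = rng.normal(1.0, 0.05, size=K)      # prior samples of the parameter
k_calc = 0.98 + 0.02 * p_prior               # calculated benchmark values C_k
E, sigma_E = 1.0000, 0.0020                  # measured benchmark and uncertainty

chi2 = ((k_calc - E) / sigma_E) ** 2
w = np.exp(-0.5 * chi2)                      # BMC weight of each sample
w /= w.sum()

p_mean = np.sum(w * p_prior)                 # adjusted (posterior) parameter
p_std = np.sqrt(np.sum(w * (p_prior - p_mean) ** 2))
```

A joint adjustment over several benchmarks simply sums the chi^2 contributions before weighting, which is why combining criticality and shielding benchmarks constrains the adjustment more than either set alone.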


2005 ◽  
Vol 12 (3) ◽  
pp. 032703 ◽  
Author(s):  
S. Kurebayashi ◽  
J. A. Frenje ◽  
F. H. Séguin ◽  
J. R. Rygg ◽  
C. K. Li ◽  
...  

Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
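
As a generic illustration of the first of these techniques, here is a minimal random-walk Metropolis sampler of the kind used for posteriors of linearized DSGE models; the log posterior passed in is a placeholder (a standard normal), not an actual DSGE likelihood.

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, cov, n_draws, rng):
    """Random-walk Metropolis: propose theta' = theta + N(0, cov),
    accept with probability min(1, posterior ratio)."""
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    L = np.linalg.cholesky(cov)
    draws = np.empty((n_draws, theta.size))
    for i in range(n_draws):
        prop = theta + L @ rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# Placeholder posterior: a standard normal in two dimensions.
draws = random_walk_metropolis(lambda th: -0.5 * th @ th,
                               np.zeros(2), 0.5 * np.eye(2),
                               5000, np.random.default_rng(0))
```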


2014 ◽  
Vol 6 (1) ◽  
pp. 1006-1015
Author(s):  
Negin Shagholi ◽  
Hassan Ali ◽  
Mahdi Sadeghi ◽  
Arjang Shahvar ◽  
Hoda Darestani ◽  
...  

Medical linear accelerators, besides the clinically used high-energy electron and photon beams, produce secondary particles such as neutrons, which escalate the delivered dose. In this study the neutron dose at a 10 and 18 MV Elekta linac was obtained by using TLD600 and TLD700 dosimeters as well as Monte Carlo simulation. For neutron dose assessment in a 20 × 20 cm2 field, the TLDs were first calibrated: gamma calibration was performed with the 10 and 18 MV linac beams, and neutron calibration with a 241Am-Be neutron source. For the simulation, the MCNPX code was used, and the calculated neutron dose equivalent was compared with the measured data. The neutron dose equivalent at 18 MV was measured with TLDs on the phantom surface and at depths of 1, 2, 3.3, 4, 5 and 6 cm. The neutron dose at depths of less than 3.3 cm was zero and peaked at a depth of 4 cm (44.39 mSv Gy-1), whereas the calculation gave a maximum of 2.32 mSv Gy-1 at the same depth. The neutron dose at 10 MV was measured with TLDs on the phantom surface and at depths of 1, 2, 2.5, 3.3, 4 and 5 cm. No photoneutron dose was observed at depths of less than 3.3 cm, and the maximum, at 4 cm, was 5.44 mSv Gy-1, whereas the calculated data showed a maximum of 0.077 mSv Gy-1 at the same depth. The comparison between the measured photoneutron dose and the calculated data along the beam axis at different depths shows that the measured values were much higher than the calculated ones, so it seems that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry on the linac central axis due to the high photon flux, whereas MCNPX Monte Carlo techniques remain a valuable tool for photoneutron dose studies.
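
For readers unfamiliar with the pair technique used here, the sketch below shows the usual arithmetic (parameter names and units are hypothetical, not taken from the study): TLD600 (6LiF) responds to photons and neutrons, TLD700 (7LiF) essentially to photons only, so their difference isolates the neutron signal.

```python
def neutron_dose_equivalent(m600, m700, k_n):
    """TLD600/TLD700 pair technique, in outline.

    m600, m700 : readings (e.g. nC) of the two chips at the same position,
                 normalized to equal photon sensitivity via the gamma calibration
    k_n        : neutron calibration factor (mSv per unit reading),
                 obtained here with a 241Am-Be source
    """
    neutron_signal = m600 - m700     # photon component cancels in the difference
    return neutron_signal * k_n      # neutron dose equivalent, mSv
```

The abstract's conclusion follows directly from this arithmetic: in a high photon flux, the difference of two large, nearly equal photon signals is dominated by their noise.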


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 662
Author(s):  
Mateu Sbert ◽  
Jordi Poch ◽  
Shuning Chen ◽  
Víctor Elvira

In this paper, we present order-invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or, equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and have recently found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered; that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, show their relationship with the increasing concave and increasing convex orders, and observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy, and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all the means evolve in the same direction.
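
A small numerical check of the order-invariance statement, under the assumption that the weights are taken on an increasing series (the arrays below are illustrative):

```python
import numpy as np

def qa_mean(w, x, f, f_inv):
    """Weighted quasi-arithmetic (Kolmogorov-Nagumo) mean f^{-1}(sum_i w_i f(x_i))."""
    return f_inv(np.dot(w, f(np.asarray(x, dtype=float))))

x = np.array([1.0, 2.0, 4.0, 8.0])       # monotonic (increasing) series
w1 = np.array([0.4, 0.3, 0.2, 0.1])
w2 = np.array([0.1, 0.2, 0.3, 0.4])      # w2 dominates w1 in first stochastic order
assert np.all(np.cumsum(w2) <= np.cumsum(w1) + 1e-12)

arith = lambda w: qa_mean(w, x, lambda t: t, lambda t: t)
harm = lambda w: qa_mean(w, x, lambda t: 1.0 / t, lambda t: 1.0 / t)
geom = lambda w: qa_mean(w, x, np.log, np.exp)

# All three means move in the same direction when w1 is replaced by w2.
assert arith(w2) > arith(w1) and harm(w2) > harm(w1) and geom(w2) > geom(w1)
```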


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 580
Author(s):  
Pavel Shcherbakov ◽  
Mingyue Ding ◽  
Ming Yuchi

Various Monte Carlo techniques for random point generation over sets of interest are widely used in many areas of computational mathematics, optimization, data processing, etc. Whereas for regularly shaped sets such sampling is immediate to arrange, for nontrivial, implicitly specified domains these techniques are not easy to implement. We consider the so-called Hit-and-Run algorithm, a representative of the class of Markov chain Monte Carlo methods, which has become popular in recent years. To perform random sampling over a set, this method requires only the knowledge of the intersection of a line through a point inside the set with the boundary of this set. This component of the Hit-and-Run procedure, known as the boundary oracle, has to be performed quickly when applied to the economical point representation of many-dimensional sets within the randomized approach to data mining, image reconstruction, control, optimization, etc. In this paper, we consider several vector and matrix sets typically encountered in control and specified by linear matrix inequalities. Closed-form solutions are proposed for finding the respective points of intersection, leading to efficient boundary oracles; these are generalized to robust formulations in which the system matrices contain norm-bounded uncertainty.
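
A minimal sketch of a Hit-and-Run step with a closed-form boundary oracle for an LMI set {x : A0 + sum_i x_i A_i >= 0}, assuming the set is bounded and the starting point is interior; the toy matrices below are hypothetical. Writing A = L L^T at the current point, A + t*B >= 0 reduces to 1 + t*lambda_i >= 0 for the eigenvalues lambda_i of L^{-1} B L^{-T}, which gives the chord endpoints directly.

```python
import numpy as np

def lmi_chord(A, B, eps=1e-12):
    """Boundary oracle: the interval of t with A + t*B >= 0, for A > 0."""
    L = np.linalg.cholesky(A)                 # A = L L^T (valid at interior points)
    Linv = np.linalg.inv(L)
    lam = np.linalg.eigvalsh(Linv @ B @ Linv.T)
    pos, neg = lam[lam > eps], lam[lam < -eps]
    t_lo = (-1.0 / pos).max() if pos.size else -np.inf
    t_hi = (-1.0 / neg).min() if neg.size else np.inf
    return t_lo, t_hi

def hit_and_run_lmi(A0, As, x0, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    out = np.empty((n_steps, x.size))
    for k in range(n_steps):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                            # uniform random direction
        A = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
        B = sum(di * Ai for di, Ai in zip(d, As))
        t_lo, t_hi = lmi_chord(A, B)                      # chord through x along d
        x = x + rng.uniform(t_lo, t_hi) * d               # uniform point on the chord
        out[k] = x
    return out

# Toy LMI set in R^2 (a section of the unit disk), sampled from the origin.
A0 = np.eye(3)
A1 = np.diag([1.0, -1.0, 0.0])
A2 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
samples = hit_and_run_lmi(A0, [A1, A2], x0=[0.0, 0.0], n_steps=1000)
```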


2021 ◽  
Vol 11 (11) ◽  
pp. 5234
Author(s):  
Jin Hun Park ◽  
Pavel Pereslavtsev ◽  
Alexandre Konobeev ◽  
Christian Wegmann

For the stable and self-sufficient functioning of the DEMO fusion reactor, one of the most important parameters that must be demonstrated is the Tritium Breeding Ratio (TBR). A reliable assessment of the TBR with safety margins is a matter of fusion reactor viability. The uncertainty of the TBR in neutronic simulations includes many different aspects, such as the uncertainty due to the simplification of the geometry models used, the uncertainty of the reactor layout and the uncertainty introduced by the neutronic calculations. The last of these can be reduced by applying high-fidelity Monte Carlo simulations for TBR estimation. Nevertheless, these calculations have inherent statistical errors, controlled by the number of neutron histories; a quantity such as the TBR also carries underlying errors due to nuclear data uncertainties. In fact, every evaluated nuclear data file involved in the MCNP calculations can be replaced with a set of random data files representing particular deviations of the nuclear model parameters, each of them being correct and valid for applications. To account for the uncertainty of the nuclear model parameters introduced in the evaluated data file, the Total Monte Carlo (TMC) method can be used to analyze the uncertainty of the TBR owing to the nuclear data used in the calculations. To this end, two 3D fully heterogeneous geometry models of the helium cooled pebble bed (HCPB) and water cooled lithium lead (WCLL) European DEMOs were utilized for the calculation of the TBR. The TMC calculations were performed using the TENDL-2017 nuclear data library random files with statistics high enough to provide a well-resolved Gaussian distribution of the TBR value. The assessment was done for the TBR uncertainty due to the nuclear data for the entire material compositions and for separate materials: structural, breeder and neutron multipliers. The overall TBR uncertainty due to the nuclear data was estimated to be 3% and 4% for the HCPB and WCLL DEMOs, respectively.
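
In outline, the TMC procedure described above amounts to repeating the transport calculation once per random nuclear data file and subtracting the statistical variance from the observed spread. The sketch below uses a stub in place of the MCNP run, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def run_transport_tbr(i, rng):
    """Stub standing in for one MCNP TBR calculation performed with the
    i-th TENDL-2017 random file; returns (tbr_estimate, statistical_std)."""
    true_tbr = rng.normal(1.15, 0.01)        # spread induced by nuclear data (illustrative)
    stat_std = 0.002                         # controlled by the number of neutron histories
    return rng.normal(true_tbr, stat_std), stat_std

tbrs, stat_stds = map(np.array, zip(*(run_transport_tbr(i, rng) for i in range(300))))

var_observed = tbrs.var(ddof=1)              # total spread of the TBR sample
var_statistical = np.mean(stat_stds ** 2)    # average per-run statistical variance
var_nuclear_data = var_observed - var_statistical
print(f"TBR = {tbrs.mean():.4f} +/- {np.sqrt(var_nuclear_data):.4f} (nuclear data)")
```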


Author(s):  
Ze-guang Li ◽  
Kan Wang ◽  
Gang-lin Yu

In reactor design and analysis, there is often a need to calculate the effects on reactivity caused by perturbations of the temperature, composition and even structure of a reactor. Perturbation calculations are also widely used in sensitivity studies, uncertainty analysis of target quantities and nuclear data adjustment. To meet the needs of different types of reactors (complex, multidimensional systems), Monte Carlo perturbation methods have been developed. In this paper, several kinds of perturbation methods are investigated. In particular, the differential operator sampling method and the correlated tracking method are discussed in detail. MCNP's perturbation calculation capability is assessed on test problems, from which some conclusions are drawn about the capabilities of the differential operator sampling method used in the perturbation calculation model of MCNP. In addition, a code using the correlated tracking method has been developed to solve problems with cross-section changes, and the results generated by this code agree with those generated by straightforward Monte Carlo techniques.
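
As a toy illustration of the correlated tracking idea, the following estimates how the transmission of a purely absorbing slab changes under a cross-section perturbation, reusing the same particle histories and reweighting by the likelihood ratio, so the difference is obtained with far less noise than two independent runs (a sketch of the general technique, not of the paper's code):

```python
import numpy as np

def transmission_correlated(sigma0, sigma1, thickness, n, rng):
    """Correlated-tracking estimate of slab transmission for a pure absorber.

    Histories are sampled with the unperturbed cross section sigma0; the
    perturbed tally reuses the SAME histories, so T1 - T0 is estimated with
    strongly correlated noise."""
    s = rng.exponential(1.0 / sigma0, size=n)    # free-flight distances under sigma0
    transmitted = s >= thickness
    t0 = transmitted.mean()                      # unperturbed transmission
    w = np.exp(-(sigma1 - sigma0) * thickness)   # likelihood ratio of a transmitted track
    t1 = (transmitted * w).mean()                # perturbed transmission, same histories
    return t0, t1, t1 - t0

rng = np.random.default_rng(0)
t0, t1, dT = transmission_correlated(1.0, 1.05, 2.0, 10**6, rng)
# Analytic check: exp(-2.0) = 0.1353..., exp(-2.1) = 0.1225..., dT = -0.0128...
```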

