Processing of the multigroup cross-sections for MCNP calculations

2019
Vol 9 (2)
pp. 17-24
Author(s):
Jakub Lüley
Branislav Vrban
Štefan Čerba
Filip Osuský
Vladimír Nečas

Stochastic Monte Carlo (MC) neutron transport codes are widely used in various reactor physics applications, traditionally related to criticality safety analyses, radiation shielding, and validation of deterministic transport codes. The main advantage of Monte Carlo codes lies in their ability to model complex and detailed geometries without the need for simplifications. Currently, one of the most accurate and well-developed stochastic MC codes for particle transport simulation is MCNP. To achieve the best real-world approximations, continuous-energy (CE) cross-section (XS) libraries are often used. These CE libraries capture the rapid changes of XS in the resonance energy range; however, computing-intensive simulations must be performed to utilize this feature. To broaden our computation abilities for industrial applications, and partially to allow comparison with deterministic codes, the CE cross-section library of the MCNP code is replaced by multigroup (MG) cross-section data. This paper is devoted to a cross-section processing scheme involving modified versions of the TRANSX and CRSRD codes. Following this approach, the same data may be used in deterministic and stochastic codes. Moreover, using the formerly developed and upgraded cross-section processing scheme, new MG libraries may be tailored to user-specific applications. To demonstrate the proposed cross-section processing scheme, a VVER-440 benchmark devoted to fuel assembly and pin-by-pin power distribution was selected. The obtained results are compared with a continuous-energy MCNP calculation and a multigroup KENO-VI calculation.
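As background for the MG data described above, collapsing fine-resolution cross sections into broad groups is a flux-weighted average that preserves reaction rates. The sketch below is illustrative (all values and group boundaries are invented, and it is not the TRANSX/CRSRD scheme itself):

```python
import numpy as np

# Flux-weighted group collapse: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g)
# over the fine groups g belonging to coarse group G. Data are illustrative.
fine_sigma = np.array([12.0, 8.5, 4.2, 3.9, 2.1, 1.8])      # barns
fine_flux = np.array([0.10, 0.25, 0.30, 0.20, 0.10, 0.05])  # weighting spectrum

# Coarse group -> slice of fine groups (a 2-group collapse for the example)
coarse_map = {0: slice(0, 3), 1: slice(3, 6)}

def collapse(sigma, flux, group_map):
    """Flux-weighted collapse preserving reaction rates within each coarse group."""
    out = {}
    for G, s in group_map.items():
        out[G] = float(np.sum(sigma[s] * flux[s]) / np.sum(flux[s]))
    return out

mg = collapse(fine_sigma, fine_flux, coarse_map)
```

The same weighting spectrum assumption is what makes collapsed MG libraries problem-dependent, which is why tailoring them to user-specific applications matters.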

Author(s):
Yuxuan Liu
Ganglin Yu
Kan Wang

Monte Carlo codes are powerful and accurate tools for reactor core calculation. Most Monte Carlo codes use the point-wise data format, in which the data are given as tables of energy/cross-section pairs. When calculating the cross sections at an incident energy value, it must be determined which grid interval the energy falls in. This procedure is repeated so frequently in Monte Carlo codes that its contribution to the overall calculation time can become quite significant. In this paper, the time distribution of the Monte Carlo method is analyzed to illustrate the time consumed by cross-section calculation. By investigating the searching and calculation of cross-section data in a Monte Carlo code, a new search algorithm based on a hash table is designed to replace the traditional binary search method in locating the energy grid interval. The results indicate that in criticality calculations, the hash table can save 5% to 17% of CPU time, depending on the number of nuclides in the material as well as the complexity of the geometry for particle tracking.
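A minimal sketch of the hash-table idea described above (the grid values and bucket count are illustrative, not taken from the paper): bucketing the logarithm of energy lets the code jump almost directly to the correct grid interval, replacing a full binary search with a short scan inside one bucket.

```python
import bisect
import math

# Illustrative point-wise energy grid (eV), ascending, as in ACE-format tables.
energies = [1e-5, 1e-3, 0.1, 1.0, 10.0, 1e3, 1e5, 1e6, 2e7]

# Hash (bucket) table over equal-width bins in log(E): each bucket stores the
# lowest grid index whose interval can contain an energy in that bucket, so
# the final search scans a few points instead of the whole table.
NBUCKETS = 64
lo, hi = math.log(energies[0]), math.log(energies[-1])
width = (hi - lo) / NBUCKETS
bucket_start = [bisect.bisect_right(energies, math.exp(lo + b * width)) - 1
                for b in range(NBUCKETS + 1)]

def find_interval(E):
    """Return i such that energies[i] <= E <= energies[i + 1]."""
    b = min(int((math.log(E) - lo) / width), NBUCKETS - 1)
    i = bucket_start[b]
    while energies[i + 1] < E:      # short linear scan inside the bucket
        i += 1
    return i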


2021
Vol 247
pp. 04017
Author(s):
Paul E. Burke
Kyle E. Remley
David P. Griesheimer

In radiation transport calculations, the effects of material temperature on neutron/nucleus interactions must be taken into account through Doppler broadening adjustments to the microscopic cross-section data. Historically, Monte Carlo transport simulations have accounted for this temperature dependence by interpolating among precalculated Doppler-broadened cross sections at a variety of temperatures. More recently, there has been much interest in on-the-fly Doppler broadening methods, where reference data are broadened on demand, to any temperature, during particle transport. Unfortunately, Doppler broadening operations are expensive on traditional central processing unit (CPU) architectures, making on-the-fly Doppler broadening unaffordable without approximations or complex data preprocessing. This work considers the use of graphics processing units (GPUs), which excel at parallel data processing, for on-the-fly Doppler broadening in continuous-energy Monte Carlo simulations. Two methods are considered for the broadening operations: a GPU implementation of the standard SIGMA1 algorithm, and a novel vectorized algorithm that leverages the convolution properties of the broadening operation in an attempt to expose additional parallelism. Numerical results demonstrate that on-the-fly broadening on a GPU achieves cross-section lookup throughput similar to that obtained with precomputed data on a CPU, implying that offloading Doppler broadening operations to a GPU may enable on-the-fly temperature treatment of cross sections without a noticeable reduction in cross-section processing performance in Monte Carlo transport codes.
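For orientation only, the sketch below shows the qualitative effect of Doppler broadening as a Gaussian smearing with the standard Doppler width. This is NOT the SIGMA1 algorithm named above (SIGMA1 performs an exact piecewise integration of the free-gas kernel over the tabulated data), and all numbers here are synthetic:

```python
import numpy as np

# Qualitative illustration of Doppler broadening: Gaussian smearing in energy
# with Doppler width Delta = sqrt(4 * E0 * kT / A), where E0 is the resonance
# energy, kT the temperature in energy units, and A the target mass ratio.
def gaussian_broaden(E, sigma, E0, kT, A):
    delta = np.sqrt(4.0 * E0 * kT / A)          # Doppler width (eV)
    out = np.empty_like(sigma)
    for i, Ei in enumerate(E):                  # normalized kernel smoothing
        w = np.exp(-((E - Ei) / delta) ** 2)
        out[i] = np.sum(w * sigma) / np.sum(w)
    return out

# Synthetic narrow resonance at 6.67 eV (U-238-like), broadened at ~900 K
E = np.linspace(6.0, 7.4, 400)                  # uniform grid (eV)
sigma = 1.0 + 2.0e4 * 0.0125**2 / ((E - 6.67)**2 + 0.0125**2)
hot = gaussian_broaden(E, sigma, E0=6.67, kT=7.8e-2, A=238.0)
# Broadening lowers and widens the peak while roughly preserving the
# resonance integral.
```

The per-point kernel evaluation in the loop is exactly the kind of independent, data-parallel work that maps well onto a GPU.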


2021
Vol 247
pp. 02011
Author(s):
Kang Seog Kim
Andrew M. Holcomb
Friederike Bostelmann
Dorothea Wiarda
William Wieselquist

The SCALE-XSProc multigroup (MG) cross-section processing procedure, based on the CENTRM pointwise slowing-down calculation, is the primary procedure for generating problem-dependent self-shielded MG cross sections and scattering matrices for neutron transport calculations. This procedure supports various cell-based geometries, including slab, 1-D cylindrical, 1-D spherical, and 2-D rectangular configurations, as well as doubly heterogeneous particulate fuels. Recently, the procedure has been significantly improved so that it can be applied to advanced reactor analyses covering thermal and fast reactor systems and produce results comparable to continuous-energy (CE) Monte Carlo calculations. Some reactivity biases and reaction-rate differences have been observed relative to CE Monte Carlo calculations, and several areas for improvement have been identified in the SCALE-XSProc MG cross-section processing: (1) resonance self-shielding calculations within the unresolved resonance range, (2) the 10 eV thermal cut-off energy for the free-gas model, (3) on-the-fly adjustments to the thermal scattering matrix, (4) normalization of the pointwise neutron flux, and (5) the fine MG energy structure. The improved procedure ensures highly accurate MG cross-section processing for high-fidelity deterministic reactor physics analysis of various advanced reactor systems.
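For orientation, a common alternative to the pointwise slowing-down treatment of resonance self-shielding is Bondarenko-style interpolation of tabulated self-shielding factors in the background cross section. The sketch below is illustrative only (invented table values; it is not the SCALE-XSProc/CENTRM procedure):

```python
import math

# Bondarenko-style self-shielding sketch: tabulated factors f(sigma0) are
# interpolated in log(sigma0), where sigma0 is the background cross section
# per absorber atom; the shielded XS is f * sigma_infinite_dilution.
sigma0_grid = [1.0, 10.0, 100.0, 1e3, 1e4, 1e10]      # barns (illustrative)
f_grid = [0.20, 0.35, 0.60, 0.85, 0.97, 1.00]         # illustrative f-factors
sigma_inf = 50.0                                      # infinite-dilution XS (b)

def shielded_xs(sigma0):
    """Interpolate f in log(sigma0), then apply it to the infinite-dilution XS."""
    x = math.log(sigma0)
    if x <= math.log(sigma0_grid[0]):
        return f_grid[0] * sigma_inf          # strongly self-shielded limit
    if x >= math.log(sigma0_grid[-1]):
        return sigma_inf                      # infinite-dilution limit
    for k in range(len(sigma0_grid) - 1):
        x0, x1 = math.log(sigma0_grid[k]), math.log(sigma0_grid[k + 1])
        if x <= x1:
            f = f_grid[k] + (f_grid[k + 1] - f_grid[k]) * (x - x0) / (x1 - x0)
            return f * sigma_inf
```

The coarse f-factor table is precisely what a pointwise slowing-down flux calculation such as CENTRM avoids, at the cost of more computation per cell.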


1986
Vol 2 (3)
pp. 429-440
Author(s):
Badi H. Baltagi

Two different methods for pooling time series of cross-section data are used by economists. The first, described by Kmenta, is based on the idea that pooled time series of cross sections are plagued by both heteroskedasticity and serial correlation. The second, made popular by Balestra and Nerlove, is based on the error-components procedure, where the disturbance term is decomposed into a cross-section effect, a time-period effect, and a remainder. Although these two techniques can be easily implemented, they differ in the assumptions imposed on the disturbances and lead to different estimators of the regression coefficients. Not knowing the true data generating process, this article compares the performance of these two pooling techniques under two simple settings: first, when the true disturbances have an error-components structure, and second, when they are heteroskedastic and time-wise autocorrelated. First, the strengths and weaknesses of the two techniques are discussed. Next, the loss from applying the wrong estimator is evaluated by means of Monte Carlo experiments. Finally, Bartlett's test for homoskedasticity and the generalized Durbin-Watson test for serial correlation are recommended for distinguishing between the two error structures underlying the two pooling techniques.
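A minimal sketch of the error-components data generating process compared in the article (parameter values are illustrative, not from the article's experiments):

```python
import numpy as np

# Error-components DGP: y_it = beta * x_it + mu_i + lambda_t + nu_it,
# with a cross-section effect mu_i, a time-period effect lambda_t, and a
# remainder nu_it. Values below are illustrative.
rng = np.random.default_rng(0)
N, T, beta = 50, 10, 2.0
mu = rng.normal(0.0, 1.0, size=(N, 1))     # cross-section effects
lam = rng.normal(0.0, 0.5, size=(1, T))    # time-period effects
nu = rng.normal(0.0, 1.0, size=(N, T))     # remainder disturbances
x = rng.normal(0.0, 1.0, size=(N, T))
y = beta * x + mu + lam + nu

# Pooled OLS on the stacked data: consistent but inefficient under this
# error structure, which motivates the error-components GLS estimator.
b_ols = np.sum(x * y) / np.sum(x * x)
```

A Monte Carlo experiment like the article's repeats this draw many times under each error structure and compares the estimators' sampling performance.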


2021
Vol 247
pp. 04020
Author(s):
Nicolas Denoyelle
John Tramm
Kazutomo Yoshii
Swann Perarnau
Pete Beckman

The calculation of macroscopic neutron cross sections is a fundamental part of the continuous-energy Monte Carlo (MC) neutron transport algorithm. MC simulations of full nuclear reactor cores are computationally expensive, making high-accuracy simulations impractical for most routine reactor analysis tasks because of their long time to solution. Thus, preparing MC simulation algorithms for next-generation supercomputers is extremely important, as improvements in computational performance and efficiency will directly translate into improvements in achievable simulation accuracy. Due to the stochastic nature of the MC algorithm, cross-section data tables are accessed in a highly randomized manner, resulting in frequent cache misses and latency-bound memory accesses. Furthermore, contemporary and next-generation non-uniform memory access (NUMA) computer architectures, featuring very high latencies and less cache space per core, will exacerbate this behaviour. The absence of a topology-aware allocation strategy in existing high-performance computing (HPC) programming models is a major source of performance problems on NUMA systems. Thus, to improve the performance of the MC simulation algorithm, we propose topology-aware data allocation strategies that allow full control over the location of data structures within a memory hierarchy. A new memory management library, known as AML, has recently been created to facilitate this mapping. To evaluate the usefulness of AML in the context of MC reactor simulations, we have converted two existing MC transport cross-section lookup “proxy-applications” (XSBench and RSBench) to use the AML allocation library. In this study, we use these proxy-applications to test several continuous-energy cross-section data lookup strategies (the nuclide grid, unionized grid, logarithmic hash grid, and multipole methods) with a number of AML allocation schemes on a variety of node architectures.
We find that the AML library speeds up cross-section lookup performance by up to 2x on current-generation hardware (e.g., a dual-socket Skylake-based NUMA system) compared with naive allocation. These results also show a path forward for efficient performance on next-generation exascale supercomputer designs that feature even more complex NUMA memory hierarchies.
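Of the lookup strategies named above, the unionized grid is easy to sketch. The data below are invented for illustration: all nuclide energy grids are merged into one union grid, and a precomputed index table maps each union-grid point back into each nuclide's own grid, so one binary search serves every nuclide in a material.

```python
import bisect
import numpy as np

# Illustrative per-nuclide energy grids (eV)
nuclide_grids = {
    "U238": np.array([1e-5, 1.0, 6.7, 1e3, 2e7]),
    "H1":   np.array([1e-5, 0.5, 1e2, 2e7]),
}

# Union grid plus back-pointers: index_map[nuc][j] is the interval in nuc's
# own grid that contains union[j].
union = np.unique(np.concatenate(list(nuclide_grids.values())))
index_map = {
    nuc: np.clip(np.searchsorted(g, union, side="right") - 1, 0, len(g) - 2)
    for nuc, g in nuclide_grids.items()
}

def lookup(E):
    """One binary search on the union grid resolves every nuclide's interval."""
    j = min(bisect.bisect_right(union, E) - 1, len(union) - 2)
    return {nuc: int(index_map[nuc][j]) for nuc in nuclide_grids}
```

The trade-off is the large memory footprint of the union grid and index table, which is exactly why the placement of these structures in a NUMA hierarchy matters.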


2021
Vol 247
pp. 06011
Author(s):
A. Bernal
M. Pecchia
D. Rochman
A. Vasiliev
H. Ferroukhi

The main goal of this work is to perform pin-by-pin calculations of Swiss LWR fuel assemblies with deterministic neutron transport methods. At the Paul Scherrer Institut (PSI), LWR calculations are performed with the core management system CMSYS, which is based on the Studsvik suite of codes. CMSYS includes models for all the Swiss reactors, validated against a database of experimental information. Moreover, PSI has improved the pin power calculations by developing models of Swiss fuel assemblies for the Monte Carlo code MCNP, with the isotopic compositions obtained from the In-Core Fuel Management data of the Studsvik suite of codes by using the SNF code. A step forward is to use a code based on fast deterministic neutron transport methods. The method used in this work is based on a planar Method of Characteristics in which the axial coupling is solved by a 1D SP3 method; the neutron code used is nTRACER. Thus, this work develops nTRACER models of Swiss PWR fuel assemblies in which the fuel of each pin and axial level is modelled with the isotopic composition obtained from SNF. This methodology was applied to 2D and 3D calculations of a Swiss PWR fuel assembly. However, the method has two main limitations. First, the cross-section libraries of nTRACER lack some of the isotopes provided by SNF; fortunately, this work proves that the missing isotopes do not have a strong effect on keff or the power distribution. Second, the 3D models require large amounts of memory, more than 260 GB. Thus, the nTRACER code was modified so that it now uses only 8 GB, without any loss of accuracy. Finally, the keff and power results are compared with Monte Carlo calculations obtained with Serpent.


1987
Vol 40 (3)
pp. 383
Author(s):
J. Fletcher
P. H. Purdie

Low-current, low-pressure, steady-state Townsend discharges in helium and neon gas have been investigated using the photon flux technique. Such discharges have been found to exhibit spatial non-uniformity, resulting in luminous layers throughout the discharge. The separation and structure of these layers have been investigated experimentally in both gases, along with the wavelength distribution of the photon flux. A Monte Carlo simulation of the discharge in neon has been used to gain information on the cross sections necessary to describe these discharges. It is found that direct excitation of ground-state atoms to the resonance level of each gas is less than indicated by some published cross-section data.


2020
Vol 8
Author(s):
John W. Norbury
Giuseppe Battistoni
Judith Besuglow
Luca Bocchini
Daria Boscolo
...

The helium (4He) component of the primary particles in the galactic cosmic ray spectrum makes significant contributions to the total astronaut radiation exposure. 4He ions are also desirable for direct applications in ion therapy: they produce less projectile fragmentation than carbon (12C) ions and less lateral beam spreading than protons. Space radiation protection and ion therapy applications need reliable nuclear reaction models and transport codes for energetic particles in matter. Neutrons and light ions (1H, 2H, 3H, 3He, and 4He) are the most important secondary particles produced in space radiation and ion therapy nuclear reactions; these particles penetrate deeply and make large contributions to dose equivalent. Since neutrons and light ions may scatter at large angles, double-differential cross sections are required by transport codes that propagate radiation fields through radiation shielding and human tissue. This work reviews the importance of 4He projectiles to space radiation and ion therapy, and outlines the present status of neutron and light-ion production cross-section measurements and modeling, with recommendations for future needs.

