A Method of Optimized Utilization of Point-Wise Data Format in Monte Carlo Code

Author(s):  
Yuxuan Liu ◽  
Ganglin Yu ◽  
Kan Wang

Monte Carlo codes are powerful and accurate tools for reactor core calculations. Most Monte Carlo codes use the point-wise data format, in which the data are given as tables of energy–cross-section pairs. When calculating the cross sections at an incident energy, the code must first determine which grid interval the energy falls in. This procedure is repeated so frequently in Monte Carlo codes that its contribution to the overall calculation time can become quite significant. In this paper, the time distribution of the Monte Carlo method is analyzed to illustrate the cost of cross-section calculation. Based on an investigation of how cross-section data are searched and calculated in Monte Carlo codes, a new hash-table search algorithm is designed to replace the traditional binary search in locating the energy grid interval. The results indicate that in criticality calculations, the hash table can save 5%∼17% of CPU time, depending on the number of nuclides in the material as well as on the geometric complexity of particle tracking.

Author(s):  
Tianliang Hu ◽  
Liangzhi Cao ◽  
Hongchun Wu ◽  
Kun Zhuang

A code system has been developed in this paper for the dynamics simulations of MSRs. The homogenized cross-section data library is generated using the continuous-energy Monte Carlo code OpenMC, which provides significant modeling flexibility compared with traditional deterministic lattice transport codes. The few-group cross sections generated by OpenMC are provided to TANSY and TANSY_K, which are based on OpenFOAM, to perform the steady-state full-core coupled simulations and dynamics simulations. For verification and application of the code sequence, a simulation of a representative molten salt reactor core, MOSART, has been performed. To further study the characteristics of MSRs, several transients, such as the cold-slug transient, the unprotected loss-of-flow transient and the overcooling transient, have been analyzed. The numerical results indicate that the TANSY and TANSY_K codes, with the cross-section library generated by OpenMC, have the capability for the dynamics analysis of MSRs.


Author(s):  
Audrius Jasiulevicius ◽  
Bal Raj Sehgal

RBMK reactors are channel-type, water-cooled, graphite-moderated reactors. The first RBMK-type electricity production reactor was put on-line in 1973. Currently there are 13 operating reactors of this type; two of them, RBMK-1500 reactors, are at the Ignalina NPP in Lithuania. The Experimental Critical Facility for RBMK reactors, located at the Kurchatov Institute, Moscow, was designed to carry out critical reactivity experiments on assemblies that imitate parts of the RBMK reactor core. The facility is composed of Control and Protection Rods (CPRs), fuel assemblies with different U-235 enrichments, and other elements typical of RBMK reactor core loadings, e.g. additional absorber assemblies, CPR imitators, etc. A simulation of a set of the experiments performed at the Experimental Critical Facility was carried out at the Royal Institute of Technology (RIT), Nuclear Power Safety Division, using the CORETRAN 3-D neutron dynamics code. The neutron cross sections for the assemblies were calculated using the HELIOS code. The aim of this work was to evaluate the capability of the HELIOS code to provide correct cross-section data for the RBMK reactor. The calculation results were compared with similar CORETRAN calculations employing WIMS-D4-generated cross-section data. For some of the experiments, where calculation results with CASMO-4-generated cross sections are available, the comparison is also performed against the CASMO-4 results. Eleven different experiments were simulated. The experiments differ in the size of the facility core (number of assemblies loaded): from simple core loadings, composed of only a few fuel assemblies, to complicated configurations representing a part of the RBMK reactor core. Diverse types of measurements were carried out during these experiments: reactivity, neutron flux distributions (both axial and radial), rod reactivity worths and voiding effects.
The results of the reactivity measurements and the relative neutron flux distributions were given in the experiment report [1] as parameters to be obtained using static calculations, i.e. the reported results were already processed numerically using the facility equipment, e.g. the reactimeter. The reported measurement errors consist only of instrumentation errors; measurement-method errors and the influence of space–time effects were not included in the error evaluation.


2019 ◽  
Vol 9 (2) ◽  
pp. 17-24
Author(s):  
Jakub Lüley ◽  
Branislav Vrban ◽  
Štefan Čerba ◽  
Filip Osuský ◽  
Vladimír Nečas

Stochastic Monte Carlo (MC) neutron transport codes are widely used in various reactor physics applications, traditionally related to criticality safety analyses, radiation shielding and validation of deterministic transport codes. The main advantage of Monte Carlo codes lies in their ability to model complex and detailed geometries without the need for simplifications. Currently, one of the most accurate and most highly developed stochastic MC codes for particle transport simulation is MCNP. To achieve the best real-world approximations, continuous-energy (CE) cross-section (XS) libraries are often used. These CE libraries capture the rapid changes of XS in the resonance energy range; however, computationally intensive simulations must be performed to utilize this feature. To broaden our computational abilities for industrial applications, and partially to allow comparison with deterministic codes, the CE cross-section library of the MCNP code is replaced by multigroup (MG) cross-section data. This paper is devoted to the cross-section processing scheme involving modified versions of the TRANSX and CRSRD codes. Following this approach, the same data may be used in deterministic and stochastic codes. Moreover, using the formerly developed and upgraded cross-section processing scheme, new MG libraries may be tailored to user-specific applications. To demonstrate the proposed cross-section processing scheme, the VVER-440 benchmark devoted to fuel-assembly and pin-by-pin power distributions was selected. The obtained results are compared with a continuous-energy MCNP calculation and a multigroup KENO-VI calculation.
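A central step in any such processing scheme is collapsing point-wise CE data into group constants by flux weighting: σ_g = ∫ σ(E)φ(E) dE / ∫ φ(E) dE over each group. The sketch below implements this collapse with the trapezoidal rule; it is an illustrative toy, not the TRANSX/CRSRD implementation, and it assumes the group edges coincide with points of the energy grid.

```python
import bisect

def collapse_to_multigroup(energies, sigma, flux, group_edges):
    """Flux-weighted collapse of a point-wise cross section to group
    constants: sigma_g = int(sigma*phi dE) / int(phi dE) per group,
    integrated with the trapezoidal rule over the point-wise grid."""
    groups = []
    for g in range(len(group_edges) - 1):
        e_lo, e_hi = group_edges[g], group_edges[g + 1]
        # point-wise entries inside this group (edges assumed on the grid)
        i = bisect.bisect_left(energies, e_lo)
        j = bisect.bisect_right(energies, e_hi)
        num = den = 0.0
        for k in range(i, j - 1):
            dE = energies[k + 1] - energies[k]
            num += 0.5 * (sigma[k] * flux[k] + sigma[k + 1] * flux[k + 1]) * dE
            den += 0.5 * (flux[k] + flux[k + 1]) * dE
        groups.append(num / den)
    return groups
```

Because the weighting flux is problem-dependent, MG libraries tailored with one spectrum need not be adequate for another application, which is one motivation for regenerating user-specific libraries as described above.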


Kerntechnik ◽  
2021 ◽  
Vol 86 (4) ◽  
pp. 302-311
Author(s):  
M. E. Korkmaz ◽  
N. K. Arslan

Abstract The Sodium-cooled Fast Reactor (SFR) is one of the Generation-IV plants selected to manage long-lived minor actinides and to transmute long-lived radioactive elements. This study presents a comparison between two SFR core designs with 600 and 800 MWth total heating power. We have analyzed a conceptual core design and the nuclear characteristics of the SFR. Monte Carlo depletion calculations have been performed to investigate the essential characteristics of the SFR core. The core calculations were performed using the Serpent Monte Carlo code to determine the burnup behavior of the SFR, the power distribution and the effective multiplication factor. The neutronic and burnup calculations were done by means of the Serpent-2 code with the ENDF/B-VII cross-section library. The SFR core was taken as the reference core for the Th-232 burnup calculations. The results show that the SFR is an important option for depleting minor actinides as well as for transmutation from Th-232 to U-233.
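The Th-232 to U-233 transmutation route studied here proceeds by neutron capture on Th-232 followed by two beta decays (short-lived Th-233, then Pa-233 with a roughly 27-day half-life). The sketch below solves the simplified Bateman chain Th-232 → Pa-233 → U-233 analytically under a constant one-group capture rate; the capture rate, the lumping of the Th-233 step, and the function name are assumptions for illustration, not Serpent results.

```python
import math

# Pa-233 beta-decay constant (half-life ~26.975 d), in 1/s
LAMBDA_PA233 = math.log(2) / (26.975 * 86400)

def breed_u233(n_th0, capture_rate, t):
    """Analytic Bateman solution for Th-232 --capture--> Pa-233 --decay--> U-233.
    capture_rate is an assumed constant one-group reaction rate per atom (1/s);
    returns the atom inventories (N_Th, N_Pa, N_U) at time t seconds."""
    r, lam = capture_rate, LAMBDA_PA233
    n_th = n_th0 * math.exp(-r * t)
    n_pa = n_th0 * r / (lam - r) * (math.exp(-r * t) - math.exp(-lam * t))
    n_u = n_th0 - n_th - n_pa   # atom conservation: the chain end collects the rest
    return n_th, n_pa, n_u
```

Because the Pa-233 half-life is long compared with prompt time scales, the Pa-233 inventory builds up before U-233 appears, which is why protactinium holdup matters in thorium breeding analyses.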


1986 ◽  
Vol 2 (3) ◽  
pp. 429-440 ◽  
Author(s):  
Badi H. Baltagi

Two different methods for pooling time series of cross-section data are used by economists. The first method, described by Kmenta, is based on the idea that pooled time series of cross sections are plagued with both heteroskedasticity and serial correlation. The second method, made popular by Balestra and Nerlove, is based on the error components procedure, where the disturbance term is decomposed into a cross-section effect, a time-period effect, and a remainder. Although these two techniques can be easily implemented, they differ in the assumptions imposed on the disturbances and lead to different estimators of the regression coefficients. Not knowing what the true data generating process is, this article compares the performance of these two pooling techniques under two simple settings. The first is when the true disturbances have an error components structure, and the second is where they are heteroskedastic and time-wise autocorrelated. First, the strengths and weaknesses of the two techniques are discussed. Next, the loss from applying the wrong estimator is evaluated by means of Monte Carlo experiments. Finally, a Bartlett's test for homoskedasticity and the generalized Durbin-Watson test for serial correlation are recommended for distinguishing between the two error structures underlying the two pooling techniques.
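To make the error-components setup concrete, the sketch below generates panel data y_it = βx_it + μ_i + ν_it and recovers β with the fixed-effects (within) transformation, which removes the cross-section effect μ_i by demeaning within each unit. The sample sizes and variances are arbitrary assumptions, and the within estimator is shown purely as a simple benchmark, not as either of the two pooling estimators compared in the article.

```python
import random
from collections import defaultdict

def simulate_panel(n_units=50, n_periods=10, beta=1.0,
                   sd_mu=1.0, sd_nu=1.0, seed=0):
    """Generate (unit, x, y) rows with y_it = beta*x_it + mu_i + nu_it,
    i.e. an error-components disturbance with a unit effect mu_i."""
    rng = random.Random(seed)
    data = []
    for i in range(n_units):
        mu = rng.gauss(0.0, sd_mu)          # cross-section effect, fixed per unit
        for _ in range(n_periods):
            x = rng.gauss(0.0, 1.0)
            y = beta * x + mu + rng.gauss(0.0, sd_nu)
            data.append((i, x, y))
    return data

def within_estimator(data):
    """Fixed-effects estimate of beta: demean x and y within each unit,
    then run OLS on the demeaned data (which eliminates mu_i)."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for i, x, y in data:
        s = sums[i]
        s[0] += x; s[1] += y; s[2] += 1
    num = den = 0.0
    for i, x, y in data:
        sx, sy, n = sums[i]
        xd, yd = x - sx / n, y - sy / n
        num += xd * yd
        den += xd * xd
    return num / den
```

Under the error-components truth, pooled OLS remains consistent here (x is drawn independently of μ_i) but inefficient, which is exactly the kind of efficiency loss the Monte Carlo experiments in the article quantify.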

