MODELING NUCLEAR DATA UNCERTAINTIES USING DEEP NEURAL NETWORKS

2021 ◽  
Vol 247 ◽  
pp. 15016
Author(s):  
Majdi I. Radaideh ◽  
Dean Price ◽  
Tomasz Kozlowski

A new concept using deep learning in neural networks is investigated to characterize the underlying uncertainty of nuclear data. The analysis is performed on multi-group neutron cross-sections (56 energy groups) for the GODIVA U-235 sphere. A deep model is trained with cross-validation using 1000 nuclear data random samples to fit 336 nuclear data parameters. Despite the very limited sample size (1000 samples) available in this study, the trained models demonstrate promising performance, with a prediction error of about 166 pcm for keff on the test set. In addition, the deep model's sensitivity and uncertainty estimates are validated. The comparison of the importance ranking of the principal fast fission energy groups with adjoint methods shows fair agreement, while very good agreement is observed when comparing the global keff uncertainty with sampling methods. The findings of this work should motivate additional efforts on using machine learning to unravel complexities in nuclear data research.
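As a rough illustration of the workflow described above (sampled nuclear-data parameters in, keff out), the following is a minimal sketch with a one-hidden-layer surrogate trained on synthetic data. The dimensions, sensitivity profile, and noise level are assumptions for demonstration, not the paper's 56-group GODIVA setup, and the sketch does not reproduce the reported 166 pcm figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's setup (not the GODIVA data):
# standardized nuclear-data perturbations and a synthetic keff response.
n_samples, n_par = 1000, 8
Z = rng.standard_normal((n_samples, n_par))
true_sens = np.linspace(0.05, 0.4, n_par)                  # assumed sensitivities
y = 0.02 * Z @ true_sens + rng.normal(0, 1e-4, n_samples)  # synthetic keff - 1

# One-hidden-layer surrogate trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.1, (n_par, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16); b2 = 0.0
lr = 0.05
for _ in range(3000):
    h = np.tanh(Z @ W1 + b1)                  # hidden activations
    g = 2 * (h @ W2 + b2 - y) / n_samples     # dMSE/dprediction
    gh = np.outer(g, W2) * (1 - h ** 2)       # backprop through tanh
    W1 -= lr * (Z.T @ gh); b1 -= lr * gh.sum(axis=0)
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()

def surrogate(z):
    return np.tanh(z @ W1 + b1) @ W2 + b2

# Sensitivity check by central finite differences at the nominal point,
# analogous to the paper's comparison with adjoint importance rankings.
eps = 1e-3
sens = np.array([(surrogate(np.eye(n_par)[i] * eps)
                  - surrogate(-np.eye(n_par)[i] * eps)) / (2 * eps)
                 for i in range(n_par)])
```

Once trained, the surrogate can be sampled cheaply to estimate the global keff uncertainty, mirroring the sampling-method comparison in the abstract.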

2018 ◽  
Vol 170 ◽  
pp. 04009
Author(s):  
Benoit Geslot ◽  
Adrien Gruel ◽  
Stéphane Bréaud ◽  
Pierre Leconte ◽  
Patrick Blaise

Pile oscillator techniques are powerful methods to measure the small reactivity worth of isotopes of interest for nuclear data improvement. Such experiments have long been carried out in the MINERVE experimental reactor, operated by CEA Cadarache. A hybrid technique, mixing reactivity worth estimation and measurement of small flux changes around test samples, is presented here. It was made possible by the development of high-sensitivity miniature fission chambers introduced next to the irradiation channel. A test campaign, called MAESTRO-SL, took place in 2015. Its objective was to assess the feasibility of the hybrid method and investigate the possibility of separating mixed neutron effects, such as fission/capture or scattering/capture. Experimental results are presented and discussed in this paper, which focuses on comparing two measurement setups: one using a power control system (closed loop) and another in which the power is free to drift (open loop). First, it is demonstrated that the open-loop setup is equivalent to the closed-loop one. Uncertainty management and method reproducibility are discussed. Second, results show that measuring the flux depression around oscillated samples provides valuable information regarding partial neutron cross sections. The technique is found to be very sensitive to the capture cross section at the expense of scattering, making it very useful for measuring small capture effects in highly scattering samples.


2020 ◽  
Vol 29 (08) ◽  
pp. 2050062
Author(s):  
Mustafa Yiğit

Studies on the cross-sections of (n,n′) reactions which are energetically possible with about 14 MeV neutrons are quite scarce. In this paper, the cross-sections of (n,n′) nuclear reactions at En ≈ 14–15 MeV are analyzed using a new empirical formula based on the statistical theory. We show that neutron cross-sections are closely related to the Q-value of the nuclear reaction, in particular for (n,n′) channels. Results obtained with this empirical formula show good agreement with the available measured cross-section values. We hope that the estimations of the cross-sections using the present formalism may be helpful in future studies in this field.
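To illustrate how Q-value systematics of this kind are typically constructed, here is a minimal sketch that fits an assumed exponential trend, sigma = a·exp(b·Q), to made-up (Q, sigma) pairs by linearized least squares. The functional form and all numbers are illustrative assumptions, not the paper's statistical-theory formula.

```python
import math

# Synthetic (Q-value [MeV], cross-section [mb]) pairs, invented for the demo.
data = [(-1.0, 120.0), (-2.0, 60.0), (-3.0, 30.0), (-4.0, 15.0)]

# Linearize: ln(sigma) = ln(a) + b*Q, then ordinary least squares.
n = len(data)
sq = sum(q for q, _ in data)
sy = sum(math.log(s) for _, s in data)
sqq = sum(q * q for q, _ in data)
sqy = sum(q * math.log(s) for q, s in data)
b = (n * sqy - sq * sy) / (n * sqq - sq * sq)   # slope in Q
a = math.exp((sy - b * sq) / n)                 # pre-exponential factor

def sigma_fit(q):
    """Predicted cross-section [mb] for a reaction with Q-value q [MeV]."""
    return a * math.exp(b * q)
```

With the synthetic data above the fit recovers the generating trend exactly; with real measured cross-sections the residuals would quantify how well the chosen systematics describe the data.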


2021 ◽  
Vol 72 ◽  
pp. 1-37
Author(s):  
Mike Wu ◽  
Sonali Parbhoo ◽  
Michael C. Hughes ◽  
Volker Roth ◽  
Finale Doshi-Velez

Deep models have advanced prediction in many domains, but their lack of interpretability remains a key barrier to adoption in many real-world applications. There exists a large body of work aiming to help humans understand these black-box functions at varying levels of granularity – for example, through distillation, gradients, or adversarial examples. These methods, however, all tackle interpretability as a separate process after training. In this work, we take a different approach and explicitly regularize deep models so that they are well-approximated by processes that humans can step through in little time. Specifically, we train several families of deep neural networks to resemble compact, axis-aligned decision trees without significant compromises in accuracy. The resulting axis-aligned decision functions make tree-regularized models uniquely easy for humans to interpret. Moreover, for situations in which a single, global tree is a poor estimator, we introduce a regional tree regularizer that encourages the deep model to resemble a compact, axis-aligned decision tree in predefined, human-interpretable contexts. Using intuitive toy examples, benchmark image datasets, and medical tasks for patients in critical care and with HIV, we demonstrate that this new family of tree regularizers yields models that are easier for humans to simulate than L1 or L2 penalties, without sacrificing predictive power.
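The quantity such a regularizer targets can be sketched as follows: fit an axis-aligned decision tree to a model's own predictions and measure the average decision-path length (APL), i.e., how many node tests a human must step through per example. In the paper this penalty is made differentiable via a surrogate network during training; the toy black-box model, data, and depth limit below are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a trained deep binary classifier on 2-D inputs.
X = rng.uniform(-1, 1, size=(500, 2))

def deep_model(x):
    # Mostly a threshold on x0, with a small nonlinear wiggle in x1.
    return (x[:, 0] + 0.1 * np.sin(5 * x[:, 1]) > 0).astype(int)

# Distill the model's predictions into an axis-aligned tree.
y_hat = deep_model(X)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y_hat)

# APL = mean number of decision nodes traversed per example
# (decision_path counts all visited nodes, including the leaf).
path_lengths = tree.decision_path(X).sum(axis=1)
apl = float(np.mean(path_lengths)) - 1.0
```

During tree-regularized training, a penalty proportional to this APL (approximated by a differentiable surrogate) is added to the loss, pushing the network toward decision boundaries a shallow tree can mimic.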


2020 ◽  
Vol 239 ◽  
pp. 14006
Author(s):  
Tim Ware ◽  
David Hanlon ◽  
Tara Hanlon ◽  
Richard Hiles ◽  
Malcolm Lingard ◽  
...  

Until recently, criticality safety assessment codes had a minimum temperature below which calculations could not be performed. Where criticality assessment has been required at lower temperatures, indirect methods, including reasoned argument or extrapolation, have been needed to assess the associated reactivity changes. The ANSWERS Software Service MONK® version 10B Monte Carlo criticality code is capable of performing criticality calculations at any temperature within the limits of the underlying nuclear data in the BINGO continuous-energy library. The temperature range of the nuclear data has been extended below the traditional lower limit of 293.6 K to 193 K in a prototype BINGO library, primarily based on JEFF-3.1.2 data. The temperature range of the thermal bound scattering data of the key moderator materials was extended by reprocessing the NJOY LEAPR inputs used to produce bound data for JEFF-3.1.2 and ENDF/B-VIII.0. To give confidence in the low-temperature nuclear data, a series of MONK and MCBEND calculations have been performed and the results compared against external data sources. MCBEND is a Monte Carlo code for shielding and dosimetry and shares commonalities with its sister code MONK, including the BINGO nuclear data library. Good agreement has been achieved between calculated and experimental cross sections for ice, between calculated k-effective results and low-temperature criticality benchmarks, and between calculated and experimentally determined eigenvalues for thermal neutron diffusion in ice. To quantify the differences between ice and water bound scattering data, a number of MONK criticality calculations were performed for nuclear fuel transport flask configurations. The results obtained demonstrate good agreement with extrapolation methods, with a discernible difference between the use of ice and water data.


Biosensors ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 188
Author(s):  
Li-Ren Yeh ◽  
Wei-Chin Chen ◽  
Hua-Yan Chan ◽  
Nan-Han Lu ◽  
Chi-Yuan Wang ◽  
...  

Anesthesia assessment is critically important during surgery. Anesthesiologists use electrocardiogram (ECG) signals to assess the patient's condition and give appropriate medications. However, interpreting ECG signals is not easy; even physicians with more than 10 years of clinical experience may still misjudge them. Therefore, this study uses convolutional neural networks to classify ECG image types to assist in anesthesia assessment. The research uses Internet of Things (IoT) technology to develop ECG signal measurement prototypes. At the same time, it classifies the signals into four types through deep neural networks: QRS widening, sinus rhythm, ST depression, and ST elevation. Three models, ResNet, AlexNet, and SqueezeNet, are developed with a 50/50 split between training and test sets. Finally, the accuracy and kappa statistics of ResNet, AlexNet, and SqueezeNet in ECG waveform classification were (0.97, 0.96), (0.96, 0.95), and (0.75, 0.67), respectively. This research shows that it is feasible to measure ECG in real time through IoT and then distinguish the four types with deep neural network models. In the future, more types of ECG images will be added, improving the practicality of real-time classification with deep models.
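A minimal sketch of the classification pipeline described above, reduced to a 1-D convolutional forward pass over a synthetic waveform with untrained random weights. The paper actually uses pretrained ResNet/AlexNet/SqueezeNet on ECG images, so the kernels, head, and signal here are simplified stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["QRS widening", "sinus rhythm", "ST depression", "ST elevation"]

def conv1d(x, k):
    # Valid-mode 1-D cross-correlation of signal x with kernel k.
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def relu(v):
    return np.maximum(v, 0.0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Untrained stand-in weights: 8 filters of width 16, then a linear head.
kernels = rng.normal(size=(8, 16))
W = rng.normal(size=(8, 4)) * 0.1

def classify(signal):
    # Conv -> ReLU -> global average pooling -> linear head -> softmax.
    feats = np.array([relu(conv1d(signal, k)).mean() for k in kernels])
    return CLASSES[int(np.argmax(softmax(feats @ W)))]

ecg = np.sin(np.linspace(0, 8 * np.pi, 256))   # synthetic waveform
label = classify(ecg)
```

In the real system the weights come from training on labeled ECG images, and the IoT prototype streams waveforms into this forward pass in real time.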


2020 ◽  
Vol 239 ◽  
pp. 22008
Author(s):  
Eliot Party ◽  
Xavier Doligez ◽  
Philippe Dessagne ◽  
Maëlle Kerveno ◽  
Greg Henning

This paper shows how the Total Monte Carlo (TMC) method and Perturbation Theory (PT) can be applied to quantify the uncertainty due to nuclear data in static reactor calculations of integral parameters such as keff and βeff. This work focuses on thorium-fueled reactors and aims to rank the uncertainty contributions of different cross sections in criticality calculations. The consistency of the two methods is first studied. The cross-section set used for the TMC method is processed to build adequate correlation matrices. Those matrices are then multiplied by the sensitivity coefficients obtained through PT to obtain global uncertainties, which are compared to those calculated by the TMC method. Results in good agreement allow us to use correlation matrices from the state-of-the-art nuclear data library (JEFF-3.3), which provide insight into the uncertainty on keff and βeff for thorium-fueled Pressurized Water Reactors. Finally, the maximum uncertainties on cross sections needed to reach a target uncertainty on the integral parameters are estimated. It is shown that a strong reduction of the current uncertainty is needed; consequently, new measurements and evaluations have to be performed.
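The PT-based propagation described above (covariance information combined with sensitivity coefficients) amounts to the standard sandwich rule, var(keff)/keff² = Sᵀ V S. A minimal sketch with purely illustrative numbers, not values from JEFF-3.3:

```python
import numpy as np

# S: relative sensitivity of keff to each cross section (dk/k per dsigma/sigma).
S = np.array([0.30, -0.10, 0.05])
# Per-cross-section relative 1-sigma uncertainties and their correlations.
std = np.array([0.02, 0.05, 0.10])
corr = np.array([[1.0, 0.3, 0.0],
                 [0.3, 1.0, -0.2],
                 [0.0, -0.2, 1.0]])

# Relative covariance matrix from correlations and standard deviations.
V = corr * np.outer(std, std)

# Sandwich rule: relative variance of keff, then uncertainty in pcm.
var_keff = S @ V @ S
unc_pcm = 1e5 * np.sqrt(var_keff)
```

In the paper's workflow, S comes from perturbation theory and V from the processed TMC samples or the evaluated library covariances; the global uncertainty can then be cross-checked against the direct TMC spread.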


2020 ◽  
Vol 239 ◽  
pp. 11007
Author(s):  
Aloys Nizigama ◽  
Olivier Bouland ◽  
Pierre Tamagno

The traditional methodology of nuclear data evaluation is showing its limitations in significantly reducing the uncertainties in neutron cross sections below their current level, which suggests that a new approach should be considered. This work aims at establishing that a major qualitative improvement is possible by changing the reference framework historically used for evaluating nuclear model data. The central idea is to move from the restrictive framework of the incident neutron and target nucleus to the more general framework of the excited compound system. Such a change, which implies the simultaneous modeling of all the reactions leading to the same compound system, opens up the possibility of direct comparisons between nuclear model parameters, whether those are derived for reactor physics applications, astrophysics, or basic nuclear spectroscopy studies. This would have the double advantage of bringing together evaluation activities currently performed separately, and of pooling experimental databases and basic theoretical nuclear parameter files. A consistent multichannel modeling methodology using the TORA module of the CONRAD code is demonstrated through the evaluation of differential and angle-integrated neutron cross sections of 16O, by fitting simultaneously incident-neutron direct kinematic reactions and incident-alpha inverse kinematic reactions without converting alpha data into the neutron laboratory system. The modeling is carried out within the Reich-Moore formalism, using a unique set of fitted resonance parameters for the 17O* compound system.
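The simultaneous-fit idea can be sketched with a toy shared-parameter problem: two "channels" (stand-ins for the neutron and alpha entrance channels of 17O*) share one resonance energy and width but have channel-specific amplitudes, so fitting both datasets at once constrains the shared compound-system parameters. The Lorentzian shape and grid-search fit below are drastic simplifications of the Reich-Moore treatment in CONRAD.

```python
import numpy as np

def lorentz(E, Er, G, A):
    # Single-level resonance shape (illustrative, not Reich-Moore).
    return A * (G / 2) ** 2 / ((E - Er) ** 2 + (G / 2) ** 2)

# Synthetic "measurements" for two channels sharing (Er, G).
E = np.linspace(0.0, 2.0, 81)
y_n = lorentz(E, 1.0, 0.2, 3.0)   # "neutron" channel data
y_a = lorentz(E, 1.0, 0.2, 1.5)   # "alpha" channel data

# Joint chi-square minimized by grid search over the shared (Er, G);
# per-channel amplitudes solved analytically at each grid point.
best = None
for Er in np.linspace(0.8, 1.2, 41):
    for G in np.linspace(0.1, 0.3, 21):
        shape = lorentz(E, Er, G, 1.0)
        A_n = (shape @ y_n) / (shape @ shape)
        A_a = (shape @ y_a) / (shape @ shape)
        chi2 = np.sum((A_n * shape - y_n) ** 2) + np.sum((A_a * shape - y_a) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, Er, G, A_n, A_a)
```

Because both channels pull on the same (Er, G), the joint fit recovers the shared parameters even when either dataset alone would leave them poorly determined.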


2021 ◽  
pp. 113-131
Author(s):  
Wei Shen ◽  
Benjamin Rouben

Reactor physics aims to determine accurately the reactivity and the distribution of all the reaction rates (most importantly of the power), and their rates of change in time, for any reactor configuration. To do this, the multiplication factor (or, equivalently, the reactivity) and the neutron-flux distribution under various operating conditions and at different times need to be calculated repeatedly. Most of the other parameters of interest (such as neutron reaction rates, power, heat deposition, etc.) are derived from them. They are governed by the geometry, the material composition, and the nuclear data (i.e., the neutron cross sections, their energy dependence, the energy spectra and the angular distributions of secondary particles, etc.). For radiation-shielding calculations, additional photon-interaction and coupled neutron-photon interaction data are required.
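A one-group numerical illustration of these relationships, with all values assumed purely for orientation: reaction rates follow from the flux and the macroscopic cross sections, and the multiplication factor is the ratio of neutron production to total losses.

```python
# Illustrative one-group quantities (assumed, not from any evaluation).
phi = 3.0e13          # scalar flux [n/cm^2/s]
Sigma_f = 0.065       # macroscopic fission cross section [1/cm]
Sigma_a = 0.160       # macroscopic absorption cross section [1/cm]
nu = 2.43             # average neutrons released per fission
leakage_frac = 0.02   # assumed fraction of produced neutrons that leak out

fission_rate = Sigma_f * phi          # fissions per cm^3 per s
production = nu * fission_rate        # neutrons produced per cm^3 per s
absorption = Sigma_a * phi            # neutrons absorbed per cm^3 per s
leakage = leakage_frac * production   # neutrons leaking per cm^3 per s

keff = production / (absorption + leakage)
```

The same production-over-losses balance underlies the multi-group, space-dependent calculations the chapter goes on to describe; there the cross sections, flux, and leakage all vary with energy and position.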


2020 ◽  
Vol 34 (04) ◽  
pp. 6413-6421
Author(s):  
Mike Wu ◽  
Sonali Parbhoo ◽  
Michael Hughes ◽  
Ryan Kindle ◽  
Leo Celi ◽  
...  

The lack of interpretability remains a barrier to adopting deep neural networks across many safety-critical domains. Tree regularization was recently proposed to encourage a deep neural network's decisions to resemble those of a globally compact, axis-aligned decision tree. However, it is often unreasonable to expect a single tree to predict well across all possible inputs. In practice, doing so can yield optima that are neither interpretable nor performant. To address this issue, we propose regional tree regularization – a method that encourages a deep model to be well-approximated by several separate decision trees specific to predefined regions of the input space. Across many datasets, including two healthcare applications, we show our approach delivers simpler explanations than other regularization schemes without compromising accuracy. Specifically, our regional regularizer finds many more “desirable” optima compared to global analogues.

