Radiation Transport Simulation Results Assessment Through Comparison With Reduced Complexity Analytic and Computational Models (LA-UR-20-22342)

Author(s):  
Tyler J. Remedes ◽  
Scott D. Ramsey ◽  
Joseph H. Schmidt ◽  
James Baciak

In the past, when faced with an intractable problem, scientists would make tremendous efforts to simplify it while preserving the fundamental physics. Solutions to the simplified models provided insight into the original problem. Today, however, the affordability of high-performance computing has inverted the process for analyzing complex problems. In this paradigm, results from detailed computational scenarios can be better assessed by "building down" the complex model into simple models rooted in the fundamental or essential phenomenology. This work demonstrates how the analysis of the neutron flux spatial distribution within a simulated Holtec International HI-STORM 100 spent fuel cask is enhanced through reduced complexity analytic and computational modeling. This process involves identifying features in the neutron flux spatial distribution and determining the cause of each using reduced complexity computational and/or analytic models. Ultimately, this analysis process builds confidence in the accuracy of the original simulation result.
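A minimal sketch of the "building down" idea described above: compare a detailed flux profile against a reduced-complexity analytic model and flag where they diverge. The flux values, positions, and the exponential-attenuation model here are invented for illustration and are not taken from the report.

```python
import math

# Hypothetical detailed-simulation neutron flux samples (arbitrary units)
# at radial positions (cm) through a shield; values are illustrative only.
radii = [0.0, 10.0, 20.0, 30.0, 40.0]
flux = [1.00, 0.37, 0.14, 0.050, 0.018]

# Reduced-complexity analytic model: simple exponential attenuation,
# phi(r) = phi0 * exp(-sigma * r), with sigma fit from the first two points.
phi0 = flux[0]
sigma = -math.log(flux[1] / flux[0]) / (radii[1] - radii[0])

def analytic_flux(r):
    """Exponential-attenuation estimate of the flux at radius r."""
    return phi0 * math.exp(-sigma * r)

# Where the detailed result tracks the simple model, the gross spatial
# behaviour is explained by attenuation alone; large deviations flag
# features that need a separate reduced-complexity explanation.
for r, phi in zip(radii, flux):
    rel_dev = abs(analytic_flux(r) - phi) / phi
    print(f"r = {r:5.1f} cm  detailed = {phi:.3f}  "
          f"analytic = {analytic_flux(r):.3f}  rel. dev. = {rel_dev:.2%}")
```

The design choice is the point of the abstract: the simple model is not meant to replace the transport calculation, only to isolate which features of the result it can already account for.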

Author(s):  
Auclair Gilles ◽  
Benoit Danièle

Over the last 10 years, high-performance correction procedures have been developed for classical EPMA, and it is nowadays possible to obtain accurate quantitative analysis even for soft X-ray radiation. It is also possible to perform EPMA by adapting these accurate quantitative procedures to unusual applications such as the measurement of segregation over wide areas in as-cast and sheet steel products. The main objection to the analysis of segregation in steel by means of a line-scan mode is that it requires a very heavy sampling plan to make sure that the most significant points are analyzed. Moreover, only local chemical information is obtained, whereas mechanical properties also depend on the volume fraction and the spatial distribution of highly segregated zones. For these reasons we have chosen to systematically acquire X-ray calibrated mappings, which give pictures similar to optical micrographs. Although mapping requires lengthy acquisition times, there is a corresponding increase in the information provided by image analysis.


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 898
Author(s):  
Marta Saiz-Vivó ◽  
Adrián Colomer ◽  
Carles Fonfría ◽  
Luis Martí-Bonmatí ◽  
Valery Naranjo

Atrial fibrillation (AF) is the most common cardiac arrhythmia. At present, cardiac ablation is the main treatment procedure for AF. To guide and plan this procedure, it is essential for clinicians to obtain patient-specific 3D geometrical models of the atria. For this, there is an interest in automatic image segmentation algorithms, such as deep learning (DL) methods, as opposed to manual segmentation, an error-prone and time-consuming process. However, to optimize DL algorithms, many annotated examples are required, increasing acquisition costs. The aim of this work is to develop automatic and high-performance computational models for left and right atrium (LA and RA) segmentation from a few labelled MRI volumetric images with a 3D Dual U-Net algorithm. For this, a supervised domain adaptation (SDA) method is introduced to transfer knowledge from late gadolinium enhanced (LGE) MRI volumetric training samples (80 LA annotated samples) to a network trained with balanced steady-state free precession (bSSFP) MR images with a limited number of annotations (19 RA and LA annotated samples). The resulting knowledge-transferred SDA model outperformed the same network trained from scratch in both RA (Dice of 0.9160) and LA (Dice of 0.8813) segmentation tasks.
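The Dice scores quoted above are the standard overlap metric for comparing a predicted segmentation mask against a reference annotation. As a minimal illustration (with toy 2D masks, not the paper's data), the coefficient can be computed as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for one slice of an atrial segmentation.
pred = np.zeros((8, 8), dtype=np.uint8)
target = np.zeros((8, 8), dtype=np.uint8)
pred[2:6, 2:6] = 1      # 16 predicted voxels
target[3:7, 3:7] = 1    # 16 reference voxels, partially overlapping
print(round(dice_coefficient(pred, target), 4))  # 9 overlapping voxels -> 2*9/32 = 0.5625
```

A Dice of 1.0 means perfect overlap, so the reported values of 0.9160 (RA) and 0.8813 (LA) indicate close agreement with the manual annotations.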


Author(s):  
Sidik Permana ◽  
Mitsutoshi Suzuki

The key challenges in introducing a closed fuel cycle are the use of advanced fuel reprocessing and fabrication facilities as well as the nuclear nonproliferation aspect. The optimization targets of an advanced reactor design should be maintained properly to obtain high performance in safety and fuel breeding and to reduce the long-lived, high-level radioactivity of spent fuel through closed fuel cycle options. In this paper, the contribution of loading trans-uranium fuel to core performance, fuel production, and the reduction of minor actinides (MA) in high-level waste (HLW) is investigated over the operation of a large fast breeder reactor (FBR). Excess reactivity can be reduced by loading some minor actinides in the core, which also increases the fuel breeding capability; however, a small reduction in breeding capability is obtained when minor actinides are loaded in the blanket regions. In total, MA compositions are reduced with increasing operation time, with a relatively smaller reduction at the end of operation in the blanket regions (9%) than in the core regions (15%). In addition, adopting a closed MA cycle improves the intrinsic nuclear nonproliferation characteristics through the increase of even-mass plutonium in the isotopic plutonium composition.


2001 ◽  
Vol 356 (1412) ◽  
pp. 1209-1228 ◽  
Author(s):  
Nigel H. Goddard ◽  
Michael Hucka ◽  
Fred Howell ◽  
Hugo Cornelis ◽  
Kavita Shankar ◽  
...  

Biological nervous systems and the mechanisms underlying their operation exhibit astonishing complexity. Computational models of these systems have been correspondingly complex. As these models become ever more sophisticated, they become increasingly difficult to define, comprehend, manage and communicate. Consequently, for scientific understanding of biological nervous systems to progress, it is crucial for modellers to have software tools that support discussion, development and exchange of computational models. We describe methodologies that focus on these tasks, improving the ability of neuroscientists to engage in the modelling process. We report our findings on the requirements for these tools and discuss the use of declarative forms of model description (equivalent to object-oriented classes and database schema) which we call templates. We introduce NeuroML, a mark-up language for the neurosciences which is defined syntactically using templates, and its specific component intended as a common format for communication between modelling-related tools. Finally, we propose a template hierarchy for this modelling component of NeuroML, sufficient for describing models ranging in structural levels from neuron cell membranes to neural networks. These templates support both a framework for user-level interaction with models, and a high-performance framework for efficient simulation of the models.


Author(s):  
Seppo Louhenkilpi ◽  
Subhas Ganguly

In the fields of experiment, theory, modeling, and simulation, the most noteworthy advances applicable to steelmaking technology have been closely linked with the emergence of more powerful computing tools, advances in the requisite software and algorithm design, and, to a lesser degree, with developments in computing theory. These have enabled the integration of several different types of computational techniques (for example, quantum chemistry, molecular dynamics, DFT, FEM, soft computing, and statistical learning) to provide high-performance simulations of steelmaking processes based on emerging computational models and theories. This chapter reviews the general steps and concepts for developing a computational process model, including a few exercises in the area of steelmaking. The various sections of the chapter describe how to develop models for various issues related to steelmaking processes, showing that the simulation of a physical process starts from the process fundamentals. The examples include the steel converter, tank vacuum degassing, and continuous casting.
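The model-development steps described above can be sketched on a small example of the continuous-casting kind: transient 1-D heat conduction through a cooling steel slab, discretized with an explicit finite-difference scheme. All material values, dimensions, and boundary conditions here are illustrative assumptions, not taken from the chapter.

```python
# 1-D transient heat conduction dT/dt = alpha * d2T/dx2 for a steel slab
# section cooling from both surfaces, solved with an explicit scheme.
k = 30.0       # thermal conductivity, W/(m K)   (illustrative)
rho = 7500.0   # density, kg/m^3                 (illustrative)
cp = 700.0     # specific heat, J/(kg K)         (illustrative)
alpha = k / (rho * cp)       # thermal diffusivity, m^2/s

L = 0.2                      # slab thickness, m
n = 21                       # grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha   # stability requires dt <= dx^2 / (2 * alpha)

T_surf = 100.0               # imposed surface (mould/spray) temperature, degC
T = [1500.0] * n             # initial interior temperature, degC
T[0] = T[-1] = T_surf

def step(T):
    """One explicit time step; surface temperatures are held fixed."""
    new = T[:]
    for i in range(1, n - 1):
        new[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new

for _ in range(100):
    T = step(T)
print(f"centre temperature after {100 * dt:.0f} s: {T[n // 2]:.0f} degC")
```

The same pattern (governing equation, material data, discretization, stability check, time marching) is what scales up, with phase change and fluid flow added, to the converter, degassing, and casting models the chapter discusses.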


Author(s):  
Hiroki Yamashita ◽  
Guanchu Chen ◽  
Yeefeng Ruan ◽  
Paramsothy Jayakumar ◽  
Hiroyuki Sugiyama

A high-fidelity computational terrain dynamics model plays a crucial role in accurate vehicle mobility performance prediction under various maneuvering scenarios on deformable terrain. Although many computational models have been proposed using either finite element (FE) or discrete element (DE) approaches, phenomenological constitutive assumptions in FE soil models make the modeling of complex granular terrain behavior very difficult and DE soil models are computationally intensive, especially when considering a wide range of terrain. To address the limitations of existing deformable terrain models, this paper presents a hierarchical FE–DE multiscale tire–soil interaction simulation capability that can be integrated in the monolithic multibody dynamics solver for high-fidelity off-road mobility simulation using high-performance computing (HPC) techniques. It is demonstrated that computational cost is substantially lowered by the multiscale soil model as compared to the corresponding pure DE model while maintaining the solution accuracy. The multiscale tire–soil interaction model is validated against the soil bin mobility test data under various wheel load and tire inflation pressure conditions, thereby demonstrating the potential of the proposed method for resolving challenging vehicle-terrain interaction problems.


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Emmanuel Imuetinyan Aghimien ◽  
Lerato Millicent Aghimien ◽  
Olutomilayo Olayemi Petinrin ◽  
Douglas Omoregie Aghimien

Purpose This paper aims to present the result of a scientometric analysis conducted using studies on high-performance computing in computational modelling. This was done with a view to showcasing the need for high-performance computers (HPC) within the architecture, engineering and construction (AEC) industry in developing countries, particularly in Africa, where the use of HPC in developing computational models (CMs) for effective problem solving is still low. Design/methodology/approach An interpretivism philosophical stance was adopted for the study, which informed a scientometric review of existing studies gathered from the Scopus database. Keywords such as "high-performance computing" and "computational modelling" were used to extract papers from the database. Visualisation of Similarities viewer (VOSviewer) was used to prepare co-occurrence maps based on the bibliographic data gathered. Findings Findings revealed the scarcity of research emanating from Africa in this area of study. Furthermore, past studies had placed focus on high-performance computing in the development of computational modelling and theory, parallel computing and improved visualisation, large-scale application software, computer simulations and computational mathematical modelling. Future studies can also explore areas such as cloud computing, optimisation, high-level programming language, natural science computing, computer graphics equipment and Graphics Processing Units as they relate to the AEC industry. Research limitations/implications The study assessed a single database for the search of related studies. Originality/value The findings of this study serve as an excellent theoretical background for AEC researchers seeking to explore the use of HPC for CMs development in the quest for solving complex problems in the industry.
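The co-occurrence maps that VOSviewer prepares are built from counts of how often keyword pairs appear together in the same record. A minimal sketch of that counting step follows; the per-paper keyword lists are invented stand-ins for the study's Scopus export.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-paper keyword lists of the kind exported from Scopus.
papers = [
    ["high-performance computing", "computational modelling", "simulation"],
    ["high-performance computing", "cloud computing"],
    ["computational modelling", "simulation", "optimisation"],
    ["high-performance computing", "computational modelling"],
]

# Count each unordered keyword pair once per paper; sorting the pair
# makes (a, b) and (b, a) fall into the same Counter key.
cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

for (a, b), count in cooccurrence.most_common(3):
    print(f"{a} <-> {b}: {count}")
```

VOSviewer then lays out the keywords so that strongly co-occurring pairs sit close together, which is what produces the thematic clusters the study reports.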

