Development and implementation of a multi-physics high-fidelity model of the TRIGA Mark II reactor

2022 ◽  
Vol 166 ◽  
pp. 108704
Author(s):  
Christian Castagna ◽  
Carolina Introini ◽  
Antonio Cammi
2017 ◽  
Vol 34 (5) ◽  
pp. 1485-1500
Author(s):  
Leifur Leifsson ◽  
Slawomir Koziel

Purpose: The purpose of this paper is to reduce the overall computational time of aerodynamic shape optimization that involves accurate high-fidelity simulation models.
Design/methodology/approach: The proposed approach is based on the surrogate-based optimization paradigm. In particular, multi-fidelity surrogate models are used in the optimization process in place of the computationally expensive high-fidelity model. The multi-fidelity surrogate is constructed using physics-based low-fidelity models and a proper correction. This work introduces a novel correction methodology, referred to as the adaptive response prediction (ARP). The ARP technique corrects the low-fidelity model response, represented by the airfoil pressure distribution, through suitable horizontal and vertical adjustments.
Findings: Numerical investigations show the feasibility of solving real-world problems involving optimization of transonic airfoil shapes and accurate computational fluid dynamics simulation models of such surfaces. The results show that the proposed approach outperforms traditional surrogate-based approaches.
Originality/value: The proposed aerodynamic design optimization algorithm is novel and holistic. In particular, the ARP correction technique is original. The algorithm is useful for fast design of aerodynamic surfaces using high-fidelity simulation data in moderately sized search spaces, which is challenging using conventional methods because of excessive computational costs.
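As a rough illustration of the response-correction idea, the sketch below calibrates a horizontal and a vertical shift of a low-fidelity curve against a few high-fidelity samples by brute-force search. The toy models, grids, and shift ranges are invented for the example and are not the authors' actual ARP formulation.

```python
import numpy as np

def corrected_response(x, lo_model, dx, dy):
    # Horizontal shift dx and vertical shift dy applied to the
    # low-fidelity response (a crude stand-in for ARP's adjustments).
    return lo_model(x - dx) + dy

def fit_shifts(x, lo_model, hi_samples, dxs, dys):
    # Brute-force search for the (dx, dy) pair minimizing the squared
    # mismatch against a handful of high-fidelity samples.
    best, best_err = (0.0, 0.0), np.inf
    for dx in dxs:
        for dy in dys:
            err = np.sum((corrected_response(x, lo_model, dx, dy) - hi_samples) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

# Toy setup: the "high-fidelity" data is the low-fidelity curve shifted
# right by 0.2 and up by 0.5, so the search should recover those shifts.
lo = np.sin
x = np.linspace(0.0, np.pi, 50)
hi_samples = lo(x - 0.2) + 0.5

dx, dy = fit_shifts(x, lo, hi_samples,
                    np.linspace(-0.5, 0.5, 21), np.linspace(-1.0, 1.0, 41))
```

A real surrogate would be refit adaptively as the optimizer moves through the design space; here a single calibration suffices to show the mechanics.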


2018 ◽  
Vol 27 (2) ◽  
pp. 118-124 ◽  
Author(s):  
Andrei Odobescu ◽  
Isak Goodwin ◽  
Djamal Berbiche ◽  
Joseph BouMerhi ◽  
Patrick G. Harris ◽  
...  

Background: The Thiel embalming method has recently been used in a number of medical simulation fields. The authors investigate the use of Thiel vessels as a high-fidelity model for microvascular simulation and propose a new checklist-based evaluation instrument for microsurgical training. Methods: Thirteen residents and 2 attending microsurgeons performed video-recorded microvascular anastomoses on Thiel embalmed arteries, which were evaluated by 4 fellowship-trained microsurgeons using a new evaluation instrument, the Microvascular Evaluation Scale (MVES). The internal validity was assessed using the Cronbach coefficient. The external validity was verified using regression models. Results: The reliability assessment revealed an excellent intra-class correlation of 0.89. When comparing scores obtained by participants from different levels of training, attending surgeons and senior residents (Postgraduate Year [PGY] 4-5) scored significantly better than junior residents (PGY 1-3). The difference between senior residents and attending surgeons was not significant. When considering microsurgical experience, the differences were significant between the advanced group and the minimal and moderate experience groups. The differences between the minimal and moderate experience groups were not significant. Based on the data obtained, a score of 8 would translate into a level of microsurgical competence appropriate for clinical microsurgery. Conclusions: Thiel cadaveric vessels are a high-fidelity model for microsurgical simulation. Excellent internal and external validity measures were obtained using the MVES.


2020 ◽  
Vol 12 (1) ◽  
pp. 10
Author(s):  
Ion Matei ◽  
Alexander Feldman ◽  
Johan De Kleer ◽  
Alexandre Perez

In this paper, we propose a hybrid modeling approach for generating reduced models of a high-fidelity model of a physical system. We propose machine-learning-inspired representations for complex model components. These representations preserve in part the physical interpretation of the original components. Training platforms featuring automatic differentiation are used to learn the parameters of the new representations from data generated by the high-fidelity model. We showcase our approach in the context of fault diagnosis for a rail switch system. We generate three new model abstractions whose complexities are two orders of magnitude smaller than that of the high-fidelity model, in both the number of equations and the simulation time. Faster simulations ensure faster diagnosis solutions and enable the use of diagnosis algorithms that rely heavily on large numbers of model simulations.
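As a minimal illustration of fitting a reduced representation to high-fidelity data, the sketch below tunes the single parameter of a one-mode reduced model against samples from a stiffer two-mode "high-fidelity" response via gradient descent. The models, learning rate, and hand-derived gradient are all invented for the example; the paper relies on automatic-differentiation training platforms rather than a manual gradient.

```python
import numpy as np

def hi_fi(t):
    # "High-fidelity" response: a slow mode plus a fast, stiff mode.
    return 0.9 * np.exp(-1.0 * t) + 0.1 * np.exp(-20.0 * t)

def reduced(t, k):
    # Reduced representation: a single decaying mode with one parameter.
    return np.exp(-k * t)

t = np.linspace(0.0, 3.0, 100)
target = hi_fi(t)          # training data generated by the expensive model

k, lr = 0.5, 0.05          # initial guess and learning rate
for _ in range(500):
    resid = reduced(t, k) - target
    grad = np.mean(2.0 * resid * (-t) * reduced(t, k))  # d(MSE)/dk by hand
    k -= lr * grad
```

The fitted one-parameter model cannot capture the fast initial transient exactly, but it tracks the dominant slow mode at a fraction of the simulation cost, which is the trade the abstraction makes.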


PLoS ONE ◽  
2018 ◽  
Vol 13 (7) ◽  
pp. e0201172 ◽  
Author(s):  
Shreyas K. Roy ◽  
Qinghe Meng ◽  
Benjamin D. Sadowitz ◽  
Michaela Kollisch-Singule ◽  
Natesh Yepuri ◽  
...  

2021 ◽  
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy requires a large computational cost, often demanding days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall under three categories: (a) data fits, which construct an explicit mapping (e.g., using polynomials or Gaussian processes) from the system's parameters to the system response of interest; (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics); and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model through a projection of the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply a projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, called rank-2 Galerkin, which converts the nature of the ROM problem from memory-bandwidth to compute bound, and we apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that the rank-2 Galerkin ROM is 970 times more efficient than the full-order model while maintaining excellent accuracy in both the mean and the statistics of the field.
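The rank-1 versus rank-2 distinction can be illustrated on a toy LTI system: instead of time-stepping one reduced state vector per parameter sample, all samples' reduced states are stacked as columns of a matrix and advanced with a single matrix-matrix product per step. The operator, basis, parametrization, and forward-Euler stepping below are all invented for the example; they stand in for a POD basis and a proper time integrator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, n_samples = 200, 10, 32

A = -np.diag(rng.uniform(0.5, 2.0, n))               # stable full-order operator
Phi, _ = np.linalg.qr(rng.standard_normal((n, r)))   # stand-in for a POD basis
Ar = Phi.T @ A @ Phi                                 # reduced operator (r x r)

x0 = Phi @ rng.standard_normal(r)                    # initial state in span(Phi)
params = rng.uniform(0.5, 1.5, n_samples)            # per-sample scaling of A

dt, n_steps = 1e-3, 500

# Rank-2: reduced states of all samples as columns, one matmat per step.
Xhat = np.tile(Phi.T @ x0, (n_samples, 1)).T         # shape (r, n_samples)
for _ in range(n_steps):
    Xhat = Xhat + dt * (Ar @ Xhat) * params          # column j scaled by params[j]

# Rank-1 reference: loop over samples, one matvec per sample per step.
ref = np.empty((r, n_samples))
for j, p in enumerate(params):
    xh = Phi.T @ x0
    for _ in range(n_steps):
        xh = xh + dt * (Ar @ xh) * p
    ref[:, j] = xh
```

The two loops compute the same reduced trajectories; the rank-2 form simply replaces many small matvecs with fewer, larger matmats, which is what moves the kernel from memory-bandwidth bound toward compute bound.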


Author(s):  
Marco Baldan ◽  
Alexander Nikanorov ◽  
Bernard Nacke

Purpose: Reliable modeling of induction hardening requires a multi-physical approach, which makes it time-consuming. In designing an induction hardening system, combining such a model with an optimization technique allows managing a high number of design variables. However, this could lead to a tremendous overall computational cost. This paper aims to reduce the computational time of an optimal design problem by making use of multi-fidelity modeling and parallel computing.
Design/methodology/approach: In the multi-fidelity framework, the "high-fidelity" model couples the electromagnetic, thermal and metallurgical fields. It predicts the phase transformations during both the heating and cooling stages. The "low-fidelity" model is instead limited to the heating step. Its inaccuracy is counterbalanced by its cheapness, which makes it suitable for exploring the design space during optimization. The use of co-Kriging then allows merging information from the different fidelity models and predicting good design candidates. Field evaluations of both models occur in parallel.
Findings: In the design of an induction heating system, the synergy between the "high-fidelity" and "low-fidelity" models, together with the use of surrogates and parallel computing, could reduce the overall computational cost by up to one order of magnitude.
Practical implications: On one hand, multi-physical modeling of induction hardening implies a better understanding of the process, resulting in further potential process improvements. On the other hand, the optimization technique could be applied to many other computationally intensive real-life problems.
Originality/value: This paper highlights how parallel multi-fidelity optimization can be used in designing an induction hardening system.
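The fidelity-merging idea behind co-Kriging can be caricatured by the autoregressive form hi(x) ≈ rho·lo(x) + delta(x). The sketch below fits the scale rho and a linear discrepancy delta by ordinary least squares instead of Gaussian processes; the test functions, sample count, and discrepancy form are invented for the example and are much simpler than an actual co-Kriging model.

```python
import numpy as np

def lo(x):   # cheap model: wrong scale and offset relative to hi
    return 0.5 * np.sin(3 * x) + 0.2

def hi(x):   # "expensive" model: assume only a few evaluations affordable
    return np.sin(3 * x) + 0.1 * x

x_hi = np.linspace(0.0, 2.0, 6)   # the few affordable high-fidelity samples

# Least-squares fit of hi(x) ~ rho * lo(x) + a * x + b.
A = np.column_stack([lo(x_hi), x_hi, np.ones_like(x_hi)])
rho, a, b = np.linalg.lstsq(A, hi(x_hi), rcond=None)[0]

def surrogate(x):
    # Corrected low-fidelity prediction, usable anywhere in the domain.
    return rho * lo(x) + a * x + b
```

Here the high-fidelity model happens to be an exact affine transform of the low-fidelity one, so the fit recovers it from six samples; real co-Kriging additionally models the residual as a correlated process and supplies predictive variance for choosing candidates.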


Author(s):  
Markus Mäck ◽  
Michael Hanss

Abstract: The early design stage of mechanical structures is often characterized by unknown or only partially known boundary conditions and environmental influences. Particularly in the case of safety-relevant components, such as the crumple zone structure of a car, those uncertainties must be appropriately quantified and accounted for in the design process. For this purpose, possibility theory provides a suitable tool for the modeling of incomplete information and for uncertainty propagation. However, the numerical propagation of uncertainty described by possibility theory is accompanied by high computational costs. The necessarily repeated model evaluations make the uncertainty analysis challenging to realize for complex, large-scale models. Oftentimes, simplified and idealized models are used for the uncertainty analysis to speed up the simulation while accepting a loss of accuracy. The proposed multifidelity scheme for possibilistic uncertainty analysis, instead, takes advantage of the low cost of an inaccurate low-fidelity model and the accuracy of an expensive high-fidelity model. For this purpose, the functional dependency between the high- and low-fidelity models is exploited and captured in a possibilistic way. This results in a significant speedup for the uncertainty analysis while ensuring accuracy, by using only a small number of expensive high-fidelity model evaluations. The proposed approach is applied to an automotive car crash scenario in order to emphasize its versatility and applicability.
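The basic possibilistic propagation step can be sketched with alpha-cuts: for each membership level, the output interval is the range of the model over the corresponding input interval. The triangular fuzzy number, the model, and the brute-force cut optimization below are invented for illustration; the paper's multifidelity coupling of two models is not reproduced here.

```python
import numpy as np

def tri_cut(left, peak, right, alpha):
    # Alpha-cut [lo, hi] of a triangular fuzzy number.
    return (left + alpha * (peak - left), right - alpha * (right - peak))

def model(x):
    return x ** 2 + 1.0   # stand-in for an expensive simulation

def propagate(alpha):
    # Output interval at this alpha level: min/max of the model over
    # the input cut, found here by brute-force sampling.
    a, b = tri_cut(-1.0, 0.5, 2.0, alpha)
    ys = model(np.linspace(a, b, 201))
    return float(ys.min()), float(ys.max())
```

Each alpha level requires an optimization over the cut, which is exactly why repeated evaluations of an expensive model become prohibitive and a cheap low-fidelity stand-in is attractive.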


Author(s):  
Matthew A. Williams ◽  
Andrew G. Alleyne

In the early stages of control system development, designers often require multiple iterations to validate control designs in simulation. This can make high-fidelity models undesirable due to the increased computational complexity and time required for simulation. As a solution, lower-fidelity or simplified models are used for initial designs before controllers are tested on higher-fidelity models. In the event that unmodeled dynamics cause the controller to fail when applied to a higher-fidelity model, an iterative approach of designing and validating the controller's performance may be required. In this paper, a switched-fidelity modeling formulation for closed-loop dynamical systems is proposed to reduce computational effort while maintaining high accuracy in system outputs and control inputs. The effects on computational effort and accuracy are investigated by applying the formulation to a traditional vapor compression system with high- and low-fidelity models of the evaporator and condenser. This sample case showed the ability of the switched-fidelity framework to closely match the outputs and inputs of the high-fidelity model while decreasing computational cost by 32% relative to the high-fidelity model. For contrast, the low-fidelity model decreases computational cost by 48% relative to the high-fidelity model.
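A toy version of switching fidelity mid-simulation: here "fidelity" is reduced to the integration step size, and the switch triggers on the magnitude of the state derivative, so fine steps are spent only on the fast transient. The dynamics, threshold, and step sizes are invented for illustration and are far simpler than swapping component models in a vapor compression system.

```python
def f(x):
    return -2.0 * x   # shared dynamics; "fidelity" here is just step size

def simulate(x0, t_end, dt_hi=1e-3, dt_lo=1e-2, thresh=0.5):
    t, x, hi_steps, lo_steps = 0.0, x0, 0, 0
    while t < t_end:
        if abs(f(x)) > thresh:            # fast transient: fine steps
            dt = min(dt_hi, t_end - t)
            hi_steps += 1
        else:                             # slow decay: coarse steps
            dt = min(dt_lo, t_end - t)
            lo_steps += 1
        x += dt * f(x)                    # forward Euler update
        t += dt
    return x, hi_steps, lo_steps

x_end, hi_steps, lo_steps = simulate(1.0, 3.0)
```

Most of the horizon is covered by the coarse steps, so total cost sits between the all-fine and all-coarse runs while the transient is still resolved, mirroring the 32% versus 48% trade reported above.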


Author(s):  
Shreyas Kousik ◽  
Sean Vaskov ◽  
Matthew Johnson-Roberson ◽  
Ram Vasudevan

Path planning for autonomous vehicles in arbitrary environments requires a guarantee of safety, but this can be impractical to ensure in real time when the vehicle is described with a high-fidelity model. To address this problem, this paper develops a method to perform trajectory design by considering a low-fidelity model that accounts for model mismatch. The presented method begins by computing a conservative Forward Reachable Set (FRS) of the high-fidelity model's trajectories produced when tracking trajectories of the low-fidelity model over a finite time horizon. At runtime, the vehicle intersects this FRS with obstacles in the environment to eliminate trajectories that could lead to a collision, then selects an optimal plan from the remaining safe set. By bounding the time for this set intersection and subsequent path selection, this paper proves a lower bound on the FRS time horizon and the sensing horizon required to guarantee safety. The method is demonstrated in simulation using a kinematic Dubins car as the low-fidelity model and a dynamic unicycle as the high-fidelity model.
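The runtime filtering step (intersect a precomputed reachable set with obstacles, then pick the cheapest surviving plan) can be sketched with circles along candidate paths standing in for the FRS over-approximation. The paths, radius, obstacle, and cost function below are invented for the example; an actual FRS is computed offline from the tracking error between the two models.

```python
import numpy as np

def plan_is_safe(path_pts, radius, obstacles):
    # path_pts: (T, 2) planned positions; obstacles: (M, 2) points.
    # Safe if no obstacle lies inside any circle of the given radius
    # centred on the path (a crude stand-in for an FRS intersection).
    d = np.linalg.norm(path_pts[:, None, :] - obstacles[None, :, :], axis=-1)
    return bool(np.all(d > radius))

def select_plan(candidates, radius, obstacles, cost):
    safe = [p for p in candidates if plan_is_safe(p, radius, obstacles)]
    if not safe:
        return None        # no safe plan: fall back to a stopping maneuver
    return min(safe, key=cost)

t = np.linspace(0.0, 1.0, 20)
straight = np.stack([t, np.zeros_like(t)], axis=1)        # drives at the obstacle
swerve = np.stack([t, 0.6 * np.sin(np.pi * t)], axis=1)   # goes around it
obstacles = np.array([[0.5, 0.0]])

best = select_plan([straight, swerve], radius=0.3, obstacles=obstacles,
                   cost=lambda p: float(np.sum(np.diff(p, axis=0) ** 2)))
```

Because the circles over-approximate where the high-fidelity vehicle can actually end up while tracking the plan, discarding any plan whose circles touch an obstacle is conservative, which is the direction the safety guarantee needs.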

