A parallel multi-fidelity optimization approach in induction hardening

Author(s):  
Marco Baldan ◽  
Alexander Nikanorov ◽  
Bernard Nacke

Purpose Reliable modeling of induction hardening requires a multi-physical approach, which makes it time-consuming. In designing an induction hardening system, combining such a model with an optimization technique makes it possible to handle a large number of design variables. However, this can lead to a tremendous overall computational cost. This paper aims to reduce the computational time of an optimal design problem by making use of multi-fidelity modeling and parallel computing. Design/methodology/approach In the multi-fidelity framework, the "high-fidelity" model couples the electromagnetic, thermal and metallurgical fields. It predicts the phase transformations during both the heating and cooling stages. The "low-fidelity" model is instead limited to the heating step. Its inaccuracy is counterbalanced by its cheapness, which makes it suitable for exploring the design space during optimization. Co-Kriging then merges information from the different fidelity models and predicts good design candidates. Field evaluations of both models run in parallel. Findings In the design of an induction heating system, the synergy between the "high-fidelity" and "low-fidelity" models, together with the use of surrogates and parallel computing, can reduce the overall computational cost by up to one order of magnitude. Practical implications On the one hand, multi-physical modeling of induction hardening implies a better understanding of the process, resulting in further potential process improvements. On the other hand, the optimization technique can be applied to many other computationally intensive real-life problems. Originality/value This paper highlights how parallel multi-fidelity optimization can be used in designing an induction hardening system.
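As an illustration of the multi-fidelity surrogate idea described above, the following sketch fits a Gaussian process to many cheap low-fidelity samples and a second Gaussian process to the discrepancy at a few expensive points (an additive-discrepancy variant rather than the full co-Kriging formulation of the paper). The toy one-dimensional functions, scikit-learn GPs and all parameter values are illustrative assumptions, not the coupled electromagnetic-thermal-metallurgical models.

```python
# Additive-discrepancy multi-fidelity surrogate (illustrative, toy 1-D models):
# a GP fits many cheap low-fidelity samples, a second GP fits the discrepancy
# to a few expensive high-fidelity samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def lofi(x):            # cheap model (toy stand-in for the heating-only model)
    return np.sin(8 * x)

def hifi(x):            # expensive model (toy stand-in for the coupled model)
    return 1.2 * np.sin(8 * x) + 0.3 * x

x_lo = np.linspace(0, 1, 25).reshape(-1, 1)     # many cheap samples
x_hi = np.linspace(0, 1, 5).reshape(-1, 1)      # few expensive samples

gp_lo = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_lo, lofi(x_lo.ravel()))
delta = hifi(x_hi.ravel()) - gp_lo.predict(x_hi)          # model discrepancy
gp_delta = GaussianProcessRegressor(kernel=RBF(0.3)).fit(x_hi, delta)

def predict_hifi(x):
    """Multi-fidelity prediction: low-fidelity trend plus learned correction."""
    return gp_lo.predict(x) + gp_delta.predict(x)

x_test = np.linspace(0, 1, 7).reshape(-1, 1)
print(np.c_[predict_hifi(x_test), hifi(x_test.ravel())])  # prediction vs. truth
```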

2017 ◽  
Vol 34 (5) ◽  
pp. 1485-1500
Author(s):  
Leifur Leifsson ◽  
Slawomir Koziel

Purpose The purpose of this paper is to reduce the overall computational time of aerodynamic shape optimization that involves accurate high-fidelity simulation models. Design/methodology/approach The proposed approach is based on the surrogate-based optimization paradigm. In particular, multi-fidelity surrogate models are used in the optimization process in place of the computationally expensive high-fidelity model. The multi-fidelity surrogate is constructed using physics-based low-fidelity models and a proper correction. This work introduces a novel correction methodology, referred to as adaptive response prediction (ARP). The ARP technique corrects the low-fidelity model response, represented by the airfoil pressure distribution, through suitable horizontal and vertical adjustments. Findings Numerical investigations show the feasibility of solving real-world problems involving optimization of transonic airfoil shapes with accurate computational fluid dynamics simulation models of such surfaces. The results show that the proposed approach outperforms traditional surrogate-based approaches. Originality/value The proposed aerodynamic design optimization algorithm is novel and holistic. In particular, the ARP correction technique is original. The algorithm is useful for the fast design of aerodynamic surfaces using high-fidelity simulation data in moderately sized search spaces, which is challenging for conventional methods because of excessive computational costs.
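The abstract does not give the ARP equations, so the sketch below only illustrates the general idea of correcting a low-fidelity pressure distribution with horizontal and vertical adjustments identified at a reference design; the toy Cp functions, the peak-matching rule and all constants are assumptions, not the authors' formulation.

```python
# Illustrative shift-and-scale correction of a low-fidelity pressure
# distribution, identified at a reference design and reused at a new design.
import numpy as np

x = np.linspace(0.0, 1.0, 101)                   # chordwise stations

def cp_lofi(thick):    # toy low-fidelity Cp model
    return -np.sin(np.pi * x) * (1.0 + thick)

def cp_hifi(thick):    # toy "high-fidelity" Cp: shifted and rescaled
    return -1.15 * np.sin(np.pi * np.clip(x - 0.05, 0, 1)) * (1.0 + thick)

t_ref, t_new = 0.10, 0.14
lo_ref, hi_ref = cp_lofi(t_ref), cp_hifi(t_ref)

# horizontal adjustment: match the location of the suction peak
shift = x[np.argmin(hi_ref)] - x[np.argmin(lo_ref)]
# vertical adjustment: match the peak magnitudes
scale = hi_ref.min() / lo_ref.min()

def cp_corrected(thick):
    """Low-fidelity response shifted and scaled with the reference corrections."""
    lo = cp_lofi(thick)
    return scale * np.interp(x - shift, x, lo)

err = np.abs(cp_corrected(t_new) - cp_hifi(t_new)).max()
print(f"max |corrected - high-fidelity| at the new design: {err:.3f}")
```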


2021 ◽  
Author(s):  
Francesco Rizzi ◽  
Eric Parish ◽  
Patrick Blonigan ◽  
John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges involved: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy requires a large computational cost, often demanding days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall under three categories, namely (a) data fits, which construct an explicit mapping (e.g., using polynomials, Gaussian processes) from the system's parameters to the system response of interest, (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics), and (c) pROMs, which reduce the number of degrees of freedom in the high-fidelity model by projecting the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply a projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory-bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation, called rank-2 Galerkin, of the Galerkin ROM for linear time-invariant (LTI) dynamical systems, which converts the nature of the ROM problem from memory-bandwidth bound to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that the rank-2 Galerkin ROM is 970 times more efficient than the full-order model, while maintaining excellent accuracy in both the mean and statistics of the field.
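A minimal sketch of the rank-1 versus rank-2 Galerkin distinction may help: for an LTI system dx/dt = A x + f(t; mu) reduced via x ≈ Phi x̂, the rank-2 form advances all Monte Carlo samples simultaneously as columns of one state matrix, so each time step is a matrix-matrix product rather than many matrix-vector products. The toy operators, basis and forcing below are placeholders, not the seismic shear-wave model.

```python
# Rank-2 Galerkin sketch for an LTI system: all parameter samples advance
# together as columns of a matrix, so the time loop is compute bound (GEMM)
# rather than memory-bandwidth bound (many GEMVs). Toy operators only.
import numpy as np

rng = np.random.default_rng(0)
n, k, n_samples, n_steps, dt = 400, 20, 64, 200, 1e-3

A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # toy stable operator
Phi = np.linalg.qr(rng.standard_normal((n, k)))[0]     # orthonormal reduced basis
A_r = Phi.T @ A @ Phi                                  # reduced operator

mu = rng.uniform(0.5, 1.5, n_samples)                  # sampled parameters
f_r = Phi.T @ rng.standard_normal(n)                   # reduced forcing shape

Xhat = np.zeros((k, n_samples))                        # columns = MC samples
for _ in range(n_steps):                               # forward Euler in time
    Xhat += dt * (A_r @ Xhat + np.outer(f_r, mu))      # one GEMM per step

mean_field = Phi @ Xhat.mean(axis=1)                   # statistics of the field
print(mean_field.shape, Xhat.std(axis=1).max())
```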


Author(s):  
Satyavir Singh ◽  
Mohammad Abid Bazaz ◽  
Shahkar Ahmad Nahvi

Purpose The purpose of this paper is to demonstrate the applicability of the discrete empirical interpolation method (DEIM) for simulating the swing dynamics of benchmark power system problems. The authors demonstrate that considerable savings in computational time and resources are obtained using this methodology. Another purpose is to apply a recently developed modified DEIM strategy, with a reduced on-line computational burden, to this problem. Design/methodology/approach The on-line computational cost of the power system dynamics problem is reduced by using DEIM, which reduces the complexity of evaluating the nonlinear function in the reduced model to a cost proportional to the number of reduced modes. The on-line computational cost is also reduced by using an approximate snapshot ensemble to construct the reduced basis. Findings Considerable savings in computational resources and time are obtained when DEIM is used for simulating swing dynamics. The on-line cost implications of DEIM are also reduced considerably by using approximate snapshots to construct the reduced basis. Originality/value The applicability of DEIM (with and without an approximate ensemble) to a large-scale power system dynamics problem is demonstrated for the first time.
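For reference, the classical greedy DEIM index selection and the resulting interpolation of a nonlinear term can be sketched in a few lines; the snapshot data below are synthetic placeholders, not the power system swing equations.

```python
# Minimal DEIM sketch: greedy selection of interpolation indices from a
# nonlinear-snapshot basis U, then approximation f ≈ U (P^T U)^{-1} P^T f,
# so only the selected entries of the nonlinearity must be evaluated online.
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection (Chaturantabut & Sorensen, 2010)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # residual of the j-th basis vector w.r.t. the current interpolation
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

rng = np.random.default_rng(1)
snapshots = np.sin(np.outer(np.linspace(0, 3, 500), rng.uniform(1, 4, 40)))
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
U = U[:, :10]                                  # nonlinear-term basis
p = deim_indices(U)

f = np.sin(2.3 * np.linspace(0, 3, 500))       # a "new" nonlinear evaluation
f_deim = U @ np.linalg.solve(U[p, :], f[p])    # uses only 10 entries of f
print(np.abs(f_deim - f).max())
```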


2019 ◽  
Vol 9 (10) ◽  
pp. 1972 ◽  
Author(s):  
Elzbieta Gawronska

Progress in computational methods has been stimulated by the widespread availability of cheap computational power, leading to improved precision and efficiency of simulation software. Simulation has become an indispensable tool for engineers who want to attack increasingly large problems or to search a larger phase space of process and system variables to find the optimal design. In this paper, we introduce a new computational approach that involves a mixed time-stepping scheme and decreases the computational cost. Implementation of our algorithm does not require a parallel computing environment. Our strategy splits the domain of a dynamically changing physical phenomenon into sub-domains and allows the numerical model to be adjusted to each of them. We are the first (to the best of our knowledge) to show that it is possible to use a mixed time-partitioning method with various combinations of schemes during the solidification of binary alloys. In particular, we use a fixed time step in one domain and look for much larger time steps in other domains, while maintaining high accuracy. Our method is independent of the number of domains considered, in contrast to traditional methods in which only two domains were considered. Mixed time-partitioning methods are of high importance here because of the natural separation of domain types. Typically, all important physical phenomena occur in the casting and are of high computational cost, while in the mold domains less dynamic processes are observed and consequently a larger time step can be chosen. Finally, we performed a series of numerical experiments and demonstrate that our approach reduces the computational time by more than a factor of three without losing significant precision and without parallel computing.
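The sub-domain time-stepping idea can be illustrated with a toy one-dimensional heat conduction problem: a fast (casting-like) sub-domain is subcycled with a small time step while a slow (mold-like) sub-domain is advanced with a step several times larger, exchanging interface values once per coarse step. The explicit scheme, interface treatment and all material values are illustrative assumptions, not the paper's solidification solver.

```python
# Illustrative subcycling sketch: explicit 1-D heat conduction split into a
# "dynamic" sub-domain advanced with a small time step and a "slow" sub-domain
# advanced with a step N times larger, exchanging interface temperatures once
# per coarse step.
import numpy as np

nx, dx = 50, 1.0 / 50
alpha_fast, alpha_slow = 1.0, 0.05            # fast (casting-like) / slow (mold-like)
dt_fine = 0.4 * dx**2 / alpha_fast            # stability limit of the fast domain
N = 8                                         # coarse step = N fine steps
dt_coarse = N * dt_fine

T_fast = np.full(nx, 1000.0)                  # initial temperatures
T_slow = np.full(nx, 20.0)

def explicit_step(T, left, right, alpha, dt):
    """One explicit FD step with fixed boundary values left/right."""
    Tn = T.copy()
    Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0], Tn[-1] = left, right
    return Tn

for _ in range(200):                          # coarse time loop
    iface = 0.5 * (T_fast[-1] + T_slow[0])    # shared interface value, frozen
    for _ in range(N):                        # subcycle the fast domain
        T_fast = explicit_step(T_fast, 1000.0, iface, alpha_fast, dt_fine)
    T_slow = explicit_step(T_slow, iface, 20.0, alpha_slow, dt_coarse)

print(T_fast[-1], T_slow[0])
```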


2016 ◽  
Vol 33 (4) ◽  
pp. 1095-1113 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose – The purpose of this paper is to investigate strategies for expedited dimension scaling of electromagnetic (EM)-simulated microwave and antenna structures, exploiting the concept of variable-fidelity inverse surrogate modeling. Design/methodology/approach – A fast inverse surrogate modeling technique is described for dimension scaling of microwave and antenna structures. The model is established using reference designs obtained from a cheap underlying low-fidelity model and corrected to allow structure scaling at a high accuracy level. Numerical and experimental case studies are provided demonstrating the feasibility of the proposed approach. Findings – It is possible, by an appropriate combination of surrogate modeling techniques, to establish an inverse model for explicit determination of the geometry dimensions of the structure at hand so as to re-design it for various operating frequencies. The scaling process can be concluded at a low computational cost corresponding to just a few evaluations of the high-fidelity computational model of the structure. Research limitations/implications – The present study is a step toward the development of procedures for rapid dimension scaling of microwave and antenna structures at high-fidelity EM-simulation accuracy. Originality/value – The proposed modeling framework proved useful for fast geometry scaling of microwave and antenna structures, which is very laborious when using conventional methods. To the authors' knowledge, this is one of the first attempts at surrogate-assisted dimension scaling of microwave components at the EM-simulation level.
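The following sketch conveys the inverse-surrogate idea in its simplest form: reference designs produced by a cheap model define a mapping from operating frequency to geometry dimensions, which is then corrected with a couple of high-fidelity reference designs. The polynomial model, the multiplicative correction and all numbers are assumptions for illustration, not the authors' formulation.

```python
# Conceptual inverse-surrogate sketch for dimension scaling (toy values):
# low-fidelity reference designs define dimension(frequency), corrected with
# two high-fidelity reference designs.
import numpy as np

f_ref = np.array([2.0, 2.5, 3.0, 3.5, 4.0])              # GHz, low-fidelity refs
dims_lo = np.column_stack([24.0 / f_ref, 8.0 / f_ref])   # toy length/width [mm]

# inverse surrogate: polynomial fit dimension(f) per geometry parameter
coeffs = [np.polyfit(f_ref, dims_lo[:, j], deg=2) for j in range(dims_lo.shape[1])]

def inverse_surrogate(f):
    return np.array([np.polyval(c, f) for c in coeffs])

# correction from two high-fidelity reference designs (toy values)
f_hi = np.array([2.5, 3.5])
dims_hi = np.column_stack([25.0 / f_hi, 8.4 / f_hi])
ratio = dims_hi / np.array([inverse_surrogate(f) for f in f_hi])
correction = ratio.mean(axis=0)                           # average multiplicative shift

def scale_design(f_target):
    """Predicted geometry for a new operating frequency."""
    return correction * inverse_surrogate(f_target)

print(scale_design(3.2))
```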


Author(s):  
Matthew A. Williams ◽  
Andrew G. Alleyne

In the early stages of control system development, designers often require multiple iterations to validate control designs in simulation. This has the potential to make high-fidelity models undesirable due to the increased computational complexity and time required for simulation. As a solution, lower-fidelity or simplified models are used for initial designs before controllers are tested on higher-fidelity models. In the event that unmodeled dynamics cause the controller to fail when applied to a higher-fidelity model, an iterative approach of designing and validating the controller's performance may be required. In this paper, a switched-fidelity modeling formulation for closed-loop dynamical systems is proposed to reduce computational effort while maintaining elevated accuracy of system outputs and control inputs. The effects on computational effort and accuracy are investigated by applying the formulation to a traditional vapor compression system with high- and low-fidelity models of the evaporator and condenser. This sample case showed the ability of the switched-fidelity framework to closely match the outputs and inputs of the high-fidelity model while decreasing computational cost by 32% relative to the high-fidelity model. For contrast, the low-fidelity model decreases computational cost by 48% relative to the high-fidelity model.
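A schematic of the switched-fidelity simulation loop is sketched below: a low-fidelity plant model is used while the closed loop is near steady state and a higher-fidelity model is used during transients. The toy first-order models, the proportional controller and the switching threshold are assumptions, not the vapor compression system of the paper.

```python
# Conceptual switched-fidelity closed-loop simulation (toy models): the plant
# model fidelity is switched at run time based on a simple transient criterion.
import numpy as np

def step_hifi(x, u, dt):       # "high-fidelity" plant: two coupled states
    x1, x2 = x
    return np.array([x1 + dt * (-2.0 * x1 + x2 + u),
                     x2 + dt * (-5.0 * x2 + 4.0 * u)])

def step_lofi(x, u, dt):       # "low-fidelity" plant: fast state at equilibrium
    x1, _ = x
    x2 = 0.8 * u                                   # quasi-steady approximation
    return np.array([x1 + dt * (-2.0 * x1 + x2 + u), x2])

dt, x = 0.01, np.zeros(2)
for k in range(1000):
    ref = 1.0 if k >= 200 else 0.0                 # step reference at t = 2 s
    u = 2.0 * (ref - x[0])                         # simple proportional control
    transient = abs(ref - x[0]) > 0.05             # switching criterion
    x = step_hifi(x, u, dt) if transient else step_lofi(x, u, dt)

print(x)
```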


2017 ◽  
Vol 89 (4) ◽  
pp. 609-619 ◽  
Author(s):  
Witold Artur Klimczyk ◽  
Zdobyslaw Jan Goraj

Purpose This paper aims to address the issue of designing an aerodynamically robust empennage. Aircraft design optimization, often narrowed to the analysis of cruise conditions, does not take other flight phases (manoeuvres) into account. These, especially in the unmanned air vehicle sector, can be a significant part of the whole flight. The empennage is the part of the aircraft with a crucial function during manoeuvres, so it is important to consider robustness to achieve the highest performance. Design/methodology/approach A methodology for robust wing design is presented. Surrogate modelling using kriging is used to reduce the cost of optimization with high-fidelity aerodynamic calculations. Varying flight conditions, i.e. the angle of attack, are analysed to assess the robustness of the design for a particular mission. Two cases are compared: global optimization of 11 parameters and an optimization divided into two consecutive sub-optimizations. Findings Surrogate modelling proves its usefulness for cutting computational time. Splitting the problem into sub-optimizations finds a better design at a lower computational cost. Practical implications It is demonstrated how surrogate modelling can be used for the analysis of robustness, and why it is important to consider it. The intuitive split of wing design into airfoil and planform sub-optimizations brings promising savings in optimization cost. Originality/value The methodology presented in this paper can be used in various optimization problems, especially those involving expensive computations and requiring top-quality designs.
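The robustness assessment can be sketched as follows: a kriging (Gaussian process) surrogate is trained over the design variable and the angle of attack, and candidate designs are then ranked by their worst surrogate-predicted drag over the manoeuvre range rather than at a single cruise point. The toy drag function, scikit-learn GP and sample sizes are illustrative assumptions, not the high-fidelity aerodynamic model.

```python
# Illustrative kriging-based robustness check over the angle-of-attack range.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def drag(camber, aoa):         # toy stand-in for a high-fidelity CFD result
    return 0.02 + 0.5 * (camber - 0.04) ** 2 + 0.002 * (aoa - 3 * camber * 50) ** 2

rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(0.0, 0.08, 60),      # camber samples
                     rng.uniform(-2.0, 8.0, 60)])     # angle-of-attack samples
y = drag(X[:, 0], X[:, 1])
gp = GaussianProcessRegressor(kernel=RBF([0.02, 2.0]), normalize_y=True).fit(X, y)

aoa_grid = np.linspace(-2.0, 8.0, 21)
def robust_objective(camber):
    """Worst-case surrogate drag over the manoeuvre angle-of-attack range."""
    pts = np.column_stack([np.full_like(aoa_grid, camber), aoa_grid])
    return gp.predict(pts).max()

candidates = np.linspace(0.0, 0.08, 41)
best = candidates[np.argmin([robust_objective(c) for c in candidates])]
print(f"robust camber choice: {best:.3f}")
```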


2018 ◽  
Vol 35 (2) ◽  
pp. 710-732 ◽  
Author(s):  
Jie Liu ◽  
Guilin Wen ◽  
Qixiang Qing ◽  
Fangyi Li ◽  
Yi Min Xie

Purpose This paper aims to tackle the challenging topic of continuum structural layout design in the presence of random loads and to develop an efficient robust method. Design/methodology/approach An innovative robust topology optimization approach for continuum structures with random applied loads is reported. The expectation and the variance of the structural compliance are minimized simultaneously. Uncertain load vectors are dealt with by using additional uncertain pseudo-random load vectors. The sensitivity information of the robust objective function is obtained approximately by using the Taylor expansion technique. The design problem is solved using the bi-directional evolutionary structural optimization method with the derived sensitivity numbers. Findings The numerical examples show significant topological changes in the robust solutions compared with the equivalent deterministic solutions. Originality/value A simple yet efficient robust topology optimization approach for continuum structures with random applied loads is developed. The computational time scales linearly with the number of applied loads with uncertainty, which is very efficient compared with Monte Carlo-based optimization.
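The reason the cost scales with the number of uncertain loads rather than with the number of samples can be sketched directly: for a linear structure, a load with a random direction decomposes into deterministic pseudo-loads, so the compliance for any realization follows from a fixed small number of solves. The toy stiffness matrix and load description below are assumptions, not the paper's exact pseudo-load construction or evolutionary update.

```python
# Sketch: compliance statistics under a random load direction from only two
# FE solves, because the response depends linearly on the pseudo-loads.
import numpy as np

rng = np.random.default_rng(3)
n = 30
M = rng.standard_normal((n, n))
K = M @ M.T + n * np.eye(n)                    # SPD stand-in for a FE stiffness

F1, F2 = np.zeros(n), np.zeros(n)
F1[-2], F2[-1] = 1.0, 1.0                      # unit pseudo-loads (x and y)
u1, u2 = np.linalg.solve(K, F1), np.linalg.solve(K, F2)

def compliance(theta):
    """Compliance for load direction theta, reusing the two pseudo-solves."""
    F = np.cos(theta) * F1 + np.sin(theta) * F2
    u = np.cos(theta) * u1 + np.sin(theta) * u2
    return F @ u

# statistics for theta ~ N(0, sigma^2), sampled from the closed-form expression
sigma = np.deg2rad(15.0)
thetas = rng.normal(0.0, sigma, 20000)
c = np.array([compliance(t) for t in thetas])  # no extra FE solves needed
print(f"E[c] = {c.mean():.4e}, Var[c] = {c.var():.4e}")
```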


2016 ◽  
Vol 33 (7) ◽  
pp. 2007-2018 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose The purpose of this paper is to develop techniques for expedited design optimization of complex and numerically expensive electromagnetic (EM) simulation models of antenna structures, validated both numerically and experimentally. Design/methodology/approach The optimization task is performed using a technique that combines gradient search with adjoint sensitivities, a trust-region framework, and EM simulation models of various levels of fidelity (coarse, medium and fine). An adaptive procedure for switching between the models of increasing accuracy in the course of the optimization process is implemented. Numerical and experimental case studies are provided to validate the correctness of the design approach. Findings An appropriate combination of a suitable design optimization algorithm embedded in a trust-region framework and model selection techniques allows for a considerable reduction of the antenna optimization cost compared to conventional methods. Research limitations/implications The study demonstrates the feasibility of EM-simulation-driven design optimization of antennas at low computational cost. The presented techniques reach beyond the common design approaches based on direct optimization of EM models using conventional gradient-based or derivative-free methods, particularly in terms of reliability and reduction of the computational cost of the design process. Originality/value Simulation-driven design optimization of contemporary antenna structures is very challenging when high-fidelity EM simulations are used to evaluate the performance of the structure at hand. The proposed variable-fidelity optimization technique with adjoint sensitivities and trust regions permits rapid optimization of numerically demanding antenna designs (here, a dielectric resonator antenna and a compact monopole), which cannot be achieved with conventional methods. The design cost of the proposed strategy is up to 60 percent lower than direct optimization exploiting adjoint sensitivities. Experimental validation of the results is also provided.
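A generic sketch of trust-region model management conveys the overall flow (it omits the adjoint sensitivities and the coarse/medium/fine model switching that the paper relies on): at each iteration an additively corrected low-fidelity model is minimized inside the trust region, the step is accepted or rejected based on the actual high-fidelity improvement, and the trust radius is updated. The toy objective functions and all constants are assumptions.

```python
# Generic variable-fidelity trust-region loop with a zero-order additive
# correction of the coarse model. Toy "fine" and "coarse" objectives only.
import numpy as np
from scipy.optimize import minimize

def hifi(x):                    # toy "fine" model
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.4) ** 2 + 0.1 * np.sin(5 * x[0])

def lofi(x):                    # toy "coarse" model (biased, smoother)
    return (x[0] - 1.0) ** 2 + 2.5 * (x[1] + 0.3) ** 2

x, radius = np.zeros(2), 0.5
fx = hifi(x)
for _ in range(20):
    shift = fx - lofi(x)                               # additive correction at x
    corrected = lambda z: lofi(z) + shift
    bounds = [(xi - radius, xi + radius) for xi in x]  # trust-region subproblem
    cand = minimize(corrected, x, bounds=bounds).x
    f_cand = hifi(cand)
    pred = fx - corrected(cand)                        # predicted decrease
    rho = (fx - f_cand) / pred if pred > 1e-12 else 0.0
    if rho > 0.1:                                      # accept the step
        x, fx = cand, f_cand
    radius = radius * 2.0 if rho > 0.75 else radius * 0.5 if rho < 0.25 else radius

print(x, fx)
```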


Aerospace ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 398
Author(s):  
Angelos Kafkas ◽  
Spyridon Kilimtzidis ◽  
Athanasios Kotzakolios ◽  
Vassilis Kostopoulos ◽  
George Lampeas

Efficient optimization is a prerequisite to realizing the full potential of an aeronautical structure. The success of an optimization framework is predominantly influenced by its ability to capture all relevant physics. Furthermore, high computational efficiency allows a greater number of runs during the design optimization process to support decision-making. Efficiency can be improved by selecting highly optimized algorithms and by reducing the dimensionality of the optimization problem, formulating it in terms of a limited number of significant parameters. A plethora of variable-fidelity tools, dictated by each design stage, are commonly used, ranging from costly high-fidelity to low-cost, low-fidelity methods. Unfortunately, despite rapid solution times, an optimization framework utilizing low-fidelity tools does not necessarily capture the physical problem accurately. At the same time, high-fidelity solution methods incur a very high computational cost. Aiming to bridge this gap and combine the best of both worlds, a multi-fidelity optimization framework was constructed in this research paper. In our approach, the low-fidelity modules, and especially the equivalent-plate structural representation capable of drastically reducing the associated computational time, form the backbone of the optimization framework, and a MIDACO optimizer is tasked with providing an initial optimized design. The higher-fidelity modules are then employed to explore possible further gains in performance. The developed framework was applied to a benchmark airliner wing. As demonstrated, a reasonable mass reduction was obtained for a current state-of-the-art configuration.
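A conceptual two-stage sketch of such a framework is given below; SciPy's differential evolution merely stands in for the MIDACO global search, and the toy mass and stress functions stand in for the equivalent-plate and higher-fidelity structural modules. Everything in it, including the allowable stress, is an assumption for illustration.

```python
# Two-stage multi-fidelity sketch: global search on a cheap model, then local
# refinement against a more expensive model. Toy wing-sizing functions only.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def mass(x):                       # skin thickness, spar thickness (toy) [mm]
    return 120.0 * x[0] + 80.0 * x[1]

def stress_lofi(x):                # cheap structural estimate (toy)
    return 400.0 / (x[0] + 0.8 * x[1])

def stress_hifi(x):                # "expensive" estimate with extra physics (toy)
    return 400.0 / (x[0] + 0.8 * x[1]) + 15.0 * np.exp(-x[0])

ALLOWABLE = 250.0                  # MPa, assumed allowable stress
def penalized(x, stress):          # mass plus penalty on the stress constraint
    return mass(x) + 1e3 * max(0.0, stress(x) - ALLOWABLE)

bounds = [(0.8, 4.0), (0.8, 4.0)]
stage1 = differential_evolution(lambda x: penalized(x, stress_lofi),
                                bounds, seed=0, maxiter=50)
stage2 = minimize(lambda x: penalized(x, stress_hifi),
                  stage1.x, bounds=bounds, method="Nelder-Mead")
print("global (low-fi) design:", stage1.x, "refined (high-fi) design:", stage2.x)
```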

