Inverse surrogate modeling for low-cost geometry scaling of microwave and antenna structures

2016 ◽  
Vol 33 (4) ◽  
pp. 1095-1113 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose – The purpose of this paper is to investigate strategies for expedited dimension scaling of electromagnetic (EM)-simulated microwave and antenna structures, exploiting the concept of variable-fidelity inverse surrogate modeling.
Design/methodology/approach – A fast inverse surrogate modeling technique is described for dimension scaling of microwave and antenna structures. The model is established using reference designs obtained for a cheap underlying low-fidelity model and corrected to allow structure scaling at a high accuracy level. Numerical and experimental case studies are provided, demonstrating the feasibility of the proposed approach.
Findings – It is possible, by an appropriate combination of surrogate modeling techniques, to establish an inverse model for explicit determination of the geometry dimensions of the structure at hand, so as to re-design it for various operating frequencies. The scaling process can be concluded at a low computational cost corresponding to just a few evaluations of the high-fidelity computational model of the structure.
Research limitations/implications – The present study is a step toward the development of procedures for rapid dimension scaling of microwave and antenna structures at high-fidelity EM-simulation accuracy.
Originality/value – The proposed modeling framework proved useful for fast geometry scaling of microwave and antenna structures, which is very laborious when using conventional methods. To the authors' knowledge, this is one of the first attempts at surrogate-assisted dimension scaling of microwave components at the EM-simulation level.
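As a toy illustration of the inverse-surrogate idea (all frequencies and dimensions below are hypothetical, not taken from the paper), one can fit an explicit frequency-to-dimension map on low-fidelity reference designs and then correct it with a handful of high-fidelity reference designs:

```python
import numpy as np

# Hypothetical reference data: operating frequencies (GHz) and the
# corresponding optimized dimension (mm) of some structure, obtained
# by optimizing a cheap low-fidelity EM model at each frequency.
f_ref = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
dim_lo = np.array([28.0, 18.5, 14.0, 11.2, 9.3])

# Inverse surrogate: explicit map from target frequency to dimension.
# Dimensions scale roughly as 1/f, so we fit a polynomial in 1/f.
coeff = np.polyfit(1.0 / f_ref, dim_lo, deg=2)

def inverse_surrogate(f0):
    """Low-fidelity prediction of the dimension for target frequency f0."""
    return np.polyval(coeff, 1.0 / f0)

# Correction step: a couple of high-fidelity reference designs (again
# hypothetical) shift the low-fidelity trend by an additive offset.
f_hf = np.array([1.2, 2.8])
dim_hf = np.array([24.6, 10.1])
offset = np.mean(dim_hf - inverse_surrogate(f_hf))

def scaled_design(f0):
    """Corrected inverse model used for re-design at a new frequency."""
    return inverse_surrogate(f0) + offset
```

Re-designing for a new operating frequency then costs one evaluation of `scaled_design`, with only the few high-fidelity runs used for the correction.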

2016 ◽  
Vol 33 (7) ◽  
pp. 2007-2018 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose – Development of techniques for expedited design optimization of complex and numerically expensive electromagnetic (EM) simulation models of antenna structures, validated both numerically and experimentally.
Design/methodology/approach – The optimization task is performed using a technique that combines gradient search with adjoint sensitivities, a trust region framework, and EM simulation models of various fidelity levels (coarse, medium and fine). An adaptive procedure for switching between the models of increasing accuracy in the course of the optimization process is implemented. Numerical and experimental case studies are provided to validate the correctness of the design approach.
Findings – An appropriate combination of a suitable design optimization algorithm embedded in a trust region framework with model selection techniques allows for considerable reduction of the antenna optimization cost compared to conventional methods.
Research limitations/implications – The study demonstrates the feasibility of EM-simulation-driven design optimization of antennas at low computational cost. The presented techniques reach beyond common design approaches based on direct optimization of EM models using conventional gradient-based or derivative-free methods, particularly in terms of reliability and reduction of the computational cost of the design process.
Originality/value – Simulation-driven design optimization of contemporary antenna structures is very challenging when high-fidelity EM simulations are utilized for performance evaluation of the structure at hand. The proposed variable-fidelity optimization technique with adjoint sensitivities and trust regions permits rapid optimization of numerically demanding antenna designs (here, a dielectric resonator antenna and a compact monopole), which cannot be achieved with conventional methods. The design cost of the proposed strategy is up to 60 percent lower than direct optimization exploiting adjoint sensitivities. Experimental validation of the results is also provided.
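The accept/reject logic of a trust-region loop steered by cheap-model gradients can be sketched as follows. This is only a caricature, not the authors' algorithm: generic quadratic test functions stand in for the EM models, a finite-difference gradient stands in for adjoint sensitivities, and only two fidelities are used:

```python
import numpy as np

# "Fine" model to be optimized, and a cheaper, slightly shifted "coarse"
# model used to compute search directions (both stand in for EM solvers).
def fine(x):   return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
def coarse(x): return (x[0] - 0.9) ** 2 + 2.1 * (x[1] + 0.45) ** 2

def grad(f, x, h=1e-6):
    """Central finite-difference gradient (adjoints in the paper)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def tr_optimize(x0, radius=1.0, iters=30):
    x = np.array(x0, float)
    fx = fine(x)
    for _ in range(iters):
        g = grad(coarse, x)                      # cheap-model gradient
        step = -radius * g / (np.linalg.norm(g) + 1e-12)
        cand = x + step
        if fine(cand) < fx:                      # verified on the fine model
            x, fx = cand, fine(cand)
            radius *= 1.5                        # accept: expand region
        else:
            radius *= 0.5                        # reject: shrink region
        if radius < 1e-8:
            break
    return x, fx

x_opt, f_opt = tr_optimize([3.0, 2.0])
```

Every accepted step is verified on the fine model, so the objective decreases monotonically even though the directions come from the coarse model.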


Aerospace ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 398 ◽
Author(s):  
Angelos Kafkas ◽  
Spyridon Kilimtzidis ◽  
Athanasios Kotzakolios ◽  
Vassilis Kostopoulos ◽  
George Lampeas

Efficient optimization is a prerequisite to realizing the full potential of an aeronautical structure. The success of an optimization framework is predominantly influenced by its ability to capture all relevant physics. Furthermore, high computational efficiency allows a greater number of runs during the design optimization process to support decision-making. Efficiency can be improved by selecting highly optimized algorithms and by reducing the dimensionality of the optimization problem, formulating it in terms of a finite number of significant parameters. A plethora of variable-fidelity tools, dictated by each design stage, are commonly used, ranging from costly high-fidelity methods to low-cost, low-fidelity ones. Unfortunately, despite rapid solution times, an optimization framework utilizing low-fidelity tools does not necessarily capture the physical problem accurately. At the same time, high-fidelity solution methods incur a very high computational cost. Aiming to bridge the gap and combine the best of both worlds, a multi-fidelity optimization framework was constructed in this research paper. In our approach, the low-fidelity modules, and especially the equivalent-plate structural representation, capable of drastically reducing the associated computational time, form the backbone of the optimization framework, and a MIDACO optimizer is tasked with providing an initial optimized design. The higher-fidelity modules are then employed to explore possible further gains in performance. The developed framework was applied to a benchmark airliner wing. As demonstrated, reasonable mass reduction was obtained for a current state-of-the-art configuration.
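The two-stage idea can be sketched with analytic stand-ins: a biased low-fidelity objective is searched globally for a starting point, then the expensive objective is refined locally. All functions and constants below are hypothetical, and a plain random search stands in for MIDACO (a commercial ant-colony solver):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in models for a wing-mass objective: the low-fidelity
# (equivalent-plate-like) model is cheap but biased; the high-fidelity
# model plays the role of the truth.
def lofi_mass(x): return (x - 2.2) ** 2 + 5.0     # biased optimum at 2.2
def hifi_mass(x): return (x - 2.0) ** 2 + 5.3     # true optimum at 2.0

# Stage 1: global search on the cheap model (random search stands in
# for the MIDACO optimizer used in the paper).
cands = rng.uniform(0.0, 5.0, size=2000)
x0 = cands[np.argmin(lofi_mass(cands))]

# Stage 2: refine on the expensive model with a short pattern search.
x, step = x0, 0.25
for _ in range(40):
    trials = np.array([x - step, x, x + step])
    best = trials[np.argmin(hifi_mass(trials))]
    if best == x:
        step *= 0.5        # no improvement: tighten the search
    x = best

mass_opt = hifi_mass(x)
```

The expensive model is only probed a few dozen times, while the thousands of exploratory evaluations all hit the cheap model.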


2019 ◽  
Vol 37 (2) ◽  
pp. 753-788
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose – The purpose of this paper is to investigate strategies and algorithms for expedited design optimization of microwave and antenna structures in a multi-objective setup.
Design/methodology/approach – Formulation of the multi-objective design problem oriented toward execution of a population-based metaheuristic algorithm within a segmented search space is investigated. The described algorithmic framework exploits variable-fidelity modeling, physics- and approximation-based representations of the structure, and model correction techniques. The considered approach is suitable for handling various problems pertinent to the design of microwave and antenna structures. Numerical case studies are provided, demonstrating the feasibility of the segmentation-based framework for the design of real-world structures in setups with two and three objectives.
Findings – Formulation of an appropriate design problem enables identification of the search space region containing the Pareto front, which can be further divided into a set of compartments characterized by a small combined volume. An approximation model of each segment can be constructed using a small number of training samples and then optimized, at a negligible computational cost, using population-based metaheuristics. Introduction of the segmentation mechanism into the multi-objective design framework is important to facilitate low-cost optimization of many-parameter structures represented by numerically expensive computational models. Further reduction of the design cost can be achieved by enforcing equal volumes of the search space segments.
Research limitations/implications – The study summarizes recent advances in low-cost multi-objective design of microwave and antenna structures. The investigated techniques exceed the capabilities of conventional design approaches involving direct evaluation of physics-based models for determination of trade-offs between the design objectives, particularly in terms of reliability and reduction of the computational cost. Studies on the scalability of the segmentation mechanism indicate that the computational benefits of the approach decrease with the number of search space segments.
Originality/value – The proposed design framework proved useful for rapid multi-objective design of microwave and antenna structures characterized by complex and multi-parameter topologies, which is extremely challenging when using conventional methods driven by population-based metaheuristic algorithms. To the authors' knowledge, this is the first work that summarizes segmentation-based approaches to multi-objective optimization of microwave and antenna components.
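The segmentation idea can be sketched on a toy bi-objective problem. The quadratic objectives below are illustrative stand-ins (e.g. structure size versus matching quality), and a dense grid stands in for the population-based metaheuristic run on each segment surrogate:

```python
import numpy as np

# Toy objectives standing in for, e.g., antenna size vs. reflection.
def f1(x): return x ** 2
def f2(x): return (x - 2.0) ** 2

# The single-objective optima (x = 0 and x = 2) bracket the region
# containing the Pareto set; it is split into equal-volume segments.
edges = np.linspace(0.0, 2.0, 5)          # 4 segments

points = []
for a, b in zip(edges[:-1], edges[1:]):
    xs = np.linspace(a, b, 6)             # few training samples per segment
    p1 = np.polyfit(xs, f1(xs), 2)        # per-segment surrogate of f1
    p2 = np.polyfit(xs, f2(xs), 2)        # per-segment surrogate of f2
    grid = np.linspace(a, b, 50)          # cheap search on the surrogate
    points += zip(np.polyval(p1, grid), np.polyval(p2, grid))

# Non-dominated filter (2 objectives): sort by f1, keep strictly
# decreasing f2.
points = sorted(points)
front, best_f2 = [], np.inf
for o1, o2 in points:
    if o2 < best_f2:
        front.append((o1, o2))
        best_f2 = o2
```

Each segment surrogate needs only a few samples of the (here analytic, in practice EM-simulated) objectives, while the trade-off exploration runs entirely on the surrogates.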


2019 ◽  
Vol 36 (9) ◽  
pp. 2983-2995
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose – This paper aims to investigate a strategy for low-cost yield optimization of miniaturized microstrip couplers using variable-fidelity electromagnetic (EM) simulations.
Design/methodology/approach – The usefulness for statistical analysis of data-driven models constructed from structure frequency responses reformulated as suitably defined characteristic points is investigated. Reformulation of the characteristics leads to a less nonlinear functional landscape and reduces the number of training samples required for accurate modeling. Further reduction of the cost associated with construction of the data-driven model is achieved using variable-fidelity methods. A numerical case study is provided, demonstrating the feasibility of feature-based modeling for low-cost statistical analysis and yield optimization.
Findings – It is possible, through reformulation of the structure frequency responses in the form of suitably defined feature points, to reduce the number of training samples required for data-driven modeling. The approximation model can be used as an accurate evaluation engine for low-cost Monte Carlo analysis. Yield optimization can be realized through maximization of yield within the data-driven model bounds and subsequent re-setting of the model around the optimized design.
Research limitations/implications – The investigated technique exceeds the capabilities of conventional Monte Carlo-based approaches for statistical analysis in terms of computational cost, without compromising accuracy with respect to conventional EM-based Monte Carlo.
Originality/value – The proposed tolerance-aware design approach proved useful for rapid yield optimization of compact microstrip couplers represented by EM-simulation models, which is extremely challenging with conventional approaches due to the tremendous number of EM evaluations required for statistical analysis.
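A minimal sketch of feature-based Monte Carlo yield estimation: instead of a full frequency sweep, the response is summarized by one characteristic point. The linear feature model, the tolerance level, and the specification below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature-based response: the coupler is summarized by a
# single characteristic point, the centre frequency f_c (GHz), modeled
# as a linear function of two geometry deviations d = (d1, d2).
def f_c(d):                        # cheap surrogate standing in for EM model
    return 3.0 + 0.8 * d[..., 0] - 0.5 * d[..., 1]

spec_lo, spec_hi = 2.95, 3.05      # assumed specification on f_c

def yield_estimate(d0, sigma=0.02, n=20000):
    """Monte Carlo yield at nominal design d0 under Gaussian tolerances."""
    d = d0 + sigma * rng.standard_normal((n, 2))
    fc = f_c(d)
    return np.mean((fc >= spec_lo) & (fc <= spec_hi))

# Yield optimization in miniature: re-centering the nominal design so
# that f_c sits in the middle of the specification raises the yield.
y_before = yield_estimate(np.array([0.03, 0.0]))   # off-centre nominal
y_after = yield_estimate(np.array([0.0, 0.0]))     # centred nominal
```

The 20,000 Monte Carlo samples hit only the cheap feature model; in the paper's setting each of those evaluations would otherwise be a full EM simulation.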


Author(s):  
P. Perdikaris ◽  
M. Raissi ◽  
A. Damianou ◽  
N. D. Lawrence ◽  
G. E. Karniadakis

Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
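The nonlinear autoregressive scheme can be sketched with a numpy-only Gaussian process on a synthetic pair with f_hi = f_lo², i.e. essentially zero linear cross-correlation between fidelities. This is a deliberately stripped-down sketch: the full method also feeds x into the second-level GP, learns its hyperparameters, and propagates predictive uncertainty, all omitted here:

```python
import numpy as np

# Synthetic pair: the high-fidelity function is a nonlinear transform
# of the low-fidelity one, so a linear autoregressive model would fail.
f_lo = lambda x: np.sin(8 * np.pi * x)
f_hi = lambda x: f_lo(x) ** 2

def gp_predict(xtr, ytr, xte, ls, jitter=1e-6):
    """GP posterior mean with an RBF kernel (zero mean, fixed lengthscale)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(xtr, xtr) + jitter * np.eye(len(xtr))
    return k(xte, xtr) @ np.linalg.solve(K, ytr)

x_lo = np.linspace(0.0, 1.0, 60)    # many cheap low-fidelity samples
x_hi = np.linspace(0.0, 1.0, 14)    # few expensive high-fidelity samples
x_te = np.linspace(0.0, 1.0, 200)

# Level 1: GP fit of the low-fidelity function (short lengthscale).
w_hi = gp_predict(x_lo, f_lo(x_lo), x_hi, ls=0.05)
w_te = gp_predict(x_lo, f_lo(x_lo), x_te, ls=0.05)

# Level 2: GP from the low-fidelity PREDICTION to the high-fidelity data,
# which lets it learn the nonlinear cross-correlation w -> w**2.
mu_mf = gp_predict(w_hi, f_hi(x_hi), w_te, ls=0.3)

# Single-fidelity baseline trained on the 14 expensive samples alone.
mu_sf = gp_predict(x_hi, f_hi(x_hi), x_te, ls=0.05)

err_mf = np.max(np.abs(mu_mf - f_hi(x_te)))
err_sf = np.max(np.abs(mu_sf - f_hi(x_te)))
```

The 14 high-fidelity samples are far too sparse to resolve the oscillations directly, but in the space of low-fidelity predictions the target becomes a smooth quadratic, so the multi-fidelity error is far below the single-fidelity one.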


Author(s):  
Marco Baldan ◽  
Alexander Nikanorov ◽  
Bernard Nacke

Purpose – Reliable modeling of induction hardening requires a multi-physical approach, which makes it time-consuming. In designing an induction hardening system, combining such a model with an optimization technique allows managing a high number of design variables. However, this could lead to a tremendous overall computational cost. This paper aims to reduce the computational time of an optimal design problem by making use of multi-fidelity modeling and parallel computing.
Design/methodology/approach – In the multi-fidelity framework, the "high-fidelity" model couples the electromagnetic, thermal and metallurgical fields. It predicts the phase transformations during both the heating and cooling stages. The "low-fidelity" model is instead limited to the heating step. Its inaccuracy is counterbalanced by its cheapness, which makes it suitable for exploring the design space in optimization. The use of co-Kriging then allows merging information from the different fidelity models and predicting good design candidates. Field evaluations of both models occur in parallel.
Findings – In the design of an induction heating system, the synergy between the "high-fidelity" and "low-fidelity" models, together with the use of surrogates and parallel computing, could reduce the overall computational cost by up to one order of magnitude.
Practical implications – On one hand, multi-physical modeling of induction hardening implies a better understanding of the process, resulting in further potential process improvements. On the other hand, the optimization technique could be applied to many other computationally intensive real-life problems.
Originality/value – This paper highlights how parallel multi-fidelity optimization can be used in designing an induction hardening system.
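The co-Kriging idea of correcting a cheap model with a scaling factor plus a discrepancy term can be sketched in regression form (a full co-Kriging model would use Gaussian processes for the discrepancy). The sketch below uses the well-known Forrester benchmark pair, not the authors' induction-hardening model; the pair is constructed so that a linear discrepancy is exact:

```python
import numpy as np

# y_hi(x) ~ rho * y_lo(x) + delta(x): scaling rho and a linear
# discrepancy delta are identified from a handful of expensive runs.
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # Forrester function
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5      # cheap variant

x_hi = np.linspace(0.0, 1.0, 5)          # only 5 high-fidelity runs
A = np.column_stack([f_lo(x_hi), np.ones(5), x_hi])
rho, d0, d1 = np.linalg.lstsq(A, f_hi(x_hi), rcond=None)[0]

def surrogate(x):
    """Corrected low-fidelity model: rho * f_lo + linear discrepancy."""
    return rho * f_lo(x) + d0 + d1 * x

x_te = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(surrogate(x_te) - f_hi(x_te)))
```

Since f_lo here is an affine distortion of f_hi, five expensive samples recover rho = 2 and the linear trend exactly; with real multi-physics models the discrepancy would itself be a Kriging model, and the two fidelities could be evaluated in parallel as in the paper.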


Author(s):  
Matthew A. Williams ◽  
Andrew G. Alleyne

In the early stages of control system development, designers often require multiple iterations to validate control designs in simulation. This can make high-fidelity models undesirable due to the increased computational complexity and simulation time. As a solution, lower-fidelity or simplified models are used for initial designs before controllers are tested on higher-fidelity models. If unmodeled dynamics cause the controller to fail when applied to a higher-fidelity model, an iterative design-and-validation approach may be required. In this paper, a switched-fidelity modeling formulation for closed-loop dynamical systems is proposed to reduce computational effort while maintaining elevated accuracy levels of system outputs and control inputs. The effects on computational effort and accuracy are investigated by applying the formulation to a traditional vapor compression system with high- and low-fidelity models of the evaporator and condenser. This sample case showed the ability of the switched-fidelity framework to closely match the outputs and inputs of the high-fidelity model while decreasing computational cost by 32% relative to the high-fidelity model. For contrast, the low-fidelity model decreases computational cost by 48% relative to the high-fidelity model.
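The switching logic can be caricatured in a few lines. Here "fidelity" is reduced to the integration step of a P-controlled first-order plant (all constants invented), whereas the paper switches between evaporator and condenser models of different structural complexity:

```python
import numpy as np

# Switched-fidelity closed-loop sketch: integrate finely during
# transients and coarsely near steady state, counting evaluations.
def simulate(setpoint=1.0, t_end=10.0, kp=2.0, tau=1.0, tol=0.02):
    t, y = 0.0, 0.0
    evals_fine = evals_coarse = 0
    while t < t_end:
        u = kp * (setpoint - y)          # proportional controller
        dydt = (u - y) / tau             # first-order plant dynamics
        if abs(dydt) > tol:              # transient: "high-fidelity" step
            dt = 0.001
            evals_fine += 1
        else:                            # near steady: "low-fidelity" step
            dt = 0.1
            evals_coarse += 1
        y += dt * dydt                   # forward Euler update
        t += dt
    return y, evals_fine, evals_coarse

y_final, n_fine, n_coarse = simulate()
```

The fine integrator handles the fast transient after the setpoint change, while the long steady-state tail is covered cheaply, so the total evaluation count is far below an all-fine run of 10,000 steps.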


Complexity ◽  
2017 ◽  
Vol 2017 ◽  
pp. 1-14 ◽  
Author(s):  
Nariman Fouladinejad ◽  
Nima Fouladinejad ◽  
Mohamad Kasim Abdul Jalil ◽  
Jamaludin Mohd Taib

The development of complex simulation systems is extremely costly, as it requires high computational capability and expensive hardware. As cost is one of the main issues in developing simulation components, achieving real-time simulation is challenging, since it often imposes intensive computational burdens. Overcoming the computational burden in a multidisciplinary simulation system that has several subsystems is essential to producing inexpensive real-time simulation. In this paper, a surrogate-based computational framework is proposed to reduce the computational cost of a high-dimensional model while maintaining accurate simulation results. Several well-known metamodeling techniques were used in creating a global surrogate model. Decomposition approaches were also used to simplify the complexities of the system and to guide the surrogate modeling processes. In addition, a case study is provided to validate the proposed approach: a surrogate-based vehicle dynamic model (SBVDM) was developed to reduce computational delay in a real-time driving simulator. The results showed that the developed surrogate-based model significantly reduced computing costs compared with the expensive computational model, and its response time was considerably faster than that of the conventional model. The proposed framework can therefore be used in developing low-cost simulation systems while yielding high-fidelity, fast computational output.
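The offline-surrogate idea can be sketched for one hypothetical subsystem: an expensive nonlinear submodel is sampled once, a global polynomial surrogate is fitted, and the cheap surrogate is evaluated in the real-time loop instead. The magic-formula-like tyre curve and all constants below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for an expensive vehicle-dynamics submodel: lateral tyre
# force (kN) versus slip angle (rad), Pacejka-like shape.
def expensive_model(alpha):
    return 1.2 * np.sin(1.5 * np.arctan(4.0 * alpha))

# Offline: design of experiments over the slip-angle operating range,
# then a global polynomial surrogate fitted once.
a_train = np.linspace(-0.3, 0.3, 60)
surr = np.polyfit(a_train, expensive_model(a_train), deg=9)

def surrogate(alpha):
    """Cheap run-time replacement for expensive_model."""
    return np.polyval(surr, alpha)

# Online: the surrogate is queried at arbitrary operating points.
a_test = rng.uniform(-0.3, 0.3, 500)
max_err = np.max(np.abs(surrogate(a_test) - expensive_model(a_test)))
```

Evaluating a degree-9 polynomial has a fixed, tiny cost per step, which is what makes the real-time budget attainable regardless of how expensive the original submodel is.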


2012 ◽  
Vol 544 ◽  
pp. 49-54 ◽  
Author(s):  
Jun Zheng ◽  
Hao Bo Qiu ◽  
Xiao Lin Zhang

Analytical target cascading (ATC) provides a systematic approach to solving decomposed large-scale systems with solvable subsystems. However, complex engineering systems usually involve high computational costs, which limits real-life applications of ATC based on high-fidelity simulation models. To address this problem, this paper aims to develop efficient approximation-model-building techniques under the ATC framework, to reduce the computational cost of multidisciplinary design optimization problems based on high-fidelity simulations. An approximation-model-building technique is proposed: approximations at the subsystem level are based on variable-fidelity modeling (the interaction of low- and high-fidelity models). Variable-fidelity modeling combines computationally efficient simplified (low-fidelity) models with expensive detailed (high-fidelity) models. The effectiveness of the method for modeling under the ATC framework using variable-fidelity models is studied. Overall results show that the methods introduced in this paper provide an effective way of improving the computational efficiency of the ATC method based on variable-fidelity simulation models.
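A stripped-down sketch of the target-cascading pattern on a two-subsystem toy problem (quadratic mass-versus-stiffness-target curves, all constants assumed; in the paper the subsystem analyses would be variable-fidelity simulation models rather than closed-form functions):

```python
import numpy as np

# System level allocates stiffness targets t1, t2 with t1 + t2 = K;
# each subsystem reports the mass needed to meet its target and the
# sensitivity of that mass with respect to the target.
a1, a2, K = 1.0, 3.0, 10.0            # assumed subsystem cost factors

def subsystem_mass(a, t):
    """Subsystem 'analysis': cheapest mass achieving stiffness target t."""
    return a * t ** 2, 2 * a * t      # mass and d(mass)/d(target)

t1 = 5.0                              # initial allocation (t2 = K - t1)
for _ in range(100):
    m1, g1 = subsystem_mass(a1, t1)
    m2, g2 = subsystem_mass(a2, K - t1)
    t1 -= 0.05 * (g1 - g2)            # reallocate to equalize marginal costs

total_mass = m1 + m2
t1_opt = K * a2 / (a1 + a2)           # analytic optimum for reference
```

The coordination loop converges to the allocation where both subsystems' marginal mass costs match (t1 = 7.5 here); the expensive part in practice is the subsystem analysis, which is exactly where the paper substitutes variable-fidelity approximations.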


Author(s):  
Theodoros Zygiridis ◽  
Stamatis A. Amanatiadis ◽  
Theodosios Karamanos ◽  
Nikolaos V. Kantartzis

Purpose – The extraordinary properties of graphene render it ideal for diverse contemporary and future applications. Aiming at the investigation of certain aspects commonly overlooked in pertinent works, the authors study wave-propagation phenomena supported by graphene layers within a stochastic framework, i.e. when uncertainty in various factors affects the graphene's surface conductivity. Given that the consideration of an increasing number of graphene sheets may increase the stochastic dimensionality of the corresponding problem, efficient surrogates with reasonable computational cost need to be developed.
Design/methodology/approach – The authors exploit the potential of generalized polynomial chaos (PC) expansions and develop low-cost surrogates that enable the efficient extraction of the necessary statistical properties displayed by stochastic graphene-related quantities of interest (QoIs). A key step is the incorporation of an initial variance estimation, which unveils the significance of each input parameter and facilitates the selection of the most appropriate basis functions by favoring anisotropic formulae. In addition, the impact of controlling the allowable input interactions in the expansion terms is investigated, aiming at further PC-basis elimination.
Findings – The proposed stochastic methodology is assessed via comparisons with reference Monte Carlo results, and the developed reduced-basis models are shown to be sufficiently reliable, while being computationally cheaper than standard PC expansions. In this context, different graphene configurations with varying numbers of random inputs are modeled, and interesting conclusions are drawn regarding their stochastic responses.
Originality/value – The statistical properties of surface-plasmon polaritons and other QoIs are predicted reliably in diverse graphene configurations when the surface conductivity displays non-trivial uncertainty levels. The suggested PC methodology features simple implementation and low complexity, yet its performance is not compromised compared to other standard approaches, and it is shown to be capable of delivering valid results.
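A small numpy sketch of a PC expansion for a single Gaussian input: the QoI's mean and variance drop out of the expansion coefficients directly, without Monte Carlo sampling. The exponential QoI is made up for illustration; the paper's QoIs come from EM simulations of graphene layers and use anisotropic multi-input bases:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar QoI depending on one standard Gaussian input xi (standing in
# for uncertain surface conductivity); expanded in probabilists'
# Hermite polynomials He_0..He_3 via Gauss-Hermite quadrature.
def qoi(xi):
    return np.exp(0.3 * xi)            # smooth stand-in for the EM response

He = [lambda x: np.ones_like(x),       # He_0
      lambda x: x,                     # He_1
      lambda x: x ** 2 - 1,            # He_2
      lambda x: x ** 3 - 3 * x]        # He_3
norms = [1.0, 1.0, 2.0, 6.0]           # E[He_k^2] = k!

# PC coefficients c_k = E[qoi * He_k] / k!, by 12-point quadrature.
nodes, weights = np.polynomial.hermite_e.hermegauss(12)
w = weights / np.sqrt(2 * np.pi)       # normalize to the Gaussian measure
c = [np.sum(w * qoi(nodes) * He[k](nodes)) / norms[k] for k in range(4)]

mean_pc = c[0]                                       # E[QoI] = c_0
var_pc = sum(c[k] ** 2 * norms[k] for k in range(1, 4))

# Monte Carlo reference for comparison.
s = qoi(rng.standard_normal(200000))
mean_mc, var_mc = s.mean(), s.var()
```

Twelve model evaluations at the quadrature nodes reproduce the statistics that the Monte Carlo reference needs two hundred thousand evaluations to estimate, which is the cost argument behind PC surrogates.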

