fidelity model
Recently Published Documents


TOTAL DOCUMENTS

214
(FIVE YEARS 47)

H-INDEX

13
(FIVE YEARS 1)

2022 ◽  
Vol 166 ◽  
pp. 108704
Author(s):  
Christian Castagna ◽  
Carolina Introini ◽  
Antonio Cammi


Energies ◽  
2022 ◽  
Vol 15 (2) ◽  
pp. 403
Author(s):  
Marzieh Mahrokh ◽  
Slawomir Koziel

The growing demand for the integration of surface mount design (SMD) antennas into miniaturized electronic devices has imposed increasing limitations on structure dimensions. Examples include embedded antennas in applications such as on-board devices, picosatellites, 5G communications, or implantable and wearable devices. The demand for size reduction while maintaining satisfactory electrical and field performance can be managed through constrained numerical optimization. Reliable optimization-based size reduction requires full-wave electromagnetic (EM) analysis, which entails significant computational costs. These costs can be alleviated by incorporating surrogate modeling techniques, adjoint sensitivities, or sparse sensitivity updates. An alternative is the use of multi-fidelity simulation models, normally limited to two levels of low and high resolution. This paper proposes a novel algorithm for accelerated antenna miniaturization, featuring continuous adjustment of the simulation model fidelity over the course of the optimization process. The model resolution is determined by the degree of design constraint violation and the convergence status of the algorithm. The algorithm uses the lowest-fidelity model in the early stages of the optimization and gradually refines it towards the highest-fidelity model as the process approaches convergence and the constraint violations approach the preset tolerance threshold. At the same time, a penalty function approach with adaptively adjusted coefficients is applied to enable precise control of the constraints and to increase the achievable miniaturization rates. The presented procedure has been validated using five microstrip antennas, including three broadband and two circularly polarized structures. The results confirm that the implemented mechanisms improve the average computational efficiency of the optimization process by 43% compared to the single-fidelity adaptive penalty function approach. Furthermore, the presented methodology performs on par with or better than its single-fidelity counterpart, with an average constraint violation of 0.01 dB (compared to 0.03 dB for the reference) and an average size reduction of 25% compared to 25.6%.
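The core mechanism lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' implementation, of the two ideas described above: a simulation fidelity adjusted continuously from the current constraint violation and convergence status, and a penalty term with an adaptively scaled coefficient. The toy s11_max function stands in for a full-wave EM solver, with fidelity controlling its model error; all names and numbers are illustrative assumptions.

```python
# Sketch of variable-fidelity, penalty-based size reduction (illustrative only).
import numpy as np
from scipy.optimize import minimize

F_LO, F_HI, TOL = 1.0, 4.0, 0.01   # fidelity bounds, violation tolerance (dB)

def s11_max(x, fid):
    """Toy worst in-band |S11| in dB; higher fidelity means smaller model error."""
    exact = -10.5 + (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
    return exact + 0.2 * np.sin(30.0 * x.sum()) / fid   # fidelity-dependent error

def area(x):
    return x[0] * x[1]   # antenna footprint to be miniaturized

x, beta, fid = np.array([1.5, 1.5]), 10.0, F_LO
for k in range(10):
    # Penalized objective: footprint plus weighted constraint violation squared.
    cost = lambda z: area(z) + beta * max(s11_max(z, fid) + 10.0, 0.0) ** 2
    x_new = minimize(cost, x, method="Nelder-Mead").x
    step = np.linalg.norm(x_new - x)
    x = x_new
    viol = max(s11_max(x, fid) + 10.0, 0.0)   # constraint: |S11| <= -10 dB
    # Refine the model as the violation nears TOL and the steps shrink.
    fid = min(F_HI, F_LO + (F_HI - F_LO) / (1.0 + viol / TOL + 10.0 * step))
    beta *= 2.0 if viol > TOL else 1.0        # adaptively scaled penalty coefficient
print(x, area(x))
```

The fidelity update is deliberately monotone: large violations or large steps keep the solver on the cheap model, while near-feasible, near-converged iterates push it to the finest resolution.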



2022 ◽  
Vol 7 (01) ◽  
pp. 31-51
Author(s):  
Tanya Peart ◽  
Nicolas Aubin ◽  
Stefano Nava ◽  
John Cater ◽  
Stuart Norris

Velocity Prediction Programs (VPPs) are commonly used to predict and compare the performance of different sail designs. A VPP requires an aerodynamic input force matrix, which can be computationally expensive to calculate, limiting its application in industrial sail design projects. The use of multi-fidelity kriging surrogate models to reduce this cost has previously been presented by the authors, with the high-fidelity data coming from the new sail being modelled and the low-fidelity data coming from existing, but different, sail designs. The difference in fidelity is not due to the simulation method used to obtain the data, but to how similar an existing sail's geometry is to the new sail design. An important consideration in constructing these models is the choice of low-fidelity data points, which provide information about the trend of the model curve between the high-fidelity data. A method is therefore required to select the best existing sail design to provide the low-fidelity data when constructing a multi-fidelity model. The suitability of an existing sail design as a low-fidelity model can be evaluated from the similarity of its geometric parameters with the new sail. It is shown here that for upwind jib sails, the similarity of the broadseam between the two sails best indicates the ability of a design to serve as low-fidelity data for a lift coefficient surrogate model. The regression predicts the lift coefficient surrogate model error to within about 1% of the actual surrogate error for most points. Larger discrepancies are observed for the drag coefficient surrogate error regression.
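For readers unfamiliar with the construction, the following is a minimal two-level co-kriging sketch in Python, assuming a first-order autoregressive correction between fidelities. The lift coefficient functions for the existing and new sails are synthetic placeholders, not the authors' sail data, and the scaling factor rho is fixed here rather than estimated by maximum likelihood.

```python
# Two-level multi-fidelity kriging sketch: dense low-fidelity data from an
# existing sail, sparse high-fidelity data from the new design.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cl_existing(awa):   # lift coefficient of an existing, similar sail (placeholder)
    return 1.2 * np.sin(np.radians(2.2 * awa))

def cl_new(awa):        # "expensive" lift coefficient of the new sail (placeholder)
    return 1.3 * np.sin(np.radians(2.0 * awa)) + 0.05

awa_lo = np.linspace(15, 35, 21)[:, None]      # dense low-fidelity samples (AWA, deg)
awa_hi = np.array([[15.0], [25.0], [35.0]])    # sparse high-fidelity samples

gp_lo = GaussianProcessRegressor(RBF(5.0)).fit(awa_lo, cl_existing(awa_lo.ravel()))

# Autoregressive correction: model the discrepancy between fidelities.
rho = 1.0
resid = cl_new(awa_hi.ravel()) - rho * gp_lo.predict(awa_hi)
gp_delta = GaussianProcessRegressor(RBF(5.0)).fit(awa_hi, resid)

def cl_surrogate(awa):
    awa = np.atleast_2d(awa)
    return rho * gp_lo.predict(awa) + gp_delta.predict(awa)

print(cl_surrogate([[20.0]]))   # multi-fidelity prediction at an untried angle
```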



2021 ◽  
Author(s):  
Frederick Law ◽  
Antoine J Cerfon ◽  
Benjamin Peherstorfer

Abstract In the design of stellarators, energetic particle confinement is a critical concern that remains challenging to study numerically. Standard Monte Carlo analyses are highly expensive because a large number of particle trajectories must be integrated over long time scales, and small time steps must be taken to accurately capture the features of the wide variety of trajectories. Even when based on guiding center trajectories, as opposed to full-orbit trajectories, these standard Monte Carlo studies are too expensive to be included in most stellarator optimization codes. We present the first multifidelity Monte Carlo scheme for accelerating the estimation of energetic particle confinement in stellarators. Our approach relies on a two-level hierarchy, in which a guiding center model serves as the high-fidelity model and a data-driven linear interpolant is leveraged as the low-fidelity surrogate model. We apply multifidelity Monte Carlo to the study of energetic particle confinement in a 4-period quasi-helically symmetric stellarator, assessing various metrics of confinement. Owing to the very high computational efficiency of our surrogate model and its sufficient correlation with the high-fidelity model, we obtain speedups of up to a factor of 10 with multifidelity Monte Carlo compared to standard Monte Carlo.
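As a concrete illustration of the estimator itself, here is a short Python sketch of a two-model multifidelity Monte Carlo estimate of a confinement metric such as the particle loss fraction, using the standard control-variate form. The two lost_* functions are toy stand-ins for guiding-center tracing and a data-driven interpolant, and the sample budgets are illustrative assumptions.

```python
# Two-model multifidelity Monte Carlo (MFMC) estimate of a loss fraction.
import numpy as np

rng = np.random.default_rng(0)

def lost_hf(theta):   # expensive model: 1 if the particle is lost (toy stand-in)
    return (np.sin(7 * theta) + 0.1 * rng.standard_normal(theta.shape)) > 0.8

def lost_lf(theta):   # cheap interpolant: correlated with lost_hf but biased
    return np.sin(7 * theta) > 0.75

n, m = 200, 20000                      # LF budget far exceeds HF budget
theta = rng.uniform(0, 2 * np.pi, m)   # shared random initial conditions

y_hf = lost_hf(theta[:n]).astype(float)
y_lf = lost_lf(theta).astype(float)

# Control-variate coefficient from the sample covariance on the shared points.
alpha = np.cov(y_hf, y_lf[:n])[0, 1] / np.var(y_lf[:n])
mfmc = y_hf.mean() + alpha * (y_lf.mean() - y_lf[:n].mean())
print(f"MFMC loss-fraction estimate: {mfmc:.4f}")
```

The variance reduction, and hence the speedup, grows with the correlation between the two models, which is why a cheap but well-correlated surrogate is the key ingredient.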



2021 ◽  
Author(s):  
Ryan Santoso ◽  
Xupeng He ◽  
Marwa Alsinan ◽  
Ruben Figueroa Hernandez ◽  
Hyung Kwak ◽  
...  

Abstract History matching is a critical step within the reservoir management process to synchronize the simulation model with the production data. The history-matched model can be used for planning optimum field development and for performing optimization and uncertainty quantification. We present a novel history matching workflow based on a Bayesian framework that accommodates subsurface uncertainties. Our workflow involves three model resolutions within the Bayesian framework: 1) a coarse low-fidelity model to update the prior range, 2) a fine low-fidelity model to represent the high-fidelity model, and 3) a high-fidelity model to reconstruct the real response. The low-fidelity model is constructed from a multivariate polynomial function, while the high-fidelity model is based on the reservoir simulation model. We first develop a coarse low-fidelity model using a two-level Design of Experiment (DoE), which aims to provide a better prior. We then use Latin Hypercube Sampling (LHS) to construct the fine low-fidelity model deployed in the Bayesian runs, where we use the Metropolis-Hastings algorithm. Finally, the posterior is fed into the high-fidelity model to evaluate the matching quality. This work demonstrates the importance of including uncertainties in history matching. Bayesian inference provides a robust framework for uncertainty quantification within reservoir history matching. Under a uniform prior, the convergence of the Bayesian sampling is very sensitive to the parameter ranges: when the solution is far from the mean of the parameter ranges, the Bayesian inference introduces bias and deviates from the observed data. Our results show that updating the prior from the coarse low-fidelity model accelerates the Bayesian convergence and improves the match. Bayesian inference requires a large number of runs to produce an accurate posterior, and running the high-fidelity model many times is expensive. Our workflow tackles this problem by deploying a fine low-fidelity model to represent the high-fidelity model in the main runs. This fine low-fidelity model is fast to run while honoring the physics and accuracy of the high-fidelity model. We also use ANOVA sensitivity analysis to measure the importance of each parameter; the ranking highlights the significant ones that contribute most to the matching accuracy. We demonstrate our workflow on a geothermal reservoir with static and operational uncertainties. Our workflow produces an accurate match of the thermal recovery factor and produced-enthalpy rate with physically consistent posteriors. We present a novel workflow to account for uncertainty in reservoir history matching involving multi-resolution interaction. The proposed method is generic and can be readily applied within existing history-matching workflows in reservoir simulation.
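To make the sampling step concrete, the following Python sketch shows the core of such a workflow under stated assumptions: a multivariate polynomial proxy stands in for the reservoir simulator inside a Metropolis-Hastings loop, with a uniform prior whose range would come from the coarse DoE stage. The proxy coefficients, parameter ranges, and observed value are placeholders, not the paper's values.

```python
# Metropolis-Hastings history matching driven by a polynomial proxy model.
import numpy as np

rng = np.random.default_rng(1)
lo, hi = np.array([0.1, 10.0]), np.array([0.4, 500.0])  # e.g. porosity, perm (mD)

def proxy(theta):
    # Fine low-fidelity model: multivariate polynomial fitted to LHS runs of
    # the high-fidelity simulator (coefficients are placeholders here).
    p, k = theta
    return 0.3 + 1.2 * p + 0.002 * k - 0.9 * p ** 2

d_obs, sigma = 0.55, 0.02   # observed recovery factor and noise level (assumed)

def log_post(theta):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf      # uniform prior, range updated by the coarse DoE stage
    return -0.5 * ((proxy(theta) - d_obs) / sigma) ** 2

theta, chain = (lo + hi) / 2, []
for _ in range(20000):      # Metropolis-Hastings with a Gaussian random walk
    prop = theta + rng.normal(0, 0.02 * (hi - lo))
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
posterior = np.array(chain[5000:])  # drop burn-in; verify through the HF model
```

In the full workflow, posterior samples would then be run through the high-fidelity simulator to check the matching quality, as the abstract describes.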



2021 ◽  
pp. 109963622110536
Author(s):  
Vahid Pourriahi ◽  
Mohammad Heidari-Rarani ◽  
Amir Torabpour Isfahani

The hexagonal honeycomb core sandwich panels used in satellite structures are subjected to severe vibration during launch. The natural frequencies of these panels are therefore of great importance to design engineers. Three-dimensional finite element modeling of the core considering all geometric parameters (i.e., a high-fidelity model) achieves accurate results but is not cost-effective. The honeycomb core is traditionally replaced by an equivalent homogenized continuum core (i.e., a low-fidelity model) using simple analytical relations that ignore the adhesive layer at the doubled cell walls and the radius of the inclined cell-wall curvature. In this study, analytical formulations are first presented for predicting the equivalent elastic properties of a hexagonal aluminum honeycomb considering all geometric parameters, including adhesive layer thickness, cell-wall thickness, inclined cell-wall length, radius of inclined cell-wall curvature at the intersection, internal cell-wall angle, and honeycomb height. Two aluminum honeycomb core sandwich beams with free-free boundary conditions are then modeled and analyzed in the Abaqus finite element software, one with a 3D high-fidelity core and the other with a 3D low-fidelity core. To validate the results of the equivalent model, a modal analysis test was performed and the experimental natural frequencies were compared with the numerical predictions. The results show good agreement between the 3D low-fidelity and high-fidelity finite element models and the experimental results. In addition, the influence of the above-mentioned geometric parameters on the natural frequencies of a sandwich beam has been investigated.
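As a point of reference for the homogenization step, the sketch below evaluates the classical Gibson-Ashby in-plane relations for a hexagonal honeycomb, i.e., the simple analytical baseline that ignores the adhesive layer and fillet radius; the paper's extended formulation adding those parameters is not reproduced here. The material and cell dimensions are illustrative.

```python
# Classical Gibson-Ashby in-plane homogenization of a hexagonal honeycomb core.
import math

def gibson_ashby_inplane(Es, t, l, h, theta_deg):
    """Equivalent in-plane properties of a hexagonal core.
    Es: wall material modulus, t: wall thickness, l: inclined wall length,
    h: vertical wall length, theta_deg: internal cell-wall angle."""
    th = math.radians(theta_deg)
    c, s, r = math.cos(th), math.sin(th), h / l
    E1 = Es * (t / l) ** 3 * c / ((r + s) * s ** 2)   # modulus along direction 1
    E2 = Es * (t / l) ** 3 * (r + s) / c ** 3         # modulus along direction 2
    nu12 = c ** 2 / ((r + s) * s)                     # in-plane Poisson's ratio
    return E1, E2, nu12

# Regular aluminum hexagonal cell (h = l, theta = 30 deg); dimensions assumed.
print(gibson_ashby_inplane(Es=69e9, t=70e-6, l=3e-3, h=3e-3, theta_deg=30.0))
```

For a regular hexagon these relations reduce to the familiar E* = 2.3 Es (t/l)^3 and a Poisson's ratio of 1, which is the low-fidelity baseline the paper's formulation refines.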



Author(s):  
Serap Keles ◽  
Görel Bringedal ◽  
Thormod Idsoe

Abstract This paper describes the process for assessing the intervention fidelity of a randomized controlled trial (RCT) of an "Adolescent Coping with Depression Course" (ACDC) and assesses the participants' satisfaction with the intervention. We applied the comprehensive fidelity model developed by the National Institutes of Health's Behavior Change Consortium to examine how our intervention met the fidelity requirements under five categories. Data came from a two-arm parallel cluster RCT. Both qualitative and quantitative analyses of the ACDC intervention using the comprehensive fidelity model indicated that the level of fidelity in this study did not reach 100%, but it approached a high level of treatment fidelity. Participants also expressed high levels of satisfaction (M = 3.65, SD = .95). This analysis is important for showing how appropriately the intervention was implemented, identifying areas for improvement to increase fidelity, and ensuring the internal and external validity of the findings. Trial Registration: ISRCTN registry ISRCTN19700389. Registered 6 October 2015. https://doi.org/10.1186/ISRCTN19700389. Full Protocol: 10.1186/s12888-016-0954-y



2021 ◽  
Vol 28 (11) ◽  
pp. S158-S159
Author(s):  
N.E. Miles ◽  
S. Evans ◽  
R. Treat ◽  
B.D. Beran


2021 ◽  
Author(s):  
Yoo-Seung Won ◽  
Soham Chatterjee ◽  
Dirmanto Jap ◽  
Arindam Basu ◽  
Shivam Bhasin


2021 ◽  
Author(s):  
Ryan Santoso ◽  
Xupeng He ◽  
Marwa Alsinan ◽  
Hyung Kwak ◽  
Hussein Hoteit

Abstract History matching is critical in subsurface flow modeling: it aligns the reservoir model with the measured data. However, it remains challenging because the solution is not unique and the implementation is expensive. The traditional approach relies on trial and error, which is exhaustive and labor-intensive. In this study, we propose a new workflow utilizing Bayesian Markov Chain Monte Carlo (MCMC) to perform history matching automatically and accurately. We deliver four novelties within the workflow: 1) the use of multi-resolution low-fidelity models to guarantee high-quality matching, 2) updating the ranges of the priors to assure convergence, 3) the use of a Long Short-Term Memory (LSTM) network as a low-fidelity model to produce a continuous time response, and 4) the use of Bayesian optimization to obtain the optimum low-fidelity model for the Bayesian MCMC runs. We utilize the first SPE comparative model as the physical, high-fidelity model. It is a case of gas injection into an oil reservoir, a gravity-dominated process. The coarse low-fidelity model provides updated priors that increase the precision of the Bayesian MCMC. The Bayesian-optimized LSTM successfully captures the physics of the high-fidelity model. The Bayesian-LSTM MCMC produces an accurate prediction with narrow uncertainties, and the posterior prediction through the high-fidelity model ensures the robustness and precision of the workflow. This approach provides efficient and high-quality history matching for subsurface flow modeling.
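To illustrate the surrogate side of this workflow, here is a minimal Keras sketch, an assumed architecture rather than the authors', of an LSTM that maps a vector of uncertain reservoir parameters to a continuous production-response time series, which could then serve as the fast forward model inside the MCMC likelihood. Hyperparameters such as the layer width would, in the paper's setting, be tuned by Bayesian optimization, which is not shown; all data here are random placeholders.

```python
# LSTM proxy: uncertain parameters -> continuous time-series response.
import numpy as np
import tensorflow as tf

T, P = 60, 3                     # time steps, number of uncertain parameters
X = np.random.rand(500, P)       # parameter samples (would come from LHS)
Y = np.random.rand(500, T)       # simulator responses (placeholder data)

# Repeat the parameter vector at every step so the LSTM emits a full series.
inp = tf.keras.Input(shape=(T, P))
x = tf.keras.layers.LSTM(64, return_sequences=True)(inp)
out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))(x)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.fit(np.repeat(X[:, None, :], T, axis=1), Y[..., None], epochs=5, verbose=0)

def proxy(theta):                # fast forward model for the MCMC likelihood
    seq = np.repeat(np.asarray(theta)[None, None, :], T, axis=1)
    return model.predict(seq, verbose=0).ravel()
```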


