Data-Centric Engineering
Latest Publications

TOTAL DOCUMENTS: 41 (FIVE YEARS: 41)
H-INDEX: 1 (FIVE YEARS: 1)

Published by Cambridge University Press (CUP)
ISSN: 2632-6736

2021 · Vol 2 · Author(s): Zhiping Qiu, Han Wu, Isaac Elishakoff, Dongliang Liu

Abstract This paper studies the data-based polyhedron model and its application to uncertain linear optimization of engineering structures, especially in the absence of information either on probabilistic properties or on the membership functions of the fuzzy-sets-based approach, in which case it is more appropriate to quantify the uncertainties by convex polyhedra. First, we introduce the uncertainty quantification method of the convex polyhedron approach and the model modification method based on the Chebyshev inequality. Second, the characteristics of the optimal solution of convex polyhedron linear programming are investigated. Then the vertex solution of convex polyhedron linear programming is presented and proven. Next, the application of convex polyhedron linear programming to the static load-bearing capacity problem is introduced. Finally, the effectiveness of the vertex solution is verified by an example of a plane truss bearing problem, and its efficiency is verified by a load-bearing problem of stiffened composite plates.
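
The vertex property underpinning the paper's approach can be illustrated with a small sketch: a linear objective over a bounded convex polyhedron {x : Ax <= b} attains its minimum at a vertex, so for small problems it suffices to enumerate the vertices. The code below is an illustrative toy, not the authors' implementation; the polyhedron and objective are invented for the example.

```python
import itertools

import numpy as np

def vertex_solve(c, A, b, tol=1e-9):
    """Minimize c @ x over the bounded polyhedron {x : A x <= b}
    by enumerating its vertices (basic feasible points)."""
    n = A.shape[1]
    best_x, best_val = None, np.inf
    # Each vertex lies at the intersection of n active constraint planes.
    for rows in itertools.combinations(range(A.shape[0]), n):
        A_sub, b_sub = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(A_sub)) < tol:
            continue  # chosen hyperplanes do not meet in a single point
        x = np.linalg.solve(A_sub, b_sub)
        if np.all(A @ x <= b + tol) and c @ x < best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

# Toy polyhedron: x >= 0, y >= 0, x + y <= 4; minimize -x - y.
c = np.array([-1.0, -1.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])
x_opt, v_opt = vertex_solve(c, A, b)
```

The optimum sits on the face x + y = 4, so the minimal objective value is -4 at a vertex of that face.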


2021 · Vol 2 · Author(s): Nikolaos Papadimas, Timothy Dodwell

Abstract This article recasts the traditional challenge of calibrating a material constitutive model into a hierarchical probabilistic framework. We consider a Bayesian framework in which material parameters are assigned distributions, which are then updated given experimental data. Importantly, in a true engineering setting, we are not interested in inferring the parameters for a single experiment, but rather in inferring the model parameters over the population of possible experimental samples. In doing so, we seek to also capture the inherent coupon-to-coupon variability of the material, as well as uncertainties around the repeatability of the test. We address this problem using a hierarchical Bayesian model. However, a vanilla computational approach is prohibitively expensive. Our strategy marginalizes over each individual experiment, decreasing the dimension of the inference problem to only the hyperparameters: those parameters describing the population statistics of the material model. This marginalization step requires us to derive an approximate likelihood, for which we exploit an emulator (built offline prior to sampling) and Bayesian quadrature, allowing us to capture the uncertainty in this numerical approximation. As a result, our approach renders hierarchical Bayesian calibration of material models computationally feasible. The approach is tested in two different examples. The first is a compression test of a simple spring model using synthetic data; the second is a more complex example using real experimental data to fit a stochastic elastoplastic model for 3D-printed steel.
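
The marginalization idea can be sketched in a conjugate toy setting. Assuming a normal population of per-coupon parameters and normal test noise (a deliberate simplification of the paper's emulator-plus-Bayesian-quadrature machinery, chosen so the per-coupon parameters integrate out in closed form), inference collapses onto the two hyperparameters alone; all numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "coupon" data: each coupon has its own true parameter drawn
# from a population, and each test of a coupon is noisy.
mu_true, tau_true, sigma = 10.0, 1.5, 0.5   # population mean/sd, test noise
n_coupons, n_tests = 30, 5
theta = rng.normal(mu_true, tau_true, n_coupons)             # coupon params
y = rng.normal(theta[:, None], sigma, (n_coupons, n_tests))  # measurements

def marginal_loglik(mu, tau, y, sigma):
    """Log-likelihood of the hyperparameters with the per-coupon
    parameters integrated out analytically (normal-normal conjugacy):
    each coupon's sample mean is N(mu, tau^2 + sigma^2 / n_tests)."""
    ybar = y.mean(axis=1)
    var = tau ** 2 + sigma ** 2 / y.shape[1]
    return np.sum(-0.5 * np.log(2 * np.pi * var)
                  - (ybar - mu) ** 2 / (2 * var))

# Crude grid search over the hyperparameters only: the dimension of the
# inference problem no longer grows with the number of coupons.
mus = np.linspace(8.0, 12.0, 201)
taus = np.linspace(0.5, 3.0, 201)
ll = np.array([[marginal_loglik(m, t, y, sigma) for t in taus] for m in mus])
i, j = np.unravel_index(ll.argmax(), ll.shape)
mu_hat, tau_hat = mus[i], taus[j]
```

With 30 coupons the grid maximum lands close to the true population statistics, illustrating why marginalizing the per-experiment parameters makes the hierarchical calibration tractable.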


2021 · Vol 2 · Author(s): Muhammad I. Zafar, Meelan M. Choudhari, Pedro Paredes, Heng Xiao

Abstract Accurate prediction of laminar-turbulent transition is a critical element of computational fluid dynamics simulations for aerodynamic design across multiple flow regimes. Traditional methods of transition prediction cannot be easily extended to flow configurations where the transition process depends on a large set of parameters. In comparison, neural network methods allow higher-dimensional input features to be considered without compromising the efficiency and accuracy of the traditional data-driven models. Neural network methods proposed earlier follow a cumbersome methodology of predicting instability growth rates over a broad range of frequencies, which are then processed to obtain the N-factor envelope and, from it, the transition location based on the correlating N-factor. This paper presents an end-to-end transition model based on a recurrent neural network, which sequentially processes the mean boundary-layer profiles along the surface of the aerodynamic body to directly predict the N-factor envelope and the transition locations over a two-dimensional airfoil. The proposed transition model has been developed and assessed using a large database of 53 airfoils over a wide range of chord Reynolds numbers and angles of attack. The large universe of airfoils encountered in various applications causes additional difficulties; as such, we provide further insights into selecting training datasets from large amounts of available data. Although the proposed model has been analyzed for two-dimensional boundary layers in this paper, it can be easily generalized to other flows due to the embedded feature-extraction capability of the convolutional neural network in the model.
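
The N-factor post-processing that the end-to-end model replaces can be sketched directly. With made-up growth-rate curves standing in for the stability computations (the shapes, amplitudes, and the common empirical threshold N_crit = 9 are all assumptions for illustration), the envelope and the e^N transition criterion look like this:

```python
import numpy as np

# Hypothetical growth-rate curves: amplification rate of each instability
# frequency along the streamwise coordinate x (zero where the mode decays).
x = np.linspace(0.0, 1.0, 501)
freqs = np.linspace(0.5, 2.0, 16)
growth = np.maximum(0.0, 1.0 - (x[None, :] * freqs[:, None] - 0.8) ** 2) * 20.0

# N-factor of each frequency: integrated growth rate along the surface.
dx = x[1] - x[0]
N_per_freq = np.cumsum(growth, axis=1) * dx

# Envelope: at each station, the most amplified frequency so far.
N_envelope = N_per_freq.max(axis=0)

# Transition is declared where the envelope crosses the correlating
# N-factor (N_crit = 9 is a common empirical choice for the e^N method).
N_crit = 9.0
idx = np.argmax(N_envelope >= N_crit)
x_transition = x[idx] if N_envelope[idx] >= N_crit else None
```

The proposed recurrent network skips the per-frequency step entirely and maps boundary-layer profiles straight to `N_envelope` and the transition location.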


2021 · Vol 2 · Author(s): Domenic Di Francesco, Marios Chryssanthopoulos, Michael Havbro Faber, Ujjwal Bharadwaj

Abstract Attempts to formalize inspection and monitoring strategies in industry have struggled to combine evidence from multiple sources (including subject matter expertise) in a mathematically coherent way. The perceived requirement for large amounts of data is often cited as the reason that quantitative risk-based inspection is incompatible with the sparse and imperfect information that is typically available to structural integrity engineers. Current industrial guidance is also limited in its methods of distinguishing the quality of inspections, as this is typically based on simplified (qualitative) heuristics. In this paper, Bayesian multi-level (partial pooling) models are proposed as a flexible and transparent method of combining imperfect and incomplete information, to support decision-making regarding the integrity management of in-service structures. This work builds on the established theoretical framework for computing the expected value of information, by allowing for partial pooling between inspection measurements (or groups of measurements). The method is demonstrated for a simulated example of a structure with active corrosion in multiple locations, which acknowledges that the data will be associated with some precision, bias, and reliability. Quantifying the extent to which an inspection at one location can reduce uncertainty in damage models at remote locations is shown to influence many aspects of the expected value of an inspection. These results are considered in the context of current challenges in risk-based structural integrity management.
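
The shrinkage behaviour of partial pooling can be sketched with a toy normal model (not the paper's model; all numbers are invented): each corrosion site's estimate is pulled toward the population mean in proportion to how noisy its own data are relative to the site-to-site spread.

```python
import numpy as np

# Simulated wall-thickness loss (mm) at several corrosion locations;
# each site has a few imperfect inspection measurements.
rng = np.random.default_rng(1)
site_true = rng.normal(2.0, 0.6, 8)                    # true loss per site
meas = rng.normal(site_true[:, None], 0.3, (8, 4))     # 4 noisy readings/site

site_mean = meas.mean(axis=1)
grand_mean = site_mean.mean()
# Sampling variance of a site mean (within-site) vs. site-to-site variance.
within_var = meas.var(axis=1, ddof=1).mean() / meas.shape[1]
between_var = max(site_mean.var(ddof=1) - within_var, 1e-12)

# Partial pooling: each site estimate is shrunk toward the population
# mean, with more shrinkage when the site's own data are noisy relative
# to the site-to-site spread (the classic normal shrinkage factor).
shrink = within_var / (within_var + between_var)
pooled_est = shrink * grand_mean + (1 - shrink) * site_mean
```

This is why inspecting one location carries information about the others: the sites share a population distribution, so every measurement tightens the hyperparameters that all site estimates borrow from.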


2021 · Vol 2 · Author(s): Milad Zeraatpisheh, Stephane P.A. Bordas, Lars A.A. Beex

Abstract Patient-specific surgical simulations require the patient-specific identification of the constitutive parameters. The sparsity of the experimental data and the substantial noise in the data (e.g., recovered during surgery) cause considerable uncertainty in the identification. In this exploratory work, parameter uncertainty for incompressible hyperelasticity, often used for soft tissues, is addressed by a probabilistic identification approach based on Bayesian inference. Our study particularly focuses on the uncertainty of the model: we investigate how the identified uncertainties of the constitutive parameters behave when different forms of model uncertainty are considered. The model uncertainty formulations range from uninformative ones to more accurate ones that incorporate more detailed extensions of incompressible hyperelasticity. The study shows that incorporating model uncertainty may improve the results, but this is not guaranteed.
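
A minimal sketch of such a probabilistic identification, assuming an incompressible neo-Hookean model under uniaxial tension (nominal stress P = mu * (lam - lam**-2)), synthetic noisy data, and a flat prior on the shear modulus; the grid-based posterior is illustrative, not the authors' method:

```python
import numpy as np

# Incompressible neo-Hookean model under uniaxial tension: the nominal
# stress is P = mu * (lam - lam**-2), with shear modulus mu to identify.
def nominal_stress(mu, lam):
    return mu * (lam - lam ** -2)

rng = np.random.default_rng(2)
mu_true, noise_sd = 0.05, 0.004            # MPa; hypothetical soft tissue
lam = np.linspace(1.05, 1.4, 10)           # sparse stretch measurements
P_obs = nominal_stress(mu_true, lam) + rng.normal(0, noise_sd, lam.size)

# Grid-based Bayesian inference: flat prior on mu, Gaussian likelihood.
mu_grid = np.linspace(0.01, 0.1, 1000)
log_post = np.array([
    -0.5 * np.sum((P_obs - nominal_stress(m, lam)) ** 2) / noise_sd ** 2
    for m in mu_grid
])
post = np.exp(log_post - log_post.max())
dmu = mu_grid[1] - mu_grid[0]
post /= post.sum() * dmu                   # normalize the posterior density
mu_mean = (mu_grid * post).sum() * dmu     # posterior mean of mu
```

The spread of `post` directly quantifies the identification uncertainty caused by the sparse, noisy data; extending the likelihood (e.g., with a model-discrepancy term) is the kind of model-uncertainty formulation the paper compares.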


2021 · Vol 2 · Author(s): Giuseppe D’Alessio, Alberto Cuoci, Alessandro Parente

Abstract The integration of Artificial Neural Networks (ANNs) and Feature Extraction (FE) in the context of the Sample-Partitioning Adaptive Reduced Chemistry approach was investigated in this work, to increase the on-the-fly classification accuracy for very large thermochemical states. The proposed methodology was first compared with an on-the-fly classifier based on the Principal Component Analysis reconstruction error, as well as with a standard ANN (s-ANN) classifier operating on the full thermochemical space, for the adaptive simulation of a steady laminar flame fed with a nitrogen-diluted stream of n-heptane in air. The numerical simulations were carried out with a kinetic mechanism accounting for 172 species and 6,067 reactions, which includes the chemistry of Polycyclic Aromatic Hydrocarbons (PAHs) up to C$_{20}$. Among all the aforementioned classifiers, the one exploiting the combination of an FE step with an ANN proved to be the most efficient for the classification of high-dimensional spaces, leading to a higher speed-up factor and a higher accuracy of the adaptive simulation in the description of the PAH and soot-precursor chemistry. Finally, the investigation of the classifier's performance was extended to flames with boundary conditions different from those of the training flame, obtained by imposing a higher Reynolds number or time-dependent sinusoidal perturbations. Satisfactory results were observed on all the test flames.
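
The PCA reconstruction-error baseline can be sketched as follows: a query state is assigned to the cluster whose local PCA subspace reconstructs it with the smallest error. The synthetic clusters, dimensions, and choice of q below are arbitrary stand-ins for real thermochemical states.

```python
import numpy as np

def fit_local_pca(clusters, q):
    """For each cluster of training states, store its mean and the top-q
    principal directions (via SVD of the centered data)."""
    bases = []
    for X in clusters:
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        bases.append((mean, Vt[:q]))
    return bases

def classify(x, bases):
    """Assign x to the cluster whose PCA subspace reconstructs it with
    the smallest error (the on-the-fly reconstruction-error criterion)."""
    errors = []
    for mean, V in bases:
        xc = x - mean
        recon = V.T @ (V @ xc)           # projection onto the q-dim subspace
        errors.append(np.linalg.norm(xc - recon))
    return int(np.argmin(errors))

rng = np.random.default_rng(3)
# Two synthetic clusters living near different 2D planes in 10D space.
c0 = rng.normal(0, 1, (200, 2)) @ rng.normal(0, 1, (2, 10))
c1 = 5.0 + rng.normal(0, 1, (200, 2)) @ rng.normal(0, 1, (2, 10))
bases = fit_local_pca([c0, c1], q=2)
```

Replacing this distance computation with a trained classifier (an ANN on FE-compressed inputs) is the substitution the paper evaluates for high-dimensional states.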


2021 · Vol 2 · Author(s): George Tsialiamanis, David J. Wagg, Nikolaos Dervilis, Keith Worden

Abstract A framework is proposed for generative models as a basis for digital twins or mirrors of structures. The proposal is based on the premise that deterministic models cannot account for the uncertainty present in most structural modeling applications. Two different types of generative models are considered here. The first is a physics-based model based on the stochastic finite element (SFE) method, which is widely used when modeling structures that have material and loading uncertainties imposed. Such models can be calibrated according to data from the structure and would be expected to outperform any other model if the modeling accurately captures the true underlying physics of the structure. The potential use of SFE models as digital mirrors is illustrated via application to a linear structure with stochastic material properties. For situations where the physical formulation of such models does not suffice, a data-driven framework is proposed, using machine learning and conditional generative adversarial networks (cGANs). The latter algorithm is used to learn the distribution of the quantity of interest in a structure with material nonlinearities and uncertainties. For the examples considered in this work, the data-driven cGAN model outperforms the physics-based approach. Finally, an example is shown where the two methods are coupled such that a hybrid model approach is demonstrated.
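
The physics-based branch can be illustrated with a minimal SFE-flavoured Monte Carlo sketch: a 1D bar of axial elements with lognormal stiffnesses (all distributions and sizes invented for the example), propagating material uncertainty through an assembled stiffness matrix to the distribution of the quantity of interest.

```python
import numpy as np

def tip_displacement(k, force=1.0):
    """Assemble and solve K u = f for a 1D bar of axial springs k,
    fixed at one end and loaded by `force` at the free tip."""
    n = k.size
    K = np.zeros((n, n))
    K[0, 0] += k[0]                        # element attached to the fixed end
    for e in range(1, n):                  # standard two-node assembly
        K[e - 1, e - 1] += k[e]
        K[e, e] += k[e]
        K[e - 1, e] -= k[e]
        K[e, e - 1] -= k[e]
    f = np.zeros(n)
    f[-1] = force
    return np.linalg.solve(K, f)[-1]

# Material uncertainty: element stiffnesses drawn from a lognormal
# distribution; Monte Carlo over the model gives the distribution of the
# quantity of interest (tip displacement).
rng = np.random.default_rng(4)
samples = rng.lognormal(0.0, 0.2, size=(2000, 10))
u = np.array([tip_displacement(k) for k in samples])
u_mean, u_std = u.mean(), u.std()
```

A cGAN would instead learn the conditional distribution of `u` directly from data, with no stiffness assembly, which is the trade-off the paper examines.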


2021 · Vol 2 · Author(s): Timothy Peter Davis

Abstract We explore the concept of parameter design applied to the production of glass beads in the manufacture of metal-encapsulated transistors. The main motivation is to complete the analysis hinted at in the original publication by Jim Morrison in 1957, which was an early example of discussing the idea of transmitted variation in engineering design, and an influential paper in the development of analytic parameter design as a data-centric engineering activity. Parameter design is a secondary design activity focused on selecting the nominals of the design variables to achieve the required target performance and to simultaneously reduce the variance around the target. Although the 1957 paper is not recent, its approach to engineering design is modern.
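
The transmitted-variation idea admits a compact sketch: for a response y = f(x), noise in a design variable x around its nominal x0 transmits to y with standard deviation approximately |f'(x0)| * sd(x), so parameter design favours nominals where the sensitivity is small. The response curve below is hypothetical, chosen only to contrast a steep and a flat operating point.

```python
import numpy as np

# Hypothetical response curve; any smooth f(x) works for the argument.
def f(x):
    return np.sin(x) + 0.1 * x

def transmitted_sd(x0, sd_x, h=1e-5):
    """First-order transmitted variation: sd(y) ~ |f'(x0)| * sd(x),
    with the slope estimated by a central difference."""
    slope = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return abs(slope) * sd_x

sd_x = 0.05
# On the steep part of the curve (x0 = 0, slope ~ 1.1) the input noise is
# amplified; near the flat region (x0 = pi/2, slope ~ 0.1) the same noise
# transmits far less variation to the response.
sd_steep = transmitted_sd(0.0, sd_x)
sd_flat = transmitted_sd(np.pi / 2, sd_x)
```

Choosing the nominal at the flat region hits the target with roughly an order of magnitude less transmitted variance, which is exactly the variance-reduction half of parameter design.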


2021 · Vol 2 · Author(s): Ali Girayhan Özbay, Arash Hamzehloo, Sylvain Laizet, Panagiotis Tzirakis, Georgios Rizos, ...

Abstract The Poisson equation is commonly encountered in engineering, for instance, in computational fluid dynamics (CFD) where it is needed to compute corrections to the pressure field to ensure the incompressibility of the velocity field. In the present work, we propose a novel fully convolutional neural network (CNN) architecture to infer the solution of the Poisson equation on a 2D Cartesian grid with different resolutions given the right-hand side term, arbitrary boundary conditions, and grid parameters. It provides unprecedented versatility for a CNN approach dealing with partial differential equations. The boundary conditions are handled using a novel approach by decomposing the original Poisson problem into a homogeneous Poisson problem plus four inhomogeneous Laplace subproblems. The model is trained using a novel loss function approximating the continuous $L^p$ norm between the prediction and the target. Even when predicting on grids denser than previously encountered, our model demonstrates encouraging capacity to reproduce the correct solution profile. The proposed model, which outperforms well-known neural network models, can be included in a CFD solver to help with solving the Poisson equation. Analytical test cases indicate that our CNN architecture is capable of predicting the correct solution of a Poisson problem with mean percentage errors below 10%, an improvement over the first step of conventional iterative methods. Predictions from our model, used as the initial guess to iterative algorithms such as multigrid, can reduce the root mean square error after a single iteration by more than 90% compared to a zero initial guess.
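
The benefit of a good initial guess can be sketched with a classical smoother: plain Jacobi here rather than multigrid, and a noisy copy of the exact solution standing in for the CNN prediction, both simplifications for illustration.

```python
import numpy as np

# 2D Poisson problem -lap(u) = rhs on the unit square, u = 0 on the
# boundary, discretized with the standard 5-point stencil.
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
rhs = 2.0 * np.pi ** 2 * u_exact          # -lap(u_exact) = rhs

def jacobi_step(u, rhs, h):
    """One Jacobi sweep of the 5-point discrete Poisson problem."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        + h ** 2 * rhs[1:-1, 1:-1]
    )
    return new

def rmse(u):
    return np.sqrt(np.mean((u - u_exact) ** 2))

# Surrogate "prediction": the exact solution corrupted by 5% noise.
rng = np.random.default_rng(5)
guess = u_exact * (1.0 + 0.05 * rng.standard_normal(u_exact.shape))
guess[0, :] = guess[-1, :] = guess[:, 0] = guess[:, -1] = 0.0

err_from_guess = rmse(jacobi_step(guess, rhs, h))
err_from_zero = rmse(jacobi_step(np.zeros_like(u_exact), rhs, h))
```

One sweep starting from the informed guess leaves a far smaller error than one sweep from a zero field, which is the mechanism behind the reported error reduction when the CNN prediction seeds an iterative solver.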

