Data-Centric Engineering
Latest Publications


TOTAL DOCUMENTS: 41 (five years: 41)

H-INDEX: 1 (five years: 1)

Published by Cambridge University Press (CUP)

ISSN: 2632-6736

2021 ◽  
Vol 2 ◽  
Author(s):  
Nikolaos Papadimas ◽  
Timothy Dodwell

Abstract This article recasts the traditional challenge of calibrating a material constitutive model into a hierarchical probabilistic framework. We consider a Bayesian framework where material parameters are assigned distributions, which are then updated given experimental data. Importantly, in a true engineering setting, we are not interested in inferring the parameters for a single experiment, but rather in inferring the model parameters over the population of possible experimental samples. In doing so, we seek also to capture the inherent variability of the material from coupon to coupon, as well as uncertainties around the repeatability of the test. In this article, we address this problem using a hierarchical Bayesian model. However, a vanilla computational approach is prohibitively expensive. Our strategy marginalizes over each individual experiment, reducing the dimension of the inference problem to the hyperparameters alone, that is, the parameters describing the population statistics of the material model. This marginalization step requires us to derive an approximate likelihood, for which we exploit an emulator (built offline prior to sampling) and Bayesian quadrature, allowing us to capture the uncertainty in this numerical approximation. Importantly, our approach renders hierarchical Bayesian calibration of material models computationally feasible. The approach is tested on two examples. The first is a compression test of a simple spring model using synthetic data; the second, a more complex example, uses real experimental data to fit a stochastic elastoplastic model for 3D-printed steel.
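As a rough illustration of the marginalization idea (not the authors' emulator-plus-Bayesian-quadrature machinery), the sketch below approximates the marginal likelihood of the hyperparameters by Monte Carlo integration over the per-experiment parameters, then samples those hyperparameters with random-walk Metropolis. The spring data model, priors, and all numerical settings are made-up stand-ins.

```python
# Minimal sketch: hierarchical Bayesian calibration by marginalizing
# over per-experiment parameters. The "spring" data model and all
# numbers are illustrative stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic coupon data: one stiffness measurement per experiment,
# drawn from a population N(mu=5, sd=0.5) with test noise sd=0.2.
true_mu, true_sd, noise_sd = 5.0, 0.5, 0.2
n_exp = 12
theta_true = rng.normal(true_mu, true_sd, n_exp)
y = theta_true + rng.normal(0.0, noise_sd, n_exp)

def log_marginal_lik(mu, sd, n_mc=2000):
    """log p(y | mu, sd): integrate out each experiment's theta by MC.
    (The paper replaces this brute-force step with an emulator plus
    Bayesian quadrature, so the numerical error is itself quantified.)"""
    if sd <= 0:
        return -np.inf
    theta = rng.normal(mu, sd, (n_mc, 1))            # population draws
    ll = -0.5 * ((y - theta) / noise_sd) ** 2        # per draw, per experiment
    ll -= 0.5 * np.log(2 * np.pi * noise_sd ** 2)
    # log-mean-exp over MC draws, summed over independent experiments
    m = ll.max(axis=0)
    return float(np.sum(m + np.log(np.exp(ll - m).mean(axis=0))))

# Random-walk Metropolis over the two hyperparameters only. Reusing the
# stored estimate for the current state makes this pseudo-marginal MCMC.
mu, sd = 4.0, 1.0
log_post = log_marginal_lik(mu, sd)
samples = []
for _ in range(3000):
    mu_p, sd_p = mu + 0.1 * rng.normal(), sd + 0.05 * rng.normal()
    lp = log_marginal_lik(mu_p, sd_p)
    if np.log(rng.uniform()) < lp - log_post:        # flat priors assumed
        mu, sd, log_post = mu_p, sd_p, lp
    samples.append((mu, sd))

print("posterior mean of (mu, sd):", np.mean(samples[500:], axis=0))
```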


2021 ◽  
Vol 2 ◽  
Author(s):  
Muhammad I. Zafar ◽  
Meelan M. Choudhari ◽  
Pedro Paredes ◽  
Heng Xiao

Abstract Accurate prediction of laminar-turbulent transition is a critical element of computational fluid dynamics simulations for aerodynamic design across multiple flow regimes. Traditional methods of transition prediction cannot be easily extended to flow configurations where the transition process depends on a large set of parameters. In comparison, neural network methods allow higher-dimensional input features to be considered without compromising the efficiency and accuracy of traditional data-driven models. Neural network methods proposed earlier follow a cumbersome methodology: predicting instability growth rates over a broad range of frequencies, processing these to obtain the N-factor envelope, and then locating transition based on the correlating N-factor. This paper presents an end-to-end transition model based on a recurrent neural network, which sequentially processes the mean boundary-layer profiles along the surface of the aerodynamic body to directly predict the N-factor envelope and the transition locations over a two-dimensional airfoil. The proposed transition model has been developed and assessed using a large database of 53 airfoils over a wide range of chord Reynolds numbers and angles of attack. The large universe of airfoils encountered in various applications causes additional difficulties, so we provide further insights on selecting training datasets from large amounts of available data. Although the proposed model has been analyzed for two-dimensional boundary layers in this paper, it can be easily generalized to other flows thanks to the embedded feature-extraction capability of the convolutional neural network in the model.
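A minimal sketch of the kind of architecture the abstract describes, assuming PyTorch: a 1D convolutional feature extractor applied to each mean boundary-layer profile, an LSTM marching along the surface stations, and a linear head emitting the N-factor envelope. Channel counts, layer sizes, and input shapes are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a CNN + RNN transition model: a 1D convolution extracts
# features from each boundary-layer profile, and an LSTM marches along
# the surface to emit the N-factor envelope station by station.
import torch
import torch.nn as nn

class TransitionRNN(nn.Module):
    def __init__(self, n_channels=3, hidden=128):
        super().__init__()
        # Per-station feature extractor over the wall-normal direction
        # (channels could be, e.g., velocity, temperature, their gradients).
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AvgPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),       # -> 32*4 features
        )
        self.rnn = nn.LSTM(32 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                 # N-factor per station

    def forward(self, profiles):
        # profiles: (batch, stations, channels, wall_points)
        b, s, c, w = profiles.shape
        f = self.features(profiles.reshape(b * s, c, w)).reshape(b, s, -1)
        h, _ = self.rnn(f)                               # sequential march
        return self.head(h).squeeze(-1)                  # (batch, stations)

model = TransitionRNN()
x = torch.randn(2, 100, 3, 64)      # 2 airfoils, 100 stations, 64 wall points
n_envelope = model(x)               # predicted N-factor envelope
print(n_envelope.shape)             # torch.Size([2, 100])
```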


2021 ◽  
Vol 2 ◽  
Author(s):  
Zhiping Qiu ◽  
Han Wu ◽  
Isaac Elishakoff ◽  
Dongliang Liu

Abstract This paper studies the data-based polyhedron model and its application to uncertain linear optimization of engineering structures. In the absence of information either on probabilistic properties or about membership functions in the fuzzy-sets-based approach, it is more appropriate to quantify the uncertainties by convex polyhedra. First, we introduce the uncertainty quantification method of the convex polyhedron approach and the model modification method based on the Chebyshev inequality. Second, the characteristics of the optimal solution of convex polyhedron linear programming are investigated. The vertex solution of convex polyhedron linear programming is then presented and proven. Next, the application of convex polyhedron linear programming to the static load-bearing capacity problem is introduced. Finally, the effectiveness of the vertex solution is verified by an example of a plane truss bearing problem, and its efficiency is verified by a load-bearing problem of stiffened composite plates.
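A toy illustration of the vertex-solution property the paper proves: a linear objective over a convex polyhedron {x : Ax ≤ b} attains its optimum at a vertex, so enumerating the (finitely many) vertices suffices. The polyhedron and objective below are invented for demonstration, and a generic LP solver is used only as a cross-check.

```python
# Toy illustration of the vertex-solution property: a linear objective
# over a convex polyhedron {x : A x <= b} attains its optimum at a
# vertex. The constraint data and objective below are made up.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([2.0, 2.0, 0.0, 0.0, 3.0])
c = np.array([-1.0, -2.0])                        # minimize c . x

vertices = []
n = A.shape[1]
for rows in combinations(range(len(A)), n):       # every n-subset of constraints
    Asub, bsub = A[list(rows)], b[list(rows)]
    if abs(np.linalg.det(Asub)) < 1e-12:
        continue                                  # constraints not independent
    x = np.linalg.solve(Asub, bsub)               # candidate intersection point
    if np.all(A @ x <= b + 1e-9):                 # keep only feasible points
        vertices.append(x)

best = min(vertices, key=lambda x: c @ x)
print("optimal vertex:", best, "objective:", c @ best)
# Cross-check with a generic LP solver:
print(linprog(c, A_ub=A, b_ub=b).x)
```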


2021 ◽  
Vol 2 ◽  
Author(s):  
Domenic Di Francesco ◽  
Marios Chryssanthopoulos ◽  
Michael Havbro Faber ◽  
Ujjwal Bharadwaj

Abstract Attempts to formalize inspection and monitoring strategies in industry have struggled to combine evidence from multiple sources (including subject matter expertise) in a mathematically coherent way. The perceived requirement for large amounts of data is often cited as the reason that quantitative risk-based inspection is incompatible with the sparse and imperfect information that is typically available to structural integrity engineers. Current industrial guidance is also limited in its methods of distinguishing the quality of inspections, as this is typically based on simplified (qualitative) heuristics. In this paper, Bayesian multi-level (partial pooling) models are proposed as a flexible and transparent method of combining imperfect and incomplete information, to support decision-making regarding the integrity management of in-service structures. This work builds on the established theoretical framework for computing the expected value of information, by allowing for partial pooling between inspection measurements (or groups of measurements). The method is demonstrated on a simulated example of a structure with active corrosion in multiple locations, which acknowledges that the data will be associated with some precision, bias, and reliability. Quantifying the extent to which an inspection of one location can reduce uncertainty in damage models at remote locations is shown to influence many aspects of the expected value of an inspection. These results are considered in the context of current challenges in risk-based structural integrity management.
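One plausible way to set up such a partial-pooling model, sketched in PyMC: per-location wall loss is drawn from a fleet-wide population, and an inspection-precision parameter is inferred jointly. Site counts, priors, and the synthetic measurement model are assumptions for illustration, not the paper's setup.

```python
# Sketch of a partial-pooling (multi-level) model for wall-loss
# measurements at several corrosion locations. All priors and the
# synthetic data model are illustrative assumptions.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
site_idx = np.repeat(np.arange(4), 5)           # 4 locations, 5 readings each
true_loss = np.array([0.8, 1.1, 0.9, 1.4])      # mm of wall loss per site
y = true_loss[site_idx] + rng.normal(0, 0.15, site_idx.size)  # imperfect NDE

with pm.Model() as model:
    mu = pm.Normal("mu", mu=1.0, sigma=1.0)          # population mean loss
    tau = pm.HalfNormal("tau", sigma=0.5)            # between-site spread
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=4)   # per-site loss
    sigma_meas = pm.HalfNormal("sigma_meas", sigma=0.3)     # inspection precision
    pm.Normal("obs", mu=theta[site_idx], sigma=sigma_meas, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Partial pooling: each site's estimate is shrunk toward the population
# mean, so inspecting one location reduces uncertainty at the others.
print(idata.posterior["theta"].mean(dim=("chain", "draw")).values)
```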


2021 ◽  
Vol 2 ◽  
Author(s):  
George Tsialiamanis ◽  
David J. Wagg ◽  
Nikolaos Dervilis ◽  
Keith Worden

Abstract A framework is proposed for generative models as a basis for digital twins or mirrors of structures. The proposal is based on the premise that deterministic models cannot account for the uncertainty present in most structural modeling applications. Two different types of generative models are considered here. The first is a physics-based model built on the stochastic finite element (SFE) method, which is widely used when modeling structures subject to material and loading uncertainties. Such models can be calibrated according to data from the structure and would be expected to outperform any other model if the modeling accurately captures the true underlying physics of the structure. The potential use of SFE models as digital mirrors is illustrated via application to a linear structure with stochastic material properties. For situations where the physical formulation of such models does not suffice, a data-driven framework is proposed, using machine learning and conditional generative adversarial networks (cGANs). The latter algorithm is used to learn the distribution of the quantity of interest in a structure with material nonlinearities and uncertainties. For the examples considered in this work, the data-driven cGAN model outperforms the physics-based approach. Finally, an example is shown where the two methods are coupled, demonstrating a hybrid-model approach.
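A minimal cGAN sketch in PyTorch, showing the mechanics of learning a conditional distribution of a response quantity given a load parameter; the toy data model, network sizes, and training settings are assumptions rather than the paper's setup.

```python
# Minimal cGAN sketch: learn the distribution of a response quantity
# conditioned on a load parameter. Toy data model and sizes are assumed.
import torch
import torch.nn as nn

latent, cond_dim, out_dim = 8, 1, 1

G = nn.Sequential(nn.Linear(latent + cond_dim, 32), nn.ReLU(),
                  nn.Linear(32, out_dim))
D = nn.Sequential(nn.Linear(out_dim + cond_dim, 32), nn.ReLU(),
                  nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def toy_response(load):
    # Stand-in "structure": heteroscedastic nonlinear response.
    return load ** 2 + (0.1 + 0.2 * load.abs()) * torch.randn_like(load)

for step in range(2000):
    load = torch.rand(64, cond_dim) * 2 - 1
    real = toy_response(load)
    fake = G(torch.cat([torch.randn(64, latent), load], dim=1))

    # Discriminator: real vs generated samples, given the condition.
    d_loss = (bce(D(torch.cat([real, load], 1)), torch.ones(64, 1)) +
              bce(D(torch.cat([fake.detach(), load], 1)), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator.
    g_loss = bce(D(torch.cat([fake, load], 1)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample the learned conditional distribution at a fixed load.
z = torch.randn(1000, latent)
cond = torch.full((1000, cond_dim), 0.5)
samples = G(torch.cat([z, cond], 1))
print(samples.mean().item(), samples.std().item())
```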


2021 ◽  
Vol 2 ◽  
Author(s):  
Milad Zeraatpisheh ◽  
Stephane P.A. Bordas ◽  
Lars A.A. Beex

Abstract Patient-specific surgical simulations require the patient-specific identification of the constitutive parameters. The sparsity of the experimental data and the substantial noise in the data (e.g., recovered during surgery) cause considerable uncertainty in the identification. In this exploratory work, parameter uncertainty for incompressible hyperelasticity, often used for soft tissues, is addressed by a probabilistic identification approach based on Bayesian inference. Our study particularly focuses on the uncertainty of the model: we investigate how the identified uncertainties of the constitutive parameters behave when different forms of model uncertainty are considered. The model uncertainty formulations range from uninformative ones to more accurate ones that incorporate more detailed extensions of incompressible hyperelasticity. The study shows that incorporating model uncertainty may improve the results, but this is not guaranteed.
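To make the setup concrete, here is a minimal sketch (not the authors' formulation) of Bayesian identification for an incompressible neo-Hookean material under uniaxial stretch, where a total measurement-plus-model uncertainty is inferred alongside the shear modulus. Data, priors, and noise levels are invented.

```python
# Sketch: Bayesian identification of the shear modulus mu of an
# incompressible neo-Hookean solid (uniaxial Cauchy stress
# sigma = mu * (lambda^2 - 1/lambda)) from sparse, noisy data, with an
# uncertainty scale s inferred jointly. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
lam = np.linspace(1.05, 1.6, 15)                  # applied stretches
sigma_true = 12.0 * (lam ** 2 - 1.0 / lam)        # kPa, neo-Hookean
y = sigma_true + rng.normal(0, 1.0, lam.size)     # sparse, noisy data

def log_post(mu, s):
    """Flat prior on mu > 0; weak half-normal-style prior on the total
    (measurement + model) uncertainty s > 0."""
    if mu <= 0 or s <= 0:
        return -np.inf
    resid = y - mu * (lam ** 2 - 1.0 / lam)
    return (-0.5 * np.sum((resid / s) ** 2) - lam.size * np.log(s)
            - 0.5 * (s / 2.0) ** 2)

# Random-walk Metropolis over (mu, s).
mu, s = 5.0, 2.0
lp = log_post(mu, s)
chain = []
for _ in range(20000):
    mu_p, s_p = mu + 0.3 * rng.normal(), s + 0.1 * rng.normal()
    lp_p = log_post(mu_p, s_p)
    if np.log(rng.uniform()) < lp_p - lp:
        mu, s, lp = mu_p, s_p, lp_p
    chain.append((mu, s))

post = np.array(chain[5000:])
print("mu: %.2f +/- %.2f kPa" % (post[:, 0].mean(), post[:, 0].std()))
```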


2021 ◽  
Vol 2 ◽  
Author(s):  
Giuseppe D’Alessio ◽  
Alberto Cuoci ◽  
Alessandro Parente

Abstract The integration of Artificial Neural Networks (ANNs) and Feature Extraction (FE) in the context of the Sample-Partitioning Adaptive Reduced Chemistry approach was investigated in this work, to increase the on-the-fly classification accuracy for very large thermochemical states. The proposed methodology was first compared with an on-the-fly classifier based on the Principal Component Analysis reconstruction error, as well as with a standard ANN (s-ANN) classifier operating on the full thermochemical space, for the adaptive simulation of a steady laminar flame fed with a nitrogen-diluted stream of n-heptane in air. The numerical simulations were carried out with a kinetic mechanism accounting for 172 species and 6,067 reactions, which includes the chemistry of Polycyclic Aromatic Hydrocarbons (PAHs) up to C₂₀. Among all the aforementioned classifiers, the one exploiting the combination of an FE step with an ANN proved to be the most efficient for the classification of high-dimensional spaces, leading to a higher speed-up factor and a higher accuracy of the adaptive simulation in the description of the PAH and soot-precursor chemistry. Finally, the investigation of the classifier's performance was extended to flames with boundary conditions different from those of the training flame, obtained by imposing a higher Reynolds number or time-dependent sinusoidal perturbations. Satisfactory results were observed for all the test flames.
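A schematic of the FE-plus-ANN classification idea, assuming scikit-learn: PCA compresses high-dimensional thermochemical state vectors before a small neural-network classifier assigns each state to a cluster. Synthetic blobs stand in for the flame states; dimensions and cluster counts are illustrative.

```python
# Sketch of the FE + ANN classifier idea: compress high-dimensional
# thermochemical states with PCA before a small neural-network
# classifier assigns each state to a cluster. Synthetic blobs stand in
# for real flame states; all sizes are illustrative.
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# 173-dimensional "states" (e.g., 172 species + temperature), 8 clusters.
X, labels = make_blobs(n_samples=4000, n_features=173, centers=8,
                       cluster_std=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = make_pipeline(
    PCA(n_components=10),                       # feature-extraction step
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
clf.fit(X_tr, y_tr)
print("on-the-fly classification accuracy:", clf.score(X_te, y_te))
```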


2021 ◽  
Vol 2 ◽  
Author(s):  
Abel Sancarlos ◽  
Morgan Cameron ◽  
Jean-Marc Le Peuvedic ◽  
Juliette Groulier ◽  
Jean-Louis Duval ◽  
...  

Abstract The concept of the "hybrid twin" (HT) has recently received growing interest thanks to the availability of powerful machine learning techniques. The HT combines physics-based models within a model order reduction framework (to obtain real-time feedback rates) with data science. Thus, the main idea of the HT is to develop on-the-fly data-driven models to correct possible deviations between measurements and physics-based model predictions. This paper focuses on the computation of stable, fast, and accurate corrections in the HT framework. Furthermore, regarding the delicate and important problem of stability, a new approach is proposed that introduces several subvariants and guarantees low computational cost as well as stable time integration.
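A crude sketch of the correction idea under stated assumptions: the residual between measurements and the reduced physics model is fitted with a discrete-time linear map, and stability of its time integration is enforced by rescaling the spectral radius below one. Both the toy system and this particular stabilization trick are stand-ins for the paper's actual subvariants.

```python
# Sketch of the hybrid-twin correction: fit a data-driven model to the
# residual between measurements and the physics model, and force the
# learned discrete-time dynamics to be stable. Toy system and the
# spectral-radius rescaling are stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(3)

# "Measurements": residuals between plant and physics model at each step,
# generated here by a linear map (unknown to the fitter) plus noise.
A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])
R = [rng.normal(size=2)]
for _ in range(200):
    R.append(A_true @ R[-1] + 0.01 * rng.normal(size=2))
R = np.array(R)

# Least-squares fit of the one-step residual dynamics r_{k+1} = A r_k.
X, Y = R[:-1], R[1:]
A_fit = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Stability guard: shrink eigenvalues inside the unit circle if needed,
# so time integration of the correction cannot blow up.
rho = max(abs(np.linalg.eigvals(A_fit)))
if rho >= 1.0:
    A_fit *= 0.99 / rho

# Hybrid prediction at step k+1 = physics prediction + A_fit @ r_k.
print("fitted correction:\n", A_fit)
print("spectral radius:", max(abs(np.linalg.eigvals(A_fit))))
```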


2021 ◽  
Vol 2 ◽  
Author(s):  
Aleksandra Svalova ◽  
Peter Helm ◽  
Dennis Prangle ◽  
Mohamed Rouainia ◽  
Stephanie Glendinning ◽  
...  

Abstract We propose using fully Bayesian Gaussian process emulation (GPE) as a surrogate for expensive computer experiments of transport infrastructure cut slopes in high-plasticity clay soils, which are associated with an increased risk of failure. Our deterioration experiments simulate the dissipation of excess pore water pressure and seasonal pore water pressure cycles to determine slope failure time. It is impractical to perform the number of computer simulations that would be sufficient to make slope stability predictions over a meaningful range of geometries and strength parameters. Therefore, a GPE is used as an interpolator over a set of optimally spaced simulator runs, modeling the time to slope failure as a function of geometry, strength, and permeability. Bayesian inference and Markov chain Monte Carlo simulation are used to obtain posterior estimates of the GPE parameters. For the experiments that do not reach failure within the model time of 184 years, the time to failure is stochastically imputed by the Bayesian model. The trained GPE has the potential to inform infrastructure slope design, management, and maintenance. The reduction in computational cost compared with the original simulator makes it a highly attractive tool that can be applied across the different spatio-temporal scales of transport networks.
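A sketch of the emulation step using scikit-learn's Gaussian process regressor, which fits kernel hyperparameters by maximum likelihood (a shortcut relative to the fully Bayesian MCMC treatment in the paper); the "simulator" and its three normalized inputs are made up.

```python
# Sketch of the emulation step: a Gaussian process interpolates time to
# slope failure over a small design of simulator runs. Note this fits
# hyperparameters by maximum likelihood rather than the paper's fully
# Bayesian MCMC. The "simulator" below is a made-up stand-in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def fake_simulator(x):
    # Inputs (normalized): slope geometry, strength, log-permeability.
    angle, strength, log_perm = x.T
    return 184.0 / (1.0 + np.exp(2.0 * (angle - strength) + 0.3 * log_perm))

# Space-filling-ish design of 40 expensive runs over normalized inputs.
X = rng.uniform(0, 1, (40, 3))
t_fail = fake_simulator(X)

gpe = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3, 0.3]),
    normalize_y=True,
)
gpe.fit(X, t_fail)

# Cheap predictions with uncertainty, in place of new simulator runs.
X_new = rng.uniform(0, 1, (5, 3))
mean, sd = gpe.predict(X_new, return_std=True)
print(np.c_[mean, sd])
```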

