Towards the Development of an Operational Digital Twin

Vibration ◽  
2020 ◽  
Vol 3 (3) ◽  
pp. 235-265
Author(s):  
Paul Gardner ◽  
Mattia Dal Borgo ◽  
Valentina Ruffini ◽  
Aidan J. Hughes ◽  
Yichen Zhu ◽  
...  

A digital twin is a powerful new concept in computational modelling that aims to produce a one-to-one mapping of a physical structure, operating in a specific context, into the digital domain. The development of a digital twin provides clear benefits in improved predictive performance and in aiding robust decision making for operators and asset managers. One key feature of a digital twin is the ability to improve its predictive performance over time, via updates to the digital twin itself. An important secondary function is the ability to inform the user when predictive performance will be poor. If regions of poor performance are identified, the digital twin must offer a course of action for improving its predictive capabilities. In this paper, three sources of improvement are investigated: (i) better estimates of the model parameters, (ii) adding/updating a data-based component to model unknown physics, and (iii) the addition of more physics-based modelling into the digital twin. These three courses of action (along with taking no further action) are investigated through a probabilistic modelling approach, where the confidence of the current digital twin is used to inform when an action is required. In addition to addressing how a digital twin targets improvement in predictive performance, this paper also considers the implications of utilising a digital twin in a control context, particularly when the digital twin identifies poor performance of the underlying modelling assumptions. The framework is applied to a three-storey shear structure, where the objective is to construct a digital twin that predicts the acceleration response at each of the three floors given an unknown (and hence, unmodelled) structural state, caused by a contact nonlinearity between the upper two floors. This is intended to represent a realistic challenge for a digital twin: the case where the physical twin degrades with age and the digital twin must make predictions in the presence of physics that were unforeseen during the original model development phase.
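As an illustration of the confidence-driven decision making described above (not the paper's actual framework), the sketch below uses the predictive standard deviation of a Gaussian process surrogate to flag when the digital twin should be improved; the data, kernel and threshold are hypothetical.

```python
# Minimal sketch (not the paper's implementation): use the predictive
# uncertainty of a probabilistic surrogate to decide whether the digital
# twin needs improvement. Data, kernel, and threshold are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training data: excitation amplitude -> floor acceleration.
x_train = rng.uniform(0.0, 5.0, size=(30, 1))
y_train = np.sin(x_train).ravel() + 0.05 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x_train, y_train)

# Query a new operating condition outside the training range.
x_new = np.array([[8.0]])
mean, std = gp.predict(x_new, return_std=True)

CONFIDENCE_THRESHOLD = 0.2  # hypothetical tolerance on predictive std
if std[0] > CONFIDENCE_THRESHOLD:
    # Low confidence: flag that the digital twin should be improved,
    # e.g. by re-estimating parameters, updating the data-based
    # component, or adding physics-based modelling.
    print(f"Prediction {mean[0]:.3f} uncertain (std={std[0]:.3f}): update twin")
else:
    print(f"Prediction {mean[0]:.3f} accepted (std={std[0]:.3f})")
```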

2021 ◽  
Vol 11 (15) ◽  
pp. 6998
Author(s):  
Qiuying Li ◽  
Hoang Pham

Many NHPP software reliability growth models (SRGMs) have been proposed over the past 40 years to assess software reliability, but most of them handle the fault detection process (FDP) in one of two ways. The first is to ignore the fault correction process (FCP), i.e., to assume that faults are removed instantaneously once the failures they cause are detected. In real software development, however, this assumption is not realistic, as fault removal takes time: the faults causing failures cannot always be removed at once, and detected failures become increasingly difficult to correct as testing progresses. The second way is to model the fault correction process explicitly through the time delay between fault detection and fault correction, where the delay has been assumed to be constant, a function of time, or a random variable following some distribution. In this paper, some useful approaches to the modeling of dual fault detection and correction processes are discussed. The dependencies between the fault counts of the two processes are considered instead of a fault correction time delay. A model is proposed that integrates the fault detection and fault correction processes and incorporates a fault introduction rate and a testing coverage rate into the software reliability evaluation. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of the proposed model and of other existing NHPP SRGMs is investigated using three real data sets and four criteria. The results show that the new model yields significantly better reliability estimation and prediction.
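The abstract does not give the proposed model's mean value function, so the sketch below only illustrates the general LSE-fitting step using the classic Goel-Okumoto NHPP form m(t) = a(1 - e^{-bt}) on hypothetical cumulative fault-count data.

```python
# Illustrative only: a classic Goel-Okumoto NHPP mean value function
# m(t) = a(1 - e^{-bt}) is fitted by least squares to hypothetical
# cumulative fault-count data (not the paper's proposed model).
import numpy as np
from scipy.optimize import curve_fit

def mean_value_function(t, a, b):
    """Expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test data: week index vs. cumulative faults detected.
t = np.arange(1, 13)
faults = np.array([12, 22, 31, 38, 44, 49, 53, 56, 59, 61, 62, 63])

params, _ = curve_fit(mean_value_function, t, faults, p0=(70.0, 0.2))
a_hat, b_hat = params
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")

# Simple descriptive-fit criterion (mean squared error), as one example
# of the kinds of criteria such comparisons typically use.
mse = np.mean((faults - mean_value_function(t, a_hat, b_hat)) ** 2)
print(f"MSE = {mse:.2f}")
```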


Transport ◽  
2009 ◽  
Vol 24 (2) ◽  
pp. 135-142 ◽  
Author(s):  
Ali Payıdar Akgüngör ◽  
Erdem Doğan

This study proposes an Artificial Neural Network (ANN) model and a Genetic Algorithm (GA) model to estimate the number of accidents (A), fatalities (F) and injuries (I) in Ankara, Turkey, utilizing data obtained between 1986 and 2005. For model development, the number of vehicles (N), fatalities, injuries, accidents and population (P) were selected as model parameters. In the ANN model, sigmoid and linear activation functions were used with the feed-forward back-propagation algorithm. In the GA approach, two genetic algorithm models, one with a linear and one with an exponential mathematical form, were developed. The results of the GA model showed that the exponential form was suitable for estimating the number of accidents and fatalities, while the linear form was the most appropriate for predicting the number of injuries. The best-fitting model, with the lowest mean absolute error (MAE) between the observed and estimated values, was selected for future estimations. The comparison of the model results indicated that the performance of the ANN model was better than that of the GA model. To investigate the performance of the ANN model for future estimations, a fifteen-year period from 2006 to 2020 with two possible scenarios was employed. In the first scenario, the annual average growth rates of the population and the number of vehicles are assumed to be 2.0% and 7.5%, respectively. In the second scenario, the average number of vehicles per capita is assumed to reach 0.60, which represents approximately a two-and-a-half-fold increase over fifteen years. The results obtained from both scenarios reveal the suitability of the current methods for road safety applications.
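A minimal sketch of the ANN-versus-linear comparison by MAE, assuming hypothetical vehicle/population data and a small sigmoid-hidden-layer network; it does not reproduce the authors' model forms or the Ankara dataset.

```python
# Sketch, not the authors' exact models: a small feed-forward ANN with a
# sigmoid hidden layer versus a linear model, compared by mean absolute
# error on hypothetical (vehicles, population) -> accident data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_vehicles = rng.uniform(0.5e6, 2.0e6, size=20)
population = rng.uniform(2.5e6, 4.5e6, size=20)
X = np.column_stack([n_vehicles, population])
accidents = 1e-3 * n_vehicles + 5e-4 * population + rng.normal(0, 100, 20)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                 solver="lbfgs", max_iter=5000, random_state=0),
).fit(X, accidents)
lin = LinearRegression().fit(X, accidents)

for name, model in [("ANN", ann), ("linear", lin)]:
    mae = mean_absolute_error(accidents, model.predict(X))
    print(f"{name} MAE: {mae:.1f}")
```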


2021 ◽  
Author(s):  
Christian Zeman ◽  
Christoph Schär

Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are constantly subject to change, thanks to advances in computer systems, numerical methods, and the ever-increasing knowledge about Earth's atmosphere. Many of the changes in today's models relate to seemingly innocuous modifications, associated with minor code rearrangements, changes in hardware infrastructure, or software upgrades. Such changes are meant to preserve the model formulation, yet their verification is challenged by the chaotic nature of the atmosphere: any small change, even a rounding error, can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework.

Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model, or to its underlying hardware/software system, by using ensemble simulations in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time, and can also be used with different hypothesis tests.

We present first applications of the methodology with the regional weather and climate model COSMO. The changes considered include a major system upgrade of the supercomputer used, the change from double- to single-precision floating-point representation, changes in the update frequency of the lateral boundary conditions, and tiny changes to selected model parameters. While providing very robust results, the methodology also shows a large sensitivity to more significant model changes, making it a good candidate for an automated tool to guarantee model consistency in the development cycle.
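A minimal sketch of the ensemble-verification idea, assuming synthetic ensembles and a two-sample Kolmogorov-Smirnov test as the hypothesis test (the methodology is test-agnostic); it is not the COSMO tooling itself.

```python
# Minimal sketch of ensemble-based change verification, not the COSMO
# tooling itself: compare two ensembles of a model output variable with a
# two-sample hypothesis test. The ensembles here are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical ensembles of, e.g., domain-mean 2 m temperature at one lead
# time: a reference model version and a version with a minor change.
reference = 285.0 + 0.5 * rng.standard_normal(50)
modified = 285.0 + 0.5 * rng.standard_normal(50)

res = ks_2samp(reference, modified)
ALPHA = 0.05
if res.pvalue < ALPHA:
    print(f"p={res.pvalue:.3f}: ensembles differ; change is not neutral")
else:
    print(f"p={res.pvalue:.3f}: no evidence the change altered the model statistics")
```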


2015 ◽  
Vol 8 (10) ◽  
pp. 3441-3470 ◽  
Author(s):  
J. A. Bradley ◽  
A. M. Anesio ◽  
J. S. Singarayer ◽  
M. R. Heath ◽  
S. Arndt

Abstract. SHIMMER (Soil biogeocHemIcal Model for Microbial Ecosystem Response) is a new numerical modelling framework designed to simulate microbial dynamics and biogeochemical cycling during initial ecosystem development in glacier forefield soils. It is also transferable to other extreme ecosystem types, such as desert soils or the surface of glaciers. The rationale for model development arises from decades of empirical observations in glacier forefields, and the model enables a quantitative and process-focussed approach. Here, we provide a detailed description of SHIMMER, test its performance in two case study forefields, the Damma Glacier (Switzerland) and the Athabasca Glacier (Canada), and carry out a sensitivity analysis to identify the most sensitive and least constrained model parameters. Results show that the accumulation of microbial biomass is highly dependent on variation in the microbial growth and death rate constants, Q10 values, the active fraction of microbial biomass and the reactivity of organic matter. The model correctly predicts the rapid accumulation of microbial biomass observed during the initial stages of succession in the forefields of both case study systems. Primary production is responsible for the initial build-up of labile substrate that subsequently supports heterotrophic growth; however, allochthonous contributions of organic matter, and nitrogen fixation, are important in sustaining this productivity. The development and application of SHIMMER also highlights aspects of these systems that require further empirical research: quantifying nutrient budgets and biogeochemical rates, and exploring seasonality, microbial growth and cell death. This will lead to an increased understanding of how glacier forefields contribute to global biogeochemical cycling and climate under future ice retreat.
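A toy illustration of the kind of process the sensitivity results refer to, not the SHIMMER equations: a single microbial biomass pool growing on a labile carbon pool supplied by primary production, with a Q10 temperature modifier on the growth rate. All parameter values are hypothetical.

```python
# Illustrative toy model, not the SHIMMER equations: microbial biomass B
# growing on a labile carbon pool C supplied by primary production, with a
# Q10 temperature dependence on the growth rate. Parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

MU_MAX = 0.5      # maximum specific growth rate (1/day) at T_REF
DEATH = 0.05      # death rate constant (1/day)
YIELD = 0.3       # growth yield (biomass C per substrate C)
K_C = 10.0        # half-saturation constant (mg C / g soil)
P_IN = 0.2        # labile C input from primary production (mg C / g soil / day)
Q10, T_REF, T = 2.0, 10.0, 4.0   # temperature sensitivity and soil temperature

def rhs(t, y):
    biomass, carbon = y
    f_temp = Q10 ** ((T - T_REF) / 10.0)            # Q10 rate modifier
    uptake = MU_MAX * f_temp * carbon / (K_C + carbon) * biomass
    d_biomass = YIELD * uptake - DEATH * biomass
    d_carbon = P_IN - uptake
    return [d_biomass, d_carbon]

sol = solve_ivp(rhs, (0.0, 365.0), [0.01, 5.0])
print(f"biomass after 1 year: {sol.y[0, -1]:.3f} mg C / g soil")
```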


2021 ◽  
Author(s):  
Hyeyoung Koh ◽  
Hannah Beth Blum

This study presents a machine learning-based approach to sensitivity analysis that examines how parameters affect a given structural response while accounting for uncertainty. Reliability-based sensitivity analysis involves repeated evaluations of the performance function incorporating uncertainties to estimate the influence of a model parameter, which can lead to prohibitive computational costs. This challenge is exacerbated for large-scale engineering problems, which often carry a large number of uncertain parameters. The proposed approach is based on feature selection algorithms that rank feature importance and remove redundant predictors during model development, which improves model generality and training performance by focusing only on the significant features. The approach allows sensitivity analysis of structural systems to be performed by providing feature rankings with reduced computational effort. The proposed approach is demonstrated on two designs of a two-bay, two-story planar steel frame with different failure modes: inelastic instability of a single member and progressive yielding. The feature variables in the data are uncertainties including material yield strength, Young’s modulus, frame sway imperfection, and residual stress. The Monte Carlo sampling method is utilized to generate random realizations of the frames from published distributions of the feature parameters, and the response variable is the frame ultimate strength obtained from finite element analyses. Decision trees are trained to identify important features. Feature rankings are derived by four feature selection techniques: impurity-based importance, permutation importance, SHAP, and Spearman's correlation. The predictive performance of the model including the important features is discussed using an evaluation metric for imbalanced datasets, the Matthews correlation coefficient. Finally, the results are compared with those from reliability-based sensitivity analysis on the same example frames to show the validity of the feature selection approach. As the proposed machine learning-based approach produces the same results as the reliability-based sensitivity analysis with improved computational efficiency and accuracy, it could be extended to other structural systems.
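A sketch of the feature-ranking workflow on synthetic data (not the steel-frame analyses): a decision tree is trained on binary failure labels, features are ranked by impurity-based and permutation importance, and the model is scored with the Matthews correlation coefficient.

```python
# Sketch of the feature-ranking idea on synthetic data, not the frame
# analyses from the paper. Feature names and the limit state are hypothetical.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
# Hypothetical uncertain parameters: yield strength, Young's modulus,
# sway imperfection, residual stress (standardized units).
X = rng.standard_normal((n, 4))
# Hypothetical limit state dominated by yield strength and imperfection.
y = (0.9 * X[:, 0] + 0.4 * X[:, 2] + 0.1 * rng.standard_normal(n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

names = ["fy", "E", "sway_imperfection", "residual_stress"]
print("impurity-based:", dict(zip(names, tree.feature_importances_.round(3))))

perm = permutation_importance(tree, X_te, y_te, n_repeats=20, random_state=0)
print("permutation:   ", dict(zip(names, perm.importances_mean.round(3))))

print("MCC:", round(matthews_corrcoef(y_te, tree.predict(X_te)), 3))
```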


Author(s):  
Randell M. Johnson ◽  
Joe H. Chow ◽  
Michael V. Dillon

Underspeed needle control of two Pelton turbine hydro units operating in a small power system has caused many incidents of partial system blackouts. Among the causes are conservative governor designs with regard to small signal stability limits, non-minimum phase power characteristics, and long tunnel-penstock traveling wave effects. A needle control model is developed from “water to wires” and validated for hydro-turbine dynamics using turbine test data. Model parameters are tuned with trajectory sensitivity. Proposed governor designs decompose the needle regulation gains into the power and frequency governor loops with a multi-time-scale approach. Elements of speed loop gain scheduling and a new inner-loop pressure stabilization circuit are devised to improve the frequency regulation and to damp the traveling wave effects. Simulation studies show the improvements of the proposed control designs.
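A highly simplified sketch of a gain-scheduled speed loop, assuming a first-order needle/turbine lag and omitting the penstock traveling-wave dynamics and the inner-loop pressure stabilization circuit; it is not the proposed "water to wires" model, and all values are hypothetical.

```python
# Highly simplified sketch (not the proposed design): a per-unit speed loop
# in which a PI needle governor regulates frequency against a load step,
# with a toy gain schedule on the operating load.
import numpy as np

H, D = 3.0, 1.0            # inertia constant (s) and load damping (p.u.)
T_NEEDLE = 0.5             # needle/servo time constant (s)
DT, T_END = 0.01, 30.0

def gain_schedule(load):
    """Toy speed-loop gain schedule: lower gains at light load (hypothetical)."""
    return 2.0 + 3.0 * load, 0.5 + 0.5 * load

load = 0.6                               # operating load level (p.u.)
kp, ki = gain_schedule(load)

omega, p_mech, integ = 0.0, load, 0.0    # speed deviation, mech. power, integrator
for _ in np.arange(0.0, T_END, DT):
    p_elec = load + 0.1 + D * omega      # 0.1 p.u. load step plus damping
    err = -omega                         # regulate speed deviation to zero
    integ += ki * err * DT
    needle_cmd = load + kp * err + integ
    p_mech += DT / T_NEEDLE * (needle_cmd - p_mech)   # first-order needle/turbine lag
    omega += DT / (2.0 * H) * (p_mech - p_elec)       # swing equation

print(f"speed deviation after 30 s: {omega:.4f} p.u.")
```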


2021 ◽  
Author(s):  
Sebastian Johannes Fritsch ◽  
Konstantin Sharafutdinov ◽  
Moein Einollahzadeh Samadi ◽  
Gernot Marx ◽  
Andreas Schuppert ◽  
...  

BACKGROUND During the course of the COVID-19 pandemic, a variety of machine learning models were developed to predict different aspects of the disease, such as its long-term course, organ dysfunction or ICU mortality. The number of training datasets used has increased significantly over time. However, these data now come from different waves of the pandemic, which did not always involve the same therapeutic approaches and which differed in outcomes between waves. The impact of these changes on model development has not yet been studied. OBJECTIVE The aim of the investigation was to examine the predictive performance of several models trained with data from one wave when predicting the second wave's data, and the impact of pooling these datasets. Finally, a method for comparing different datasets for heterogeneity is introduced. METHODS We used two datasets from waves one and two to develop several predictive models for patient mortality. Four classification algorithms were used: logistic regression (LR), support vector machine (SVM), random forest classifier (RF) and AdaBoost classifier (ADA). We also performed a mutual prediction on the data of the wave that was not used for training. Then, we compared the performance of the models when a pooled dataset from the two waves was used. The populations from the different waves were checked for heterogeneity using a convex hull analysis. RESULTS 63 patients from wave one (03-06/2020) and 54 from wave two (08/2020-01/2021) were evaluated. For both waves separately, we found models reaching sufficient accuracies, up to an AUROC of 0.79 (95% CI 0.76-0.81) for SVM on the first wave and up to 0.88 (95% CI 0.86-0.89) for RF on the second wave. After pooling the data, the AUROC decreased relevantly. In the mutual prediction, models trained on the second wave's data showed, when applied to the first wave's data, a good prediction for non-survivors but an insufficient classification for survivors. The opposite situation (training: first wave, test: second wave) revealed the inverse behaviour, with models correctly classifying survivors and incorrectly predicting non-survivors. The convex hull analysis for the first- and second-wave populations showed a more inhomogeneous distribution of the underlying data when compared to randomly selected sets of patients of the same size. CONCLUSIONS Our work demonstrates that a larger dataset is not a universal solution to all machine learning problems in clinical settings. Rather, it shows that inhomogeneous data used to develop models can lead to serious problems. With the convex hull analysis, we offer a solution to this problem. The outcome of such an analysis can raise concerns if the pooling of different datasets would cause inhomogeneous patterns preventing better predictive performance.
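A sketch of the mutual (cross-wave) evaluation on synthetic data with a deliberate distribution shift, not the clinical dataset; the random forest and AUROC scoring follow the abstract, everything else is assumed.

```python
# Sketch of the cross-wave evaluation idea on synthetic data, not the
# study's clinical dataset: train on one "wave", test on the other, with a
# deliberate distribution shift between the two cohorts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

def make_wave(n, shift):
    """Hypothetical patients: two features whose relation to mortality shifts."""
    X = rng.standard_normal((n, 2)) + shift
    logits = 1.5 * X[:, 0] - shift * X[:, 1]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y

X1, y1 = make_wave(63, shift=0.0)   # "first wave"
X2, y2 = make_wave(54, shift=1.0)   # "second wave", different case mix

rf_w1 = RandomForestClassifier(random_state=0).fit(X1, y1)
rf_w2 = RandomForestClassifier(random_state=0).fit(X2, y2)

print("wave1 -> wave2 AUROC:",
      round(roc_auc_score(y2, rf_w1.predict_proba(X2)[:, 1]), 2))
print("wave2 -> wave1 AUROC:",
      round(roc_auc_score(y1, rf_w2.predict_proba(X1)[:, 1]), 2))
```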


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
A Youssef

Abstract Study question Which models that predict pregnancy outcome in couples with unexplained RPL exist, and what is the performance of the most used model? Summary answer We identified seven prediction models; none followed the recommended prediction model development steps. Moreover, the most used model showed poor predictive performance. What is known already RPL remains unexplained in 50–75% of couples. For these couples, there is no effective treatment option and clinical management rests on supportive care. An essential part of supportive care consists of counselling on the prognosis of subsequent pregnancies. Multiple prediction models exist; however, the quality and validity of these models vary. In addition, the prediction model developed by Brigham et al. is the most widely used model, but it has never been externally validated. Study design, size, duration We performed a systematic review to identify prediction models for pregnancy outcome after unexplained RPL. In addition, we performed an external validation of the Brigham model in a retrospective cohort consisting of 668 couples with unexplained RPL that visited our RPL clinic between 2004 and 2019. Participants/materials, setting, methods A systematic search was performed in December 2020 in Pubmed, Embase, Web of Science and the Cochrane library to identify relevant studies. Eligible studies were selected and assessed according to the TRIPOD guidelines, covering model performance and validation. The performance of the Brigham model in predicting live birth was evaluated through calibration and discrimination, in which the observed pregnancy rates were compared to the predicted pregnancy rates. Main results and the role of chance Seven models were compared and assessed according to the TRIPOD statement. This resulted in two studies of low, three of moderate and two of above-average reporting quality. These studies did not follow the recommended steps for model development and did not calculate a sample size. Furthermore, the predictive performance of none of these models was internally or externally validated. We performed an external validation of the Brigham model. Calibration showed that the model overestimated outcomes and made too-extreme predictions, with a negative calibration intercept of –0.52 (95% CI –0.68 to –0.36) and a calibration slope of 0.39 (95% CI 0.07 to 0.71). The discriminative ability of the model was very low, with a concordance statistic of 0.55 (95% CI 0.50 to 0.59). Limitations, reasons for caution None of the studies specifically named their models prediction models; therefore, models may have been missed in the selection process. The external validation cohort used a retrospective design, in which only the first pregnancy after intake was registered. Follow-up time was not limited, which is important in counselling unexplained RPL couples. Wider implications of the findings Currently, there are no suitable models that predict pregnancy outcome after RPL. Moreover, we need a model that includes several variables so that the prognosis is individualized, incorporating factors from both the female and the male partner to enable a couple-specific prognosis. Trial registration number Not applicable
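A generic external-validation sketch, not the Brigham model or the study cohort: given predicted probabilities and observed outcomes, the calibration slope and intercept are estimated by logistic regression on the logit of the predictions, and discrimination by the c-statistic (ROC AUC). The data here are hypothetical.

```python
# Generic external-validation sketch (not the Brigham model itself):
# calibration intercept and slope plus c-statistic for a set of predicted
# live-birth probabilities against observed outcomes. Data are hypothetical.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical cohort: model predictions and observed outcomes.
p_pred = np.clip(rng.beta(4, 2, size=668), 0.01, 0.99)
observed = (rng.random(668) < 0.65).astype(int)   # weakly related to p_pred

logit_p = np.log(p_pred / (1.0 - p_pred))

# Calibration slope: logistic regression of the outcome on logit(prediction).
slope_fit = sm.Logit(observed, sm.add_constant(logit_p)).fit(disp=False)
# Calibration intercept: intercept-only logistic model with logit(p) as offset.
intercept_fit = sm.GLM(observed, np.ones_like(logit_p),
                       family=sm.families.Binomial(), offset=logit_p).fit()

print("calibration slope:    ", round(slope_fit.params[1], 2))
print("calibration intercept:", round(intercept_fit.params[0], 2))
print("c-statistic:          ", round(roc_auc_score(observed, p_pred), 2))
```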


Energies ◽  
2019 ◽  
Vol 12 (5) ◽  
pp. 851 ◽  
Author(s):  
Qingyi Kong ◽  
Mingxing Du ◽  
Ziwei Ouyang ◽  
Kexin Wei ◽  
William Hurley

The on-state voltage is an important electrical parameter of insulated gate bipolar transistor (IGBT) modules. Due to limits in instrumentation and methods, it is difficult to ensure accurate measurement of the on-state voltage in practical working conditions. Based on the physical structure and conduction mechanism of the IGBT module, this paper models the on-state voltage and gives a detailed method for extracting it. Experiments not only demonstrate the feasibility of the on-state voltage separation method but also suggest a method for measuring and extracting the model parameters. Furthermore, on-state voltage measurements and simulation results confirm the accuracy of this method.
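Illustrative only: the paper develops a physics-based on-state voltage model, whereas the sketch below uses just the common first-order approximation V_CE(on) ≈ V_T0 + r_CE·I_C, extracting the two parameters by a linear fit to hypothetical V-I measurement points.

```python
# Illustrative only: the common first-order on-state voltage approximation
#   V_CE(on) ~= V_T0 + r_CE * I_C,
# with threshold voltage V_T0 and slope resistance r_CE extracted by a
# linear least-squares fit to hypothetical V-I measurement points.
import numpy as np

i_c = np.array([50.0, 100.0, 200.0, 300.0, 400.0, 500.0])   # collector current (A)
v_ce = np.array([0.95, 1.05, 1.25, 1.45, 1.66, 1.85])       # on-state voltage (V)

r_ce, v_t0 = np.polyfit(i_c, v_ce, deg=1)
print(f"V_T0 = {v_t0:.3f} V, r_CE = {r_ce * 1e3:.3f} mOhm")

# Predicted on-state voltage at a given load current.
i_load = 350.0
print(f"V_CE(on) at {i_load:.0f} A: {v_t0 + r_ce * i_load:.3f} V")
```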


Energies ◽  
2020 ◽  
Vol 13 (11) ◽  
pp. 2899
Author(s):  
Abhinandana Boodi ◽  
Karim Beddiar ◽  
Yassine Amirat ◽  
Mohamed Benbouzid

This paper proposes an approach to developing building dynamic thermal models, which are of paramount importance for controller applications. In this context, the controller requires low-order, computationally efficient, and accurate models to achieve high performance. An efficient building model is developed by choosing an appropriate low-order model structure and identifying its parameter values. Simplified low-order systems can be built as thermal network models composed of thermal resistances and capacitances. In order to determine the low-order model parameter values, a specific approach based on stochastic particle swarm optimization is proposed. This method provides a good approximation of the parameters when compared to the reference model, while allowing the low-order model to be 40% to 50% more computationally efficient than the reference one. Additionally, extensive simulations are carried out to evaluate the proposed simplified model with solar radiation and the identified model parameters. The simplified model is afterwards validated with real data from a case study building, where the results show a high degree of accuracy compared to the actual data.
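A sketch of the identification idea under strong simplifications: a first-order (1R1C) zone model is fitted to reference temperature data with a minimal hand-rolled particle swarm optimizer. The model structure, bounds and parameter values are hypothetical, not those of the paper.

```python
# Sketch of RC-parameter identification with a minimal particle swarm
# optimizer, not the paper's model or PSO settings. All values hypothetical.
import numpy as np

DT = 300.0                                                     # time step (s)
t_out = 5.0 + 5.0 * np.sin(np.linspace(0, 4 * np.pi, 576))     # outdoor temp (degC)
q_gain = 500.0 * np.ones_like(t_out)                           # internal/solar gains (W)

def simulate(r_th, c_th, t0=20.0):
    """Explicit-Euler simulation of C * dT/dt = (T_out - T)/R + Q."""
    temps = np.empty_like(t_out)
    temps[0] = t0
    for k in range(1, len(t_out)):
        dT = ((t_out[k - 1] - temps[k - 1]) / r_th + q_gain[k - 1]) / c_th
        temps[k] = temps[k - 1] + DT * dT
    return temps

t_ref = simulate(0.005, 2.0e7)        # "reference model" data (hypothetical truth)

def cost(params):
    return np.mean((simulate(params[0], params[1]) - t_ref) ** 2)

# Minimal PSO over (R, C).
rng = np.random.default_rng(0)
lo, hi = np.array([0.001, 1.0e6]), np.array([0.02, 1.0e8])
pos = rng.uniform(lo, hi, size=(20, 2))
vel = np.zeros_like(pos)
p_best = pos.copy()
p_cost = np.array([cost(p) for p in pos])
g_best = p_best[p_cost.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((2, 20, 1))
    vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
    pos = np.clip(pos + vel, lo, hi)
    c_now = np.array([cost(p) for p in pos])
    improved = c_now < p_cost
    p_best[improved], p_cost[improved] = pos[improved], c_now[improved]
    g_best = p_best[p_cost.argmin()].copy()

print(f"identified R = {g_best[0]:.4f} K/W, C = {g_best[1]:.2e} J/K")
```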

