Concept of a System Architecture for Simulation Data Management in the Digital Twin

Author(s):  
Benjamin Röhm ◽  
Reiner Anderl

Abstract The Department of Computer Integrated Design (DiK) at TU Darmstadt addresses the Digital Twin topic from the perspective of virtual product development. A concept for the architecture of a Digital Twin was developed that allows the administration of simulation input and output data. The concept was built with consideration of classical CAE process chains in product development. Its central part is the management of simulation input and output data in a simulation data management system in the Digital Twin (SDM-DT). The SDM-DT establishes the connection between the Digital Shadow and the Digital Master for simulation data and simulation models. The concept was prototypically implemented. For this purpose, real product condition data were collected via a sensor network and transmitted to the Digital Shadow. Based on the product development results, the condition data were prepared and sent as a simulation input deck to the SDM-DT in the Digital Twin. Before the simulation data and models are simulated, the simulation input data are compared with historical input data from product development. The developed and implemented concept goes beyond existing approaches and addresses central simulation data management in Digital Twins.
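The comparison step described above — checking a live simulation input deck against historical input data from product development before simulation is started — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `InputDeck` structure, parameter names, and the 3-sigma plausibility rule are all assumptions for the sketch.

```python
import statistics
from dataclasses import dataclass


@dataclass
class InputDeck:
    """Hypothetical simulation input deck: parameter name -> value."""
    parameters: dict


def plausibility_check(live: InputDeck, history: list, tol: float = 3.0) -> dict:
    """Flag live parameters that deviate from the historical mean by more
    than `tol` standard deviations -- a simple stand-in for the comparison
    the concept performs before simulation data are simulated."""
    flags = {}
    for name, value in live.parameters.items():
        past = [d.parameters[name] for d in history if name in d.parameters]
        if len(past) < 2:
            flags[name] = "insufficient history"
            continue
        mean, stdev = statistics.mean(past), statistics.stdev(past)
        if stdev == 0:
            flags[name] = "ok" if value == mean else "deviates"
        else:
            flags[name] = "deviates" if abs(value - mean) > tol * stdev else "ok"
    return flags
```

A deck whose sensor-derived temperature falls far outside the historical range would then be flagged before it ever reaches the solver.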

2003 ◽  
Vol 10 (19) ◽  
Author(s):  
Christian Kirkegaard ◽  
Anders Møller ◽  
Michael I. Schwartzbach

XML documents generated dynamically by programs are typically represented as text strings or DOM trees. This is a low-level approach for several reasons: (1) traversing and modifying such structures can be tedious and error-prone; (2) although schema languages such as DTD allow classes of XML documents to be defined, there are generally no automatic mechanisms for statically checking that a program transforms one class into another as intended. We introduce XACT, a high-level approach for Java using XML templates as a first-class data type with operations for manipulating XML values based on XPath. In addition to an efficient runtime representation, the data type permits static type checking using DTD schemas as types. Given schemas for the input and output of a program, our algorithm statically verifies that valid input data are always transformed into valid output data and that no errors occur during processing.
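The kind of low-level tree manipulation the abstract argues against is easy to demonstrate. XACT itself is a Java library; the sketch below only illustrates, in Python's standard DOM-style API, how a schema violation slips through without static checking — the gap XACT's type system is designed to close.

```python
import xml.etree.ElementTree as ET

# Suppose the intended schema (e.g. a DTD) requires every <item> to carry
# an 'id' attribute. Nothing in the tree API enforces that.
doc = ET.fromstring('<items><item id="1">apple</item></items>')

new = ET.SubElement(doc, "item")  # the required 'id' attribute is forgotten
new.text = "pear"

out = ET.tostring(doc, encoding="unicode")
# `out` now violates the intended schema, and the mistake surfaces only at
# runtime (if at all) rather than at compile time.
```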


1980 ◽  
Vol 2 (6) ◽  
pp. 277-283 ◽  
Author(s):  
V. Stibic

A list of general properties of a user-friendly online system is presented. Unfortunately, the requirements of experienced users on the one hand, and of beginners or incidental users on the other, are contradictory. Synonymous commands, less strictly formalized input data transformed to standardized formats by intelligent input programs, explicit as well as implicit input data, and free choice between default and user-defined parameters, procedures, or macrocommands can make a system friendlier even for a heterogeneous user population. Similarly, flexibility of output (e.g. eloquent natural-language messages for non-experienced users and concise, coded, abbreviated output for experts) improves acceptance of the system by all users. Examples of flexible, free forms of commands and of input and output data are given.
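Two of the friendliness features listed — synonymous commands and default parameters — can be sketched in a few lines. The command names, synonyms, and defaults below are illustrative, not taken from the article.

```python
# Hypothetical synonym and default tables for a small online system.
SYNONYMS = {"find": "search", "look": "search", "s": "search",
            "quit": "exit", "q": "exit"}
DEFAULTS = {"limit": 10, "format": "brief"}


def parse_command(line: str) -> dict:
    """Map any synonym to its canonical command and fill in defaults for
    parameters the user did not supply (key=value tokens)."""
    tokens = line.split()
    command = SYNONYMS.get(tokens[0], tokens[0])
    params = dict(DEFAULTS)
    for tok in tokens[1:]:
        if "=" in tok:
            key, value = tok.split("=", 1)
            params[key] = value
    return {"command": command, "params": params}
```

An expert can type the terse `s limit=5` while a beginner types `find`, and both reach the same canonical command with sensible defaults filled in.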


2019 ◽  
Vol 67 (5) ◽  
pp. 1362-1382 ◽  
Author(s):  
Aleksandrina Goeva ◽  
Henry Lam ◽  
Huajie Qian ◽  
Bo Zhang

Studies on simulation input uncertainty are often built on the availability of input data. In this paper, we investigate an inverse problem where, given only the availability of output data, we nonparametrically calibrate the input models and other related performance measures of interest. We propose an optimization-based framework to compute statistically valid bounds on input quantities. The framework utilizes constraints that connect the statistical information of the real-world outputs with the input–output relation via a simulable map. We analyze the statistical guarantees of this approach from the view of data-driven distributionally robust optimization, and show how they relate to the function complexity of the constraints arising in our framework. We investigate an iterative procedure based on a stochastic quadratic penalty method to approximately solve the resulting optimization. We conduct numerical experiments to demonstrate the performance of our approach in bounding the input models and related quantities.
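The core idea — bounding an input quantity using only statistical information about the outputs of a simulable map — can be illustrated with a toy example. This is not the paper's optimization framework (which is nonparametric and uses a stochastic quadratic penalty method); it is a crude grid search over one assumed input parameter, kept only to show how an output-side confidence interval translates into bounds on the input side.

```python
import random


def simulate(p: float, n: int = 2000, seed: int = 0) -> float:
    """Toy simulable map: mean of n exponential draws with mean p.
    Stands in for a black-box simulation of the real system."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0 / p) for _ in range(n)) / n


# Suppose all we know about the real system is that its output mean lies
# in [1.9, 2.1] (e.g. a confidence interval built from output data alone).
lo, hi = 1.9, 2.1

# Keep every candidate input parameter whose simulated output satisfies
# the output-side constraint; the extremes bound the input quantity.
grid = [x / 100 for x in range(100, 301, 5)]          # p in [1.0, 3.0]
feasible = [p for p in grid if lo <= simulate(p) <= hi]
bounds = (min(feasible), max(feasible))
```

The resulting interval `bounds` plays the role of the statistically valid input bounds, here obtained by brute force instead of the paper's penalty-based optimization.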


1972 ◽  
Vol 94 (2) ◽  
pp. 401-405 ◽  
Author(s):  
R. L. Davis ◽  
H. Dean Keith

The finite-element technique has been applied in the analysis of a variety of pressure vessel problems. The example problems described in this paper suggest that the finite-element method is perhaps the most suitable means currently available for obtaining quick and accurate solutions for real-life pressure vessel problems. Finite-element programs can be used by the practicing engineer. Companion programs are available that can be used to check the input data and graphically display both the input and output data.


Author(s):  
Thomas White

In North America, the process for determining appropriate railroad infrastructure for new service or an increased volume of existing service usually includes the use of simulation software. Decisions are generally based on statistical analysis of the simulation output. The simulation and analysis that are commonly conducted, however, may not provide an accurate assessment of the adequacy of the infrastructure. Furthermore, the output data comparisons commonly used to describe the effect of infrastructure on traffic may not be easily associated with traffic conditions. These shortcomings can be mitigated with appropriate care in developing the simulation input data and changing the output analysis methodology.


2021 ◽  
Author(s):  
Miroslava Ivko Jordovic Pavlovic ◽  
Katarina Djordjevic ◽  
Zarko Cojbasic ◽  
Slobodanka Galovic ◽  
Marica Popovic ◽  
...  

Abstract In this paper, the influence of input and output data scaling and normalization on overall neural network performance is investigated, aimed at inverse problem-solving in photoacoustics of semiconductors. Logarithmic scaling of the photoacoustic signal amplitudes as input data and numerical scaling of the sample thermal parameters as output data are presented as useful tools for reaching maximal network precision. Max and min-max normalization are applied to the input data to bring their numerical values in the dataset to common scales without distorting differences. It was demonstrated in theory that the largest network prediction error across all targeted parameters is obtained by a network with non-scaled output data. It was also found that the best network prediction was achieved with min-max normalization of the input data and network-predicted output data scaled within the range of [110]. Network training and prediction performance analyzed with experimental input data show that the benefits of input and output scaling and normalization are not guaranteed but depend strongly on the specific problem to be solved.
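The two preprocessing tools the abstract names — min-max normalization of inputs to a common scale and logarithmic scaling of signal amplitudes — are standard transformations and can be sketched directly (the paper's actual pipeline and parameter ranges are not reproduced here).

```python
import math


def min_max_normalize(values):
    """Rescale values linearly to [0, 1], preserving relative differences,
    as applied to the network input data."""
    vmin, vmax = min(values), max(values)
    return [(v - vmin) / (vmax - vmin) for v in values]


def log_scale(amplitudes):
    """Logarithmic scaling of signal amplitudes, compressing inputs that
    span several orders of magnitude into a comparable numeric range."""
    return [math.log10(a) for a in amplitudes]
```

Photoacoustic amplitudes covering decades of magnitude become evenly spaced after `log_scale`, which is what makes them tractable as network inputs.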


2019 ◽  
Vol 49 (2) ◽  
pp. 31-53
Author(s):  
Mateusz Papis ◽  
Marek Matyjewski

Abstract The paper presents the possibility of using fuzzy logic in aviation through an example of estimating the risk to a glider pilot. The results of expert questionnaires were used to define the input data concerning categories of loss and the probability of occurrence of undesirable events. An inference model and fuzzy sets for the input and output data were defined. The risk analysis was performed in accordance with the standard fuzzy regulator scheme. Moreover, the results obtained were verified with the Risk Matrix and Risk Score methods.
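A standard fuzzy regulator of the kind the abstract describes can be sketched minimally: fuzzify the two inputs (probability and loss), fire a small rule base, and defuzzify to a crisp risk score. The membership functions, rules, and output values below are illustrative placeholders, not the paper's expert-derived sets.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def glider_risk(probability, loss):
    """Minimal Mamdani-style fuzzy inference. Both inputs are expected
    in [0, 1]; returns a crisp risk score in [0, 1] via weighted-average
    defuzzification."""
    # Fuzzify each input into 'low' and 'high' memberships.
    p_low, p_high = tri(probability, -1.0, 0.0, 1.0), tri(probability, 0.0, 1.0, 2.0)
    l_low, l_high = tri(loss, -1.0, 0.0, 1.0), tri(loss, 0.0, 1.0, 2.0)
    # Rule base: (firing strength, crisp output of the rule's consequent).
    rules = [
        (min(p_low, l_low), 0.1),    # low probability, low loss  -> low risk
        (min(p_low, l_high), 0.5),   # mixed cases                -> medium risk
        (min(p_high, l_low), 0.5),
        (min(p_high, l_high), 0.9),  # high probability, high loss -> high risk
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total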


2021 ◽  
Vol 4 (2) ◽  
pp. 36
Author(s):  
Maulshree Singh ◽  
Evert Fuenmayor ◽  
Eoin Hinchy ◽  
Yuansong Qiao ◽  
Niall Murray ◽  
...  

Digital Twin (DT) refers to the virtual copy or model of any physical entity (physical twin), the two being interconnected via exchange of data in real time. Conceptually, a DT mimics the state of its physical twin in real time and vice versa. Applications of DT include real-time monitoring, designing/planning, optimization, maintenance, remote access, etc. Its implementation is expected to grow exponentially in the coming decades. The advent of Industry 4.0 has brought complex industrial systems that are more autonomous, smart, and highly interconnected. These systems generate considerable amounts of data useful for several applications such as improving performance, predictive maintenance, and training. A sudden influx of publications related to 'Digital Twin' has led to confusion between the different terminologies related to the digitalization of industries. Another problem arising from the growing popularity of DT is a lack of consensus on its description, as well as the many different types of DT, which adds to the confusion. This paper consolidates the different types and definitions of DT throughout the literature for easy identification of DT against complementary terms such as 'product avatar', 'digital thread', 'digital model', and 'digital shadow'. The paper looks at the concept of DT from its inception to its predicted future to realize the value it can bring to certain sectors. Understanding the characteristics and types of DT while weighing its pros and cons is essential for any researcher, business, or sector before investing in the technology.

