Digital twin-based assembly data management and process traceability for complex products

Author(s):  
Cunbo Zhuang ◽  
Jingcheng Gong ◽  
Jianhua Liu
Author(s):  
Benjamin Röhm ◽  
Reiner Anderl

Abstract The Department of Computer Integrated Design (DiK) at TU Darmstadt approaches the Digital Twin topic from the perspective of virtual product development. A concept for the architecture of a Digital Twin was developed that allows the administration of simulation input and output data. The concept was built with classical CAE process chains in product development in mind. Its central part is the management of simulation input and output data in a simulation data management system within the Digital Twin (SDM-DT). The SDM-DT establishes the connection between Digital Shadow and Digital Master for simulation data and simulation models. The concept was prototypically implemented: real product condition data were collected via a sensor network and transmitted to the Digital Shadow. Based on the product development results, the condition data were prepared and sent as a simulation input deck to the SDM-DT in the Digital Twin. Before the simulation data and models are simulated, the simulation input data are compared with historical input data from product development. The developed and implemented concept goes beyond existing approaches by providing central simulation data management in Digital Twins.
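The comparison step described above, checking a sensor-derived simulation input deck against historical input data before releasing it for simulation, could look roughly like the following sketch. The names (InputDeck, within_tolerance, the 5% relative tolerance) are illustrative assumptions, not part of the published SDM-DT concept.

```python
from dataclasses import dataclass

@dataclass
class InputDeck:
    """Simulation input deck assembled from sensor-derived condition data."""
    deck_id: str
    parameters: dict[str, float]  # e.g. {"temperature": 72.4, "preload": 1.8}

def within_tolerance(value: float, reference: float, rel_tol: float = 0.05) -> bool:
    """Accept a parameter if it deviates less than rel_tol from a historical reference."""
    if reference == 0.0:
        return abs(value) <= rel_tol
    return abs(value - reference) / abs(reference) <= rel_tol

def check_against_history(deck: InputDeck, history: list[InputDeck]) -> list[str]:
    """Return parameters that fall outside the range seen in historical decks
    from product development; an empty list means the deck can be simulated."""
    issues = []
    for name, value in deck.parameters.items():
        refs = [h.parameters[name] for h in history if name in h.parameters]
        if not refs:
            issues.append(f"{name}: no historical reference available")
        elif not any(within_tolerance(value, r) for r in refs):
            issues.append(f"{name}={value} outside historical range {min(refs)}..{max(refs)}")
    return issues
```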


Author(s):  
Sumit Singh ◽  
Essam Shehab ◽  
Nigel Higgins ◽  
Kevin Fowler ◽  
Dylan Reynolds ◽  
...  

A Digital Twin (DT) is a digital imitation of a real-world product, process or system, and an ideal basis for data-driven optimisation in different phases of the product lifecycle. With the rapid growth in DT research, data management for digital twins has become a challenging field for both industry and academia. The DT data management challenges analysed in this article are data variety, big data and data mining, and DT dynamics. The current research proposes a novel DT ontology model and an accompanying methodology to address these challenges. The DT ontology model captures and models the conceptual knowledge of the DT domain. Using the proposed methodology, this domain knowledge is transformed into a minimum data model structure to map, query and manage databases for DT applications. The proposed research is further validated using a case study based on a Condition-Based Monitoring (CBM) DT application. Query formulation around the minimum data model structure further shows the effectiveness of the approach by returning accurate results while maintaining semantics and conceptual relationships along the DT lifecycle. The method not only provides flexibility to retain knowledge along the DT lifecycle but also helps users and developers design, maintain and query databases effectively for DT applications and systems of different scales and complexities.
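To picture what mapping conceptual DT knowledge onto a minimum data model and querying it can mean in practice, the sketch below builds three illustrative relations (asset, sensor, observation) for a CBM-style twin and formulates a query over them. The schema, table names and sample values are assumptions for illustration only, not the data model proposed in the article.

```python
import sqlite3

# Illustrative minimum data model for a condition-based monitoring (CBM) twin:
# assets own sensors, sensors produce time-stamped observations.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE asset       (asset_id TEXT PRIMARY KEY, asset_type TEXT);
    CREATE TABLE sensor      (sensor_id TEXT PRIMARY KEY, asset_id TEXT REFERENCES asset,
                              quantity TEXT, unit TEXT);
    CREATE TABLE observation (sensor_id TEXT REFERENCES sensor, ts TEXT, value REAL);
""")
conn.executemany("INSERT INTO asset VALUES (?, ?)", [("pump-01", "pump")])
conn.executemany("INSERT INTO sensor VALUES (?, ?, ?, ?)",
                 [("vib-01", "pump-01", "vibration", "mm/s")])
conn.executemany("INSERT INTO observation VALUES (?, ?, ?)",
                 [("vib-01", "2021-05-01T10:00", 2.1),
                  ("vib-01", "2021-05-01T10:05", 7.8)])

# Query formulated over the minimum data model: latest observation per sensor,
# joined back to the owning asset so the conceptual relationships are preserved.
rows = conn.execute("""
    SELECT a.asset_id, s.quantity, o.ts, o.value, s.unit
    FROM observation o
    JOIN sensor s ON s.sensor_id = o.sensor_id
    JOIN asset  a ON a.asset_id  = s.asset_id
    WHERE o.ts = (SELECT MAX(ts) FROM observation WHERE sensor_id = o.sensor_id)
""").fetchall()
print(rows)  # [('pump-01', 'vibration', '2021-05-01T10:05', 7.8, 'mm/s')]
```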


2020 ◽  
Vol 54 ◽  
pp. 361-371 ◽  
Author(s):  
Sihan Huang ◽  
Guoxin Wang ◽  
Yan Yan ◽  
Xiongbing Fang
Keyword(s):  

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4410 ◽  
Author(s):  
Seunghwan Jeong ◽  
Gwangpyo Yoo ◽  
Minjong Yoo ◽  
Ikjun Yeom ◽  
Honguk Woo

Hyperconnectivity via modern Internet of Things (IoT) technologies has recently driven us to envision the “digital twin”, in which physical attributes are all embedded and their latest updates are synchronized to digital spaces in a timely fashion. From the point of view of cyber-physical system (CPS) architectures, the goals of the digital twin include providing a common programming abstraction at the same level as databases, thereby facilitating seamless integration of real-world physical objects and digital assets at several different system layers. However, the inherent limitations of sampling and observing physical attributes often pose data-uncertainty issues in practice. In this paper, we propose a learning-based data management scheme whose implementation is layered between the sensors attached to physical attributes and domain-specific applications, thereby mitigating the data uncertainty between them. To do so, we present a sensor data management framework, D2WIN, which adopts reinforcement learning (RL) techniques to manage data quality for CPS applications and autonomous systems. To deal with the scale issue incurred by many physical attributes and sensor streams when adopting RL, we propose an action embedding strategy that exploits their distance-based similarity in physical space. We introduce two embedding methods, a user-defined function and a generative model, for different conditions. Through experiments, we demonstrate that the D2WIN framework with the action embedding outperforms several known heuristics in terms of achievable data quality under given resource restrictions. We also test the framework with an autonomous driving simulator, clearly showing its benefit: for example, with only 30% of updates selectively applied by the learned policy, the driving agent maintains about 96.2% of the performance achieved under the ideal condition with full updates.
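The distance-based action embedding can be pictured as grouping sensors that are close together in physical space, so the RL policy chooses among a small number of embedded actions rather than one action per sensor stream. The sketch below is a hypothetical user-defined embedding of that kind; the coordinates, region centroids and mapping function are assumptions and not the D2WIN implementation.

```python
import numpy as np

# Hypothetical physical coordinates (x, y) of sensors attached to physical attributes.
sensor_coords = np.array([
    [0.1, 0.2], [0.2, 0.1],   # sensors near region A
    [0.9, 0.8], [0.8, 0.9],   # sensors near region B
    [0.5, 0.5],               # sensor near region C
])

# User-defined embedding: each embedded action means "refresh the sensors of this
# region", shrinking the action space the RL policy has to explore.
centroids = np.array([[0.15, 0.15], [0.85, 0.85], [0.5, 0.5]])

def embed_actions(coords: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Map every sensor to the index of its nearest centroid (distance-based similarity)."""
    dists = np.linalg.norm(coords[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

assignment = embed_actions(sensor_coords, centroids)
print(assignment)          # e.g. [0 0 1 1 2]

# The policy selects one of len(centroids) embedded actions; the chosen action is
# expanded back to the concrete set of sensor streams to update.
chosen = 1
sensors_to_update = np.where(assignment == chosen)[0]
print(sensors_to_update)   # indices of the sensors in region B
```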


2021 ◽  
Author(s):  
Xuemin Sun ◽  
Rong Zhang ◽  
Shimin Liu ◽  
Qibing Lv ◽  
Jinsong Bao ◽  
...  

Abstract In the assembly-commissioning of complex products, manual operation is the main cause of low efficiency. Human-robot cooperation (HRC) combines the complementary strengths of humans and robots and enables them to complete tasks in a shared space, so introducing HRC into complex-product assembly-commissioning is an effective way to address this problem. However, current HRC technology offers insufficient perception and cognition of tasks. Therefore, this paper presents a digital twin-driven HRC assembly-commissioning framework in which a virtual-real mapping environment for HRC is constructed. To improve the robot unit's cognition of tasks, this paper proposes an intention recognition method that integrates part features into human joint sequences. To improve the robot unit's adaptability to tasks, an assembly-commissioning task knowledge graph is constructed to quickly extract the robot unit's execution sequence. At the same time, the deep deterministic policy gradient (DDPG) algorithm is used to adaptively adjust the robot unit's actions during assembly-commissioning. Finally, the effectiveness of the proposed method is verified using a particular type of automobile generator as a case study.
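As a reminder of how the DDPG algorithm named above adjusts a continuous-action policy, a compressed PyTorch sketch of one actor/critic update step follows. Network sizes, state/action dimensions and hyperparameters are placeholders rather than values from the paper, and replay-buffer handling and target-network updates are reduced to the bare minimum.

```python
import copy
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, GAMMA = 12, 4, 0.99   # placeholder dimensions/hyperparameters

actor  = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
target_actor, target_critic = copy.deepcopy(actor), copy.deepcopy(critic)  # slowly updated copies
actor_opt  = torch.optim.Adam(actor.parameters(),  lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(state, action, reward, next_state):
    """One DDPG step: fit the critic to a bootstrapped target, then move the
    actor in the direction that increases the critic's value estimate."""
    with torch.no_grad():
        next_action = target_actor(next_state)
        target_q = reward + GAMMA * target_critic(torch.cat([next_state, next_action], -1))
    q = critic(torch.cat([state, action], -1))                         # critic: TD regression
    critic_loss = nn.functional.mse_loss(q, target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    actor_loss = -critic(torch.cat([state, actor(state)], -1)).mean()  # actor: deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Example call with a random mini-batch of transitions.
B = 32
ddpg_update(torch.randn(B, STATE_DIM), torch.randn(B, ACTION_DIM),
            torch.randn(B, 1), torch.randn(B, STATE_DIM))
```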

