Data management for developing digital twin ontology model

Author(s):  
Sumit Singh ◽  
Essam Shehab ◽  
Nigel Higgins ◽  
Kevin Fowler ◽  
Dylan Reynolds ◽  
...  

A Digital Twin (DT) is a digital imitation of a real-world product, process or system, and an ideal vehicle for data-driven optimisation in different phases of the product lifecycle. With the rapid growth in DT research, data management for digital twins has become a challenging field for both industry and academia. The DT data management challenges analysed in this article are data variety, big data and data mining, and DT dynamics. The current research proposes a novel DT ontology model and a methodology to address these data management challenges. The DT ontology model captures and models the conceptual knowledge of the DT domain. Using the proposed methodology, such domain knowledge is transformed into a minimum data model structure used to map, query and manage databases for DT applications. The approach is validated with a case study based on a Condition-Based Monitoring (CBM) DT application. Query formulation around the minimum data model structure further shows the effectiveness of the approach by returning accurate results while maintaining semantics and conceptual relationships along the DT lifecycle. The method not only provides the flexibility to retain knowledge along the DT lifecycle but also helps users and developers to design, maintain and query databases effectively for DT applications and systems of different scales and complexities.
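The ontology-to-data-model transformation described in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the class names, attributes and the CBM concepts are hypothetical, and the "minimum data model structure" is approximated as a flat concept-to-attributes mapping.

```python
# Hypothetical sketch of deriving a minimum data model structure from
# DT ontology concepts and querying it. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node in the DT ontology: a domain concept plus its relations."""
    name: str
    attributes: list = field(default_factory=list)
    related_to: list = field(default_factory=list)  # names of related concepts

def minimum_data_model(concepts):
    """Flatten ontology concepts into a minimal table-like structure:
    one entry per concept, keeping attributes and direct relations."""
    return {
        c.name: {"attributes": c.attributes, "relations": c.related_to}
        for c in concepts
    }

def query(model, concept, attribute):
    """Check whether a concept in the model carries a given attribute."""
    entry = model.get(concept)
    return entry is not None and attribute in entry["attributes"]

# Example: a fragment of a CBM-style DT ontology (assumed concepts).
ontology = [
    Concept("Asset", ["asset_id", "location"], ["Sensor"]),
    Concept("Sensor", ["sensor_id", "unit"], ["Measurement"]),
    Concept("Measurement", ["timestamp", "value"], []),
]
model = minimum_data_model(ontology)
```

Because the relations are preserved alongside the attributes, a query can be answered without losing the conceptual links between Asset, Sensor and Measurement.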

2015 ◽  
Vol 713-715 ◽  
pp. 2418-2422
Author(s):  
Lei Rao ◽  
Fan De Yang ◽  
Xin Ming Li ◽  
Dong Liu

Data management has evolved through three stages: manual management, file systems, and database systems. This paper manages equipment data using a combination of the HDFS file system and the HBase database: the principles of HBase data management are studied, reading and writing processes for equipment data are established, and a data model for an equipment database is designed based on HBase.
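The paper does not give its row-key design, but a common HBase pattern for time-series equipment data can be sketched as an assumed illustration: the equipment ID leads the key so one device's readings are stored contiguously, and a reversed timestamp makes the newest reading sort first within a device.

```python
# Illustrative HBase row-key sketch (assumed design, not from the paper).
MAX_TS = 10**13  # upper bound on epoch-millis timestamps (assumption)

def row_key(equipment_id: str, ts_millis: int) -> str:
    """Compose a row key: '<equipment_id>#<reversed zero-padded timestamp>'.
    Reversing the timestamp makes lexicographic order = newest-first."""
    return f"{equipment_id}#{MAX_TS - ts_millis:013d}"

# Keys for one device, sorted as an HBase scan would return them.
keys = sorted(row_key("pump-07", t) for t in (1000, 3000, 2000))
```

With this layout, a prefix scan on `pump-07#` streams a device's readings newest-first without a secondary index.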


Author(s):  
Benjamin Röhm ◽  
Reiner Anderl

Abstract The Department of Computer Integrated Design (DiK) at TU Darmstadt approaches the Digital Twin topic from the perspective of virtual product development. A concept for the architecture of a Digital Twin was developed that allows the administration of simulation input and output data. The concept was built in consideration of classical CAE process chains in product development. Its central part is the management of simulation input and output data in a simulation data management system within the Digital Twin (SDM-DT). The SDM-DT provides the connection between Digital Shadow and Digital Master for simulation data and simulation models. The concept was prototypically implemented: real product condition data were collected via a sensor network and transmitted to the Digital Shadow, then prepared and sent as a simulation input deck to the SDM-DT in the Digital Twin, based on the product development results. Before the simulation is run, the simulation input data are compared with historical input data from product development. The developed and implemented concept goes beyond existing approaches by providing central simulation data management in Digital Twins.
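The comparison of a new simulation input deck against historical input data can be sketched as a simple range check. This is a hedged illustration of the idea only: the parameter names, the dict-based deck format and the min/max criterion are assumptions, not the SDM-DT implementation.

```python
# Hypothetical plausibility check of a new input deck against history.
def check_input_deck(new_deck: dict, historical_decks: list) -> dict:
    """Flag parameters whose values fall outside the historical min/max.
    Returns {param: (new_value, hist_min, hist_max)} for each outlier."""
    flags = {}
    for param, value in new_deck.items():
        past = [d[param] for d in historical_decks if param in d]
        if past and not (min(past) <= value <= max(past)):
            flags[param] = (value, min(past), max(past))
    return flags

# Historical decks from product development (illustrative values).
history = [{"load_N": 1200.0, "temp_C": 40.0},
           {"load_N": 1500.0, "temp_C": 55.0}]
# A new deck built from sensor-network condition data.
flags = check_input_deck({"load_N": 2000.0, "temp_C": 50.0}, history)
```

Here the load lies outside everything seen during product development and would be flagged before the simulation is started, while the temperature passes.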


Data ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 83 ◽  
Author(s):  
Timm Fitschen ◽  
Alexander Schlemmer ◽  
Daniel Hornung ◽  
Henrik tom Wörden ◽  
Ulrich Parlitz ◽  
...  

We present CaosDB, a Research Data Management System (RDMS) designed to ensure seamless integration of inhomogeneous data sources and repositories of legacy data in a FAIR way. Its primary purpose is the management of data from the biomedical sciences, from both simulations and experiments, during the complete research data lifecycle. An RDMS for this domain faces particular challenges: research data arise in huge amounts, from a wide variety of sources, and traverse a highly branched path of further processing. To be accepted by its users, an RDMS must be built around the workflows and practices of the scientists and thus support changes in workflow and data structure. Nevertheless, it should encourage and support the development and observance of standards and furthermore facilitate the automation of data acquisition and processing with specialized software. The storage data model of an RDMS must reflect these complexities with appropriate semantics and ontologies while offering simple methods for finding, retrieving, and understanding relevant data. We show how CaosDB responds to these challenges and give an overview of its data model, the CaosDB Server, and its easy-to-learn CaosDB Query Language. We briefly discuss the status of the implementation, how we currently use CaosDB, and how we plan to use and extend it.
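A data model of this flavour, with record types declaring properties and records as typed instances that a query can filter, can be sketched in a few lines. This is a toy in-memory analogue for illustration, not the CaosDB API or its query language.

```python
# Minimal in-memory sketch of a typed record store (illustration only).
class RecordType:
    """Declares a named type and the properties its records must carry."""
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties

class Record:
    """An instance of a RecordType; rejects missing declared properties."""
    def __init__(self, record_type, **values):
        missing = [p for p in record_type.properties if p not in values]
        if missing:
            raise ValueError(f"missing properties: {missing}")
        self.record_type = record_type
        self.values = values

def find(records, type_name, **conditions):
    """A toy analogue of a 'find records of a type with property=value' query."""
    return [r for r in records
            if r.record_type.name == type_name
            and all(r.values.get(k) == v for k, v in conditions.items())]

experiment = RecordType("Experiment", ["date", "operator"])
records = [Record(experiment, date="2019-04-01", operator="A"),
           Record(experiment, date="2019-05-02", operator="B")]
hits = find(records, "Experiment", operator="B")
```

The point of the typed layer is that semantics live in the type declarations, so queries can be phrased against domain concepts rather than raw storage.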


2020 ◽  
Vol 7 (1) ◽  
pp. 205395172093561
Author(s):  
Todd Hartman ◽  
Helen Kennedy ◽  
Robin Steedman ◽  
Rhianne Jones

Low levels of public trust in data practices have led to growing calls for changes to data-driven systems, and in the EU, the General Data Protection Regulation provides a legal motivation for such changes. Data management is a vital component of data-driven systems, but what constitutes ‘good’ data management is not straightforward. Academic attention is turning to the question of what ‘good data’ might look like more generally, but public views are absent from these debates. This paper addresses this gap, reporting on a survey of the public on their views of data management approaches, undertaken by the authors and administered in the UK, where departure from the EU makes future data legislation uncertain. The survey found that respondents dislike the current approach in which commercial organizations control their personal data and prefer approaches that give them control over their data, that include oversight from regulatory bodies or that enable them to opt out of data gathering. Variations of data trusts – that is, structures that provide independent stewardship of data – were also preferable to the current approach, but not as widely preferred as control, oversight and opt out options. These features therefore constitute ‘good data management’ for survey respondents. These findings align only in part with principles of good data identified by policy experts and researchers. Our findings nuance understandings of good data as a concept and of good data management as a practice and point to where further research and policy action are needed.


Author(s):  
Alhad A. Joshi

Over the past decade, Computer Aided Engineering (Simulation) has experienced explosive growth, becoming a significant enabler for: 1. Validating product design; 2. Providing low-cost methods for exploring a variety of product design alternatives; 3. Optimizing parts for better service performance; 4. Reducing dependence on physical testing; 5. Reducing warranty costs; 6. Achieving faster time to market. This rapid growth in the number of simulations performed and in the amount of data generated, in the absence of any significant data and process management initiatives, has led to considerable inefficiencies in the CAE domain. Many companies now recognize the need to manage their CAE processes and data, and wish to leverage their existing PDM systems as the primary repositories of CAE data. Some major issues are: 1. A PDM data model is needed to support CAE; 2. The CAE data model can be very complex; 3. There is an immense variety of CAE applications and data types; 4. Many CAE simulations require access to physical test data for input and correlation; 5. Data management discipline is not typically part of today's CAE culture. Despite the unique challenges posed by bringing PDM into the CAE world, the transition could occur faster than it has in the CAD world. This presentation showcases an approach for managing CAE data in traditional PDM systems. Two working examples of CAE process automation software solutions integrated with CAD and PDM are discussed. In particular, these applications show how CAE users can leverage established PDM infrastructure and interact with EDS' Teamcenter/Enterprise, Teamcenter/Engineering and Dassault Systèmes' SmarTeam through seamless integrations with their CAE systems.


2011 ◽  
Vol 58-60 ◽  
pp. 2085-2090 ◽  
Author(s):  
Xin Xin Liu ◽  
Shao Hua Tang ◽  
Kai Wei

This paper presents OntoRT, an ontology model for the Role-based Trust-management (RT) framework, covering a large fragment of RT including RT0, RT1, RT2 and application domain specification documents (ADSDs). RT addresses distributed authorization problems in decentralized collaborative systems. OntoRT establishes a common vocabulary for RT roles and policies across domains. We describe OntoRT formally in the Description Logic (DL) SHOIN(D) and DL-safe SWRL rules. Based on our logical formalization, it is feasible to authorize and analyze RT policies automatically via state-of-the-art DL reasoners. Finally, we show how OntoRT can be integrated with OWL-DL ontologies, the W3C standard for representing information on the Web. By referring to OWL-DL ontologies that provide rich domain knowledge, the specification and management of RT policies are simplified.
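The role-membership semantics that OntoRT encodes in DL can be illustrated with a fixpoint computation over two of the four RT0 credential forms: simple member (A.r ← D) and simple inclusion (A.r ← B.s). This sketch mirrors the standard RT0 semantics only; it is not the paper's DL-reasoner approach, and the example credentials are assumed.

```python
# Illustrative RT0 role-membership inference as a least fixpoint.
# Roles are (issuer, role_name) pairs.
def rt0_members(simple_members, inclusions):
    """simple_members: {role: {entities}}  -- credentials A.r <- D
    inclusions:     {role: {roles}}     -- credentials A.r <- B.s
    Returns the full membership of every role."""
    members = {r: set(es) for r, es in simple_members.items()}
    changed = True
    while changed:  # iterate until no role gains a new member
        changed = False
        for role, src_roles in inclusions.items():
            for src in src_roles:
                new = members.get(src, set()) - members.setdefault(role, set())
                if new:
                    members[role] |= new
                    changed = True
    return members

# Assumed example: EOrg.student <- Alice; EPub.discount <- EOrg.student
members = rt0_members(
    {("EOrg", "student"): {"Alice"}},
    {("EPub", "discount"): {("EOrg", "student")}},
)
```

The fixpoint derives that Alice is entitled to EPub's discount because she is a student of EOrg, which is exactly the kind of cross-domain inference a DL reasoner performs over the OntoRT vocabulary.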

