Generic Data Model
Recently Published Documents

TOTAL DOCUMENTS: 24 (five years: 6)
H-INDEX: 4 (five years: 1)

2021 ◽  
Author(s):  
Samer Alkarkoukly ◽  
Abdul-Mateen Rajput

openEHR is an open-source e-health technology that aims to build data models for interoperable Electronic Health Records (EHRs) and to enhance semantic interoperability. The openEHR architecture consists of different building blocks; among them is the "template", which combines different archetypes to collect the data for a specific use case. In this paper, we created a generic data model for a virtual pancreatic cancer patient, using the openEHR approach and tools, to be used in testing and virtual environments. The data elements for this template were derived from the "Oncology minimal data set" of the HiGHmed project. In addition, we generated virtual data profiles for 10 patients using the template. The objective of this exercise is to provide a data model and virtual data profiles for testing and experimentation scenarios within the openEHR environment. Both the template and the 10 virtual patient profiles are publicly available.
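The abstract does not reproduce the template itself, but a rough sketch can illustrate the idea of generating virtual patient profiles against a fixed set of data elements. The archetype-style paths and value sets below are invented placeholders, not the actual HiGHmed oncology minimal data set:

```python
# Minimal sketch (not the authors' actual template): generating virtual
# patient records shaped like a flattened openEHR composition. The paths
# below are illustrative placeholders only.
import json
import random
import uuid

DIAGNOSIS_CODES = ["C25.0", "C25.1", "C25.2"]  # ICD-10 pancreatic cancer subsites

def make_virtual_patient():
    """Return one synthetic patient profile keyed by illustrative paths."""
    return {
        "ehr_id": str(uuid.uuid4()),
        "oncology_minimal_dataset/diagnosis/code": random.choice(DIAGNOSIS_CODES),
        "oncology_minimal_dataset/diagnosis/date": f"20{random.randint(15, 21)}-0{random.randint(1, 9)}-15",
        "oncology_minimal_dataset/tnm/t": random.choice(["T1", "T2", "T3", "T4"]),
        "oncology_minimal_dataset/tnm/n": random.choice(["N0", "N1"]),
        "oncology_minimal_dataset/tnm/m": random.choice(["M0", "M1"]),
    }

# Ten virtual profiles, matching the number of profiles published with the paper.
profiles = [make_virtual_patient() for _ in range(10)]
print(json.dumps(profiles[0], indent=2))
```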


2021 ◽  
Vol Special Issue on Data Science... ◽  
Author(s):  
Jacky Akoka ◽  
Isabelle Comyn-Wattiau ◽  
Stéphane Lamassé ◽
Cédric Du Mouza

Prosopographic databases, which allow the study of social groups through their biographies, are used today by a significant number of historians. Computerization has allowed intensive and large-scale exploitation of these databases, and their modeling has given rise to several data models. An important problem is to ensure the quality of the stored information. In this article, we propose a generic data model that can describe most existing prosopographic databases and enrich them by integrating several quality concepts, such as uncertainty, reliability, accuracy, and completeness.
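As a sketch of the quality-enrichment idea, one might attach per-attribute quality annotations to each recorded fact about a historical person. The field names and scoring below are assumptions for illustration, not the authors' model:

```python
# Illustrative sketch only: attaching the quality dimensions the article
# mentions (uncertainty, reliability, completeness) to individual facts
# about a historical person. All names and scales are hypothetical.
from dataclasses import dataclass, field

@dataclass
class QualifiedValue:
    value: str
    uncertainty: float   # 0.0 = certain, 1.0 = pure conjecture
    reliability: float   # trust in the source, 0.0-1.0
    source: str          # citation for provenance

@dataclass
class Person:
    name: QualifiedValue
    birth_year: QualifiedValue
    occupations: list[QualifiedValue] = field(default_factory=list)

    def completeness(self, expected_fields: int = 3) -> float:
        """Crude completeness score: filled fields / expected fields."""
        filled = sum(1 for v in (self.name, self.birth_year) if v.value) \
                 + (1 if self.occupations else 0)
        return filled / expected_fields

scholar = Person(
    name=QualifiedValue("Johannes de Sacrobosco", 0.1, 0.9, "primary manuscript"),
    birth_year=QualifiedValue("c. 1195", 0.6, 0.5, "secondary literature"),
)
print(f"completeness: {scholar.completeness():.2f}")
```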


Author(s):  
Zhengyi Song ◽  
Young Moon

Abstract Cyber-Manufacturing System (CMS) is a vision for the factory of the future, in which physical manufacturing resources and processes are integrated with computational workflows to provide on-demand, adaptive, and scalable manufacturing services. In CMS, functional manufacturing components on the factory floor are digitized, encapsulated in production services, and made accessible to users throughout the network. CMS uses data-centric technologies to program manufacturing activities on factory floors. Leveraging advanced technologies, CMS can achieve better manufacturing agility, flexibility, scalability, and sustainability than traditional factories. While data is the main driver of manufacturing activities in CMS, the lack of (i) a generic data model that explicitly represents the entities and stakeholders in CMS and (ii) workflow definition and analysis for service-oriented functionalities and manufacturing intelligence still hinders the implementation of a fully executable CMS. To address these problems, this paper (i) formalizes a data model of CMS using an Entity-Relationship (E-R) diagram, (ii) presents the definition and analysis of workflows, along with the data pipelines and Extract/Transform/Load (ETL) processes that automate the entire lifecycle activities in CMS, (iii) deploys the proposed data model and workflows in a Web-based application, and (iv) tests the functionality of this application with an industrial case, ultimately validating the proposed data model and workflows.
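To make the E-R and ETL ideas concrete, here is a hypothetical fragment of such a schema expressed as SQLite DDL, together with a toy extract/transform/load step; the entity and column names are illustrative, not taken from the paper:

```python
# Hypothetical fragment of an E-R schema for a CMS, plus a toy ETL step.
# Entities and columns are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE machine (
        machine_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        status TEXT NOT NULL           -- e.g. 'idle', 'busy', 'down'
    );
    CREATE TABLE service (
        service_id INTEGER PRIMARY KEY,
        machine_id INTEGER REFERENCES machine(machine_id),
        capability TEXT NOT NULL       -- e.g. '3-axis milling'
    );
    CREATE TABLE service_request (
        request_id INTEGER PRIMARY KEY,
        service_id INTEGER REFERENCES service(service_id),
        quantity INTEGER NOT NULL
    );
""")

# Extract: raw shop-floor events. Transform: normalize status strings.
# Load: insert into the machine entity.
raw_events = [("CNC-01", "IDLE "), ("CNC-02", "busy")]
for name, status in raw_events:
    conn.execute("INSERT INTO machine (name, status) VALUES (?, ?)",
                 (name, status.strip().lower()))
conn.commit()

for row in conn.execute("SELECT machine_id, name, status FROM machine"):
    print(row)
```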


Energies ◽  
2019 ◽  
Vol 12 (10) ◽  
pp. 1893 ◽  
Author(s):  
Paul Schott ◽  
Johannes Sedlmeir ◽  
Nina Strobel ◽  
Thomas Weber ◽  
Gilbert Fridgen ◽  
...  

In this article, we present a new descriptive model for industrial flexibility with respect to power consumption. Advancing digitization in the energy sector opens up new possibilities for utilizing and automating the marketing of flexibility potentials, and therefore facilitates more advanced energy management. This requires a standardized description and modeling of power-related flexibility. The data model in this work was developed in close collaboration with several partners from different industries in the context of a major German research project. A suitable set of key figures makes it possible to also describe complex production processes that exhibit interdependencies and storage-like properties. The data model can be applied to other areas as well, e.g., power plants, plug-in electric vehicles, or the power-related flexibility of households.
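A minimal sketch can illustrate what such a key-figure description might look like; the figures below (power, duration, regeneration time, storage capacity) are loosely inferred from the abstract, and the field names are invented:

```python
# Sketch under assumptions: a minimal set of key figures describing a
# power-related flexibility with storage-like behavior. Not the paper's
# actual model; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Flexibility:
    power_kw: float          # power change offered (negative = load reduction)
    max_duration_h: float    # how long the change can be sustained
    regeneration_h: float    # recovery time before it can be offered again
    storage_capacity_kwh: float = 0.0  # for storage-like processes

    def energy_kwh(self) -> float:
        """Energy shifted if the flexibility is fully called."""
        return abs(self.power_kw) * self.max_duration_h

# An industrial cooling process that can shed 200 kW for two hours:
cooling = Flexibility(power_kw=-200.0, max_duration_h=2.0,
                      regeneration_h=4.0, storage_capacity_kwh=400.0)
print(f"{cooling.energy_kwh():.0f} kWh of load can be shifted")
```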


2019 ◽  
Vol 3 (1) ◽  
pp. 26-39 ◽  
Author(s):  
Maria Esteva ◽  
Ramona L. Walls ◽  
Andrew B. Magill ◽  
Weijia Xu ◽  
Ruizhu Huang ◽  
...  

Abstract The Identifier Services (IDS) project conducted research into, and built a prototype for, managing distributed genomics datasets remotely and over time. Inspired by archival concepts, IDS allows researchers to track dataset evolution through multiple copies, modifications, and derivatives, independent of where the data are located, both symbolically, in the research lifecycle, and physically, in a repository or storage facility. The prototype implementation is based on a three-step data modeling process: a) understanding and recording different researcher workflows, b) mapping the workflows and data to a generic data model and identifying functions, and c) integrating the data model as architecture and interactive functions into cyberinfrastructure (CI). Identity functions are operationalized as continuous tracking of authenticity attributes, including data location, differences between seemingly identical datasets, metadata, data integrity, and the roles of the different types of local and global identifiers used during the research lifecycle. CI resources were used to conduct identity functions at scale, including scheduling content-comparison tasks on high-performance computing resources. The prototype was developed and evaluated against six test cases, and feedback was gathered through a focus-group activity. While some technical roadblocks remain, our project demonstrates that identity functions are an innovative solution for managing large distributed genomic datasets.
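One of the identity functions described above, detecting differences between seemingly identical dataset copies, can be sketched with ordinary checksums. This toy version walks two local directories, whereas IDS schedules such comparisons at scale on HPC resources:

```python
# A minimal sketch of content comparison between two dataset copies via
# checksums. Illustrative only; not the IDS prototype's actual code.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks to handle large genomics files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_copies(copy_a: Path, copy_b: Path) -> dict:
    """Report files that differ, or that exist in only one copy."""
    files_a = {p.relative_to(copy_a): checksum(p)
               for p in copy_a.rglob("*") if p.is_file()}
    files_b = {p.relative_to(copy_b): checksum(p)
               for p in copy_b.rglob("*") if p.is_file()}
    return {
        "only_in_a": sorted(map(str, files_a.keys() - files_b.keys())),
        "only_in_b": sorted(map(str, files_b.keys() - files_a.keys())),
        "modified": sorted(str(k) for k in files_a.keys() & files_b.keys()
                           if files_a[k] != files_b[k]),
    }

# Usage: compare_copies(Path("/repo/dataset_v1"), Path("/archive/dataset_v1"))
```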


Author(s):  
Susanne Bleisch ◽  
Daria Hollenstein

Locations become places through personal significance and experience. While place data are not emotion data per se, personal significance and experience are often emotional. In this paper, we explore the potential of visual data exploration to support the qualitative analysis of place-related emotion data. To do so, we draw upon Cresswell's (2009) definition of place to define a generic data model that contains emotion data for a given location and its locale. For each data dimension in our model, we present symbolization options that can be combined to create a range of interactive visualizations, specifically supporting re-expression. We discuss the usefulness of example visualizations created from a pilot-study data set on how elderly women experience their neighborhood. We find that the visualizations support four broad qualitative data analysis tasks: revising categorizations, identifying connections and relationships, aggregating for synthesis, and corroborating evidence by combining sense of place with locale information to support a holistic interpretation of place data. In conclusion, the paper contributes to the literature in three ways. It provides a generic data model and associated symbolization options, and uses examples to show how place-related emotion data can be visualized. Further, the example visualizations make explicit how re-expression, the combination of emotion data with locale information, and the visualization of vagueness and linked data support the analysis of emotion data. Finally, we advocate for visualization-supported qualitative data analysis in interdisciplinary teams, so that more suitable maps are used and cartographers can better understand and support qualitative data analysis.
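As an illustration of a data model built from location, locale, and emotion dimensions, consider the following sketch; the fields and emotion categories are assumptions, not the authors' schema:

```python
# Illustrative data model only, loosely following the three components of
# place: location (coordinates), locale (physical setting), and sense of
# place (emotion). All fields and example values are hypothetical.
from dataclasses import dataclass

@dataclass
class PlaceEmotionRecord:
    lat: float
    lon: float
    locale: str              # physical setting, e.g. 'tram stop', 'park'
    emotion: str             # coded category from qualitative analysis
    intensity: int           # e.g. 1 (weak) to 5 (strong)
    vague_location: bool     # True if the reported location was imprecise
    quote: str               # interview excerpt grounding the coding

records = [
    PlaceEmotionRecord(47.56, 7.59, "tram stop", "unease", 4, False,
                       "I avoid this stop after dark."),
    PlaceEmotionRecord(47.57, 7.60, "park", "joy", 3, True,
                       "Somewhere around the park I always feel at home."),
]

# Re-expression: the same records can be regrouped by emotion, intensity,
# or vagueness to support different analysis passes.
by_emotion = {}
for r in records:
    by_emotion.setdefault(r.emotion, []).append(r)
print({k: len(v) for k, v in by_emotion.items()})
```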


Author(s):  
Georg Laipple ◽  
Stéphane Dauzère-Pérès ◽
Thomas Ponsignon ◽  
Philippe Vialletelle

2017 ◽  
Author(s):  
Shayan Shahand ◽  
Sílvia Olabarriaga

The lessons learned during six years of designing, developing, and operating four Science Gateway (SG) generations motivated us to develop yet another generation of platforms, coined "Rosemary". At the core of Rosemary, the three fundamental SG functions, namely data, computing, and collaboration management, are integrated. Our earlier studies showed that complete integration of these functions is a feature usually overlooked in existing SG platforms. Rosemary provides a generic data model, a RESTful API, and a responsive UI that can be customized through programming to build custom SGs. Moreover, Rosemary is designed and implemented to be flexible to changes in e-Infrastructures and in user community requirements. The software frameworks, tools, and libraries employed in the realization of Rosemary streamline the development, deployment, and operation of SGs customized to users' needs. The code of Rosemary is open source and available at https://github.com/AMCeScience/Rosemary-Vanilla. So far, the platform has been used to implement prototypes of three SGs: for high-throughput analysis and management of neuroimaging data, for sharing data in in-vitro fertilization research, and for provenance tracking of DNA sequencing data. This paper presents the design considerations, data model, and system architecture of Rosemary, and highlights some features intrinsic to its design and implementation, with examples from the three prototypes.
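The abstract mentions a generic data model exposed through a RESTful API. As a hypothetical sketch of what a client interaction could look like, the endpoint, payload shape, and host below are assumptions; the actual API is defined in the Rosemary-Vanilla repository:

```python
# Hypothetical sketch of interacting with a Rosemary-style RESTful API.
# Endpoint, payload, and host are placeholders, not the real API.
import json
import urllib.request

BASE_URL = "https://rosemary.example.org/api"  # placeholder host

def create_dataset(name: str, files: list[str], token: str) -> dict:
    """POST a generic 'dataset' object to the gateway (illustrative)."""
    payload = json.dumps({"type": "dataset", "name": name,
                          "files": files}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/objects",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a running gateway and a valid token):
# created = create_dataset("fmri-batch-7", ["scan_001.nii.gz"], token="...")
```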


