Implementation of an approach for variable data insertion procedures synthesis for structure-independent databases

Author(s):  
A.S. Grishchenko ◽  

The aim of this work is to implement an approach for synthesizing variable data insertion procedures for structure-independent databases. The approach makes it possible to synthesize insertion procedures that are free of two kinds of heterogeneity: representational heterogeneities, associated with differing forms of representing the minimal structural units, and semantic heterogeneities, associated with the use of elements that are semantically heterogeneous to the structural elements of a structure-independent database. Eliminating these heterogeneities increases the performance of the procedures compared with the original ones. The approach is validated on the example of a structure-independent database based on the Tenzer data model. The work produced the following results: the minimum data storage unit for a database built on the Tenzer model; an algorithm of actions using the developed minimum data storage unit; and the procedure code. A triple of the form <Object, Property, Value> was chosen as the minimum data storage unit, and verification of the resulting algorithm showed that all elements involved are homogeneous with the structure of a database based on the Tenzer data model, indicating that the approach was tested successfully.
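The triple-based storage unit described above can be illustrated with a minimal sketch. The class and method names here are hypothetical assumptions for demonstration, not the paper's actual procedure code; the point is that a single homogeneous unit, <Object, Property, Value>, carries every insertion.

```python
# Minimal sketch (hypothetical API): a structure-independent store whose only
# storage unit is the triple <Object, Property, Value>. Because every insert
# uses the same unit, no representational or semantic heterogeneity appears.

class TripleStore:
    def __init__(self):
        self.triples = []  # every datum is stored as one homogeneous unit

    def insert(self, obj, prop, value):
        # One kind of structural unit for all data: insertion never needs
        # schema-specific element types.
        triple = (obj, prop, value)
        self.triples.append(triple)
        return triple

    def values(self, obj, prop):
        return [v for o, p, v in self.triples if o == obj and p == prop]

store = TripleStore()
store.insert("sensor-1", "unit", "percent")
store.insert("sensor-1", "reading", 42)
print(store.values("sensor-1", "reading"))  # [42]
```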

Author(s):  
Wan Wang

Abstract A data model for the kinematic structure of mechanisms and its coding principle are proposed, based on the topological graph and the contracted graph. In the model, every basic chain is mapped to a code of 5 decimal digits, and a mechanism is mapped to a set of basic-chain codes. The model occupies minimal memory, contains a complete set of useful primary structural parameters, and significantly reduces computation time for isomorphism identification.
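The abstract does not state what each of the 5 decimal digits encodes, so the digit assignment below (link count, joint count, independent loops, degrees of freedom, chain index) is purely a hypothetical illustration of how primary structural parameters could be packed into one small integer code per basic chain.

```python
# Illustrative sketch only: the actual digit semantics of the paper's 5-digit
# chain code are not given here, so this assignment is an assumption.

def encode_chain(links, joints, loops, dof, index):
    # Pack five single-digit structural parameters into one decimal code.
    for v in (links, joints, loops, dof, index):
        if not 0 <= v <= 9:
            raise ValueError("each parameter must fit in one decimal digit")
    return links * 10000 + joints * 1000 + loops * 100 + dof * 10 + index

def decode_chain(code):
    # Recover the five digits, most significant first.
    return tuple((code // 10 ** k) % 10 for k in (4, 3, 2, 1, 0))

code = encode_chain(4, 4, 1, 1, 0)
print(code)                # 44110
print(decode_chain(code))  # (4, 4, 1, 1, 0)
```

Because a whole mechanism is a set of such codes, comparing two mechanisms for isomorphism candidates reduces to comparing two small integer sets, which is where the claimed time savings come from.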


Author(s):  
Bálint Molnár ◽  
András Béleczki ◽  
Bence Sarkadi-Nagy

Data structures, and especially the relationships among data entities, have changed in the last few years. Network-like graph representations of data models are becoming increasingly common, since they are better suited to depicting such relationships than the well-established relational data model. Graphs can describe large, complex networks, such as social networks, but they are also capable of storing rich information about complex data, a trait that previously belonged mostly to the relational data model. The same can be achieved with the knowledge-representation tool called "hypergraphs". To exploit the possibilities of this model, we need a practical way to store and process hypergraphs. In this paper, we propose a way to store the hypergraph model in the SAP HANA in-memory database system, which provides a "Graph Core" engine alongside the relational data model. Graph Core ships with many graph algorithms by default, but it cannot store or operate on hypergraphs, nor are any of those algorithms tailored to hypergraphs. Hence, besides a case study of the two information systems, we also propose pseudo-code-level algorithms that add hypergraph semantics for processing our IS model.
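A standard way to fit hypergraphs into an engine that only knows vertices, edges, or relational rows is a bipartite incidence representation: each hyperedge becomes an ordinary identifier, and membership becomes (edge, vertex) pairs. The sketch below shows that idea in plain Python; it is not the paper's SAP HANA schema, and the table and method names are illustrative assumptions.

```python
# Incidence-list sketch of a hypergraph: one hyperedge may join any number
# of vertices, stored as plain (edge_id, vertex_id) rows, which maps
# directly onto a relational table or a bipartite graph in a graph engine.

class HyperGraph:
    def __init__(self):
        self.incidence = []  # rows of (edge_id, vertex_id)

    def add_edge(self, edge_id, vertices):
        for v in vertices:
            self.incidence.append((edge_id, v))

    def edge_vertices(self, edge_id):
        return {v for e, v in self.incidence if e == edge_id}

    def vertex_edges(self, vertex):
        return {e for e, v in self.incidence if v == vertex}

hg = HyperGraph()
hg.add_edge("e1", {"a", "b", "c"})   # a 3-vertex hyperedge
hg.add_edge("e2", {"b", "d"})
print(sorted(hg.vertex_edges("b")))  # ['e1', 'e2']
```

Ordinary graph algorithms can then run on the bipartite form, which is one way to reuse an engine's built-in algorithms even though they were not written for hypergraphs.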


Author(s):  
Sumit Singh ◽  
Essam Shehab ◽  
Nigel Higgins ◽  
Kevin Fowler ◽  
Dylan Reynolds ◽  
...  

A Digital Twin (DT) is an imitation of a real-world product, process, or system, and is an ideal vehicle for data-driven optimisation in different phases of the product lifecycle. With the rapid growth of DT research, data management for digital twins has become a challenging field for both industry and academia. The DT data-management challenges analysed in this article are data variety, big data and data mining, and DT dynamics. This research proposes a novel DT ontology model and a methodology to address these challenges. The DT ontology model captures and models the conceptual knowledge of the DT domain. Using the proposed methodology, this domain knowledge is transformed into a minimum data model structure used to map, query, and manage databases for DT applications. The research is validated with a case study based on a Condition-Based Monitoring (CBM) DT application. Query formulation around the minimum data model structure further demonstrates the effectiveness of the approach by returning accurate results while maintaining semantics and conceptual relationships across the DT lifecycle. The method not only provides the flexibility to retain knowledge along the DT lifecycle but also helps users and developers design, maintain, and query databases effectively for DT applications and systems of differing scale and complexity.
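The article's minimum data model structure is not reproduced here, so the sketch below simply assumes it can be approximated as (concept, relation, concept) facts derived from the ontology, with a wildcard query helper in the spirit of the CBM case study. All names and relations are illustrative assumptions, not the paper's vocabulary.

```python
# Hedged sketch: ontology-derived facts stored as minimal triples, queried
# with None acting as a wildcard so conceptual relationships are preserved.
# Entity and relation names below are invented for illustration.

facts = [
    ("Pump-01", "hasSensor", "Vibration-S1"),
    ("Vibration-S1", "monitors", "BearingWear"),
    ("Pump-01", "hasSensor", "Temp-S2"),
]

def query(subject=None, relation=None, obj=None):
    # Each None matches anything, so one helper covers every query shape
    # the minimum structure needs (by subject, by relation, by object).
    return [f for f in facts
            if (subject is None or f[0] == subject)
            and (relation is None or f[1] == relation)
            and (obj is None or f[2] == obj)]

print(query(subject="Pump-01", relation="hasSensor"))
```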


2020 ◽  
Vol 10 (1) ◽  
pp. 140-146
Author(s):  
Klaus Böhm ◽  
Tibor Kubjatko ◽  
Daniel Paula ◽  
Hans-Georg Schweiger

Abstract With the new EU legislative rules on Event Data Recorders coming into force in 2022, the question is whether the data base under discussion is sufficient for the needs of clarifying accidents involving automated vehicles. Based on the reconstruction of real accidents involving vehicles with ADAS, combined with specially designed crash tests, a broader data base than that of the US EDR regulation (NHTSA 49 CFR Part 563.7) is proposed. The working group AHEAD, to which the authors contribute, has already elaborated a data model that fits the needs of automated driving; the structure of this data model is shown. Moreover, the particular benefit of storing internal video or photo feeds from the vehicle camera systems, combined with object data, is illustrated. When a sophisticated 3D measurement method is applied to the accident scene, the videos or photos can also serve as a control instance for the stored vehicle data. The AHEAD data model, enhanced with the storage of video and photo feeds, should be considered in the planned roadmap of the Informal Working Group (IWG) on EDR/DSSAD (Data Storage System for Automated Driving) reporting to UNECE WP29. In addition, over-the-air data access, using technology already applied in China for electric vehicles called Real-Time Monitoring, would enable a quantum leap in forensic accident reconstruction.


Author(s):  
M. Alderighi ◽  
F. Amorini ◽  
A. Anzalone ◽  
G. Cardella ◽  
S. Cavallaro ◽  
...  

2019 ◽  
Vol 1 (1) ◽  
pp. 107
Author(s):  
Mila Kharisma ◽  
Iwan Sugriwan ◽  
Ade Agung Harnawan

Soil moisture is very important to measure per unit of time, especially in peat soils, which have high porosity. A measuring device for detecting soil moisture was realized in this research. The soil moisture measuring instrument is built from three main blocks: four YL-69 soil moisture sensors; an Arduino Uno as the measurement processing unit, equipped with an SD card as the data storage unit; and a 20x4 character LCD as the display unit for the measurement results. The span of the measuring device ranges from 0% to 95%, with a deviation from 0% to 4.88%. The advantages of the measurement system are its simple operation, real-time monitoring, and automatic data storage.
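The conversion from a raw sensor reading to the instrument's 0-95% span can be sketched as follows. The YL-69 is read through the Arduino's 10-bit ADC (0-1023, with higher raw values corresponding to drier soil); the linear mapping below is an assumed illustration, not the calibrated curve used in the paper.

```python
# Illustrative sketch: map a 10-bit ADC reading (higher = drier) onto the
# reported 0%-95% soil-moisture span. The linear calibration is an assumption.

ADC_MAX = 1023   # full-scale value of the Arduino Uno's 10-bit ADC
SPAN_MAX = 95.0  # upper end of the instrument's measuring span, in %

def moisture_percent(adc_value):
    adc_value = max(0, min(ADC_MAX, adc_value))  # clamp to the ADC range
    return (1 - adc_value / ADC_MAX) * SPAN_MAX

print(round(moisture_percent(0), 1))     # 95.0 (fully wet)
print(round(moisture_percent(1023), 1))  # 0.0  (fully dry)
```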


2019 ◽  
Author(s):  
Nezar Abdennur ◽  
Leonid Mirny

Most existing coverage-based (epi)genomic datasets are one-dimensional, but newer technologies probing interactions (physical, genetic, etc.) produce quantitative maps with two-dimensional genomic coordinate systems. Storage and computational costs mount sharply with data resolution when such maps are stored in dense form. Hence, there is a pressing need to develop data storage strategies that handle the full range of useful resolutions in multidimensional genomic datasets by taking advantage of their sparse nature, while supporting efficient compression and providing fast random access to facilitate development of scalable algorithms for data analysis. We developed a file format called cooler, based on a sparse data model, that can support genomically-labeled matrices at any resolution. It has the flexibility to accommodate various descriptions of the data axes (genomic coordinates, tracks and bin annotations), resolutions, data density patterns, and metadata. Cooler is based on HDF5 and is supported by a Python library and command line suite to create, read, inspect and manipulate cooler data collections. The format has been adopted as a standard by the NIH 4D Nucleome Consortium. Cooler is cross-platform, BSD-licensed, and can be installed from the Python Package Index or the bioconda repository. The source code is maintained on GitHub at https://github.com/mirnylab/cooler.
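The sparse data model underlying such a format can be sketched without the cooler library itself: a table of genomic bins plus upper-triangular (bin1, bin2, count) records, so that the many empty entries of a contact matrix cost nothing to store. This is a generic COO-style illustration, not cooler's actual HDF5 layout or API.

```python
# Sketch of a sparse genomically-labeled matrix: bins define the coordinate
# axis, and only nonzero "pixels" (bin1, bin2, count) are materialized,
# stored upper-triangular since the contact matrix is symmetric.

bins = [("chr1", 0, 1000), ("chr1", 1000, 2000), ("chr1", 2000, 3000)]

pixels = [(0, 0, 10), (0, 1, 4), (1, 2, 7)]  # nonzero contacts only

def fetch(bin1, bin2):
    # Symmetric lookup over upper-triangular storage.
    lo, hi = min(bin1, bin2), max(bin1, bin2)
    for b1, b2, count in pixels:
        if (b1, b2) == (lo, hi):
            return count
    return 0  # implicit zero: absent entries are never stored

print(fetch(2, 1))  # 7
print(fetch(0, 2))  # 0
```

Storage then scales with the number of nonzero contacts rather than with the square of the number of bins, which is what makes high resolutions tractable.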


Author(s):  
Andrey S. Grishchenko

The aim of developing an approach to the synthesis of variable data insertion procedures in structure-independent databases is to identify and analyze the heterogeneities in the existing construction process. Two types of heterogeneity were identified: heterogeneities associated with differing forms of representing the minimal structural units used in constructing the data insertion procedure, and semantic heterogeneities associated with the use of elements that are semantically heterogeneous to the structural elements of a structure-independent database. To eliminate them, an approach based on the action designer should be used. It allows actions to be used as the minimal structural units in the synthesis process and the meanings of characteristics to be revealed by presenting them in the form of an action structure. The result of the article is a formulated approach to the synthesis of variable data insertion procedures in structure-independent databases.
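The action-designer idea above can be sketched as synthesizing an insertion procedure from minimal actions, each checked for homogeneity with the database structure at synthesis time. The element names and the check itself are illustrative assumptions; the article does not specify this code.

```python
# Hedged sketch: a procedure is a composition of minimal actions, and any
# element not homogeneous to the structure is rejected during synthesis,
# which is how both kinds of heterogeneity are kept out of the result.

ALLOWED_ELEMENTS = {"Object", "Property", "Value"}  # structure-homogeneous units

def make_action(element, payload):
    if element not in ALLOWED_ELEMENTS:
        # A semantically heterogeneous element would re-introduce the
        # problem, so it is caught at synthesis time, not at run time.
        raise ValueError(f"element {element!r} is not homogeneous to the structure")
    return (element, payload)

procedure = [
    make_action("Object", "sensor-1"),
    make_action("Property", "reading"),
    make_action("Value", 42),
]
print([element for element, _ in procedure])  # ['Object', 'Property', 'Value']
```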

