Resource Description Framework
Recently Published Documents


TOTAL DOCUMENTS: 446 (FIVE YEARS: 152)
H-INDEX: 18 (FIVE YEARS: 4)

Author(s): Ahmed Swar, Ghada Khoriba, Mohamed Belal

Data integration enables combining data from various data sources in a standard format. Internet of things (IoT) applications use ontology approaches to provide a machine-understandable conceptualization of a domain. We propose a unified ontology schema approach that addresses IoT data integration problems in a single framework. The data unification layer maps data from different formats to data patterns based on the unified ontology model. This paper proposes a middleware consisting of an ontology-based approach that collects data from different devices. IoT middleware requires an additional semantic layer for cloud-based IoT platforms to build a schema for data generated from diverse sources. We tested the proposed model on real data consisting of approximately 160,000 readings from various sources in different formats, such as CSV, JSON, raw data, and XML. The data were collected through the file transfer protocol (FTP) and generated 960,000 resource description framework (RDF) triples. We evaluated the proposed approach by running different queries on different machines against SPARQL Protocol and RDF Query Language (SPARQL) endpoints to check query processing time, validity of the integration, and performance of the unified ontology model. The average response times for query execution on the generated RDF triples on the three servers were approximately 0.144, 0.070, and 0.062 seconds, respectively.
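The abstract's data unification layer maps readings arriving in different formats onto one set of RDF-style triples. A minimal sketch of that idea, using plain tuples for triples and an illustrative namespace and field names (none of these identifiers come from the paper):

```python
# Toy data-unification layer: normalize CSV and JSON device readings into
# the same (subject, predicate, object) triples under one assumed namespace.
import csv
import io
import json

NS = "http://example.org/iot#"  # assumed ontology namespace

def reading_to_triples(device_id, fields):
    """Map one normalized reading (a dict of field -> value) to triples."""
    subject = NS + device_id
    return [(subject, NS + key, str(value)) for key, value in fields.items()]

def from_json(text):
    obj = json.loads(text)
    return reading_to_triples(obj.pop("device"), obj)

def from_csv(text):
    row = next(csv.DictReader(io.StringIO(text)))
    return reading_to_triples(row.pop("device"), row)

# The same reading in two source formats yields identical triples.
json_triples = from_json('{"device": "sensor1", "temp": 21.5, "unit": "C"}')
csv_triples = from_csv("device,temp,unit\nsensor1,21.5,C")
```

A real system would emit proper RDF terms (URIs and typed literals) rather than strings, but the unification step is the same: every source format funnels into one triple shape.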


Author(s): Runumi Devi, Deepti Mehrotra, Sana Ben Abdallah Ben Lamine

Electronic Health Record (EHR) systems in healthcare organisations are primarily maintained in isolation from each other, which makes interoperability of the unstructured (text) data stored in these systems challenging. Similar information may be described using different terminologies by different applications; this can be avoided by transforming the content into the Resource Description Framework (RDF) model, which is interoperable across organisations. RDF requires a document's contents to be translated into a repository of triples (subject, predicate, object) known as RDF statements. Natural Language Processing (NLP) techniques can help extract actionable insights from these text data and create triples for RDF model generation. This paper discusses two NLP-based approaches to generating RDF models from unstructured patient documents: a dependency structure-based parser and a constituent (phrase) structure-based parser. Models generated by both approaches are evaluated in two aspects: the exhaustiveness of the represented knowledge and the model generation time. The precision measure is used to compute the models' exhaustiveness in terms of the number of facts transformed into RDF representations.
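The triple-generation step described above can be sketched very simply. Real pipelines use a dependency or constituency parser; in this illustration a toy verb lexicon stands in for the parser, purely to show how a sentence becomes a (subject, predicate, object) triple. The lexicon and sentences are made-up examples, not the paper's data:

```python
# Toy stand-in for NLP triple extraction: split a sentence around a known
# verb to form a (subject, predicate, object) triple for an RDF model.
VERBS = {"has", "takes", "reports"}  # assumed toy verb lexicon

def extract_triple(sentence):
    """Return (subject, predicate, object) if a known verb is found."""
    words = sentence.rstrip(".").split()
    for i, word in enumerate(words):
        if word in VERBS:
            return (" ".join(words[:i]), word, " ".join(words[i + 1:]))
    return None

triples = [extract_triple(s) for s in
           ["The patient has hypertension.",
            "The patient takes metformin."]]
```

A dependency parse would identify the verb and its arguments structurally rather than lexically, but the output shape, one triple per extracted fact, is the same.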


2022, Vol 12 (1), pp. 1-16
Author(s): Qazi Mudassar Ilyas, Muneer Ahmad, Sonia Rauf, Danish Irfan

Resource Description Framework (RDF) inherently supports merging data from various resources into a single federated graph, which can become very large even for an application of modest size. This results in severe performance degradation in the execution of RDF queries. Since every RDF query essentially traverses a graph to find its output, efficient path traversal reduces query execution time. Hence, query path optimization is required to reduce both the execution time and the cost of a query. Query path optimization is an NP-hard problem that cannot be solved in polynomial time. Genetic algorithms have proven very useful in optimization problems. We propose a hybrid genetic algorithm for query path optimization. The proposed algorithm selects an initial population using iterative improvement, thus reducing the initial solution space for the genetic algorithm, and makes significant improvements in overall performance. We show that the number of joins for complex queries is reduced considerably, resulting in reduced cost.
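The hybrid idea, seeding a genetic algorithm with solutions already polished by iterative improvement, can be sketched on a toy join-order problem. The cost model, population sizes, and operators below are illustrative assumptions, not the authors' exact formulation:

```python
# Toy hybrid GA for query path (join order) optimization: local
# iterative-improvement search seeds the initial population, then standard
# order-crossover and swap mutation evolve it.
import random

random.seed(0)
N = 6
COST = [[abs(i - j) + 1 for j in range(N)] for i in range(N)]  # toy join costs

def path_cost(order):
    return sum(COST[order[i]][order[i + 1]] for i in range(len(order) - 1))

def improve(order):
    """Iterative improvement: apply pairwise swaps while cost decreases."""
    order = order[:]
    improved = True
    while improved:
        improved = False
        for i in range(N):
            for j in range(i + 1, N):
                cand = order[:]
                cand[i], cand[j] = cand[j], cand[i]
                if path_cost(cand) < path_cost(order):
                    order, improved = cand, True
    return order

def crossover(a, b):
    """Order crossover: keep a prefix of a, fill the rest in b's order."""
    head = a[:random.randrange(1, N)]
    return head + [g for g in b if g not in head]

def mutate(order):
    order = order[:]
    i, j = random.sample(range(N), 2)
    order[i], order[j] = order[j], order[i]
    return order

# Seed with locally improved permutations (the hybrid step), then evolve.
pop = [improve(random.sample(range(N), N)) for _ in range(8)]
for _ in range(30):
    pop.sort(key=path_cost)
    parents = pop[:4]  # elitism: keep the best half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(4)]
best = min(pop, key=path_cost)
```

The seeding step is what distinguishes the hybrid: the GA starts from local optima instead of random permutations, so it spends its generations recombining already-good join orders.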


2021, Vol ahead-of-print (ahead-of-print)
Author(s): Yu-Jung Cheng, Shu-Lai Chou

Purpose: This study applies digital humanities tools (Gephi and Protégé) to establish and visualize ontologies in the cultural heritage domain. On this basis, the study develops a novel evaluation approach using five ontology indicators (data overview, visual presentation, highlight links, scalability and querying) to evaluate the knowledge structure presentation of cultural heritage ontology. Design/methodology/approach: The researchers collected and organized 824 pieces of government open data (GOD), converted the GOD into the resource description framework format, and applied Protégé and Gephi to establish and visualize a cultural heritage ontology. After the ontology was built, the study recruited 60 participants (30 from an information and communications technology background; 30 from a cultural heritage background) to operate the ontology and gather their perspectives on the visualized ontology. Findings: Based on the participants' feedback, this study found that Gephi supports ontology visualization better than Protégé, especially in the data overview, visual presentation and highlight links dimensions; it supported visualization, demonstrated the ontology class hierarchy and property relations, and facilitated wider application of the ontology. Originality/value: This study offers two contributions. First, the researchers analyzed data on East Asian architecture with novel digital humanities tools to visualize an ontology for cultural heritage. Second, the study collected participants' feedback regarding the visualized ontology to enhance its design, which can serve as a reference for future ontological development.


2021, Vol 10 (12), pp. 832
Author(s): Xiangfu Meng, Lin Zhu, Qing Li, Xiaoyan Zhang

Resource Description Framework (RDF), as a standard metadata description framework proposed by the World Wide Web Consortium (W3C), is suitable for modeling and querying Web data. With the growing importance of RDF data in Web data management, there is an increasing need for modeling and querying RDF data. Previous approaches mainly focus on querying plain RDF. However, a large amount of RDF data has spatial and temporal features, so it is important to study spatiotemporal RDF query approaches. In this paper, we first formally define spatiotemporal RDF data and construct a spatiotemporal RDF model, st-RDF, that is used to represent and manipulate spatiotemporal RDF data. Second, we present a spatiotemporal RDF query algorithm, stQuery, based on subgraph matching. This algorithm can quickly determine that the query result is empty for queries whose temporal or spatial range exceeds a certain bound, by adopting a preliminary filtering mechanism in the query process. Third, we propose a sorting strategy that calculates the matching order of query nodes to speed up subgraph matching. Finally, we conduct experiments on effectiveness and query efficiency. The experimental results show the performance advantages of our approach.
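The preliminary-filtering idea can be shown in a few lines: before any subgraph matching, compare the query's temporal window against the data's overall extent and return empty immediately on a miss. The triple layout (a valid-time interval attached to each triple) and the data are illustrative assumptions:

```python
# Toy spatiotemporal RDF store: each triple carries a (start, end) valid
# time. A query first checks its window against the data's extent (the
# preliminary filter), then falls through to per-triple matching.
triples = [
    ("city:A", "hasTemp", "20", (2000, 2005)),  # (s, p, o, valid-time)
    ("city:B", "hasTemp", "25", (2010, 2015)),
]

def query(pattern, window):
    lo, hi = window
    data_lo = min(t[3][0] for t in triples)
    data_hi = max(t[3][1] for t in triples)
    if hi < data_lo or lo > data_hi:  # window misses all data: prune early
        return []
    s, p = pattern
    return [t for t in triples
            if (s is None or t[0] == s) and t[1] == p
            and not (t[3][1] < lo or t[3][0] > hi)]

pruned = query((None, "hasTemp"), (2020, 2030))   # answered without matching
overlap = query((None, "hasTemp"), (2003, 2012))  # both triples intersect
```

The real algorithm applies the same check per spatial range and per query subgraph, but the payoff is identical: out-of-range queries never pay the cost of subgraph matching.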


Author(s): Anna A. Stukalova, Natalya A. Balutkina

The article provides a review of foreign and domestic publications on the problems of creating, developing and using authority files (AF) for names of persons, names of organizations, geographical names and other objects at the international, national and regional levels. The paper presents an analysis of foreign experience in AF maintenance. The authors note that, owing to the availability of universal collections and qualified specialists, AF formation abroad is usually carried out by national libraries. A substantive analysis of foreign publications has shown that national AFs (NAF) are characterized by data variability and a diversity of approaches. The authors studied the experience of successfully combining NAFs created according to different methods within the international corporate project, the Virtual International Authority File (VIAF). The article notes that most Russian libraries do not use AFs, since AFs created in republican and regional scientific libraries are, as a rule, not publicly available. At the same time, the creation by an individual library of its own AF entails high labour and material costs, and the formation of a large number of AFs leads to variability among the AFs created for the same objects. The authors conclude that, for efficient use of AFs within the country, it is necessary to apply unified methods and rules for the creation of authority records. Another way out is the application of Semantic Web technology, which allows linking AFs created according to different methods. It is necessary to make maximum use of existing vocabularies, or to create vocabularies based on the World Wide Web Consortium (W3C) standards Resource Description Framework (RDF), RDF Schema (RDFS) and Web Ontology Language (OWL).


F1000Research, 2021, Vol 10, pp. 881
Author(s): Sini Govindapillai, Lay-Ki Soon, Su-Cheng Haw

A knowledge graph (KG) publishes a machine-readable representation of knowledge on the Web. Structured data in a knowledge graph is published using the Resource Description Framework (RDF), where knowledge is represented as a triple (subject, predicate, object). Due to the presence of erroneous, outdated or conflicting data in a knowledge graph, the quality of facts cannot be guaranteed. The trustworthiness of facts can be enhanced by adding metadata such as the source of the information and the location and time of the fact's occurrence. Since plain RDF triples do not support metadata for provenance and contextualization, an alternative method, RDF reification, is employed by most knowledge graphs. RDF reification increases the magnitude of the data, as several statements are required to represent a single fact. Another limitation for applications that use provenance data, such as those in the medical domain and in cyber security, is that not all facts in these knowledge graphs are annotated with provenance data. In this paper, we provide an overview of prominent reification approaches, together with an analysis of the popular general knowledge graphs Wikidata and YAGO4 with regard to their representation of provenance and context data. Wikidata employs qualifiers to attach metadata to facts, while YAGO4 collects metadata from Wikidata qualifiers. However, facts in Wikidata and YAGO4 can be fetched without using reification, to cater for applications that do not require metadata. To the best of our knowledge, this is the first paper that investigates the method and extent of metadata coverage in two prominent KGs, Wikidata and YAGO4.
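The data blow-up that reification causes is easy to see concretely. Standard RDF reification turns one fact into four statements (an rdf:Statement node plus its subject, predicate, and object), after which provenance can be attached to the statement node. The RDF vocabulary URIs below are the standard ones; the fact, blank-node label, and source predicate are made-up examples:

```python
# Standard RDF reification: one (s, p, o) fact becomes four statements
# about a statement node, which provenance metadata can then reference.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def reify(fact, statement_id, provenance):
    s, p, o = fact
    node = statement_id
    return [
        (node, RDF + "type", RDF + "Statement"),
        (node, RDF + "subject", s),
        (node, RDF + "predicate", p),
        (node, RDF + "object", o),
        (node, "ex:source", provenance),  # metadata attached to the fact
    ]

stmts = reify(("ex:Einstein", "ex:bornIn", "ex:Ulm"), "_:st1", "ex:Wikipedia")
```

One fact plus one piece of provenance costs five triples here, which is exactly the magnitude problem the abstract describes; Wikidata's qualifiers and other alternatives (named graphs, RDF-star) exist to avoid this overhead.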


2021, Vol 9 (1), pp. 17
Author(s): Esther Mietzsch, Daniel Martini, Kristin Kolshus, Andrea Turbati, Imma Subirats

AGROVOC is the multilingual thesaurus managed and published by the Food and Agriculture Organization of the United Nations (FAO). Its content is available in more than 40 languages and covers all of the FAO's areas of interest. Its structural basis is the Resource Description Framework (RDF) and the Simple Knowledge Organization System (SKOS). More than 39,000 concepts, each identified by a uniform resource identifier (URI), and 800,000 terms are related through a hierarchical system and aligned to other knowledge organization systems. This paper illustrates recent developments in the context of AGROVOC and presents use cases where it has contributed to enhancing the interoperability of data shared by different information systems.
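A small sketch of what an AGROVOC-style entry looks like under SKOS: a URI-identified concept with multilingual preferred labels and a broader relation in the concept hierarchy. The skos: predicate URIs are the standard SKOS vocabulary; the concept URIs and labels are illustrative, not real AGROVOC identifiers:

```python
# A SKOS-style concept as plain triples: multilingual prefLabels plus a
# broader link, mirroring how a thesaurus entry is published in RDF.
SKOS = "http://www.w3.org/2004/02/skos/core#"
concept = "http://example.org/agrovoc/c_12345"  # assumed concept URI

triples = [
    (concept, SKOS + "prefLabel", ("maize", "en")),
    (concept, SKOS + "prefLabel", ("maïs", "fr")),
    (concept, SKOS + "broader", "http://example.org/agrovoc/c_cereals"),
]

# Collect the language-tagged labels for this concept.
labels = {lang: text
          for s, p, (text, lang) in
          [t for t in triples if t[1] == SKOS + "prefLabel"]}
```

Because every language's label hangs off the same URI, systems in different languages can interoperate by exchanging the URI alone, which is the interoperability point the paper makes.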


2021, Vol 2021, pp. 1-15
Author(s): Senthilselvan Natarajan, Subramaniyaswamy Vairavasundaram, Yuvaraja Teekaraman, Ramya Kuppusamy, Arun Radhakrishnan

The modern Web wants data to be in the Resource Description Framework (RDF) format, a machine-readable form that makes data easy to share and reuse without human intervention. However, most information is still available in relational form. Existing conventional methods transform data from RDB to RDF using instance-level mapping, which has not yielded the expected results because of poor mapping. Hence, this paper proposes a novel schema-based RDB-RDF mapping method (relational database to Resource Description Framework), an improved approach for transforming a relational database into the Resource Description Framework. It provides both data materialization and on-demand mapping. RDB-RDF reduces data retrieval time for non-primary-key searches by using schema-level mapping. The resulting mapped RDF graph presents the relational database as a conceptual schema and maintains the instance triples as a data graph. This mechanism is known as data materialization, which suits static datasets well. To get data in a dynamic environment, query translation (on-demand mapping) is preferable to whole-data conversion. The proposed approach directly converts a SPARQL query into an SQL query using the mapping descriptions available in the proposed system. The mapping description is the key component of the proposed system and is responsible for quick data retrieval and query translation. A join expression introduced in the proposed RDB-RDF mapping method efficiently handles all complex operations with primary and foreign keys. Experimental evaluation is done on a graphics designer database. The results show that the proposed schema-based RDB-RDF mapping method accomplishes more comprehensible mapping than conventional methods by dissolving structural and operational differences.
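The core of any RDB-to-RDF materialization is the row-to-triples step: each row becomes a subject URI built from the table name and primary key, each column becomes a predicate, and foreign keys become links between subject URIs. A minimal sketch under assumed table, column, and namespace names (the paper's actual mapping descriptions are richer than this):

```python
# Toy RDB-to-RDF materialization: one relational row -> RDF-style triples,
# with foreign keys turned into links between row URIs.
NS = "http://example.org/db#"  # assumed namespace

def row_to_triples(table, pk, row, fk_cols=()):
    """Map a row (a dict) of `table` with primary key column `pk`."""
    subject = f"{NS}{table}/{row[pk]}"
    triples = []
    for col, val in row.items():
        if col == pk:
            continue
        if col in fk_cols:  # foreign key: link to the referenced row's URI
            ref_table, ref_id = val
            triples.append((subject, NS + col, f"{NS}{ref_table}/{ref_id}"))
        else:
            triples.append((subject, NS + col, str(val)))
    return triples

row = {"id": 7, "name": "Ada", "dept_id": ("department", 3)}
triples = row_to_triples("employee", "id", row, fk_cols={"dept_id"})
```

On-demand mapping inverts this: instead of emitting triples, the same table/column-to-URI correspondence is used to rewrite a SPARQL triple pattern into an SQL query over the original tables.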


2021, Vol 8 (1)
Author(s): Tanvi Chawla, Girdhari Singh, Emmanuel S. Pilli

The Resource Description Framework (RDF) model, owing to its flexible structure, is increasingly being used to represent Linked Data. The rise in the amount of Linked Data and knowledge graphs has resulted in an increase in the volume of RDF data. RDF is used to model metadata, especially for social media domains where the data is linked. With the plethora of RDF data sources available on the Web, scalable RDF data management becomes a tedious task. In this paper, we present MuSe, an efficient distributed RDF storage scheme for storing and querying RDF data with Hadoop MapReduce. In MuSe, Big RDF data is stored at two levels for answering the common triple patterns in SPARQL queries. MuSe considers the types of frequently occurring triple patterns and optimizes RDF storage to answer such triple patterns in minimum time. It accesses only the tables that are sufficient for answering a triple pattern instead of scanning the whole RDF dataset. Extensive experiments on two synthetic RDF datasets, LUBM and WatDiv, show that MuSe outperforms the compared state-of-the-art frameworks in terms of query execution time and scalability.
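The "access only the tables that are sufficient" idea can be illustrated in miniature with predicate-partitioned storage: a triple pattern with a bound predicate reads exactly one partition instead of scanning the whole dataset. The data and the single-machine partitioning scheme are illustrative assumptions; MuSe's actual layout is a two-level scheme over Hadoop:

```python
# Toy predicate-partitioned triple store: patterns with a bound predicate
# touch one partition; only fully unbound patterns scan everything.
from collections import defaultdict

data = [("a", "knows", "b"), ("a", "type", "Person"),
        ("b", "type", "Person"), ("b", "knows", "c")]

by_predicate = defaultdict(list)
for s, p, o in data:
    by_predicate[p].append((s, o))

def match(s=None, p=None, o=None):
    """Answer a triple pattern; None means an unbound position."""
    preds = [p] if p is not None else list(by_predicate)
    out = []
    for pred in preds:  # a single partition when p is bound
        for subj, obj in by_predicate[pred]:
            if (s is None or subj == s) and (o is None or obj == o):
                out.append((subj, pred, obj))
    return out
```

Vertical partitioning by predicate is a classic RDF storage technique; MuSe's contribution is choosing and combining such tables by the frequency of triple-pattern types, so the common patterns hit the smallest sufficient tables.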

