Resource Description Framework
Recently Published Documents


TOTAL DOCUMENTS: 157 (five years: 38)

H-INDEX: 15 (five years: 2)

F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 881
Author(s):  
Sini Govindapillai ◽  
Lay-Ki Soon ◽  
Su-Cheng Haw

A knowledge graph (KG) publishes a machine-readable representation of knowledge on the Web. Structured data in a knowledge graph is published using the Resource Description Framework (RDF), in which knowledge is represented as a triple (subject, predicate, object). Due to the presence of erroneous, outdated, or conflicting data in a knowledge graph, the quality of facts cannot be guaranteed. The trustworthiness of facts in a knowledge graph can be enhanced by adding metadata such as the source of the information and the location and time of the fact's occurrence. Since plain RDF does not support metadata for provenance and contextualization, an alternative method, RDF reification, is employed by most knowledge graphs. RDF reification increases the volume of data, as several statements are required to represent a single fact. Another limitation for applications that use provenance data, such as those in the medical domain and in cyber security, is that not all facts in these knowledge graphs are annotated with provenance data. In this paper, we provide an overview of prominent reification approaches together with an analysis of the popular general-purpose knowledge graphs Wikidata and YAGO4 with regard to their representation of provenance and context data. Wikidata employs qualifiers to attach metadata to facts, while YAGO4 collects metadata from Wikidata qualifiers. However, facts in Wikidata and YAGO4 can be fetched without using reification, to cater for applications that do not require metadata. To the best of our knowledge, this is the first paper that investigates the method and extent of metadata coverage in two prominent KGs, Wikidata and YAGO4.
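To make the overhead of reification concrete, here is a minimal sketch in Python with rdflib, using a hypothetical example.org namespace rather than data from Wikidata or YAGO4: a single fact is one triple, while its standard RDF reification needs four additional statements before any provenance metadata can be attached.

```python
from rdflib import Graph, Literal, Namespace, BNode, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")   # hypothetical namespace for illustration
g = Graph()
g.bind("ex", EX)

# Plain fact: one triple.
g.add((EX.Barack_Obama, EX.positionHeld, EX.President_of_the_USA))

# Standard RDF reification of the same fact: four extra statements
# are needed before any metadata can be attached to it.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.Barack_Obama))
g.add((stmt, RDF.predicate, EX.positionHeld))
g.add((stmt, RDF.object, EX.President_of_the_USA))

# Only now can source and temporal context be stated about the fact.
g.add((stmt, EX.source, URIRef("https://example.org/some-source")))
g.add((stmt, EX.startTime, Literal("2009-01-20", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```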


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Senthilselvan Natarajan ◽  
Subramaniyaswamy Vairavasundaram ◽  
Yuvaraja Teekaraman ◽  
Ramya Kuppusamy ◽  
Arun Radhakrishnan

The modern web wants data to be in Resource Description Framework (RDF) format, a machine-readable form that makes data easy to share and reuse without human intervention. However, most information is still available in relational form. Existing conventional methods transform data from RDB to RDF using instance-level mapping, which has not yielded the expected results because of poor mapping. Hence, in this paper, a novel schema-based RDB-RDF mapping method (relational database to Resource Description Framework) is proposed, an improved approach for transforming a relational database into the Resource Description Framework. It provides both data materialization and on-demand mapping. RDB-RDF reduces the data retrieval time for non-primary-key searches by using schema-level mapping. The resulting mapped RDF graph presents the relational database as a conceptual schema and maintains the instance triples as a data graph. This mechanism is known as data materialization and suits static datasets well. To get data in a dynamic environment, query translation (on-demand mapping) is preferable to whole-data conversion. The proposed approach directly converts a SPARQL query into an SQL query using the mapping descriptions available in the proposed system. The mapping description is the key component of the proposed system and is responsible for quick data retrieval and query translation. The join expression introduced in the proposed RDB-RDF mapping method efficiently handles all complex operations with primary and foreign keys. Experimental evaluation is done on a graphics designer database. It is observed from the results that the proposed schema-based RDB-RDF mapping method accomplishes a more comprehensible mapping than conventional methods by resolving structural and operational differences.
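As a rough illustration of the on-demand mapping idea (not the paper's actual algorithm or mapping-description format), the sketch below translates a single SPARQL triple pattern into an SQL query using a hypothetical mapping from RDF properties to table columns.

```python
# Minimal sketch of on-demand SPARQL-to-SQL translation for a single triple
# pattern, assuming a hypothetical mapping description linking RDF properties
# to tables and columns. Illustrative only.

MAPPING = {
    # hypothetical mapping description: RDF property -> (table, column)
    "ex:name":    ("designer", "name"),
    "ex:project": ("designer", "project"),
}

def triple_pattern_to_sql(subject_var: str, predicate: str, obj_literal: str) -> str:
    """Translate the pattern  ?s <predicate> "literal"  into an SQL query."""
    table, column = MAPPING[predicate]
    # Parameterised queries would be used in practice; string building keeps
    # the sketch short.
    return f"SELECT id AS {subject_var} FROM {table} WHERE {column} = '{obj_literal}'"

# SPARQL:  SELECT ?d WHERE { ?d ex:project "Logo redesign" . }
print(triple_pattern_to_sql("d", "ex:project", "Logo redesign"))
# -> SELECT id AS d FROM designer WHERE project = 'Logo redesign'
```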


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 881
Author(s):  
Sini Govindapillai ◽  
Lay-Ki Soon ◽  
Su-Cheng Haw

A knowledge graph (KG) publishes a machine-readable representation of knowledge on the Web. Structured data in a knowledge graph is published using the Resource Description Framework (RDF), in which knowledge is represented as a triple (subject, predicate, object). Due to the presence of erroneous, outdated, or conflicting data in a knowledge graph, the quality of facts cannot be guaranteed. Therefore, the provenance of knowledge can assist in building trust in these knowledge graphs. In this paper, we provide an analysis of the popular general-purpose knowledge graphs Wikidata and YAGO4 with regard to their representation of provenance and context data. Since plain RDF does not support metadata for provenance and contextualization, an alternative method, RDF reification, is employed by most knowledge graphs. The trustworthiness of facts in a knowledge graph can be enhanced by adding metadata such as the source of the information and the location and time of the fact's occurrence. Wikidata employs qualifiers to attach metadata to facts, while YAGO4 collects metadata from Wikidata qualifiers. RDF reification increases the volume of data, as several statements are required to represent a single fact. However, facts in Wikidata and YAGO4 can be fetched without using reification. Another limitation for applications that use provenance data is that not all facts in these knowledge graphs are annotated with provenance data. Structured data in a knowledge graph is noisy; therefore, the reliability of data in knowledge graphs can be increased by provenance data. To the best of our knowledge, this is the first paper that investigates the method and extent of metadata addition in two prominent KGs, Wikidata and YAGO4.
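As a concrete illustration of the qualifier mechanism mentioned for Wikidata, the short sketch below queries the public Wikidata SPARQL endpoint for a statement together with one of its qualifiers. The choice of entity and properties (positions held by Barack Obama, with the start-time qualifier) is ours for illustration and does not come from the paper; the wd:, p:, ps:, and pq: prefixes are predefined at the endpoint.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Wikidata's statement model: p: points to a statement node, ps: gives the
# fact's value, and pq: attaches qualifier metadata to that statement.
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
SELECT ?position ?start WHERE {
  wd:Q76 p:P39 ?stmt .                 # Barack Obama, position held (statement node)
  ?stmt ps:P39 ?position .             # the position itself
  OPTIONAL { ?stmt pq:P580 ?start . }  # qualifier: start time
}
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["position"]["value"], row.get("start", {}).get("value"))
```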


2021 ◽  
Vol 26 (1) ◽  
pp. 44-53
Author(s):  
Ouahiba Djama

Abstract: The description of resources and their relationships is an essential task on the web. Generally, web users do not share the same interests and viewpoints. Each user wants the web to provide data and information according to their interests and specialty. The existing query languages that allow querying data on the web cannot take the user's viewpoint into consideration. We propose introducing the viewpoint into the description of resources. The Resource Description Framework (RDF) is a common framework for sharing data and describing resources. In this study, we aim to introduce the notion of the viewpoint into RDF. Therefore, we propose the View-Point Resource Description Framework (VP-RDF), an extension of RDF obtained by adding new elements. Existing query languages (e.g., SPARQL) can query VP-RDF graphs and provide the user with data and information according to their interests and specialty. Therefore, VP-RDF can be useful in intelligent systems on the web.
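The abstract does not spell out VP-RDF's new elements, so the sketch below only illustrates one generic way of scoping statements to a viewpoint, via named graphs queried with standard SPARQL; it is not the VP-RDF proposal itself, and the example.org names are hypothetical.

```python
from rdflib import Dataset, Literal, Namespace

EX = Namespace("http://example.org/")            # hypothetical namespace
ds = Dataset()

# One named graph per viewpoint; each holds the statements valid in that view.
historical = ds.graph(EX.HistoricalViewpoint)
astronomy = ds.graph(EX.AstronomyViewpoint)
historical.add((EX.Pluto, EX.classifiedAs, Literal("planet")))
astronomy.add((EX.Pluto, EX.classifiedAs, Literal("dwarf planet")))

# A user interested in the astronomical viewpoint queries only that graph.
query = """
SELECT ?classification WHERE {
  GRAPH <http://example.org/AstronomyViewpoint> {
    ?resource <http://example.org/classifiedAs> ?classification .
  }
}
"""
for (classification,) in ds.query(query):
    print(classification)   # -> "dwarf planet"
```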


2021 ◽  
Author(s):  
Yiqing ZHAO ◽  
Anastasios Dimou ◽  
Feichen Shen ◽  
Nansu Zong ◽  
Jaime I. Davila ◽  
...  

Abstract Background: Next-generation sequencing provides comprehensive information about an individual's genetic makeup and is commonplace in precision oncology practice. Due to the heterogeneity of individual patients' disease conditions and treatment journeys, not all targeted therapies were initiated despite actionable mutations. To better understand and support the clinical decision-making process in precision oncology, there is a need to examine real-world associations between patients' genetic information and treatment choices. Methods: To address the insufficient use of real-world data (RWD) from electronic health records (EHRs), we generated a single Resource Description Framework (RDF) resource, called PO2RDF (precision oncology to RDF), by integrating information on genes, variants, diseases, and drugs from genetic reports and EHRs. Results: PO2RDF contains a total of 2,309,014 triples. Among them, 32,815 triples relate to genes, 34,695 to variants, 8,787 to diseases, and 26,154 to drugs. We performed one use-case analysis to demonstrate the usability of PO2RDF: we examined real-world associations between EGFR mutations and targeted therapies to confirm existing knowledge and detect off-label use. Conclusions: Our work proposes using RDF to organize and distribute clinical RWD that is otherwise inaccessible externally. It serves as a pilot study that will lead to new clinical applications and could ultimately stimulate progress in the field of precision oncology.
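The kind of use-case query described (relating EGFR variants to targeted therapies) could in principle look like the hedged sketch below; the po2rdf namespace and property names are invented for illustration, since the abstract does not describe the actual PO2RDF schema.

```python
from rdflib import Graph, Namespace

PO = Namespace("http://example.org/po2rdf/")   # hypothetical namespace
g = Graph()
g.bind("po2rdf", PO)

# Toy data standing in for gene/variant/drug triples.
g.add((PO.EGFR, PO.hasVariant, PO.EGFR_L858R))
g.add((PO.EGFR_L858R, PO.treatedWith, PO.Osimertinib))

results = g.query("""
PREFIX po2rdf: <http://example.org/po2rdf/>
SELECT ?variant ?drug WHERE {
  po2rdf:EGFR po2rdf:hasVariant ?variant .
  ?variant po2rdf:treatedWith ?drug .
}
""")
for variant, drug in results:
    print(variant, "->", drug)
```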


2021 ◽  
Author(s):  
Marvin Martens ◽  
Chris Evelo ◽  
Egon Willighagen

The AOP-Wiki is the main environment for the development and storage of Adverse Outcome Pathways. These Adverse Outcome Pathways describe mechanistic information about toxicodynamic processes and can be used to develop effective risk assessment strategies. However, it is challenging to automatically and systematically parse, filter, and use its contents. We explored solutions to better structure the AOP-Wiki content and to link it with chemical and biological resources. Together, this allows more detailed exploration that can be automated.

We converted the complete AOP-Wiki content into the Resource Description Framework. We used over twenty ontologies for the semantic annotation of property-object relations, including the ChemInformatics Ontology, Dublin Core, and the Adverse Outcome Pathway Ontology; the latter was used over 8,000 times. Furthermore, over 3,500 link-outs were added to twelve chemical databases and over 6,500 link-outs to four gene and protein databases.

SPARQL queries can be run against the Resource Description Framework to answer biological and toxicological questions, such as listing the measurement methods for all Key Events leading to an Adverse Outcome of interest. The full power of this new resource becomes apparent when combining its content with external databases using federated queries. For example, we can link genes related to Key Events with the molecular pathways on WikiPathways in which they occur, and find all Adverse Outcome Pathways caused by stressors that are part of a particular chemical group. Overall, the AOP-Wiki Resource Description Framework allows new ways to explore the rapidly growing Adverse Outcome Pathway knowledge and makes the integration of this database into automated workflows possible.
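The federated-query idea can be sketched roughly as follows; the local endpoint URL and the aop: property are assumptions made for illustration, while the WikiPathways endpoint and the wp:/dcterms: terms follow WikiPathways RDF conventions as best we know them, so treat them as a sketch rather than the resource's documented schema.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed local endpoint exposing the AOP-Wiki RDF; replace with the real one.
sparql = SPARQLWrapper("https://example.org/aopwiki/sparql")
sparql.setQuery("""
PREFIX aop:     <http://example.org/aop-schema/>        # hypothetical prefix
PREFIX wp:      <http://vocabularies.wikipathways.org/wp#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX rdfs:    <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?keyEvent ?pathway WHERE {
  ?keyEvent aop:associatedGeneSymbol ?symbol .          # hypothetical property
  SERVICE <https://sparql.wikipathways.org/sparql> {
    ?geneProduct a wp:GeneProduct ;
                 rdfs:label ?symbol ;
                 dcterms:isPartOf ?pathway .
  }
}
""")
sparql.setReturnFormat(JSON)
# results = sparql.query().convert()   # run only once a real endpoint is configured
```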


Author(s):  
Jose L. Martinez-Rodriguez ◽  
Ivan Lopez-Arevalo ◽  
Jaime I. Lopez-Veyna ◽  
Ana B. Rios-Alvarado ◽  
Edwin Aldana-Bobadilla

One of the goals of data scientists and curators is to get the information contained in text organized and integrated in a way that can be easily consumed by people and machines. A starting point for this goal is a model to represent the information. This model should make it easy to obtain knowledge semantically (e.g., using reasoners and inference rules). In this sense, the Semantic Web focuses on representing information through the Resource Description Framework (RDF) model, in which the triple (subject, predicate, object) is the basic unit of information. In this context, the natural language processing (NLP) field has been a cornerstone in identifying elements that can be represented as Semantic Web triples. However, existing approaches for deriving RDF triples from text use diverse techniques and tasks for this purpose, which complicates non-expert users' understanding of the process. This chapter discusses the main concepts involved in representing information through the Semantic Web and NLP fields.
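As a toy illustration of the target representation (not of a real NLP pipeline), the sketch below turns one sentence into a (subject, predicate, object) triple with rdflib; actual systems rely on entity recognition and relation extraction rather than a regular expression, and the example.org namespace is hypothetical.

```python
import re
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

sentence = "Tim Berners-Lee invented the World Wide Web"
# Naive pattern match standing in for relation extraction.
match = re.match(r"(.+?) invented (.+)", sentence)
if match:
    subject, obj = match.groups()
    g.add((EX[subject.replace(" ", "_")], EX.invented, EX[obj.replace(" ", "_")]))

print(g.serialize(format="turtle"))
```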


2020 ◽  
Vol 25 (6) ◽  
pp. 793-801
Author(s):  
Maturi Sreerama Murty ◽  
Nallamothu Nagamalleswara Rao

Tracking the provenance of Resource Description Framework (RDF) resources is a key capability in the establishment of Linked Data frameworks, where the focus shifts toward data integration rather than raw processing speed. Linked Data enables applications to improve by converting legacy data into RDF resources; such data includes bibliographic, geographic, and government datasets, among others. However, a large portion of these systems do not track the details and processing history of each supported resource. In such cases, it is vital for those applications to track, store, and disseminate provenance information that reflects their source data and the operations applied to it. We present an RDF data provenance tracking framework. Provenance information is tracked during the conversion process and managed throughout; from there, it is disseminated using concept URIs. The proposed design is based on the Harvard Library Database. Experiments were performed on datasets with changes made to the values in the RDF and to the associated provenance details. The results are promising in that the approach enables data publishers to produce meaningful provenance records as data evolves, while requiring little time and effort.
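One standard way to record the kind of provenance discussed here is the W3C PROV-O vocabulary; the sketch below attaches a source, generation time, and responsible agent to a converted RDF resource. The resource URIs are hypothetical, and the abstract does not state which vocabulary the proposed system actually uses.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

PROV = Namespace("http://www.w3.org/ns/prov#")   # W3C PROV-O vocabulary
EX = Namespace("http://example.org/")            # hypothetical namespace
g = Graph()
g.bind("prov", PROV)

record = EX.record_123                                   # an RDF resource produced by conversion
legacy = URIRef("http://example.org/legacy/marc/123")    # its legacy source (hypothetical)

g.add((record, PROV.wasDerivedFrom, legacy))
g.add((record, PROV.generatedAtTime,
       Literal("2020-11-01T10:00:00Z", datatype=XSD.dateTime)))
g.add((record, PROV.wasAttributedTo, EX.ConversionService))

print(g.serialize(format="turtle"))
```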

