Knowledge Graphs and Semantic Web

2020 ◽  
Vol 77 (1) ◽  
pp. 93-105
Author(s):  
Junzhi Jia

Purpose: The purpose of this paper is to identify the concepts, component parts and relationships among vocabularies, linked data and knowledge graphs (KGs) from the perspective of data and knowledge transitions.

Design/methodology/approach: This paper uses conceptual analysis methods. The study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions.

Findings: Vocabularies are the cornerstone for accurately building an understanding of the meaning of data. Vocabularies provide a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage in KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data.

Originality/value: This paper first describes the composition of vocabularies, linked data and KGs. More importantly, it innovatively analyzes and summarizes the interrelatedness of these factors, which arises from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.


Semantic Web ◽  
2021 ◽  
pp. 1-36
Author(s):  
Enrico Daga ◽  
Albert Meroño-Peñuela ◽  
Enrico Motta

Sequences are among the most important data structures in computer science. In the Semantic Web, however, little attention has been given to Sequential Linked Data. In previous work, we discussed the data models that Knowledge Graphs commonly use for representing sequences, and showed that these models have an impact on query performance and that this impact is invariant to triplestore implementations. However, the specific list operations that the management of Sequential Linked Data requires beyond the simple retrieval of an entire list or a range of its elements (e.g. adding or removing elements), and their impact on the various list data models, remain unclear. Covering this knowledge gap would be a significant step towards the realization of a Semantic Web list Application Programming Interface (API) that standardizes list manipulation and generalizes beyond specific data models. To address these challenges, we build on our previous work on the effects of various sequential data models for Knowledge Graphs, extending our benchmark and proposing a set of read-write Semantic Web list operations in SPARQL, with insert, update and delete support. To do so, we identify five classic sequential data structures from computer science (linked list, doubly linked list, stack, queue and array), from which we derive nine atomic read-write operations for Semantic Web lists. We propose a SPARQL implementation of these operations for five typical RDF data models and compare their performance by executing them against six increasing dataset sizes and four different triplestores. In light of our results, we discuss the feasibility of our devised API and reflect on the state of affairs of Sequential Linked Data.
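The paper specifies its list operations in SPARQL over RDF data models. As a rough illustration of the kind of atomic read-write operation involved, the sketch below models the classic rdf:first/rdf:rest linked-list data model with plain Python tuples standing in for triples, and performs an insert by rewiring a single rdf:rest link, mirroring the delete-then-insert pattern a SPARQL UPDATE would use. Function names and blank-node labels are invented for illustration and are not taken from the paper's API.

```python
# An rdf:List-style linked list stored as (subject, predicate, object) tuples.
# Illustrative sketch only; the paper's actual operations are SPARQL updates.

RDF_FIRST, RDF_REST, RDF_NIL = "rdf:first", "rdf:rest", "rdf:nil"

def make_list(items):
    """Build cons-cell triples for a Python list; returns (head, triples)."""
    triples, head = [], RDF_NIL
    for i, item in reversed(list(enumerate(items))):
        cell = f"_:b{i}"
        triples += [(cell, RDF_FIRST, item), (cell, RDF_REST, head)]
        head = cell
    return head, triples

def to_python(head, triples):
    """Walk rdf:rest links from the head and collect rdf:first values."""
    index = {(s, p): o for s, p, o in triples}
    out = []
    while head != RDF_NIL:
        out.append(index[(head, RDF_FIRST)])
        head = index[(head, RDF_REST)]
    return out

def insert_at(head, triples, pos, item):
    """Insert item at position pos by deleting one rdf:rest triple and
    inserting three new ones (O(pos) traversal, as in any linked list)."""
    new_cell = f"_:new{pos}"
    if pos == 0:
        triples += [(new_cell, RDF_FIRST, item), (new_cell, RDF_REST, head)]
        return new_cell
    index = {(s, p): o for s, p, o in triples}
    prev = head
    for _ in range(pos - 1):                    # walk to the predecessor cell
        prev = index[(prev, RDF_REST)]
    old_rest = index[(prev, RDF_REST)]
    triples.remove((prev, RDF_REST, old_rest))  # DELETE the old link
    triples += [(prev, RDF_REST, new_cell),     # INSERT the new cell
                (new_cell, RDF_FIRST, item),
                (new_cell, RDF_REST, old_rest)]
    return head

head, triples = make_list(["a", "b", "c"])
head = insert_at(head, triples, 1, "x")
# to_python(head, triples) → ['a', 'x', 'b', 'c']
```

The sketch also hints at why the choice of data model matters for performance: in the cons-cell model an insert rewrites only one link, but locating the position requires a traversal whose SPARQL equivalent is an arbitrary-length property path.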


2020 ◽  
Vol 4 (1) ◽  
pp. 32-42 ◽  
Author(s):  
Georgios Lampropoulos ◽  
Euclid Keramopoulos ◽  
Konstantinos Diamantaras

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Konstantinos Ilias Kotis ◽  
Konstantina Zachila ◽  
Evaggelos Paparidis

Remarkable research progress has demonstrated the efficiency of Knowledge Graphs (KGs) in extracting valuable external knowledge in various domains. A Knowledge Graph (KG) can represent high-order relations that connect two objects with one or multiple related attributes, and emerging Graph Neural Networks (GNNs) can extract both object characteristics and relations from KGs. This paper presents how Machine Learning (ML) meets the Semantic Web and how KGs relate to Neural Networks and Deep Learning. The paper also highlights important aspects of this area of research, discussing open issues such as the bias hidden in KGs at different levels of graph representation.
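As a toy illustration of the premise that GNNs aggregate both node features and relational structure from a KG, the sketch below runs a single message-passing round over a hand-made graph, replacing learned weights with a plain mean of neighbour features. The entities, relations and feature values are invented for this sketch, and relation types are ignored, which a real relational GNN would not do.

```python
# One mean-aggregation message-passing step over a tiny knowledge graph.
# All names and feature values are invented; relation labels are ignored here.

edges = [("alice", "knows", "bob"), ("bob", "knows", "carol"),
         ("carol", "worksAt", "acme")]

# toy one-dimensional node features
feats = {"alice": 1.0, "bob": 2.0, "carol": 4.0, "acme": 8.0}

def neighbours(node):
    """Collect neighbours in either direction (an undirected view of the KG)."""
    out = set()
    for s, _p, o in edges:
        if s == node:
            out.add(o)
        if o == node:
            out.add(s)
    return out

def message_pass(feats):
    """One aggregation round: each node averages itself with its neighbours."""
    new = {}
    for node, h in feats.items():
        msgs = [h] + [feats[n] for n in neighbours(node)]
        new[node] = sum(msgs) / len(msgs)
    return new

updated = message_pass(feats)
# updated["alice"] → 1.5  (mean of alice=1.0 and bob=2.0)
```

Stacking several such rounds, with learned transformations in place of the plain mean, is what lets a GNN propagate information along multi-hop KG relations.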


2021 ◽  
Vol 11 (11) ◽  
pp. 5110
Author(s):  
Muhammad Yahya ◽  
John G. Breslin ◽  
Muhammad Intizar Ali

In recent years, due to technological advancements, the concept of Industry 4.0 (I4.0) has been gaining popularity, while presenting several technical challenges that are being tackled by both the industrial and academic research communities. The Semantic Web, including Knowledge Graphs, is a promising technology that can play a significant role in realizing I4.0 implementations. This paper surveys the use of the Semantic Web and Knowledge Graphs for I4.0 from different perspectives, such as managing information related to equipment maintenance, resource optimization, and the provision of on-time and on-demand production and services. Moreover, to address the limited depth and expressiveness of current ontologies, we propose an enhanced reference generalized ontological model (RGOM) based on the Reference Architecture Model for Industry 4.0 (RAMI 4.0). RGOM can facilitate a range of I4.0 concepts, including improved asset monitoring, production enhancement, reconfiguration of resources, process optimization, product orders and deliveries, and the life cycle of products. Our proposed RGOM can be used to generate a knowledge graph capable of providing answers in response to any real-time query.
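The claim that an RGOM-derived knowledge graph can answer real-time queries comes down to matching graph patterns against stored triples. The sketch below illustrates that idea with a minimal triple-pattern matcher over invented machine-status facts; the rgom: terms and machine names are placeholders for this sketch, not the actual RGOM vocabulary.

```python
# Minimal triple-pattern matching over a toy I4.0-style knowledge graph.
# The rgom: terms below are invented placeholders, not real RGOM classes.

kg = [
    ("machine1", "rdf:type", "rgom:Machine"),
    ("machine1", "rgom:hasStatus", "running"),
    ("machine2", "rdf:type", "rgom:Machine"),
    ("machine2", "rgom:hasStatus", "maintenance"),
]

def match(pattern, kg):
    """Return variable bindings for one (s, p, o) pattern; '?x' marks a variable."""
    results = []
    for triple in kg:
        binding, ok = {}, True
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val
            elif pat != val:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# "Which machines are currently in maintenance?"
answers = match(("?m", "rgom:hasStatus", "maintenance"), kg)
# answers → [{'?m': 'machine2'}]
```

A SPARQL engine generalizes this by joining the bindings of several patterns, but the core answering step is the same pattern-to-triple matching.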


Database ◽  
2019 ◽  
Vol 2019 ◽  
Author(s):  
Lars Vogt ◽  
Roman Baum ◽  
Philipp Bhatty ◽  
Christian Köhler ◽  
Sandra Meid ◽  
...  

Abstract We introduce the Semantic Ontology-Controlled application for web Content Management Systems (SOCCOMAS), a development framework for FAIR ('findable', 'accessible', 'interoperable', 'reusable') Semantic Web Content Management Systems (S-WCMSs). Each S-WCMS run by SOCCOMAS has its contents managed through a corresponding knowledge base that stores all data and metadata in the form of semantic knowledge graphs in a Jena tuple store. Automated procedures track provenance, user contributions and detailed change history. Each S-WCMS is accessible via both a graphical user interface (GUI), utilizing the JavaScript framework AngularJS, and a SPARQL endpoint. As a consequence, all data and metadata are maximally findable, accessible, interoperable and reusable, complying with the FAIR Guiding Principles. The source code of SOCCOMAS is written using the Semantic Programming Ontology (SPrO). SPrO consists of commands, attributes and variables with which one can describe an S-WCMS. We used SPrO to describe all the features and workflows typically required by any S-WCMS and documented these descriptions in a SOCCOMAS source code ontology (SC-Basic). SC-Basic specifies a set of default features, such as provenance tracking and a publication life cycle with versioning, which are available in every S-WCMS run by SOCCOMAS. All features and workflows specific to a particular S-WCMS, however, must be described within an instance source code ontology (INST-SCO), defining, e.g., the function and composition of the GUI with all its user interactions, the underlying data schemes and representations, and all its workflow processes. The combination of the descriptions in SC-Basic and a given INST-SCO specifies the behavior of an S-WCMS. SOCCOMAS controls this S-WCMS through the Java-based middleware that accompanies SPrO, which functions as an interpreter. Because of the ontology-controlled design, SOCCOMAS allows easy customization with a minimum of technical programming background required, thereby seamlessly integrating conventional web page technologies with Semantic Web technologies. SOCCOMAS and the Java interpreter are available from https://github.com/SemanticProgramming.

