On the Graph Structure of the Web of Data

Author(s):  
Alberto Nogales Moyano ◽  
Miguel Angel Sicilia ◽  
Elena Garcia Barriocanal

This article describes how the Web of Data has emerged as the realization of a machine-readable web, relying on the Resource Description Framework (RDF) language as a way to provide richer semantics to datasets. While the Web of Data is based on similar principles as the original Web, with interlinking as the principal mechanism for relating information, the differences in the structure of the information are evident. Several studies have analysed the graph structure of the Web, yielding important insights that were used in relevant applications. However, those findings cannot be transposed to the Web of Data, due to fundamental differences in production, link creation and usage. This article reports on a study of the graph structure of the Web of Data using methods and techniques from similar studies of the Web. Results show that the Web of Data also complies with the bow-tie theory. Other characteristics are the low distance between nodes and the low closeness and degree centrality. Regarding the datasets, the largest one is Open Data Euskadi, but the one with the most connections to other datasets is DBpedia.
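
Below is a minimal sketch, in Python with networkx, of how such a bow-tie decomposition and the reported centrality measures can be computed; the toy edge list stands in for a real Web of Data crawl.

    # Minimal bow-tie decomposition sketch over a directed graph,
    # in the spirit of the classic Web-graph studies. The toy edge
    # list below is a stand-in for a real Web of Data crawl.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("a", "b"), ("b", "c"), ("c", "a"),  # a strongly connected core
        ("x", "a"),                          # IN: reaches the core
        ("c", "y"),                          # OUT: reachable from the core
    ])

    # Largest strongly connected component = the bow-tie core (SCC).
    core = max(nx.strongly_connected_components(G), key=len)
    seed = next(iter(core))

    # IN: nodes that reach the core; OUT: nodes the core reaches.
    in_set = nx.ancestors(G, seed) - core
    out_set = nx.descendants(G, seed) - core
    print("core:", core, "IN:", in_set, "OUT:", out_set)

    # Node-level metrics of the kind reported in the study.
    print("degree centrality:   ", nx.degree_centrality(G))
    print("closeness centrality:", nx.closeness_centrality(G))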

Author(s):  
Franck Cotton ◽  
Daniel Gillman

Linked Open Statistical Metadata (LOSM) is Linked Open Data (LOD) applied to statistical metadata. LOD is a model for identifying, structuring, interlinking, and querying data published directly on the web. It builds on the standards of the semantic web defined by the W3C. LOD uses the Resource Description Framework (RDF), a simple data model expressing content as predicates that link resources to each other or to literal properties. The simplicity of the model makes it able to represent any data, including metadata. We define statistical data as data produced through some statistical process or intended for statistical analyses, and statistical metadata as metadata describing statistical data. LOSM promotes the automated discovery of statistical data and of its meaning and structure. Consequently, it helps with understanding and interpreting data and with preventing inadequate or flawed visualizations of statistical data. This enhances statistical literacy and efforts at visualizing statistics.
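
As an illustration, here is a minimal sketch with Python's rdflib of statistical metadata expressed as RDF; the dataset, publisher and figure are hypothetical, and the W3C RDF Data Cube vocabulary (qb:) is assumed as the modelling vocabulary.

    # Minimal sketch of statistical metadata as RDF with rdflib.
    # The dataset URI, publisher and value are hypothetical.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS, RDF, XSD

    QB = Namespace("http://purl.org/linked-data/cube#")
    EX = Namespace("http://example.org/stats/")  # hypothetical publisher

    g = Graph()
    g.bind("qb", QB)
    g.bind("dct", DCTERMS)

    dataset = EX["unemployment-2023"]
    g.add((dataset, RDF.type, QB.DataSet))
    g.add((dataset, DCTERMS.title, Literal("Unemployment rate 2023", lang="en")))
    g.add((dataset, DCTERMS.publisher, EX["statistics-office"]))

    obs = EX["unemployment-2023/obs1"]
    g.add((obs, RDF.type, QB.Observation))
    g.add((obs, QB.dataSet, dataset))
    g.add((obs, EX.rate, Literal("5.2", datatype=XSD.decimal)))

    print(g.serialize(format="turtle"))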


2018 ◽  
Vol 2 ◽  
pp. e26658 ◽  
Author(s):  
Anton Güntsch ◽  
Quentin Groom ◽  
Roger Hyam ◽  
Simon Chagnoux ◽  
Dominik Röpert ◽  
...  

A simple, permanent and reliable specimen identifier system is needed to take the informatics of collections into a new era of interoperability. A system of identifiers based on HTTP URIs (Uniform Resource Identifiers), endorsed by the Consortium of European Taxonomic Facilities (CETAF), has now been rolled out to 14 member organisations (Güntsch et al. 2017). CETAF Identifiers have a Linked Open Data redirection mechanism for both human- and machine-readable access and, if fully implemented, provide Resource Description Framework (RDF)-encoded specimen data following best practices continuously improved by members of the initiative. To date, more than 20 million physical collection objects have been equipped with CETAF Identifiers (Groom et al. 2017). To facilitate the implementation of stable identifiers, simple redirection scripts and guidelines for deciding on the local identifier syntax have been compiled (http://cetafidentifiers.biowikifarm.net/wiki/Main_Page). Furthermore, a capable "CETAF Specimen URI Tester" (http://herbal.rbge.info/) provides an easy-to-use service for testing whether existing identifiers are operational. For the usability and potential of any identifier system associated with evolving data objects, active links to the source information are critically important. This is particularly true for natural history collections facing the next wave of industrialised mass digitisation, where specimens come online with only basic, but rapidly evolving, label data. Specimen identifier systems must therefore have components for monitoring the availability and correct implementation of individual data objects. Our next implementation steps will involve the development of a "Semantic Specimen Catalogue", which will hold a list of all existing specimen identifiers together with the latest RDF metadata snapshot for each. The catalogue will be used for semantic inference across collections as well as the basis for periodic testing of identifiers.
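
The following is a sketch of the kind of check such a URI tester can perform, using Python's requests: resolve an identifier with content negotiation and verify that it redirects to an RDF serialisation. The identifier below is hypothetical; real CETAF identifiers follow the same resolve-and-redirect pattern.

    # Sketch of a specimen-URI check: does the identifier redirect
    # and serve RDF under content negotiation? URI is hypothetical.
    import requests

    uri = "https://example.org/specimen/B100012345"  # hypothetical

    resp = requests.get(uri, headers={"Accept": "text/turtle"},
                        allow_redirects=True, timeout=30)

    print("status:      ", resp.status_code)
    print("redirects:   ", [r.url for r in resp.history])
    print("content type:", resp.headers.get("Content-Type"))

    # A machine-readable identifier should end in an RDF serialisation.
    ok = resp.ok and "turtle" in resp.headers.get("Content-Type", "")
    print("looks operational:", ok)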


Author(s):  
Leila Zemmouchi-Ghomari

The data on the web is heterogeneous and distributed, which makes its integration a sine qua non condition for its effective exploitation within the context of the semantic web, or the so-called web of data. A promising solution for web data integration is the linked data initiative, which is based on four principles that aim to standardize the publication of structured data on the web. The objective of this chapter is to provide an overview of the essential aspects of this fairly recent and exciting field, including the linked data model, the Resource Description Framework (RDF); its query language, SPARQL (the SPARQL Protocol and RDF Query Language); the available means of publication and consumption of linked data; and the existing applications and the issues not yet addressed in research.
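
As an illustration of consuming linked data with SPARQL, here is a minimal Python sketch using SPARQLWrapper against the public DBpedia endpoint; any SPARQL endpoint would do.

    # Minimal sketch: query a public SPARQL endpoint for the English
    # label of a DBpedia resource.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?label WHERE {
            <http://dbpedia.org/resource/Berlin> rdfs:label ?label .
            FILTER (lang(?label) = "en")
        }
    """)
    sparql.setReturnFormat(JSON)

    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["label"]["value"])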


2018 ◽  
Vol 7 (3.33) ◽  
pp. 187
Author(s):  
Heekyung Moon ◽  
Zhanfang Zhao ◽  
Jintak Choi ◽  
Sungkook Han

Graphs provide an effective way to represent information and knowledge of real-world domains. The Resource Description Framework (RDF) model and the Labeled Property Graph (LPG) model are the dominant graph data models, widely used in Linked Open Data (LOD) and NoSQL databases. Although these graph models have plentiful data modeling capabilities, they reveal some drawbacks when modeling complicated structures. This paper proposes a new property graph model, called the universal property graph (UPG), that can embrace the capabilities of both RDF and LPG. The paper explores the core features of UPG and their functions.
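
To make the contrast concrete, here is a small illustrative sketch in Python: an LPG edge carries properties directly, whereas plain RDF triples cannot, so the relationship is typically promoted to an intermediate resource. All names are hypothetical.

    # Illustrative sketch: an LPG edge can carry properties directly,
    # while in plain RDF the relationship must be promoted to an
    # intermediate node. All names are hypothetical.

    # LPG view: the edge itself holds a property.
    lpg_edge = {
        "from": "alice", "to": "acme",
        "label": "WORKS_FOR",
        "properties": {"since": 2019},  # property on the edge
    }

    # RDF view: the same information needs an extra resource.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/")
    g = Graph()
    employment = EX["employment1"]  # the promoted edge
    g.add((employment, RDF.type, EX.Employment))
    g.add((employment, EX.employee, EX.alice))
    g.add((employment, EX.employer, EX.acme))
    g.add((employment, EX.since, Literal("2019", datatype=XSD.gYear)))

    print(g.serialize(format="turtle"))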


2017 ◽  
Vol 41 (2) ◽  
pp. 252-271 ◽  
Author(s):  
Alberto Nogales ◽  
Miguel Angel Sicilia-Urban ◽  
Elena García-Barriocanal

Purpose: This paper reports on a quantitative study of data gathered from the Linked Open Vocabularies (LOV) catalogue, including the use of network analysis and metrics. The purpose of this paper is to gain insights into the structure of LOV and the use of vocabularies in the Web of Data, noting that not all vocabularies used in the Web of Data are registered in LOV. Given the de-centralised and collaborative nature of the use and adoption of these vocabularies, the results of the study can be used to identify emergent important vocabularies that are shaping the Web of Data.

Design/methodology/approach: The methodology is based on an analytical approach to a data set that captures a complete snapshot of the LOV catalogue dated April 2014. An initial analysis of the data is presented in order to obtain insights into the characteristics of the vocabularies found in LOV. This is followed by an analysis of the use of Vocabulary of a Friend (VOAF) properties that describe relations among vocabularies. Finally, the study is complemented with an analysis of the usage of the different vocabularies, and concludes by proposing a number of metrics.

Findings: The most relevant insight is that, unsurprisingly, the vocabularies with the most presence are those used to model Semantic Web data, such as the Resource Description Framework, RDF Schema and OWL, as well as broadly used standards such as the Simple Knowledge Organization System, DCTERMS and DCE. It was also found that the most used language is English, that the vocabularies are not highly specialised in a field, and that there is no dominant scope among them. Regarding the structural analysis, it is concluded that LOV is a heterogeneous network.

Originality/value: The paper provides an empirical analysis of the structure of LOV and the relations between its vocabularies, together with some metrics that may help determine the important vocabularies from a practical perspective. The results are of interest for a better understanding of the evolution and dynamics of the Web of Data, and for applications that attempt to retrieve data in the Linked Data Cloud. These applications can benefit from insights into which important vocabularies to support, and from the value added when mapping between and using the vocabularies.
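
The following is a sketch of the kind of network analysis applied here: build a directed graph from VOAF relations between vocabularies and rank vocabularies by in-degree as a proxy for reuse. The input file and the choice of voaf:reliesOn as the linking property are assumptions, not the paper's exact procedure.

    # Sketch: rank vocabularies by how often others rely on them,
    # using VOAF relations from a (hypothetical) LOV snapshot dump.
    import networkx as nx
    from rdflib import Graph, Namespace

    VOAF = Namespace("http://purl.org/vocommons/voaf#")

    g = Graph()
    g.parse("lov_snapshot.ttl", format="turtle")  # hypothetical dump

    dep = nx.DiGraph()
    for vocab, _, reused in g.triples((None, VOAF.reliesOn, None)):
        dep.add_edge(str(vocab), str(reused))

    # Most reused vocabularies first (highest in-degree).
    for vocab, indeg in sorted(dep.in_degree, key=lambda t: -t[1])[:10]:
        print(indeg, vocab)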


Author(s):  
Amina Meherehera ◽  
Imane Mekideche ◽  
Leila Zemmouchi-Ghomari ◽  
Abdessamed Réda Ghomari

A large amount of data is available over the Web and, in particular, open data generally have heterogeneous formats and are not machine-readable. One promising solution to overcome the problems of heterogeneity and automatic interpretation is the Linked Data initiative, which aims to provide unified practices for publishing and contextually linking data on the Web, using World Wide Web Consortium standards and Semantic Web technologies. Linked Data promotes the Web's transformation from a web of documents to a web of data, ensuring that machines and software agents can interpret the semantics of data correctly and therefore infer new facts and return relevant web data search results. This paper presents an automatic, generic transformation approach that converts several input formats of open web data into linked open data. This work aims to participate actively in the movement towards publishing data compliant with linked data principles.
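
Here is a minimal sketch of one such transformation, from tabular open data (CSV) to RDF with Python's rdflib; the column names, URI scheme and vocabulary are hypothetical, where a generic tool would derive them from the input.

    # Minimal sketch: tabular open data (CSV) to RDF. Column names,
    # URI scheme and vocabulary are hypothetical.
    import csv
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/city/")
    g = Graph()
    g.bind("ex", EX)

    with open("cities.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # e.g. columns: name,population
            city = EX[row["name"].replace(" ", "_")]
            g.add((city, RDF.type, EX.City))
            g.add((city, EX.name, Literal(row["name"])))
            g.add((city, EX.population,
                   Literal(row["population"], datatype=XSD.integer)))

    g.serialize("cities.ttl", format="turtle")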


2017 ◽  
Vol 44 (2) ◽  
pp. 203-229 ◽  
Author(s):  
Javier D Fernández ◽  
Miguel A Martínez-Prieto ◽  
Pablo de la Fuente Redondo ◽  
Claudio Gutiérrez

The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
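
One metric in this spirit, sketched below with rdflib, counts the distinct "characteristic sets", i.e. the combinations of predicates that subjects use: few distinct sets relative to the number of subjects indicates a repetitive, compressible structure. The input file is an assumption, and this is not the paper's exact metric suite.

    # Sketch of a structural redundancy metric: how many distinct
    # predicate combinations ("characteristic sets") do subjects use?
    from collections import Counter
    from rdflib import Graph

    g = Graph()
    g.parse("dataset.ttl", format="turtle")  # hypothetical data set

    char_sets = Counter(
        frozenset(g.predicates(subject=s)) for s in set(g.subjects())
    )

    subjects = sum(char_sets.values())
    print("subjects:               ", subjects)
    print("distinct predicate sets:", len(char_sets))
    print("structural redundancy:  ", 1 - len(char_sets) / subjects)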


Author(s):  
Axel Polleres ◽  
Simon Steyskal

The World Wide Web Consortium (W3C), the main standardization body for Web standards, has set a particular focus on publishing and integrating Open Data. In this chapter, the authors explain various standards from the W3C's Semantic Web activity and the potential role they play in the context of Open Data: RDF, as a standard data format for publishing and consuming structured information on the Web; the Linked Data principles for interlinking RDF data published across the Web and leveraging a Web of Data; and RDFS and OWL for describing vocabularies used in RDF data and mappings between such vocabularies. The authors conclude with a review of current deployments of these standards on the Web, particularly within public Open Data initiatives, and discuss potential risks and challenges.
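
Below is a minimal sketch of such vocabulary mappings with rdflib, relating a hypothetical local vocabulary to FOAF via owl:equivalentClass and rdfs:subPropertyOf.

    # Minimal sketch of vocabulary mappings with RDFS and OWL,
    # relating a hypothetical local vocabulary to FOAF.
    from rdflib import Graph, Namespace
    from rdflib.namespace import FOAF, OWL, RDFS

    EX = Namespace("http://example.org/vocab#")  # hypothetical vocabulary

    g = Graph()
    g.bind("foaf", FOAF)
    g.bind("owl", OWL)

    # Our class means the same as foaf:Person ...
    g.add((EX.Citizen, OWL.equivalentClass, FOAF.Person))
    # ... and our property is a narrower form of foaf:name.
    g.add((EX.fullName, RDFS.subPropertyOf, FOAF.name))

    print(g.serialize(format="turtle"))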


Author(s):  
Zongmin Ma ◽  
Li Yan

The Resource Description Framework (RDF) is a model for representing information resources on the web. With the widespread acceptance of RDF as the de facto standard recommended by the W3C (World Wide Web Consortium) for the representation and exchange of information on the web, a huge amount of RDF data is proliferating and becoming available. RDF data management is therefore of increasing importance and has attracted attention in the database community as well as the Semantic Web community. Currently, much work has been devoted to proposing different solutions to store large-scale RDF data efficiently. In order to manage massive RDF data, NoSQL ("not only SQL") databases have been used for scalable RDF data storage. This chapter focuses on using various NoSQL databases to store massive RDF data. An up-to-date overview of the current state of the art in RDF data storage in NoSQL databases is provided, and the chapter closes with suggestions for future research.
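
The following is a sketch of the index layout that many scalable triple stores build on key-value style NoSQL storage: the same triple is kept under several orderings so that any lookup pattern has a matching index. Plain Python dicts stand in for the NoSQL backend.

    # Sketch: a triple stored under multiple orderings (SPO, POS, OSP)
    # so that common lookup patterns each hit an index. Plain dicts
    # stand in for the NoSQL backend.
    from collections import defaultdict

    class TripleIndex:
        def __init__(self):
            self.spo = defaultdict(lambda: defaultdict(set))
            self.pos = defaultdict(lambda: defaultdict(set))
            self.osp = defaultdict(lambda: defaultdict(set))

        def add(self, s, p, o):
            self.spo[s][p].add(o)
            self.pos[p][o].add(s)
            self.osp[o][s].add(p)

        def objects(self, s, p):
            return self.spo[s][p]   # pattern (s, p, ?)

        def subjects(self, p, o):
            return self.pos[p][o]   # pattern (?, p, o)

    idx = TripleIndex()
    idx.add("dbr:Berlin", "rdf:type", "dbo:City")
    idx.add("dbr:Paris", "rdf:type", "dbo:City")
    print(idx.subjects("rdf:type", "dbo:City"))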



