Linked Data Management

2017 ◽  
pp. 307-338
Author(s):  
Manfred Hauswirth ◽  
Marcin Wylot ◽  
Martin Grund ◽  
Paul Groth ◽  
Philippe Cudré-Mauroux
Keyword(s):  
Computers ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 49 ◽  
Author(s):  
Angela Di Iorio ◽  
Marco Schaerf

Library organizations have enthusiastically undertaken semantic web initiatives, in particular publishing data as linked data. Nevertheless, different surveys report the experimental nature of these initiatives and the difficulty consumers face in re-using the data. These barriers hinder the use of linked datasets as an infrastructure that enhances the library and related information services. This paper presents an approach for encoding, as a Linked Vocabulary, the “tacit” knowledge of the information system that manages the data source. The objective is to improve the process of interpreting the meaning of published linked datasets. We analyzed a digital library system as a case study for prototyping the “semantic data management” method, where data and its knowledge are natively managed, taking into account the linked data pillars. The ultimate objective of semantic data management is to curate consumers’ correct interpretation of data and to facilitate its proper re-use. The prototype defines the ontological entities representing the knowledge of the digital library system that is stored neither in the data source nor in the existing ontologies related to the system’s semantics. We therefore present the local ontology and its matching with existing ontologies, Preservation Metadata Implementation Strategies (PREMIS) and Metadata Object Description Schema (MODS), and we discuss linked data triples prototyped from the legacy relational database by using the local ontology. We show how semantic data management can deal with the inconsistency of system data, and we conclude that a specific change in the system developer mindset is necessary for extracting and “codifying” the tacit knowledge needed to improve the data interpretation process.
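The core technical move in the abstract above, prototyping linked data triples from a legacy relational database via a local ontology, can be sketched as follows. This is a minimal illustration under assumed names: the `dlib` local ontology URI, the column-to-predicate map, and the `titlePrincipal` property choice are hypothetical stand-ins, not the paper's actual ontology.

```python
# Sketch: map one legacy relational row to RDF triples (N-Triples),
# using a hypothetical local ontology plus a MODS-style property.
# The column-to-predicate map is where the "tacit" schema knowledge
# of the source system gets made explicit.

LOCAL = "http://example.org/dlib/ontology#"   # hypothetical local ontology
MODS = "http://www.loc.gov/mods/rdf/v1#"      # MODS RDF namespace

COLUMN_MAP = {
    "title":   MODS + "titlePrincipal",
    "curator": LOCAL + "curatedBy",
}

def row_to_ntriples(row: dict, base: str = "http://example.org/dlib/item/") -> list[str]:
    """Turn one relational row into N-Triples lines."""
    subject = f"<{base}{row['id']}>"
    triples = []
    for column, predicate in COLUMN_MAP.items():
        value = row.get(column)
        if value is None:          # tolerate inconsistent legacy data
            continue
        triples.append(f'{subject} <{predicate}> "{value}" .')
    return triples

row = {"id": 42, "title": "Nachlass scan", "curator": "A. Archivist"}
print(row_to_ntriples(row))
```

Skipping `None` columns rather than failing mirrors the abstract's point that semantic data management has to cope with inconsistent system data.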


2014 ◽  
Vol 08 (04) ◽  
pp. 415-439 ◽  
Author(s):  
Amna Basharat ◽  
I. Budak Arpinar ◽  
Shima Dastgheib ◽  
Ugur Kursuncu ◽  
Krys Kochut ◽  
...  

Crowdsourcing is one of the emerging paradigms that exploit the notion of human computation for harvesting and processing complex heterogeneous data to produce insight and actionable knowledge. Crowdsourcing is task-oriented, and hence the specification and management not only of tasks but also of workflows should play a critical role. Crowdsourcing research can still be considered in its infancy. There is a significant need for crowdsourcing applications to be equipped with well-defined task and workflow specifications, ranging from simple human intelligence tasks to more sophisticated and cooperative tasks, to handle data and control flow among these tasks. Addressing this need, we have devised a generic, flexible and extensible task specification and workflow management mechanism for crowdsourcing. We have contextualized this problem to linked data management as our domain of interest. More specifically, we develop CrowdLink, which utilizes an architecture for automated task specification, generation, publishing and reviewing to engage crowdworkers in the verification and creation of triples in the Linked Open Data (LOD) cloud. The LOD cloud incorporates various core data sets in the semantic web, yet is not in full conformance with the guidelines for publishing high-quality linked data on the web. Our approach is not only useful for efficiently processing LOD management tasks; it can also help in enriching and improving the quality of mission-critical links in the LOD. We demonstrate the usefulness of our approach through various link creation and verification tasks and workflows using Amazon Mechanical Turk. Experimental evaluation demonstrates promising results, not only in terms of ease of task generation, publishing and reviewing, but also in terms of the accuracy of the links created and verified by the crowdworkers.
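The central step described above, turning an LOD triple into a micro-task a crowdworker can answer, can be sketched as below. The question template, answer options, and task-dict layout are assumptions for illustration, not CrowdLink's actual task format or the Mechanical Turk API.

```python
# Sketch: render a Linked Open Data triple as a yes/no crowd
# verification task. The template and reward value are illustrative.

def make_verification_task(subject: str, predicate: str, obj: str,
                           reward_usd: float = 0.05) -> dict:
    """Render a triple as a verification micro-task for a crowdworker."""
    def label(uri: str) -> str:
        # Use the last URI segment as a human-readable label
        return uri.rstrip("/").rsplit("/", 1)[-1].replace("_", " ")
    question = (f'Is it true that "{label(subject)}" '
                f'{label(predicate)} "{label(obj)}"?')
    return {
        "question": question,
        "answers": ["yes", "no", "cannot tell"],
        "reward_usd": reward_usd,
    }

task = make_verification_task(
    "http://dbpedia.org/resource/Berlin",
    "http://dbpedia.org/ontology/country",
    "http://dbpedia.org/resource/Germany",
)
print(task["question"])
```

In a full workflow of the kind the abstract describes, such task dicts would be batched, published to the crowdsourcing platform, and the collected answers aggregated before a link is accepted into the dataset.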


Author(s):  
Olaf Hartig ◽  
Katja Hose ◽  
Juan Sequeda

2019 ◽  
Vol 94 ◽  
pp. 103179 ◽  
Author(s):  
Vassilis Kilintzis ◽  
Ioanna Chouvarda ◽  
Nikolaos Beredimas ◽  
Pantelis Natsiavas ◽  
Nicos Maglaveras


2021 ◽  
pp. 91-111
Author(s):  
Raul Palma ◽  
Soumya Brahma ◽  
Christian Zinke-Wehlmann ◽  
Amit Kirschenbaum ◽  
Karel Charvát ◽  
...  

One of the main goals of DataBio was the provision of solutions for big data management enabling, among others, the harmonisation and integration of a large variety of data generated and collected through various applications, services and devices. The DataBio approach to deliver such capabilities was based on the use of Linked Data as a federated layer providing an integrated view over (initially) disconnected and heterogeneous datasets. The large number of data sources, ranging from mostly static to highly dynamic, led to the design and implementation of Linked Data Pipelines. The goal of these pipelines is to automate, as much as possible, the process of transforming and publishing different input datasets as Linked Data. In this chapter, we describe these pipelines and how they were applied to support different use cases in the project, including the tools and methods used to implement them.
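The pipeline pattern the chapter describes, harmonise a raw record, then publish it as Linked Data, can be sketched as a chain of plain functions. The sensor record shape, the example URIs, and the use of a SOSA-style predicate are assumptions for illustration, not DataBio's actual pipeline implementation.

```python
# Sketch: a two-stage Linked Data pipeline. Stage 1 harmonises a
# raw record (field names, units); stage 2 publishes it as
# N-Triples. Real pipelines would add validation and persistence.

SOSA = "http://www.w3.org/ns/sosa/"

def harmonise(record: dict) -> dict:
    """Normalise a raw sensor reading into a common schema."""
    return {"sensor": record["sensor_id"],
            "value": float(record["val"])}

def to_triples(record: dict) -> list[str]:
    """Publish a harmonised record as N-Triples lines."""
    s = f"<http://example.org/sensor/{record['sensor']}>"
    return [f'{s} <{SOSA}hasSimpleResult> "{record["value"]}" .']

def run_pipeline(raw_records: list[dict]) -> list[str]:
    """Run every raw record through both stages."""
    triples = []
    for raw in raw_records:
        triples.extend(to_triples(harmonise(raw)))
    return triples

print(run_pipeline([{"sensor_id": "field7", "val": "21.5"}]))
```

Keeping each stage as an independent function is what makes the automation the abstract mentions possible: static and highly dynamic sources can reuse the same publishing stage behind different harmonisation stages.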


2015 ◽  
Author(s):  
Karin Rydving ◽  
Rune Kyrkjebø

Our University Library (UBL) has seen the need and potential for strengthening the infrastructure for digital full-text resources at the University of Bergen, and we wanted to better establish the library’s role in this area. Five years ago, several research communities expressed concerns that it was becoming increasingly difficult to sustain the competence necessary to run and maintain both physical and digital research archives. More specifically, a concrete need for supporting XML-based digital humanities text resources was voiced. We felt the UBL could meet this need by providing a new service. A combination of data modeling, data conversion and active use of open data solutions has, in our view, shown itself to be an effective approach. We find that in-house data modeling and processing competence is essential in order to cope with tasks connected to digital text and image resources. Our poster will outline our digital service provision by giving selected references and examples. The Wittgenstein Archives at the University of Bergen (WAB) is one example of a recipient of UBL data services. WAB maintains a richly encoded XML version of the complete Nachlass of the philosopher Ludwig Wittgenstein. A library web resource built upon library data modeling and conversion is MARCUS, which shows how catalogue data and image data for the University Library’s own manuscript and photographic collections are currently digitized and interconnected using electronic representations of documents and Linked Data/RDF (Resource Description Framework) metadata. MARCUS meets UBL’s long-felt need for a unified digital system for the special collections, covering not only document storage, display and dissemination, but also the library workflow for the special collections.
Both WAB and MARCUS benefit, strategically and day-to-day, from the same competencies within the library. We think that a sensible, future-oriented solution entails that each institution, to a greater degree than before, works with modeling and conversion of its own data. Our view is that using Linked Data/RDF encoding will pave the way to connecting data sets in such a way that they enrich one another. Rather than functioning as system providers, we envision large institutions processing and sharing open datasets, as well as encouraging and enabling others to do the same. In line with LIBER’s Ten Recommendations for Libraries to Get Started with Research Data Management, our view is that data modeling and data conversion, within the frame of an active use of open data solutions, are services that belong within the portfolio of the research library. Presented by Irene Eikefjord, Senior Librarian, University of Bergen Library
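The claim that Linked Data/RDF encoding lets data sets "enrich one another" can be illustrated with a tiny sketch: an `owl:sameAs` link between two independently published records lets a consumer merge their descriptions. All URIs and records below are illustrative stand-ins, not actual UBL or MARCUS data.

```python
# Sketch: two open datasets describe the same manuscript; an
# owl:sameAs link lets a consumer combine them into one view.

OWL_SAMEAS = "http://www.w3.org/2002/07/owl#sameAs"

# Independently published datasets (illustrative)
library_catalogue = {
    "http://example.org/marcus/ms-0042": {"title": "Letter, 1898"},
}
photo_archive = {
    "http://example.org/photos/item-17": {"image": "ms-0042.tiff"},
}
# The link that connects the two datasets
links = [("http://example.org/marcus/ms-0042",
          OWL_SAMEAS,
          "http://example.org/photos/item-17")]

def merged_view(uri: str) -> dict:
    """Combine descriptions of one resource across linked datasets."""
    view = dict(library_catalogue.get(uri, {}))
    for s, p, o in links:
        if s == uri and p == OWL_SAMEAS:
            view.update(photo_archive.get(o, {}))
    return view

print(merged_view("http://example.org/marcus/ms-0042"))
```

Neither dataset had to be merged at publication time; the enrichment happens on the consumer side, which is the point of sharing open datasets rather than acting as a single system provider.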

