Dynamic District Information Server: On the Use of W3C Linked Data Standards to Unify Construction Data

Author(s):  
Cathal Hoare ◽  
Usman Ali ◽  
James O'Donnell

Author(s):  
Javier D. Fernández ◽  
Nelia Lasierra ◽  
Didier Clement ◽  
Huw Mason ◽  
Ivan Robinson


2018 ◽  
Vol 62 (1) ◽  
pp. 4 ◽  
Author(s):  
Yuji Tosaka ◽  
Jung-ran Park

This study uses data from a large original survey (nearly one thousand initial respondents) to examine how the cataloging and metadata community is approaching new and emerging data standards and technologies. The analysis demonstrates strong professional-development interest in Semantic Web and Linked Data applications. Among continuing-education topics, Linked Data technology, BIBFRAME, and an overview of current and emerging data standards and technologies ranked highest. The survey data also show that personal continuing-education interests often diverged from reported institutional needs, reflecting the fact that library services and projects in these emerging areas have not yet progressed beyond the exploratory stage. The results further suggest that cataloging and metadata professionals expect to combine core professional skills, including teamwork, communication, and subject analysis, with the ability to adapt to and accommodate Semantic Web standards and technologies, digital libraries, and other innovations in cataloging and metadata services.



Author(s):  
Franck Michel ◽  
Catherine Faron-Zucker ◽  
Sandrine Tercerie ◽  
Antonia Ettorre ◽  
Olivier Gargominy

During the last decade, Web APIs (Application Programming Interfaces) have gained so much traction that they have become a de facto standard for HTTP-based, machine-processable data access. Despite this success, however, they still often fail to make data interoperable, insofar as they commonly rely on proprietary data models and vocabularies that lack the formal semantic descriptions essential for reliable data integration. In the biodiversity domain, multiple data aggregators, such as the Global Biodiversity Information Facility (GBIF) and the Encyclopedia of Life (EoL), maintain specialized Web APIs giving access to billions of records about taxonomies, occurrences, or life traits (Triebel et al. 2012). They publish data sets spanning complementary and often overlapping regions, epochs or domains, but may also report or rely on potentially conflicting perspectives, e.g. with respect to the circumscription of taxonomic concepts. It is therefore of utmost importance for biologists and collection curators to be able to compare the knowledge they have about taxa with related data from third-party sources.

To tackle this issue, the French National Museum of Natural History (MNHN) has developed an application to edit TAXREF, the French taxonomic register for fauna, flora and fungi (Gargominy et al. 2018). TAXREF registers all species recorded in metropolitan France and its overseas territories, accounting for 260,000+ biological taxa (200,000+ species) along with 570,000+ scientific names. The TAXREF-Web application compares data available in TAXREF with corresponding data from third-party sources, points out disagreements, and allows biologists to add, remove or amend TAXREF entries accordingly. This requires TAXREF-Web developers to write a specific piece of code for each Web API considered, in order to align the TAXREF representation with its counterpart in that API. This task is time-consuming and makes maintenance of the web application cumbersome.

In this presentation, we report on a new implementation of TAXREF-Web that harnesses Linked Data standards: the Resource Description Framework (RDF), the Semantic Web format for representing knowledge graphs, and SPARQL, the W3C standard for querying RDF graphs. In addition, we leverage the SPARQL Micro-Service architecture (Michel et al. 2018), a lightweight approach to querying Web APIs with SPARQL. A SPARQL micro-service is a SPARQL endpoint that wraps a Web API service; it typically produces a small, resource-centric RDF graph by invoking the Web API and transforming the response into RDF triples. We developed SPARQL micro-services to wrap the Web APIs of GBIF, the World Register of Marine Species (WoRMS), FishBase, Index Fungorum, the Pan-European Species directories Infrastructure (PESI), ZooBank, the International Plant Names Index (IPNI), EoL, Tropicos and Sandre. These micro-services consistently translate Web API responses into RDF graphs using mainly two well-adopted vocabularies: Schema.org (Guha et al. 2015) and Darwin Core (Baskauf et al. 2015). This approach brings two major advantages. First, the wide adoption of Schema.org and Darwin Core ensures that the services can be immediately understood and reused by a large audience within the biodiversity community. Second, wrapping all these Web APIs in SPARQL micro-services “suddenly” makes them technically and semantically interoperable, since they all represent resources (taxa, habitats, traits, etc.) in a common manner.
Consequently, the integration task is simplified: comparing data from multiple sources essentially amounts to writing the appropriate SPARQL queries, which makes web application development and maintenance easier. We present several concrete cases in which we use this approach to detect disagreements between TAXREF and the aforementioned data sources with respect to taxonomic information (author, synonymy, vernacular names, classification, taxonomic rank), habitats, bibliographic references, species interactions and life traits.
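
To make the approach concrete, the following Python sketch (using the SPARQLWrapper library) shows the kind of client-side code that querying such a micro-service reduces to. The endpoint URL, the way the taxon name is passed as an HTTP parameter, and the exact graph shape are illustrative assumptions for this sketch, not the actual TAXREF-Web implementation.

# Minimal sketch: querying a hypothetical SPARQL micro-service that wraps a
# biodiversity Web API and returns Darwin Core RDF. The endpoint URL and the
# parameter-passing convention below are assumptions for illustration only.
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical micro-service endpoint wrapping a taxon-lookup Web API.
ENDPOINT = "https://example.org/sparql-ms/gbif/getTaxonByName"

QUERY = """
PREFIX dwc: <http://rs.tdwg.org/dwc/terms/>

SELECT ?taxon ?name ?rank WHERE {
  ?taxon a dwc:Taxon ;
         dwc:scientificName ?name ;
         dwc:taxonRank ?rank .
}
"""

def fetch_taxon(name: str):
    """Invoke the micro-service (which calls the Web API behind the scenes)
    and return the query solutions as plain Python dictionaries."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    # Assumption: the micro-service expects the Web API argument as an
    # extra HTTP query-string parameter.
    sparql.addParameter("name", name)
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["results"]["bindings"]

if __name__ == "__main__":
    # Compare the name and rank reported by the remote source with TAXREF's view.
    for binding in fetch_taxon("Delphinus delphis"):
        print(binding["name"]["value"], binding["rank"]["value"])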





Data Science ◽  
2022 ◽  
pp. 1-42
Author(s):  
Stian Soiland-Reyes ◽  
Peter Sefton ◽  
Mercè Crosas ◽  
Leyla Jael Castro ◽  
Frederik Coppens ◽  
...  

An increasing number of researchers support reproducibility by including pointers to and descriptions of datasets, software and methods in their publications. However, scientific articles may be ambiguous, incomplete and difficult to process by automated systems. In this paper we introduce RO-Crate, an open, community-driven, and lightweight approach to packaging research artefacts along with their metadata in a machine-readable manner. RO-Crate is based on Schema.org annotations in JSON-LD and aims to establish best practices for formally describing metadata in a way that is accessible and practical for use in a wide variety of situations. An RO-Crate is a structured archive of all the items that contributed to a research outcome, including their identifiers, provenance, relations and annotations. As a general-purpose packaging approach for data and their metadata, RO-Crate is used across multiple areas, including bioinformatics, digital humanities and regulatory sciences. By applying “just enough” Linked Data standards, RO-Crate simplifies the process of making research outputs FAIR while also enhancing research reproducibility. An RO-Crate for this article (https://w3id.org/ro/doi/10.5281/zenodo.5146227) is archived at https://doi.org/10.5281/zenodo.5146227.
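
As an illustration of the “just enough” Linked Data idea, the Python sketch below writes a minimal ro-crate-metadata.json, the JSON-LD document at the root of every RO-Crate, describing the crate and one data file with Schema.org terms. The file names and descriptive values are invented for the example; the overall layout follows the RO-Crate 1.1 specification.

# Minimal sketch of an RO-Crate's root metadata file (ro-crate-metadata.json):
# a JSON-LD document using Schema.org terms. File names and descriptions here
# are hypothetical examples, not taken from the article's crate.
import json

crate_metadata = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {   # Metadata file descriptor pointing at the root dataset.
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {   # Root dataset: the research outcome being packaged.
            "@id": "./",
            "@type": "Dataset",
            "name": "Example analysis results",
            "description": "Data and metadata packaged for reuse.",
            "datePublished": "2022-01-01",
            "hasPart": [{"@id": "results.csv"}],
        },
        {   # One packaged artefact, described with Schema.org properties.
            "@id": "results.csv",
            "@type": "File",
            "name": "Tabular results",
            "encodingFormat": "text/csv",
        },
    ],
}

with open("ro-crate-metadata.json", "w") as f:
    json.dump(crate_metadata, f, indent=2)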



Diabetes ◽  
2019 ◽  
Vol 68 (Supplement 1) ◽  
pp. 692-P
Author(s):  
JOHN M. OWEN


Author(s):  
Dimitris Kontokostas ◽  
Charalampos Bratsas ◽  
Sören Auer ◽  
Sebastian Hellmann ◽  
Ioannis Antoniou

