Using digital humanity approaches to visualize and evaluate the cultural heritage ontology

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yu-Jung Cheng ◽  
Shu-Lai Chou

Purpose: This study applies digital humanities tools (Gephi and Protégé) to establish and visualize ontologies in the cultural heritage domain. On that basis, it develops a novel evaluation approach that uses five ontology indicators (data overview, visual presentation, highlight links, scalability and querying) to evaluate how a cultural heritage ontology presents its knowledge structure.

Design/methodology/approach: The researchers collected and organized 824 records of government open data (GOD), converted the GOD into Resource Description Framework (RDF) format, and applied Protégé and Gephi to establish and visualize a cultural heritage ontology. Once the ontology was built, the study recruited 60 participants (30 from an information and communications technology background; 30 from a cultural heritage background) to operate the ontology and gathered their perspectives on the visualized ontology.

Findings: Based on the participants' feedback, the study found that Gephi supports ontology visualization better than Protégé, especially on the data overview, visual presentation and highlight links dimensions: its visualization of the ontology's class hierarchy and property relations facilitates the wider application of the ontology.

Originality/value: This study offers two contributions. First, the researchers analyzed data on East Asian architecture with digital humanities tools to visualize an ontology for cultural heritage. Second, the study collected participants' feedback on the visualized ontology to enhance its design, which can serve as a reference for future ontology development.
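
The pipeline the abstract describes (open data records converted to RDF, then laid out as a network in Gephi) can be illustrated with a short sketch. The following Python fragment, assuming rdflib and networkx are available, converts a toy record to triples and exports a GEXF file that Gephi can open; the namespace, field names and predicates are invented for illustration, not the authors' actual schema.

```python
# Minimal sketch: turn tabular open-data records into RDF triples, then
# flatten the triples into a directed graph for Gephi (GEXF import).
from rdflib import Graph, Namespace, Literal
import networkx as nx

CH = Namespace("http://example.org/heritage/")  # hypothetical namespace

records = [{"id": "site001", "name": "Longshan Temple", "style": "East Asian"}]

g = Graph()
for r in records:
    site = CH[r["id"]]
    g.add((site, CH.name, Literal(r["name"])))
    g.add((site, CH.architecturalStyle, CH[r["style"].replace(" ", "_")]))

nxg = nx.DiGraph()
for s, p, o in g:
    nxg.add_edge(str(s), str(o), label=str(p))  # predicate as edge label
nx.write_gexf(nxg, "heritage.gexf")             # load this file in Gephi
```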

2017 ◽  
Vol 44 (2) ◽  
pp. 203-229 ◽  
Author(s):  
Javier D Fernández ◽  
Miguel A Martínez-Prieto ◽  
Pablo de la Fuente Redondo ◽  
Claudio Gutiérrez

The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.
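
The kinds of structural metrics the paper studies can be approximated in a few lines. Below is a minimal sketch, assuming Python with rdflib and a local N-Triples dump; the two measures shown (predicate usage and characteristic sets as a crude redundancy signal) are illustrative stand-ins, not the paper's exact metric definitions.

```python
# Sketch of simple structural metrics over an RDF data set.
from collections import Counter
from rdflib import Graph

g = Graph()
g.parse("dataset.nt", format="nt")  # assumed local N-Triples dump

# Predicate usage: how skewed is the vocabulary?
pred_counts = Counter(p for _, p, _ in g)

# Characteristic sets: the set of predicates attached to each subject.
# Few distinct sets over many subjects suggests structural redundancy.
char_sets = {}
for s, p, _ in g:
    char_sets.setdefault(s, set()).add(p)
distinct = {frozenset(ps) for ps in char_sets.values()}

print(f"{len(char_sets)} subjects, {len(distinct)} characteristic sets")
print("top predicates:", pred_counts.most_common(5))
```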


Author(s):  
Franck Cotton ◽  
Daniel Gillman

Linked Open Statistical Metadata (LOSM) is Linked Open Data (LOD) applied to statistical metadata. LOD is a model for identifying, structuring, interlinking, and querying data published directly on the web; it builds on the semantic web standards defined by the W3C. LOD uses the Resource Description Framework (RDF), a simple data model that expresses content as predicates linking resources to one another or to literal properties. The model's simplicity lets it represent any data, including metadata. We define statistical data as data produced through some statistical process or intended for statistical analysis, and statistical metadata as metadata describing statistical data. LOSM promotes automated discovery of the meaning and structure of statistical data. Consequently, it helps with understanding and interpreting data and with preventing inadequate or flawed visualizations of statistical data. This enhances statistical literacy and efforts at visualizing statistics.
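
The core of the RDF model described here (predicates linking a resource either to another resource or to a literal) fits in a few triples. A minimal sketch using Python's rdflib; the dataset URI and the DCAT/Dublin Core property choices are illustrative assumptions, not part of the text.

```python
# Sketch: statistical metadata as RDF triples.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
ds = URIRef("http://example.org/dataset/unemployment-2020")

g = Graph()
g.add((ds, RDF.type, DCAT.Dataset))                       # resource -> resource
g.add((ds, DCTERMS.title, Literal("Unemployment 2020")))  # resource -> literal
g.add((ds, DCTERMS.publisher,
       URIRef("http://example.org/org/stats-office")))
print(g.serialize(format="turtle"))
```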


2017 ◽  
Vol 35 (1) ◽  
pp. 159-178
Author(s):  
Timothy W. Cole ◽  
Myung-Ja K. Han ◽  
Maria Janina Sarol ◽  
Monika Biel ◽  
David Maus

Purpose: Early Modern emblem books are primary sources for scholars studying the European Renaissance. Linked Open Data (LOD) is an approach for organizing and modeling information in a data-centric manner compatible with the emerging Semantic Web. The purpose of this paper is to examine ways in which LOD methods can be applied to facilitate emblem resource discovery, better reveal the structure and connectedness of digitized emblem resources, and enhance scholar interactions with digitized emblem resources.

Design/methodology/approach: This research encompasses an analysis of the existing XML-based Spine (emblem-specific) metadata schema; the design of a new, domain-specific, Resource Description Framework-compatible ontology; the mapping and transformation of metadata from Spine to both the new ontology and (separately) to the pre-existing Schema.org ontology; and the (experimental) modification of the Emblematica Online portal as a proof of concept to illustrate enhancements supported by LOD.

Findings: LOD is viable as an approach for facilitating discovery and enhancing the value to scholars of digitized emblem books; however, metadata must first be enriched with additional uniform resource identifiers, and the workflow upgrades required to normalize and transform existing emblem metadata are substantial and still to be fully worked out.

Practical implications: The research described demonstrates the feasibility of transforming existing special collections metadata to LOD. Although considerable work and further study will be required, preliminary findings suggest potential benefits of LOD for both users and libraries.

Originality/value: This research is unique in the context of emblem studies and adds to the emerging body of work examining the application of LOD best practices to library special collections.
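
The mapping step (XML Spine metadata transformed into Schema.org-typed RDF) can be sketched as follows, assuming Python with rdflib; the XML layout, field names and URIs are invented for illustration and differ from the real Spine schema.

```python
# Hedged sketch of an XML-to-Schema.org transformation for one emblem record.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF

SDO = Namespace("https://schema.org/")
xml = "<emblem id='E001'><motto>Festina lente</motto><year>1531</year></emblem>"
root = ET.fromstring(xml)

g = Graph()
emblem = URIRef(f"http://example.org/emblem/{root.get('id')}")
g.add((emblem, RDF.type, SDO.CreativeWork))
g.add((emblem, SDO.name, Literal(root.findtext("motto"))))
g.add((emblem, SDO.dateCreated, Literal(root.findtext("year"))))
```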


Heritage ◽  
2019 ◽  
Vol 2 (2) ◽  
pp. 1471-1498 ◽  
Author(s):  
Ikrom Nishanbaev ◽  
Erik Champion ◽  
David A. McMeekin

The amount of digital cultural heritage data produced by cultural heritage institutions is growing rapidly. Digital cultural heritage repositories have therefore become an efficient and effective way to disseminate and exploit digital cultural heritage data. However, many digital cultural heritage repositories worldwide share technical challenges such as data integration and interoperability among national and regional digital cultural heritage repositories. The result is dispersed and poorly linked cultural heritage data, backed by non-standardized search interfaces, which thwart users' attempts to contextualize information from distributed repositories. The recently introduced geospatial semantic web is being adopted by many new and existing digital cultural heritage repositories to overcome these challenges. However, no one has yet conducted a conceptual survey of geospatial semantic web concepts for a cultural heritage audience, so such a survey, pertinent to the cultural heritage field, is needed. It equips cultural heritage professionals and practitioners with an overview of the necessary tools and of the free and open-source semantic web and geospatial semantic web platforms that can be used to implement geospatial semantic web-based cultural heritage repositories. Hence, this article surveys state-of-the-art geospatial semantic web concepts pertinent to the cultural heritage field. It then proposes a framework for turning geospatial cultural heritage data into machine-readable and processable Resource Description Framework (RDF) data for use in the geospatial semantic web, with a case study to demonstrate its applicability. Furthermore, it outlines key free and open-source semantic web and geospatial semantic web platforms for cultural heritage institutions, and it examines leading cultural heritage projects employing the geospatial semantic web. Finally, the article discusses attributes of the geospatial semantic web that require more attention and that can generate new ideas and research questions for both the geospatial semantic web and cultural heritage fields.
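
The conversion the framework targets (geospatial cultural heritage data expressed as RDF for the geospatial semantic web) typically relies on the GeoSPARQL vocabulary. A minimal sketch in Python with rdflib; the site, URIs and coordinates are illustrative, not taken from the article's case study.

```python
# Sketch: a heritage site with a GeoSPARQL WKT geometry.
from rdflib import Graph, Namespace, URIRef, Literal

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
site = URIRef("http://example.org/heritage/borobudur")
geom = URIRef("http://example.org/heritage/borobudur/geom")

g = Graph()
g.add((site, GEO.hasGeometry, geom))
g.add((geom, GEO.asWKT,
       Literal("POINT(110.2038 -7.6079)", datatype=GEO.wktLiteral)))
```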


Publications ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 38 ◽  
Author(s):  
Lyubomir Penev ◽  
Mariya Dimitrova ◽  
Viktor Senderov ◽  
Georgi Zhelezov ◽  
Teodor Georgiev ◽  
...  

Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv, an OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from the journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.
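
Once literature-derived data sit in a graph database as RDF, they can be queried with SPARQL. An illustrative sketch using Python's rdflib; the class and property names below stand in for OpenBiodiv-O terms and are not the project's actual identifiers.

```python
# Sketch: query treatments and the articles that publish them.
from rdflib import Graph

g = Graph()
g.parse("openbiodiv_sample.ttl", format="turtle")  # assumed local extract

q = """
PREFIX ex: <http://example.org/openbiodiv/>
SELECT ?article ?treatment WHERE {
    ?article   a ex:ScholarlyArticle .
    ?treatment a ex:TaxonomicTreatment ;
               ex:publishedIn ?article .
}
"""
for row in g.query(q):
    print(row.article, row.treatment)
```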


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jacques Chabin ◽  
Cédric Eichler ◽  
Mirian Halfeld Ferrari ◽  
Nicolas Hiot

Purpose: Graph rewriting concerns the technique of transforming a graph; it is thus natural to apply it to the evolution of graph databases. This paper proposes a two-step framework in which rewriting rules formalize instance or schema changes, ensuring the graph's consistency with respect to constraints, and updates are managed by ensuring rule applicability through the generation of side effects: new updates which guarantee that rule application conditions hold.

Design/methodology/approach: This paper proposes Schema Evolution Through UPdates, optimized version (SetUpOPT), a theoretical and applied framework for managing Resource Description Framework (RDF)/S database evolution on the basis of graph rewriting rules. The framework improves on SetUp by avoiding the computation of superfluous side effects and proposes, via SetUpoptND, a flexible and extensible package of solutions for dealing with non-determinism.

Findings: This paper turns graph rewriting into a practical and useful application that ensures the consistent evolution of RDF databases. It introduces an optimized approach for dealing with side effects and a flexible, customizable way of dealing with non-determinism. Experimental evaluation of SetUpoptND demonstrates the importance of the proposed optimizations, as they significantly reduce side-effect generation and limit data degradation.

Originality/value: SetUp's originality lies in the use of graph rewriting techniques under the closed-world assumption to build an updating system which preserves database consistency. Efficiency is ensured by avoiding the generation of superfluous side effects. Flexibility is guaranteed by offering different solutions for non-determinism and allowing the integration of customized choice functions.
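
The rule-plus-side-effects idea can be made concrete with a toy example: an update whose application would violate a constraint triggers generated updates that restore applicability. The sketch below, in Python with rdflib, is a deliberate simplification with an invented constraint; it illustrates the flavour of SetUp's mechanism, not the framework's actual API.

```python
# Toy sketch: deleting a triple generates side effects that keep a
# (hypothetical) cardinality constraint satisfied.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")

def delete_with_side_effects(g: Graph, triple):
    s, p, o = triple
    g.remove(triple)
    side_effects = []
    # Invented constraint: every person must keep at least one ex:email.
    if p == EX.email and (s, EX.email, None) not in g:
        fix = (s, EX.email, Literal("unknown@example.org"))  # generated update
        g.add(fix)
        side_effects.append(fix)
    return side_effects
```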


2020 ◽  
pp. 016555152093095
Author(s):  
Gustavo Candela ◽  
Pilar Escobar ◽  
Rafael C Carrasco ◽  
Manuel Marco-Such

Cultural heritage institutions have recently started to share their metadata as Linked Open Data (LOD) in order to disseminate and enrich them. The publication of large bibliographic data sets as LOD is a challenge that requires the design and implementation of custom methods for the transformation, management, querying and enrichment of the data. In this report, the methodology defined by previous research for the evaluation of the quality of LOD is analysed and adapted to the specific case of Resource Description Framework (RDF) triples containing standard bibliographic information. The specified quality measures are reported in the case of four highly relevant libraries.
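
One of the simpler quality measures such an adaptation involves is completeness: the share of bibliographic resources carrying a given property. A minimal sketch with rdflib; the BIBO/Dublin Core predicate choices are assumptions, not the report's exact metric definitions.

```python
# Sketch: creator completeness over a bibliographic RDF dump.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, DCTERMS

BIBO = Namespace("http://purl.org/ontology/bibo/")

g = Graph()
g.parse("library_dump.ttl", format="turtle")  # assumed local dump

books = set(g.subjects(RDF.type, BIBO.Book))
with_creator = {b for b in books if (b, DCTERMS.creator, None) in g}
if books:
    print(f"creator completeness: {len(with_creator) / len(books):.1%}")
```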


Author(s):  
Mariana Baptista Brandt ◽  
Silvana Aparecida Borsetti Gregorio Vidotti ◽  
José Eduardo Santarem Segundo

This study proposes a linked open data (LOD) model for a legislative open dataset of the Brazilian Chamber of Deputies (Câmara dos Deputados). To that end, it reviews the literature on the concepts of open data, open government data, linked data and linked open data, followed by applied research modelling legislative data in the LOD model. The "Deputados" dataset was selected for this research; it contains information about the parliamentarians such as political party, federative unit, e-mail and legislature. The study shows that structuring the dataset in RDF (Resource Description Framework) is possible by reusing vocabularies and standards already established on the Semantic Web, such as Dublin Core, Friend of a Friend (FOAF), RDF and RDF Schema, as well as vocabularies from related domains, such as the ontologies of the Italian Chamber of Deputies and of the French National Assembly. Following the Linked Data recommendations, the resources were also linked to other LOD datasets, such as Geonames and DBpedia, for semantic enrichment. The study concludes that government data, and legislative data in particular, can be published following the W3C (World Wide Web Consortium) recommendations, thereby integrating legislative data into the Web of Data and broadening the possibilities for reusing and applying the data in transparency and oversight initiatives, bringing citizens closer to Congress and to their representatives.
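
The vocabulary reuse described here (FOAF and Dublin Core for deputies, with owl:sameAs links out to DBpedia or Geonames) can be sketched briefly in Python with rdflib; the deputy URI, name and target resources are placeholders, not the actual Câmara dos Deputados data.

```python
# Sketch: one deputy modelled with FOAF and linked to DBpedia.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import FOAF, OWL, RDF

CAM = Namespace("http://example.org/camara/deputado/")  # placeholder namespace
dep = CAM["12345"]

g = Graph()
g.add((dep, RDF.type, FOAF.Person))
g.add((dep, FOAF.name, Literal("Deputy Name")))
g.add((dep, FOAF.mbox, URIRef("mailto:deputy@camara.leg.br")))
g.add((dep, OWL.sameAs, URIRef("http://dbpedia.org/resource/Deputy_Name")))
```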


2018 ◽  
Vol 2 ◽  
pp. e26658 ◽  
Author(s):  
Anton Güntsch ◽  
Quentin Groom ◽  
Roger Hyam ◽  
Simon Chagnoux ◽  
Dominik Röpert ◽  
...  

A simple, permanent and reliable specimen identifier system is needed to take the informatics of collections into a new era of interoperability. A system of identifiers based on HTTP URIs (Uniform Resource Identifiers), endorsed by the Consortium of European Taxonomic Facilities (CETAF), has now been rolled out to 14 member organisations (Güntsch et al. 2017). CETAF identifiers have a Linked Open Data redirection mechanism for both human- and machine-readable access and, if fully implemented, provide Resource Description Framework (RDF)-encoded specimen data following best practices continuously improved by members of the initiative. To date, more than 20 million physical collection objects have been equipped with CETAF identifiers (Groom et al. 2017). To facilitate the implementation of stable identifiers, simple redirection scripts and guidelines for deciding on the local identifier syntax have been compiled (http://cetafidentifiers.biowikifarm.net/wiki/Main_Page). Furthermore, the "CETAF Specimen URI Tester" (http://herbal.rbge.info/) provides an easy-to-use service for testing whether existing identifiers are operational. For the usability and potential of any identifier system associated with evolving data objects, active links to the source information are critically important. This is particularly true for natural history collections facing the next wave of industrialised mass digitisation, where specimens come online with only basic, but rapidly evolving, label data. Specimen identifier systems must therefore have components for monitoring the availability and correct implementation of individual data objects. Our next implementation steps will involve the development of a "Semantic Specimen Catalogue", which will hold a list of all existing specimen identifiers together with the latest RDF metadata snapshot. The catalogue will be used for semantic inference across collections and will serve as the basis for periodic testing of identifiers.
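
What such a URI tester checks can be approximated in a few lines: the identifier should dereference (following redirects) and serve RDF under content negotiation. A hedged sketch in Python with requests; it mimics the idea of the CETAF tester, not its actual implementation.

```python
# Sketch: does a specimen URI redirect and serve RDF on request?
import requests

def looks_cetaf_compliant(uri: str) -> bool:
    resp = requests.get(uri, headers={"Accept": "application/rdf+xml"},
                        allow_redirects=True, timeout=10)
    ctype = resp.headers.get("Content-Type", "")
    return resp.ok and "rdf" in ctype.lower()
```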


2018 ◽  
Vol 7 (3.33) ◽  
pp. 187
Author(s):  
Heekyung Moon ◽  
Zhanfang Zhao ◽  
Jintak Choi ◽  
Sungkook Han

Graphs provide an effective way to represent information and knowledge about real-world domains. The Resource Description Framework (RDF) model and the Labeled Property Graph (LPG) model are the dominant graph data models, widely used in Linked Open Data (LOD) and NoSQL databases. Although these graph models have plentiful data modeling capabilities, they reveal drawbacks when modeling complicated structures. This paper proposes a new property graph model, called a universal property graph (UPG), that can embrace the capabilities of both RDF and LPG. The paper explores the core features of the UPG and their functions.
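
The contrast the paper starts from can be shown in miniature: RDF has no native properties on edges, while an LPG attaches key/value pairs to the edge itself, which is what a unifying model must reconcile. A sketch in Python with rdflib; the UPG itself is not reproduced here, only the two base models it bridges.

```python
# RDF: qualifying an edge needs reification or an intermediate node.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
rdf = Graph()
rdf.add((EX.alice, EX.knows, EX.bob))  # no place to put "since" on this edge

# LPG style: the edge itself carries properties.
lpg_edge = {"from": "alice", "to": "bob", "label": "knows", "since": 2015}
```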

