Pattern-based design applied to cultural heritage knowledge graphs

Semantic Web ◽  
2020 ◽  
pp. 1-45
Author(s):  
Valentina Anita Carriero ◽  
Aldo Gangemi ◽  
Maria Letizia Mancinelli ◽  
Andrea Giovanni Nuzzolese ◽  
Valentina Presutti ◽  
...  

Ontology Design Patterns (ODPs) have become an established and recognised practice for guaranteeing good-quality ontology engineering. Several ODP repositories share ODPs, and ontology design methodologies recommend their reuse. Rigorous testing is also recommended to support ontology maintenance and to validate the resulting resource against its motivating requirements. Nevertheless, it is far from straightforward to find guidelines on how to apply such methodologies to the development of domain-specific knowledge graphs. ArCo, the knowledge graph of Italian Cultural Heritage, has been developed using eXtreme Design (XD), an ODP- and test-driven methodology. During its development, XD was adapted to the needs of the CH domain: for example, requirements were gathered from an open, diverse community of consumers, a new ODP was defined, and many existing ODPs were specialised to address specific CH requirements. This paper presents ArCo and describes how to apply XD to the development and validation of a CH knowledge graph, also detailing the (intellectual) process of matching the encountered modelling problems to ODPs. Relevant contributions also include a novel web tool for supporting unit-testing of knowledge graphs, a rigorous evaluation of ArCo, and a discussion of methodological lessons learned during ArCo's development.
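The test-driven side of XD can be illustrated with a minimal sketch: a competency question is encoded as a SPARQL query and checked against the knowledge graph, in the spirit of a unit test. The data file, prefix, and query below are hypothetical illustrations and do not reproduce ArCo's actual ontology or its testing tool.

```python
# Minimal sketch of a competency-question unit test for a knowledge graph.
# The graph file, prefix, and query are hypothetical, not ArCo's actual test suite.
from rdflib import Graph

def run_competency_question(graph: Graph, ask_query: str) -> bool:
    """Return the boolean answer of a SPARQL ASK query against the graph."""
    return bool(graph.query(ask_query).askAnswer)

if __name__ == "__main__":
    g = Graph()
    g.parse("arco_sample.ttl", format="turtle")  # hypothetical sample data file

    # Example competency question: "Is every cultural property linked to at least one agent role?"
    # The ASK pattern looks for violations, so the test passes when the answer is False.
    cq = """
    PREFIX ex: <http://example.org/cultural-heritage/>
    ASK {
        ?property a ex:CulturalProperty .
        FILTER NOT EXISTS { ?property ex:hasAgentRole ?role }
    }
    """
    assert run_competency_question(g, cq) is False, "Found cultural properties without an agent role"
    print("Competency question satisfied.")
```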

Author(s):  
M. Ben Ellefi ◽  
P. Drap ◽  
O. Papini ◽  
D. Merad ◽  
J. P. Royer ◽  
...  

Abstract. A key challenge in cultural heritage (CH) site visualization is to provide models and tools that effectively integrate CH content with domain-specific knowledge, so that users can query, interpret, and consume the visualized information. Moreover, it is important that intelligent visualization systems are interoperable in the Semantic Web environment and thus capable of establishing a methodology to acquire, integrate, analyze, generate, and share numeric content and associated knowledge on the human- and machine-readable Web. In this paper, we present a model, a methodology, and Web-based software tools that couple the 2D/3D Web representation of the Xlendi shipwreck with its knowledge graph database. The Web visualization tools and knowledge-based techniques are combined in a photogrammetry-driven ontological model, and user-friendly web tools for querying and semantic consumption of the shipwreck information are introduced.
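As a rough sketch of how a web viewer might pull graph entities for semantic consumption, the snippet below queries a SPARQL endpoint for artefacts and their 3D models. The endpoint URL, prefix, and class/property names are assumptions for illustration; the paper's actual ontology and services are not reproduced here.

```python
# Sketch of querying a shipwreck knowledge graph from a web back end.
# Endpoint, prefix, and ontology terms are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/xlendi/sparql"  # hypothetical SPARQL endpoint

def list_artefacts(limit: int = 10):
    """Fetch artefacts and their 3D model URIs so a viewer can link graph entities to geometry."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX ex: <http://example.org/xlendi/ontology#>
        SELECT ?artefact ?label ?model WHERE {{
            ?artefact a ex:Artefact ;
                      ex:label ?label ;
                      ex:has3DModel ?model .
        }} LIMIT {limit}
    """)
    results = sparql.queryAndConvert()
    return [
        (b["artefact"]["value"], b["label"]["value"], b["model"]["value"])
        for b in results["results"]["bindings"]
    ]

if __name__ == "__main__":
    for uri, label, model in list_artefacts():
        print(label, "->", model)
```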


2021 ◽  
Vol 14 (2) ◽  
pp. 63
Author(s):  
Linqing Yang ◽  
Bo Liu ◽  
Youpei Huang ◽  
Xiaozhuo Li

The lack of entity label values is one of the problems faced in applying Knowledge Graphs. Existing methods for automatically assigning entity label values still have shortcomings: they consume considerable resources during training and assign inaccurate label values because entity semantics are missing. In this paper, oriented to domain-specific Knowledge Graphs and assuming that the initial entity label values of all triples are completely unknown, an Entity Label Value Assignment Method (ELVAM) based on external resources and entropy is proposed. ELVAM first constructs Relationship Triples Clusters according to relationship type and randomly extracts triples from each cluster to form a Relationship Triples Subset; it then collects extended semantic text for the entities in the subset from external resources to obtain nouns. The Information Entropy and Conditional Entropy of these nouns are calculated over an Ontology Category Hierarchy Graph, so as to obtain entity label values of moderate granularity. Finally, the Label Triples Pattern of each Relationship Triples Cluster is summarized, and the corresponding entities are assigned label values according to the pattern. The experimental results verify the effectiveness of ELVAM in assigning entity label values in Knowledge Graphs.
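The entropy-based step can be illustrated with a small sketch: candidate category labels for the nouns gathered from external text are scored by information entropy and by conditional entropy given the relationship cluster. The toy counts and the scoring below are assumptions for illustration only; ELVAM's exact granularity criterion is described in the paper.

```python
# Illustrative entropy computation for candidate entity labels.
# Toy counts and the scoring scheme are assumptions made for this sketch.
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy H(X) of a frequency distribution."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

def conditional_entropy(joint):
    """H(X|Y) from joint counts keyed by (x, y)."""
    total = sum(joint.values())
    y_totals = Counter()
    for (_, y), c in joint.items():
        y_totals[y] += c
    h = 0.0
    for (x, y), c in joint.items():
        p_xy = c / total
        p_x_given_y = c / y_totals[y]
        h -= p_xy * log2(p_x_given_y)
    return h

# Toy example: noun-category counts observed in extended semantic text.
category_counts = Counter({"Person": 12, "Artist": 7, "Painter": 3})
print("H(category) =", round(entropy(category_counts.values()), 3))

# Joint counts of (category, relationship cluster) pairs.
joint = {("Person", "created"): 8, ("Artist", "created"): 6,
         ("Person", "bornIn"): 4, ("Painter", "created"): 3}
print("H(category | cluster) =", round(conditional_entropy(joint), 3))
```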


2019 ◽  
Vol 62 (1) ◽  
pp. 317-336
Author(s):  
Jianbo Yuan ◽  
Zhiwei Jin ◽  
Han Guo ◽  
Hongxia Jin ◽  
Xianchao Zhang ◽  
...  

JAMIA Open ◽  
2020 ◽  
Vol 3 (3) ◽  
pp. 332-337
Author(s):  
Bhuvan Sharma ◽  
Van C Willis ◽  
Claudia S Huettner ◽  
Kirk Beaty ◽  
Jane L Snowdon ◽  
...  

Abstract. Objectives: Describe an augmented intelligence approach to facilitate the update of evidence for associations in knowledge graphs. Methods: New publications are filtered through multiple machine learning study classifiers, and the filtered publications are combined with articles already included as evidence in the knowledge graph. The corpus is then subjected to named entity recognition, semantic dictionary mapping, term vector space modeling, pairwise similarity, and focal entity matching to identify highly related publications. Subject matter experts review the recommended articles to assess inclusion in the knowledge graph; discrepancies are resolved by consensus. Results: Study classifiers achieved F-scores from 0.88 to 0.94, and similarity thresholds for each study type were determined by experimentation. Our approach reduces the human literature review load by 99%, and over the past 12 months, 41% of recommendations were accepted to update the knowledge graph. Conclusion: Integrated search and recommendation exploiting the current evidence in a knowledge graph is useful for reducing human cognitive load.
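The vector space modeling and pairwise similarity steps can be sketched as follows: candidate publications are ranked by cosine similarity to articles already cited as evidence in the knowledge graph. The toy documents and the threshold value are assumptions for illustration; the paper's actual dictionaries, classifiers, and per-study-type thresholds are not reproduced here.

```python
# Sketch of the term vector space / pairwise similarity step with TF-IDF vectors.
# Texts and threshold are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

evidence_docs = [
    "Gene X variant associated with improved response to drug Y in trial cohort.",
    "Observational study links biomarker Z to progression-free survival.",
]
candidate_docs = [
    "Randomized trial of drug Y response stratified by gene X status.",
    "Survey of clinician attitudes toward electronic health records.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(evidence_docs + candidate_docs)
evidence_vecs = matrix[: len(evidence_docs)]
candidate_vecs = matrix[len(evidence_docs):]

# Each candidate's score is its best similarity to any existing evidence article.
scores = cosine_similarity(candidate_vecs, evidence_vecs).max(axis=1)

THRESHOLD = 0.2  # assumed cut-off; the paper tunes thresholds per study type
for doc, score in zip(candidate_docs, scores):
    flag = "recommend" if score >= THRESHOLD else "skip"
    print(f"{score:.2f} [{flag}] {doc}")
```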


2021 ◽  
Vol 13 (4) ◽  
pp. 2276
Author(s):  
Taejin Kim ◽  
Yeoil Yun ◽  
Namgyu Kim

Many attempts have been made to construct new domain-specific knowledge graphs from the existing knowledge bases of various domains. However, traditional "dictionary-based" or "supervised" knowledge graph building methods rely on predefined, human-annotated resources of entities and their relationships. Creating human-annotated resources is costly in both time and effort, so relying on them does not allow rapid adaptation when domain-specific information is added or updated very frequently, as in the recent coronavirus disease-19 (COVID-19) pandemic. Therefore, in this study, we propose an Open Information Extraction (OpenIE) system based on unsupervised learning that requires no pre-built dataset. The proposed method obtains knowledge from a vast amount of text documents about COVID-19 rather than from a general knowledge base and adds it to the existing knowledge graph. First, we constructed a COVID-19 entity dictionary and scraped a large text dataset related to COVID-19. Next, we built a COVID-19-perspective language model by fine-tuning the bidirectional encoder representations from transformers (BERT) pre-trained language model. Finally, we defined a new COVID-19-specific knowledge base by extracting connecting words between COVID-19 entities using BERT self-attention weights over COVID-19 sentences. Experimental results demonstrate that the proposed Co-BERT model outperforms the original BERT in terms of mask prediction accuracy and the metric for evaluation of translation with explicit ordering (METEOR) score.
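Extracting connecting words from self-attention can be illustrated with a short sketch: load a BERT model with attention outputs enabled and score the tokens between two entity mentions by the attention they receive from those mentions. The checkpoint, sentence, and head-averaging choice below are assumptions for illustration; they are not the Co-BERT configuration or its selection rule.

```python
# Illustrative extraction of "connecting words" between two entities using BERT
# self-attention weights. Checkpoint, sentence, and averaging are assumptions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "remdesivir inhibits the replication of sars-cov-2 in infected cells"
inputs = tokenizer(sentence, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

with torch.no_grad():
    outputs = model(**inputs)

# Average attention over the last layer's heads: shape (seq_len, seq_len).
attn = outputs.attentions[-1][0].mean(dim=0)

# Hypothetical entity token positions (naive string matching, with fallbacks
# in case the words are split into subword pieces).
head_idx = tokens.index("remdesivir") if "remdesivir" in tokens else 1
tail_idx = tokens.index("sars") if "sars" in tokens else len(tokens) - 2

# Score the tokens between the two entities by the attention they receive from them.
between = range(min(head_idx, tail_idx) + 1, max(head_idx, tail_idx))
scores = {tokens[i]: float(attn[head_idx, i] + attn[tail_idx, i]) for i in between}
for tok, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.3f}  {tok}")
```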

