Resource Propagation Algorithm to Reinforce Knowledge Base in Linked Data

Author(s):  
Toshitaka Maki ◽  
Kazuki Takahashi ◽  
Toshihiko Wakahara ◽  
Akihisa Kodate ◽  
Noboru Sonehara

2016 ◽
Vol 28 (2) ◽  
pp. 241-251 ◽  
Author(s):  
Luciane Lena Pessanha Monteiro ◽  
Mark Douglas de Azevedo Jacyntho

The study addresses the use of the Semantic Web and Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that machines can understand and process, so that content can be filtered and the documents retrieved when a decision-making process is under way. To this end, machine-understandable metadata, created with reference Linked Data ontologies, are associated with the documents, forming a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the Web of Linked Data is carried out, considerably widening the scope of the knowledge base: new data related to the content of the stored documents are extracted from the Web and combined, without the user making any effort or perceiving the complexity of the whole process.
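
As an illustration of the general idea (a minimal sketch under assumed names and URIs, not the system described in the study), the snippet below uses rdflib to attach machine-understandable metadata to a scanned document with a reference vocabulary (Dublin Core terms) and to link it to the Web of Linked Data; the document URI, literal values, and DBpedia link are invented for the example.

```python
# Minimal sketch: describe a scanned document with Dublin Core terms and link it
# to the Web of Linked Data. All URIs and values are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF, XSD

EX = Namespace("http://example.org/docs/")  # hypothetical local namespace
g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

doc = EX["contract-2016-001"]  # the scanned document being described
g.add((doc, RDF.type, DCTERMS.BibliographicResource))
g.add((doc, DCTERMS.title, Literal("Service contract no. 001/2016", lang="en")))
g.add((doc, DCTERMS.created, Literal("2016-03-10", datatype=XSD.date)))
# Mashup step: link the document's subject to an external Linked Data resource.
g.add((doc, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Contract")))

# The resulting Turtle can be loaded into a triple store and queried with SPARQL
# to filter and combine documents during a decision-making process.
print(g.serialize(format="turtle"))
```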


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is created either from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge; both operate only on the data set itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms were used in building the DBpedia 3.9 release: SDType added 3.4 million missing type statements, and SDValidate removed 13,000 erroneous RDF statements from the knowledge base.
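
The core heuristic can be illustrated with a small, simplified sketch (not the authors' implementation): each property carries a distribution over the types of its subjects, and an untyped resource receives the types voted for by its properties, weighted by how strongly each property's distribution deviates from the overall type distribution. The toy distributions and the exact weighting below are illustrative assumptions.

```python
# Simplified, illustrative sketch of an SDType-style type-inference step.
# The toy distributions are made up; the real algorithm is trained on the whole
# knowledge base and also takes incoming properties into account.
from collections import defaultdict

# P(type | resource uses this outgoing property), estimated from typed resources.
type_dist_per_property = {
    "dbo:birthPlace": {"dbo:Person": 0.95, "dbo:Place": 0.05},
    "dbo:populationTotal": {"dbo:Place": 0.90, "dbo:Person": 0.10},
}

# Global (a priori) type distribution over all resources.
apriori = {"dbo:Person": 0.5, "dbo:Place": 0.5}

def property_weight(prop):
    """Weight a property by how much its type distribution deviates from the a priori one."""
    dist = type_dist_per_property[prop]
    return sum((dist.get(t, 0.0) - p) ** 2 for t, p in apriori.items())

def infer_types(outgoing_properties, threshold=0.4):
    """Weighted vote over the properties of an untyped resource; keep confident types."""
    scores, total = defaultdict(float), 0.0
    for prop in outgoing_properties:
        w = property_weight(prop)
        total += w
        for t, p in type_dist_per_property[prop].items():
            scores[t] += w * p
    return {t: s / total for t, s in scores.items() if total and s / total >= threshold}

# A resource with a birth place and no population count is very likely a person.
print(infer_types(["dbo:birthPlace"]))  # {'dbo:Person': 0.95}
```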


Author(s):  
Noboru Sonehara ◽  
Akihisa Kodate ◽  
Toshihiko Wakahara ◽  
Toshitaka Maki ◽  
Kazuki Takahashi


2020 ◽  
Vol 1 (2) ◽  
pp. 72-85 ◽
Author(s):  
Angelica Lo Duca ◽  
Andrea Marchetti

Within the field of Digital Humanities, a great effort has been made to digitize documents and collections in order to build catalogs and exhibitions on the Web. In this paper, we present WeME, a Web application for building a knowledge base that can be used to describe digital documents. WeME can be used by different categories of users: archivists/librarians and scholars. It extracts information from well-known Linked Data nodes, namely DBpedia and GeoNames, as well as from traditional Web sources such as VIAF. As a use case of WeME, we describe the knowledge base related to Christopher Clavius's correspondence. Clavius was a mathematician and astronomer of the 16th century who wrote more than 300 letters, most of which are held by the Historical Archives of the Pontifical Gregorian University (APUG) in Rome. The resulting knowledge base contains 139 links to DBpedia, 83 links to GeoNames and 129 links to VIAF. To assess the usability of WeME, we invited 26 users to try the application.
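
The kind of entity lookup such a tool relies on can be sketched as follows (a minimal example against public endpoints, not the authors' code; the endpoint URLs, parameters, and the GeoNames username are assumptions that may require registration or adjustment). VIAF offers a comparable lookup service that could be queried in the same fashion.

```python
# Minimal sketch: resolve names from a document record to Linked Data URIs.
import requests

def dbpedia_lookup(name, lang="en"):
    """Find DBpedia resources whose rdfs:label matches `name` via the public SPARQL endpoint."""
    query = f"""
        SELECT DISTINCT ?resource WHERE {{
            ?resource rdfs:label "{name}"@{lang} .
        }} LIMIT 5
    """
    resp = requests.get(
        "https://dbpedia.org/sparql",
        params={"query": query, "format": "application/sparql-results+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    return [b["resource"]["value"] for b in bindings]

def geonames_lookup(place, username="demo"):
    """Resolve a place name to GeoNames URIs (a free GeoNames username is required)."""
    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": place, "maxRows": 3, "username": username},
        timeout=30,
    )
    resp.raise_for_status()
    return [f"https://sws.geonames.org/{g['geonameId']}/" for g in resp.json().get("geonames", [])]

if __name__ == "__main__":
    print(dbpedia_lookup("Christopher Clavius"))
    print(geonames_lookup("Rome"))
```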


PLoS ONE ◽  
2019 ◽  
Vol 14 (8) ◽  
pp. e0219992 ◽  
Author(s):  
Tao Chen ◽  
Yongjuan Zhang ◽  
Zhengjun Wang ◽  
Dongsheng Wang ◽  
Hui Li ◽  
...  
