Resource Description: Recently Published Documents

TOTAL DOCUMENTS: 473 (five years: 117)
H-INDEX: 18 (five years: 3)

Author(s):  
Runumi Devi ◽  
Deepti Mehrotra ◽  
Sana Ben Abdallah Ben Lamine

Electronic Health Record (EHR) systems in healthcare organisations are typically maintained in isolation from one another, which makes interoperability of the unstructured (text) data stored in these systems challenging. Different applications may describe similar information using different terminologies; this can be avoided by transforming the content into the Resource Description Framework (RDF) model, which is interoperable across organisations. RDF requires a document's contents to be translated into a repository of triples (subject, predicate, object) known as RDF statements. Natural Language Processing (NLP) techniques can help extract actionable insights from these text data and create triples for RDF model generation. This paper discusses two NLP-based approaches to generating RDF models from unstructured patient documents: a dependency structure-based parser and a constituent (phrase) structure-based parser. Models generated by both approaches are evaluated in two respects: the exhaustiveness of the represented knowledge and the model generation time. The precision measure is used to compute the models' exhaustiveness in terms of the number of facts that are transformed into RDF representations.
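As a rough sketch of how such triples might be extracted from text (a generic illustration, not the paper's implementation; spaCy and rdflib are assumed, and the namespace URI is invented):

```python
# Sketch: extract (subject, predicate, object) triples from a dependency
# parse and assert them as RDF statements. Requires the spaCy model
# en_core_web_sm to be installed; the namespace URI is hypothetical.
import spacy
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/ehr/")
nlp = spacy.load("en_core_web_sm")

def sentence_to_triples(text):
    """Yield (subject, predicate, object) lemmas from each sentence."""
    doc = nlp(text)
    for sent in doc.sents:
        root = sent.root  # main verb serves as the predicate
        subj = [t for t in root.children if t.dep_ in ("nsubj", "nsubjpass")]
        obj = [t for t in root.children if t.dep_ in ("dobj", "attr")]
        if subj and obj:
            yield subj[0].lemma_, root.lemma_, obj[0].lemma_

g = Graph()
for s, p, o in sentence_to_triples("The patient reports chest pain."):
    g.add((EX[s], EX[p], EX[o]))

print(g.serialize(format="turtle"))
```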


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yu-Jung Cheng ◽  
Shu-Lai Chou

Purpose: This study applies digital humanities tools (Gephi and Protégé) to establish and visualize ontologies in the cultural heritage domain. On that basis, it develops a novel evaluation approach using five ontology indicators (data overview, visual presentation, highlight links, scalability and querying) to evaluate how a cultural heritage ontology presents its knowledge structure. Design/methodology/approach: The researchers collected and organized 824 pieces of government open data (GOD), converted the GOD into the Resource Description Framework format, and applied Protégé and Gephi to establish and visualize a cultural heritage ontology. After the ontology was built, the study recruited 60 participants (30 from an information and communications technology background; 30 from a cultural heritage background) to operate the ontology and gathered their perspectives on the visualized ontology. Findings: Based on the participants' feedback, the study found that Gephi supports ontology visualization better than Protégé, especially in the data overview, visual presentation and highlight links dimensions; its visualization demonstrates the ontology's class hierarchy and property relations and facilitates wider application of the ontology. Originality/value: This study offers two contributions. First, the researchers analyzed data on East Asian architecture with novel digital humanities tools to visualize an ontology for cultural heritage. Second, the study collected participants' feedback on the visualized ontology to enhance its design, which can serve as a reference for future ontology development.
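One step of such a pipeline can be sketched as follows (an assumption-laden sketch, not the authors' exact workflow; rdflib and networkx are assumed, and the file names are invented): converting an RDF graph into a GEXF file that Gephi can open.

```python
# Sketch: load RDF (e.g., converted open-data records) and export a graph
# that Gephi can open for visualization. "heritage.ttl" and
# "heritage.gexf" are hypothetical file names.
import networkx as nx
from rdflib import Graph
from rdflib.extras.external_graph_libs import rdflib_to_networkx_multidigraph

rdf = Graph()
rdf.parse("heritage.ttl", format="turtle")   # RDF converted from open data

mdg = rdflib_to_networkx_multidigraph(rdf)   # predicates become edge keys
gephi = nx.DiGraph()
for s, o, p in mdg.edges(keys=True):
    gephi.add_edge(str(s), str(o), predicate=str(p))

nx.write_gexf(gephi, "heritage.gexf")        # open this file in Gephi
```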


Author(s):  
Abd Latif Abdul Rahman ◽  
Zati Atiqah Mohamad Tanuri ◽  
Zuraidah Arif ◽  
Tengku Rafidatul Akma Tengku Razali ◽  
Ahmad Sufi Alawi Idris ◽  
...  
Keyword(s):  

IFLA Journal ◽  
2021 ◽  
pp. 034003522110571
Author(s):  
Catherine Smith

Anxieties over automation and personal freedom are challenging libraries' role as havens of intellectual freedom. The introduction of artificial intelligence into the resource description process creates an opportunity to reshape the digital information landscape, and with it a risk of losing library users' trust. Resource description necessarily manipulates a library's presentation of information, which influences the ways users perceive and interact with that information. Human catalogers inevitably introduce personal and cultural biases into their work, but artificial intelligence may perpetuate biases on a previously unseen scale, and the automation of this process may be perceived as a greater threat than the manipulation produced by human operators. Librarians must understand the risks of artificial intelligence and consider what oversight and countermeasures are necessary to mitigate harm to libraries and their users before ceding resource description to artificial intelligence in place of the "professional considerations" that the IFLA Statement on Libraries and Intellectual Freedom calls for in providing access to library materials.


Author(s):  
Anna A. Stukalova ◽  
Natalya A. Balutkina

The article reviews foreign and domestic publications on the problems of creating, developing and using authority files (AF) for names of persons, names of organizations, geographical names and other objects at the international, national and regional levels. The paper analyses foreign experience of AF maintenance. The authors note that, owing to their universal collections and qualified specialists, national libraries usually carry out AF formation abroad. A substantive analysis of foreign publications has shown that national AFs (NAFs) are characterized by data variability and a diversity of approaches. The authors studied the experience of successfully combining NAFs created according to different methods within an international corporate project, the Virtual International Authority File (VIAF). The article notes that most Russian libraries do not use AFs, since the AFs created in republican and regional scientific libraries are, as a rule, not publicly available. At the same time, the creation by an individual library of its own AF entails high labour and material costs, and the formation of a large number of AFs leads to variability among the AFs created for the same objects. The authors conclude that for the efficient use of AFs within the country, it is necessary to apply unified methods and rules for the creation of authority records. Another way out is to apply Semantic Web technology, which allows linking AFs created according to different methods. It is necessary to make maximum use of existing vocabularies, or to create vocabularies based on the World Wide Web Consortium (W3C) standards Resource Description Framework (RDF), RDF Schema (RDFS) and Web Ontology Language (OWL).
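As a small illustration of the Semantic Web option mentioned above (a sketch using rdflib; both record URIs are hypothetical), authority records created under different methods can be linked with owl:sameAs so that applications treat them as descriptions of the same entity:

```python
# Sketch: link authority records for the same person across files
# using owl:sameAs. Both record URIs are hypothetical.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import OWL, RDFS

viaf = URIRef("http://viaf.org/viaf/0000000")       # hypothetical VIAF record
naf = URIRef("http://example.org/naf/person/0001")  # hypothetical national AF record

g = Graph()
g.add((naf, RDFS.label, Literal("Tolstoy, Leo, 1828-1910")))
g.add((naf, OWL.sameAs, viaf))  # the two records describe the same person

print(g.serialize(format="turtle"))
```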


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 881
Author(s):  
Sini Govindapillai ◽  
Lay-Ki Soon ◽  
Su-Cheng Haw

A knowledge graph (KG) publishes a machine-readable representation of knowledge on the Web. Structured data in a knowledge graph is published using the Resource Description Framework (RDF), where knowledge is represented as triples (subject, predicate, object). Due to the presence of erroneous, outdated or conflicting data in a knowledge graph, the quality of facts cannot be guaranteed. The trustworthiness of facts in a knowledge graph can be enhanced by adding metadata such as the source of the information and the location and time of the fact's occurrence. Since RDF does not natively support metadata for provenance and contextualization, an alternative method, RDF reification, is employed by most knowledge graphs. RDF reification increases the magnitude of the data, as several statements are required to represent a single fact. Another limitation for applications that use provenance data, as in the medical domain and in cyber security, is that not all facts in these knowledge graphs are annotated with provenance data. In this paper, we provide an overview of prominent reification approaches, together with an analysis of the popular general-purpose knowledge graphs Wikidata and YAGO4 with regard to their representation of provenance and context data. Wikidata employs qualifiers to attach metadata to facts, while YAGO4 collects metadata from Wikidata qualifiers. However, facts in Wikidata and YAGO4 can be fetched without using reification, to cater for applications that do not require metadata. To the best of our knowledge, this is the first paper that investigates the method and the extent of metadata coverage in two prominent KGs, Wikidata and YAGO4.
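A minimal sketch of standard RDF reification (rdflib assumed; the URIs and metadata values are invented) makes the magnitude problem concrete: attaching provenance to one fact requires four additional statements before any metadata is added.

```python
# Sketch: standard RDF reification of a single fact, plus provenance.
# Note the overhead: one base triple needs four extra statements before
# any metadata can be attached. All URIs here are hypothetical.
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.BarackObama, EX.spouse, EX.MichelleObama))  # the base fact

stmt = BNode()  # node standing for the statement itself
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.BarackObama))
g.add((stmt, RDF.predicate, EX.spouse))
g.add((stmt, RDF.object, EX.MichelleObama))
g.add((stmt, EX.source, Literal("https://en.wikipedia.org/")))  # provenance
g.add((stmt, EX.startDate, Literal("1992-10-03")))              # context

print(g.serialize(format="turtle"))
```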


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Senthilselvan Natarajan ◽  
Subramaniyaswamy Vairavasundaram ◽  
Yuvaraja Teekaraman ◽  
Ramya Kuppusamy ◽  
Arun Radhakrishnan

The modern Web wants data in the Resource Description Framework (RDF) format, a machine-readable form that is easy to share and reuse without human intervention. However, most information is still held in relational form. Existing conventional methods transform data from RDB to RDF using instance-level mapping, which has not yielded the expected results because of poor mapping. Hence, this paper proposes a novel schema-based RDB-RDF mapping method (relational database to Resource Description Framework), an improved approach to transforming a relational database into the Resource Description Framework. It provides both data materialization and on-demand mapping. RDB-RDF reduces data retrieval time for non-primary-key searches by using schema-level mapping. The resulting mapped RDF graph presents the relational database as a conceptual schema and maintains the instance triples as a data graph. This mechanism, known as data materialization, suits static datasets well. To get data in a dynamic environment, query translation (on-demand mapping) is preferable to whole-data conversion. The proposed approach directly converts a SPARQL query into an SQL query using the mapping descriptions available in the proposed system. The mapping description is the key component of the proposed system and is responsible for quick data retrieval and query translation. The join expression introduced in the proposed RDB-RDF mapping method efficiently handles all complex operations involving primary and foreign keys. Experimental evaluation is done on a graphics designer database. The results show that the proposed schema-based RDB-RDF mapping method accomplishes more comprehensible mapping than conventional methods by dissolving structural and operational differences.
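A toy sketch of the query-translation idea (a generic illustration, not the paper's system; the mapping entries, table and column names are all invented): a mapping description keyed by predicate lets a simple SPARQL triple pattern be rewritten as SQL.

```python
# Toy sketch of on-demand mapping: translating a simple SPARQL triple
# pattern into SQL via a mapping description. Generic illustration only;
# table and column names are invented. A real system would also use
# parameterized queries rather than string interpolation.
MAPPING = {
    # predicate IRI -> (table, subject-key column, value column)
    "http://example.org/designer/name": ("designer", "id", "name"),
    "http://example.org/designer/project": ("designer", "id", "project"),
}

def triple_pattern_to_sql(subject, predicate, obj):
    """Translate a pattern like (?s, <p>, 'value') into a SELECT."""
    table, key_col, val_col = MAPPING[predicate]
    if obj is None:  # object is a variable: project both columns
        return f"SELECT {key_col}, {val_col} FROM {table}"
    return f"SELECT {key_col} FROM {table} WHERE {val_col} = '{obj}'"

# SPARQL: SELECT ?s WHERE { ?s <http://example.org/designer/name> "Mia" . }
print(triple_pattern_to_sql(None, "http://example.org/designer/name", "Mia"))
# -> SELECT id FROM designer WHERE name = 'Mia'
```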


2021 ◽  
Vol 68 (10) ◽  
Author(s):  
Margit Némethi-Takács
Keyword(s):  

The IFLA Library Reference Model (IFLA LRM) is a high-level theoretical reference model that appeared four years ago. The LRM has influenced, and continues to influence, library cataloguing; this is shown by the fact that it has been integrated into the new Anglo-American cataloguing standard (Resource Description and Access, RDA), which is now based on it. The library reference model appears in more and more places and is mentioned more and more often, so I believe it is necessary to get to know the model from both the theoretical and the practical side. This study is the second part of a two-part series, in which the elements that make up the model and the relationships between them are presented through a single record. (In the previous article I introduced the model and the basic elements that build it up.)


2021 ◽  
Vol 68 (10) ◽  
Author(s):  
Margit Némethi-Takács
Keyword(s):  

The IFLA Library Reference Model (IFLA LRM) is a high-level theoretical reference model that appeared four years ago. The LRM has influenced, and continues to influence, library cataloguing; this is shown by the fact that it has been integrated into the new Anglo-American cataloguing standard (Resource Description and Access, RDA), which is now based on it. The library reference model appears in more and more places and is mentioned more and more often, so I believe it is necessary to get to know the model from both the theoretical and the practical side. This study is the first part of a two-part series, in which I undertake to introduce the model and the basic elements that build it up. In the next article, the elements that make up the model and the relationships between them will be presented through a single record.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Tanvi Chawla ◽  
Girdhari Singh ◽  
Emmanuel S. Pilli

The Resource Description Framework (RDF) model, owing to its flexible structure, is increasingly being used to represent Linked Data. The rise in the amount of Linked Data and knowledge graphs has resulted in an increase in the volume of RDF data. RDF is used to model metadata, especially in social media domains where the data is linked. With the plethora of RDF data sources available on the Web, scalable RDF data management becomes a tedious task. In this paper, we present MuSe, an efficient distributed RDF storage scheme for storing and querying RDF data with Hadoop MapReduce. In MuSe, Big RDF data is stored at two levels for answering the common triple patterns in SPARQL queries. MuSe considers the types of frequently occurring triple patterns and optimizes RDF storage to answer such patterns in minimum time. It accesses only the tables that are sufficient for answering a triple pattern instead of scanning the whole RDF dataset. Extensive experiments on two synthetic RDF datasets, LUBM and WatDiv, show that MuSe outperforms the compared state-of-the-art frameworks in terms of query execution time and scalability.
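A toy sketch of the underlying storage idea (a generic vertical-partitioning illustration, not MuSe's actual Hadoop MapReduce scheme; the data is invented): with one table per predicate, a triple pattern with a bound predicate touches only that predicate's table.

```python
# Toy sketch of triple-pattern-aware storage: keep one table per
# predicate so a pattern like (?s, <p>, ?o) scans only that table,
# not the whole dataset. Generic illustration; data is invented.
from collections import defaultdict

triples = [
    ("s1", "advisor", "s2"),
    ("s1", "takesCourse", "c1"),
    ("s3", "takesCourse", "c1"),
]

by_predicate = defaultdict(list)  # predicate -> [(subject, object)]
for s, p, o in triples:
    by_predicate[p].append((s, o))

def match(s=None, p=None, o=None):
    """Answer a triple pattern with a bound predicate."""
    rows = by_predicate.get(p, [])  # touch only the relevant table
    return [(rs, p, ro) for rs, ro in rows
            if (s is None or rs == s) and (o is None or ro == o)]

print(match(p="takesCourse", o="c1"))  # -> both students taking c1
```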

