Conversion of the English-Xhosa Dictionary for Nurses to a Linguistic Linked Data Framework

Information, 2018, Vol 9 (11), pp. 274. Author(s): Frances Gillis-Webber

The English-Xhosa Dictionary for Nurses (EXDN) is a bilingual, unidirectional printed dictionary in the public domain, with English and isiXhosa as the language pair. By extending the digitisation of EXDN from a human-readable digital object to a machine-readable state, using the Resource Description Framework (RDF) as the data model, semantically interoperable structured data can be created. EXDN's data can then be reused, aggregated and integrated with other language resources, serving as a potential aid in the development of future language resources for isiXhosa, an under-resourced language in South Africa. The methodological guidelines for the construction of a Linguistic Linked Data framework (LLDF) for a lexicographic resource, as applied to EXDN, are described. An LLDF can be defined as a framework which: (1) describes data in RDF, (2) uses a model designed for the representation of linguistic information, (3) adheres to Linked Data principles, and (4) supports versioning, allowing for change. The result is a bidirectional lexicographic resource, previously bounded and static, now unbounded and evolving, with the ability to extend to multilingualism.
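The abstract does not name the linguistic model used; the sketch below assumes the W3C OntoLex-Lemon vocabulary (with its vartrans module), which is commonly used for lexicographic Linked Data, and builds one hypothetical English-isiXhosa entry pair with rdflib. The base URI, entry names and word pair are illustrative only.

```python
# A minimal sketch of one bilingual entry as Linguistic Linked Data, assuming
# the OntoLex-Lemon and vartrans vocabularies; URIs and the word pair are
# invented for illustration, not taken from EXDN.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
VARTRANS = Namespace("http://www.w3.org/ns/lemon/vartrans#")
EX = Namespace("http://example.org/exdn/")  # hypothetical base URI

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("vartrans", VARTRANS)

# English lexical entry with its canonical written form
g.add((EX.nurse_en, RDF.type, ONTOLEX.LexicalEntry))
g.add((EX.nurse_en_form, RDF.type, ONTOLEX.Form))
g.add((EX.nurse_en, ONTOLEX.canonicalForm, EX.nurse_en_form))
g.add((EX.nurse_en_form, ONTOLEX.writtenRep, Literal("nurse", lang="en")))

# isiXhosa lexical entry
g.add((EX.umongikazi_xh, RDF.type, ONTOLEX.LexicalEntry))
g.add((EX.umongikazi_xh_form, RDF.type, ONTOLEX.Form))
g.add((EX.umongikazi_xh, ONTOLEX.canonicalForm, EX.umongikazi_xh_form))
g.add((EX.umongikazi_xh_form, ONTOLEX.writtenRep, Literal("umongikazi", lang="xh")))

# Link the two entries as translations of each other, making the resource
# usable in both directions
g.add((EX.nurse_en, VARTRANS.translatableAs, EX.umongikazi_xh))

print(g.serialize(format="turtle"))
```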

2019, Vol 37 (3), pp. 513-524. Author(s): Thomas D. Steele

Purpose: The Bibliographic Framework Initiative (BIBFRAME) is a data model created by the Library of Congress with the long-term goal of replacing Machine Readable Cataloging (MARC). The purpose of this paper is to inform catalogers and other library professionals why MARC no longer meets the needs of current users, and how BIBFRAME works better to meet those needs. It also explains linked data and the principles of the Resource Description Framework, so that catalogers will have a better understanding of BIBFRAME's basic goals. Design/methodology/approach: The paper is based on a review of recent literature in print and online, as well as on the use of the BIBFRAME editor to create a BIBFRAME record. Findings: The paper concludes that the user experience with the library catalog has changed and requires more in-depth search capabilities using linked data, and that BIBFRAME is a first step in meeting the user needs of the future. Originality/value: The paper gives the reader an entry point into a complicated future that catalogers and other professionals may feel trepidation about. With a systematic walkthrough of the creation of a BIBFRAME record, the reader should feel more informed about where the future of cataloging is going.
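As a rough illustration of what a BIBFRAME description looks like in RDF terms, the sketch below builds a minimal Work/Instance pair with rdflib. The catalog namespace, resource URIs and title are invented, and only a handful of core BIBFRAME 2.0 properties are shown; this is not the output of the BIBFRAME editor discussed in the paper.

```python
# A sketch of the Work/Instance split at the heart of BIBFRAME, built with
# rdflib; URIs and the title are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
EX = Namespace("http://example.org/bib/")  # hypothetical catalog namespace

g = Graph()
g.bind("bf", BF)

# The Work: the conceptual essence of the resource
g.add((EX.work1, RDF.type, BF.Work))
g.add((EX.work1, BF.title, EX.work1_title))
g.add((EX.work1_title, RDF.type, BF.Title))
g.add((EX.work1_title, BF.mainTitle, Literal("Moby-Dick")))

# The Instance: a particular published embodiment of that Work
g.add((EX.instance1, RDF.type, BF.Instance))
g.add((EX.instance1, BF.instanceOf, EX.work1))

print(g.serialize(format="turtle"))
```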


2018, Vol 10 (8), pp. 2613. Author(s): Dandan He, Zhongfu Li, Chunlin Wu, Xin Ning

Industrialized construction has raised the requirements for procurement methods used in the construction industry. The rapid development of e-commerce offers efficient and effective solutions; however, the large number of participants in the construction industry means that the data involved are complex, and problems arise related to volume, heterogeneity, and fragmentation. Thus, the sector lags behind others in the adoption of e-commerce. In particular, data integration has become a barrier preventing further development. Traditional e-commerce platforms, which consider data integration only for common product data, cannot meet the requirements of construction product data integration. This study aimed to build an information-integrated e-commerce platform for industrialized construction procurement (ICP) to overcome some of the shortcomings of existing platforms. We proposed a platform based on Building Information Modelling (BIM) and linked data, taking an innovative approach to data integration. It uses industrialized construction technology to support product standardization, BIM to support the procurement process, and linked data to connect different data sources. The platform was validated using a case study. With the development of an e-commerce ontology, industrialized construction component information was extracted from BIM models and converted to Resource Description Framework (RDF) format. Related information from different data sources was also converted to RDF format, and Simple Protocol and Resource Description Framework Query Language (SPARQL) queries were implemented. The platform provides a solution for the development of e-commerce platforms in the construction industry.
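A minimal sketch of the extract-convert-query flow described above, using rdflib: component attributes taken from a BIM model are expressed as RDF against a hypothetical e-commerce ontology namespace and then retrieved with a SPARQL query. The ontology terms, URIs and values are invented for illustration and are not the paper's actual ontology.

```python
# Sketch: BIM-derived component data as RDF plus a SPARQL lookup; the
# "icp-ecommerce" ontology and all values are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

ECOM = Namespace("http://example.org/icp-ecommerce#")  # hypothetical ontology
EX = Namespace("http://example.org/components/")

g = Graph()
g.bind("ecom", ECOM)

# One precast component extracted from a BIM model
g.add((EX.wallPanel01, RDF.type, ECOM.PrecastComponent))
g.add((EX.wallPanel01, ECOM.componentType, Literal("exterior wall panel")))
g.add((EX.wallPanel01, ECOM.widthMillimetres, Literal(3000, datatype=XSD.integer)))
g.add((EX.wallPanel01, ECOM.suppliedBy, EX.supplierA))

# A query a procurement client might run against the integrated store
query = """
PREFIX ecom: <http://example.org/icp-ecommerce#>
SELECT ?component ?supplier
WHERE {
  ?component a ecom:PrecastComponent ;
             ecom:suppliedBy ?supplier .
}
"""
for row in g.query(query):
    print(row.component, row.supplier)
```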


Author(s): E. Hietanen, L. Lehto, P. Latvala

In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset's information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified to take the linked data principles into account. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. The Geography Markup Language (GML) output of the WFS is then transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs.

A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
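A sketch of the service's output step, assuming rdflib: a spatial object is described with the OGC GeoSPARQL vocabulary and serialized in the RDF format requested through content negotiation. The feature URIs, geometry and the Accept-header mapping are illustrative; the actual service transforms GML fetched from the WFS rather than hard-coding a feature.

```python
# Sketch: a GeoSPARQL description of one spatial object, serialized according
# to the requested media type; URIs and coordinates are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
EX = Namespace("http://example.org/features/")  # hypothetical persistent URIs

def build_feature_graph():
    g = Graph()
    g.bind("geo", GEO)
    g.add((EX.building42, RDF.type, GEO.Feature))
    g.add((EX.building42, RDFS.label, Literal("Building 42")))
    g.add((EX.building42, GEO.hasGeometry, EX.building42_geom))
    g.add((EX.building42_geom, RDF.type, GEO.Geometry))
    g.add((EX.building42_geom, GEO.asWKT,
           Literal("POINT(24.94 60.17)", datatype=GEO.wktLiteral)))
    return g

# Map an HTTP Accept header to an rdflib serialization format
FORMATS = {
    "text/turtle": "turtle",
    "application/rdf+xml": "xml",
    "application/ld+json": "json-ld",
}

def serialize_for(accept_header):
    fmt = FORMATS.get(accept_header, "turtle")
    return build_feature_graph().serialize(format=fmt)

print(serialize_for("text/turtle"))
```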


2021, Vol 8 (1). Author(s): Tanvi Chawla, Girdhari Singh, Emmanuel S. Pilli

The Resource Description Framework (RDF) model, owing to its flexible structure, is increasingly being used to represent Linked Data. The rise in the amount of Linked Data and knowledge graphs has resulted in an increase in the volume of RDF data. RDF is used to model metadata, especially for social media domains where the data is linked. With the plethora of RDF data sources available on the Web, scalable RDF data management becomes a tedious task. In this paper, we present MuSe, an efficient distributed RDF storage scheme for storing and querying RDF data with Hadoop MapReduce. In MuSe, the Big RDF data is stored at two levels for answering the common triple patterns in SPARQL queries. MuSe considers the type of frequently occurring triple patterns and optimizes RDF storage to answer such triple patterns in minimum time. It accesses only the tables that are sufficient for answering a triple pattern instead of scanning the whole RDF dataset. Extensive experiments on two synthetic RDF datasets, LUBM and WatDiv, show that MuSe outperforms the compared state-of-the-art frameworks in terms of query execution time and scalability.
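MuSe itself runs on Hadoop MapReduce, so the sketch below is not its implementation; it is a small in-memory illustration of the storage idea: partitioning triples into predicate-keyed tables so that a triple pattern with a bound predicate is answered from one table rather than by a full dataset scan.

```python
# In-memory illustration (not MuSe): triples partitioned by predicate so a
# bound-predicate pattern touches only one table.
from collections import defaultdict

# Predicate -> list of (subject, object) pairs
tables = defaultdict(list)

def load(triples):
    for s, p, o in triples:
        tables[p].append((s, o))

def match(s=None, p=None, o=None):
    """Answer a single triple pattern; None marks a variable position."""
    # With a bound predicate, only the matching table is scanned.
    candidate_tables = [tables[p]] if p is not None else list(tables.values())
    for table in candidate_tables:
        for subj, obj in table:
            if (s is None or s == subj) and (o is None or o == obj):
                yield subj, obj

load([
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "rdf:type", "foaf:Person"),
    ("ex:bob", "rdf:type", "foaf:Person"),
])

# ?x rdf:type foaf:Person -- only the rdf:type table is read
print(list(match(p="rdf:type", o="foaf:Person")))
```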


2020, Vol 25 (6), pp. 793-801. Author(s): Maturi Sreerama Murty, Nallamothu Nagamalleswara Rao

Tracking the provenance of Resource Description Framework (RDF) resources is a key capability in the establishment of Linked Data frameworks, shifting the focus towards data integration. Linked Data enables applications to improve by converting legacy data into RDF resources; such data includes bibliographic, geographic and government datasets, among others. However, most of these applications do not track the details and processing history of each published resource. In such cases, it is vital for those applications to track, store and disseminate provenance information that reflects their source data and the operations applied to them. We present an RDF data provenance tracking system. Provenance information is tracked during the conversion process and managed throughout; this information is then made available through the resources' URIs. The proposed design is based on the Harvard Library Database. Tests were performed on datasets with changes made to the values in the RDF and to the associated provenance details. The results are promising, in that the approach allows data publishers to produce meaningful, evolving data while requiring little time and effort.
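The abstract does not name a provenance vocabulary; the W3C PROV-O ontology is one common choice for recording this kind of information, and the sketch below attaches PROV-O statements to a converted resource with rdflib. The URIs, source record and timestamp are invented.

```python
# Sketch: recording which legacy record an RDF resource was derived from and
# which conversion run produced it, using PROV-O; all identifiers are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/records/")

g = Graph()
g.bind("prov", PROV)

converted = EX.record123       # RDF resource produced by the conversion
source = EX.legacyRecord123    # legacy record it was derived from
activity = EX.conversionRun42  # the conversion run that produced it

g.add((activity, RDF.type, PROV.Activity))
g.add((converted, PROV.wasDerivedFrom, source))
g.add((converted, PROV.wasGeneratedBy, activity))
g.add((converted, PROV.generatedAtTime,
       Literal("2020-06-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```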


2016. Author(s): Michel Dumontier, Alasdair J G Gray, M. Scott Marshall, Vladimir Alexiev, Peter Ansell, ...

Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting guideline covers elements of description, identification, attribution, versioning, provenance, and content summarization. This guideline reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets.
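A sketch of a dataset description in the spirit of the guideline, reusing existing vocabularies (Dublin Core terms, DCAT and PAV here) with rdflib. The dataset URI and values are invented, and the property selection is illustrative rather than the guideline's normative profile.

```python
# Sketch: a versioned dataset description with reused vocabularies; the
# dataset, publisher and values are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

DCAT = Namespace("http://www.w3.org/ns/dcat#")
PAV = Namespace("http://purl.org/pav/")
EX = Namespace("http://example.org/datasets/")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)
g.bind("pav", PAV)

d = EX.proteinInteractions_v2
g.add((d, RDF.type, DCAT.Dataset))
g.add((d, DCTERMS.title, Literal("Protein interaction dataset", lang="en")))
g.add((d, DCTERMS.description,
       Literal("Curated protein-protein interactions.", lang="en")))
g.add((d, DCTERMS.publisher, URIRef("http://example.org/org/someLab")))
g.add((d, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by/4.0/")))
g.add((d, PAV.version, Literal("2.0")))

print(g.serialize(format="turtle"))
```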


Heritage, 2019, Vol 2 (2), pp. 1471-1498. Author(s): Ikrom Nishanbaev, Erik Champion, David A. McMeekin

The amount of digital cultural heritage data produced by cultural heritage institutions is growing rapidly. Digital cultural heritage repositories have therefore become an efficient and effective way to disseminate and exploit digital cultural heritage data. However, many digital cultural heritage repositories worldwide share technical challenges such as data integration and interoperability among national and regional digital cultural heritage repositories. The result is dispersed and poorly linked cultural heritage data, backed by non-standardized search interfaces, which thwart users' attempts to contextualize information from distributed repositories. The recently introduced geospatial semantic web is being adopted by a great many new and existing digital cultural heritage repositories to overcome these challenges. However, no one has yet conducted a conceptual survey of geospatial semantic web concepts for a cultural heritage audience. A conceptual survey of these concepts pertinent to the cultural heritage field is therefore needed. Such a survey equips cultural heritage professionals and practitioners with an overview of the necessary tools and of the free and open source semantic web and geospatial semantic web platforms that can be used to implement geospatial semantic web-based cultural heritage repositories. Hence, this article surveys the state-of-the-art geospatial semantic web concepts pertinent to the cultural heritage field. It then proposes a framework to turn geospatial cultural heritage data into machine-readable and processable Resource Description Framework (RDF) data for use in the geospatial semantic web, with a case study to demonstrate its applicability. Furthermore, it outlines key free and open source semantic web and geospatial semantic web platforms for cultural heritage institutions. In addition, it examines leading cultural heritage projects employing the geospatial semantic web. Finally, the article discusses attributes of the geospatial semantic web that require more attention and that can generate new ideas and research questions for both the geospatial semantic web and cultural heritage fields.
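A small sketch, assuming rdflib and the GeoSPARQL vocabulary, of how geospatial cultural heritage data can be expressed as RDF and queried with plain SPARQL, in the spirit of the proposed framework. The site, URIs and coordinates are invented.

```python
# Sketch: one cultural heritage feature with a geometry, plus a SPARQL query
# that returns every feature with its label and WKT; all data is invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

GEO = Namespace("http://www.opengis.net/ont/geosparql#")
EX = Namespace("http://example.org/heritage/")

g = Graph()
g.bind("geo", GEO)
g.add((EX.oldFort, RDF.type, GEO.Feature))
g.add((EX.oldFort, RDFS.label, Literal("Old Fort", lang="en")))
g.add((EX.oldFort, GEO.hasGeometry, EX.oldFort_geom))
g.add((EX.oldFort_geom, GEO.asWKT,
       Literal("POINT(115.86 -31.95)", datatype=GEO.wktLiteral)))

query = """
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?site ?label ?wkt WHERE {
  ?site a geo:Feature ;
        rdfs:label ?label ;
        geo:hasGeometry/geo:asWKT ?wkt .
}
"""
for row in g.query(query):
    print(row.site, row.label, row.wkt)
```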


2019, Vol 121 (2), pp. 1213-1228. Author(s): Ivan Heibi, Silvio Peroni, David Shotton

In this paper, we present COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations (http://opencitations.net/index/coci). COCI is the first open citation index created by OpenCitations, in which we have applied the concept of citations as first-class data entities, and it contains more than 445 million DOI-to-DOI citation links derived from the data available in Crossref. These citations are described using the Resource Description Framework (RDF) by means of the newly extended version of the OpenCitations Data Model (OCDM). We introduce the workflow we have developed for creating these data, and also show the additional services that facilitate access to and querying of these data via different access points: a SPARQL endpoint, a REST API, bulk downloads, Web interfaces, and direct access to the citations via HTTP content negotiation. Finally, we present statistics regarding the use of COCI citation data, and we introduce several projects that have already started to use COCI data for different purposes.
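A sketch of fetching COCI citation links over the REST API with the requests library. The endpoint path follows the COCI API as documented at the URL cited above at the time of writing and should be checked against the current documentation; the DOI is just an example.

```python
# Sketch: list the outgoing DOI-to-DOI citation links of one example DOI via
# the COCI REST API; verify the endpoint path against the current docs.
import requests

doi = "10.1186/1756-8722-6-59"  # example DOI
url = f"https://opencitations.net/index/coci/api/v1/references/{doi}"

response = requests.get(url, timeout=30)
response.raise_for_status()

# Each record describes one DOI-to-DOI citation link
for record in response.json():
    print(record.get("citing"), "->", record.get("cited"))
```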


Author(s): Karen Coyle

Application profiles fulfill functions similar to other forms of metadata documentation, such as data dictionaries. The preference is for application profiles to be machine-readable and machine-actionable, so that they can provide validation and processing instructions, much as XML Schema does for XML documents. These goals underlie the work that the Dublin Core Metadata Initiative has done over the last decade to develop application profiles for data that uses the Resource Description Framework model of the World Wide Web Consortium.
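The article does not prescribe a specific technology, but SHACL is one widely used way to make an application profile machine-actionable for RDF data; the sketch below validates a small description against a hypothetical shape using the pyshacl library. The shape, class and data are invented.

```python
# Sketch: a tiny "application profile" as a SHACL shape, checked against data
# that is missing a required title; everything here is illustrative.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:      <http://www.w3.org/ns/shacl#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/profile#> .

ex:BookShape a sh:NodeShape ;
    sh:targetClass ex:Book ;
    sh:property [ sh:path dcterms:title ; sh:minCount 1 ] .
"""

data_ttl = """
@prefix ex: <http://example.org/profile#> .

ex:book1 a ex:Book .   # missing dcterms:title, so it should not conform
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)
print(report)
```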


2016, Vol 35 (1), pp. 51. Author(s): Juliet L. Hardesty

Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML) and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF). Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
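As a small illustration of the shift the article describes, the sketch below takes the same invented record first as an XML structure and then restates it as RDF relationships with rdflib and Dublin Core terms; the record and URIs are illustrative only.

```python
# Sketch: in XML, meaning lives in the nesting of elements; in RDF, it lives
# in explicit, globally identified relationships. The record is invented.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, DCTERMS

xml_record = """
<record>
  <title>Annual Report</title>
  <creator>Example University Library</creator>
</record>
"""
root = ET.fromstring(xml_record)

EX = Namespace("http://example.org/items/")
g = Graph()
g.bind("dcterms", DCTERMS)

item = EX.annualReport
g.add((item, RDF.type, DCTERMS.BibliographicResource))
g.add((item, DCTERMS.title, Literal(root.findtext("title"))))
g.add((item, DCTERMS.creator, Literal(root.findtext("creator"))))

print(g.serialize(format="turtle"))
```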

