RDF Data
Recently Published Documents

TOTAL DOCUMENTS: 491 (five years: 117)
H-INDEX: 26 (five years: 3)

2021, Vol 10 (12), pp. 832
Author(s): Xiangfu Meng, Lin Zhu, Qing Li, Xiaoyan Zhang

Resource Description Framework (RDF), as a standard metadata description framework proposed by the World Wide Web Consortium (W3C), is suitable for modeling and querying Web data. With the growing importance of RDF data in Web data management, there is an increasing need for modeling and querying RDF data. Previous approaches mainly focus on querying RDF. However, a large amount of RDF data has spatial and temporal features. Therefore, it is important to study spatiotemporal RDF data query approaches. In this paper, we first formally define spatiotemporal RDF data and construct a spatiotemporal RDF model, st-RDF, that is used to represent and manipulate spatiotemporal RDF data. Second, we present a spatiotemporal RDF query algorithm, stQuery, based on subgraph matching. By adopting a preliminary filtering mechanism in the query process, this algorithm can quickly determine that the result is empty for queries whose temporal or spatial range exceeds a specific bound. Third, we propose a sorting strategy that calculates the matching order of query nodes to speed up the subgraph matching. Finally, we conduct experiments on effectiveness and query efficiency. The experimental results show the performance advantages of our approach.
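The filtering idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' st-RDF implementation: each triple is annotated with a time interval and a point location (all names and data invented), and a coarse pre-filter rejects queries whose spatiotemporal window cannot overlap the data at all, returning an empty result without any matching.

```python
# (subject, predicate, object, (t_start, t_end), (x, y)) -- illustrative layout
TRIPLES = [
    ("ex:bus1", "ex:locatedAt", "ex:stopA", (2019, 2020), (10.0, 20.0)),
    ("ex:bus1", "ex:locatedAt", "ex:stopB", (2020, 2021), (12.0, 22.0)),
]

def prefilter(t_range, bbox):
    """Return False immediately if the query window lies outside the data's
    global temporal span or spatial bounding box (the 'empty result' shortcut)."""
    t_min = min(t[3][0] for t in TRIPLES)
    t_max = max(t[3][1] for t in TRIPLES)
    if t_range[1] < t_min or t_range[0] > t_max:
        return False
    xs = [t[4][0] for t in TRIPLES]
    ys = [t[4][1] for t in TRIPLES]
    (x0, y0), (x1, y1) = bbox
    return not (x1 < min(xs) or x0 > max(xs) or y1 < min(ys) or y0 > max(ys))

def st_query(pattern, t_range, bbox):
    """Match a single triple pattern within a temporal range and bounding box."""
    if not prefilter(t_range, bbox):
        return []  # fast path: provably empty, no matching performed
    s, p, o = pattern  # None acts as a variable
    out = []
    for ts, tp, to, (a, b), (x, y) in TRIPLES:
        if s not in (None, ts) or p not in (None, tp) or o not in (None, to):
            continue
        if b < t_range[0] or a > t_range[1]:  # intervals do not overlap
            continue
        (x0, y0), (x1, y1) = bbox
        if x0 <= x <= x1 and y0 <= y <= y1:
            out.append((ts, tp, to))
    return out
```

A full implementation would match whole query graphs rather than single patterns, but the early-exit structure is the same.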


Semantic Web, 2021, pp. 1-19
Author(s): Edna Ruckhaus, Adolfo Anton-Bravo, Mario Scrocca, Oscar Corcho

We present an ontology that describes the domain of Public Transport by bus, which is common in cities around the world. This ontology is aligned to Transmodel, a reference model which is available as a UML specification and which was developed to foster interoperability of data about transport systems across Europe. The alignment with this non-ontological resource required the adaptation of the Linked Open Terms (LOT) methodology, which has been used by our team as the methodological framework for the development of many ontologies used for the publication of open city data. The ontology is structured into three main modules: (1) agencies, operators and the lines that they manage, (2) lines, routes, stops and journey patterns, and (3) planned vehicle journeys with their timetables and service calendars. Besides reusing Transmodel concepts, the ontology also reuses common ontology design patterns from GeoSPARQL and the SOSA ontology. As part of the LOT data-driven validation stage, RDF data has been generated taking as input the GTFS feeds (General Transit Feed Specification) provided by the Madrid public bus transport provider (EMT). Mapping rules from structured data sources to RDF were developed using the RDF Mapping Language (RML) to generate RDF data, and queries corresponding to competency questions were tested.
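The GTFS-to-RDF generation step can be pictured with a small sketch. RML itself is declarative; here the same row-to-triple logic is written directly in Python, and the property and class IRIs (`ex:StopPoint`, `ex:name`) are illustrative placeholders rather than the actual Transmodel-aligned ontology terms.

```python
def gtfs_stop_to_triples(row):
    """Map one GTFS stops.txt row to RDF-style (s, p, o) triples."""
    s = f"ex:stop/{row['stop_id']}"
    return [
        (s, "rdf:type", "ex:StopPoint"),
        (s, "ex:name", row["stop_name"]),
        # GeoSPARQL WKT uses longitude-latitude order
        (s, "geo:asWKT", f"POINT({row['stop_lon']} {row['stop_lat']})"),
    ]

row = {"stop_id": "1234", "stop_name": "Plaza de Cibeles",
       "stop_lat": "40.4193", "stop_lon": "-3.6928"}
triples = gtfs_stop_to_triples(row)
```

An RML mapping expresses exactly this kind of source-column-to-predicate correspondence, but as reusable RDF rules instead of code.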


Author(s): Nicholas John Car, Timo Homburg

In 2012 the Open Geospatial Consortium published GeoSPARQL, defining “an RDF/OWL ontology for [spatial] information”, “SPARQL extension functions” for performing spatial operations on RDF data, and “RIF rules” defining entailments to be drawn from graph pattern matching. In the 8+ years since its publication, GeoSPARQL has become the most important spatial Semantic Web standard, as judged by references to it in other Semantic Web standards and its wide use for Semantic Web data. An update to GeoSPARQL was proposed in 2019 to deliver a version 1.1 with a charter to handle outstanding change requests, source new ones from the user community, and “better present” the standard, that is, to better link all of the standard’s parts and better document and exemplify its elements. Expected updates included new geometry representations, alignments to other ontologies, handling of new spatial referencing systems, and new artifact presentation. In this paper, we describe the motivating change requests and the resulting updates in the candidate version 1.1 of the standard, alongside reference implementations and usage examples. We also describe the theory behind particular updates, initial implementations of many parts of the standard, and our expectations for GeoSPARQL 1.1’s use.
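To ground the “SPARQL extension functions” mentioned above, here is a GeoSPARQL query assembled as a plain string. The `geo:` and `geof:` namespaces and the `geof:sfWithin` function come from the GeoSPARQL specification; the query shape and the helper function are illustrative only.

```python
PREFIXES = """\
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
"""

def within_query(wkt_polygon):
    """Build a query selecting features whose geometry lies within a WKT polygon."""
    return PREFIXES + f"""\
SELECT ?f WHERE {{
  ?f geo:hasGeometry ?g .
  ?g geo:asWKT ?wkt .
  FILTER(geof:sfWithin(?wkt, "{wkt_polygon}"^^geo:wktLiteral))
}}"""

q = within_query("POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))")
```

Running such a query requires a GeoSPARQL-capable triplestore; plain SPARQL engines do not evaluate the `geof:` functions.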


2021
Author(s): Yad Fatah, Mark Nourallah, Lynn Wahab, Fatima K. Abu Salem, Shady Elbassuoni

2021, Vol 8 (1)
Author(s): Tanvi Chawla, Girdhari Singh, Emmanuel S. Pilli

Abstract
The Resource Description Framework (RDF) model, owing to its flexible structure, is increasingly being used to represent Linked Data. The rise in the amount of Linked Data and knowledge graphs has resulted in an increase in the volume of RDF data. RDF is used to model metadata, especially for social media domains where the data is linked. With the plethora of RDF data sources available on the Web, scalable RDF data management becomes a tedious task. In this paper, we present MuSe, an efficient distributed RDF storage scheme for storing and querying RDF data with Hadoop MapReduce. In MuSe, Big RDF data is stored at two levels for answering the common triple patterns in SPARQL queries. MuSe considers the type of frequently occurring triple patterns and optimizes RDF storage to answer such triple patterns in minimum time. It accesses only the tables that are sufficient for answering a triple pattern instead of scanning the whole RDF dataset. Extensive experiments on two synthetic RDF datasets, LUBM and WatDiv, show that MuSe outperforms the compared state-of-the-art frameworks in terms of query execution time and scalability.
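The idea of reading only the table sufficient for a triple pattern can be sketched as follows. This is a toy in-memory illustration with hypothetical table names, not MuSe's actual HDFS layout: triples are stored redundantly under two keys, and each pattern is routed to the table that answers it without a full scan.

```python
TRIPLES = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:age", "34"),
    ("ex:bob", "ex:knows", "ex:carol"),
]

# Two redundant "tables": one keyed by predicate, one keyed by subject.
BY_PRED, BY_SUBJ = {}, {}
for s, p, o in TRIPLES:
    BY_PRED.setdefault(p, []).append((s, o))
    BY_SUBJ.setdefault(s, []).append((p, o))

def match(s=None, p=None, o=None):
    """Answer a triple pattern by reading only the table it needs."""
    if p is not None and s is None:          # (?s, p, ?o): predicate table
        return [(x, p, y) for x, y in BY_PRED.get(p, []) if o in (None, y)]
    if s is not None:                        # (s, ?p, ?o): subject table
        return [(s, x, y) for x, y in BY_SUBJ.get(s, [])
                if p in (None, x) and o in (None, y)]
    return [t for t in TRIPLES if o in (None, t[2])]  # rare case: full scan
```

In a distributed setting each "table" would be a separate set of HDFS files, so the routing decision directly bounds how much data a MapReduce job must read.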


2021
Author(s): Jones O. Avelino, Kelli F. Cordeiro, Maria C. Cavalcanti

The growth of datasets available on the Web that use the RDF standard enables data analyses involving multiple dimensions. According to the W3C, one of the resources for analyzing multidimensional data is the RDF Data Cube vocabulary. However, there is still a lack of supporting tools for applying this vocabulary to datasets. In this context, this article proposes INTEGRACuBe, an environment that uses a metaschema and semi-automated mechanisms to support the mapping of data resources to the RDF Data Cube metamodel. As a result, analytical data can be explored in RDF. Additionally, a case study is presented in the Software Development Management scenario.
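For readers unfamiliar with the RDF Data Cube vocabulary, the sketch below builds the triples for a single observation. The `qb:` namespace is the real W3C vocabulary; the `ex:` dataset, dimensions, and measure are invented for illustration and are not from the article.

```python
QB = "http://purl.org/linked-data/cube#"

def observation(obs_iri, dataset, dims, measure_prop, value):
    """Build the (s, p, o) triples for one qb:Observation."""
    triples = [
        (obs_iri, "rdf:type", QB + "Observation"),
        (obs_iri, QB + "dataSet", dataset),       # link to the qb:DataSet
        (obs_iri, measure_prop, value),           # the measured value
    ]
    # one triple per dimension (e.g. reference period, project)
    triples += [(obs_iri, d, v) for d, v in dims.items()]
    return triples

obs = observation("ex:obs1", "ex:bugCountDataset",
                  {"ex:refPeriod": "2021-Q1", "ex:project": "ex:projA"},
                  "ex:bugCount", "42")
```

A complete Data Cube would also declare the dataset's structure via `qb:DataStructureDefinition`, which is exactly the kind of mapping a semi-automated environment can help generate.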


2021, Vol 8 (1)
Author(s): Pisit Makpaisit, Chantana Chantrapornchai

Abstract
The Resource Description Framework (RDF) is commonly used as a standard for data interchange on the Web. A collection of RDF data sets can form a large graph that is time-consuming to query. It is known that modern Graphics Processing Units (GPUs) can execute parallel programs to speed up the running time. In this paper, we propose a novel RDF data representation along with a query processing algorithm that is suitable for GPU processing. The main challenges of the GPU architecture are the limited memory size, the memory transfer latency, and the vast number of GPU cores; our system is therefore designed to strengthen the use of GPU cores and reduce the effect of memory transfers. We propose a representation consisting of indices and column-based RDF ID data that reduces the GPU memory requirement. Indexing and pre-upload filtering techniques are then applied to reduce the data transfer between the host and GPU memory. We add an index swapping process to facilitate sorting and joining data based on the given variable, and a pre-upload step to reduce the size of the results’ storage and the data transfer time. The experimental results show that our representation is about 35% smaller than the traditional NT format and 40% smaller than that of gStore. Query processing achieves speedups ranging from 1.95 to 397.03 compared with the RDF-3X and gStore processing times on the WatDiv test suite, and speedups of 578.57 and 62.97 on the LUBM benchmark compared to RDF-3X and gStore. The analysis shows which query cases can benefit from our approach.
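The "column-based RDF ID data" is essentially dictionary encoding. The sketch below shows the general technique (not the paper's exact layout): every term is replaced by an integer ID and the graph becomes three parallel integer columns, the kind of compact, uniform layout that fits in limited GPU memory and transfers cheaply over PCIe.

```python
def encode(triples):
    """Dictionary-encode triples into three parallel integer ID columns."""
    ids = {}
    def idx(term):
        # assign the next free ID on first sight, reuse it afterwards
        return ids.setdefault(term, len(ids))
    cols = ([], [], [])  # subject, predicate, object columns
    for s, p, o in triples:
        cols[0].append(idx(s))
        cols[1].append(idx(p))
        cols[2].append(idx(o))
    return ids, cols

ids, (S, P, O) = encode([
    ("ex:a", "ex:knows", "ex:b"),
    ("ex:b", "ex:knows", "ex:c"),
])
```

On the GPU side, joins then compare fixed-width integers instead of variable-length IRI strings, which is what makes massively parallel execution effective.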


Semantic Web, 2021, pp. 1-19
Author(s): Marilena Daquino, Ivan Heibi, Silvio Peroni, David Shotton

Semantic Web technologies are widely used for storing RDF data and making them available on the Web through SPARQL endpoints, queryable using the SPARQL query language. While the use of SPARQL endpoints is strongly supported by Semantic Web experts, it hinders broader use of RDF data by common Web users, engineers and developers unfamiliar with Semantic Web technologies, who normally rely on Web RESTful APIs for querying Web-available data and creating applications over them. To solve this problem, we have developed RAMOSE, a generic tool developed in Python to create REST APIs over SPARQL endpoints. Through the creation of source-specific textual configuration files, RAMOSE enables the querying of SPARQL endpoints via simple Web RESTful API calls that return either JSON or CSV-formatted data, thus hiding all the intrinsic complexities of SPARQL and RDF from common Web users. We provide evidence that the use of RAMOSE to provide REST API access to RDF data within OpenCitations triplestores is beneficial in terms of the number of queries made by external users of such RDF data using the RAMOSE API, compared with the direct access via the SPARQL endpoint. Our findings show the importance for suppliers of RDF data of having an alternative API access service, which enables its use by those with no (or little) experience in Semantic Web technologies and the SPARQL query language. RAMOSE can be used both to query any SPARQL endpoint and to query any other Web API, and thus it represents an easy generic technical solution for service providers who wish to create an API service to access Linked Data stored as RDF in a triplestore.
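The core mechanism of such a REST-to-SPARQL bridge can be sketched in a few lines. This is an illustration of the configuration idea, not RAMOSE's actual syntax: a URL parameter is substituted into a SPARQL template (the `[[name]]` placeholder style, the template, and the example IRI are all invented here), so API users never write SPARQL themselves.

```python
# Hypothetical template: the IRI pattern and predicate are illustrative only.
TEMPLATE = """\
SELECT ?cited WHERE {
  <https://w3id.org/oc/[[doi]]> <http://purl.org/spar/cito/cites> ?cited .
}"""

def build_query(template, params):
    """Fill [[name]] placeholders with sanitized parameter values."""
    q = template
    for name, value in params.items():
        # crude guard against SPARQL injection through URL parameters
        if any(c in value for c in "<>\"{}"):
            raise ValueError(f"illegal characters in parameter {name!r}")
        q = q.replace(f"[[{name}]]", value)
    return q

q = build_query(TEMPLATE, {"doi": "doi/10.1000/xyz123"})
```

A real service would then POST the filled query to the SPARQL endpoint and reshape the bindings into JSON or CSV for the API response.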


2021
Author(s): Farshad Bakhshandegan Moghaddam, Carsten Draschner, Jens Lehmann, Hajira Jabeen

The last decades have witnessed significant advancements in terms of data generation, management, and maintenance. This has resulted in vast amounts of data becoming available in a variety of forms and formats, including RDF. As RDF data is represented as a graph structure, applying machine learning algorithms to extract valuable knowledge and insights from it is not straightforward, especially when the size of the data is enormous. Although Knowledge Graph Embedding models (KGEs) convert RDF graphs to low-dimensional vector spaces, these vectors often lack explainability. In contrast, in this paper we introduce a generic, distributed, and scalable software framework that is capable of transforming large RDF data into an explainable feature matrix. This matrix can be exploited in many standard machine learning algorithms. Our approach, by exploiting Semantic Web and Big Data technologies, is able to extract a variety of existing features by deeply traversing a given large RDF graph. The proposed framework is open-source, well-documented, and fully integrated into the active community project Semantic Analytics Stack (SANSA). The experiments on real-world use cases disclose that the extracted features can be successfully used in machine learning tasks like classification and clustering.
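One simple instance of the explainable-feature idea is shown below. This is not SANSA's actual extractor, just a minimal illustration: each entity becomes a row, each predicate a column, and a cell counts how often the entity uses that predicate, so every feature has a direct, human-readable meaning.

```python
def predicate_features(triples):
    """Build an entity-by-predicate count matrix from (s, p, o) triples."""
    entities = sorted({s for s, _, _ in triples})
    predicates = sorted({p for _, p, _ in triples})
    rows = []
    for e in entities:
        counts = {p: 0 for p in predicates}
        for s, p, _ in triples:
            if s == e:
                counts[p] += 1
        rows.append([counts[p] for p in predicates])
    return entities, predicates, rows

ents, preds, X = predicate_features([
    ("ex:a", "ex:knows", "ex:b"),
    ("ex:a", "ex:knows", "ex:c"),
    ("ex:b", "ex:age", "30"),
])
```

Unlike an embedding dimension, the column `ex:knows` can be read off directly: its value is the number of `ex:knows` edges an entity has.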


Author(s): Hatem Soliman, Izhar Ahmed Khan, Yasir Hussain

The Resource Description Framework (RDF), together with RDF Schema, was adopted by the World Wide Web Consortium (W3C) as an essential Semantic Web standard. RDF imposes crisp semantics on descriptions and handles crisp metadata, yet real-world information is often vague or ambiguous. Consequently, fuzzy RDF helps deal with such data by transforming crisp values into fuzzy sets. A method for analyzing fuzzy RDF data is proposed in this paper. To this end, we first decompose RDF into fuzzy RDF variables. Second, we design a model for global sensitivity analysis based on this decomposition, which quantifies the ambiguities in fuzzy RDF data. The proposed global sensitivity analysis model ranks the importance of fuzzy RDF data by considering the structure of the response function. A practical tool for sensitivity analysis of fuzzy RDF data has also been implemented based on the proposed model.
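A toy sketch makes the fuzzy-RDF setting concrete. The triples, membership degrees, and the one-at-a-time perturbation below are invented for illustration (the paper proposes a global sensitivity model, which is more sophisticated): each triple carries a degree in [0, 1], a conjunctive answer's degree is the minimum of its members' degrees, and sensitivity measures how much that answer responds to perturbing one input degree.

```python
# Fuzzy RDF: triple -> membership degree in [0, 1] (degrees are made up).
FUZZY = {
    ("ex:sky", "ex:hasColor", "ex:blue"): 0.9,
    ("ex:sea", "ex:hasColor", "ex:blue"): 0.6,
}

def answer_degree(triples):
    """Degree of a conjunctive answer: min of member degrees (Goedel t-norm)."""
    return min(FUZZY[t] for t in triples)

def sensitivity(triples, target, delta=0.1):
    """Change in the answer degree when one triple's degree is raised by delta."""
    base = answer_degree(triples)
    FUZZY[target] += delta
    try:
        return answer_degree(triples) - base
    finally:
        FUZZY[target] -= delta  # restore the original degree
```

With the min t-norm, only the weakest triple in an answer has nonzero sensitivity, which is exactly the kind of importance information a sensitivity analysis surfaces.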

