An empirical evaluation of cost-based federated SPARQL query processing engines

Semantic Web ◽  
2021 ◽  
pp. 1-26
Author(s):  
Umair Qudus ◽  
Muhammad Saleem ◽  
Axel-Cyrille Ngonga Ngomo ◽  
Young-Koo Lee

Finding a good query plan is key to the optimization of query runtime. This holds in particular for cost-based federation engines, which make use of cardinality estimations to achieve this goal. A number of studies compare SPARQL federation engines across different performance metrics, including query runtime, result set completeness and correctness, number of sources selected and number of requests sent. Albeit informative, these metrics are generic and unable to quantify and evaluate the accuracy of the cardinality estimators of cost-based federation engines. To evaluate cost-based federation engines thoroughly, the effect of cardinality estimation errors on overall query runtime must be measured. In this paper, we address this challenge by presenting novel evaluation metrics targeted at fine-grained benchmarking of cost-based federated SPARQL query engines. We evaluate five cost-based federated SPARQL query engines using existing as well as novel evaluation metrics on LargeRDFBench queries. Our detailed analysis of the experimental outcomes reveals novel insights, useful for the development of future cost-based federated SPARQL query processing engines.
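As a concrete illustration of quantifying cardinality estimation accuracy, the minimal sketch below computes the q-error, a standard symmetric error metric from the cost-based query optimization literature; the paper's own metrics may differ, and all numbers here are illustrative, not taken from the evaluation.

def q_error(estimated: float, actual: float) -> float:
    """Symmetric relative error of a cardinality estimate: 1.0 is a perfect estimate."""
    if estimated <= 0 or actual <= 0:
        raise ValueError("cardinalities must be positive")
    return max(estimated / actual, actual / estimated)

# Hypothetical per-triple-pattern estimates from a federation engine,
# paired with the true cardinalities observed at execution time.
pairs = [(1200, 1000), (50, 800), (300, 310)]
errors = [q_error(est, act) for est, act in pairs]
print(errors)        # [1.2, 16.0, ~1.03]
print(max(errors))   # the worst estimate tends to dominate the chosen query plan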

2019 ◽  
Vol 30 (1) ◽  
pp. 22-40 ◽  
Author(s):  
Minjae Song ◽  
Hyunsuk Oh ◽  
Seungmin Seo ◽  
Kyong-Ho Lee

The amount of RDF data being published on the Web is increasing at a massive rate. MapReduce-based distributed frameworks have become the general trend in processing SPARQL queries against RDF data. However, current query processing systems that use MapReduce have not been able to keep up with the growth of semantically annotated data, resulting in non-interactive SPARQL query processing. The principal reason is that the intermediate query results of join operations in a MapReduce framework are so massive that they consume all available network bandwidth. In this article, the authors present an efficient SPARQL processing system based on MapReduce and HBase. The system runs a job-optimized query plan over their proposed abstract RDF data, decreasing both the number of jobs and the amount of input data. The authors also present an efficient map-side join algorithm that uses the abstract RDF data to filter out unneeded RDF data. Experimental results show that the proposed approach outperforms previous work when processing queries over large amounts of input data.
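The map-side join at the heart of this approach avoids shuffling massive intermediate results across the network. The sketch below shows the general broadcast hash-join idea over RDF triples; the authors' HBase storage layer and abstract-RDF-data filter are not reproduced, and all triples are illustrative.

from collections import defaultdict

# Small relation: triples matching (?s, :hasAuthor, ?a) -- broadcast to every mapper.
small = [("doc1", "hasAuthor", "alice"), ("doc2", "hasAuthor", "bob")]

# Large relation: triples matching (?s, :hasTitle, ?t) -- streamed, never shuffled.
large = [("doc1", "hasTitle", "RDF at scale"),
         ("doc2", "hasTitle", "Joins in MapReduce"),
         ("doc3", "hasTitle", "Unmatched")]

# Build phase: index the broadcast side by the join key (the shared subject).
index = defaultdict(list)
for s, _p, o in small:
    index[s].append(o)

# Probe phase inside the mapper: joins complete map-side, with no reduce-side shuffle.
for s, _p, o in large:
    for author in index.get(s, []):
        print((s, author, o))   # joined binding (?s, ?a, ?t)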


2020 ◽  
Author(s):  
Abdulrahman Takiddin ◽  
Jens Schneider ◽  
Yin Yang ◽  
Alaa Abd-Alrazaq ◽  
Mowafa Househ

BACKGROUND Skin cancer is the most common cancer type affecting humans. Traditional skin cancer diagnosis methods are costly, require a professional physician, and take time. Hence, to aid in diagnosing skin cancer, artificial intelligence (AI) tools are being used, including shallow and deep machine learning-based techniques that are trained to detect and classify skin cancer using computer algorithms and deep neural networks.
OBJECTIVE The aim of this study is to identify and group the different types of AI-based technologies used to detect and classify skin cancer. The study also examines the reliability of the selected papers by studying the correlation of dataset size and number of diagnostic classes with the performance metrics used to evaluate the models.
METHODS We conducted a systematic search for articles in the IEEE Xplore, ACM DL, and Ovid MEDLINE databases, following the PRISMA Extension for Scoping Reviews (PRISMA-ScR) guidelines. Studies included in this scoping review had to fulfill several selection criteria: being specifically about skin cancer, detecting or classifying skin cancer, and using AI technologies. Study selection and data extraction were conducted by two reviewers independently. Extracted data were synthesized narratively, with studies grouped by diagnostic AI technique and evaluation metric.
RESULTS We retrieved 906 papers from the 3 databases, of which 53 studies were eligible for this review. Shallow techniques were used in 14 studies and deep techniques in 39 studies. The studies used accuracy (n=43/53), area under the receiver operating characteristic curve (n=5/53), sensitivity (n=3/53), and F1-score (n=2/53) to assess the proposed models. Studies that used smaller datasets and fewer diagnostic classes tended to report higher accuracy scores.
CONCLUSIONS The adoption of AI in the medical field facilitates the diagnosis of skin cancer. However, the reliability of most AI tools is questionable, since small datasets or low numbers of diagnostic classes are used. In addition, direct comparison between methods is hindered by the varied use of different evaluation metrics and image types.
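For reference, the sketch below computes the evaluation metrics most of the reviewed studies report (accuracy, sensitivity, F1-score) from a toy confusion matrix; the numbers are illustrative and not drawn from any reviewed study.

tp, fp, fn, tn = 40, 5, 10, 45   # hypothetical binary skin-lesion classifier

accuracy    = (tp + tn) / (tp + fp + fn + tn)
sensitivity = tp / (tp + fn)              # recall on the malignant class
precision   = tp / (tp + fp)
f1_score    = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} f1={f1_score:.2f}")
# accuracy=0.85 sensitivity=0.80 f1=0.84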


2015 ◽  
Vol 15 (01) ◽  
pp. 1550001 ◽  
Author(s):  
A. Suruliandi ◽  
G. Murugeswari ◽  
P. Arockia Jansi Rani

Digital image processing techniques are very useful for abnormality detection in digital mammogram images. Texture-based segmentation of digital mammogram images is now popular owing to its accuracy and precision. The local binary pattern (LBP) descriptor has attracted many researchers working in the field of texture analysis of digital images, and its success has spawned many texture descriptors introduced as variants of LBP. In this work, we propose a novel texture descriptor called the generic weighted cubicle pattern (GWCP) and analyze the proposed operator for texture image classification. We also perform abnormality detection through mammogram image segmentation using the k-nearest neighbors (KNN) algorithm and compare the performance of the proposed texture descriptor with LBP and other LBP variants, namely local ternary pattern (LTPT), extended local texture pattern (ELTP) and local texture pattern (LTPS). For evaluation, we use performance metrics such as accuracy, error rate, sensitivity, specificity, underestimation fraction and overestimation fraction. The results show that the proposed method outperforms the other descriptors for abnormality detection in mammogram images.
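For context, the sketch below implements the classic 8-neighbour LBP code that GWCP and the compared variants build on; the GWCP weighting itself is the paper's contribution and is not reproduced here, and the sample image is illustrative.

import numpy as np

def lbp_code(img: np.ndarray, r: int, c: int) -> int:
    """LBP code of the pixel at (r, c): threshold the 8 neighbours at the centre value."""
    centre = img[r, c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= centre:
            code |= 1 << bit
    return code

img = np.array([[5, 9, 1],
                [4, 6, 7],
                [2, 6, 3]], dtype=np.uint8)
print(lbp_code(img, 1, 1))   # texture pattern of the centre pixel (here: 42)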


2012 ◽  
pp. 819-846 ◽  
Author(s):  
Pruet Boonma ◽  
Junichi Suzuki

Due to stringent constraints in memory footprint, processing efficiency and power consumption, traditional wireless sensor networks (WSNs) face two key issues: (1) a lack of interoperability with access networks and (2) a lack of flexibility to customize non-functional properties such as event filtering, data aggregation and routing. In order to address these issues, this chapter investigates interoperable publish/subscribe middleware for WSNs. The proposed middleware, called TinyDDS, enables the interoperability between WSNs and access networks by providing programming language interoperability and protocol interoperability based on the standard Data Distribution Service (DDS) specification. Moreover, TinyDDS provides a pluggable framework that allows WSN applications to have fine-grained control over application-level and middleware-level non-functional properties. Simulation and empirical evaluation results demonstrate that TinyDDS is lightweight and efficient on the TinyOS and SunSPOT platforms. The results also show that TinyDDS simplifies the development of publish/subscribe WSN applications.
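The sketch below illustrates the topic-based publish/subscribe pattern that TinyDDS implements, as a minimal in-process broker; it is not the DDS or TinyDDS API, and the Broker class and topic names are hypothetical.

from collections import defaultdict
from typing import Any, Callable

class Broker:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, data: Any) -> None:
        # Non-functional properties such as event filtering, aggregation and
        # routing would hook in here as pluggable behavior, per the chapter.
        for handler in self._subs[topic]:
            handler(data)

broker = Broker()
broker.subscribe("temperature", lambda reading: print("got", reading))
broker.publish("temperature", {"node": 7, "celsius": 21.5})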


2014 ◽  
Vol 10 (3) ◽  
pp. 226-244 ◽  
Author(s):  
Johannes Lorey

Purpose – The purpose of this study is to introduce several metrics that enable universal and fine-grained characterization of arbitrary Linked Data repositories. Publicly accessible SPARQL endpoints contain vast amounts of knowledge from a large variety of domains. However, these endpoints are often not configured to process specific workloads as efficiently as possible. Assisting users in leveraging SPARQL endpoints requires insight into the functional and non-functional properties of these knowledge bases.
Design/methodology/approach – This study presents comprehensive approaches for deriving these metrics. More specifically, it uses concrete SPARQL queries to determine the corresponding values. Furthermore, it validates and discusses the introduced metrics through extensive evaluation on real-world SPARQL endpoints.
Findings – The evaluation determined that endpoints exhibit different characteristics. While it comes as no surprise that latency and throughput are influenced by the network infrastructure, the costs of join operations depend on a number of factors that are not obvious to a data consumer. Moreover, in discussing mean, median and upper-quartile values, the author found both endpoints that behave consistently and repositories that offer varying levels of performance.
Originality/value – On the one hand, the contribution of the author's work lies in assisting data consumers in evaluating the quality of service of publicly available SPARQL endpoints. On the other hand, the performance metrics introduced in this study can also serve as additional input features for distributed query processing frameworks. Moreover, the author provides a universal means of discerning the characteristics of different SPARQL endpoints without the need for (synthetic or real-world) query workloads.
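In the spirit of the study's approach of deriving metrics from concrete SPARQL queries, the sketch below times a trivial probe query against a public endpoint; the endpoint URL is an example and the study's actual probe queries are not reproduced.

import time
import urllib.parse
import urllib.request

ENDPOINT = "https://dbpedia.org/sparql"          # example public endpoint
QUERY = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"   # trivial probe query

params = urllib.parse.urlencode(
    {"query": QUERY, "format": "application/sparql-results+json"})
start = time.perf_counter()
with urllib.request.urlopen(f"{ENDPOINT}?{params}", timeout=30) as resp:
    body = resp.read()
latency = time.perf_counter() - start

print(f"latency: {latency:.3f}s, payload: {len(body)} bytes")
# Repeating the probe and taking mean, median and upper-quartile latency
# mirrors the per-endpoint statistics the study reports.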


2018 ◽  
pp. 21-55 ◽  
Author(s):  
Bernd Amann ◽  
Olivier Curé ◽  
Hubert Naacke

2015 ◽  
Vol 9 (6) ◽  
pp. 919-933 ◽  
Author(s):  
Xiaoyan Wang ◽  
Tao Yang ◽  
Jinchuan Chen ◽  
Long He ◽  
Xiaoyong Du

2014 ◽  
Vol 41 (10) ◽  
pp. 4596-4607 ◽  
Author(s):  
Kisung Kim ◽  
Bongki Moon ◽  
Hyoung-Joo Kim
