ontology quality
Recently Published Documents

Total documents: 30 (five years: 9)
H-index: 7 (five years: 0)

2021, pp. 320-337
Author(s): R. S. I. Wilson, J. S. Goonetillake, W. A. Indika, Athula Ginige

2020, Vol. 10 (18), pp. 6328
Author(s): Gabriela R. Roldan-Molina, Jose R. Mendez, Iryna Yevseyeva, Vitor Basto-Fernandes

This paper presents OntologyFixer, a web-based tool that supports a methodology to build, assess, and improve the quality of Web Ontology Language (OWL) ontologies. Using our software, knowledge engineers can fix low-quality OWL ontologies, such as those created from natural language documents through ontology learning processes. The fixing process is guided by a set of metrics and fixing mechanisms provided by the tool, and is executed primarily through automated changes, inspired by the quick-fix actions used in the software engineering domain. To evaluate quality, the tool supports numerical and graphical assessments focused on ontology content and structure attributes. The tool follows the principles, and provides the features, typical of scientific software, including user parameter requests, logging, multithreaded execution, and experiment repeatability, among others. The OntologyFixer architecture applies the model-view-controller (MVC), strategy, template, and factory design patterns, and decouples the graphical user interface (GUI) from the ontology quality metrics, ontology fixing, and REST (REpresentational State Transfer) API (Application Programming Interface) components used for pitfall identification and ontology evaluation. We also separate part of the OntologyFixer functionality into a new package called OntoMetrics, which focuses on identifying symptoms and evaluating the quality of ontologies. Finally, OntologyFixer provides mechanisms to easily develop and integrate new quick-fix methods.
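The strategy pattern the abstract mentions can be illustrated with a minimal sketch: each quick fix is a strategy object that declares which symptom it repairs, and a dispatcher applies the first matching fix. All class, method, and key names here are hypothetical illustrations, not OntologyFixer's actual API; the ontology is modeled as a plain dictionary for brevity.

```python
from abc import ABC, abstractmethod

class QuickFix(ABC):
    """One automated repair strategy applied to an ontology model."""

    @abstractmethod
    def applies_to(self, symptom: str) -> bool: ...

    @abstractmethod
    def apply(self, ontology: dict) -> dict: ...

class AddMissingLabelFix(QuickFix):
    """Adds a placeholder label to classes that lack one, derived
    from the fragment of the class IRI."""

    def applies_to(self, symptom: str) -> bool:
        return symptom == "missing-label"

    def apply(self, ontology: dict) -> dict:
        for cls in ontology["classes"]:
            cls.setdefault("label", cls["id"].split("#")[-1])
        return ontology

def fix_ontology(ontology: dict, symptoms: list[str],
                 fixes: list[QuickFix]) -> dict:
    """Dispatch each detected symptom to the first strategy that handles it."""
    for symptom in symptoms:
        for fix in fixes:
            if fix.applies_to(symptom):
                ontology = fix.apply(ontology)
                break
    return ontology

onto = {"classes": [{"id": "http://example.org#Sensor"}]}
fixed = fix_ontology(onto, ["missing-label"], [AddMissingLabelFix()])
print(fixed["classes"][0]["label"])  # Sensor
```

New fixes plug in by subclassing `QuickFix` and adding an instance to the `fixes` list, which mirrors the extensibility goal described in the abstract.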


Author(s): Kridanto Surendro, Farrel Yodihartomo, Lenny Putri Yulianti, ...

2019, Vol. 37 (3), pp. 338-354
Author(s): Seonghun Kim, Sam G. Oh

Purpose: The purpose of this paper is to formulate apposite criteria for ontology evaluation and to test them through assessments of existing ontologies.
Design/methodology/approach: A literature review provided the basis from which to extract the categories relevant to an evaluation of internal ontology components. Following these categories, a panel of experts provided the evaluation criteria for each category via a Delphi survey. Reliability was gauged by applying the criteria to assessments of existing smartphone ontologies.
Findings: Existing research tends to approach ontology evaluation through comparison with well-engineered ontologies, implementation in target applications, and appropriateness/interconnection appraisals in relation to raw data, but such methodologies fall short of shedding light on the internal workings of ontologies, such as structure, semantic representation, and interoperability. This study adopts its evaluation categories from previous research while also collecting concrete evaluation criteria from an expert panel and verifying the reliability of the resulting 53 criteria.
Originality/value: This is the first published study to extract ontology evaluation criteria in terms of syntax, semantics, and pragmatics. The results can be used as an evaluation index following ontology construction.


Ontologies help the Semantic Web process and understand the large amounts of data available on the Internet. An ontology represents knowledge within a domain through concepts and the relationships between them. The represented knowledge can be analysed, inferred over, and reused to make decisions and to derive new knowledge. A developed ontology has to be assessed for quality before it is used or reused, so evaluation becomes a key factor in determining ontology quality, and different approaches and methods are used to ensure the quality the user desires. This article identifies various aspects of ontology quality, provides a framework for metric-based ontology evaluation, elucidates the components of the framework, and develops a tool based on it. The framework checks syntactic, structural, and semantic measures of an ontology: a reasoner takes care of syntax and parser errors, the structural metrics analyse the taxonomy of the ontology, and the semantic measures deal with the distance between concepts. Further, competency questions are used to perform custom quality checks for a particular domain. The article thus provides a systematic way to identify and measure the quality of an ontology based on metrics.
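Two of the metric families named above, structural (taxonomy shape) and semantic (distance between concepts), can be sketched over a toy taxonomy. This is an illustrative example of the general idea, not the article's tool; the taxonomy is stored as a simple child-to-parent mapping.

```python
from collections import deque

# Toy is-a taxonomy: child -> parent.
taxonomy = {
    "Dog": "Mammal", "Cat": "Mammal",
    "Mammal": "Animal", "Bird": "Animal",
    "Animal": "Thing",
}

def depth(concept: str) -> int:
    """Structural metric: number of subsumption links up to the root."""
    d = 0
    while concept in taxonomy:
        concept = taxonomy[concept]
        d += 1
    return d

def semantic_distance(a: str, b: str) -> int:
    """Semantic metric: shortest path between two concepts in the
    undirected is-a graph (smaller means more closely related)."""
    edges: dict[str, set[str]] = {}
    for child, parent in taxonomy.items():
        edges.setdefault(child, set()).add(parent)
        edges.setdefault(parent, set()).add(child)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # concepts are not connected

print(depth("Dog"))                      # 3
print(semantic_distance("Dog", "Bird"))  # 3
```

Real evaluation frameworks compute such measures over full RDF/OWL graphs rather than a flat dictionary, but the depth and path-distance ideas carry over directly.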


Author(s): Niyati Baliyan, Ankita Verma

An ontology, or domain-specific vocabulary, is indispensable to a Semantic Web application; therefore, its evaluation assumes critical importance for maintaining quality. A modular ontology is intuitively preferred to a monolithic one, and a good-quality modular ontology in turn promotes reusability. This chapter summarizes the efforts towards ontology evaluation, besides defining the evaluation process, the various approaches to evaluation, and their underlying motivation. In particular, a modular ontology's cohesion and coupling metrics are discussed in detail as a case study. The authors believe that the body of knowledge in this chapter will serve as a starting point for ontology quality engineers and at the same time acquaint them with the state of the art in this field.
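The cohesion and coupling idea mentioned above can be sketched with a simplified model in which each module is a set of concepts and the ontology's relations are concept pairs. The formulas below (cohesion as the fraction of possible intra-module pairs actually related, coupling as the count of cross-module relations) are illustrative assumptions, not the chapter's exact metric definitions.

```python
# Hypothetical two-module ontology for illustration.
modules = {
    "vehicle": {"Car", "Engine", "Wheel"},
    "person":  {"Driver", "Owner"},
}
relations = [
    ("Car", "Engine"), ("Car", "Wheel"),   # intra-module (vehicle)
    ("Driver", "Owner"),                   # intra-module (person)
    ("Driver", "Car"),                     # inter-module
]

def cohesion(module: set[str]) -> float:
    """Fraction of possible intra-module concept pairs actually related."""
    internal = sum(1 for a, b in relations if a in module and b in module)
    n = len(module)
    possible = n * (n - 1) / 2
    return internal / possible if possible else 0.0

def coupling() -> int:
    """Number of relations whose endpoints live in different modules."""
    count = 0
    for a, b in relations:
        home_a = next(m for m, s in modules.items() if a in s)
        home_b = next(m for m, s in modules.items() if b in s)
        if home_a != home_b:
            count += 1
    return count

print(round(cohesion(modules["vehicle"]), 2))  # 0.67
print(cohesion(modules["person"]))             # 1.0
print(coupling())                              # 1
```

As in software design, a well-modularized ontology would show high cohesion within modules and low coupling between them.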


2018, Vol. 9 (1)
Author(s): Ying Shen, Daoyuan Chen, Buzhou Tang, Min Yang, Kai Lei
