A Graph-Based Approach to Ontology Debugging in DL-Lite

Author(s): Xuefeng Fu, Yong Zhang, Guilin Qi

DOI: 10.29007/4ckv, 2018
Author(s): Zohreh Shams, Mateja Jamnik, Gem Stapleton, Yuri Sato

Ontologies are notoriously hard to define, express and reason about. Many tools have been developed to ease the debugging and the reasoning process with ontologies; however, they often lack accessibility and formalisation. A visual representation language, concept diagrams, was developed for expressing and reasoning about ontologies in an accessible way. Indeed, empirical studies show that concept diagrams are cognitively more accessible to users in ontology debugging tasks. In this paper we answer the question “How can concept diagrams be used to reason about inconsistencies and incoherence of ontologies?”. We do so by formalising a set of inference rules for concept diagrams that enable stepwise verification of the inconsistency and/or incoherence of a set of ontology axioms. The design of the inference rules is driven by empirical evidence that concise (merged) diagrams are easier for users to comprehend than a set of lower-level diagrams that offer a one-to-one translation of OWL ontology axioms into concept diagrams. We prove that our inference rules are sound, and exemplify how they can be used to reason about inconsistencies and incoherence. Finally, we indicate how our rules can serve as a foundation for new rules required when representing ontologies in diverse new domains.
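
The two defects these rules verify can be illustrated independently of the diagrammatic calculus. The following is a minimal, self-contained Python sketch of the underlying semantics rather than the paper's diagrammatic rules; the TBox encoding and all names (Penguin, Bird, tweety) are illustrative assumptions. A concept that entails both some concept B and its complement is unsatisfiable, making the TBox incoherent; an ABox assertion instantiating an unsatisfiable concept makes the whole ontology inconsistent.

    from collections import defaultdict, deque

    def derived(tbox, concept):
        """All concepts an instance of `concept` must belong to,
        following the TBox inclusions transitively."""
        graph = defaultdict(set)
        for sub, sup in tbox:
            graph[sub].add(sup)
        seen, queue = {concept}, deque([concept])
        while queue:
            for sup in graph[queue.popleft()]:
                if sup not in seen:
                    seen.add(sup)
                    queue.append(sup)
        return seen

    def unsatisfiable(tbox, concept):
        """`concept` is unsatisfiable iff it entails both some B and not-B."""
        entailed = derived(tbox, concept)
        return any(("not", c) in entailed for c in entailed if isinstance(c, str))

    # Toy axioms: Penguin ⊑ Bird, Bird ⊑ Flies, Penguin ⊑ ¬Flies, Penguin(tweety)
    tbox = [("Penguin", "Bird"), ("Bird", "Flies"), ("Penguin", ("not", "Flies"))]
    abox = [("tweety", "Penguin")]

    incoherent = sorted({sub for sub, _ in tbox if unsatisfiable(tbox, sub)})
    inconsistent = any(unsatisfiable(tbox, c) for _, c in abox)
    print("unsatisfiable concepts:", incoherent)   # ['Penguin']: TBox is incoherent
    print("ontology inconsistent:", inconsistent)  # True: tweety instantiates Penguin

The paper's contribution is a sound set of diagrammatic inference rules that make derivations of this kind stepwise and visual rather than procedural.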


2021
Author(s): Simone Coetzer, Katarina Britz

A successful application of ontologies relies on representing as much accurate and relevant domain knowledge as possible while maintaining logical consistency. Since a successfully implemented real-world ontology is likely to contain many concepts and intricate relationships between them, it is necessary to follow a methodology for debugging and refining the ontology. Many ontology debugging approaches have been developed to help the knowledge engineer pinpoint the cause of logical inconsistencies and rectify them in a strategic way. We show that existing debugging approaches can lead to unintuitive results, which may prompt the knowledge engineer to delete potentially crucial and nuanced knowledge. We provide a methodological and design foundation for weakening faulty axioms in a strategic way using defeasible reasoning tools. Our methodology draws on Rodler’s interactive ontology debugging approach and extends it to systematically find conflict-resolution recommendations. Importantly, our goal is not to convert a classical ontology into a defeasible one. Rather, we use the definition of exceptionality of a concept, which is central to the semantics of defeasible description logics, and the associated algorithm to determine the extent of each concept’s exceptionality (its ranking); then, starting with the statements containing the most general (least exceptional) concepts, weakened versions of the original statements are constructed until all inconsistencies have been resolved.
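
To make the ranking step concrete, here is a small self-contained Python sketch of exceptionality ranking in the style of rational closure on a propositional toy knowledge base; the encoding, names and example are illustrative assumptions, not the paper's implementation. An antecedent is exceptional when the materialised statements entail its negation; each defeasible statement is ranked by the iteration at which its antecedent stops being exceptional.

    from itertools import product

    # Toy KB: Bird |~ Flies, Penguin |~ ¬Flies; strict: Penguin -> Bird.
    # A statement (a, (c, pol)) reads "a normally c" (pol=True) or "a normally not c".
    defeasible = [("Bird", ("Flies", True)), ("Penguin", ("Flies", False))]
    strict = [("Penguin", ("Bird", True))]

    ATOMS = sorted({a for a, _ in defeasible + strict} |
                   {c for _, (c, _) in defeasible + strict})

    def satisfies(model, rules):
        """Does a truth assignment satisfy every rule, read as a material implication?"""
        return all(not model[a] or model[c] == pol for a, (c, pol) in rules)

    def exceptional(atom, rules):
        """An antecedent is exceptional iff no model of the rules makes it true,
        i.e. the materialisation entails its negation."""
        models = (dict(zip(ATOMS, vals))
                  for vals in product([True, False], repeat=len(ATOMS)))
        return not any(m[atom] and satisfies(m, rules) for m in models)

    remaining, rank = list(defeasible), 0
    while remaining:
        still = [s for s in remaining if exceptional(s[0], remaining + strict)]
        for a, (c, pol) in remaining:
            if (a, (c, pol)) not in still:
                print(f"rank {rank}: {a} normally {'' if pol else 'not '}{c}")
        if still == remaining:   # everything stays exceptional: infinite rank
            break
        remaining, rank = still, rank + 1
    # -> rank 0: Bird normally Flies
    # -> rank 1: Penguin normally not Flies

Weakening then proceeds from rank 0 upward: the statements with the least exceptional antecedents are the first candidates for weakened replacements when a conflict is diagnosed.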

