General information spaces: measuring inconsistency, rationality postulates, and complexity

Author(s):  
John Grant ◽  
Francesco Parisi

AI systems often need to deal with inconsistent information. For this reason, since the early 2000s, some AI researchers have developed ways to measure the amount of inconsistency in a knowledge base. By now there is a substantial body of research on various aspects of inconsistency measurement. The problem is that most of this work applies only to knowledge bases formulated as sets of formulas in propositional logic, and hence is not really applicable to the way information is actually stored. The purpose of this paper is to extend inconsistency measurement to real-world information. We first define the concept of a general information space, which encompasses various types of databases and scenarios in AI systems. Then we show how to transform any general information space into an inconsistency-equivalent propositional knowledge base, and finally apply propositional inconsistency measures to find the inconsistency of the general information space. Our method allows for the direct comparison of the inconsistency of different information spaces, even though the data is presented in different ways. We demonstrate the transformation on four general information spaces: a relational database, a graph database, a spatio-temporal database, and a Blocks world scenario, applying several inconsistency measures after performing the transformation. We then review the so-called rationality postulates that have been developed for propositional knowledge bases as a way to judge the intuitive properties of these measures. We show that although general information spaces may be nonmonotonic, the postulates can be transformed so that they apply to general information spaces, and we show which of the measures satisfy which of the postulates. Finally, we discuss the complexity of inconsistency measures for general information spaces.
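
As an illustration of the kind of propositional measure the paper builds on, the sketch below computes the well-known I_MI measure (the number of minimal inconsistent subsets) for a toy knowledge base by brute force. It is a minimal sketch of one such measure, not the paper's transformation of general information spaces.

```python
# A minimal sketch: the I_MI inconsistency measure (number of minimal
# inconsistent subsets) of a tiny propositional knowledge base,
# computed by exhaustive enumeration.
from itertools import combinations, product

def satisfiable(formulas, variables):
    """Check whether some truth assignment over `variables`
    satisfies every formula in `formulas`."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(f(assignment) for f in formulas):
            return True
    return False

def mi_measure(kb, variables):
    """Count the minimal inconsistent subsets of `kb`."""
    inconsistent = [s for r in range(1, len(kb) + 1)
                    for s in combinations(range(len(kb)), r)
                    if not satisfiable([kb[i] for i in s], variables)]
    minimal = [s for s in inconsistent
               if not any(set(t) < set(s) for t in inconsistent)]
    return len(minimal)

# Example KB: {a, ~a, a -> b, ~b}; formulas as predicates over an assignment.
kb = [
    lambda v: v["a"],                  # a
    lambda v: not v["a"],              # ~a
    lambda v: (not v["a"]) or v["b"],  # a -> b
    lambda v: not v["b"],              # ~b
]
print(mi_measure(kb, ["a", "b"]))  # 2: {a, ~a} and {a, a -> b, ~b}
```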

Author(s):  
Leila Amgoud ◽  
Dragan Doder

Several argument-based logics have been defined for handling inconsistency in propositional knowledge bases. We show that they may miss intuitive consequences, and discuss two sources of this drawback: the definition of a logical argument (i) may prevent formulas from being justified, and (ii) may allow irrelevant information in an argument's support. We circumvent these two issues by considering a general definition of argument and compiling each argument. Compilation amounts to forgetting, in an argument's support, any irrelevant variable. This operation returns zero, one, or several concise arguments, which we then use in an instance of Dung's abstract framework. We show that the resulting logic satisfies existing rationality postulates, namely consistency and closure under deduction. Furthermore, it is more productive than the existing argument-based and coherence-based logics.
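
The compilation step relies on propositional forgetting. Below is a minimal sketch, assuming the textbook definition forget(phi, x) = phi[x/true] OR phi[x/false] and representing formulas as Python predicates; the example formula is illustrative, not taken from the paper.

```python
# A minimal sketch of propositional forgetting, the operation used to
# compile arguments: forget(phi, x) = phi[x/True] or phi[x/False].

def forget(phi, x):
    """Return a predicate equivalent to phi with variable x forgotten."""
    def forgotten(v):
        return phi({**v, x: True}) or phi({**v, x: False})
    return forgotten

# phi = (a AND b) OR (x AND NOT x); x is irrelevant to phi's models.
phi = lambda v: (v["a"] and v["b"]) or (v["x"] and not v["x"])
psi = forget(phi, "x")  # equivalent to a AND b
print(psi({"a": True, "b": True}))   # True
print(psi({"a": True, "b": False}))  # False
```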


2021 ◽  
Vol 71 ◽  
Author(s):  
John Grant ◽  
Maria Vanina Martinez ◽  
Cristian Molinaro ◽  
Francesco Parisi

The problem of managing spatio-temporal data arises in many applications, such as location-based services, environmental monitoring, and geographic information systems, among many others. Spatio-temporal data arising from such applications often turn out to be inconsistent, i.e., to represent an impossible situation in the real world. Though several inconsistency measures have been proposed to quantify, in a principled way, the inconsistency of propositional knowledge bases, little effort has so far been devoted to inconsistency measures tailored to the spatio-temporal setting. In this paper, we define and investigate new measures that are particularly suitable for dealing with inconsistent spatio-temporal information because they explicitly take into account the spatial and temporal dimensions, as well as the dimension concerning the identifiers of the monitored objects. Specifically, we first define natural measures that look at individual dimensions (time, space, and objects), and then propose measures based on the notion of a repair. We then analyze their behavior w.r.t. common postulates defined for classical propositional knowledge bases, and find that the latter are not suitable for spatio-temporal databases, in that the proposed inconsistency measures often fail to satisfy them. In light of this, we argue that postulates, too, should explicitly take into account the spatial, temporal, and object dimensions, and we thus define "dimension-aware" counterparts of common postulates, which are indeed often satisfied by the new inconsistency measures. Finally, we study the complexity of the proposed inconsistency measures.
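
To make the dimension-aware idea concrete, here is a minimal sketch under an assumed toy data model (atoms loc(obj, region, time), with regions as sets of grid cells; this is not the paper's formalism): two atoms conflict when they place the same object in disjoint regions at the same time, and each dimension yields its own count.

```python
# A minimal sketch of dimension-aware inconsistency counts over
# hypothetical spatio-temporal atoms loc(obj, region, time).

atoms = [
    ("car1", frozenset({(0, 0), (0, 1)}), 1),
    ("car1", frozenset({(5, 5)}), 1),            # conflicts with the first
    ("car1", frozenset({(0, 1), (1, 1)}), 2),    # different time, no conflict
    ("car2", frozenset({(3, 3)}), 1),
]

def conflicts(atoms):
    """Pairs placing the same object in disjoint regions at the same time."""
    return [(a, b) for i, a in enumerate(atoms) for b in atoms[i + 1:]
            if a[0] == b[0] and a[2] == b[2] and not (a[1] & b[1])]

cs = conflicts(atoms)
# Object-dimension measure: how many monitored objects are in conflict?
print(len({a[0] for a, _ in cs}))  # 1 (only car1)
# Time-dimension measure: how many time points carry a conflict?
print(len({a[2] for a, _ in cs}))  # 1 (time point 1)
```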


2020 ◽  
Author(s):  
Matheus Pereira Lobo

This paper highlights two categories of knowledge bases: one built as a repository of links, and the other based on units of knowledge.


2014 ◽  
Vol 5 (2) ◽  
Author(s):  
Kaitlin Bova ◽  
Sara Bova ◽  
Kevin Hill ◽  
Mark Dixon ◽  
Diana Ivankovich ◽  
...  

Objectives: To evaluate a weblog (blog)-based course introducing pharmacogenetics (PGt) and personalized medicine (PM) relative to freshman pharmacy students' knowledge base. Methods: Incoming freshman pharmacy students were invited by email to enroll in a one-semester-hour, elective, online, blog-based course entitled "Personal Genome Evaluation". The course was offered during the students' first semester in college. A topic list related to PGt and PM was developed by a group of faculty, with topics presented via the blog once or twice weekly through week 14 of the 15-week semester. A pre-course survey and a post-course survey were sent to the students to compare their knowledge base relative to general information, drug response related to PGt, and PM. Results: Fifty-one freshman pharmacy students enrolled in the course and completed the pre-course survey, and 49 of the 51 students completed the post-course survey. There was an increase in the students' general, PGt, and PM knowledge base, as evidenced by a statistically significantly higher number of correct responses for 17 of 21 questions on the post-course survey as compared to the pre-course survey. Notably, following the course, students had an increased knowledge base relative to "genetic privacy", drug dosing based on metabolizer phenotype, and the breadth of PM, among other specific points. Conclusions: The study indicated that introducing PGt and PM via a blog format was feasible, increasing the students' knowledge of these emerging areas. The blog format is easily transferable and can be adopted by colleges/schools to introduce PGt and PM. Type: Case Study


2018 ◽  
Vol 2 ◽  
pp. e25614 ◽  
Author(s):  
Florian Pellen ◽  
Sylvain Bouquin ◽  
Isabelle Mougenot ◽  
Régine Vignes-Lebbe

Xper3 (Vignes Lebbe et al. 2016) is a collaborative knowledge base publishing platform that, since its launch in November 2013, has been adopted by over 2,000 users (Pinel et al. 2017). This is mainly due to its user-friendly interface and the simplicity of its data model. The data are stored in MySQL relational databases, but the exchange format uses the TDWG standard format SDD (Structured Descriptive Data; Hagedorn et al. 2005). However, each Xper3 knowledge base is a closed world that the author(s) may or may not share with the scientific community or the public by publishing its content and/or identification keys (Kopfstein 2016). The explicit taxonomic, geographic, and phenotypic limits of a knowledge base are not always well defined in the metadata fields. Conversely, terminology vocabularies, such as the Phenotype and Trait Ontology (PATO) and the Plant Ontology (PO), and the software to edit them, such as Protégé and Phenoscape, are essential in the semantic web but difficult to handle for biologists without computer skills. These ontologies constitute open worlds and are themselves expressed as RDF (Resource Description Framework) triples. Protégé offers visualization and reasoning capabilities for these ontologies (Gennari et al. 2003, Musen 2015). Our challenge is to combine the user-friendliness of Xper3 with the expressive power of OWL (Web Ontology Language), the W3C standard for building ontologies. We therefore focused on analyzing the representation of the same taxonomic content under Xper3 and under different models in OWL. After this critical analysis, we chose a description model that allows automatic export of SDD to OWL and can easily be enriched. We will present the results obtained and their validation on two knowledge bases, one on parasitic crustaceans (Sacculina) and the second on current ferns and fossils (Corvez and Grand 2014). The evolution of the Xper3 platform and the perspectives offered by this link with semantic web standards will be discussed.
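
As a rough illustration of the target representation, the sketch below emits RDF triples for one descriptive record using the rdflib Python library. The namespace, class name, and property name are hypothetical placeholders, not the export model the authors chose.

```python
# A minimal sketch: one Xper3-style descriptive record (a taxon with a
# descriptor/state pair, as in an SDD matrix) rendered as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/xper3/")  # placeholder namespace

g = Graph()
g.bind("ex", EX)

taxon = EX["Sacculina_carcini"]
g.add((taxon, RDF.type, EX.Taxon))                             # hypothetical class
g.add((taxon, RDFS.label, Literal("Sacculina carcini")))
g.add((taxon, EX.hasDescriptorState, EX["body_shape_sac_like"]))  # hypothetical property

print(g.serialize(format="turtle"))
```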


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms were used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
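
The statistical idea behind SDType can be sketched as follows: each property "votes" for the types it typically co-occurs with, and an untyped resource is scored by averaging those distributions. The toy data and the uniform property weights below are our simplifications; the published algorithm also weights properties by how discriminative their type distributions are.

```python
# A minimal sketch of property-based type inference in the SDType style.
from collections import Counter, defaultdict

# (subject, property, known subject type) observations from typed data.
observations = [
    ("Berlin", "locatedIn", "City"),
    ("Paris", "locatedIn", "City"),
    ("Rhine", "locatedIn", "River"),
    ("Berlin", "population", "City"),
    ("Paris", "population", "City"),
]

# Estimate P(type | property) from the observations.
dist = defaultdict(Counter)
for _, prop, typ in observations:
    dist[prop][typ] += 1

def type_scores(props):
    """Average the type distributions of a resource's properties."""
    scores = Counter()
    for p in props:
        total = sum(dist[p].values())
        for typ, n in dist[p].items():
            scores[typ] += (n / total) / len(props)  # uniform weights
    return scores

# An untyped resource with the properties locatedIn and population:
print(type_scores(["locatedIn", "population"]))
# Counter({'City': ~0.83, 'River': ~0.17}) -> predict type City
```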


Author(s):  
Yongrui Chen ◽  
Huiying Li ◽  
Yuncheng Hua ◽  
Guilin Qi

Formal query building is an important part of complex question answering over knowledge bases. It aims to build correct executable queries for questions. Recent methods try to rank candidate queries generated by a state-transition strategy. However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries. In this paper, we propose a new formal query building approach that consists of two stages. In the first stage, we predict the query structure of the question and leverage the structure to constrain the generation of candidate queries. We propose a novel graph generation framework to handle the structure prediction task, and design an encoder-decoder model to predict the argument of the predetermined operation in each generative step. In the second stage, we follow previous methods to rank the candidate queries. Experimental results show that our formal query building approach outperforms existing methods on complex questions while staying competitive on simple questions.
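
The benefit of structure prediction can be illustrated with a toy filter: if the predicted structure is reduced to a simple graph signature (here, a degree sequence over an undirected query graph; this stand-in is ours, not the paper's model), structurally mismatched candidates can be pruned before ranking.

```python
# A minimal sketch of structure-constrained candidate pruning.
from collections import Counter

def signature(edges):
    """Degree sequence of an undirected query graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return tuple(sorted(deg.values()))

# Candidate queries as edge lists over variables (?x, ?y) and entities.
candidates = [
    [("?x", "e1"), ("?x", "?y")],                 # chain of 3 nodes
    [("?x", "e1"), ("?x", "?y"), ("?x", "e2")],   # star around ?x
]

predicted = signature([("a", "b"), ("b", "c")])   # predicted shape: a chain
survivors = [q for q in candidates if signature(q) == predicted]
print(survivors)  # only the chain-shaped candidate remains for ranking
```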

