inconsistent database
Recently Published Documents


TOTAL DOCUMENTS: 15 (five years: 2)
H-INDEX: 4 (five years: 1)

Author(s): Meghyn Bienvenu, Camille Bourgaux

In this paper, we explore the issue of inconsistency handling over prioritized knowledge bases (KBs), which consist of an ontology, a set of facts, and a priority relation between conflicting facts. In the database setting, a closely related scenario has been studied and led to the definition of three different notions of optimal repairs (global, Pareto, and completion) of a prioritized inconsistent database. After transferring the notions of globally-, Pareto- and completion-optimal repairs to our setting, we study the data complexity of the core reasoning tasks: query entailment under inconsistency-tolerant semantics based upon optimal repairs, existence of a unique optimal repair, and enumeration of all optimal repairs. Our results provide a nearly complete picture of the data complexity of these tasks for ontologies formulated in common DL-Lite dialects. The second contribution of our work is to clarify the relationship between optimal repairs and different notions of extensions for (set-based) argumentation frameworks. Among our results, we show that Pareto-optimal repairs correspond precisely to stable extensions (and often also to preferred extensions), and we propose a novel semantics for prioritized KBs which is inspired by grounded extensions and enjoys favourable computational properties. Our study also yields some results of independent interest concerning preference-based argumentation frameworks.
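As a toy illustration of the repair notions in the abstract above, the sketch below enumerates the repairs of a tiny prioritized fact set and filters for the Pareto-optimal ones. Everything here is an assumption for illustration: the facts `a1`, `a2`, `b`, the single conflict, and the priority relation are hypothetical, the enumeration is naive (exponential in the number of facts), and the Pareto-improvement test follows one common formulation from the prioritized-repair literature, not necessarily the exact definitions of the paper.

```python
from itertools import combinations

# Hypothetical prioritized KB: a1 and a2 conflict, b is unconflicted,
# and the priority relation prefers a1 over a2.
facts = {"a1", "a2", "b"}
conflicts = {frozenset({"a1", "a2"})}   # sets of mutually inconsistent facts
prefers = {("a1", "a2")}                # (x, y) means x is preferred to y

def consistent(s):
    """A fact set is consistent if it contains no conflict set."""
    return not any(c <= s for c in conflicts)

def repairs(facts):
    """Subset-maximal consistent subsets of the facts (naive enumeration)."""
    subsets = [set(c) for n in range(len(facts), -1, -1)
               for c in combinations(sorted(facts), n) if consistent(set(c))]
    return [s for s in subsets if not any(s < t for t in subsets)]

def pareto_optimal(r, all_repairs):
    """R is Pareto-optimal if no repair R' contains a fact (outside R)
    preferred to every fact that R loses in moving to R'."""
    for rp in all_repairs:
        if rp == r:
            continue
        if any(all((a, b) in prefers for b in r - rp) for a in rp - r):
            return False  # rp is a Pareto improvement of r
    return True

rs = repairs(facts)
pareto = [r for r in rs if pareto_optimal(r, rs)]
```

On this example there are two repairs, {a1, b} and {a2, b}, but only {a1, b} is Pareto-optimal, since replacing a2 by the preferred fact a1 is a Pareto improvement.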


Author(s): Kelly Farrah, Danielle Rabb

Objective: The research sought to determine the prevalence of errata for drug trial publications that are included in systematic reviews, their potential value to reviews, and their accessibility via standard information retrieval methods.

Methods: The authors conducted a retrospective review of included studies from forty systematic reviews of drugs evaluated by the Canadian Agency for Drugs and Technologies in Health (CADTH) Common Drug Review (CDR) in 2015. For each article that was included in the systematic reviews, we conducted searches for associated errata using the CDR review report, PubMed, and the journal publishers' websites. The severity of errors described in errata was evaluated using a three-category scale: trivial, minor, or major. The accessibility of errata was determined by examining inclusion in bibliographic databases, costs of obtaining errata, time lag between article and erratum publication, and correction of online articles.

Results: The 40 systematic reviews included 127 articles in total, for which 26 errata were identified. These errata described 38 errors. When classified by severity, 6 errors were major, 20 were minor, and 12 were trivial. No single database contained all the errata. On average, errata were published 211 days after the original article (range: 15–1,036 days). All were freely available. Over one-third (9/24) of online articles remained uncorrected after errata publication.

Conclusion: Errata frequently described non-trivial errors that would either affect the interpretation of data in the article or, in fewer cases, affect the conclusions of the study. As such, it seems useful for reviewers to identify errata associated with included studies. However, publication time lag and inconsistent database indexing impair errata accessibility.


2012 · Vol. 23 (5) · pp. 1167–1182
Author(s): Ai-Hua Wu, Zi-Jing Tan, Wei Wang

2010 · Vol. 25 (3) · pp. 469–481
Author(s): Ai-Hua Wu, Zi-Jing Tan, Wei Wang

Author(s): José A. Alonso-Jiménez, Joaquín Borrego-Díaz, Antonia M. Chávez-González

Nowadays, data management on the World Wide Web needs to consider very large knowledge databases (KDBs). The larger a KDB is, the smaller the chance that it is consistent. Consistency-checking algorithms and systems fail to analyse very large KDBs, so many users have to work every day with inconsistent information. Database revision, that is, the transformation of the KDB into another, consistent database, is one solution to this inconsistency, but the task is computationally intractable. Paraconsistent logics are another useful option for working with inconsistent databases: they operate on inconsistent KDBs while blocking undesired inferences. From a philosophical (logical) point of view, paraconsistent reasoning is a necessity that human discourse itself practises. From a computational, logical point of view, we need to design logical formalisms that allow us to extract useful information from an inconsistent database, taking into account the diverse aspects of the semantics that are "attached" to deductive database reasoning (see Table 1). The arrival of the semantic web (SW) will force database users to work with KDBs expressed by logic formulas of higher syntactic complexity than classic logic databases.
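To make the paraconsistent idea concrete, the minimal sketch below assigns Belnap-style four-valued statuses to atoms of a hypothetical inconsistent fact base, so a local contradiction yields the status "both" rather than licensing arbitrary inferences. The fact representation and atom names are assumptions for illustration, not a formalism from the abstract above.

```python
# Hypothetical fact base: atoms asserted with a polarity.
# p is asserted both true and false; q is only asserted true.
facts = {("p", True), ("p", False),
         ("q", True)}

def fv_value(atom):
    """Four-valued status of an atom: 'true', 'false', 'both', or 'unknown'."""
    pos = (atom, True) in facts
    neg = (atom, False) in facts
    if pos and neg:
        return "both"      # contradiction stays local: no explosion
    if pos:
        return "true"
    if neg:
        return "false"
    return "unknown"       # atom never asserted either way
```

Here `fv_value("p")` reports the contradiction about p without affecting the answer for q, which is the useful-information-despite-inconsistency behaviour the passage describes.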


Author(s): Luciano Caroprese, Ester Zumpano

Data integration aims to provide uniform integrated access to multiple heterogeneous information sources that were designed independently and have closely related contents. However, the integrated view, constructed by combining the information provided by the different data sources according to a specified integration strategy, may contain inconsistent data; that is, it may violate some of the constraints defined on the data. In the presence of an inconsistent integrated database, in other words a database that does not satisfy some of its integrity constraints, two possible solutions have been investigated in the literature (Agarwal, Keller, Wiederhold, & Saraswat, 1995; Bry, 1997; Calì, Calvanese, De Giacomo, & Lenzerini, 2002; Dung, 1996; Grant & Subrahmanian, 1995; S. Greco & Zumpano, 2000; Lin & Mendelzon, 1999): repairing the database or computing consistent answers over the inconsistent database. Intuitively, a repair of the database consists of deleting or inserting a minimal number of tuples so that the resulting database is consistent, whereas computing the consistent answer consists of selecting the set of certain tuples (i.e., those belonging to all repaired databases) and the set of uncertain tuples (i.e., those belonging to a proper subset of the repaired databases).
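The repair and consistent-answer notions above can be sketched for the special case of a key constraint. The `emp` relation and its values below are hypothetical (two sources disagreeing on one employee's salary), and the repair enumeration is naive; under a key constraint, each repair keeps exactly one tuple per key value.

```python
from itertools import product

# Hypothetical integrated relation emp(id, salary) with key {id}:
# the sources disagree on the salary of employee 1.
emp = [(1, 50), (1, 60), (2, 70)]

def repairs(rows, key=lambda t: t[0]):
    """All repairs under a key constraint: one tuple chosen per key group."""
    groups = {}
    for t in rows:
        groups.setdefault(key(t), []).append(t)
    return [set(choice) for choice in product(*groups.values())]

rs = repairs(emp)
certain = set.intersection(*rs)        # tuples in every repair
uncertain = set.union(*rs) - certain   # tuples in some, but not all, repairs
```

On this instance there are two repairs, one per candidate salary for employee 1; the tuple (2, 70) is certain because it survives in both, while the two conflicting salary tuples are uncertain.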

