A Probabilistic Approach to Ordering Formulas in a Possibilistic Knowledge Base

Author(s):  
P. H. Giang ◽  
D. Dubois ◽  
H. Prade

In this paper, a careful analysis of interval-valued possibilistic knowledge bases indicates that there exists a natural probability distribution over the set of orderings of formulae compatible with the weights given in the knowledge base. We propose a new view, by which a possibilistic knowledge base can be considered in terms of such a probability distribution. It reveals some interconnections between probabilistic and possibilistic logics. We show that the principle of minimum specificity, widely used in possibilistic logic, is a special case of the principle of maximum likelihood (at least, from the standpoint of nonmonotonic reasoning). We propose a formula to calculate the probability of a defeasible conclusion. Moreover, the proposed view seems to be useful for other practical purposes. As an example, we apply it to the traditional problem of fusing possibilistic knowledge from many sources and derive a new solution.
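
To make the construction concrete, here is a minimal Python sketch (not the authors' exact formulation): it enumerates the total orderings of formulas compatible with hypothetical interval-valued weights, treats them as equiprobable, and estimates the probability that one formula is ranked above another. All weights and names are illustrative.

```python
from itertools import permutations

# Hypothetical interval-valued knowledge base: formula -> (lower, upper) weight.
kb = {"p": (0.6, 0.9), "q": (0.5, 0.7), "r": (0.75, 0.8)}

def is_realizable(order, kb):
    """Can point weights be chosen inside each interval so that weights are
    non-increasing along `order` (most plausible formula first)?
    Greedy check from the least plausible end."""
    needed = 0.0
    for f in reversed(order):
        lo, hi = kb[f]
        needed = max(needed, lo)   # smallest weight f may take at this rank
        if needed > hi:
            return False
    return True

orderings = [o for o in permutations(kb) if is_realizable(o, kb)]

# Probability that "p" is ranked strictly above "q", under the (illustrative)
# assumption that all compatible orderings are equiprobable:
prob = sum(o.index("p") < o.index("q") for o in orderings) / len(orderings)
print(len(orderings), "compatible orderings; P(p above q) =", round(prob, 3))
```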

Author(s):  
Gabriele Kern-Isberner ◽  
Christoph Beierle ◽  
Gerhard Brewka

Syntax splitting, first introduced by Parikh in 1999, is a natural and desirable property of KR systems. Syntax splitting combines two aspects: it requires that the outcome of a certain epistemic operation depend only on the relevant parts of the underlying knowledge base, where relevance is given a syntactic interpretation (relevance), and it requires that strengthening antecedents with irrelevant information have no influence on the obtained conclusions (independence). In the context of belief revision the study of syntax splitting has already proved useful and led to numerous new insights. In this paper we analyse syntax splitting in a different setting, namely nonmonotonic reasoning based on conditional knowledge bases. More precisely, we analyse inductive inference operators which, like system P, system Z, or the more recent c-inference, generate an inference relation from a conditional knowledge base. We axiomatize the two aforementioned aspects of syntax splitting, relevance and independence, as properties of such inductive inference operators. Our main results show that system P and system Z, whilst satisfying relevance, fail to satisfy independence. C-inference, in contrast, turns out to satisfy both relevance and independence and thus fully complies with syntax splitting.
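
The splitting itself is easy to picture. Below is a toy sketch, under an assumed set-based encoding of conditionals, of how a knowledge base partitions along a syntax splitting of its signature; relevance then demands that queries over one sub-signature be answered from the corresponding sub-base alone.

```python
# Conditionals (B | A) encoded as (antecedent_atoms, consequent_atoms) pairs.
kb = [({"bird"}, {"flies"}),        # (flies | bird)
      ({"penguin"}, {"bird"}),      # (bird | penguin)
      ({"rain"}, {"wet"})]          # (wet | rain)

sigma1 = {"bird", "flies", "penguin"}
sigma2 = {"rain", "wet"}

def split(kb, sigma1, sigma2):
    """Partition kb along the syntax splitting, if one exists."""
    part1, part2 = [], []
    for ante, cons in kb:
        atoms = ante | cons
        if atoms <= sigma1:
            part1.append((ante, cons))
        elif atoms <= sigma2:
            part2.append((ante, cons))
        else:
            raise ValueError("kb does not split over the given signatures")
    return part1, part2

kb1, kb2 = split(kb, sigma1, sigma2)
# Relevance: an inductive inference operator satisfies it if queries
# mentioning only sigma1 get the same answers from kb1 as from the full kb.
```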


Author(s):  
Christoph Beierle ◽  
Jonas Haldimann

Conditionals are defeasible rules of the form "If A then usually B", and they play a central role in many approaches to nonmonotonic reasoning. Normal forms of conditional knowledge bases consisting of a set of such conditionals are useful to create, process, and compare the knowledge represented by them. In this article, we propose several new normal forms for conditional knowledge bases. Compared to the previously introduced antecedent normal form, the reduced antecedent normal form (RANF) represents conditional knowledge with significantly fewer conditionals by taking nonmonotonic entailments licensed by system P into account. The renaming normal form (ρNF) addresses equivalences among conditional knowledge bases induced by renamings of the underlying signature. Combining the concept of renaming normal form with the other normal forms yields the renaming antecedent normal form (ρANF) and the renaming reduced antecedent normal form (ρRANF). For all newly introduced normal forms, we show their key properties regarding existence, uniqueness, model equivalence, and inferential equivalence, and we develop algorithms transforming every conditional knowledge base into an equivalent knowledge base in the respective normal form. For the most succinct normal form, the ρRANF, we present an algorithm, KBρra, that systematically generates knowledge bases over a given signature in ρRANF. We show that the generated knowledge bases are consistent, pairwise not antecedentwise equivalent, and pairwise not equivalent under signature renaming. Furthermore, the algorithm is complete in the sense that, when taking signature renamings and model equivalence into account, every consistent knowledge base is generated. The observation that normalizing the set of all knowledge bases over a signature Σ to ρRANF yields exactly the same result as KBρra(Σ) highlights the interrelationship between normal form transformations on the one hand and systematically generating knowledge bases in normal form on the other.
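
As a rough illustration of the renaming idea behind the ρNF, the following toy sketch canonicalizes a small conditional knowledge base under bijective renamings of its atoms by picking the lexicographically smallest renamed variant; the encoding and the canonicity criterion are illustrative, not the paper's construction.

```python
from itertools import permutations

def rename(kb, mapping):
    """Apply an atom renaming to a KB of (antecedent, consequent) pairs."""
    return sorted((tuple(sorted(mapping[a] for a in ante)),
                   tuple(sorted(mapping[a] for a in cons)))
                  for ante, cons in kb)

def renaming_canonical(kb, signature):
    """Lexicographically smallest variant of kb over all bijective
    renamings of its signature into canonical atom names."""
    canon = [chr(ord("a") + i) for i in range(len(signature))]
    return min(rename(kb, dict(zip(signature, perm)))
               for perm in permutations(canon))

# Conditionals (B|A) coded as (antecedent-atoms, consequent-atoms) pairs:
kb1 = [(("x",), ("y",)), (("y",), ("z",))]
kb2 = [(("u",), ("w",)), (("w",), ("v",))]
print(renaming_canonical(kb1, ["x", "y", "z"]) ==
      renaming_canonical(kb2, ["u", "w", "v"]))  # True: kb2 renames kb1
```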


Author(s):  
Gerhard Brewka ◽  
Matthias Thimm ◽  
Markus Ulbricht

Minimal inconsistent subsets of knowledge bases play an important role in classical logics, most notably for repair and inconsistency measurement. It turns out that for nonmonotonic reasoning a stronger notion is needed. In this paper we develop such a notion, called strong inconsistency. We show that—in an arbitrary logic, monotonic or not—minimal strongly inconsistent subsets play the same role as minimal inconsistent subsets in classical reasoning. In particular, we show that the well-known classical duality between hitting sets of minimal inconsistent subsets and maximal consistent subsets generalizes to arbitrary logics if the strong notion of inconsistency is used. We investigate the complexity of various related reasoning problems and present a generic algorithm for computing minimal strongly inconsistent subsets of a knowledge base. We also demonstrate the potential of our new notion for applications, focusing on repair and inconsistency measurement.
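
The definition lends itself to a direct, if naive, implementation. The sketch below checks strong inconsistency by brute-force enumeration against an abstract consistency oracle; the paper's generic algorithm is considerably more refined, and the oracle here is a made-up stand-in.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def strongly_inconsistent(H, K, consistent):
    """H is strongly K-inconsistent iff every H' with H <= H' <= K is
    inconsistent (for monotonic logics this reduces to inconsistency of H)."""
    return all(not consistent(H | set(extra)) for extra in subsets(K - H))

def minimal_strongly_inconsistent(K, consistent):
    si = [frozenset(H) for H in map(set, subsets(K))
          if strongly_inconsistent(H, K, consistent)]
    return [H for H in si if not any(G < H for G in si)]

# Toy oracle (purely illustrative): a set of integer "formulas" is
# inconsistent iff it contains both 1 and -1.
K = {1, -1, 2}
consistent = lambda S: not {1, -1} <= S
print(minimal_strongly_inconsistent(K, consistent))  # [frozenset({1, -1})]
```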


2020 ◽  
Author(s):  
Matheus Pereira Lobo

This paper highlights two categories of knowledge bases: one built as a repository of links, and the other based on units of knowledge.


2014 ◽  
Vol 19 (3) ◽  
pp. 130-133 ◽  
Author(s):  
Michelle McCarthy

Purpose – The purpose of this paper is to draw readers’ attention to the myriad ways of finding out about abuse towards people with learning disabilities.
Design/methodology/approach – Whilst acknowledging the continued importance of research studies specifically focused on the topic of abuse, this commentary reviews information about abuse of adults with learning disabilities from other sources, e.g. service audits and studies on sexual and personal relationships.
Findings – Having many sources of information about abuse against people with learning disabilities is a good thing, but there are some problems associated with this. First, some forms of abuse appear to be easier to find out about than others; second, there remains the difficult question of how the information can be used to improve the lives of people with learning disabilities.
Originality/value – This commentary encourages readers to take a broad view of abuse of people with learning disabilities and to use all the knowledge available to support individuals, whilst at the same time demanding social changes.


Mathematics ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. 1085
Author(s):  
Ilya E. Tarasov

This article discusses a method for approximating experimental data by functional dependencies that uses a probabilistic assessment of the deviation of the assumed dependence from the experimental data. The method introduces an independent parameter, the scale of the error probability distribution function, and makes it possible to synthesize deviation functions, forming spaces with a nonlinear metric, based on existing assumptions about the sources of errors and noise. Classical regression analysis can be obtained from the considered method as a special case. The article examines examples of the analysis of experimental data and shows the method's high resistance to single outliers in the sample under study. Since the introduction of an independent parameter increases the number of computations, the article also considers, with a view to practical application in measuring and information systems, the architecture of a specialized system-on-a-chip computing device and practical approaches to its implementation based on programmable logic integrated circuits.
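
To convey the flavor of the approach, here is a hedged sketch (not the article's exact formulation): a linear model is fitted by minimizing a deviation function derived from an assumed heavy-tailed error distribution with an explicit scale parameter s, which yields the outlier resistance described above; the Gaussian choice recovers ordinary least squares. All data and parameter values are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)
y[25] += 30.0                      # one gross outlier

def cauchy_nll(params, s=0.5):
    """Deviation under a heavy-tailed (Cauchy-type) error model with scale s;
    log1p(r^2/s^2) grows slowly, so a single outlier carries little weight."""
    a, b = params
    r = y - (a * x + b)
    return np.sum(np.log1p((r / s) ** 2))

def least_squares(params):
    """Gaussian special case: the usual sum of squared residuals."""
    r = y - (params[0] * x + params[1])
    return np.sum(r ** 2)

print(minimize(cauchy_nll, x0=[1.0, 0.0]).x)     # close to [2.0, 1.0]
print(minimize(least_squares, x0=[1.0, 0.0]).x)  # visibly pulled by the outlier
```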


Author(s):  
Mariusz Maslak

An algorithm for specifying the characteristic value of the random fire load density, depending on how the considered building compartment is used, is presented and discussed in detail. The proposed computational procedure is based on a probabilistic approach, an alternative to the traditional methodology in which the results obtained from an inventory of such a compartment form the basis for the evaluation. The sought value is estimated as the upper quantile of a Gumbel probability distribution, set at an appropriate level of the probability of its up-crossing. The formal model described in the paper is applied to two selected and qualitatively different design techniques used in practice: the first is based on the recommendations contained in the Eurocode EN 1991-1-2, whereas the second follows the rules specified in the standard NFPA 557.
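
The quantile step can be written down directly: for a Gumbel distribution with location mu and scale beta, the characteristic value exceeded with probability p is q = mu - beta * ln(-ln(1 - p)). The sketch below evaluates this for purely illustrative parameter values, not ones taken from EN 1991-1-2 or NFPA 557.

```python
import math

def gumbel_upper_quantile(mu, beta, p_exceed):
    """Solve F(q) = 1 - p for the Gumbel CDF F(q) = exp(-exp(-(q - mu)/beta))."""
    return mu - beta * math.log(-math.log(1.0 - p_exceed))

# Hypothetical values: mu = 420 MJ/m^2, beta = 120 MJ/m^2, exceedance p = 0.2
print(gumbel_upper_quantile(420.0, 120.0, 0.2))  # ~600 MJ/m^2
```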


2018 ◽  
Vol 2 ◽  
pp. e25614 ◽  
Author(s):  
Florian Pellen ◽  
Sylvain Bouquin ◽  
Isabelle Mougenot ◽  
Régine Vignes-Lebbe

Xper3 (Vignes Lebbe et al. 2016) is a collaborative knowledge base publishing platform that, since its launch in November 2013, has been adopted by over 2,000 users (Pinel et al. 2017). This is mainly due to its user-friendly interface and the simplicity of its data model. The data are stored in MySQL relational databases, but the exchange format uses the TDWG standard format SDD (Structured Descriptive Data; Hagedorn et al. 2005). However, each Xper3 knowledge base is a closed world that the author(s) may or may not share with the scientific community or the public by publishing content and/or identification keys (Kopfstein 2016). The explicit taxonomic, geographic, and phenotypic limits of a knowledge base are not always well defined in the metadata fields. Conversely, terminology vocabularies, such as the Phenotype and Trait Ontology (PATO) and the Plant Ontology (PO), and the software used to edit them, such as Protégé and Phenoscape, are essential in the semantic web but difficult to handle for biologists without computer skills. These ontologies constitute open worlds and are themselves expressed as RDF (Resource Description Framework) triples. Protégé offers visualization and reasoning capabilities for these ontologies (Gennari et al. 2003, Musen 2015). Our challenge is to combine the user-friendliness of Xper3 with the expressive power of OWL (Web Ontology Language), the W3C standard for building ontologies. We therefore focused on analyzing the representation of the same taxonomic content under Xper3 and under different models in OWL. After this critical analysis, we chose a description model that allows automatic export of SDD to OWL and can be easily enriched. We will present the results obtained and their validation on two knowledge bases, one on parasitic crustaceans (Sacculina) and the second on current ferns and fossils (Corvez and Grand 2014). The evolution of the Xper3 platform and the perspectives offered by this link with semantic web standards will be discussed.
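
As an indication of what the export direction could look like, here is a hypothetical rdflib sketch turning one descriptive character and state into OWL triples; the URIs and the modeling pattern are placeholders, not the description model actually chosen by the authors.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/xper3/")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# A character "carapace shape" with state "globular", attached to a taxon.
g.add((EX.CarapaceShape, RDF.type, OWL.Class))
g.add((EX.globular, RDF.type, EX.CarapaceShape))
g.add((EX.hasCharacterState, RDF.type, OWL.ObjectProperty))
g.add((EX.Sacculina_carcini, RDF.type, OWL.NamedIndividual))
g.add((EX.Sacculina_carcini, EX.hasCharacterState, EX.globular))
g.add((EX.Sacculina_carcini, RDFS.label, Literal("Sacculina carcini")))

print(g.serialize(format="turtle"))
```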


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is created either from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements have been added, while with SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
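
The core idea of SDType can be sketched in a few lines: each property a resource carries casts a vote for the resource's types, distributed according to how often that property's subjects bear each type in the data. The figures and the equal per-property weighting below are simplifications of the actual weighted, smoothed scheme.

```python
from collections import defaultdict

# P(type | subject uses property), estimated from the data (numbers made up):
type_dist = {
    "dbo:birthPlace": {"dbo:Person": 0.95, "dbo:Place": 0.03},
    "dbo:author":     {"dbo:Person": 0.10, "dbo:Book": 0.60, "dbo:Film": 0.20},
}

def sdtype_scores(properties):
    """Average the per-property type distributions into one score per type."""
    scores = defaultdict(float)
    for prop in properties:
        for t, p in type_dist.get(prop, {}).items():
            scores[t] += p / len(properties)   # equal-weight vote per property
    return dict(scores)

resource_props = ["dbo:birthPlace", "dbo:author"]
print(sdtype_scores(resource_props))
# Types scoring above a confidence threshold would be asserted as statements.
```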


Author(s):  
Yongrui Chen ◽  
Huiying Li ◽  
Yuncheng Hua ◽  
Guilin Qi

Formal query building is an important part of complex question answering over knowledge bases. It aims to build correct executable queries for questions. Recent methods try to rank candidate queries generated by a state-transition strategy. However, this candidate generation strategy ignores the structure of queries, resulting in a considerable number of noisy queries. In this paper, we propose a new formal query building approach that consists of two stages. In the first stage, we predict the query structure of the question and leverage the structure to constrain the generation of the candidate queries. We propose a novel graph generation framework to handle the structure prediction task and design an encoder-decoder model to predict the argument of the predetermined operation in each generative step. In the second stage, we follow the previous methods to rank the candidate queries. The experimental results show that our formal query building approach outperforms existing methods on complex questions while staying competitive on simple questions.
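
A stripped-down sketch of the two-stage idea follows: abstract each candidate query into its variable-connection skeleton and keep only the candidates matching the predicted structure before ranking. The skeleton encoding and the candidates are made up; the paper instead learns the structure with a graph-generation model.

```python
def skeleton(query_triples):
    """Abstract a query into its variable-connection pattern: variables are
    numbered in order of appearance, everything else collapses to CONST."""
    var_ids = {}
    def norm(term):
        if term.startswith("?"):
            return "?v" + str(var_ids.setdefault(term, len(var_ids)))
        return "CONST"
    return tuple(sorted(tuple(norm(t) for t in tr) for tr in query_triples))

# Stage 1 output: a predicted structure (two triples sharing one variable).
predicted = skeleton([("?x", "director", "CONST"), ("?x", "year", "?y")])

# Stage 2 input: candidate queries; only structure-compatible ones are ranked.
candidates = [
    [("?a", "director", "Nolan"), ("?a", "year", "?b")],  # matches
    [("?a", "director", "Nolan")],                        # wrong structure
]
filtered = [q for q in candidates if skeleton(q) == predicted]
print(filtered)
```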

