Truth Degrees Theory and Approximate Reasoning in 3-Valued Propositional Pre-Rough Logic

2013 ◽  
Vol 2013 ◽  
pp. 1-7
Author(s):  
Yingcang Ma ◽  
Juanjuan Zhang ◽  
Huan Liu

By means of the function induced by a logical formula A, the concept of the truth degree of the logical formula A is introduced in 3-valued pre-rough logic in this paper. Moreover, similarity degrees among formulas are proposed and a pseudometric is defined on the set of formulas, thereby establishing a possible framework for developing approximate reasoning theory in 3-valued pre-rough logic.
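The construction described in this abstract can be illustrated with a small sketch. The code below assumes the uniform-average definition of truth degree familiar from Wang-style quantitative logic (average of a formula's value over all valuations of its atoms), and substitutes simple max/min and Łukasiewicz connectives for the paper's pre-rough ones, so it shows the general idea rather than the paper's exact system.

```python
from itertools import product

# Illustrative sketch only: truth degree of a formula over n atoms as the
# average of its truth values across all 3^n valuations in {0, 1/2, 1}^n.
DEGREES = (0.0, 0.5, 1.0)

def truth_degree(formula, n_atoms):
    """Average value of `formula` (a function of a valuation tuple)
    over all 3**n_atoms valuations of its atoms."""
    vals = [formula(v) for v in product(DEGREES, repeat=n_atoms)]
    return sum(vals) / len(vals)

def similarity(f, g, n_atoms):
    """Similarity of two formulas as the truth degree of their
    biconditional (Lukasiewicz implication both ways, then min).
    The induced pseudometric is rho(A, B) = 1 - similarity(A, B)."""
    def equiv(v):
        a, b = f(v), g(v)
        return min(min(1.0, 1.0 - a + b), min(1.0, 1.0 - b + a))
    return truth_degree(equiv, n_atoms)

# Example: tau(p OR NOT p) with OR = max, NOT x = 1 - x.
tau = truth_degree(lambda v: max(v[0], 1.0 - v[0]), 1)
print(tau)  # (1 + 0.5 + 1) / 3 = 0.8333...
```

Note that in the 3-valued setting excluded middle is no longer fully true: its truth degree here is 5/6 rather than 1, which is exactly the kind of graded information a pseudometric on formulas can exploit.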

2011 ◽  
Vol 107 (1) ◽  
pp. 67-83 ◽  
Author(s):  
Yanhong She ◽  
Xiaoli He ◽  
Guojun Wang

Author(s):  
QINPING ZHAO ◽  
BO LI

A system of multivalued logical equations and an algorithm for solving it are put forward in this paper. Based on this work, we generalize SLD-resolution to multivalued logic and establish the corresponding truth-value calculus. As a result, an approximate reasoning system M is built. We present the language and inference rules of M, analyse the inconsistency of truth-degree assignments, and give M's solving strategies.
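The abstract does not specify the truth-value calculus of M, but the general flavour of degree bookkeeping in a multivalued resolution step can be sketched as follows. The min-based (Gödel t-norm) combination used here is an assumption for illustration, not the paper's actual calculus.

```python
# Hypothetical sketch of truth-degree propagation in a multivalued
# SLD-resolution step: resolving a goal against a rule combines the
# rule's assigned degree with the degrees established for its body atoms.

def resolve_degree(rule_degree, body_degrees):
    """Degree assigned to the resolved goal: the rule fires no more
    strongly than its weakest premise (min / Godel t-norm, assumed)."""
    return min([rule_degree] + list(body_degrees))

# Example: a rule asserted to degree 0.9 whose body atoms have been
# established to degrees 0.8 and 0.7.
print(resolve_degree(0.9, [0.8, 0.7]))  # 0.7
```

An inconsistency of assignments, in this simplified picture, would arise when two derivations force incompatible degrees onto the same atom.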


Author(s):  
Herman Akdag ◽  
Isis Truck

This article investigates different tools for knowledge representation and modelling in decision-making problems. In this variety of AI systems the experts' knowledge is often heterogeneous, that is, expressed in many forms: numerical, interval-valued, symbolic, linguistic, etc. Linguistic concepts (adverbs, sentences, sets of words…) are sometimes more efficient in many expertise domains than precise, interval-valued or fuzzy numbers. In these cases, the nature of the information is qualitative and the use of such concepts is appropriate and usual. Indeed, in the case of fuzzy logic for example, data are represented through fuzzy functions that allow an infinite number of truth values between 0 and 1. Instead, it can be more appropriate to use a finite number of qualitative symbols because, among other reasons, any arbitrary fuzzification becomes useless and an approximation will be needed at the end anyway. A deep study of this subject has recently been carried out in (Gottwald, 2007).

In this article we propose a survey of different tools manipulating these symbols, as well as of the way human reasoning handles natural linguistic statements. In order to imitate or automate expert reasoning, it is necessary to study the representation and handling of discrete and linguistic data (Truck & Akdag, 2005; Truck & Akdag, 2006). One such representation is the many-valued logic framework in which this article is situated. Many-valued logic, a generalization of classical Boolean logic, introduces truth degrees intermediate between true and false and enables the representation of partial truth. There are several many-valued logic systems (Łukasiewicz's, Gödel's, etc.) comprising finite-valued or infinite-valued sets of truth degrees. The system addressed in this article uses LM = {t0, …, ti, …, tM-1}, a totally ordered finite set of truth degrees (ti ≤ tj ⇔ i ≤ j) between t0 (false) and tM-1 (true), equipped with the operators ∨ (max), ∧ (min) and ¬ (negation or symbolic complementation, with ¬tj = tM-j-1), together with the Łukasiewicz implication →L: ti →L tj = min(tM-1, tM-1-(i-j)). These degrees can be seen as membership degrees: x partially belongs to a multiset A with degree ti if and only if x ∈ti A.

The many-valued logic presented here deals with linguistic statements of the form "x is va A", where x is a variable, va a scalar adverb (such as "very", "more or less", etc.) and A a gradable linguistic predicate (such as "tall", "hot", "young"…). The predicate A is satisfiable to a certain degree, expressed through the scalar adverb va. The following interpretation has been proposed (Akdag, De Glas & Pacholczyk, 1992): x is va A ⇔ "x is A" is ta-true.

Qualitative degrees constitute a good way to represent uncertain, non-quantified knowledge; indeed they can be associated with Zadeh's linguistic variables (Zadeh, 2004), which model approximate reasoning well. Using this framework, several qualitative approaches to uncertainty representation have been presented in the literature. For example, in (Darwiche & Ginsberg, 1992; Seridi & Akdag, 2001) the researchers seek a model which simulates cognitive activities, such as the management of uncertain natural-language statements defined over a finite, totally ordered set of symbolic values. The approach consists in representing and exploiting uncertainty through qualitative degrees, as probabilities do with numerical values. In order to manipulate these symbolic values, four elementary operators are outlined: multiplication, addition, subtraction and division (Seridi & Akdag, 2001). Two further kinds of operators are then given: modification tools based on scales, and symbolic aggregators.
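The operators defined in the abstract above translate directly into code. The sketch below represents the degrees t0, …, tM-1 by their indices 0, …, M-1; the choice M = 7 is arbitrary.

```python
# Finite many-valued operators on L_M = {t_0, ..., t_{M-1}},
# with degrees represented by their indices i in 0..M-1.

M = 7  # any finite number of truth degrees; 7 is an arbitrary choice

def disj(i, j):
    """t_i OR t_j = max(t_i, t_j)."""
    return max(i, j)

def conj(i, j):
    """t_i AND t_j = min(t_i, t_j)."""
    return min(i, j)

def neg(j):
    """NOT t_j = t_{M-1-j} (symbolic complementation)."""
    return M - 1 - j

def imp_lukasiewicz(i, j):
    """Lukasiewicz implication: t_i ->L t_j = min(t_{M-1}, t_{M-1-(i-j)})."""
    return min(M - 1, M - 1 - (i - j))

# Sanity checks: the implication yields t_{M-1} (fully true) whenever
# i <= j, and negation is an involution.
print(imp_lukasiewicz(2, 5))  # 6 (antecedent weaker than consequent)
print(imp_lukasiewicz(5, 2))  # 3 (= M-1-(5-2))
print(neg(neg(4)))            # 4
```

This index-level formulation makes the symbolic complementation ¬tj = tM-j-1 and the residuated behaviour of →L easy to verify mechanically.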


2011 ◽  
Vol 111 (2) ◽  
pp. 223-239
Author(s):  
Yanhong She ◽  
Xiaoli He ◽  
Guojun Wang

2012 ◽  
Vol 263-266 ◽  
pp. 3382-3386
Author(s):  
Xiao Long Chai ◽  
Gui Wu Hu ◽  
Ai Xiang Chen

A new operator fuzzy logic based on the theory of semantic integrals is defined. By combining the syntax and semantics of formulas, this logic, named SIOFL, assigns an index to the truth degree of a propositional formula, giving a quantitative measure of default information relative to the obtained knowledge. Being non-monotonic, the logic system is suitable for approximate reasoning when some information is lacking.


2016 ◽  
Vol 11 (3) ◽  
pp. 24-40
Author(s):  
Nguyễn Cát Hồ

In this paper we introduce a notion of knowledge base consisting of statements with truth degrees, in which every statement may have several truth degrees. A set of inference rules handling this kind of statement and a deductive reasoning method based on these rules are considered. The consistency of the knowledge base is also investigated.


2006 ◽  
Vol 8 (1) ◽  
pp. 161-170 ◽  
Author(s):  
Jaap Bos

This paper is an invited response to Peter Rudnytsky's ‘Guardians of truth’ article. Taking issue with what are presented as fundamental theoretical and methodological caveats, this article discusses the question of when and how differing discourses on the history of psychoanalysis may or may not be compatible. In particular the author questions the validity of a concept of truth as defined from within a field of knowledge, to arrive at definitions of discourse and dialogue that can be useful to acquire new forms of knowledge.


Author(s):  
Kevin Scharp ◽  
Stewart Shapiro ◽  
Bradley Armour-Garb

This chapter investigates the question of when it is reasonable to replace an inconsistent concept. After surveying a number of proposals for how one might understand constitutive principles, it goes on to endorse Burgess’s (2004) account of being pragmatically analytic, as a possible source of insight into constitutive principles. The chapter then raises a question: If truth is an inconsistent concept, does it need to be replaced? According to the argument in the chapter, when an inconsistent concept paralyzes valuable projects, it is time to replace it. And if we are to replace a concept, our replacement should be able to do the work that the inconsistency-yielding one did. This, of course, raises a fundamental question concerning what work the notion of truth does for us. The chapter mounts a case for the claim that inflationists, but not obvious deflationists, about truth should offer a replacement for the concept of truth.


Author(s):  
Fangyi Li ◽  
Changjing Shang ◽  
Ying Li ◽  
Jing Yang ◽  
Qiang Shen

Approximate reasoning systems facilitate fuzzy inference through activating fuzzy if–then rules in which attribute values are imprecisely described. Fuzzy rule interpolation (FRI) supports such reasoning with sparse rule bases where certain observations may not match any existing fuzzy rules, through manipulation of rules that bear similarity with an unmatched observation. This differs from classical rule-based inference that requires direct pattern matching between observations and the given rules. FRI techniques have been continuously investigated for decades, resulting in various types of approach. Traditionally, it is typically assumed that all antecedent attributes in the rules are of equal significance in deriving the consequents. Recent studies have shown significant interest in developing enhanced FRI mechanisms where the rule antecedent attributes are associated with relative weights, signifying their different importance levels in influencing the generation of the conclusion, thereby improving the interpolation performance. This survey presents a systematic review of both traditional and recently developed FRI methodologies, categorised accordingly into two major groups: FRI with non-weighted rules and FRI with weighted rules. It introduces, and analyses, a range of commonly used representatives chosen from each of the two categories, offering a comprehensive tutorial for this important soft computing approach to rule-based inference. A comparative analysis of different FRI techniques is provided both within each category and between the two, highlighting the main strengths and limitations while applying such FRI mechanisms to different problems. Furthermore, commonly adopted criteria for FRI algorithm evaluation are outlined, and recent developments on weighted FRI methods are presented in a unified pseudo-code form, easing their understanding and facilitating their comparisons.
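The core idea behind fuzzy rule interpolation can be sketched in a few lines, in the spirit of the classic Kóczy–Hirota approach but radically simplified to crisp (singleton) antecedents and consequents. Real FRI methods operate on fuzzy sets, typically via alpha-cuts; this sketch only shows how an observation that matches neither of two sparse rules yields a proportionally placed conclusion.

```python
# Simplified, singleton-valued sketch of linear rule interpolation:
# given sparse rules A1 -> B1 and A2 -> B2 with a1 < observation < a2,
# the conclusion is placed at the same relative position between b1, b2.

def interpolate(a1, b1, a2, b2, observation):
    """Linear interpolation of the conclusion for an unmatched observation."""
    lam = (observation - a1) / (a2 - a1)   # relative distance from rule 1
    return (1 - lam) * b1 + lam * b2

# Rules: "if x is about 2 then y is about 10",
#        "if x is about 6 then y is about 30";
# observation x = 3 matches neither rule exactly.
print(interpolate(2, 10, 6, 30, 3))  # 15.0
```

Weighted FRI variants, as surveyed in the article, refine exactly this step: each antecedent attribute contributes to the distance term in proportion to its learned or assigned weight, rather than equally.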

