COORDINATE-FREE LOGIC

2016 ◽  
Vol 9 (3) ◽  
pp. 522-555
Author(s):  
JOOP LEO

A new logic is presented without predicates, except for equality. Yet its expressive power is the same as that of predicate logic, and relations can be faithfully represented in it. Within this logic we also develop an alternative to set theory. There is a need for such a new approach, since we do not live in a world of sets and predicates, but rather in a world of things with relations between them.

2012 ◽  
Vol 195-196 ◽  
pp. 829-833
Author(s):  
Jin Wei Yu

In this paper, a new kind of presentation model for software modeling and transformation is proposed, composed of three parts: a static model, an action model, and a presentation model. The presentation model describes the appearance of the user interface in detail, while the interface template describes the macro-layout of the interface and the relations among its parts, with the interactive object as its basic element. An interface-template-based presentation model improves the rationality of the interface's macro-layout, enhances expressive and control power, and meets the requirement of automatically generating high-quality user interfaces. Because it depends little on the domain or on special techniques of the target applications, this solution can be applied widely.


2011 ◽  
pp. 63-77
Author(s):  
Hailong Wang ◽  
Zongmin Ma ◽  
Li Yan ◽  
Jingwei Cheng

In the Semantic Web context, information should be retrieved, processed, shared, reused, and aligned as automatically as possible. Our experience with such applications in the Semantic Web has shown that these tasks are rarely a matter of true or false, but rather procedures that require degrees of relatedness, similarity, or ranking. Apart from the wealth of applications that are inherently imprecise, information itself is often imprecise or vague. In order to represent and reason with this type of information in the Semantic Web, different general approaches for extending Semantic Web languages with the ability to represent imprecision and uncertainty have been explored. In this chapter, we focus our attention on fuzzy extension approaches, which are based on fuzzy set theory. We review the existing proposals for extending the theoretical counterpart of the Semantic Web languages, description logics (DLs), as well as the languages themselves. The discussion covers the expressive power of the fuzzy DL formalism, its syntax and semantics, the knowledge base, the decidability of the tableaux algorithm, and its computational complexity. The fuzzy extension of OWL is also discussed in this chapter.
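The fuzzy-set machinery underlying these fuzzy DL extensions can be illustrated with a minimal sketch. The concept names and membership degrees below are invented toy data; the operations use the Gödel t-norm (min) and t-conorm (max), one common choice in fuzzy DLs, not necessarily the one adopted by any particular proposal reviewed in the chapter.

```python
# Fuzzy concepts map individuals to membership degrees in [0, 1].
# Toy data (hypothetical individuals and concepts):
Tall = {"alice": 0.8, "bob": 0.4}
Strong = {"alice": 0.6, "bob": 0.9}

def fuzzy_and(a, b):
    # Goedel t-norm: conjunction degree is the minimum of the degrees.
    return {x: min(a[x], b[x]) for x in a}

def fuzzy_or(a, b):
    # Goedel t-conorm: disjunction degree is the maximum of the degrees.
    return {x: max(a[x], b[x]) for x in a}

def fuzzy_not(a):
    # Standard (Lukasiewicz) negation: 1 minus the degree.
    return {x: 1 - a[x] for x in a}

TallAndStrong = fuzzy_and(Tall, Strong)  # alice: 0.6, bob: 0.4
```

Queries against such a knowledge base then return degrees rather than Boolean answers, which is exactly the "degrees of relatedness, similarity, or ranking" behavior described above.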


1997 ◽  
Vol 62 (4) ◽  
pp. 1371-1378
Author(s):  
Vann McGee

Robert Solovay [8] investigated the version of the modal sentential calculus one gets by taking “□ϕ” to mean “ϕ is true in every transitive model of Zermelo-Fraenkel set theory (ZF).” Defining an interpretation to be a function * taking formulas of the modal sentential calculus to sentences of the language of set theory that commutes with the Boolean connectives and sets (□ϕ)* equal to the statement that ϕ* is true in every transitive model of ZF, and stipulating that a modal formula ϕ is valid if and only if, for every interpretation *, ϕ* is true in every transitive model of ZF, Solovay obtained a complete and decidable set of axioms. In this paper, we stifle the hope that we might continue Solovay's program by getting an analogous set of axioms for the modal predicate calculus. The set of valid formulas of the modal predicate calculus is not axiomatizable; indeed, it is complete . We also look at a variant notion of validity according to which a formula ϕ counts as valid if and only if, for every interpretation *, ϕ* is true. For this alternative conception of validity, we shall obtain a lower bound of complexity: the set of sentences of the language of set theory true in the constructible universe is 1-reducible to the set of valid modal formulas.


1970 ◽  
Vol 35 (2) ◽  
pp. 267-294 ◽  
Author(s):  
A. Trew

In this paper a number of nonstandard systems of predicate logic, with or without identity, are translated into subsystems of an applied standard system of predicate logic with identity. There are nonstandard theories of quantification which, following [16], are described as inclusive systems; their theorems are valid in all domains, including the empty domain. Theories of quantification which allow the substitution of denotationless terms for free variables are described, following [21], as systems of free logic; they are said to be free of the requirement that all singular terms must have denotations. Free logics and inclusive logics may each be of the other type. A nonstandard theory of identity, described, following [12], as a theory of nonreflexive identity, may be combined with a standard or with a nonstandard theory of quantification. Another kind of nonstandard system of predicate logic examined is a nonstandard version of a system of monadic predicate logic in which a distinction is made between sentence and predicate negation, and which is nonstandard in the sense that the laws relating sentence and predicate negation diverge from the standard ones. In the systems examined, this is combined with an inclusive quantification theory.


2015 ◽  
Vol 07 (04) ◽  
pp. 1550054 ◽  
Author(s):  
Faruk Karaaslan ◽  
Serkan Karataş

Molodtsov [Soft set theory-first results, Comput. Math. Appl. 37 (1999) 19–31] proposed the concept of soft set theory in 1999, which can be used as a mathematical tool for dealing with problems that involve uncertainty. Shabir and Naz [On bipolar soft sets, preprint (2013), arXiv:1303.1344v1 [math.LO]] defined the notion of a bipolar soft set in 2013. In this paper, we redefine the concept of bipolar soft sets and bipolar soft set operations in a more functional way than Shabir and Naz's definitions and operations. We also study their basic properties, and we present a decision-making method with an application.
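Molodtsov's basic notion can be sketched in a few lines: a soft set over a universe U maps each parameter to the subset of U satisfying it. The choice-value scoring below is a hypothetical simplification for illustration (an ordinary, not bipolar, soft set, with invented houses and parameters); it is not Karaaslan and Karataş's actual decision-making method.

```python
# Universe of alternatives (hypothetical houses to choose among).
U = {"h1", "h2", "h3"}

# Soft set (F, E): each parameter in E maps to the subset of U that has it.
soft_set = {
    "cheap":  {"h1", "h3"},
    "modern": {"h2", "h3"},
    "quiet":  {"h3"},
}

# Simple choice-value method: score each alternative by the number of
# parameters it satisfies, then pick the highest-scoring one.
scores = {h: sum(h in objs for objs in soft_set.values()) for h in U}
best = max(scores, key=scores.get)  # "h3" satisfies all three parameters
```

A bipolar soft set additionally pairs each parameter with a "not"-parameter mapped to a disjoint subset of U, so scores can be penalized as well as rewarded; the sketch above shows only the positive side.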


Extracting knowledge through machine learning techniques generally lacks predictive perfection, rarely achieving minimal error or maximal accuracy. Recently, researchers have been enjoying the fruits of Rough Set Theory (RST), using its simplicity and expressive power to uncover hidden patterns. In RST, the issue of attribute reduction is tackled mainly through the notion of 'reducts', using lower and upper approximations of rough sets based on a given information table with conditional and decision attributes. Hence, among the many methods proposed for dimension reduction, the RST approach has been shown to be simple and efficient for text-mining tasks. Text mining focuses on patterns in text files or corpora, which are first preprocessed to identify and remove irrelevant and replicated words without inducing any information loss for the classification models later generated and tested. In the current work, this hypothesis is taken as the core and tested on feedback for e-learning courses, using RST's attribute reduction and generating distinct n-gram models; finally, the results for selecting the most efficient model are presented.
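The lower and upper approximations that attribute reduction rests on can be computed directly from an information table. The objects, attribute values, and target concept below are invented toy data, a minimal sketch of standard RST rather than the paper's own text-mining pipeline: objects are grouped into equivalence classes by their values on the conditional attributes, and a target set X is approximated from below (classes wholly inside X) and from above (classes intersecting X).

```python
from collections import defaultdict

# Toy information table: object -> tuple of conditional attribute values.
table = {
    "o1": ("a", 1), "o2": ("a", 1),
    "o3": ("b", 2), "o4": ("b", 2),
}
X = {"o1", "o2", "o3"}  # target concept (e.g. a decision class)

# Equivalence classes of the indiscernibility relation: objects with
# identical attribute values are indistinguishable.
classes = defaultdict(set)
for obj, vals in table.items():
    classes[vals].add(obj)

# Lower approximation: union of classes entirely contained in X.
lower = {o for c in classes.values() if c <= X for o in c}
# Upper approximation: union of classes that intersect X.
upper = {o for c in classes.values() if c & X for o in c}
# Here lower = {o1, o2} and upper = {o1, o2, o3, o4}: o3 and o4 share
# attribute values, so X cannot be described exactly, only approximated.
```

A reduct is then a minimal subset of attributes that preserves these approximations, which is what makes RST-based dimension reduction possible without information loss.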


2021 ◽  
Vol 23 (4) ◽  
pp. 695-708
Author(s):  
Katarzyna Antosz ◽  
Małgorzata Jasiulewicz-Kaczmarek ◽  
Łukasz Paśko ◽  
Chao Zhang ◽  
Shaoping Wang

The lean maintenance concept is crucial for increasing the reliability and availability of maintenance equipment in manufacturing companies. By eliminating losses in maintenance processes, this concept reduces the number of unplanned downtimes and unexpected failures, and simultaneously influences a company's operational and economic performance. Despite the widespread use of lean maintenance, there is no structured approach to support the choice of methods and tools used to improve the maintenance function. Therefore, in this paper a new approach is proposed using machine learning methods and rough set theory. This approach supports decision makers in selecting methods and tools for the effective implementation of lean maintenance.


Pragmatics ◽  
2006 ◽  
Vol 16 (1) ◽  
pp. 103-138 ◽  
Author(s):  
Pieter A.M. Seuren

This paper aims at an explanation of the discrepancies between natural intuitions and standard logic in terms of a distinction between NATURAL and CONSTRUCTED levels of cognition, applied to the way human cognition deals with sets. NATURAL SET THEORY (NST) restricts standard set theory, cutting it down to naturalness. The restrictions are then translated into a theory of natural logic. The predicate logic resulting from these restrictions turns out to be that proposed in Hamilton (1860) and Jespersen (1917). Since, in this logic, NO is a quantifier in its own right, distinct from NOT-SOME, and given the assumption that natural lexicalization processes occur at the level of basic naturalness, single-morpheme lexicalizations for NOT-ALL should not occur, just as there is no single-morpheme lexicalization for NOT-SOME at that level. An analogous argument is developed for the systematic absence of lexicalizations for NOT-AND in propositional logic.


Author(s):  
Neil Tennant

The Law of Excluded Middle is not to be blamed for any of the logico-semantic paradoxes. We explain and defend our proof-theoretic criterion of paradoxicality, according to which the ‘proofs’ of inconsistency associated with the paradoxes are in principle distinct from those that establish genuine inconsistencies, in that they cannot be brought into normal form. Instead, the reduction sequences initiated by paradox-posing proofs ‘of ⊥’ do not terminate. This criterion is defended against some recent would-be counterexamples by stressing the need to use Core Logic’s parallelized forms of the elimination rules. We show how Russell’s famous paradox in set theory is not a genuine paradox; for it can be construed as a disproof, in the free logic of sets, of the assumption that the set of all non-self-membered sets exists. The Liar (by contrast) is still paradoxical, according to the proof-theoretic criterion of paradoxicality.

