Fallibilism and Consequence

2021 ◽  
Vol 118 (4) ◽  
pp. 214-226
Author(s):  
Adam Marushak

Alex Worsnip argues in favor of what he describes as a particularly robust version of fallibilism: subjects can sometimes know things that are, for them, possibly false (in the epistemic sense of ‘possible’). My aim in this paper is to show that Worsnip’s argument is inconclusive for a surprising reason: the existence of possibly false knowledge turns on how we ought to model entailment or consequence relations among sentences in natural language. Since it is an open question how we ought to think about consequence in natural language, it is an open question whether there is possibly false knowledge. I close with some reflections on the relation between possibly false knowledge and fallibilism. I argue that there is no straightforward way to use linguistic data about natural language epistemic modals to either verify or refute the fallibilist thesis.

Author(s):  
Paolo Santorio

On a traditional view, the semantics of natural language makes essential use of a context parameter, i.e., a set of coordinates that represents the situation of speech. In classical frameworks, this parameter plays two roles: it contributes to determining the content of utterances, and it is used to define logical consequence. This paper argues that recent empirical proposals about context shift in natural language, which are supported by an increasing body of cross-linguistic data, are incompatible with this traditional view. The moral is that context has no place in semantic theory proper. We should revert to the so-called multiple-indexing frameworks developed by Montague and others, and relegate context to the postsemantic stage of a theory of meaning.
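The two roles of the context parameter can be illustrated with a toy two-dimensional model (all names and values below are invented for illustration, not the paper's formalism): a sentence like "I am here" comes out true at every context evaluated at its own world, yet is not true at every (context, index) pair, so the content-fixing role and the consequence-defining role of context come apart.

```python
# Minimal two-dimensional evaluation sketch: a "context" fixes an
# agent, a location, and a world; an "index" is just a world of
# evaluation. Sentences are functions of (context, index).
from itertools import product

worlds = ["w1", "w2"]
contexts = [
    {"agent": "a", "loc": "L1", "world": "w1"},
    {"agent": "b", "loc": "L2", "world": "w2"},
]
# Where each agent is located in each world.
location = {("a", "w1"): "L1", ("a", "w2"): "L2",
            ("b", "w1"): "L2", ("b", "w2"): "L2"}

def i_am_here(context, index):
    """'I am here': the context's agent is at the context's location
    in the world of evaluation (the index)."""
    return location[(context["agent"], index)] == context["loc"]

# True at every context evaluated at that context's own world...
diagonal = all(i_am_here(c, c["world"]) for c in contexts)
# ...but not true at every (context, index) pair, so not necessary.
necessary = all(i_am_here(c, w) for c, w in product(contexts, worlds))

print(diagonal, necessary)
```

In a multiple-indexing framework of the kind the paper favors, only the index coordinates would figure in the semantics proper; the context coordinates would be handled postsemantically.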


2013 ◽  
Vol 6 (4) ◽  
pp. 659-679 ◽  
Author(s):  
ANDRÉS CORDÓN FRANCO ◽  
HANS VAN DITMARSCH ◽  
ANGEL NEPOMUCENO

Abstract: In van Benthem (2008), a dynamic consequence relation is proposed, defined as ${\psi _1}, \ldots ,{\psi _n} \models^d \phi$ iff $\models^{pa} [{\psi _1}] \ldots [{\psi _n}]\phi$, where the latter denotes consequence in public announcement logic, a dynamic epistemic logic. In this paper we investigate the structural properties of a conditional dynamic consequence relation $\models _{\rm{\Gamma }}^d$ extending van Benthem’s proposal. It takes into account a set of background conditions Γ, inspired by Makinson (2003), who calls this reasoning ‘modulo’ a set Γ. In the presence of common knowledge, conditional dynamic consequence is definable from (unconditional) dynamic consequence. An open question is whether dynamic consequence is compact. We further investigate a dynamic consequence relation for soft instead of hard announcements. Surprisingly, it shares many properties with (hard) dynamic consequence. Dynamic consequence relations provide a novel perspective on reasoning about protocols in multi-agent systems.
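The defining equivalence can be sketched in a propositional toy model (a deliberate simplification: real public announcement logic updates epistemic Kripke models with accessibility relations, which this sketch omits). A public announcement of ψ deletes the worlds where ψ is false, and ψ₁, …, ψₙ ⊨ᵈ φ holds iff φ is true at every world surviving the sequence of announcements.

```python
# Toy model of dynamic consequence via sequential announcements.
# Worlds are (p, q) truth-value pairs; formulas are predicates on worlds.

def announce(worlds, psi):
    """Public announcement of psi: keep only the psi-worlds."""
    return {w for w in worlds if psi(w)}

def dynamic_consequence(worlds, premises, phi):
    """psi_1, ..., psi_n |=^d phi: announce each premise in order,
    then check phi at every surviving world."""
    for psi in premises:
        worlds = announce(worlds, psi)
    return all(phi(w) for w in worlds)

worlds = {(p, q) for p in (True, False) for q in (True, False)}
p = lambda w: w[0]
q = lambda w: w[1]
p_or_q = lambda w: w[0] or w[1]
not_p = lambda w: not w[0]

# Announcing p-or-q and then not-p leaves only the world where q holds.
print(dynamic_consequence(worlds, [p_or_q, not_p], q))
```

Note that in the full epistemic setting, announcements of epistemic formulas can change their own truth value under update, which is what gives dynamic consequence its distinctive (non-classical) structural properties.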


2020 ◽  
Vol 22 (2) ◽  
pp. 5-31
Author(s):  
Brian Gravely

In this article, I investigate the link between VSO-VOS orders and differential object marking (DOM) via novel data from Galician. I present an analysis that sheds light on what may be required for a language to license DOM via movement, a mechanism once thought necessary for DOM licensing but recently called into question on the basis of an overwhelming amount of cross-linguistic data (cf. Kalin 2018). I also show evidence for variation in the featural specification of DPs that must be differentially marked, adding to the highly variable factors that contribute to the appearance of DOM on nominal objects in natural language. Focusing on full DP objects, I conclude that licensing DOM in Galician is predicated on both the level of animacy of postverbal nominals and object shift in VOS configurations.


Author(s):  
Friederike Moltmann

Natural language ontology is a branch of both metaphysics and linguistic semantics. Its aim is to uncover the ontological categories, notions, and structures that are implicit in the use of natural language, that is, the ontology that a speaker accepts when using a language. Natural language ontology is part of “descriptive metaphysics,” to use Strawson’s term, or “naive metaphysics,” to use Fine’s term, that is, the metaphysics of appearances as opposed to foundational metaphysics, whose interest is in what there really is. What sorts of entities natural language involves is closely linked to compositional semantics, namely what the contribution of occurrences of expressions in a sentence is taken to be. Most importantly, entities play a role as semantic values of referential terms, but also as implicit arguments of predicates and as parameters of evaluation. Natural language appears to involve a particularly rich ontology of abstract, minor, derivative, and merely intentional objects, an ontology many philosophers are not willing to accept. At the same time, a serious investigation of the linguistic facts often reveals that natural language does not in fact involve the sort of ontology that philosophers had assumed it does. Natural language ontology is concerned not only with the categories of entities that natural language commits itself to, but also with various metaphysical notions, for example the relation of part-whole, causation, material constitution, notions of existence, plurality and unity, and the mass-count distinction. An important question regarding natural language ontology is what linguistic data it should take into account. Looking at the sorts of data that researchers who practice natural language ontology have in fact taken into account makes clear that it is only presuppositions, not assertions, that reflect the ontology implicit in natural language. 
The ontology of language may be distinctive in that it may in part be driven specifically by language or the use of it in a discourse. Examples are pleonastic entities, discourse referents conceived of as entities of a sort, and an information-based notion of part structure involved in the semantics of plurals and mass nouns. Finally, there is the question of the universality of the ontology of natural language. Arguably, the same sort of reasoning that has been applied to argue for the universality of (generative) syntax should apply, in a suitable sense, to the ontology of natural language.


Author(s):  
Janusz Kacprzyk ◽  
Slawomir Zadrozny

The authors discuss an aspect of the scalability of data mining tools that goes beyond whether a tool retains its intended functionality as the problem size increases. They introduce a new concept of cognitive (perceptual) scalability: whether, as the problem size increases, the method remains fully functional in the sense of being able to provide intuitively appealing and comprehensible results to the human user. The authors argue that the use of natural language in linguistic data summaries provides high cognitive (perceptual) scalability because natural language is the only fully natural means of human communication and provides a common language for individuals and groups of different backgrounds, skills, and knowledge. They show that the use of Zadeh’s protoforms as general representations of linguistic data summaries, proposed by Kacprzyk and Zadrozny (2002; 2005a; 2005b), amplifies this advantage, leading to an ultimate cognitive (perceptual) scalability.


Author(s):  
David LaVergne ◽  
Judith Tiferes ◽  
Michael Jenkins ◽  
Geoff Gross ◽  
Ann Bisantz

Qualitative linguistic data provides unique, valuable information that can only come from human observers. Data fusion systems find it challenging to incorporate this “soft data” as they are primarily designed to analyze quantitative, hard-sensor data with consistent formats and qualified error characteristics. This research investigates how people produce linguistic descriptions of human physical attributes. Thirty participants were asked to describe seven actors’ ages, heights, and weights in two naturalistic video scenes, using both numeric estimates and linguistic descriptors. Results showed that not only were a large number of linguistic descriptors used, but they were also used inconsistently. Only 10% of the 189 unique terms produced were used by four or more participants. Especially for height and weight, we found that linguistic terms are poor devices for transmitting estimated values due to the large and overlapping ranges of numeric estimates associated with each term. Future work should attempt to better define the boundaries of inclusion for more frequently used terms and to create a controlled language lexicon to gauge whether or not that improves the precision of natural language terms.
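The overlap problem the study reports can be illustrated with a small sketch (the descriptor ranges below are invented for illustration, not the study's data): when each linguistic term is mapped to the range of numeric estimates it co-occurred with, large pairwise overlaps mean the term pins down little.

```python
# Pairwise overlap of numeric-estimate ranges associated with
# linguistic height descriptors (hypothetical ranges, in cm).

def overlap(a, b):
    """Length of the intersection of two closed intervals."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(0.0, hi - lo)

ranges = {"short": (150, 172), "average": (162, 183), "tall": (170, 198)}

terms = sorted(ranges)
for i, t1 in enumerate(terms):
    for t2 in terms[i + 1:]:
        ov = overlap(ranges[t1], ranges[t2])
        print(f"{t1}/{t2}: {ov:.0f} cm overlap")
```

A controlled lexicon of the kind the authors propose would, in effect, shrink these intersections so that each term carries more information about the underlying estimate.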

