Beyond Concepts

Author(s):  
Ruth Garrett Millikan

This book weaves together themes from natural ontology, philosophy of mind, and philosophy of language and information, areas of inquiry that have not recently been treated together. The sprawling topic is Kant’s question, “How is knowledge possible?”, but viewed from a contemporary naturalist standpoint. The assumption is that we are evolved creatures that use cognition as a guide in dealing with the natural world, and that the natural world is roughly as natural science has tried to describe it. Very unlike Kant, then, we must begin with ontology, with a rough understanding of what the world is like prior to cognition, and only later develop theories about the nature of cognition within that world and how it manages to reflect the rest of nature. In trying to get from ontology to cognition we must also traverse another non-Kantian domain: questions about the transmission of information, both through natural signs and through purposeful signs including, especially, language. Novelties include the introduction of unitrackers and unicepts, whose job is to recognize the same again as manifested through the jargon of experience; a direct reference theory for common nouns and other extensional terms; a naturalist sketch of uniceptual (roughly, conceptual) development; a theory of natural information and of language function that shows how properly functioning language carries natural information; a novel description of the semantics/pragmatics distinction; a discussion of perception as translation from natural informational signs; new descriptions of indexicals and demonstratives and of intensional contexts; and a new analysis of the reference of incomplete descriptions.


Author(s):  
Joshua Rust

John Rogers Searle (born July 31, 1932) is the Slusser Professor of Philosophy at the University of California, Berkeley. This analytic philosopher has made major contributions to the philosophy of mind, the philosophy of language, and social ontology. He is best known for his Chinese room argument, which aims to demonstrate that the formally described systems of computer functionalism cannot give rise to intentional understanding.
Searle’s early work focused on the philosophy of language. In Speech Acts (1969), he explores the hypothesis that speaking a language is a rule-governed form of behavior. Just as one must follow certain rules in order to count as playing chess, rules determine whether a speaker is making a promise, giving a command, asking a question, making a statement, and so forth. The kind of speech act an utterance is depends on, among other conditions, its propositional content and its illocutionary force. The content depicts the world as being a certain way, and the force specifies what a speaker is trying to do with that content. For example, for an utterance to qualify as a promise, the speaker must describe a future act (content) and intend that the utterance place the speaker under an obligation to perform that act (force).
In Intentionality (1983), Searle argues that the structure of language not only mirrors but is derivative of the structure of intentional thought, so that core elements of his analysis of speech acts can serve as the basis for a theory of intentionality. Just as we can promise only by bringing certain propositional contents under a certain illocutionary force, intentional states such as belief, desire, fear, and joy can be about the world only in virtue of a representative content and a psychological mode.
A theory of intentionality does not, by itself, explain how intentionality is possible given the basic facts of the world as identified by the natural sciences. Much of Searle’s work in the philosophy of mind, as found in Minds, Brains, and Science (1984) and The Rediscovery of the Mind (1992), is dedicated to the question of how mental facts, including but not limited to intentional facts, can be reconciled with basic, natural facts. Searle’s Chinese room argument is formulated in the service of rejecting computer functionalism, a prominent attempt at such a reconciliation. Searle’s positive view, which he describes as "biological naturalism," is that mental facts are both caused by and features of underlying neurophysiological processes.
In Speech Acts (1969), Searle claims that using language is akin to playing chess, in that both activities are made possible by participants following what he calls "constitutive rules," rules that must be followed in order for someone to count as undertaking those activities. Other institutional facts, such as money or the U.S. presidency, are likewise created and maintained in virtue of our following certain constitutive rules. For example, someone can count as a U.S. president only if that person is, among other conditions, a U.S. citizen who receives a majority of electoral votes. This thought is extended and explored in Searle’s two book-length contributions to the field of social ontology, The Construction of Social Reality (1995) and Making the Social World (2010).
In addition to the philosophy of language and social ontology, Searle has made book-length contributions to the philosophy of action (Rationality in Action (2001)) and the philosophy of perception (Seeing Things as They Are: A Theory of Perception (2015)). He also famously engaged Jacques Derrida’s critique of J. L. Austin’s discussion of illocutionary acts ("Reiterating the Differences: A Reply to Derrida" (1977)). Searle has summarized his various positions in Mind, Language, and Society: Philosophy in the Real World (1998) and Mind: A Brief Introduction (2004).


Author(s):  
Nicholas L. Sturgeon

Ethical naturalism is the project of fitting an account of ethics into a naturalistic worldview. Broadly understood, it includes nihilistic theories, which see no place for real values and no successful role for ethical thought in a purely natural world. The term ‘naturalism’ is often used more narrowly, however, to refer to cognitivist naturalism, which holds that ethical facts are simply natural facts and that ethical thought succeeds in discovering them. G.E. Moore (1903) attacked cognitivist naturalism as mistaken in principle, for committing what he called the ‘naturalistic fallacy’. He thought a simple test showed that ethical facts could not be natural facts (the ‘fallacy’ lay in believing they could be), and he took it to follow that ethical knowledge must rest on nonsensory intuition. Later writers have added other arguments for the same conclusions. Moore himself was in no sense a naturalist, since he thought that ethics could be given a ‘non-natural’ basis. Many who have elaborated his criticisms of cognitivist naturalism, however, have done so on behalf of ethical naturalism in the broad sense, and so have defended either ethical nihilism or some more modest constructive position, usually a version of noncognitivism. Noncognitivists concede to nihilists that nature contains no real values, but deny that it was ever the function of ethical thought to discover such things. They thus leave ethical thought room for success at some other task, such as providing the agent with direction for action. Defenders of cognitivist naturalism deny that there is a ‘naturalistic fallacy’ or that ethical knowledge need rest on intuition, and they have accused Moore and his successors of relying on dubious assumptions in metaphysics, epistemology, the philosophy of language, and the philosophy of mind. Many difficult philosophical issues have thus been implicated in the debate.


Author(s):  
Ruth Garrett Millikan

Using a varied list of natural signs as examples, the chapter gives straightforward reasons to reject several familiar attempts, such as Dretske’s, to capture what they have in common. Correlational theories run into an obdurate problem: defining, in a principled way, the reference class within which the correlations must hold. A similarity among the signs on the list can be found, however, by treating “being an infosign” as like “being an affordance.” “Natural information” is then the content carried by an infosign. A state of affairs A that is an infosign of a state of affairs B carries the natural information that B, relative to a reference class and relative to an animal that, owing to its actual location in the world, could interpret it.


Author(s):  
Béatrice Godart-Wendling

The term “philosophy of language” is intrinsically paradoxical: it denominates the main philosophical current of the 20th century, yet it has no univocal definition. While the emergence of this current was based on the idea that philosophical questions were only language problems that could be elucidated through logico-linguistic analysis, interest in this approach gave rise to philosophical theories that, although some of them share points of convergence, developed very different philosophical conceptions. The only constant across these theories is the recognition that this current of thought originated in the work of Gottlob Frege (1848–1925), thus marking what was to be called “the linguistic turn.” Despite the theoretical diversity within the philosophy of language, the history of this current can nevertheless be traced in four stages.
The first began in 1892 with Frege’s paper “Über Sinn und Bedeutung” and aimed to clarify language by using the rules of logic. The Fregean principle underpinning this program was that psychological considerations must be banished from linguistic analysis in order to avoid associating the meaning of words with mental pictures or states. The work of Frege, Bertrand Russell (1872–1970), G. E. Moore (1873–1958), Ludwig Wittgenstein (1889–1951) in the Tractatus (1921), Rudolf Carnap (1891–1970), and Willard Van Orman Quine (1908–2000) is representative of this period. On this logicist point of view, the questions raised mainly concerned syntax and semantics, since the goal was to define a formalism able to represent the structure of propositions and to explain how language can describe the world by mirroring it. The problem specific to this period was therefore the representing function of language, which placed the notions of reference, meaning, and truth at the heart of the philosophical debate.
The second phase of the philosophy of language was adumbrated in the 1930s with the courses Wittgenstein gave in Cambridge (The Blue and Brown Books), but it did not really take off until 1950–1960 with the work of Peter Strawson (1919–2006), the later Wittgenstein (Philosophical Investigations, 1953), John Austin (1911–1960), and John Searle (1932–). In spite of the very different approaches developed by these theorists, two main ideas characterized this period: first, that only the examination of natural (also called “ordinary”) language can give access to an understanding of how language functions, and second, that the specificity of this language resides in its ability to perform actions. It was therefore no longer a question of analyzing language in logical terms, but rather of considering it in itself, by examining the meaning of statements as they are used in given contexts. In this perspective, the pivotal concepts explored by philosophers became those of (situated) meaning, felicity conditions, use, and context.
The beginning of the 1970s initiated the third phase of this movement by orienting research in two quite distinct directions. The first, resulting from the work on proper names, natural-kind words, and indexicals undertaken by the logician-philosophers Saul Kripke (1940–), David Lewis (1941–2001), Hilary Putnam (1926–2016), and David Kaplan (1933–), brought credibility to the semantics of possible worlds. The second, conducted by Paul Grice (1913–1988) on human communicational rationality, harked back to the psychologism dismissed by Frege and conceived of the functioning of language as highly dependent on a theory of mind.
The focus then shifted to the inferences that the different protagonists in a linguistic exchange construct from the recognition of hidden intentions in the discourse of others. In this perspective, the concepts of implicitness, relevance, and cognitive efficiency became central and required a greater number of contextual parameters to account for them. In the wake of this research, many theorists turned to the philosophy of mind, as evidenced in the late 1980s by the work on relevance by Dan Sperber (1942–) and Deirdre Wilson (1941–). The contemporary period, marked by the thinking of Robert Brandom (1950–) and Charles Travis (1943–), is characterized by its orientation toward a radical contextualism and the return of an inferentialism that draws strongly on Frege. Within these theoretical frameworks, the notions of truth and reference no longer fall within the field of semantics but rather of pragmatics. The emphasis is placed on the commitment that speakers make when they speak, as well as on their responsibility with respect to their utterances.


Author(s):  
Alistair Fox

This chapter examines Merata Mita’s Mauri, the first fiction feature film in the world to be written and directed solely by an indigenous woman, as an example of “Fourth Cinema” – that is, a form of filmmaking that aims to create, produce, and transmit the stories of indigenous people, in their own image. It shows how Mita presents the coming-of-age story of a Māori girl who grows into an understanding of the spiritual dimension of her people’s relationship to the natural world and to the ancestors who have preceded them. The discussion demonstrates how the film adopts storytelling procedures that reflect a distinctively Māori view of time and are designed to signify the presence of the mauri (or life force) in the Māori world.


According to a long historical tradition, understanding comes in different varieties. In particular, it is said that understanding people has a different epistemic profile than understanding the natural world—it calls on different cognitive resources, for instance, and brings to bear distinctive normative considerations. Thus in order to understand people we might need to appreciate, or in some way sympathetically reconstruct, the reasons that led a person to act in a certain way. By comparison, when it comes to understanding natural events, like earthquakes or eclipses, arguably no appreciation of reasons or act of sympathetic reconstruction is needed—mainly because there are no reasons on the scene to be appreciated, and no perspectives to be sympathetically pieced together. In this volume, some of the world’s leading philosophers, psychologists, and theologians shed light on the various ways in which we understand the world, pushing debates on this issue to new levels of sophistication and insight.


Author(s):  
Richard Healey

The metaphor that fundamental physics is concerned to say what the natural world is like at the deepest level may be cashed out in terms of entities, properties, or laws. The role of quantum field theories in the Standard Model of high-energy physics suggests that fundamental entities, properties, and laws are to be sought in these theories. But the contextual ontology proposed in Chapter 12 would support no unified compositional structure for the world; a quantum state assignment specifies no physical property distribution sufficient even to determine all physical facts; and quantum theory posits no fundamental laws of time evolution, whether deterministic or stochastic. Quantum theory has made a revolutionary contribution to fundamental physics because its principles have permitted tremendous unification of science through the successful application of models constructed in conformity to them, but these models do not say what the world is like at the deepest level.


Author(s):  
Aaron Segal ◽  
Tyron Goldschmidt

This chapter formulates a version of idealism and argues for it. Sections 2 and 3 explicate this version of idealism: the world is mental through-and-through. Section 2 spells this out precisely and contrasts it with rival views. Section 3 draws a consequence from this formulation of idealism: idealism is necessarily true if true at all. Sections 4 and 5 make the case for idealism. Section 4 is defensive: it draws on the conclusion of Section 3 to reply to a central, perhaps the central, anti-idealist argument. Section 5 goes on the offensive: it develops a new argument for idealism based on the contemporary debate in the philosophy of mind, a debate that has been dominated by physicalism and dualism, with idealism almost totally neglected. This chapter aims to rectify that situation.

