Artificial Life and the Chinese Room Argument

2002, Vol. 8(4), pp. 371–378
Author(s): David Anderson, B. Jack Copeland

“Strong artificial life” refers to the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. Can John Searle's Chinese room argument [12]—originally intended by him to show that the thesis he dubs “strong AI” is false—be deployed against strong ALife? We have often encountered the suggestion that it can be (even in print; see Harnad [8]). We do our best to transfer the argument from the domain of AI to that of ALife. We do so in order to show once and for all that the Chinese room argument proves nothing about ALife. There may indeed be powerful philosophical objections to the thesis of strong ALife, but the Chinese room argument is not among them.

Author(s): Robert Van Gulick

John Searle’s ‘Chinese room’ argument aims to refute ‘strong AI’ (artificial intelligence), the view that instantiating a computer program is sufficient for having contentful mental states. Imagine a program that produces conversationally appropriate Chinese responses to Chinese utterances. Suppose Searle, who understands no Chinese, sits in a room and is passed slips of paper bearing strings of shapes which, unbeknown to him, are Chinese sentences. Searle performs the formal manipulations of the program and passes back slips bearing conversationally appropriate Chinese responses. Searle seems to instantiate the program, but understands no Chinese. So, Searle concludes, strong AI is false.


2017, Vol. 10(1), pp. 38–49
Author(s): Corey Baron

This paper argues against John Searle in defense of the potential for computers to understand language (“Strong AI”) by showing that semantic meaning is itself a second-order system of rules that connects symbols and syntax with extralinguistic facts. Searle’s Chinese Room Argument is contested on theoretical and practical grounds by identifying two problems in the thought experiment, and evidence about “machine learning” is used to demonstrate that computers are already capable of learning to form true observation sentences in the same way humans do. Finally, sarcasm is used as an example to extend the argument to more complex uses of language.


Scholarpedia, 2009, Vol. 4(8), pp. 3100
Author(s): John Searle

2018
Author(s): Emily L. Dolson, Anya E. Vostinar, Michael J. Wiser, Charles A. Ofria

Building more open-ended evolutionary systems can simultaneously advance our understanding of biology, artificial life, and evolutionary computation. In order to do so, however, we need a way to determine when we are moving closer to this goal. We propose a set of metrics that allow us to measure a system's ability to produce commonly-agreed-upon hallmarks of open-ended evolution: change potential, novelty potential, complexity potential, and ecological potential. Our goal is to make these metrics easy to incorporate into a system, and comparable across systems so that we can make coherent progress as a field. To this end, we provide detailed algorithms (including C++ implementations) for these metrics that should be easy to incorporate into existing artificial life systems. Furthermore, we expect this toolbox to continue to grow as researchers implement these metrics in new languages and as the community reaches consensus about additional hallmarks of open-ended evolution. For example, we would welcome a measurement of a system's potential to produce major transitions in individuality. To confirm that our metrics accurately measure the hallmarks we are interested in, we test them on two very different experimental systems: NK Landscapes and the Avida Digital Evolution Platform. We find that our observed results are consistent with our prior knowledge about these systems, suggesting that our proposed metrics are effective and should generalize to other systems.
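The novelty-potential hallmark described above can be given a rough informal sketch. The following Python snippet is an illustrative approximation only, not the authors' published C++ implementation: the function name, the representation of a population as a list of hashable phenotype labels, and the per-generation counting scheme are all assumptions made here for clarity.

```python
def novelty_potential(generations):
    """Count never-before-seen phenotypes appearing in each generation.

    `generations` is a sequence of populations, each population being an
    iterable of hashable phenotype labels. Returns one count per
    generation: how many phenotypes in that generation had never been
    observed in any earlier generation. A system that keeps producing
    nonzero counts keeps generating novelty (illustrative sketch only).
    """
    seen = set()          # all phenotypes observed so far
    counts = []
    for population in generations:
        new = {p for p in population if p not in seen}
        counts.append(len(new))
        seen.update(population)
    return counts
```

For example, over three generations with phenotype sets {A, B}, {B, C}, and {A, D}, the counts would be [2, 1, 1]: two novel phenotypes at the start, then one genuinely new phenotype in each later generation despite repeats.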


Author(s): Joshua Rust

John Rogers Searle (born July 31, 1932) is the Slusser Professor of Philosophy at the University of California, Berkeley. This analytic philosopher has made major contributions to the fields of the philosophy of mind, the philosophy of language, and social ontology. He is best known for his Chinese room argument, which aims to demonstrate that the formally described systems of computer functionalism cannot give rise to intentional understanding. Searle’s early work focused on the philosophy of language, where, in Speech Acts (1969), he explores the hypothesis that speaking a language is a rule-governed form of behavior. Just as one must follow certain rules in order to be considered to be playing chess, rules determine whether a speaker is making a promise, giving a command, asking a question, making a statement, and so forth. The kind of speech act that an utterance is depends on, among other conditions, its propositional content and illocutionary force. The content depicts the world as being a certain way, and the force specifies what a speaker is trying to do with that content. For example, for an utterance to qualify as a promise a speaker must describe a future act (content) and intend that the utterance place him or herself under an obligation to do that act (force). In Intentionality (1983), Searle argues that the structure of language not only mirrors but is derivative of the structure of intentional thought, so that core elements of his analysis of speech acts can be used as the basis for a theory of intentionality. Just as we can only promise by bringing certain propositional contents under a certain illocutionary force, intentional states such as belief, desire, fear, and joy can only be about the world in virtue of a representative content and a psychological mode. A theory of intentionality does not explain how intentionality is possible, given the basic facts of the world as identified by the natural sciences. 
Much of Searle’s work in the philosophy of mind, as found in Minds, Brains, and Science (1984) and The Rediscovery of the Mind (1992), is dedicated to the question of how mental facts, including but not limited to intentional facts, can be reconciled with basic, natural facts. Searle’s Chinese room argument is formulated in the service of rejecting computer functionalism, a prominent attempt at such reconciliation. Searle’s positive view, which he describes as "biological naturalism," is that mental facts are both caused by and features of underlying neurophysiological processes. In Speech Acts (1969), Searle claims that using language is akin to playing chess, in that both activities are made possible by participants following what he describes as "constitutive rules," rules that must be followed in order for someone to be considered to be undertaking those activities. Other institutional facts, such as money or the U.S. presidency, are also created and maintained in virtue of our following certain constitutive rules. For example, someone can only count as a U.S. president if that person is, among other conditions, a U.S. citizen who receives a majority of electoral votes. This thought is extended and explored in Searle’s two book-length contributions to the field of social ontology, The Construction of Social Reality (1995) and Making the Social World (2010). In addition to the philosophy of language and social ontology, Searle has made book-length contributions to the philosophy of action (Rationality in Action (2001)) and the philosophy of perception (Seeing Things as They Are: A Theory of Perception (2015)). He also famously engaged Jacques Derrida’s critique of J. L. Austin’s discussion of illocutionary acts ("Reiterating the Differences: A Reply to Derrida" (1977)). Searle has summarized his various positions in Mind, Language, and Society: Philosophy in the Real World (1998) and Mind: A Brief Introduction (2004).


2019, pp. 254–263
Author(s): Alan J. McComas

This chapter considers the question of whether nonliving systems can acquire consciousness. It explores contemporary advances in technology, particularly in the field of artificial intelligence. The chapter also considers whether consciousness could be preserved if inorganic matter replaced the components with which organisms experience consciousness. These and similar questions about nonhuman intelligence and consciousness are fleshed out with scenarios and thought experiments proposed throughout the 20th century, such as John Searle’s Chinese room argument and the archangel paradigm proposed by C. D. Broad. The chapter concludes with reflections on the human being’s inability to truly experience consciousness in the same way as nonhumans.

