Wittgenstein on language and artificial intelligence: The Chinese-room thought experiment revisited

Synthese ◽  
1983 ◽  
Vol 56 (3) ◽  
pp. 339-349 ◽  
Author(s):  
Klaus K. Obermeier


2019 ◽
pp. 254-263
Author(s):  
Alan J. McComas

This chapter considers whether nonliving systems can acquire consciousness. It explores contemporary advances in technology, particularly in the field of artificial intelligence, and asks whether consciousness could still arise if inorganic matter replaced the components with which organisms experience it. These and similar questions about nonhuman intelligence and consciousness are fleshed out with scenarios and thought experiments proposed throughout the 20th century, such as John Searle’s Chinese room argument and the archangel paradigm proposed by C. D. Broad. The chapter concludes with reflections on human beings’ inability to truly experience consciousness in the way nonhumans would.


Author(s):  
Robert Van Gulick

John Searle’s ‘Chinese room’ argument aims to refute ‘strong AI’ (artificial intelligence), the view that instantiating a computer program is sufficient for having contentful mental states. Imagine a program that produces conversationally appropriate Chinese responses to Chinese utterances. Suppose Searle, who understands no Chinese, sits in a room and is passed slips of paper bearing strings of shapes which, unbeknown to him, are Chinese sentences. Searle performs the formal manipulations of the program and passes back slips bearing conversationally appropriate Chinese responses. Searle seems to instantiate the program, but understands no Chinese. So, Searle concludes, strong AI is false.
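The purely formal character of the manipulations can be made concrete with a small sketch. The following toy program is an illustration only, not anything from the source: the rule book and phrases are hypothetical stand-ins for Searle's program, and the point is that responses are produced by shape-matching alone, which is all the room's operator does.

```python
# A minimal sketch of purely syntactic symbol manipulation, as in the
# Chinese room. To the operator, keys and values are just shapes; the
# program never represents meaning, only string identity.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # hypothetical rule: shape in -> shape out
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(slip: str) -> str:
    """Return a conversationally appropriate slip by rule-following alone."""
    # Compare the incoming shapes against the rule book and copy out the
    # matching response; no step here involves understanding Chinese.
    return RULE_BOOK.get(slip, "对不起，我不明白。")

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # a fluent reply the operator cannot read
```

Nothing in the lookup depends on what the strings mean; the same code would work on arbitrary byte sequences, which is exactly the intuition the argument trades on.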


2018 ◽  
Vol 39 (1) ◽  
pp. 61-64 ◽  
Author(s):  
Peter Buell Hirsch

Purpose: Artificial intelligence and machine learning have spread rapidly across every aspect of business and social activity. The purpose of this paper is to examine how this rapidly growing field of analytics might be put to use in the area of reputation risk management.
Design/methodology/approach: The approach taken was to examine in detail the primary and emerging applications of artificial intelligence to determine how they could be applied to preventing and mitigating reputation risk, using machine learning to identify early signs of behaviors that could lead to reputation damage.
Findings: This review identified at least two areas in which artificial intelligence could be applied to reputation risk management: the use of machine learning to analyze employee emails in real time to detect early signs of aberrant behavior, and the use of algorithmic game theory to stress-test business decisions to determine whether they contain perverse incentives leading to potential fraud.
Research limitations/implications: Because this viewpoint is by its nature a thought experiment, the author has not yet tested the practicality or feasibility of the uses of artificial intelligence it describes.
Practical implications: Should the concepts described prove viable in real-world application, they would give companies extraordinarily powerful tools for identifying risky behaviors in development, long before those behaviors had run far enough to create major reputation risk.
Social implications: By identifying risky behaviors at an early stage and preventing them from turning into reputation risks, the methods described could help restore and maintain trust in the relationship between companies and their stakeholders.
Originality/value: To the best of the author’s knowledge, artificial intelligence has not previously been described as a potential tool in reputation risk management.
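As a rough illustration of the first finding, the kind of early-warning screen the paper envisions could be prototyped as unsupervised anomaly detection over per-message features. The sketch below is not the paper's method, which the author explicitly leaves untested; the features are hypothetical placeholders, and scikit-learn's IsolationForest simply stands in for "machine learning that flags aberrant behavior."

```python
# A minimal sketch (assumed pipeline, not the paper's) of flagging
# anomalous emails from simple per-message features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per email:
# [hour_sent, n_recipients, n_external_recipients, n_attachments]
emails = np.array([
    [9, 2, 0, 0],
    [10, 3, 1, 1],
    [11, 1, 0, 0],
    [14, 2, 0, 1],
    [3, 40, 35, 6],   # outlier: bulk external send at 3 a.m.
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(emails)
flags = detector.predict(emails)  # -1 marks messages flagged for human review
print(flags)
```

In practice the hard problems are feature design, base rates, and the privacy implications of scanning employee email, none of which this toy addresses.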


Author(s):  
Ryosuke Yokoi ◽  
Kazuya Nakayachi

Objective: Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs.
Background: Trust in ACs influences several variables, including the intention to adopt AC technology. Studies on risk perception have verified that shared values determine trust in risk managers, and previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs.
Method: We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), providing participants with different moral dilemma scenarios. Trust in ACs was measured through a questionnaire.
Results: Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we found no statistical evidence that shared deontological belief had an effect on trust in ACs.
Conclusion: The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on which values ACs share with humans.
Application: To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans in order to enhance trust in ACs.


2017 ◽  
Vol 60 (1) ◽  
pp. 28-39
Author(s):  
Nenad Filipovic

The Chinese room argument is a famous argument introduced by John Searle, in which Searle presents various problems with the claim that it is possible for an artificial intelligence to understand a language in the way that intelligent beings such as humans do. The argument was influential enough to spark, in the decades following it, numerous responses and critiques, along with a few alleged improvements to it from Searle. In this article, I will analyze one atypical critique of Searle's argument, made by Mark Sprevak. Sprevak, unlike the other critics of the argument, agrees with Searle that understanding does not exist in the Chinese room in any way, but he claims that, contrary to Searle, the Chinese room cannot execute every possible program. Because of that, Searle cannot draw the strong conclusion he wants from the Chinese room argument. In this article, I will analyze Searle's argument, give a brief overview of the typical responses to it, and analyze Sprevak's response. In the last section, I will present an argument showing that Sprevak, if he wants to keep his conclusions, must either give up one part of his response or accept one of the typical responses to Searle's argument, thus making his own response dependent on the responses of others.


2018 ◽  
pp. 1-32
Author(s):  
Arthur S. Reber

The long-standing philosophical argument generally known as “hardware-independent functionalism” is presented. This position maintains that consciousness is at its heart computational and that any artifact that carried out all the causal functions of a mind would become conscious. The position is critiqued and shown to be hopelessly flawed. There is a long discussion of the “other minds” problem (i.e., “How do we know whether another entity, organism, or person is in fact conscious?”). Included is an equally long review of Tom Nagel’s famous question (“What’s it like to be a bat?”) applied to robots, followed by a review of John Searle’s “Chinese room,” a thought experiment, now over 35 years old, that lays bare the futility of the functionalist position. It is acknowledged that there is a firm, almost compelling tendency to endow artifacts such as human-appearing robots with sentience, and the reasons for this are discussed. The chapter ends with a summary.


Problemos ◽  
2019 ◽  
Vol 96 ◽  
pp. 121-133
Author(s):  
Hasan Çağatay

With the Chinese room thought experiment, John Searle (1980) advocates the thesis that it is impossible for computers to think in the same way that human beings do. This article intends, first, to show that the Chinese room does not justify or even test this thesis and, second, to describe exactly how the person in the Chinese room could learn Chinese. Regarding this learning process, Searle ignores the relevance of an individual’s pattern-recognition capacity for understanding. To counter Searle’s claim, this paper examines a series of thought experiments inspired by the Chinese room, aiming to underline the importance of pattern recognition for the emergence of understanding.


Artnodes ◽  
2020 ◽  
Author(s):  
Meredith Tromble

What can art do for artificial intelligence? This essay circles around this question from a viewpoint grounded in the embodied knowledge base of contemporary art. The author employs the term “feelthink” to refer to the shifting webs of perception, emotion, thought, and action probed by artists engaging AI. Tracing several metaphors used by artists to consider AI, the author identifies points where the metaphors delaminate, pulling away from the phenomena to which they refer. The author advocates for these partial and imagistic understandings of AI as probes which, despite or because of their flaws, contribute important ideas for the development and cultural positioning of AI entities. The author further questions the limited scope of art ideas addressed in AI research and proposes a thought experiment in which art joins industry as a source of questions for developing artificial intelligences. In conclusion, the essay’s structuring metaphor is described as an example of “feelthink” at work.

