Making AI Intelligible

Published by Oxford University Press

ISBN: 9780192894724, 9780191915604

2021, pp. 51-58
Author(s): Herman Cappelen, Josh Dever

This short chapter does two things. First, it shows that workers in AI do in fact frequently talk as if AI systems express contents, and it presents the argument that the complex nature of AI systems' actions and communications, and the way those actions and communications have 'aboutness', strongly suggests a contentful interpretation of them, even if they are very different from the complex behaviours of human beings. Second, it introduces some philosophical terminology that captures various aspects of language use, such as the ones in the title, both to make clearer what one is claiming, philosophically speaking, when one says that AI systems communicate, and to provide a vocabulary for the next few chapters.


2021, pp. 31-48
Author(s): Herman Cappelen, Josh Dever

There is a view prevalent among people working in the artificial intelligence field to the effect that philosophers have nothing to tell us about AI and (putative) AI communication—that philosophy cannot help with the mathematical problems of making practical advances in AI and is, therefore, no more than a diverting irrelevance. This chapter rebuts that view. It takes the form of a dialogue between a philosopher and someone working in AI who is sceptical about philosophy’s relevance to AI. The sceptic, Alfred, argues that philosophical issues about the nature of communication are irrelevant to ongoing work in AI; the philosopher responds, showing that the sceptic’s supposedly unphilosophical perspective in fact harbours philosophical presuppositions, and ones that are worth discussing—in particular, the question of meaning and content within AI systems.


2021, pp. 103-116
Author(s): Herman Cappelen, Josh Dever

This chapter continues the process of anthropocentric abstraction, here concentrating on proper names. Do AI systems use proper names? Using the running example of 'SmartCredit', it highlights problems concerning how to treat the output of an AI system when some, but not all or most, of the information in its neural network fails to apply to the individual we interpret the output to be about. After giving reasons to think the standard Kripkean theory might not work well here, it proposes an alternative theory of communication about particular entities, the mental file framework, as more apt for theorizing about AI systems. It then abstracts from the human-centric features of extant theories of mental files to consider how AI systems might use something like mental files to refer to particulars.


2021, pp. 3-30
Author(s): Herman Cappelen, Josh Dever

This chapter introduces the topic of the book: the philosophical foundations of AI, and in particular the powerful contemporary AI that guides our lives and receives so much attention in the media. It introduces the idea of a neural network and poses the central puzzle of the book: when a given system's outputs take the form of what appear to be acts of verbal communication, what is going on? When an AI system indicates, for example, that we are not creditworthy, is it saying that we are not? Can it speak? Along with posing this central question, the chapter makes clear what the book is not about (ethics, for example) and notes areas in which the book might bear on extra-philosophical debates, such as the need for explainable AI.


2021, pp. 139-166
Author(s): Herman Cappelen, Josh Dever

The final chapter considers or reconsiders four topics that are important for philosophers coming to terms with AI communication. The first is that AI systems' goals might change without human intervention and become misaligned with human goals; such possibilities, it is argued, make it both particularly important and particularly difficult to give theories of AI communication. The second is the extended mind hypothesis, whose relevance for AI systems the chapter considers. The third is what we can learn from so-called adversarial perturbations, which, it is suggested, can help us reply to the sceptic Alfred from Chapter 2. Finally, the chapter returns to explainable AI, suggesting that the externalist perspective on offer can help us understand what we can and cannot require of explainable AI systems.


2021, pp. 117-136
Author(s): Herman Cappelen, Josh Dever

The previous chapters have given us ways of thinking about how an AI system might use names and predicates. But language use involves more than simply tokening expressions. It also involves predicating, or asserting, or judging: applying predicates to terms to make a claim. How can AI do that, even granting that it can name things and express predicates? This chapter proposes an answer by melding together two popular theories: the act theory of propositional content and teleosemantics. In a now familiar way, it abstracts from human-centric features of extant theories to show how we can understand AI predication.


2021, pp. 81-102
Author(s): Herman Cappelen, Josh Dever

This chapter begins to flesh out the theory of de-anthropocentrized externalism by considering what we should say about AI systems' use of predicates, such as 'is a benign lesion' or 'will default on a loan'. Introducing Kripke's seminal causal theory of names, the chapter shows how to abstract from the theory's anthropocentric features, and why doing so is necessary, while still preserving its central ideas, such as the idea that the use of a term should be anchored in a baptismal event and that the term's reference is passed on from speaker to speaker.


2021, pp. 59-80
Author(s): Herman Cappelen, Josh Dever

This chapter introduces the central claim of the book about AI communication. It argues first that we should understand AI communication in terms of externalism, the thought that the semantic content an entity can express is determined to a large extent by the environment it finds itself in, rather than by its internal states. It then argues that existing externalist theories are too human-centric: they concentrate on peculiarities of human beings not shared by AI systems. It accordingly proposes to abstract from those human peculiarities when developing theories of communication for AI. The chapter ends by discussing how to decide between competing metasemantic frameworks such as externalism and internalism.

