Deceitful Media

Published By Oxford University Press

9780190080365, 9780190080402

2021
pp. 127-132
Author(s):  
Simone Natale

The historical trajectory examined in this book demonstrates that humans’ reactions to machines programmed to simulate intelligent behaviors are a constitutive element of what is commonly called AI. Artificial intelligence technologies are not just designed to interact with human users: they are designed to fit specific characteristics of the ways users perceive and navigate the external world. Communicative AI becomes more effective not only by evolving from a technical standpoint but also by profiting, through the dynamics of banal deception, from the social meanings humans project onto situations and things. This conclusion explores the risks and problems related to AI’s banal deception in relation to other AI-based technologies such as robotics and social media bots, and calls for a more serious debate about the role of deception in interface design and computer science. The book concludes with a reflection on the need to develop a critical and skeptical stance in interactions with computing technologies and AI. In order not to be caught unprepared by the challenges posed by AI, computer scientists, software developers, and designers, as well as users, must consider and critically interrogate the potential outcomes of banal deception.


2021
pp. 16-32
Author(s):  
Simone Natale

The relationship between AI and deception was initially explored by Alan Turing, who famously proposed in 1950 a practical test addressing the question “Can machines think?” This chapter argues that Turing’s proposal of the Imitation Game, now more commonly called the Turing test, located the prospects of AI not just in improvements of hardware and software but also in a more complex scenario emerging from the interaction between humans and computers. The Turing test, by placing humans at the center of its design as judges and as conversation agents alongside computers, created a space to imagine and experiment with AI technologies in terms of their credibility to human users. This entailed the discovery that AI was to be achieved not only through the development of more complex and functional computing technologies but also through the use of strategies and techniques exploiting humans’ liability to illusion and deception.


2021
pp. 107-126
Author(s):  
Simone Natale

AI voice assistants are based on software that enters into spoken dialogue with users in order to answer their queries or execute tasks such as sending emails, searching the web, or turning on a lamp. Every assistant is represented as an individual character or persona (e.g., “Siri” or “Alexa”) that, despite being nonhuman, can be imagined and interacted with as such. Focusing on the cases of Alexa, Siri, and Google Assistant, this chapter argues that voice assistants establish an ambivalent relationship with users, giving them the illusion of control in their interactions with the assistants while at the same time withholding actual control over the computing systems that lie behind these interfaces. The chapter illustrates how this is made possible at the interface level by mechanisms of projection that call on users to contribute to the construction of the assistant as a persona, and how this construction ultimately conceals the networked computing systems administered by the powerful corporations that developed these tools.


2021
pp. 50-67
Author(s):  
Simone Natale

This chapter focuses on ELIZA, the first chatbot program, developed in the 1960s at the Massachusetts Institute of Technology by Joseph Weizenbaum to engage in written conversations with users of the MAC time-sharing system. The program’s apparent capacity for conversation attracted the attention of audiences in the United States and around the world, and Weizenbaum’s book Computer Power and Human Reason (1976) drew readers from well outside his discipline of computer science. In the process, the program presented AI in ways that sharply contrasted with the vision of human-machine symbiosis that dominated approaches to human-computer interaction at the time. Drawing on Weizenbaum’s writings, computer science literature, and journalistic reports, the chapter argues that this alternative vision was not without consequence, informing the development of critical approaches to digital media as well as of actual technologies and pragmatic strategies in AI research.


2021
pp. 87-106
Author(s):  
Simone Natale

In 1991, American inventor and philanthropist Hugh Loebner funded the launch of a competition aimed at recreating the conditions of the Turing test to assess the success of conversational programs in passing as human. The Loebner Prize competition has been conducted every year since then. This chapter looks at the history of this competition in order to argue that it has functioned as a proving ground for AI’s ability to deceive humans and as a form of spectacle highlighting the potential of computing technologies. The staged confrontations between computers and humans provided a context in which humans’ liability to deception and its implications for natural language programs were systematically put to the test in a competitive framework. This encouraged programmers to develop strategies and tricks that are reemerging today in communicative AI technologies. The case of the Loebner Prize thus helps one better understand Alexa, Siri, and the other AI voice assistants that are becoming increasingly widespread in contemporary societies.


2021
pp. 1-15
Author(s):  
Simone Natale

The introduction presents the main arguments of the book and the theoretical and historical background guiding the analysis, proposing a shift in approaches to artificial intelligence based on a new assumption: that what machines are changing is, first and foremost, humans themselves. It introduces the concept of “banal deception,” which describes deceptive mechanisms and practices that are embedded in media technologies and contribute to their integration into everyday life. Five key characteristics of banal deception are outlined and discussed: first, its everyday and ordinary character; second, its functionality, that is, the fact that it always has some potential value to the user; third, its obliviousness, or the fact that the deception is not understood as such but taken for granted; fourth, its low definition, which refers to the fact that it demands participation from users in the construction of sense; and fifth, the fact that banal deception is not just imposed on users but also “programmed” by designers and developers.


2021
pp. 33-49
Author(s):  
Simone Natale

This chapter shows that the problem of the observer—that is, the question of how humans respond to witnessing machines that exhibit intelligence—was the subject of substantial reflection in the field of AI in the 1950s and 1960s. As AI developed as a heterogeneous milieu, bringing together multiple disciplinary perspectives and approaches, many acknowledged that users might be deceived in interactions with “intelligent” machines. Most members of the AI community were confident that the deceptive character of AI would be dispelled, like a magic trick, by providing users with a better understanding of computer systems. This approach, however, did not take into account that deception is not a transitional but a structural component of people’s interactions with computers. The chapter argues that the dream of dispelling the magic aura of computers was superseded by the realization that users’ perceptions of AI systems can be manipulated in order to improve interactions between humans and machines.


2021
pp. 68-86
Author(s):  
Simone Natale

This chapter examines how AI was embedded in a range of software applications from the late 1970s to the 1990s—a period marked by the emergence of personal computing. Focusing on diverse software artifacts such as computer daemons, digital games, and social interfaces, the chapter interrogates how developers introduced deceptive mechanisms within a wider framework promising universal access and ease of use for computing technologies, and how doing so informed work aimed at improving the usability of computing systems. Their explorations of this territory involved a crucial shift away from considering deception something that could be dispelled by making computers more “transparent” and toward the full integration of forms of deception into the experiences of users interacting with AI.

