Frontiers in Artificial Intelligence and Applications - Culturally Sustainable Social Robotics

Published by IOS Press
ISBN: 9781643681542, 9781643681559
Total documents: 76 (past five years: 76)
H-index: 0 (past five years: 0)

Latest Publications

Author(s):  
Oliver Schürer

At this workshop, the transdisciplinary research group H.A.U.S. presented a new experimental research method (TPT) for the first time. Classic methods such as the “Living Lab” and “Wizard of Oz” are widely used and well established in social robotics research, but they have inherent limitations and raise methodological concerns. The epistemology of TPT’s performances draws on concepts related to Hans-Jörg Rheinberger’s “epistemic thing” and Michel Serres’ “quasi-object”. These theoretical underpinnings are discussed, alongside the limitations of TPT and the benefits of this approach compared to classic methods.


Author(s):  
David J. Gunkel

A number of recent publications have examined and advanced the concept of robot rights. These investigations have been largely theoretical and speculative. This paper seeks to move the debate about the moral and legal standing of social robots out of the realm of theory. It does so by investigating what rights a social robot would need to have in order to facilitate responsible integration of these technologies into our world. The analysis, therefore, seeks to formulate practical guidance for developing an intelligent and executable plan for culturally sustainable social robots.


Author(s):  
Joshua C. Gellers

Could robots have rights? On the one hand, robots are becoming increasingly human-like in appearance and behavior. On the other hand, legal systems around the world are increasingly recognizing the rights of nonhuman entities. Observing these macro-level trends, in this paper I present an ecological framework for evaluating the conditions under which some robots might be considered eligible for certain rights. I argue that a critical, materialist, and broadly ecological interpretation of the environment, along with decisions by jurists establishing or upholding the rights of nature, supports the extension of rights to nonhuman entities such as robots.


Author(s):  
Autumn Edwards
Chad Edwards

The Fundamental Attribution Error (FAE) is the tendency for people to over-emphasize dispositional or personality-based explanations for others’ behavior while under-emphasizing situational explanations. Compared to people, current robots are less agentic and autonomous and are more driven by programming, design, and humans-in-the-loop. Nonetheless, people assign them agency, intentionality, and blame. The purpose of the current experiment is to determine whether people commit the FAE in response to the behaviors of a social robot.


Author(s):  
Gerrit Krueper

Based on the early Marx’s concept of the species-being, this paper provides a (historical) materialist definition of an ontology of being human and argues that it enables a theorization of a human posthumanism. Such a theory rests on the observation that the rise of technology under cognitive capitalism translates the human body into a literal instrument of labor. However, the link between technology and the laborer enables a transfer of skills and powers that extends the body’s capabilities, thus creating what this paper terms the cyber-body. The material reality of this cyber-body is ambivalent: it is a reality of exploitation and abstraction, designed ultimately to generate infinite capital accumulation, as well as a reality of liberation from the social divisions of class, gender, race, and sexuality through its network-connecting capabilities. Taken together, this ambivalence recovers the real species-being.


Author(s):  
Dina Babushkina

My concern is the preservation of rationally justifiable moral practices, which face challenges because of the increasing integration of social robots into roles previously occupied exclusively by persons. I will focus on the attribution of responsibility and blaming as examples of such practices. I will argue that blaming robots (a) does not satisfy the rational constraints on the reactive attitude of blame and other related reactive attitudes and practices such as resentment, forgiving, and punishment, and (b) is by itself morally wrong.


Author(s):  
Frederieke Y. Jansen

While we already know that clearly utopian or dystopian depictions of human-machine relationships in science fiction film can be effective rhetorical models that shape our ideas of HRI, this paper argues that sci-fi films like Marjorie Prime (2017) and Be Right Back (2013) can also function as more neutral virtual laboratories that allow viewers to actively explore the pros and cons of those relationships in more detail. This paper specifically examines both Marjorie Prime and Be Right Back for the way they evoke questions or ideas about what it means to be human, what it means to interact with AI, and what a meaningful relationship between these two can bring. Through a neoformalist analysis, I will show how these cases continuously present us with devices that force us to reassess the role of robots in our lives. They do this by using deceptive, reflective, and confrontational strategies within characters, cinematography, narrative structure, and setting.


Author(s):  
Joanna K. Malinowska

Due to its interdisciplinary nature, the field of HRI uses many concepts typical of the social sciences and humanities, in addition to terms that are usually associated with technology. In this paper, I analyse the problems that arise when we use the term ‘empathy’ to describe and explain the interaction between robots and humans. I argue that this not only raises questions about the possibility of applying this term in situations in which only one of the participants of the interaction is a traditionally understood social subject but also requires answers to questions about such problematic concepts as values and culture.


Author(s):  
John Danaher

Human societies have, historically, undergone a number of moral revolutions. Some of these have been precipitated by technological changes. Will the integration of robots into our social lives precipitate a new moral revolution? In this keynote, I will look at the history of moral revolutions and the role of techno-social change in facilitating those revolutions. I will examine the structural properties of human moral systems and how those properties might be affected by social robots. I will argue that much of our current social morality is agency-centric and that social robots, as non-standard agents, will disrupt that model.


Author(s):  
Shannon Vallor

The conversation about social robots and ethics has matured considerably over the years, moving beyond two inadequate poles: superficially utilitarian analyses of the ethical ‘risks’ of social robots that fail to question the underlying sociotechnical systems and values driving robotics development, and speculative, empirically unfounded fears of robo-pocalypses that likewise leave those underlying systems and values unexamined and unchallenged. Today our perspective in the field is normatively richer and more empirically grounded. However, there is still work to be done. In the transition from risk mitigation that accepts the social status quo to deeper thinking about how to design different worlds in which we might flourish with social robots, we have not yet reckoned with the moral and social debt already accumulated in existing robotics systems and in our broader culture of sociotechnical innovation. We relish our creative and philosophical imaginings of a future in which we live well with robots, but we do so without a serious reckoning with the past and present, and with the legacies of harm and neglect that must be redressed and repaired for those futures to be possible and sustainable. This talk explores those legacies and their accumulated debts, and what it will take to liberate social robotics from them.

