Reduced Sense of Agency in Human-Robot Interaction

2018 ◽  
Author(s):  
Francesca Ciardo ◽  
Davide De Tommaso ◽  
Frederike Beyer ◽  
Agnieszka Wykowska

In the presence of others, sense of agency (SoA), i.e. the perceived relationship between our own actions and external events, is reduced. This effect is thought to contribute to diffusion of responsibility. The present study aimed at examining humans’ SoA when interacting with an artificial embodied agent. Young adults participated in a task alongside the Cozmo robot (Anki Robotics). Participants were asked to perform costly actions (i.e. losing various amounts of points) to stop an inflating balloon from exploding. In 50% of trials, only the participant could stop the inflation of the balloon (Individual condition). In the remaining trials, both Cozmo and the participant were in charge of preventing the balloon from bursting (Joint condition). The longer the players waited before pressing the “stop” key, the fewer points were subtracted. However, if the balloon burst, participants lost the largest amount of points. In the Joint condition, no points were lost if Cozmo stopped the balloon. At the end of each trial, participants rated how much control they perceived over the outcome of the trial. Results showed that when participants successfully stopped the balloon, they rated their SoA lower in the Joint than in the Individual condition, independently of the amount of points lost. This suggests that interacting with robots affects SoA, similarly to interacting with other humans.
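The trial economy described above (waiting longer reduces the loss, but a burst incurs the maximum loss) can be sketched as a simple payoff function. This is a minimal illustration only; the point values, timing units, and linear scaling are hypothetical, not taken from the study:

```python
def trial_loss(stop_time, burst_time, max_loss=100):
    """Points lost on one balloon trial (hypothetical values).

    Waiting longer before pressing "stop" reduces the points lost,
    but letting the balloon burst incurs the maximum loss.
    """
    if stop_time >= burst_time:  # balloon burst before the stop keypress
        return max_loss
    # later stops lose fewer points, scaled linearly for illustration
    return round(max_loss * (1 - stop_time / burst_time))

# Stopping halfway to the burst costs half the maximum loss
assert trial_loss(5.0, 10.0) == 50
# Waiting too long means the balloon bursts and the full loss applies
assert trial_loss(12.0, 10.0) == 100
```

In the Joint condition, this loss would simply be zero on trials where Cozmo stopped the balloon first.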

2019 ◽  
Author(s):  
Francesca Ciardo ◽  
Frederike Beyer ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

In the presence of others, sense of agency (SoA), i.e. the perceived relationship between our own actions and external events, is reduced. The present study aimed at investigating whether the phenomenon of reduced SoA is observed in human-robot interaction, similarly to human-human interaction. To this end, we tested SoA when people interacted with a robot (Experiment 1), with a passive, non-agentic air pump (Experiment 2), or when they interacted with both a robot and a human being (Experiment 3). Participants were asked to rate the perceived control they felt over the outcome of their action while performing a diffusion of responsibility task. Results showed that the intentional agency attributed to the artificial entity differently affected performance and the perceived SoA over the outcome of the task. Experiment 1 showed that, when participants successfully performed an action, they rated SoA over the outcome as lower in trials in which the robot was also able to act (but did not), compared to when they were performing the task alone. However, this did not occur in Experiment 2, where the artificial entity was an air pump, which had the same influence on the task as the robot, but in a passive manner and thus lacked intentional agency. Results of Experiment 3 showed that SoA was reduced similarly for the human and robot agents, thereby indicating that attribution of intentional agency plays a crucial role in the reduction of SoA. Together, our results suggest that interacting with robotic agents affects SoA, similarly to interacting with other humans, but differently from interacting with non-agentic mechanical devices. This has important implications for applied social robotics, where a subjective decrease in SoA could have negative consequences, such as in robot-assisted care in hospitals.


2021 ◽  
Author(s):  
Nina-Alisa Hinz ◽  
Francesca Ciardo ◽  
Agnieszka Wykowska

The present study aimed to examine event-related potentials (ERPs) of action planning and outcome monitoring in human-robot interaction. To this end, participants were instructed to perform costly actions (i.e. losing points) to stop a balloon from inflating and to prevent its explosion. They performed the task alone (individual condition) or with a robot (joint condition). Similar to findings from human-human interactions, results showed that action planning was affected by the presence of another agent, a robot in this case. Specifically, the early readiness potential (eRP) amplitude was larger in the joint than in the individual condition. The presence of the robot also affected outcome perception and monitoring. Our results showed that the P1/N1 complex was suppressed in the joint, compared to the individual, condition when the worst outcome was expected, suggesting that the presence of the robot affects attention allocation to negative outcomes of one’s own actions. Similarly, results also showed that larger losses elicited smaller feedback-related negativity (FRN) in the joint than in the individual condition. Taken together, our results indicate that the social presence of a robot may influence the way we plan our actions and also the way we monitor their consequences. Implications of the study for the human-robot interaction field are discussed.


2019 ◽  
Vol 39 (1) ◽  
pp. 73-99 ◽  
Author(s):  
Matt Webster ◽  
David Western ◽  
Dejanira Araiza-Illan ◽  
Clare Dixon ◽  
Kerstin Eder ◽  
...  

We present an approach for the verification and validation (V&V) of robot assistants in the context of human–robot interactions, to demonstrate their trustworthiness through corroborative evidence of their safety and functional correctness. Key challenges include the complex and unpredictable nature of the real world in which assistant and service robots operate, the limitations of available V&V techniques when used individually, and the consequent lack of confidence in the V&V results. Our approach, called corroborative V&V, addresses these challenges by combining several different V&V techniques; in this paper we use formal verification (model checking), simulation-based testing, and user validation in experiments with a real robot. This combination of approaches allows V&V of the human–robot interaction task at different levels of modeling detail and thoroughness of exploration, thus overcoming the individual limitations of each technique. We demonstrate our approach through a handover task, the most critical part of a complex cooperative manufacturing scenario, for which we propose safety and liveness requirements to verify and validate. Should the resulting V&V evidence present discrepancies, an iterative process takes place in which the assets (i.e., system and requirement models) are refined and improved to represent the human–robot interaction task more truthfully, until the V&V techniques corroborate one another. Therefore, corroborative V&V affords a systematic approach to “meta-V&V,” in which different V&V techniques can be used to corroborate and check one another, increasing the level of certainty in the results of V&V.
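The corroboration idea, reduced to its bare bones, can be sketched as follows. This is a minimal illustration, not the authors' toolchain: the handover controller, the formulation of the safety requirement, and all state variables are hypothetical, and real formal verification would use a model checker rather than plain state enumeration:

```python
import random

def safe(state):
    """Hypothetical safety requirement for a handover:
    the robot releases the object only while the human is gripping it."""
    robot_releases, human_grips = state
    return human_grips or not robot_releases

def controller(human_grips):
    """Toy handover controller: release only on a confirmed grip."""
    return human_grips  # robot_releases

# Formal-style check: exhaustively enumerate every reachable state
exhaustive_ok = all(safe((controller(g), g)) for g in (False, True))

# Simulation-based check: sample random interaction traces
sim_ok = all(safe((controller(g), g))
             for g in (random.choice((False, True)) for _ in range(1000)))

# Corroboration: both techniques must agree before the result is trusted;
# a discrepancy would trigger refinement of the models or requirements
assert exhaustive_ok and sim_ok
```

In the paper's scheme, disagreement between the two checks (plus user validation on the real robot) is precisely the signal that the system or requirement models need refining.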


AI & Society ◽  
2021 ◽  
Author(s):  
Dafna Burema

This paper argues that there is a need to critically assess bias in the representations of older adults in the field of Human–Robot Interaction. This need stems from the recognition that technology development is a socially constructed process that has the potential to reinforce problematic understandings of older adults. Based on a qualitative content analysis of 96 academic publications, this paper indicates that older adults are represented as: frail by default, independent by effort; silent and technologically illiterate; burdensome; and problematic for society. Within these documents, few counternarratives are present that challenge such essentialist representations. In these texts, the goal of social robots in elder care is to “enable” older adults to “better” themselves. The older body is seen as “fixable” with social robots, reinforcing an ageist and neoliberal narrative: older adults are reduced to potential care receivers in ways that shift care responsibilities away from the welfare state onto the individual.


2019 ◽  
Author(s):  
Cecilia Roselli ◽  
Francesca Ciardo ◽  
Agnieszka Wykowska

In the near future, robots will become a fundamental part of our daily life; therefore, it appears crucial to investigate how they can successfully interact with humans. Since several studies have already pointed out that a robotic agent can influence human cognitive mechanisms such as decision-making and joint attention, we focus on Sense of Agency (SoA). To this aim, we employed the Intentional Binding (IB) task to implicitly assess SoA in human-robot interaction (HRI). Participants were asked to perform an IB task alone (Individual condition) or with the Cozmo robot (Social condition). In the Social condition, participants were free to decide whether they wanted to let Cozmo press. Results showed that participants performed the action significantly more often than Cozmo. Moreover, participants were more precise in reporting the occurrence of a self-made action when Cozmo was also in charge of performing the task. However, this improvement in evaluating self-performance corresponded to a reduction in SoA. In conclusion, the present study highlights the double effect of robots as social companions. Indeed, the social presence of the robot leads to a better evaluation of self-generated actions and, at the same time, to a reduction of SoA.


2012 ◽  
Vol 10 (1) ◽  
pp. 147470491201000 ◽  
Author(s):  
Drew H. Bailey ◽  
Benjamin Winegard ◽  
Jon Oxford ◽  
David C. Geary

Men's but not women's investment in a public goods game varied dynamically with the presence or absence of a perceived out-group. Three hundred fifty-four (167 male) young adults participated in multiple iterations of a public goods game under intergroup and individual competition conditions. Participants received feedback about whether their investments in the group were sufficient to earn a bonus to be shared among all in-group members. Results for the first trial confirm previous research in which men's but not women's investments were higher when there was a competing out-group. We extended these findings by showing that men's investment in the in-group varied dynamically by condition depending on the outcome of the previous trial: in the group condition, men, but not women, decreased spending following a win (i.e., earning an in-group bonus); in the individual condition, men, but not women, increased spending following a win. We hypothesize that these patterns reflect a male bias to calibrate their level of in-group investment such that they sacrifice only what is necessary for their group to successfully compete against a rival group.
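The bonus mechanism described above (a bonus is shared among all in-group members only if total investment is sufficient) can be sketched as a payoff function. The endowment, threshold, and bonus values here are hypothetical, chosen only to illustrate the structure of the game:

```python
def round_payoffs(investments, endowment=10, threshold=20, bonus=30):
    """Payoffs for one public-goods round (hypothetical parameters).

    Each player keeps whatever they did not invest; if the group's
    total investment reaches the threshold, the bonus is shared
    equally among all in-group members.
    """
    total = sum(investments)
    share = bonus / len(investments) if total >= threshold else 0
    return [endowment - inv + share for inv in investments]

# Three players investing 10, 8, 4 clear the threshold (22 >= 20),
# so each receives a 30/3 = 10 bonus share on top of what they kept
assert round_payoffs([10, 8, 4]) == [10, 12, 16]
# Insufficient total investment (6 < 20) earns no bonus
assert round_payoffs([2, 2, 2]) == [8, 8, 8]
```

The paper's finding concerns how the previous round's win/loss feedback shifts the next round's investments, which this payoff structure makes it possible to track trial by trial.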


Author(s):  
Shivam Goel

Robotics in healthcare has recently emerged, backed by recent advances in machine learning and robotics. Researchers are focusing on training robots to interact with elderly adults. This research primarily focuses on engineering more efficient robots that can learn from their mistakes, thereby aiding better human-robot interaction. In this work, we propose a method in which a robot learns to navigate to the individual in need. The robotic agent's learning algorithm will be capable of navigating in an unknown environment. The robot's primary objective is to locate a human in a house and, upon finding the human, to interact with them while complementing their pose and gaze. We propose an end-to-end learning strategy, which uses a recurrent neural network architecture in combination with Q-learning to train an optimal policy. The idea can be a contribution to better human-robot interaction.
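The Q-learning core of such a navigation policy can be sketched on a toy grid world. This is a deliberately simplified stand-in: the abstract proposes a recurrent network for unknown environments, whereas this sketch keeps only tabular Q-learning with a fully observed state, and the grid size, rewards, and human location are all hypothetical:

```python
import random

import numpy as np

def train_navigation_policy(grid_size=5, human=(4, 4),
                            episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy grid: the agent learns to reach
    the cell where the human is located."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    q = np.zeros((grid_size, grid_size, len(moves)))
    for _ in range(episodes):
        pos = (0, 0)
        while pos != human:
            # epsilon-greedy action selection
            a = (random.randrange(len(moves)) if random.random() < eps
                 else int(np.argmax(q[pos])))
            nxt = (min(max(pos[0] + moves[a][0], 0), grid_size - 1),
                   min(max(pos[1] + moves[a][1], 0), grid_size - 1))
            reward = 1.0 if nxt == human else -0.01  # small step cost
            # standard Q-learning temporal-difference update
            q[pos][a] += alpha * (reward + gamma * np.max(q[nxt]) - q[pos][a])
            pos = nxt
    return q

q = train_navigation_policy()
# The greedy policy from the start cell should head toward the human
# (action 1 = down or action 3 = right on this grid)
assert int(np.argmax(q[0, 0])) in (1, 3)
```

Replacing the Q-table with a recurrent network that consumes observation sequences, as the abstract proposes, would let the agent act under partial observability in an unknown house layout.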


2021 ◽  
Author(s):  
Cecilia Roselli ◽  
Francesca Ciardo ◽  
Agnieszka Wykowska

Sense of Agency (SoA) is the feeling of control over one’s actions and their consequences. In social contexts, people experience a “vicarious” SoA over other humans’ actions; however, the phenomenon disappears when the other agent is a computer. The present study aimed to investigate factors that determine when humans experience vicarious SoA in human-robot interaction (HRI). To this end, in two experiments we disentangled two potential contributing factors: (1) the possibility of representing the robot’s actions, and (2) the adoption of the Intentional Stance toward robots. Participants performed an Intentional Binding (IB) task, reporting the time of occurrence for self- or robot-generated actions or sensory outcomes. To assess the role of action representation, the robot either performed a physical keypress (Experiment 1) or “acted” by sending a command via Bluetooth (Experiment 2). Before the experiment, attribution of intentionality to the robot was assessed. Results showed that when participants judged the occurrence of the action, vicarious SoA was predicted by the degree of attributed intentionality, but only when the robot’s action was physical. Conversely, digital actions elicited a reversed effect of vicarious IB, suggesting that disembodied actions of robots are perceived as non-intentional. When participants judged the occurrence of the sensory outcome, vicarious SoA emerged only when the causing action was physical. Notably, intentionality attribution predicted vicarious SoA for sensory outcomes independently of the nature of the causing event, physical or digital. In conclusion, both intentionality attribution and action representation play a crucial role for vicarious SoA in HRI.
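The intentional binding measure underlying these experiments is typically computed as a shift in time-of-occurrence judgments between a baseline condition (event alone) and an operant condition (action followed by an outcome). A minimal sketch of that computation, using entirely hypothetical millisecond data rather than values from the study:

```python
def mean_judgment_error(judged_times, actual_times):
    """Mean judgment error in ms: positive values mean the event is
    reported as occurring later than it actually did."""
    errors = [j - a for j, a in zip(judged_times, actual_times)]
    return sum(errors) / len(errors)

# Hypothetical data: judged vs. actual action onsets (ms)
baseline = mean_judgment_error([102, 98, 105], [100, 100, 100])  # action alone
operant = mean_judgment_error([118, 122, 120], [100, 100, 100])  # action + tone

# Intentional binding: in the operant condition the action is
# perceived as shifted toward its outcome, i.e. later in time
ib_effect = operant - baseline
assert ib_effect > 0
```

A vicarious binding effect is the same contrast computed for events generated by the other agent (here, the robot) rather than by the participant.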


2018 ◽  
Author(s):  
Raquel Oliveira ◽  
Patricia Arriaga ◽  
Filipa Correia ◽  
Ana Paiva

This project aims to investigate how stable content dimensions of stereotypes can affect human-robot interaction in groups. More specifically, we focus on perceived warmth and competence as dimensions of stereotypes around which individuals organize their perception of others. We aim to explore how these dimensions shape the individual's behavioral and emotional response to the robotic agent, as well as the effect they have on the individual's future intention to work with it. Moreover, we will explore these issues in the context of small mixed-group interactions, involving more than one human and more than one robot, interacting over an entertaining card-game scenario.

