Intentions with actions: The role of intentionality attribution on the vicarious sense of agency in human-robot interaction

2021 ◽  
pp. 174702182110420
Author(s):  
Cecilia Roselli ◽  
Francesca Ciardo ◽  
Agnieszka Wykowska

Sense of Agency (SoA) is the feeling of control over one’s actions and their consequences. In social contexts, people experience a “vicarious” SoA over other humans’ actions; however, the phenomenon disappears when the other agent is a computer. The present study aimed to investigate the factors that determine when humans experience vicarious SoA in human-robot interaction (HRI). To this end, in two experiments we disentangled two potential contributing factors: (1) the possibility of representing the robot’s actions, and (2) the adoption of the Intentional Stance toward robots. Participants performed an Intentional Binding (IB) task, reporting the time of occurrence of self- or robot-generated actions or sensory outcomes. To assess the role of action representation, the robot either performed a physical keypress (Experiment 1) or “acted” by sending a command via Bluetooth (Experiment 2). Before the experiment, attribution of intentionality to the robot was assessed. Results showed that when participants judged the occurrence of the action, vicarious SoA was predicted by the degree of attributed intentionality, but only when the robot’s action was physical. Conversely, digital actions elicited a reversed vicarious IB effect, suggesting that disembodied robot actions are perceived as non-intentional. When participants judged the occurrence of the sensory outcome, vicarious SoA emerged only when the causing action was physical. Notably, intentionality attribution predicted vicarious SoA for sensory outcomes independently of the nature of the causing event, physical or digital. In conclusion, both intentionality attribution and action representation play a crucial role in vicarious SoA in HRI.
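For readers unfamiliar with the paradigm, the sketch below is a minimal Python illustration of how an intentional binding effect is commonly quantified: the shift in reported event times in the operant (action-plus-outcome) condition relative to a baseline condition. The function names and sample values are hypothetical, not the authors' analysis code.

```python
import numpy as np

def judgment_error(reported_ms, actual_ms):
    """Mean judgment error: reported minus actual event time (ms).
    Positive values mean the event is perceived later than it occurred."""
    return float(np.mean(np.asarray(reported_ms) - np.asarray(actual_ms)))

def binding_effect(operant_err, baseline_err):
    """Intentional binding: shift in perceived time in the operant condition
    relative to baseline. For actions, binding shows up as a positive shift
    (toward the outcome); for outcomes, as a negative shift (toward the action)."""
    return operant_err - baseline_err

# Hypothetical data: reported vs. actual keypress times (ms, e.g., on a Libet-style clock)
baseline = judgment_error(reported_ms=[10, -5, 0, 15], actual_ms=[0, 0, 0, 0])
operant = judgment_error(reported_ms=[40, 25, 35, 30], actual_ms=[0, 0, 0, 0])
print(f"Action binding: {binding_effect(operant, baseline):+.1f} ms")
```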


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid spread of the novel coronavirus at the beginning of the year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


Author(s):  
Vignesh Prasad ◽  
Ruth Stock-Homburg ◽  
Jan Peters

For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education, and healthcare. In such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction commonly used in many social contexts and is seen as a symbol of greeting, farewell, and congratulation. In this paper, we survey the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly find that some form of synchronisation exists during the different phases of the interaction. In addition, we find that factors such as gaze, voice, and facial expressions can affect the perception of a robotic handshake, and that internal factors such as personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.


Author(s):  
Ruth Stock-Homburg

Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or to assess the collective evidence pertaining to specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans’ recognition of and responses to artificial emotions of social robots during HRI encompasses the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that affect emotions as moderators.


Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 68
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

In gaze-based Human-Robot Interaction (HRI), it is important to determine human visual intention for interacting with robots. One typical HRI scenario is that a human selects an object by gaze and a robotic manipulator picks up that object. In this work, we propose an approach, GazeEMD, that detects whether a human is looking at an object in an HRI application. We use the Earth Mover’s Distance (EMD) to measure the similarity between hypothetical gazes at objects and the actual gazes. The similarity score is then used to determine whether the human visual intention is on the object. We compare our approach with a fixation-based method and with HitScan with a run length in a scenario of selecting daily objects by gaze. Our experimental results indicate that the GazeEMD approach has higher accuracy and is more robust to noise than the other approaches. Hence, users can reduce their cognitive load by using our approach in real-world HRI scenarios.
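As a rough illustration of the core idea, the following Python sketch compares actual gaze samples against a hypothetical gaze distribution centred on an object, using SciPy's 1-D Wasserstein (EMD) distance. The paper's actual EMD formulation over 2-D gaze data may differ; all names and values here are hypothetical.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def gaze_similarity(actual_gaze_x, object_center_x, object_width, n_hypo=500, seed=None):
    """Compare actual horizontal gaze samples against a hypothetical gaze
    distribution centred on the object, via the 1-D Earth Mover's Distance.
    A smaller distance means the gaze more likely rests on the object."""
    rng = np.random.default_rng(seed)
    # Hypothetical gaze: samples scattered around the object's centre
    hypothetical = rng.normal(object_center_x, object_width / 4, size=n_hypo)
    return wasserstein_distance(actual_gaze_x, hypothetical)

# Hypothetical usage: decide which of two objects the user attends to
rng = np.random.default_rng(0)
gaze = rng.normal(120, 8, size=200)  # actual gaze x-coordinates (px)
d_cup = gaze_similarity(gaze, object_center_x=118, object_width=40, seed=1)
d_book = gaze_similarity(gaze, object_center_x=300, object_width=60, seed=1)
print("Looking at:", "cup" if d_cup < d_book else "book")
```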


Philosophies ◽  
2019 ◽  
Vol 4 (1) ◽  
pp. 11 ◽  
Author(s):  
Frank Förster

In this article, I assess an existing language acquisition architecture, which was deployed in linguistically unconstrained human–robot interaction, together with experimental design decisions with regard to their enactivist credentials. Despite initial scepticism with respect to enactivism’s applicability to the social domain, the introduction of the notion of participatory sense-making in the more recent enactive literature extends the framework’s reach to encompass this domain. With some exceptions, both our architecture and form of experimentation appear to be largely compatible with enactivist tenets. I analyse the architecture and design decisions along the five enactivist core themes of autonomy, embodiment, emergence, sense-making, and experience, and discuss the role of affect due to its central role within our acquisition experiments. In conclusion, I join some enactivists in demanding that interaction is taken seriously as an irreducible and independent subject of scientific investigation, and go further by hypothesising its potential value to machine learning.


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 112
Author(s):  
Marit Hagens ◽  
Serge Thill

Perfect information about an environment allows a robot to plan its actions optimally, but often requires significant investments in sensors and possibly infrastructure. In applications relevant to human–robot interaction, the environment is by definition dynamic, and events close to the robot may be more relevant than distal ones. This suggests a non-trivial relationship between sensory sophistication on the one hand and task performance on the other. In this paper, we investigate this relationship in a simulated crowd navigation task. We use three different environments with unique characteristics that a crowd-navigating robot might encounter and explore how the robot’s sensor range correlates with performance in the navigation task. We find diminishing returns of increased range in our particular case, suggesting that increased sophistication on the sensor side does not necessarily yield a corresponding increase in performance. Although this result is a simple proof of concept, it illustrates the benefit of exploring the consequences of different hardware designs, rather than merely algorithmic choices, in simulation first. We also find surprisingly good performance in the navigation task, including a low number of collisions with simulated human agents, using a relatively simple A*/NavMesh-based navigation strategy, which suggests that navigation strategies for robots in crowds need not always be sophisticated.
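To illustrate the kind of experiment described, here is a toy Python simulation that measures near-collision counts as a function of sensor range. It assumes static agents and a simple repulsion-based controller rather than the paper's A*/NavMesh planner; all parameters are hypothetical.

```python
import numpy as np

def run_trial(sensor_range, n_agents=30, steps=200, seed=None):
    """Toy 2-D crowd-navigation trial: the robot heads to a goal and steers
    away from any agent within its sensor range. Returns the number of
    near-collision events (counted per simulation step)."""
    rng = np.random.default_rng(seed)
    robot, goal = np.array([0.0, 0.0]), np.array([20.0, 0.0])
    agents = rng.uniform([0, -5], [20, 5], size=(n_agents, 2))
    collisions = 0
    for _ in range(steps):
        heading = goal - robot
        heading /= np.linalg.norm(heading) + 1e-9
        offsets = agents - robot
        dists = np.linalg.norm(offsets, axis=1)
        for off, d in zip(offsets, dists):
            if d < sensor_range:      # agent is visible: add repulsion
                heading -= off / (d ** 2 + 1e-9)
            if d < 0.3:               # too close: count a near-collision event
                collisions += 1
        robot += 0.2 * heading / (np.linalg.norm(heading) + 1e-9)
        if np.linalg.norm(goal - robot) < 0.5:
            break
    return collisions

for r in [0.5, 1.0, 2.0, 4.0, 8.0]:   # diminishing returns expected at large ranges
    mean_c = np.mean([run_trial(r, seed=s) for s in range(20)])
    print(f"sensor range {r:>4}: {mean_c:.1f} near-collisions")
```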


Author(s):  
Youdi Li ◽  
Wei Fen Hsieh ◽  
Eri Sato-Shimokawara ◽  
Toru Yamaguchi

In daily life, we inevitably encounter situations in which we feel confident or unconfident, and under these conditions we may show different expressions and responses; the same holds when a human communicates with a robot. Robots therefore need to behave in various styles that convey an appropriate degree of confidence: in previous work, for example, when the robot made mistakes during the interaction, different certainty expression styles influenced humans’ trust and acceptance. Conversely, when humans feel uncertain about a robot’s utterance, how the robot recognizes that uncertainty is crucial. However, related research is still scarce and tends to ignore individual characteristics. In the current study, we designed an experiment to obtain human verbal and non-verbal features under certain and uncertain conditions. From this certain/uncertain answer experiment, we extracted head-movement and voice factors as features and investigated whether these features can be classified correctly. The results showed that different people exhibit distinct features for different certainty degrees, but that some participants share similar patterns, consistent with their relatively close psychological feature values. We aim to explore individuals’ certainty expression patterns because doing so not only facilitates the detection of humans’ confidence status but is also expected to help the robot give proper responses adaptively and thus enrich Human-Robot Interaction.
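As a sketch of the classification step, the following Python example trains a per-participant certain/uncertain classifier on head-movement and voice features using scikit-learn. The feature set, classifier choice, and data here are hypothetical, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def certainty_classifier_score(features, labels):
    """Cross-validated accuracy of a per-participant certain/uncertain
    classifier on head-movement and voice features."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, features, labels, cv=5).mean()

# Hypothetical per-participant data: rows = trials, columns =
# [head pitch variance, head yaw variance, voice pitch (F0, Hz), speech rate]
rng = np.random.default_rng(0)
X_certain = rng.normal([0.2, 0.1, 180, 4.0], 0.3, size=(40, 4))
X_uncertain = rng.normal([0.6, 0.5, 200, 3.2], 0.3, size=(40, 4))
X = np.vstack([X_certain, X_uncertain])
y = np.array([1] * 40 + [0] * 40)  # 1 = certain, 0 = uncertain
print(f"Per-participant accuracy: {certainty_classifier_score(X, y):.2f}")
```

Training one classifier per participant reflects the study's observation that certainty cues are individual-specific: a feature pattern that signals uncertainty in one person may not generalize to another.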


Author(s):  
Karoline Malchus ◽  
Prisca Stenneken ◽  
Petra Jaecks ◽  
Carolin Meyer ◽  
Oliver Damm ◽  
...  
