Survey of Social Touch Interaction Between Humans and Robots

2020, Vol 32 (1), pp. 128-135
Author(s): Masahiro Shiomi, Hidenobu Sumioka, Hiroshi Ishiguro

In human-human interaction, social touch provides several benefits, both physical and mental. The physical embodiment of robots enables them to reproduce human-like social touch during their interactions with people, and such touch shows positive effects similar to those observed in human-human interaction. Social touch is therefore a growing research topic in the field of human-robot interaction. This survey provides an overview of the work conducted on this topic so far.

2020
Author(s): Agnieszka Wykowska, Jairo Pérez-Osorio, Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Pérez-Osorio & Kopp, 2020). Unfortunately, due to the rapid spread of the novel coronavirus at the beginning of the year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also underscore the importance of a multidisciplinary approach to advancing our understanding of the factors and consequences of social interactions with artificial agents.


Sensors, 2021, Vol 21 (19), pp. 6438
Author(s): Chiara Filippini, David Perpetuini, Daniela Cardone, Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect of achieving this goal is the robot’s capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor’s emotional state. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot’s awareness of human facial expressions and provides the robot with the capability to detect an interlocutor’s arousal level. In tests during human–robot interactions, the model was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
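As a concrete illustration of the kind of model the study describes, the sketch below builds a compact CNN classifier for the six expression categories named in the abstract. This is a minimal sketch in Keras, not the authors' published architecture; the 48x48 grayscale input size is an assumption borrowed from common facial-expression datasets such as FER2013.

```python
# Minimal sketch only: a compact CNN for facial-expression recognition,
# NOT the authors' published architecture. The 48x48 grayscale input is
# an assumption borrowed from datasets like FER2013.
from tensorflow.keras import layers, models

# happy, sad, surprised, scared, angry, neutral (classes from the abstract)
NUM_CLASSES = 6

def build_fer_cnn(input_shape=(48, 48, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```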


Author(s): J. Lindblom, B. Alenljung

A fundamental challenge of human interaction with socially interactive robots, compared to other interactive products, stems from their embodiment. The embodied nature of social robots raises the questions of the degree to which humans can interact 'naturally' with robots, and of the impact that interaction quality has on the user experience (UX). UX is fundamentally about the emotions that arise and take form in humans through the use of technology in a particular situation. This chapter aims to contribute to the field of human-robot interaction (HRI) by addressing, in further detail, the role and relevance of embodied cognition for human social interaction, and consequently the role embodiment can play in HRI, especially for socially interactive robots. Furthermore, some challenges for socially embodied interaction between humans and socially interactive robots are outlined, and possible directions for future research are presented. It is concluded that the body is of crucial importance in understanding emotion and cognition in general and, in particular, for a positive user experience to emerge when interacting with socially interactive robots.


2019, Vol 374 (1771), pp. 20180033
Author(s): Birgit Rauchbauer, Bruno Nazarian, Morgane Bourhis, Magalie Ochs, Laurent Prévot, ...

We present a novel functional magnetic resonance imaging paradigm for second-person neuroscience. The paradigm compares a human social interaction (human–human interaction, HHI) to an interaction with a conversational robot (human–robot interaction, HRI). The social interaction consists of 1 min blocks of live bidirectional discussion between the scanned participant and the human or robot agent. A final sample of 21 participants is included in the corpus comprising physiological (blood oxygen level-dependent, respiration and peripheral blood flow) and behavioural (recorded speech from all interlocutors, eye tracking from the scanned participant, face recording of the human and robot agents) data. Here, we present the first analysis of this corpus, contrasting neural activity between HHI and HRI. We hypothesized that independently of differences in behaviour between interactions with the human and robot agent, neural markers of mentalizing (temporoparietal junction (TPJ) and medial prefrontal cortex) and social motivation (hypothalamus and amygdala) would only be active in HHI. Results confirmed significantly increased response associated with HHI in the TPJ, hypothalamus and amygdala, but not in the medial prefrontal cortex. Future analysis of this corpus will include fine-grained characterization of verbal and non-verbal behaviours recorded during the interaction to investigate their neural correlates. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction'.
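For readers unfamiliar with how such a block-design contrast is computed, the following is a minimal sketch using nilearn's first-level GLM. The file name, TR, and event timings are placeholders, not values taken from the study.

```python
# Hedged sketch of the HHI > HRI block contrast using nilearn's
# first-level GLM. File name, TR, and onsets are illustrative only.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Alternating 1-minute discussion blocks (illustrative timing)
events = pd.DataFrame({
    "onset":      [0, 90, 180, 270],           # seconds
    "duration":   [60, 60, 60, 60],
    "trial_type": ["HHI", "HRI", "HHI", "HRI"],
})

glm = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6.0)
glm = glm.fit("sub-01_task-conversation_bold.nii.gz", events=events)

# Positive z-values where the human interlocutor evokes stronger responses
zmap = glm.compute_contrast("HHI - HRI", output_type="z_score")
```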


2018
Author(s): Ali Momen, Eva Wiese

Social robots with expressive gaze have positive effects on human-robot interaction. In particular, research suggests that when robots are programmed to express introverted or extroverted gaze behavior, individuals enjoy interacting more with robots that match their own personality. However, how this affects social-cognitive performance during human-robot interactions has not yet been thoroughly examined. In the current paper, we examine whether a perceived match between human and robot personality positively affects the degree to which the robot’s gaze is followed (i.e., gaze cueing, as a proxy for more complex social-cognitive behavior). While social attention has been examined extensively outside of human-robot interaction, recent research shows that a robot’s gaze is attended to in a similar way as a human’s gaze. While our results did not support the hypothesis that gaze cueing would be strongest when the participant’s personality matched the robot’s personality, we did find evidence that participants followed the gaze of introverted robots more strongly than that of extroverted robots. This finding suggests that agents displaying extroverted gaze behavior may hurt performance in human-robot interaction.
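As an aside on how gaze cueing is typically quantified: the cueing effect is the mean reaction time on invalidly cued trials minus that on validly cued trials, computed per condition. The sketch below assumes a hypothetical trial-level CSV; the file and column names are illustrative, not the authors' data format.

```python
# Sketch of how a gaze-cueing effect is typically quantified: mean RT on
# invalidly cued trials minus validly cued trials, per robot personality.
# The CSV file and column names are hypothetical.
import pandas as pd

trials = pd.read_csv("gaze_cueing_trials.csv")
# expected columns: robot_personality ("introvert" / "extrovert"),
#                   cue_validity ("valid" / "invalid"), rt_ms

mean_rt = (trials
           .groupby(["robot_personality", "cue_validity"])["rt_ms"]
           .mean()
           .unstack("cue_validity"))
mean_rt["cueing_effect_ms"] = mean_rt["invalid"] - mean_rt["valid"]
print(mean_rt)  # a larger effect means the gaze was followed more strongly
```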


Sensors, 2020, Vol 20 (8), pp. 2376
Author(s): Michal Podpora, Arkadiusz Gardecki, Ryszard Beniak, Bartlomiej Klin, Jose Lopez Vicario, ...

This paper presents a detailed concept of a Human-Robot Interaction system architecture. One of the main differences between the proposed architecture and existing ones is the methodology for acquiring information about the robot’s interlocutor. In order to obtain as much information as possible before the actual interaction takes place, custom Internet-of-Things-based sensor subsystems connected to a Smart Infrastructure were designed and implemented to support interlocutor identification and the acquisition of initial interaction parameters. The Artificial Intelligence interaction framework of the developed robotic system (including the humanoid Pepper with its sensors and actuators, plus additional local, remote, and cloud computing services) is extended with custom external subsystems for additional knowledge acquisition: device-based human identification, visual identification, and audio-based interlocutor localization. These subsystems are introduced in depth and evaluated in this paper, presenting the benefits of integrating them into the robotic interaction system. A more detailed analysis of one of the external subsystems, the Bluetooth Human Identification Smart Subsystem, is also included. The idea, use case, prototype, and integration with elements of Smart Infrastructure systems were implemented and tested in a small front office of the Weegree company, which served as the test-bed application area.
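To make the device-based identification idea concrete, the sketch below scans for nearby Bluetooth devices and matches addresses against a registry of known interlocutors, giving the robot an identity hypothesis before the conversation starts. It uses the open-source `bleak` library as a stand-in; the library choice and the registry contents are assumptions, not details of the Weegree deployment.

```python
# Conceptual sketch of device-based identification: scan for Bluetooth
# advertisements and match addresses against a registry of known people.
# Uses the open-source `bleak` library; the registry is hypothetical.
import asyncio
from bleak import BleakScanner

KNOWN_DEVICES = {
    "AA:BB:CC:DD:EE:FF": "visitor_042",  # hypothetical address -> person ID
}

async def identify_nearby_people(scan_seconds: float = 5.0):
    devices = await BleakScanner.discover(timeout=scan_seconds)
    return [KNOWN_DEVICES[d.address]
            for d in devices if d.address in KNOWN_DEVICES]

if __name__ == "__main__":
    print(asyncio.run(identify_nearby_people()))
```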


Author(s): Wei Quan, Jinseok Woo, Yuichiro Toda, Naoyuki Kubota, ...

Human posture recognition has been a popular research topic alongside the development of related fields such as human-robot interaction and simulation. Most existing methods are based on supervised learning, and a large amount of training data is required for them to perform well. In this study, we address this limitation by applying a number of unsupervised learning algorithms based on a forward kinematics model of the human skeleton. We then refine the proposed method by integrating particle swarm optimization (PSO). The advantage of the proposed method is that no pre-training data is required for human posture generation and recognition. We validate the method through a series of experiments with human subjects.
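A rough sketch of the two ingredients named in the abstract follows, under simplifying assumptions: a two-link planar chain stands in for the full human skeleton, and k-means stands in for the authors' unsupervised algorithms; the PSO refinement step is not reproduced here.

```python
# Rough sketch: a forward-kinematics model mapping joint angles to joint
# positions, plus unsupervised grouping of angle vectors into posture
# clusters. Two-link planar chain and k-means are simplifications.
import numpy as np
from sklearn.cluster import KMeans

LINK_LENGTHS = np.array([0.30, 0.25])  # e.g., upper arm and forearm, metres

def forward_kinematics(joint_angles):
    """Return the 2-D position of each joint in a planar kinematic chain."""
    positions, angle_sum, point = [], 0.0, np.zeros(2)
    for length, theta in zip(LINK_LENGTHS, joint_angles):
        angle_sum += theta  # angles accumulate along the chain
        point = point + length * np.array([np.cos(angle_sum), np.sin(angle_sum)])
        positions.append(point.copy())
    return np.array(positions)

# Unsupervised posture classes: cluster raw joint-angle observations
rng = np.random.default_rng(0)
angle_samples = rng.uniform(-np.pi, np.pi, size=(200, 2))
posture_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(angle_samples)
```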



