Brain activity during reciprocal social interaction investigated using conversational robots as control condition

2019 ◽  
Vol 374 (1771) ◽  
pp. 20180033 ◽  
Author(s):  
Birgit Rauchbauer ◽  
Bruno Nazarian ◽  
Morgane Bourhis ◽  
Magalie Ochs ◽  
Laurent Prévot ◽  
...  

We present a novel functional magnetic resonance imaging paradigm for second-person neuroscience. The paradigm compares a human social interaction (human–human interaction, HHI) to an interaction with a conversational robot (human–robot interaction, HRI). The social interaction consists of 1 min blocks of live bidirectional discussion between the scanned participant and the human or robot agent. A final sample of 21 participants is included in the corpus comprising physiological (blood oxygen level-dependent, respiration and peripheral blood flow) and behavioural (recorded speech from all interlocutors, eye tracking from the scanned participant, face recording of the human and robot agents) data. Here, we present the first analysis of this corpus, contrasting neural activity between HHI and HRI. We hypothesized that independently of differences in behaviour between interactions with the human and robot agent, neural markers of mentalizing (temporoparietal junction (TPJ) and medial prefrontal cortex) and social motivation (hypothalamus and amygdala) would only be active in HHI. Results confirmed significantly increased response associated with HHI in the TPJ, hypothalamus and amygdala, but not in the medial prefrontal cortex. Future analysis of this corpus will include fine-grained characterization of verbal and non-verbal behaviours recorded during the interaction to investigate their neural correlates. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
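
For readers who want a concrete sense of how such a block-design contrast can be computed, the sketch below fits a first-level GLM with nilearn and contrasts HHI against HRI blocks. The file name, block onsets, TR and smoothing are hypothetical placeholders for illustration, not the authors' actual preprocessing or analysis pipeline.

```python
# Minimal sketch (not the authors' pipeline): a block-design GLM contrast
# of HHI vs. HRI using nilearn. File name, timings, TR and smoothing are assumed.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical events: alternating 60 s discussion blocks with a human or robot agent
events = pd.DataFrame({
    "onset":      [0, 90, 180, 270],          # seconds (assumed)
    "duration":   [60, 60, 60, 60],
    "trial_type": ["HHI", "HRI", "HHI", "HRI"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=5)   # TR assumed
model = model.fit("sub-01_task-conversation_bold.nii.gz", events=events)

# Contrast: regions responding more to the human than to the robot interlocutor
hhi_gt_hri = model.compute_contrast("HHI - HRI", output_type="z_score")
hhi_gt_hri.to_filename("hhi_gt_hri_zmap.nii.gz")
```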

AI Magazine ◽  
2017 ◽  
Vol 37 (4) ◽  
pp. 19-31 ◽  
Author(s):  
Gabriel Skantze

When humans interact and collaborate with each other, they coordinate their turn-taking behaviors using verbal and nonverbal signals, expressed in the face and voice. If robots of the future are supposed to engage in social interaction with humans, it is essential that they can generate and understand these behaviors. In this article, I give an overview of several studies that show how humans in interaction with a humanlike robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures and hesitation sounds.
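
As an illustration of how such multimodal cues might be detected and combined for a real-time turn-taking decision, the sketch below scores a handful of prosodic and gaze features. The specific cues, weights and threshold are invented for illustration; they are not the models published in the studies summarized above.

```python
# Illustrative sketch only: combining prosodic and visual cues into a single
# end-of-turn estimate. Feature names, weights and threshold are made up.
from dataclasses import dataclass

@dataclass
class TurnCues:
    pitch_falling: bool   # final pitch drop in the user's utterance
    pause_ms: float       # silence since last detected voice activity
    gaze_at_robot: bool   # user looks back at the robot
    filled_pause: bool    # hesitation sound ("uh", "um") detected

def end_of_turn_probability(c: TurnCues) -> float:
    """Weighted combination of cues; thresholds and weights are illustrative."""
    score = 0.0
    score += 0.35 if c.pitch_falling else 0.0
    score += 0.30 if c.pause_ms > 500 else 0.0
    score += 0.25 if c.gaze_at_robot else 0.0
    score -= 0.40 if c.filled_pause else 0.0   # hesitation signals turn-holding
    return max(0.0, min(1.0, score))

if end_of_turn_probability(TurnCues(True, 700.0, True, False)) > 0.5:
    print("Robot may take the turn")
```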


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus at the beginning of the present year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and more precisely the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


2020 ◽  
Author(s):  
Seongmin A. Park ◽  
Douglas S. Miller ◽  
Erie D. Boorman

Generalizing experiences to guide decision making in novel situations is a hallmark of flexible behavior. It has been hypothesized that such flexibility depends on a cognitive map of an environment or task, but directly linking the two has proven elusive. Here, we find that discretely sampled abstract relationships between entities in an unseen two-dimensional (2-D) social hierarchy are reconstructed into a unitary 2-D cognitive map in the hippocampus and entorhinal cortex. We further show that humans utilize a grid-like code in several brain regions, including the entorhinal cortex and medial prefrontal cortex, for inferred direct trajectories between entities in the reconstructed abstract space during discrete decisions. Moreover, these neural grid-like codes in the entorhinal cortex predict neural decision value computations in the medial prefrontal cortex and temporoparietal junction area during choice. Collectively, these findings show that grid-like codes are used by the human brain to infer novel solutions, even in abstract and discrete problems, and suggest a general mechanism underpinning flexible decision making and generalization.
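
Grid-like codes of this kind are typically tested with a hexadirectional (six-fold) modulation analysis: a putative grid orientation is estimated from cos/sin regressors on one partition of the data, and aligned six-fold modulation is then tested on held-out data. The sketch below illustrates that logic on simulated trajectory angles; it is a simplified stand-in under those conventional assumptions, not the authors' actual analysis code.

```python
# Hedged sketch of a standard hexadirectional ("grid-like code") analysis.
# Data, variable names and the two-fold split are hypothetical.
import numpy as np

def estimate_grid_orientation(theta, signal):
    """Fit signal ~ b1*cos(6*theta) + b2*sin(6*theta); return orientation phi."""
    X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta), np.ones_like(theta)])
    b1, b2, _ = np.linalg.lstsq(X, signal, rcond=None)[0]
    return np.arctan2(b2, b1) / 6.0          # preferred grid orientation (radians)

def hexadirectional_effect(theta, signal, phi):
    """On held-out trials, test modulation aligned to the estimated orientation."""
    X = np.column_stack([np.cos(6 * (theta - phi)), np.ones_like(theta)])
    beta = np.linalg.lstsq(X, signal, rcond=None)[0][0]
    return beta   # > 0 suggests six-fold modulation of activity by trajectory angle

# Toy usage with simulated trajectory angles and a noisy six-fold modulated signal
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
signal = 0.5 * np.cos(6 * (theta - 0.2)) + rng.normal(0, 0.1, 200)
phi = estimate_grid_orientation(theta[:100], signal[:100])
print(hexadirectional_effect(theta[100:], signal[100:], phi))
```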


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438 ◽
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot’s capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As with person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor’s emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that is able to further enhance the NAO robot’s awareness of human facial expressions and provide the robot with an interlocutor’s arousal level detection capability. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, thus allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
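
As a rough illustration of the kind of model described, the sketch below defines a small convolutional network that classifies face crops into the six expression categories mentioned in the abstract. The architecture, the 48×48 grayscale input size and the absence of training code are assumptions made for brevity; this is not the authors' published model, and integration with the NAO SDK is omitted.

```python
# Minimal sketch of a CNN facial-expression classifier; architecture and
# input size are assumptions, not the model reported in the study.
import torch
import torch.nn as nn

EMOTIONS = ["happy", "sad", "surprised", "scared", "neutral", "angry"]   # assumed label set

class FERNet(nn.Module):
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):          # x: (batch, 1, 48, 48) grayscale face crops
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a dummy 48x48 face crop (untrained weights, random output)
logits = FERNet()(torch.randn(1, 1, 48, 48))
print(EMOTIONS[logits.argmax(dim=1).item()])
```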


Author(s):  
Mark Tee Kit Tsun ◽  
Lau Bee Theng ◽  
Hudyjaya Siswoyo Jo ◽  
Patrick Then Hang Hui

This chapter summarizes the findings of a study on robotics research and application for assisting children with disabilities between the years 2009 and 2013. The disabilities in question include impairments of motor skills, locomotion, and social interaction commonly associated with Autistic Spectrum Disorders (ASD) and Cerebral Palsy (CP). Whereas assistive technologies for disabilities largely focus on restoring physical capabilities, children with disabilities also require dedicated rehabilitation for social interaction and mental health. As such, the breadth of this study covers existing efforts in the rehabilitation of both the physical and socio-psychological domains that involve Human-Robot Interaction. Topics covered include assisted locomotion training, passive stretching and active movement rehabilitation, upper-extremity motor function, social interactivity, therapist-mediators, active play encouragement, as well as several life-long assistive robots in current use. This chapter concludes by drawing attention to ethical and adoption issues that may obstruct the field's effectiveness.


Author(s):  
J. Lindblom ◽  
B. Alenljung

A fundamental challenge of human interaction with socially interactive robots, compared to other interactive products, comes from their being embodied. The embodied nature of social robots raises the questions of the degree to which humans can interact ‘naturally’ with robots, and of the impact that interaction quality has on the user experience (UX). UX is fundamentally about the emotions that arise and form in humans through the use of technology in a particular situation. This chapter aims to contribute to the field of human-robot interaction (HRI) by addressing, in further detail, the role and relevance of embodied cognition for human social interaction, and consequently what role embodiment can play in HRI, especially for socially interactive robots. Furthermore, some challenges for socially embodied interaction between humans and socially interactive robots are outlined, and possible directions for future research are presented. It is concluded that the body is of crucial importance in understanding emotion and cognition in general and, in particular, for a positive user experience to emerge when interacting with socially interactive robots.


Entropy ◽  
2019 ◽  
Vol 21 (2) ◽  
pp. 199 ◽  
Author(s):  
Soheil Keshmiri ◽  
Hidenobu Sumioka ◽  
Ryuji Yamazaki ◽  
Hiroshi Ishiguro

Today’s communication media impact and transform virtually every aspect of our daily communication, and yet the extent to which their embodiment affects our brain is unexplored. The study of this topic becomes more crucial considering the rapid advances in such fields as socially assistive robotics, which envision the use of intelligent and interactive media for providing assistance through social means. In this article, we utilize multiscale entropy (MSE) to investigate the effect of physical embodiment on older people’s prefrontal cortex (PFC) activity while listening to stories. We provide evidence that physical embodiment induces a significant increase in the MSE of older people’s PFC activity and that such a shift in the dynamics of their PFC activation significantly reflects their perceived feeling of fatigue. Our results benefit researchers in age-related cognitive function and rehabilitation who seek to adopt these media in robot-assisted cognitive training of older people. In addition, they offer complementary information to the field of human-robot interaction by providing evidence that MSE can enable interactive learning algorithms to utilize the brain’s activation patterns as feedback for improving their level of interactivity, thereby forming a stepping stone toward a rich and usable human mental model.
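
Multiscale entropy is computed by coarse-graining a signal at increasing time scales and taking the sample entropy at each scale. The sketch below shows one conventional implementation on a simulated time series; the embedding dimension m = 2 and tolerance r = 0.2 × SD are common defaults, not parameters taken from this paper, and the toy data stand in for the PFC recordings.

```python
# Hedged sketch of multiscale entropy (MSE): coarse-grain the signal at
# increasing scales and compute sample entropy at each scale.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal; r is given as a fraction of its std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(dists <= tol) - len(templates)) / 2   # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2, r=0.2):
    """Coarse-grain x at scales 1..max_scale and return sample entropy per scale."""
    x = np.asarray(x, dtype=float)
    mse = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = x[:n * scale].reshape(n, scale).mean(axis=1)   # non-overlapping means
        mse.append(sample_entropy(coarse, m, r))
    return mse

# Toy usage on a simulated time series standing in for a PFC recording
print(multiscale_entropy(np.random.default_rng(0).normal(size=600)))
```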

