Toward Human-Robot Interaction Design through Human-Human Interaction Experiment

Author(s): Yutaka Hiroi, Akinori Ito


2013, Vol 10 (01), pp. 1350006
Author(s): C. C. FORD, G. BUGMANN, P. CULVERHOUSE

This paper describes findings from a human-to-human interaction experiment that examined human communicative non-verbal facial behaviour. The aim was to develop a more comfortable and effective model of social human-robot communication. Analysis of the data revealed a strong co-occurrence between human blink production and non-verbal communicative behaviours: instigation and completion of one's own speech, instigation of the interlocutor's speech, looking at/away from the interlocutor, instigation and completion of facial expressions, and changes of mental communicative state. Seventy-one percent of the 2007 analysed blinks co-occurred with these behaviours within a time window of ±375 ms, well beyond their chance co-occurrence probability of 23%. Thus between 48% (the observed rate minus the rate expected by chance) and 71% of blinks are directly related to human communicative behaviour and are not simply "physiological" (e.g., for cleaning/humidifying the eye). Female participants were found to blink twice as often as male participants in the same communicative scenario, and to have a longer average blink duration. These results provide the basis for the implementation of a blink generation system as part of a social cognitive robot for human-robot interaction.
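The co-occurrence test the abstract describes is straightforward to state in code. Below is a minimal sketch, not the authors' implementation, assuming blink and event annotations are given as timestamps in seconds; the function and variable names are illustrative.

```python
import bisect

WINDOW_S = 0.375  # the +/- 375 ms co-occurrence window reported above

def cooccurring_blinks(blink_times, event_times, window=WINDOW_S):
    """Return the blinks that fall within `window` seconds of any communicative event."""
    events = sorted(event_times)
    hits = []
    for b in blink_times:
        i = bisect.bisect_left(events, b)
        # Only the two nearest events (before and after) can be within the window.
        neighbours = [events[j] for j in (i - 1, i) if 0 <= j < len(events)]
        if any(abs(b - e) <= window for e in neighbours):
            hits.append(b)
    return hits

# Hypothetical timestamps (seconds): 3 of 4 blinks co-occur -> prints 0.75
blinks = [0.10, 1.50, 2.90, 4.20]
events = [0.30, 3.00, 4.00]
print(len(cooccurring_blinks(blinks, events)) / len(blinks))
```

Comparing the fraction this returns against the chance rate (here, 23%) yields the paper's lower bound on communicative blinks.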


2020
Author(s): Agnieszka Wykowska, Jairo Pérez-Osorio, Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI'20 conference workshop "Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction" (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid spread of the novel coronavirus at the beginning of that year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the presence and quality of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


2021, Vol 10 (3), pp. 1-25
Author(s): Ajung Moon, Maneezhay Hashmi, H. F. Machiel Van Der Loos, Elizabeth A. Croft, Aude Billard

When it is uncertain who should get access to a communal resource first, people often negotiate via nonverbal communication to resolve the conflict. What should a robot be programmed to do when such conflicts arise in human-robot interaction? The answer varies depending on the context of the situation. Learning from how humans use hesitation gestures to negotiate a solution in such conflict situations, we present a human-inspired design of nonverbal hesitation gestures for human-robot negotiation. We extracted the characteristic features of the negotiative hesitations humans use, and then designed a trajectory generator (Negotiative Hesitation Generator) that can re-create these features in robot responses to conflicts. Our human-subjects experiment demonstrates the efficacy of the designed robot behaviour compared with a robot's non-negotiative stopping behaviour. With positive results from our human-robot interaction experiment, we provide a validated trajectory generator with which one can explore the dynamics of human-robot nonverbal negotiation of resource conflicts.
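To illustrate the kind of behaviour the paper designs (the actual Negotiative Hesitation Generator is not reproduced here), the following is a minimal one-dimensional sketch: the end-effector accelerates toward the shared resource, and on conflict it partially retracts (the hesitation cue) before resuming or yielding. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def min_jerk(x0, x1, n):
    """Minimum-jerk interpolation from x0 to x1 over n samples (quintic blend)."""
    s = np.linspace(0.0, 1.0, n)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return x0 + (x1 - x0) * blend

def negotiative_hesitation(goal=1.0, pause_at=0.6, retract_to=0.45,
                           resume=True, n=50):
    """Approach, then partially retract (the hesitation cue), then resume or yield."""
    approach = min_jerk(0.0, pause_at * goal, n)
    retract = min_jerk(pause_at * goal, retract_to * goal, n // 2)
    finish = min_jerk(retract_to * goal, goal if resume else 0.0, n)
    return np.concatenate([approach, retract, finish])

traj = negotiative_hesitation(resume=False)  # the robot yields the resource
```

Toggling `resume` gives the two outcomes of the negotiation; a plain stop with no retraction would correspond to the non-negotiative baseline behaviour the paper compares against.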


AI & Society, 2021
Author(s): Nora Fronemann, Kathrin Pollmann, Wulf Loh

To integrate social robots in real-life contexts, it is crucial that they are accepted by users. Acceptance is not only related to the functionality of the robot but also strongly depends on how the user experiences the interaction. Established design principles from usability and user experience research can be applied to the realm of human–robot interaction to design robot behavior for the comfort and well-being of the user. Focusing the design on these aspects alone, however, comes with certain ethical challenges, especially regarding the user's privacy and autonomy. Based on an example scenario of human–robot interaction in elder care, this paper discusses how established design principles can be used in social robotic design. It then juxtaposes these with ethical considerations such as privacy and user autonomy. Combining user experience and ethical perspectives, we propose adjustments to the original design principles and canvass our own design recommendations for a positive and ethically acceptable social human–robot interaction design. In doing so, we show that positive user experience and ethical design may sometimes be at odds, but can be reconciled in many cases if designers are willing to adjust and amend time-tested design principles.


Author(s): Shiyang Dong, Takafumi Matsumaru

This paper presents a novel walking training system for foot-eye coordination. To conveniently design customizable trajectories for different users, the system can track and record the actual walking trajectories of a tutor and use these trajectories for the walking training of a trainee. We set four items as its human-robot interaction design concept: feedback, synchronization, ingenuity and adaptability. A foot model is proposed to define the position and direction of a foot. The errors of the detection method used in the system are less than 40 mm in position and 15 deg in direction. On this basis, three parts are structured to achieve the system functions: Trajectory Designer, Trajectory Viewer and Mobile Walking Trainer. The experimental results confirm that the system works as intended and designed: the steps recorded in Trajectory Designer could be used successfully as the footmarks projected in Mobile Walking Trainer, and foot-eye coordination training could be conducted smoothly.
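The abstract specifies the foot model only as a position plus a direction, so the following is a hypothetical sketch of such a planar foot pose, together with a check against the error bounds reported above (40 mm in position, 15 deg in direction); the field and function names are assumptions, not the paper's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class FootPose:
    x_mm: float         # position on the floor plane, millimetres
    y_mm: float
    heading_deg: float  # direction the foot points, degrees
    side: str           # "left" or "right"

def within_tolerance(measured: FootPose, true: FootPose,
                     pos_tol_mm=40.0, dir_tol_deg=15.0) -> bool:
    """Check a detected pose against the reported error bounds."""
    pos_err = math.hypot(measured.x_mm - true.x_mm, measured.y_mm - true.y_mm)
    # Wrap the angular difference into [-180, 180) before taking its magnitude.
    dir_err = abs((measured.heading_deg - true.heading_deg + 180) % 360 - 180)
    return pos_err <= pos_tol_mm and dir_err <= dir_tol_deg
```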


Sensors, 2021, Vol 21 (19), pp. 6438
Author(s): Chiara Filippini, David Perpetuini, Daniela Cardone, Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot's capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor's emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that is able to further enhance the NAO robot's awareness of human facial expressions and provide the robot with the capability to detect an interlocutor's arousal level. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, thus allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
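The abstract does not detail the network architecture, so the following is only a generic sketch of a convolutional expression classifier of the kind described, assuming 48×48 grayscale face crops and the six expression classes named above; it is not the authors' model.

```python
import torch
import torch.nn as nn

class FERNet(nn.Module):
    """Small CNN mapping a face crop to expression-class logits."""
    def __init__(self, n_classes=6):  # happy, sad, surprised, scared, neutral, angry
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = FERNet()(torch.randn(1, 1, 48, 48))  # one face crop -> class scores
```

The arousal-level detection mentioned above would be an additional output alongside the class logits; it is omitted from this sketch.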


Author(s): J. Lindblom, B. Alenljung

A fundamental challenge of human interaction with socially interactive robots, compared to other interactive products, comes from their being embodied. The embodied nature of social robots raises the question of the degree to which humans can interact 'naturally' with robots, and of what impact the interaction quality has on the user experience (UX). UX is fundamentally about the emotions that arise and form in humans through the use of technology in a particular situation. This chapter aims to contribute to the field of human-robot interaction (HRI) by addressing, in further detail, the role and relevance of embodied cognition for human social interaction, and consequently what role embodiment can play in HRI, especially for socially interactive robots. Furthermore, some challenges for socially embodied interaction between humans and socially interactive robots are outlined, and possible directions for future research are presented. It is concluded that the body is of crucial importance in understanding emotion and cognition in general and, in particular, for a positive user experience to emerge when interacting with socially interactive robots.


2019, Vol 374 (1771), pp. 20180033
Author(s): Birgit Rauchbauer, Bruno Nazarian, Morgane Bourhis, Magalie Ochs, Laurent Prévot, ...

We present a novel functional magnetic resonance imaging paradigm for second-person neuroscience. The paradigm compares a human social interaction (human–human interaction, HHI) to an interaction with a conversational robot (human–robot interaction, HRI). The social interaction consists of 1-min blocks of live bidirectional discussion between the scanned participant and the human or robot agent. A final sample of 21 participants is included in the corpus, comprising physiological (blood oxygen level-dependent, respiration and peripheral blood flow) and behavioural (recorded speech from all interlocutors, eye tracking from the scanned participant, face recording of the human and robot agents) data. Here, we present the first analysis of this corpus, contrasting neural activity between HHI and HRI. We hypothesized that, independently of differences in behaviour between interactions with the human and robot agent, neural markers of mentalizing (temporoparietal junction (TPJ) and medial prefrontal cortex) and social motivation (hypothalamus and amygdala) would only be active in HHI. Results confirmed significantly increased response associated with HHI in the TPJ, hypothalamus and amygdala, but not in the medial prefrontal cortex. Future analysis of this corpus will include fine-grained characterization of verbal and non-verbal behaviours recorded during the interaction to investigate their neural correlates. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human–robot interaction'.

