Cross-cultural study on human-robot greeting interaction: acceptance and discomfort by Egyptians and Japanese

Author(s): Gabriele Trovato, Massimiliano Zecca, Salvatore Sessa, Lorenzo Jamone, Jaap Ham, et al.

Abstract: As witnessed in several behavioural studies, a complex relationship exists between people’s cultural background and their general acceptance of robots. However, very few studies have investigated whether a robot’s language and gestures, rooted in a particular culture, affect people of different cultures. The purpose of this work is to provide experimental evidence supporting the idea that humans more readily accept a robot that can adapt to their specific culture. Indeed, improving acceptance and reducing discomfort is fundamental for the future deployment of robots as assistive, health-care or companion devices in society. We conducted a Human-Robot Interaction experiment in both Egypt and Japan. Human subjects were engaged in a simulated video conference with robots that greeted and spoke either in Arabic or in Japanese. The subjects completed a questionnaire assessing their preferences and their emotional state, while their spontaneous reactions were recorded in different ways. The results suggest that Egyptians prefer the Arabic-speaking robot and feel a sense of discomfort when interacting with the Japanese-speaking one; the opposite holds for the Japanese participants. These findings confirm the importance of localising a robot in order to improve human acceptance during social human-robot interaction.

2019, Vol. 10(1), pp. 256-266
Author(s): Fabio Vannucci, Alessandra Sciutti, Hagen Lehman, Giulio Sandini, Yukie Nagai, et al.

Abstract: In social interactions, human movement is a rich source of information for all those who take part in the collaboration. A variety of intuitive messages are communicated through motion, continuously informing the partners about the future unfolding of the actions. A similar exchange of implicit information could support movement coordination in the context of Human-Robot Interaction. In this work, we investigate how implicit signaling in an interaction with a humanoid robot can lead to emergent coordination in the form of automatic speed adaptation. In particular, we assess whether different cultures, specifically Japanese and Italian, have a different impact on motor resonance and synchronization in HRI. Japanese people show higher general acceptance of robots than people from Western cultures. Since acceptance, or rather affiliation, is tightly connected to imitation and mimicry, we hypothesized a higher degree of speed imitation for Japanese participants than for Italians. In the experimental studies undertaken in both Japan and Italy, we observed that cultural differences do not affect the natural predisposition of subjects to adapt to the robot.


2021, Vol. 10(3), pp. 1-25
Author(s): Ajung Moon, Maneezhay Hashmi, H. F. Machiel Van Der Loos, Elizabeth A. Croft, Aude Billard

When it is uncertain who should get access to a communal resource first, people often negotiate via nonverbal communication to resolve the conflict. What should a robot be programmed to do when such conflicts arise in Human-Robot Interaction? The answer varies depending on the context of the situation. Learning from how humans use hesitation gestures to negotiate a solution in such conflict situations, we present a human-inspired design of nonverbal hesitation gestures that can be used for Human-Robot Negotiation. We extracted characteristic features of the negotiative hesitations humans use, and subsequently designed a trajectory generator (Negotiative Hesitation Generator) that can re-create these features in robot responses to conflicts. Our human-subjects experiment demonstrates the efficacy of the designed robot behaviour against the non-negotiative stopping behaviour of a robot. With positive results from our human-robot interaction experiment, we provide a validated trajectory generator with which one can explore the dynamics of human-robot nonverbal negotiation of resource conflicts.
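As a minimal illustration of what a one-dimensional hesitation trajectory might look like, the sketch below generates a reach profile that approaches the goal, retracts slightly and pauses mid-motion, then resumes. The profile shape, timings and function names are illustrative assumptions, not the authors’ published generator:

```python
import math

def smooth(s):
    """Minimum-jerk-style smoothing of a 0..1 phase variable."""
    s = min(max(s, 0.0), 1.0)
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def hesitation_trajectory(t_total=2.0, dt=0.01, hesitate_at=0.5,
                          pause=0.3, retract=0.1):
    """Toy 1-D reach (0 -> 1) with a mid-motion hesitation: approach the
    midpoint, briefly retract and hold, then resume toward the goal."""
    traj = []
    t = 0.0
    while t <= t_total:
        if t < hesitate_at:
            # Smooth approach to the midpoint of the reach.
            x = smooth(t / hesitate_at) * 0.5
        elif t < hesitate_at + pause:
            # Hesitation: a small retract-and-hold dip below the midpoint.
            phase = (t - hesitate_at) / pause
            x = 0.5 - retract * math.sin(math.pi * phase)
        else:
            # Resume the reach toward the goal.
            phase = (t - hesitate_at - pause) / (t_total - hesitate_at - pause)
            x = 0.5 + smooth(phase) * 0.5
        traj.append((round(t, 3), x))
        t += dt
    return traj
```

A robot could select between this profile and a plain stop depending on whether it intends to concede or contest the resource.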


Author(s): Samuel G. Collins, Goran Trajkovski

In this chapter, we give an overview of the results of a Human-Robot Interaction experiment in a near zero-context environment. We stimulate the formation of a network joining human agents and non-human agents in order to examine emergent conditions and social actions. Human subjects, in teams of three to four, are presented with a task: to coax a robot (by any means) from one side of a table to the other, not knowing with what sensory and motor abilities the robotic structure is equipped. On the one hand, the “goal” of the exercise is to “move” the robot through any linguistic or paralinguistic means. From the perspective of the investigators, however, the goal is both broader and more nebulous: to stimulate any emergent interactions whatsoever between agents, human or non-human. Here we discuss emergent social phenomena in this assemblage of human and machine, in particular turn-taking and discourse, suggesting (counter-intuitively) that the “transparency” of non-human agents may not be the most effective way to generate multi-agent sociality.


AI Magazine, 2011, Vol. 32(4), pp. 53-63
Author(s): Andrea L. Thomaz, Crystal Chao

Turn-taking is a fundamental part of human communication. Our goal is to devise a turn-taking framework for human-robot interaction that, like the human skill, represents something fundamental about interaction and is generic across contexts and domains. We propose a model of turn-taking and conduct an experiment with human subjects to inform this model. Our findings suggest that information flow is an integral part of human floor-passing behavior. Following this, we implement autonomous floor relinquishing on a robot and discuss our insights into the nature of a general turn-taking model for human-robot interaction.


2018, Vol. 9(1), pp. 221-234
Author(s): João Avelino, Tiago Paulino, Carlos Cardoso, Ricardo Nunes, Plinio Moreno, et al.

Abstract: Handshaking is a fundamental part of human physical interaction that is transversal to various cultural backgrounds. It is also a very challenging task in the field of Physical Human-Robot Interaction (pHRI), requiring compliant force control both to plan the arm’s motion and to achieve a confident yet pleasant grasp of the human user’s hand. In this paper, we focus on the study of hand grip strength for comfortable handshakes and perform three sets of physical interaction experiments, with twenty human subjects in the first experiment, thirty-five in the second, and thirty-eight in the third. Tests are made with a social robot whose hands are instrumented with tactile sensors that provide skin-like sensation. From these experiments, we: (i) learn the preferred grip closure for each user group; (ii) analyze the tactile feedback provided by the sensors for each closure; (iii) develop and evaluate a hand grip controller based on the previous data. In addition to the robot-human interactions, we also study the robot’s handshake interactions with inanimate objects, in order to detect whether it is shaking hands with a human or an inanimate object. This work adds physical human-robot interaction to the repertoire of social skills of our robot, fulfilling a demand previously identified by many users of the robot.
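As a rough illustration of the kind of grip controller such tactile data could support, the sketch below runs a simple proportional loop that drives the hand closure toward a preferred tactile pressure. The linear hand model, target pressure and gain are assumptions for illustration only, not the authors’ controller:

```python
def update_closure(closure, measured, target=0.6, gain=0.4, lo=0.0, hi=1.0):
    """One proportional-control step toward the preferred grip pressure,
    with the closure command clamped to the hand's mechanical range."""
    closure += gain * (target - measured)
    return min(max(closure, lo), hi)

def simulate_handshake(stiffness=1.2, steps=50):
    """Simulate grasping a hand whose sensed pressure grows linearly with
    closure (a toy stand-in for the tactile sensor readings)."""
    closure = 0.0
    for _ in range(steps):
        measured = stiffness * closure  # toy tactile model
        closure = update_closure(closure, measured)
    return closure
```

Because the measured stiffness of a human hand differs from that of a rigid inanimate object, the steady-state closure reached by such a loop could also serve as a simple human-versus-object cue.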


Author(s): Wei Quan, Jinseok Woo, Yuichiro Toda, Naoyuki Kubota, et al.

Human posture recognition has been a popular research topic with the development of related fields such as human-robot interaction and simulated operation. Most existing methods are based on supervised learning, and a large amount of training data is required to achieve good performance. In this study, we propose an alternative by applying a number of unsupervised learning algorithms based on the forward kinematics model of the human skeleton. We then refine the proposed method by integrating particle swarm optimization (PSO). The advantage of the proposed method is that no pre-training data are required for human posture generation and recognition. We validate the method by conducting a series of experiments with human subjects.
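To illustrate how PSO can fit joint angles against a forward kinematics model, the sketch below optimizes a toy planar two-link arm so that its end effector matches an observed position, with no training data involved. The arm model, particle count and PSO coefficients are illustrative assumptions, not the authors’ implementation:

```python
import math
import random

def forward_kinematics(angles, lengths=(1.0, 1.0)):
    """End-effector position of a planar 2-link arm (toy skeleton model)."""
    t1, t2 = angles
    x = lengths[0] * math.cos(t1) + lengths[1] * math.cos(t1 + t2)
    y = lengths[0] * math.sin(t1) + lengths[1] * math.sin(t1 + t2)
    return x, y

def pso_fit(target, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """PSO over joint angles: minimize the squared distance between the
    forward-kinematics prediction and an observed end-effector position."""
    random.seed(0)

    def cost(a):
        x, y = forward_kinematics(a)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2

    pos = [[random.uniform(-math.pi, math.pi) for _ in range(2)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    gbest = min(pbest, key=cost)              # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest
```

A full-body skeleton would use the same scheme with more joints and a cost summed over several observed marker positions.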


2022, Vol. 8
Author(s): Niyati Rawal, Dorothea Koert, Cigdem Turan, Kristian Kersting, Jan Peters, et al.

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets combine a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots’ joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classifier network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
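The optimization scheme described above can be illustrated with a deliberately tiny stand-in: two frozen linear maps play the roles of generator and classifier, and the cross-entropy loss between predicted and intended expression is backpropagated to update only the joint configuration. All dimensions, weights and names here are toy assumptions, not the ExGenNet architecture:

```python
import math
import random

random.seed(1)
N_JOINTS, N_FEAT, N_EXPR = 4, 6, 3  # toy sizes, not the real network dimensions

# Frozen "generator" weights: joint configuration -> simplified image features.
G = [[random.gauss(0, 1) for _ in range(N_JOINTS)] for _ in range(N_FEAT)]
# Frozen "classifier" weights: image features -> expression logits.
C = [[random.gauss(0, 1) for _ in range(N_FEAT)] for _ in range(N_EXPR)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def predict(joints):
    """Forward pass: joints -> generator -> classifier -> probabilities."""
    return softmax(matvec(C, matvec(G, joints)))

def optimize_joints(target, steps=2000, lr=0.01):
    """Gradient descent on the joint configuration only: the cross-entropy
    gradient is backpropagated through the frozen classifier and generator."""
    joints = [random.uniform(-0.5, 0.5) for _ in range(N_JOINTS)]
    for _ in range(steps):
        probs = predict(joints)
        # d(cross-entropy)/d(logits) for a one-hot target.
        dlogits = [p - (1.0 if k == target else 0.0)
                   for k, p in enumerate(probs)]
        # Backpropagate through classifier then generator (both frozen).
        grad = matvec(transpose(G), matvec(transpose(C), dlogits))
        joints = [j - lr * g for j, g in zip(joints, grad)]
    return joints
```

Starting the descent from several random joint configurations is one way to obtain the multiple configurations per expression mentioned above.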


2019, Vol. 16(6), p. 1950028
Author(s): Stefano Borgo, Enrico Blanzieri

Robots might not act according to human expectations if they cannot anticipate how people make sense of a situation and what behavior they consider appropriate in the given circumstances. In many cases, understanding, expectations and behavior are constrained, if not driven, by culture, and a robot that knows about human culture could improve the quality of human-robot interaction. Can we share human culture with a robot? Can we provide robots with formal representations of different cultures? In this paper, we discuss the (elusive) notion of culture and propose an approach based on the notion of trait, which, we argue, permits us to build formal modules suitable for representing culture (broadly understood) in a robot architecture. We distinguish the types of traits that such modules should contain, namely behavior, knowledge, rule and interpretation traits, and describe how they could be organized. We identify the interpretation process that maps situations to specific knowledge traits, called scenarios, as a key component of the trait-based culture module. Finally, we describe how culture modules can be integrated into an existing architecture, and discuss three use cases that exemplify the advantages of having a culture module in the robot architecture, highlighting surprising potentialities.
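One way a trait-based culture module with the four trait types could be organized is sketched below, including the interpretation step that maps a situation to a scenario. The class and field names are assumptions for illustration, not the authors’ formalism:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BehaviorTrait:
    name: str
    action: str          # e.g. "bow", "shake_hands"

@dataclass
class KnowledgeTrait:
    scenario: str        # a scenario is a situation-specific knowledge trait
    facts: List[str]

@dataclass
class RuleTrait:
    condition: str       # scenario key under which the rule applies
    behavior: str        # name of the BehaviorTrait to trigger

@dataclass
class CultureModule:
    culture: str
    behaviors: Dict[str, BehaviorTrait] = field(default_factory=dict)
    knowledge: Dict[str, KnowledgeTrait] = field(default_factory=dict)
    rules: List[RuleTrait] = field(default_factory=list)
    # Interpretation traits: map a raw situation to a scenario key.
    interpretation: Dict[str, str] = field(default_factory=dict)

    def interpret(self, situation: str) -> KnowledgeTrait:
        """Map a situation to its scenario (the key interpretation process)."""
        scenario_key = self.interpretation.get(situation, "default")
        return self.knowledge[scenario_key]

    def select_behavior(self, situation: str) -> BehaviorTrait:
        """Pick a culturally appropriate behavior via interpretation + rules."""
        scenario = self.interpret(situation)
        for rule in self.rules:
            if rule.condition == scenario.scenario:
                return self.behaviors[rule.behavior]
        raise KeyError("no applicable rule for this scenario")
```

Swapping one `CultureModule` instance for another (say, a hypothetical Japanese module whose greeting rule triggers a bow) would localize the robot without touching the rest of the architecture.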

