It’s in the eyes: The engaging role of mutual gaze in HRI

2019 · Author(s): Kyveli Kompatsiari, Francesca Ciardo, Vadim Tikhanoff, Giorgio Metta, Agnieszka Wykowska

This paper reports a study in which we examined how users evaluated a humanoid robot depending on whether it established eye contact with them. In two experiments, we manipulated the robot’s gaze: it looked either at the participants’ eyes (mutual gaze) or at a socially neutral position (neutral gaze). Across the two experiments, we varied how predictive the robot’s gaze direction was of a subsequent target stimulus (non-predictive in Exp. 1, counter-predictive in Exp. 2). Results of subjective reports showed that participants were sensitive to eye contact. Moreover, participants were more engaged and ascribed higher intentionality to the robot in the mutual gaze condition than in the neutral condition, independently of the predictiveness of the gaze cue. Our results suggest that embodied humanoid robots can establish eye contact, which in turn has a positive impact on the perceived socialness of the robot and on the quality of human-robot interaction (HRI). Establishing mutual gaze should therefore be considered in the design of robot behaviors for social HRI.

2019 · Author(s): Kyveli Kompatsiari, Vadim Tikhanoff, Francesca Ciardo, Giorgio Metta, Agnieszka Wykowska

Mutual gaze is a key element of human development and constitutes an important factor in human interactions. In this study, we examined, through analysis of subjective reports, the influence of a humanoid robot’s online eye contact on humans’ reception of the robot. To this end, we manipulated the robot’s gaze, i.e., mutual (social) gaze versus neutral (non-social) gaze, throughout an experiment involving letter identification. Our results suggest that people are sensitive to the mutual gaze of an artificial agent, that they feel more engaged with the robot when mutual gaze is established, and that eye contact supports attributing human-like characteristics to the robot. These findings are relevant both to human-robot interaction (HRI) research, for enhancing the social behavior of robots, and to cognitive neuroscience, for studying mechanisms of social cognition in relatively realistic social interactive scenarios.


2013 · Vol 37 (2) · pp. 131-136 · Author(s): Atsushi Senju, Angélina Vernetti, Yukiko Kikuchi, Hironori Akechi, Toshikazu Hasegawa, ...

The current study investigated the role of cultural norms in the development of face-scanning. British and Japanese adults’ eye movements were recorded while they observed avatar faces that moved their mouths and then shifted their eyes toward or away from the participants. British participants fixated more on the mouth, whereas Japanese participants fixated mainly on the eyes. Moreover, the eye fixations of British participants were less affected by the avatar’s gaze shift than those of Japanese participants, who shifted their fixation in the direction of the avatar’s gaze. The results are consistent with Western cultural norms that value the maintenance of eye contact, and with Eastern cultural norms that require flexible use of eye contact and gaze aversion.


2018 · Author(s): Kyveli Kompatsiari, Francesca Ciardo, Vadim Tikhanoff, Giorgio Metta, Agnieszka Wykowska

Most experimental protocols examining joint attention with the gaze cueing paradigm are “observational” and “offline”, and thereby do not involve social interaction. We examined whether, within a naturalistic online interaction, real-time eye contact influences the gaze cueing effect (GCE). We embedded gaze cueing in an interactive protocol with the iCub humanoid robot, which offers ecological validity combined with excellent experimental control. Critically, before averting its gaze, iCub either established eye contact or not, a manipulation enabled by an algorithm detecting the position of the human eyes. For the non-predictive gaze cueing procedure (Experiment 1), only the eye contact condition elicited a GCE, while for the counter-predictive procedure (Experiment 2), only the condition without eye contact induced a GCE. These results reveal an interactive effect of strategic (gaze validity) and social (eye contact) top-down components on the reflexive orienting of attention induced by gaze cues. More generally, we propose that naturalistic protocols with the embodied presence of an agent can cast new light on the mechanisms of social cognition.
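The gaze cueing effect reported in this abstract is conventionally computed as the difference in mean reaction time between invalidly and validly cued trials. A minimal sketch of that computation follows; all reaction-time values are synthetic and purely illustrative, not data from the study:

```python
# Sketch of how a gaze cueing effect (GCE) is conventionally computed:
# mean reaction time (RT) on invalidly cued trials minus mean RT on
# validly cued trials. Values below are synthetic/illustrative only.

rt_valid_ms = [310.0, 295.0, 330.0, 305.0]    # target appeared where the robot looked
rt_invalid_ms = [345.0, 360.0, 338.0, 352.0]  # target appeared opposite the gaze

def mean(values):
    return sum(values) / len(values)

# A positive GCE indicates faster responses to validly cued targets,
# i.e., attention followed the robot's gaze.
gce_ms = mean(rt_invalid_ms) - mean(rt_valid_ms)
```

On this reading, "only the eye contact condition elicited GCE" means the valid/invalid RT difference was reliably positive only after mutual gaze had been established.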


2021 · Author(s): Serena Marchesi, Nicolas Spatola, Agnieszka Wykowska

Evidence from cognitive psychology has shown that cultural differences influence human social cognition, leading to differential activation of social cognitive mechanisms. A growing body of literature in Human-Robot Interaction investigates how culture shapes cognitive processes such as anthropomorphism or mind attribution when humans face artificial agents such as robots. The present paper aims at disentangling the relationship between cultural values, anthropomorphism, and intentionality attribution to robots, in the context of intentional stance theory. We administered a battery of tests to 600 participants from various nations worldwide and modeled our data with a path model. Results showed a consistent direct influence of collectivism on anthropomorphism but not on the adoption of the intentional stance. We therefore further explored this result with a mediation analysis, which revealed anthropomorphism as a true mediator between collectivism and the adoption of the intentional stance. We conclude that our findings extend the previous literature by showing that the adoption of the intentional stance towards humanoid robots depends on anthropomorphic attribution in the context of cultural values.
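The mediation logic described in this abstract (collectivism → anthropomorphism → intentional stance) follows the standard product-of-coefficients approach: the indirect effect is the product of the predictor→mediator slope and the mediator→outcome slope. A simplified sketch with synthetic data; variable names, scales, and values are illustrative only, and a full analysis would fit the mediator→outcome path while controlling for the predictor:

```python
# Simplified product-of-coefficients mediation sketch for
# collectivism (X) -> anthropomorphism (M) -> intentional stance (Y).
# Data are synthetic/illustrative; a proper mediation model estimates
# the M -> Y slope with X included as a covariate.

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    return cov_xy / var_x

collectivism = [1, 2, 3, 4, 5, 6]                    # X: predictor
anthropomorphism = [1.1, 2.0, 2.9, 4.2, 5.1, 5.9]    # M: mediator
intentional_stance = [0.9, 2.1, 3.0, 4.0, 5.2, 6.1]  # Y: outcome

a_path = ols_slope(collectivism, anthropomorphism)        # X -> M
b_path = ols_slope(anthropomorphism, intentional_stance)  # M -> Y (simplified: no X control)
indirect_effect = a_path * b_path                         # mediated effect of X on Y
```

A nonzero indirect effect alongside a weak direct X → Y path is the pattern the abstract describes as anthropomorphism acting as a "true mediator".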


2015 · Vol 48 (48) · pp. 9 · Author(s): Elisabeth Engberg-Pedersen

Linguistic perspective can denote either the way an event is described as seen from the perspective of one of the referents, or various linguistic means used to indicate whether a referent is new or given and whether an event is in the foreground or background. In this article, the former type is called referent perspective, the latter narrator perspective. In Danish Sign Language (DTS), narrator perspective is expressed by the signer’s eye contact with the addressee, by the sign EN (‘one, a’) to indicate a new, prominent referent, and by nonmanual signals indicating topicalization and accessibility. Referent perspective is expressed by combinations of predicates of motion and location with the gaze, facial expression, and head and body orientation that represent a referent. Narratives elicited from DTS-signing adults by means of cartoons are shown to place a strong emphasis on referent perspective compared with narratives in spoken Danish elicited by means of the same cartoons. DTS-signing deaf children of six to nine years of age are shown to be well underway in acquiring the use of EN, but they struggle with the expression of referent perspective, especially the use of gaze direction and facial expression. The results are discussed in relation to Slobin’s (1996) notion of rhetorical style and the role of iconicity in acquisition.


2021 · Vol 4 · Author(s): Elef Schellen, Francesco Bossi, Agnieszka Wykowska

As the use of humanoid robots proliferates, an increasing number of people may find themselves face-to-“face” with a robot in everyday life. Although there is a plethora of information on facial social cues and how we interpret them in human-human social interaction, we cannot assume that these findings transfer flawlessly to human-robot interaction. Therefore, more research on facial cues in human-robot interaction is required. This study investigated deception in a human-robot interaction context, focusing on the effect that eye contact with a robot has on honesty towards this robot. In an iterative task, participants could assist a humanoid robot by providing it with correct information, or potentially secure a reward for themselves by providing it with incorrect information. Results show that participants are increasingly honest after the robot establishes eye contact with them, but only if this is in response to deceptive behavior. Behavior is not influenced by the establishment of eye contact if the participant is actively engaging in honest behavior. These findings support the notion that humanoid robots can be perceived as, and treated like, social agents, since the effect described here mirrors one present in human-human social interaction.


2015 · Vol 1 (1) · Author(s): Elisabeth Engberg-Pedersen

In gesture studies, character viewpoint and observer viewpoint (McNeill 1992) characterize co-speech gestures depending on whether the gesturer’s hand and body imitate a referent’s hand and body or the hand represents a referent in its entirety. In sign languages, handling handshapes and entity handshapes are used in depicting predicates. Narratives in Danish Sign Language (DTS), elicited to make signers describe an event from either the agent’s or the patient’s perspective, demonstrate that discourse perspective is expressed by which referent, the agent or the patient, the signers represent at their own locus. This is reflected in the orientation and movement direction of the manual articulator, not in the type of representation in the articulator. Signers may also imitate the gaze direction of the referent represented at their locus, or have eye contact with the addressees. When they represent a referent at their own locus and simultaneously have eye contact with the addressee, the construction mixes referent perspective and narrator perspective. This description accords with an understanding of linguistic perspective as grounded in bodily perspective within a physical scene (Sweetser 2012) and relates the deictic and attitudinal means for expressing perspective in sign languages to the way perspective is expressed in spoken languages.


2009 · Vol 6 (3-4) · pp. 369-397 · Author(s): Kerstin Dautenhahn, Chrystopher L. Nehaniv, Michael L. Walters, Ben Robins, Hatice Kose-Bagci, ...

This paper provides a comprehensive introduction to the design of the minimally expressive robot KASPAR, which is particularly suitable for human–robot interaction studies. A low-cost design with off-the-shelf components is used in a novel design inspired by a multi-disciplinary viewpoint, including comics design and Japanese Noh theatre. The design rationale of the robot and its technical features are described in detail. Three research studies that have used KASPAR extensively are presented. First, we present its application in robot-assisted play and therapy for children with autism. Second, we illustrate its use in human–robot interaction studies investigating the role of interaction kinesics and gestures. Last, we describe a study in the field of developmental robotics on computational architectures based on interaction histories for robot ontogeny. The three areas differ in how the robot is operated and in its role in the social interaction scenarios. Each is introduced briefly, and examples of the results are presented. We also discuss reflections on the specific design features of KASPAR that were important in these studies, and lessons learnt from them concerning the design of humanoid robots for social interaction. An assessment of the robot in terms of the utility of its design for human–robot interaction experiments concludes the paper.


2014 · Vol 11 (01) · pp. 1450003 · Author(s): Hatice Kose, Neziha Akalin, Pinar Uluer

This paper investigates the role of interaction and communication kinesics in human–robot interaction. The study is part of a novel research project on sign language (SL) tutoring through interaction games with humanoid robots. The main goal is to motivate children with communication problems to understand and imitate the signs implemented by the robot using basic upper-torso gestures and sound. We present an empirical and exploratory study investigating the effect of basic nonverbal gestures (hand movements, body and face gestures) expressed by a humanoid robot; having comprehended the word, participants give relevant feedback in SL. In this way, the participant is both a passive observer and an active imitator throughout the learning process in the different phases of the game. A five-fingered R3 robot platform and a three-fingered Nao H-25 robot are employed within the games. Vision-, sound-, touch- and motion-based cues are used for multimodal communication between the robot, the child and the therapist/parent. This paper presents the preliminary results of the proposed game, tested with adult participants. The aim is to evaluate participants’ ability to learn SL from a robot and to compare different robot platforms within this setup.



