Predicting Extraversion from Non-verbal Features During a Face-to-Face Human-Robot Interaction

Author(s):  
Faezeh Rahbar ◽  
Salvatore M. Anzalone ◽  
Giovanna Varni ◽  
Elisabetta Zibetti ◽  
Serena Ivaldi ◽  
...  


2021 ◽  
Vol 4 ◽  
Author(s):  
Elef Schellen ◽  
Francesco Bossi ◽  
Agnieszka Wykowska

As the use of humanoid robots proliferates, an increasing number of people may find themselves face-to-“face” with a robot in everyday life. Although there is a plethora of information available on facial social cues and how we interpret them in the field of human-human social interaction, we cannot assume that these findings transfer flawlessly to human-robot interaction. Therefore, more research on facial cues in human-robot interaction is required. This study investigated deception in a human-robot interaction context, focusing on the effect that eye contact with a robot has on honesty toward this robot. In an iterative task, participants could assist a humanoid robot by providing it with correct information, or potentially secure a reward for themselves by providing it with incorrect information. Results show that participants are increasingly honest after the robot establishes eye contact with them, but only if this is in response to deceptive behavior. Behavior is not influenced by the establishment of eye contact if the participant is actively engaging in honest behavior. These findings support the notion that humanoid robots can be perceived as, and treated like, social agents, since the effect described here mirrors one present in human-human social interaction.


2011 ◽  
Vol 08 (03) ◽  
pp. 481-511 ◽  
Author(s):  
KRISTOF GORIS ◽  
JELLE SALDIEN ◽  
BRAM VANDERBORGHT ◽  
DIRK LEFEBER

This paper reports on the mechanical design of the huggable robot Probo, which is intended for human–robot interaction (HRI), both physical and cognitive, with a special focus on children. Since most communication passes through nonverbal cues and since people rely on face-to-face communication, Probo's communicative skills focus initially on facial expressions. The robot has 20 high-precision motors in its head and body, which actuate the ears, eyebrows, eyelids, eyes, trunk, mouth, and neck. To build safety intrinsically into the robot's hardware, all motors are linked with flexible components, so that in the event of a collision the robot yields elastically and safety is ensured. The mechanics of Probo are covered by protective plastic shells, foam, and soft fur, which gives the robot its animal-like look and makes it huggable.


Author(s):  
Yulai Weng ◽  
Andrew Specian ◽  
Mark Yim

This paper presents the design of a low-cost system that can be used as a spherical humanoid robot head to display expressive animations for social robotics. The system offers a versatile canvas for Human Robot Interaction (HRI), especially for face-to-face communication. To maximize flexibility, both in the style and in the apparent motion of the robot's head, we exploit the relatively recent availability of low-cost portable projectors in a retro-projected animated face (RAF). The opto-mechanical system comprises a projector whose light is reflected off a hemispherical mirror and onto a 360-degree section of the spherical head with sufficient resolution and illumination. We derive the forward and inverse mapping relations between pixel coordinates on the projector's image plane and points on the outer spherical surface, enabling fast graphics generation. Calibration of the system is achieved by controlling three parameters of image translation and scaling, resulting in a specifically devised light cone whose edges are tangential to the hemispherical mirror. Several facial expressions are tested in illuminated indoor environments to show the system's potential as a modular, low-cost robot head for HRI.
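To make the pixel-to-surface mapping concrete, here is a minimal sketch of the forward direction: a ray is cast from the projector through a pixel on its image plane, reflected off the convex hemispherical mirror, and intersected with the outer spherical shell. This is an illustrative reconstruction, not the paper's derivation; the geometry (focal length, mirror and shell centres and radii) and the function names forward_map and ray_sphere are hypothetical assumptions.

import numpy as np

def ray_sphere(origin, direction, center, radius):
    # Intersection parameters t of the ray origin + t*direction with a sphere;
    # direction must be unit length. Returns None if the ray misses.
    oc = origin - center
    b = np.dot(direction, oc)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    s = np.sqrt(disc)
    return -b - s, -b + s

def forward_map(u, v, f=1.0,
                mirror_c=np.array([0.0, 0.0, 0.10]), mirror_r=0.04,
                head_c=np.array([0.0, 0.0, 0.0]), head_r=0.15):
    # Map a projector pixel (u, v) on the normalized image plane to a 3D point
    # on the head's outer sphere. The projector sits at the head centre and
    # points up toward the mirror; all dimensions are made-up metres.
    o = np.zeros(3)                                   # projector optical centre
    d = np.array([u, v, f])
    d = d / np.linalg.norm(d)                         # pinhole ray through the pixel
    hit = ray_sphere(o, d, mirror_c, mirror_r)        # first hop: the hemispherical mirror
    if hit is None or hit[0] <= 0.0:
        return None                                   # this pixel misses the mirror
    p = o + hit[0] * d
    n = (p - mirror_c) / mirror_r                     # mirror surface normal at the hit point
    d_ref = d - 2.0 * np.dot(d, n) * n                # specular reflection of the ray
    _, t_out = ray_sphere(p, d_ref, head_c, head_r)   # second hop: p is inside the shell,
    return p + t_out * d_ref                          # so the far intersection always exists

print(forward_map(0.02, -0.01))

Sampling this forward map over the pixel grid and interpolating would give a lookup-table approximation of the inverse mapping; the abstract indicates both directions are derived analytically in the paper.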


2013 ◽  
Vol 10 (01) ◽  
pp. 1350011 ◽  
Author(s):  
NICOLE MIRNIG ◽  
ASTRID WEISS ◽  
GABRIEL SKANTZE ◽  
SAMER AL MOUBAYED ◽  
JOAKIM GUSTAFSON ◽  
...  

While much of the state-of-the-art research in human–robot interaction (HRI) investigates task-oriented interaction, this paper explores what people talk to a robot about when the content of the conversation is not predefined. We used the robot head Furhat to explore the conversational behavior of people who encounter a robot in the public setting of a robot exhibition in a scientific museum, but without a predefined purpose. Analysis of the conversations showed that a sophisticated robot provides an inviting atmosphere for people to engage in interaction, to experiment, and to challenge the robot's capabilities. Many visitors to the exhibition were willing to go beyond the guiding questions that were provided as a starting point. Amongst other things, they asked Furhat questions concerning the robot itself, such as how it would define a robot, or if it plans to take over the world. People were also interested in the feelings and likes of the robot and asked many personal questions; this is how Furhat ended up with its first marriage proposal. People who talked to Furhat were asked to complete a questionnaire on their assessment of the conversation, and the responses show that the interaction with Furhat was rated as a pleasant experience.


2008 ◽  
Vol 20 (3) ◽  
pp. 378-385 ◽  
Author(s):  
Hidenobu Sumioka ◽  
Yuichiro Yoshikawa ◽  
Minoru Asada

Joint attention, i.e., the behavior of looking at the same object that another person is looking at, plays an important role in human and human-robot communication. Previous synthetic studies focusing on modeling the early developmental process of joint attention have proposed learning methods that do not require explicit instructions for joint attention. In these studies, however, the causal structure between a perception variable (a caregiver’s face direction or an individual object) and an action variable (gaze shift to the caregiver’s face or to an object location) was given in advance in order to learn joint attention, whereas such a structure should ideally be discovered by the robot through its own interaction experiences. In this paper, we investigate how transfer entropy, an information-theoretic measure, can be used to quantify the causality inherent in face-to-face interaction. In computer simulations of human-robot interaction, we examine which pairs of perceptions and actions are selected as causal pairs and show that the selected pairs can be used to learn a sensorimotor map for joint attention.
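As a concrete illustration of the measure referred to above, the sketch below estimates transfer entropy from a perception time series x to an action time series y with a simple plug-in (histogram) estimator over one-step histories. It is not the authors' implementation; the binary toy data and the estimator choice are assumptions made for illustration.

import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    # Plug-in estimate of transfer entropy X -> Y in bits, using one-step histories:
    # TE = sum p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]
    x, y = np.asarray(x), np.asarray(y)
    triples = list(zip(y[1:], y[:-1], x[:-1]))                     # (y_{t+1}, y_t, x_t)
    n = len(triples)
    joint = Counter(triples)
    count_yx = Counter((yt, xt) for _, yt, xt in triples)          # (y_t, x_t)
    count_yy = Counter((ynext, yt) for ynext, yt, _ in triples)    # (y_{t+1}, y_t)
    count_y = Counter(yt for _, yt, _ in triples)                  # y_t
    te = 0.0
    for (ynext, yt, xt), c in joint.items():
        p_full = c / count_yx[(yt, xt)]                            # p(y_{t+1} | y_t, x_t)
        p_hist = count_yy[(ynext, yt)] / count_y[yt]               # p(y_{t+1} | y_t)
        te += (c / n) * np.log2(p_full / p_hist)
    return te

# Toy check: y is a noisy one-step-delayed copy of x, so TE(x -> y) should clearly exceed TE(y -> x).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 2000)
flips = (rng.random(2000) < 0.1).astype(int)
y = np.roll(x, 1) ^ flips
print(transfer_entropy(x, y), transfer_entropy(y, x))

Comparing such estimates across candidate perception-action pairs is one way a robot could rank them by the strength of their causal coupling, in the spirit of the pair selection described in the abstract.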


2009 ◽  
Author(s):  
Matthew S. Prewett ◽  
Kristin N. Saboe ◽  
Ryan C. Johnson ◽  
Michael D. Coovert ◽  
Linda R. Elliott

2010 ◽  
Author(s):  
Eleanore Edson ◽  
Judith Lytle ◽  
Thomas McKenna
