Design and Validation of a Talking Face for the iCub

2015 ◽  
Vol 12 (03) ◽  
pp. 1550026 ◽  
Author(s):  
Alberto Parmiggiani ◽  
Marco Randazzo ◽  
Marco Maggiali ◽  
Giorgio Metta ◽  
Frederic Elisei ◽  
...  

Recent developments in human–robot interaction show that the ability to communicate with people in a natural way is of great importance for artificial agents. The implementation of facial expressions has been found to significantly increase the interaction capabilities of humanoid robots. For speech, displaying correct articulation together with the sound is mandatory to avoid audiovisual illusions like the McGurk effect (leading to comprehension errors) as well as to enhance intelligibility in noisy conditions. This work describes the design, construction and testing of an animatronic talking face developed for the iCub robot. The talking head has an articulated jaw and four independent lip movements actuated by five motors. It is covered by a specially designed elastic tissue cover whose hemlines at the lips are attached to the motors via connecting linkages. The mechanical design and the control scheme have been evaluated with speech intelligibility in noise (SPIN) perceptual tests, which demonstrate an absolute 10% intelligibility gain provided by the jaw and lip movements over the audio-only display.
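As an illustration of how such a five-motor face might be driven, the sketch below maps viseme targets (mouth shapes for speech sounds) to normalized commands for a jaw actuator and four lip actuators like those described in the abstract. This is a minimal sketch under our own assumptions, not the authors' controller; the motor names and viseme values are hypothetical.

# Illustrative viseme-to-motor mapping for a five-actuator talking face.
# Each viseme is a target posture for the jaw and four lip points,
# expressed as normalized positions in [0, 1]. Values are hypothetical.
VISEME_TARGETS = {
    "rest": {"jaw": 0.0, "lip_up": 0.0, "lip_low": 0.0, "lip_left": 0.0, "lip_right": 0.0},
    "aa":   {"jaw": 0.8, "lip_up": 0.3, "lip_low": 0.6, "lip_left": 0.2, "lip_right": 0.2},
    "oo":   {"jaw": 0.4, "lip_up": 0.5, "lip_low": 0.5, "lip_left": 0.9, "lip_right": 0.9},
    "mm":   {"jaw": 0.0, "lip_up": 0.1, "lip_low": 0.1, "lip_left": 0.3, "lip_right": 0.3},
}

def interpolate(a, b, t):
    """Linear blend between two viseme postures, t in [0, 1]."""
    return {k: (1.0 - t) * a[k] + t * b[k] for k in a}

def viseme_trajectory(visemes, steps_per_transition=10):
    """Yield successive motor postures for a viseme sequence, so lip
    motion can stay time-aligned with the audio track."""
    for cur, nxt in zip(visemes, visemes[1:]):
        for i in range(steps_per_transition):
            yield interpolate(VISEME_TARGETS[cur], VISEME_TARGETS[nxt],
                              i / steps_per_transition)

if __name__ == "__main__":
    for posture in viseme_trajectory(["rest", "aa", "mm", "oo"]):
        print(posture)  # a real system would send these to the motor drivers

Linear interpolation between successive visemes is the simplest way to keep articulation smooth and synchronized with the sound, which is the property the SPIN evaluation in the abstract depends on.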

Author(s):  
Giorgio Metta

This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software: machines that try to recreate, with some of the abilities and characteristics of living beings, the products of billions of years of evolution. Humanoids could be especially useful for their ability to "live" in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain capacities similar to those of humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.


2020 ◽  
Vol 12 (1) ◽  
pp. 58-73
Author(s):  
Sofia Thunberg ◽  
Tom Ziemke

Interaction between humans and robots will benefit if people have at least a rough mental model of what a robot knows about the world and what it plans to do. But how do we design human-robot interactions to facilitate this? Previous research has shown that one can change people's mental models of robots by manipulating the robots' physical appearance. However, this has mostly not been done in a user-centred way, i.e. without a focus on what users need and want. Starting from theories of how humans form and adapt mental models of others, we investigated how the participatory design method, PICTIVE, can be used to generate design ideas about how a humanoid robot could communicate. Five participants went through three phases based on eight scenarios from the state-of-the-art tasks in the RoboCup@Home social robotics competition. The results indicate that participatory design can be a suitable method to generate design concepts for robots' communication in human-robot interaction.


Author(s):  
Louise LePage

Stage plays, theories of theatre, narrative studies, and robotics research can serve to identify, explore, and interrogate theatrical elements that support the effective performance of sociable humanoid robots. Theatre, including its parts of performance, aesthetics, character, and genre, can also reveal features of human–robot interaction key to creating humanoid robots that are likeable rather than uncanny. In particular, this can be achieved by relating Mori's (1970/2012) concept of total appearance to realism. Realism is broader and more subtle in its workings than is generally recognised in its operationalization in studies that focus solely on appearance. For example, it is complicated by genre. A realistic character cast in a detective drama will convey different qualities and expectations than the same character in a dystopian drama or romantic comedy. The implications of realism and genre carry over into real life. As stage performances and robotics studies reveal, likeability depends on creating aesthetically coherent representations of character, where all the parts coalesce to produce a socially identifiable figure demonstrating predictable behaviour.


2012 ◽  
Vol 3 (2) ◽  
pp. 68-83 ◽  
Author(s):  
David K. Grunberg ◽  
Alyssa M. Batula ◽  
Erik M. Schmidt ◽  
Youngmoo E. Kim

The recognition and display of synthetic emotions in humanoid robots is a critical attribute for facilitating natural human-robot interaction. The authors use an efficient algorithm to estimate the mood of acoustic music, and then use that algorithm's output to drive movement-generation systems that produce robot motions suited to the music. The system is evaluated on multiple sets of humanoid robots to determine whether the choice of robot platform or the number of robots influences the perceived emotional content of the motions. The tests verify that the system can accurately identify the emotional content of acoustic music and produce motions that convey an emotion similar to that in the audio. The authors also determine the perceptual effects of using robots of different sizes, or different numbers of robots, in the motion performances.
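A minimal sketch of the kind of mood-to-motion mapping the abstract describes, under our own assumptions: the valence/arousal parameterization and all mapping constants below are illustrative, not taken from the paper.

# Illustrative mapping from an estimated (valence, arousal) mood
# coordinate, as produced by a music-analysis front end, to motion
# parameters for a humanoid robot. Constants are assumptions.

def mood_to_motion(valence, arousal):
    """Map a mood estimate in [-1, 1] x [-1, 1] to motion parameters.

    High arousal -> faster, larger movements; positive valence ->
    more open postures; negative valence -> more closed ones.
    """
    speed = 0.5 + 0.5 * arousal            # normalized gesture tempo
    amplitude = 0.4 + 0.6 * abs(arousal)   # joint excursion scale
    openness = 0.5 + 0.5 * valence         # open vs. closed posture
    return {"speed": speed, "amplitude": amplitude, "openness": openness}

# e.g. an excited, happy piece vs. a slow, sad one
print(mood_to_motion(valence=0.8, arousal=0.7))
print(mood_to_motion(valence=-0.6, arousal=-0.5))

Mapping arousal to tempo and amplitude, and valence to postural openness, is a common heuristic in affective robotics; the authors' actual movement-generation systems are more elaborate.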


2018 ◽  
Vol 15 (4) ◽  
pp. 172988141877319 ◽  
Author(s):  
S M Mizanoor Rahman ◽  
Ryojun Ikeura

In the first step, a one-degree-of-freedom power-assist robotic system is developed for lifting lightweight objects. The dynamics of human–robot co-manipulation are derived so as to include human cognition, for example, weight perception. A novel admittance control scheme is derived using the weight perception–based dynamics. Human subjects lift a small, lightweight object with the power-assist robotic system, and the human–robot interaction and system characteristics are analyzed. A comprehensive scheme is developed to evaluate the human–robot interaction and performance, and a constrained optimization algorithm is developed to determine the optimum human–robot interaction and performance. The results show that including weight perception in the control helps achieve optimum human–robot interaction and performance under a set of hard constraints. In the second step, the same optimization algorithm and control scheme are used for lifting a heavy object with a multi-degree-of-freedom power-assist robotic system. The results show that the human–robot interaction and performance for lifting the heavy object are not as good as those for lifting the lightweight object. Weight perception–based intelligent controls, in the forms of model predictive control and vision-based variable admittance control, are then applied for lifting the heavy object. The results show that the intelligent controls enhance human–robot interaction and performance, help achieve optimum human–robot interaction and performance under a set of soft constraints, and produce human–robot interaction and performance similar to those obtained for lifting the lightweight object. The human–robot interaction and performance for lifting the heavy object with power assist are treated as intuitive and natural because they are calibrated against those for lifting the lightweight object. The results also show that the variable admittance control outperforms the model predictive control. A method is also proposed to adjust the variable admittance control for three-degree-of-freedom translational manipulation of heavy objects based on human intent recognition. The results are useful for developing controls for human-friendly, high-performance power-assist robotic systems for heavy-object manipulation in industry.
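To make the control idea concrete, here is a one-degree-of-freedom admittance-control sketch in the spirit of the abstract. It is an illustration under assumed parameters, not the authors' controller: setting the virtual mass and damping below the object's real dynamics is what makes the load feel lighter to the human, which is where perceived weight enters the loop.

# One-DOF admittance control sketch: the human's applied force drives
# a virtual dynamics, and the resulting velocity is what the
# position-controlled robot tracks. All numerical values are assumptions.

def simulate_lift(force_profile, m_virtual=0.5, d_virtual=2.0, dt=0.01):
    """Integrate the admittance law  m_v * a + d_v * v = f_human.

    force_profile: sequence of human force samples in newtons.
    Returns the commanded position trajectory in meters.
    """
    v, x, trajectory = 0.0, 0.0, []
    for f_human in force_profile:
        a = (f_human - d_virtual * v) / m_virtual
        v += a * dt
        x += v * dt
        trajectory.append(x)
    return trajectory

# constant 1 N upward pull sustained for one second
print(simulate_lift([1.0] * 100)[-1])

Tuning m_virtual and d_virtual to match how heavy the object should feel, rather than how heavy it is, is one simple way to fold weight perception into the control law.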


2012 ◽  
Vol 09 (04) ◽  
pp. 1250028 ◽  
Author(s):  
ELENA TORTA ◽  
RAYMOND H. CUIJPERS ◽  
JAMES F. JUOLA ◽  
DAVID VAN DER POL

Humanoid robots that share the same space with humans need to be socially acceptable and effective as they interact with people. In this paper we focus our attention on the definition of a behavior-based robotic architecture that (1) allows the robot to navigate safely in a cluttered and dynamically changing domestic environment and (2) encodes embodied non-verbal interactions: the robot respects the user's personal space (PS) by choosing the appropriate distance and direction of approach. The model of the PS is derived from human–robot interaction tests, and it is described in a convenient mathematical form. The robot's target location is dynamically inferred through the solution of a Bayesian filtering problem. The validation of the overall behavioral architecture shows that the robot is able to exhibit appropriate proxemic behavior.
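As an illustration of what a mathematical PS model might look like, the sketch below scores candidate approach poses by distance and direction and picks the least uncomfortable one. The functional form and all parameter values are our assumptions, not the model fitted from the paper's human–robot interaction tests.

# Illustrative personal-space cost: comfort peaks at a frontal approach
# at about 1.2 m; approaches from behind are penalized.

import math

def approach_cost(distance, angle):
    """Discomfort of stopping at (distance, angle) relative to the user.

    angle = 0 means directly in front; |angle| = pi means behind.
    Lower cost = better approach pose.
    """
    comfort = math.exp(-((distance - 1.2) ** 2) / (2 * 0.3 ** 2))
    rear_penalty = 0.5 * (1 - math.cos(angle))  # 0 in front, 1 behind
    return (1 - comfort) + rear_penalty

# pick the best of a few candidate approach poses
candidates = [(0.6, 0.0), (1.2, 0.0), (1.2, math.pi / 2), (1.2, math.pi)]
best = min(candidates, key=lambda c: approach_cost(*c))
print(best)  # expect the frontal pose at ~1.2 m

In the architecture the abstract describes, a cost of this kind would be evaluated against the target location inferred by the Bayesian filter rather than against fixed candidates.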


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others' behaviour in order to navigate through our social environment. Predictions concerning other humans' behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of the intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance, and examines literature related to the development of the intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and finally, it focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots' behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of robots towards which we adopt the intentional stance, and sketches future directions in research on this topic.


2019 ◽  
Vol 10 (1) ◽  
pp. 20-33
Author(s):  
Catelyn Scholl ◽  
Susan McRoy

Gestures that co-occur with speech are a fundamental component of communication. Prior research with children suggests that gestures may help them to resolve certain forms of lexical ambiguity, including homophones. To test this idea in the context of human-robot interaction, the effects of iconic and deictic gestures on the understanding of homophones were assessed in an experiment where a humanoid robot told a short story containing pairs of homophones to small groups of young participants, accompanied by either expressive gestures or no gestures. Both groups of subjects completed a pre-test and a post-test to measure their ability to discriminate between pairs of homophones, from which aggregated precision was calculated. The results show that the use of iconic and deictic gestures aids the general understanding of homophones, providing additional evidence for the importance of gesture to the development of children's language and communication skills.
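On our reading, the aggregated-precision measure pools every discrimination answer across participants and items before scoring. A minimal sketch of that computation follows; the abstract does not give the exact formula, so the pooling scheme and the example data are assumptions.

# Pool all homophone-discrimination answers across participants and
# items, then score the pooled set. Example data are invented.

def aggregated_precision(responses):
    """responses: list of (chosen, correct) homophone pairs.

    Returns the fraction of pooled answers that match the correct
    homophone, i.e. one score per test rather than per participant.
    """
    if not responses:
        return 0.0
    hits = sum(1 for chosen, correct in responses if chosen == correct)
    return hits / len(responses)

pre_test = [("flower", "flour"), ("flour", "flour"), ("sun", "sun")]
post_test = [("flour", "flour"), ("flour", "flour"), ("sun", "sun")]
print(aggregated_precision(pre_test), aggregated_precision(post_test))

Comparing the pooled pre-test and post-test scores between the gesture and no-gesture groups is then what supports the paper's conclusion about gesture aiding homophone understanding.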


2021 ◽  
Author(s):  
Elef Schellen ◽  
Francesco Bossi ◽  
Agnieszka Wykowska

As the use of humanoid robots proliferates, an increasing number of people may find themselves face-to-"face" with a robot in everyday life. Although there is a plethora of information available on facial social cues and how we interpret them in the field of human-human social interaction, we cannot assume that these findings transfer flawlessly to human-robot interaction. Therefore, more research on facial cues in human-robot interaction is required. This study investigated deception in the context of human-robot interaction, focusing on the effect that eye contact with a robot has on honesty towards that robot. In an iterative task, participants could assist a humanoid robot by providing it with correct information, or potentially secure a reward for themselves by providing it with incorrect information. Results show that participants are increasingly honest after the robot establishes eye contact with them, but only if this is in response to deceptive behavior. Behavior is not influenced by the establishment of eye contact if the participant is actively engaging in honest behavior. These findings support the notion that humanoid robots can be perceived as, and treated like, social agents, since the effect described herein mirrors one present in human-human social interaction.


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and to various expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNet couples a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expressions through the classification network and the generator network. To improve the transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNet can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study, in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
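A schematic PyTorch sketch of the optimization loop the abstract describes, under our own assumptions about network shapes (this is not the released ExGenNet code): the joint-configuration vector is the only trainable variable, and the classification loss is backpropagated through a frozen generator and a frozen classifier.

# Optimize a joint configuration so that generator(joints) is
# classified as the intended expression. Shapes are assumptions.

import torch
import torch.nn as nn

N_JOINTS, N_EXPRESSIONS, IMG_DIM = 12, 6, 64 * 64

generator = nn.Sequential(nn.Linear(N_JOINTS, 256), nn.ReLU(),
                          nn.Linear(256, IMG_DIM))          # joints -> face image
classifier = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU(),
                           nn.Linear(128, N_EXPRESSIONS))   # image -> expression

for net in (generator, classifier):     # both networks stay frozen
    for p in net.parameters():
        p.requires_grad_(False)

joints = torch.zeros(1, N_JOINTS, requires_grad=True)  # the variable we optimize
target = torch.tensor([3])                             # intended expression index
opt = torch.optim.Adam([joints], lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    logits = classifier(generator(joints))
    loss = loss_fn(logits, target)
    loss.backward()                      # gradients flow back to the joints
    opt.step()
    joints.data.clamp_(-1.0, 1.0)        # keep commands within joint limits

print(joints.detach())                   # one candidate joint configuration

Restarting this loop from different initial joint vectors is one way such an approach could yield the multiple configurations per expression that the abstract mentions.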

