Robot feedback shapes the tutor’s presentation

2013 ◽  
Vol 14 (2) ◽  
pp. 268-296 ◽  
Author(s):  
Karola Pitsch ◽  
Anna-Lisa Vollmer ◽  
Manuel Mühlig

The paper investigates the effects of a humanoid robot’s online feedback during a tutoring situation in which a human demonstrates how to make a frog jump across a table. Motivated by micro-analytic studies of adult-child interaction, we investigated whether tutors react to a robot’s gaze strategies while presenting an action and, if so, how they adapt to them. Analysis reveals that tutors adjust typical “motionese” parameters (pauses, speed, and height of motion). We argue that a robot – when using adequate online feedback strategies – has at its disposal an important resource with which it could proactively shape the tutor’s presentation and help generate the input from which it would benefit most. These results advance our understanding of robotic “Social Learning” in that they suggest a paradigm shift towards considering human and robot as one interactional learning system. Keywords: human-robot-interaction; feedback; adaptation; multimodality; gaze; conversation analysis; social learning; pro-active robot conduct
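
The abstract does not specify the robot’s gaze strategies in detail; purely as an illustration, the sketch below shows one way an online gaze-feedback policy could be organised, with the robot alternating between the demonstrated object and the tutor’s face depending on whether the tutor’s hand is moving or pausing. The thresholds, field names, and the policy itself are assumptions, not the authors’ implementation.

```python
# Minimal sketch (assumptions, not the authors' system) of an online
# gaze-feedback policy for a robot learner observing a tutor's demonstration.

from dataclasses import dataclass


@dataclass
class HandObservation:
    speed: float   # estimated hand speed in m/s (hypothetical input)
    pausing: bool  # True if the hand has been still for a short interval


def choose_gaze_target(obs: HandObservation, speed_threshold: float = 0.05) -> str:
    """Return where the robot should look during the next control cycle."""
    if obs.pausing or obs.speed < speed_threshold:
        # During pauses, looking up at the tutor signals attention and
        # invites the tutor to continue or repeat the demonstration.
        return "tutor_face"
    # While the action is in progress, tracking the object signals
    # that the robot is following the demonstration.
    return "object"


if __name__ == "__main__":
    print(choose_gaze_target(HandObservation(speed=0.20, pausing=False)))  # object
    print(choose_gaze_target(HandObservation(speed=0.01, pausing=True)))   # tutor_face
```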

2014 ◽  
Vol 15 (1) ◽  
pp. 55-98 ◽  
Author(s):  
Karola Pitsch ◽  
Anna-Lisa Vollmer ◽  
Katharina J. Rohlfing ◽  
Jannik Fritsch ◽  
Britta Wrede

Research on tutoring in parent-infant interaction has shown that tutors – when presenting some action – modify both their verbal and manual performance for the learner (‘motherese’, ‘motionese’). Investigating the sources and effects of the tutors’ action modifications, we suggest an interactional account of ‘motionese’. Using video data from a semi-experimental study in which parents taught their 8- to 11-month-old infants how to nest a set of differently sized cups, we found that the tutors’ action modifications (in particular, high arches) functioned as an orienting device to guide the infant’s visual attention (gaze). Action modification and the recipient’s gaze can be seen to have a reciprocal sequential relationship and to constitute a constant loop of mutual adjustments. Implications are discussed for developmental research and for robotic ‘Social Learning’. We argue that a robot system could use on-line feedback strategies (e.g. gaze) to pro-actively shape a tutor’s action presentation as it emerges.


2007 ◽  
Vol 8 (1) ◽  
pp. 53-81 ◽  
Author(s):  
Luís Seabra Lopes ◽  
Aneesh Chauhan

This paper addresses word learning for human–robot interaction. The focus is on making a robotic agent aware of its surroundings by having it learn the names of the objects it can find. The human user, acting as instructor, can help the robotic agent ground the words used to refer to those objects. A lifelong learning system based on one-class learning (OCLL) was developed. The system is incremental and evolves with the presentation of each new word, which acts as a new class for the robot, relying on instructor feedback. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot’s vocabulary will be limited by its discriminatory capacity, which in turn depends on its sensors and perceptual capabilities. The results indicate that the robot’s representations are capable of incrementally evolving by correcting class descriptions, based on instructor feedback to classification results. In successive experiments, the robot was able to learn between 6 and 12 names of real-world office objects. Although these results are comparable to those obtained by other authors, there is a need to scale up. The limitations of the method are discussed and potential directions for improvement are pointed out.
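
The OCLL system is not described here in enough detail to reproduce; the following is a minimal sketch, under stated assumptions, of the general idea of open-ended, one-class word learning with instructor feedback: each taught word forms its own class, classification returns nothing when no class is close enough, and corrections from the instructor are folded back into the class descriptions. The nearest-centroid rule, the distance threshold, and all names are illustrative choices, not the paper’s method.

```python
# Minimal sketch (not the authors' OCLL implementation) of incremental,
# open-ended word learning with one class per taught word.

import numpy as np


class OneClassWordLearner:
    def __init__(self, threshold: float = 1.0):
        self.classes: dict[str, list[np.ndarray]] = {}
        self.threshold = threshold  # maximum distance at which a class is accepted

    def teach(self, word: str, features) -> None:
        """Instructor names an object; store the example under that word."""
        self.classes.setdefault(word, []).append(np.asarray(features, float))

    def classify(self, features):
        """Return the closest known word, or None if nothing is close enough."""
        features = np.asarray(features, float)
        best_word, best_dist = None, float("inf")
        for word, examples in self.classes.items():
            centroid = np.mean(examples, axis=0)
            dist = float(np.linalg.norm(features - centroid))
            if dist < best_dist:
                best_word, best_dist = word, dist
        return best_word if best_dist <= self.threshold else None

    def feedback(self, features, correct_word: str) -> None:
        """Instructor corrects a wrong answer; fold it back into the class description."""
        self.teach(correct_word, features)
```

The open-ended aspect is that `teach` can introduce a previously unseen word at any time, so the number of classes grows with the instructor’s vocabulary rather than being fixed in advance.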


2008 ◽  
Vol 5 (4) ◽  
pp. 213-223 ◽  
Author(s):  
Shuhei Ikemoto ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

In this paper, we investigate physical human–robot interaction (PHRI) as an important extension of traditional HRI research. The aim of this research is to develop a motor learning system that uses physical help from a human helper. We first propose a new control system that takes advantage of inherent joint flexibility. This control system is applied to a new humanoid robot called CB2. In order to clarify the difference between successful and unsuccessful interaction, we conduct an experiment in which a human subject has to help the CB2 robot in its rising-up motion. We then develop a new measure that captures the difference between smooth and non-smooth physical interactions. An analysis of the experiment’s data, based on the introduced measure, shows significant differences between experts and beginners in human–robot interaction.
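
The authors’ own measure is not given in the abstract; as an assumed stand-in for illustration, the sketch below computes the widely used dimensionless (normalized) jerk cost of a recorded trajectory, where lower values correspond to smoother motion. This is not the measure introduced in the paper.

```python
# Minimal sketch of a smoothness measure for physical interaction data:
# the dimensionless (normalized) jerk cost of a 1-D trajectory sampled at
# a fixed interval dt. Lower values indicate smoother motion.

import numpy as np


def normalized_jerk(trajectory, dt: float) -> float:
    """Dimensionless jerk cost of a 1-D trajectory (joint angle, force, ...)."""
    traj = np.asarray(trajectory, float)
    # Third numerical derivative of the trajectory (the jerk).
    jerk = np.gradient(np.gradient(np.gradient(traj, dt), dt), dt)
    duration = dt * (len(traj) - 1)
    amplitude = float(traj.max() - traj.min())
    if amplitude == 0.0:
        return 0.0
    # Scale by duration^5 / amplitude^2 to make the cost dimensionless.
    return float(np.sum(jerk ** 2) * dt * duration ** 5 / amplitude ** 2)
```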


Robotics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 113
Author(s):  
Diogo Carneiro ◽  
Filipe Silva ◽  
Petia Georgieva

Catching flying objects is a challenging task in human–robot interaction. Traditional techniques predict the intersection position and time using only the information obtained during the ball’s free flight. A common limitation of these systems is the short flight time and the uncertainty in the estimated ball trajectory. In this paper, we present the Robot Anticipation Learning System (RALS), which also accounts for the information obtained by observing the thrower’s hand motion before the ball is released. This gives the robot extra time: it can start moving toward the predicted target before the opponent finishes throwing. To the best of our knowledge, this is the first robot control system for ball catching with anticipation skills. Our results show that fusing information from both the throwing and the flying motion improves the ball-catching rate by up to 20% compared to the baseline approach, in which predictions rely only on the information acquired during the flight phase.
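
RALS’s internals are not detailed in the abstract; the sketch below only illustrates the general idea of blending an early, hand-motion-based prediction of the catch point with the running flight-based estimate, so that the robot can already be moving when the ball is released. The linear blending rule, function names, and parameters are assumptions made for illustration.

```python
# Minimal sketch (illustrative assumptions, not the RALS implementation) of
# fusing a pre-release prediction from the thrower's hand motion with the
# running estimate from the ball's free flight.

import numpy as np


def fused_intercept(hand_pred, flight_pred, flight_fraction: float) -> np.ndarray:
    """Blend two predicted catch positions.

    flight_fraction: portion of the expected flight already observed (0..1);
    0 trusts only the hand-motion prediction, 1 trusts only the flight estimate.
    """
    w = float(np.clip(flight_fraction, 0.0, 1.0))
    return (1.0 - w) * np.asarray(hand_pred, float) + w * np.asarray(flight_pred, float)


if __name__ == "__main__":
    early = fused_intercept([0.6, 0.2, 1.1], [0.7, 0.1, 1.0], flight_fraction=0.1)
    late = fused_intercept([0.6, 0.2, 1.1], [0.7, 0.1, 1.0], flight_fraction=0.9)
    print(early)  # stays close to the hand-based guess right after release
    print(late)   # converges to the flight-based estimate as the flight progresses
```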


2013 ◽  
Vol 14 (3) ◽  
pp. 366-389
Author(s):  
Akiko Yamazaki ◽  
Keiichi Yamazaki ◽  
Keiko Ikeda ◽  
Matthew Burdelski ◽  
Mihoko Fukushima ◽  
...  

This paper reports on a quiz robot experiment in which we explore similarities and differences in human participant speech, gaze, and bodily conduct in responding to a robot’s speech, gaze, and bodily conduct across two languages. Our experiment involved three-person groups of Japanese- and English-speaking participants who stood facing the robot and a projection screen that displayed pictures related to the robot’s questions. The robot was programmed so that its speech was coordinated with its gaze, body position, and gestures in relation to transition relevance places (TRPs), key words, and deictic words and expressions (e.g. this, this picture) in both languages. Contrary to findings on human interaction, we found that the frequency of English speakers’ head nodding was higher than that of Japanese speakers in human-robot interaction (HRI). Our findings suggest that the coordination of the robot’s verbal and non-verbal actions surrounding TRPs, key words, and deictic words and expressions is important for facilitating HRI irrespective of participants’ native language. Keywords: coordination of verbal and non-verbal actions; robot gaze comparison between English and Japanese; human-robot interaction (HRI); transition relevance place (TRP); conversation analysis
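
As a rough illustration of the kind of coordination described (not the experiment’s actual software), the sketch below schedules a pointing gesture at each deictic word and a gaze shift to the participants at the transition relevance place, i.e. when the question is complete and an answer is due. Word timings, action names, and the deictic word list are assumptions.

```python
# Minimal sketch (hypothetical API) of aligning a robot's non-verbal actions
# with word-level timestamps of its synthesized speech.

from dataclasses import dataclass


@dataclass
class TimedWord:
    word: str
    start: float  # seconds from utterance onset


def schedule_nonverbal(words, deictic=frozenset({"this", "that"})):
    """Return (time, action) pairs to execute alongside speech playback."""
    actions = []
    for w in words:
        if w.word.lower().strip(".,?") in deictic:
            # Point at the projected picture when a deictic word is spoken.
            actions.append((w.start, "point_at_screen"))
    if words:
        # Gaze to the addressees at the transition relevance place,
        # i.e. at the end of the question.
        actions.append((words[-1].start, "gaze_at_participants"))
    return sorted(actions)


if __name__ == "__main__":
    question = [TimedWord("What", 0.0), TimedWord("is", 0.3),
                TimedWord("this", 0.5), TimedWord("picture?", 0.8)]
    print(schedule_nonverbal(question))
    # [(0.5, 'point_at_screen'), (0.8, 'gaze_at_participants')]
```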


