How many words can my robot learn?

2007 ◽  
Vol 8 (1) ◽  
pp. 53-81 ◽  
Author(s):  
Luís Seabra Lopes ◽  
Aneesh Chauhan

This paper addresses word learning for human–robot interaction. The focus is on making a robotic agent aware of its surroundings by having it learn the names of the objects it can find. The human user, acting as instructor, helps the robotic agent ground the words used to refer to those objects. A lifelong learning system based on one-class learning (OCLL) was developed. The system is incremental and evolves with the presentation of each new word, which the robot treats as a class, relying on instructor feedback. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot’s vocabulary will be limited by its discriminatory capacity, which in turn depends on its sensors and perceptual capabilities. The results indicate that the robot’s representations can incrementally evolve by correcting class descriptions based on instructor feedback to classification results. In successive experiments, the robot was able to learn between 6 and 12 names of real-world office objects. Although these results are comparable to those obtained by other authors, there is a need to scale up. The limitations of the method are discussed and potential directions for improvement are pointed out.
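A minimal sketch of such an open-ended learning loop, assuming a toy nearest-centroid one-class model per word; the class and method names (OneClassWordModel, OpenEndedLearner) are hypothetical and do not reproduce the paper's OCLL system:

```python
# Sketch: per-word one-class models that grow and get corrected via
# instructor feedback. Illustrative only; not the OCLL implementation.
import numpy as np


class OneClassWordModel:
    """Toy one-class model: stored examples summarised by their centroid."""

    def __init__(self):
        self.examples = []

    def add_example(self, features):
        self.examples.append(np.asarray(features, dtype=float))

    def score(self, features):
        """Smaller is better: distance of the instance to the class centroid."""
        centroid = np.mean(self.examples, axis=0)
        return float(np.linalg.norm(np.asarray(features, dtype=float) - centroid))


class OpenEndedLearner:
    """Vocabulary grows whenever the instructor introduces a new word."""

    def __init__(self):
        self.models = {}  # word -> OneClassWordModel

    def classify(self, features):
        if not self.models:
            return None
        return min(self.models, key=lambda w: self.models[w].score(features))

    def feedback(self, features, true_word):
        """Instructor feedback: create or correct the class description."""
        model = self.models.setdefault(true_word, OneClassWordModel())
        model.add_example(features)


# Usage: the robot predicts, the instructor corrects, the vocabulary evolves.
learner = OpenEndedLearner()
learner.feedback([0.9, 0.1], "mug")      # instructor teaches "mug"
learner.feedback([0.1, 0.8], "stapler")  # instructor teaches "stapler"
print(learner.classify([0.85, 0.15]))    # likely "mug"
```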

Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 68 ◽  
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

In gaze-based Human-Robot Interaction (HRI), it is important to determine human visual intention when interacting with robots. One typical HRI scenario is that a human selects an object by gaze and a robotic manipulator then picks up the object. In this work, we propose an approach, GazeEMD, that can be used to detect whether a human is looking at an object in HRI applications. We use the Earth Mover’s Distance (EMD) to measure the similarity between hypothetical gazes at objects and the actual gazes. The similarity score is then used to determine whether the human’s visual intention is on the object. We compare our approach with a fixation-based method and with HitScan with a run length in the scenario of selecting everyday objects by gaze. Our experimental results indicate that the GazeEMD approach has higher accuracy and is more robust to noise than the other approaches. Hence, users can reduce their cognitive load by using our approach in real-world HRI scenarios.
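To illustrate the general idea of comparing actual gaze samples with a hypothetical gaze distribution over an object using the Earth Mover's Distance, here is a simplified per-axis 1D sketch built on scipy.stats.wasserstein_distance; the threshold, spread, and function names are assumptions, and this is not the paper's GazeEMD implementation:

```python
# Illustrative sketch: score whether gaze falls on an object by comparing
# actual gaze samples to a hypothetical "gaze at the object centre"
# distribution with a per-axis 1D EMD. Parameters are assumed values.
import numpy as np
from scipy.stats import wasserstein_distance


def gaze_emd(actual_gaze, hypothetical_gaze):
    """Sum of per-axis 1D EMDs between two sets of (x, y) gaze samples."""
    actual = np.asarray(actual_gaze, dtype=float)
    hypo = np.asarray(hypothetical_gaze, dtype=float)
    return (wasserstein_distance(actual[:, 0], hypo[:, 0])
            + wasserstein_distance(actual[:, 1], hypo[:, 1]))


def looking_at_object(actual_gaze, object_centre, spread=10.0, threshold=25.0,
                      n_samples=50, rng=np.random.default_rng(0)):
    """Decide object selection by thresholding the similarity score."""
    hypothetical = rng.normal(loc=object_centre, scale=spread, size=(n_samples, 2))
    return gaze_emd(actual_gaze, hypothetical) < threshold


# Example: noisy gaze samples clustered near an object at pixel (320, 240).
gaze = np.random.default_rng(1).normal(loc=(320, 240), scale=8.0, size=(60, 2))
print(looking_at_object(gaze, object_centre=(320, 240)))  # True
```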


Author(s):  
Matthias Scheutz ◽  
Paul Schermerhorn

Effective decision-making under real-world conditions can be very difficult, as purely rational methods of decision-making are often not feasible or applicable. Psychologists have long hypothesized that humans cope with time and resource limitations by employing affective evaluations rather than rational ones. In this chapter, we present the distributed integrated affect cognition and reflection architecture (DIARC) for social robots intended for natural human-robot interaction and demonstrate the utility of its human-inspired affect mechanisms for the selection of tasks and goals. Specifically, we show that DIARC incorporates affect mechanisms throughout the architecture, based on “evaluation signals” generated in each architectural component to obtain quick and efficient estimates of that component’s state, and we illustrate the operation and utility of these mechanisms with examples from human-robot interaction experiments.
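A hedged sketch of how per-component evaluation signals could feed an affect-modulated goal selection step; the utility formula, data structures, and names below are illustrative assumptions, not the DIARC implementation:

```python
# Sketch: components report scalar "evaluation signals"; quick positive and
# negative affect estimates then bias a simple benefit/cost goal ranking.
from dataclasses import dataclass


@dataclass
class Goal:
    name: str
    expected_benefit: float  # benefit if the goal succeeds
    expected_cost: float     # cost of pursuing the goal


def affective_utility(goal, positive_affect, negative_affect):
    """Scale benefit estimates by cheap affective evaluations instead of a
    full rational analysis (assumed formula for illustration)."""
    optimism = 1.0 + positive_affect - negative_affect
    return goal.expected_benefit * optimism - goal.expected_cost


def select_goal(goals, evaluation_signals):
    """evaluation_signals: per-component scores in [-1, 1] summarising how
    well each architectural component is currently doing."""
    n = max(len(evaluation_signals), 1)
    positive = sum(s for s in evaluation_signals if s > 0) / n
    negative = -sum(s for s in evaluation_signals if s < 0) / n
    return max(goals, key=lambda g: affective_utility(g, positive, negative))


goals = [Goal("greet visitor", 5.0, 1.0), Goal("recharge battery", 3.0, 0.5)]
print(select_goal(goals, evaluation_signals=[0.4, -0.1, 0.2]).name)
```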


2019 ◽  
Vol 12 (3) ◽  
pp. 639-657 ◽  
Author(s):  
Antonio Andriella ◽  
Carme Torras ◽  
Guillem Alenyà

2013 ◽  
Vol 14 (2) ◽  
pp. 268-296 ◽  
Author(s):  
Karola Pitsch ◽  
Anna-Lisa Vollmer ◽  
Manuel Mühlig

The paper investigates the effects of a humanoid robot’s online feedback during a tutoring situation in which a human demonstrates how to make a frog jump across a table. Motivated by micro-analytic studies of adult-child interaction, we investigated whether tutors react to a robot’s gaze strategies while they are presenting an action and, if so, how they adapt to them. The analysis reveals that tutors adjust typical “motionese” parameters (pauses, speed, and height of motion). We argue that a robot, when using adequate online feedback strategies, has at its disposal an important resource with which it could proactively shape the tutor’s presentation and help generate the input from which it would benefit most. These results advance our understanding of robotic “Social Learning” in that they suggest a paradigm shift towards considering human and robot as one interactional learning system.
Keywords: human-robot interaction; feedback; adaptation; multimodality; gaze; conversation analysis; social learning; pro-active robot conduct


Author(s):  
Tracy Sanders ◽  
Alexandra Kaplan ◽  
Ryan Koch ◽  
Michael Schwartz ◽  
P. A. Hancock

Objective: To understand the influence of trust on use choice in human-robot interaction via experimental investigation. Background: The general assumption that trusting a robot leads to using that robot has been previously identified, often by asking participants to choose between manually completing a task or using an automated aid. Our work further evaluates the relationship between trust and use choice and examines factors impacting choice. Method: An experiment was conducted wherein participants rated a robot on a trust scale, then made decisions about whether to use that robotic agent or a human agent to complete a task. Participants provided explicit reasoning for their choices. Results: While we found statistical support for the “trust leads to use” relationship, qualitative results indicate other factors are important as well. Conclusion: Results indicated that while trust leads to use, use is also heavily influenced by the specific task at hand. Users more often chose a robot for a dangerous task where loss of life is likely, citing safety as their primary concern. Conversely, users chose humans for the mundane warehouse task, mainly citing financial reasons, specifically fear of job and income loss for the human worker. Application: Understanding the factors driving use choice is key to appropriate interaction in the field of human-robot teaming.


2008 ◽  
Vol 5 (4) ◽  
pp. 213-223 ◽  
Author(s):  
Shuhei Ikemoto ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

In this paper, we investigate physical human–robot interaction (PHRI) as an important extension of traditional HRI research. The aim of this research is to develop a motor learning system that uses physical help from a human helper. We first propose a new control system that takes advantage of inherent joint flexibility. This control system is applied to a new humanoid robot called CB2. In order to clarify the difference between successful and unsuccessful interaction, we conduct an experiment in which a human subject has to help the CB2 robot in its rising-up motion. We then develop a new measure that captures the difference between smooth and non-smooth physical interactions. An analysis of the experiment’s data, based on the introduced measure, shows significant differences between experts and beginners in human–robot interaction.
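As a general illustration of quantifying how smooth a sampled interaction trajectory is, the sketch below computes a standard dimensionless jerk-based score; this is a stand-in measure under assumed conventions, not the specific measure introduced in the paper:

```python
# Sketch: normalised mean squared jerk as a generic smoothness score.
# Lower values indicate smoother motion. Illustrative assumptions only.
import numpy as np


def smoothness_score(positions, dt):
    """positions: array of shape (T, D) of sampled joint or end-effector
    positions; dt: sampling period in seconds."""
    positions = np.asarray(positions, dtype=float)
    jerk = np.diff(positions, n=3, axis=0) / dt ** 3      # third finite difference
    duration = dt * (len(positions) - 1)
    amplitude = np.ptp(positions, axis=0).max() or 1.0     # avoid division by zero
    # Dimensionless normalisation so trajectories of different scale compare.
    return float(np.mean(np.sum(jerk ** 2, axis=1)) * duration ** 5 / amplitude ** 2)


# Example: a smooth sinusoidal rise versus a jittery one.
t = np.linspace(0.0, 2.0, 200)
smooth = np.column_stack([np.sin(t), np.cos(t)])
jittery = smooth + 0.01 * np.random.default_rng(0).standard_normal(smooth.shape)
print(smoothness_score(smooth, dt=t[1] - t[0])
      < smoothness_score(jittery, dt=t[1] - t[0]))  # True
```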

