Robot Anticipation Learning System for Ball Catching

Robotics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 113
Author(s):  
Diogo Carneiro ◽  
Filipe Silva ◽  
Petia Georgieva

Catching flying objects is a challenging task in human–robot interaction. Traditional techniques predict the intersection position and time using the information obtained during the free-flying ball motion. A common problem in these systems is the short ball flight time and the uncertainty in estimating the ball’s trajectory. In this paper, we present the Robot Anticipation Learning System (RALS), which additionally accounts for information obtained by observing the thrower’s hand motion before the ball is released. RALS gains extra time for the robot to start moving towards the target before the opponent finishes throwing. To the best of our knowledge, this is the first robot control system for ball catching with anticipation skills. Our results show that fusing information from both the throwing and flying motions improves the ball-catching rate by up to 20% compared to the baseline approach, in which predictions rely only on information acquired during the flight phase.
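The baseline approach mentioned above, predicting the intersection point from the free-flight phase alone, can be illustrated with a minimal least-squares ballistic fit. This is a generic sketch of that idea, not the paper's implementation; the catch-plane geometry and function names are assumptions.

```python
import numpy as np

def predict_intercept(t, pos, catch_x, g=9.81):
    """Least-squares ballistic fit to noisy flight samples.

    t       : (N,) sample times
    pos     : (N, 3) ball positions (x, y, z), z vertical
    catch_x : x-coordinate of the robot's catch plane
    Returns (t_hit, y_hit, z_hit): predicted crossing time and point.
    """
    t = np.asarray(t, float)
    pos = np.asarray(pos, float)
    A = np.vstack([np.ones_like(t), t]).T
    # x and y: constant-velocity fits; z: remove known gravity term, then linear fit
    (x0, vx), *_ = np.linalg.lstsq(A, pos[:, 0], rcond=None)
    (y0, vy), *_ = np.linalg.lstsq(A, pos[:, 1], rcond=None)
    (z0, vz), *_ = np.linalg.lstsq(A, pos[:, 2] + 0.5 * g * t**2, rcond=None)
    t_hit = (catch_x - x0) / vx                    # time the ball reaches the plane
    y_hit = y0 + vy * t_hit
    z_hit = z0 + vz * t_hit - 0.5 * g * t_hit**2
    return t_hit, y_hit, z_hit
```

The short flight time is exactly the limitation this baseline exposes: the fit only becomes reliable after several samples, which is the delay RALS tries to recover by reading the throwing motion.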

2007 ◽  
Vol 8 (1) ◽  
pp. 53-81 ◽  
Author(s):  
Luís Seabra Lopes ◽  
Aneesh Chauhan

This paper addresses word learning for human–robot interaction. The focus is on making a robotic agent aware of its surroundings by having it learn the names of the objects it can find. The human user, acting as instructor, can help the robotic agent ground the words used to refer to those objects. A lifelong learning system based on one-class learning (OCLL) was developed. Each new word acts as a class to the robot; the system is incremental and evolves as new words are presented, relying on instructor feedback. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot’s vocabulary will be limited by its discriminatory capacity, which in turn depends on its sensors and perceptual capabilities. The results indicate that the robot’s representations are capable of incrementally evolving by correcting class descriptions, based on instructor feedback to classification results. In successive experiments, it was possible for the robot to learn between 6 and 12 names of real-world office objects. Although these results are comparable to those obtained by other authors, there is a need to scale up. The limitations of the method are discussed and potential directions for improvement are pointed out.
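The open-ended word-learning loop described above can be sketched as a toy stand-in: one per-word model that accepts feature vectors within a radius of a running mean, with "unknown" classifications prompting instructor feedback. The class name, thresholding scheme, and update rule are illustrative assumptions, not the paper's OCLL algorithm.

```python
class WordLearner:
    """Toy open-ended word learner: one centroid-plus-radius model per word.

    Each known word keeps the running mean of its feature vectors and
    accepts inputs within a fixed radius (a crude one-class model).
    """

    def __init__(self, radius=1.0):
        self.radius = radius
        self.models = {}          # word -> (mean vector, sample count)

    def classify(self, x):
        """Return the closest accepting word, or None for 'unknown'."""
        best, best_d = None, self.radius
        for word, (mu, _) in self.models.items():
            d = sum((a - b) ** 2 for a, b in zip(x, mu)) ** 0.5
            if d <= best_d:
                best, best_d = word, d
        return best               # None prompts the instructor to teach

    def teach(self, word, x):
        """Instructor feedback: create or incrementally update a word's model."""
        if word not in self.models:
            self.models[word] = (list(x), 1)
            return
        mu, n = self.models[word]
        mu = [(m * n + xi) / (n + 1) for m, xi in zip(mu, x)]
        self.models[word] = (mu, n + 1)
```

The vocabulary here is unbounded in principle but limited in practice by how well the features separate objects, mirroring the paper's point that discriminatory capacity caps vocabulary size.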


2013 ◽  
Vol 14 (2) ◽  
pp. 268-296 ◽  
Author(s):  
Karola Pitsch ◽  
Anna-Lisa Vollmer ◽  
Manuel Mühlig

The paper investigates the effects of a humanoid robot’s online feedback during a tutoring situation in which a human demonstrates how to make a frog jump across a table. Motivated by micro-analytic studies of adult–child interaction, we investigated whether tutors react to a robot’s gaze strategies while they are presenting an action and, if so, how they adapt to them. Analysis reveals that tutors adjust typical “motionese” parameters (pauses, speed, and height of motion). We argue that a robot, when using adequate online feedback strategies, has at its disposal an important resource with which it could proactively shape the tutor’s presentation and help generate the input from which it would benefit most. These results advance our understanding of robotic “social learning” in that they suggest a paradigm shift towards considering human and robot as one interactional learning system.
Keywords: human-robot-interaction; feedback; adaptation; multimodality; gaze; conversation analysis; social learning; pro-active robot conduct


Author(s):  
Hari Krishnan R. ◽  
Vallikannu A. L.

Hand motion tracking and gesture identification are fundamental technologies for human–computer interaction, and the same technologies have been adapted for human–robot interaction. This paper discusses a natural methodology for human–robot interaction. In the proposed system, accelerometers on the fingers track specific gestures. These gestures are identified by the controller, which in turn drives the actuators, resulting in humanoid walking. The humanoid under consideration has 8 degrees of freedom.
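The gesture-to-walking pipeline described above can be illustrated by thresholding the gravity component reported by a finger-mounted accelerometer. The axis conventions, command names, and threshold are assumptions for the sketch, not the paper's controller logic.

```python
# Map normalized finger-accelerometer tilt readings (in units of g) to
# humanoid walking commands by simple thresholding. A tilted finger shifts
# gravity onto one axis, which the controller reads as a gesture.

def identify_gesture(ax, ay, threshold=0.5):
    """ax, ay: accelerations along the two horizontal sensor axes."""
    if ax > threshold:
        return "WALK_FORWARD"
    if ax < -threshold:
        return "WALK_BACKWARD"
    if ay > threshold:
        return "TURN_LEFT"
    if ay < -threshold:
        return "TURN_RIGHT"
    return "STAND"                # no pronounced tilt: hold position
```

In a real system the raw readings would be low-pass filtered first, since hand tremor and walking vibration would otherwise trigger spurious commands.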


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6529
Author(s):  
Masaya Iwasaki ◽  
Mizuki Ikeda ◽  
Tatsuyuki Kawamura ◽  
Hideyuki Nakanishi

Robotic salespeople are often ignored by people due to their weak social presence, and thus have difficulty facilitating sales autonomously. Robots that are remotely controlled by humans, however, require experienced and trained operators. In this paper, we propose crowdsourcing to allow general users on the internet to operate a robot remotely and facilitate customers’ purchasing activities while flexibly responding to various situations through a user interface. To implement this system, we examined how our remote interface can improve a robot’s social presence while being controlled by a human operator, including first-time users. We investigated the typical flow of a customer–robot interaction that was effective for sales promotion and modeled it as a state transition with automatic functions driven by the robot’s sensor information. Furthermore, we created a user interface based on the model and examined whether it was effective in a real environment. Finally, we conducted experiments to examine whether the user interface could be operated by an amateur user and enhance the robot’s social presence. The results revealed that our model was able to improve the robot’s social presence and facilitate customers’ purchasing activity even when the operator was a first-time user.
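The idea of modeling the sales interaction as a state transition driven by sensor events can be sketched as a small transition table. The states and events below are illustrative placeholders, not the paper's actual model; unknown events leave the state unchanged, which is where a human operator would intervene through the interface.

```python
# Hypothetical customer-robot sales interaction as a finite state machine.
# Keys are (current state, sensor event); values are the next state.
TRANSITIONS = {
    ("idle",            "customer_detected"): "greeting",
    ("greeting",        "customer_stops"):    "product_pitch",
    ("greeting",        "customer_leaves"):   "idle",
    ("product_pitch",   "customer_asks"):     "answer_question",
    ("answer_question", "question_done"):     "product_pitch",
    ("product_pitch",   "customer_buys"):     "thanking",
    ("product_pitch",   "customer_leaves"):   "idle",
}

def step(state, event):
    """Advance automatically on a recognized sensor event; otherwise hold,
    deferring to the human operator's judgment."""
    return TRANSITIONS.get((state, event), state)
```

Automating the routine transitions this way is what lets a first-time operator focus on the open-ended parts of the conversation instead of the whole interaction flow.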


2011 ◽  
Vol 23 (4) ◽  
pp. 557-566 ◽  
Author(s):  
Vincent Duchaine ◽  
Clément Gosselin
While the majority of industrial manipulators currently in use only need to perform autonomous motion, future generations of cooperative robots will also have to execute cooperative motion and intelligently react to contacts. These extended behaviours are essential to enable safe and effective physical Human-Robot Interaction (pHRI). However, they will inevitably result in an increase of the controller complexity. This paper presents a single variable admittance control scheme that handles the three modes of operation, thereby minimizing the complexity of the controller. First, the adaptive admittance controller previously proposed by the authors for cooperative motion is recalled. Then, a novel implementation of variable admittance control for the generation of smooth autonomous motion, including reaction to collisions anywhere on the robot, is presented. Finally, it is shown how the control equations for these three modes of operation can be unified into a single control scheme.
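The core of variable admittance control can be illustrated in one dimension: the commanded velocity obeys a virtual dynamics M·dv/dt + c·v = f, where the damping c varies with the interaction. The specific adaptation rule below (low damping when force and velocity agree, high when they oppose) is an illustrative sketch, not the authors' published control law.

```python
# One-dimensional variable admittance step: the robot renders a virtual
# mass-damper, and the damping coefficient switches with the interaction
# context (cooperation vs. braking / contact reaction).

def admittance_step(v, f, dt, m=2.0, c_low=5.0, c_high=40.0):
    """v: current commanded velocity, f: measured external force.
    Returns the commanded velocity after one integration step of size dt."""
    c = c_low if f * v >= 0 else c_high   # assist when pushing along, resist when opposing
    dv = (f - c * v) / m                  # virtual dynamics: m*dv/dt = f - c*v
    return v + dv * dt
```

A single law of this shape is what allows one controller to cover cooperative motion, autonomous motion (with f as a virtual tracking force), and contact reaction, which is the unification the abstract describes.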


2008 ◽  
Vol 5 (4) ◽  
pp. 213-223 ◽  
Author(s):  
Shuhei Ikemoto ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

In this paper, we investigate physical human–robot interaction (pHRI) as an important extension of traditional HRI research. The aim of this research is to develop a motor learning system that uses physical help from a human helper. We first propose a new control system that takes advantage of inherent joint flexibility. This control system is applied on a new humanoid robot called CB2. In order to clarify the difference between successful and unsuccessful interaction, we conduct an experiment where a human subject has to help the CB2 robot in its rising-up motion. We then develop a new measure that demonstrates the difference between smooth and non-smooth physical interactions. An analysis of the experiment’s data, based on the introduced measure, shows significant differences between experts and beginners in human–robot interaction.
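The paper introduces its own interaction-smoothness measure; as a generic point of reference, a standard smoothness proxy in motor control is the mean squared jerk of a sampled velocity profile. The sketch below shows that baseline only; it is not the measure developed in the paper.

```python
# Mean squared jerk of a uniformly sampled 1-D velocity profile.
# Jerk is approximated by the second finite difference of velocity / dt^2;
# lower values indicate smoother motion.

def mean_squared_jerk(velocities, dt):
    jerk = [
        (velocities[i + 2] - 2 * velocities[i + 1] + velocities[i]) / dt**2
        for i in range(len(velocities) - 2)
    ]
    return sum(j * j for j in jerk) / len(jerk)
```

A constant-acceleration profile scores zero under this metric, while an oscillating one scores high, which matches the intuition of an expert's smooth assist versus a beginner's jerky one.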


2014 ◽  
Vol 11 (04) ◽  
pp. 1442005 ◽  
Author(s):  
Youngho Lee ◽  
Young Jae Ryoo ◽  
Jongmyung Choi

With the development of computing technology, robots are now popular in our daily life. Human–robot interaction is not restricted to direct communication between the two; it can include various interactions resembling human-to-human communication. In this paper, we present a framework for enhancing the interaction among humans, robots, and environments. The proposed framework is composed of a robot part, a user part, and the DigiLog space. To evaluate the proposed framework, we applied it to a real-time remote robot-control platform in the smart DigiLog space. We implement real-time control and monitoring of a robot by using one smartphone as the robot’s brain and another smartphone as the remote controller.


Author(s):  
Prashant K. Jamwal ◽  
Sheng Quan Xie ◽  
Sean Quigley

Variants of fuzzy logic controllers (FLCs) have been widely used to control systems characterized by uncertain and ambiguous parameters. Control objectives for such systems become more challenging when they are subjected to uncertain environments. Human–robot interaction is one such phenomenon, wherein robot control difficulties are further augmented by human intervention. State-of-the-art FLC research has been limited in establishing a trade-off between accuracy and interpretability, since achieving both performance measures simultaneously is difficult. In the present research, an adaptive FLC has been designed to achieve better accuracy and higher interpretability. Supported by another instance of an FLC acting as a disturbance observer, the proposed controller has an adaptive mechanism specifically designed to alter its parameters. The adaptive FLC has been implemented to control the actuation of a pneumatic muscle actuator (PMA). Experimental results show excellent trajectory-tracking performance of the PMA in the presence of a varying environment.
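To make the FLC idea concrete, here is a minimal Mamdani-style controller for a single error input with triangular memberships and singleton outputs, defuzzified by a weighted average. The membership shapes, rule table, and gains are illustrative assumptions, not the adaptive PMA controller described above.

```python
# Minimal single-input fuzzy logic controller sketch.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flc(error):
    """Three rules: negative error -> push positive, zero -> hold,
    positive error -> push negative (singleton rule outputs)."""
    rules = [
        (tri(error, -2.0, -1.0, 0.0),  1.0),   # error NEGATIVE -> output +1
        (tri(error, -1.0,  0.0, 1.0),  0.0),   # error ZERO     -> output  0
        (tri(error,  0.0,  1.0, 2.0), -1.0),   # error POSITIVE -> output -1
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0           # weighted-average defuzzification
```

The small, readable rule table is what gives an FLC its interpretability; the accuracy side of the trade-off is what the adaptive mechanism in the paper targets, by tuning parameters such as these membership supports online.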

