Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration

Author(s):  
Edward Johns
2021 ◽  
Vol 18 (4(Suppl.)) ◽  
pp. 1350
Author(s):  
Tho Nguyen Duc ◽  
Chanh Minh Tran ◽  
Phan Xuan Tan ◽  
Eiji Kamioka

Imitation learning is an effective method for training an autonomous agent to accomplish a task by imitating expert behavior from demonstrations. However, traditional imitation learning methods require a large number of expert demonstrations to learn a complex behavior. This disadvantage has limited the potential of imitation learning in complex tasks for which sufficient expert demonstrations are unavailable. To address this problem, we propose a Generative Adversarial Network-based model designed to learn optimal policies using only a single demonstration. The proposed model is evaluated on two simulated tasks in comparison with other methods. The results show that our model completes the considered tasks despite the limited number of expert demonstrations, which clearly indicates its potential.
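The adversarial recipe described in this abstract can be sketched in a few lines of NumPy: a logistic discriminator learns to separate state-action pairs from a single expert demonstration from pairs produced by an untrained policy, and its log-probability can then serve as the imitation reward. The 1-D task, the feature map, and all names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D control task: the single expert demonstration always
# chooses the action a = -s, driving the state s toward zero.
s_exp = rng.uniform(-1, 1, size=50)
a_exp = -s_exp

def features(s, a):
    """Feature map for the discriminator: (s + a)^2 is zero exactly on the
    expert manifold a = -s, plus a constant bias term."""
    return np.stack([(s + a) ** 2, np.ones_like(s)], axis=1)

def discriminator(x, w):
    """Logistic discriminator D(s, a): probability that the pair is expert."""
    return 1.0 / (1.0 + np.exp(-x @ w))

w = np.zeros(2)
lr = 0.5
x_exp = features(s_exp, a_exp)
for _ in range(200):
    # An untrained "generator" policy: actions sampled uniformly at random.
    s_pol = rng.uniform(-1, 1, size=50)
    a_pol = rng.uniform(-1, 1, size=50)
    x_pol = features(s_pol, a_pol)
    # Gradient ascent on the GAN objective log D(expert) + log(1 - D(policy)).
    d_exp = discriminator(x_exp, w)
    d_pol = discriminator(x_pol, w)
    w += lr * (x_exp.T @ (1 - d_exp) - x_pol.T @ d_pol) / 50

# log D(s, a) now acts as a learned reward: expert-like pairs score higher,
# so a policy trained against it is pushed toward the demonstrated behavior.
print(discriminator(x_exp, w).mean(), discriminator(x_pol, w).mean())
```

In a full implementation the discriminator and policy would both be neural networks updated in alternation; here the policy is frozen to keep the sketch short.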


Author(s):  
Mohamed Khalil Jabri

Imitation learning allows learning complex behaviors from demonstrations. Early approaches, belonging to either Behavior Cloning or Inverse Reinforcement Learning, were however of limited scalability to complex environments. A more promising approach, termed Generative Adversarial Imitation Learning, tackles the imitation learning problem by drawing a connection with Generative Adversarial Networks. In this work, we advocate the use of this class of methods and investigate possible extensions by endowing them with global temporal consistency, in particular through a contrastive-learning-based approach.
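A contrastive objective for temporal consistency, of the kind this abstract alludes to, is typically an InfoNCE loss: frames that are close in time form positive pairs, while distant frames act as negatives. The toy trajectory and all names below are assumptions for illustration, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss: pull the anchor toward its temporally nearby positive
    while pushing it away from temporally distant negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                      # cross-entropy, target index 0

# Toy trajectory "embedding": a random walk, so adjacent timesteps have
# similar feature vectors while distant timesteps drift apart.
traj = np.cumsum(rng.normal(size=(100, 16)), axis=0)
anchor, positive = traj[50], traj[51]             # adjacent frames: positive pair
negatives = [traj[5], traj[95]]                   # distant frames: negatives

loss_good = info_nce(anchor, positive, negatives)       # correct positive
loss_bad = info_nce(anchor, traj[95], [positive, traj[5]])  # distant "positive"
print(loss_good, loss_bad)
```

Minimizing this loss over a learned encoder encourages embeddings in which temporal neighbors stay close, which is one way to impose the global temporal consistency discussed above.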


2005 ◽  
Author(s):  
Frederick L. Crabbe ◽  
Rebecca Hwa

2017 ◽  
Vol 22 (5) ◽  
pp. 1433-1444 ◽  
Author(s):  
Huansheng Song ◽  
Xuan Wang ◽  
Cui Hua ◽  
Weixing Wang ◽  
Qi Guan ◽  
...  

2021 ◽  
Author(s):  
Markku Suomalainen ◽  
Fares J. Abu-Dakka ◽  
Ville Kyrki

We present a novel method for learning, from demonstration, 6-D tasks that can be modeled as a sequence of linear motions and compliances. The focus of this paper is the learning of a single linear primitive, many of which can be sequenced to perform more complex tasks. The presented method learns from demonstrations how to take advantage of mechanical gradients in in-contact tasks, such as assembly, for both translations and rotations, without any prior information. The method assumes there exists a desired linear direction in 6-D which, if followed by the manipulator, leads the robot’s end-effector to the goal area shown in the demonstration, either in free space or by leveraging contact through compliance. First, demonstrations are gathered in which the teacher explicitly shows the robot how the mechanical gradients can be used as guidance towards the goal. From the demonstrations, a set of directions is computed that would result in the observed motion at each timestep during a demonstration of a single primitive. By observing which direction is included in all of these sets, we find a single desired direction that can reproduce the demonstrated motion. Finding the number of compliant axes and their directions in both rotation and translation is based on the assumption that, in the presence of a desired direction of motion, all other observed motion is caused by the contact force of the environment, signalling the need for compliance. We evaluate the method on a KUKA LWR4+ robot with test setups imitating typical tasks where a human would use compliance to cope with positional uncertainty. Results show that the method can successfully learn and reproduce compliant motions by taking advantage of the geometry of the task, therefore reducing the need for localization accuracy.
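The direction-intersection step in this abstract can be illustrated with a toy 2-D sketch (the paper works in full 6-D with rotations; the numbers and threshold here are assumptions): each observed velocity constrains the desired direction to the half-space of directions within 90 degrees of it, and intersecting these constraints over the demonstration leaves a cone of admissible directions.

```python
import numpy as np

# Hypothetical 2-D demonstration: the true desired direction is +x, but a
# contact surface deflects part of the motion, so the observed velocities
# mix the desired +x motion with contact-induced sideways components.
vels = np.array([[1.0, 0.0], [0.8, 0.6], [0.9, -0.4], [1.0, 0.1]])
vels /= np.linalg.norm(vels, axis=1, keepdims=True)

# Candidate desired directions sampled densely on the unit circle.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
cands = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# A candidate could explain a timestep if pushing along it produces motion
# with a positive component along the observed velocity (< 90 deg apart);
# keep only candidates consistent with every timestep.
ok = (cands @ vels.T > 0).all(axis=1)

# Take the (normalized) mean of the surviving cone as the desired direction.
desired = cands[ok].mean(axis=0)
desired /= np.linalg.norm(desired)
print(desired)  # close to the true direction [1, 0]
```

Residual motion not explained by the recovered direction would then, under the paper's assumption, be attributed to environmental contact and mark the axes that need compliance.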

