An Adaptive Imitation Learning Framework for Robotic Complex Contact-Rich Insertion Tasks

2022 ◽  
Vol 8 ◽  
Author(s):  
Yan Wang ◽  
Cristian C. Beltran-Hernandez ◽  
Weiwei Wan ◽  
Kensuke Harada

Complex contact-rich insertion is a ubiquitous robotic manipulation skill and usually involves nonlinear, low-clearance insertion trajectories as well as varying force requirements. A hybrid trajectory and force learning framework can be used to generate high-quality trajectories by imitation learning and to find suitable force control policies efficiently by reinforcement learning. However, with this approach, many human demonstrations are needed to learn several tasks, even when those tasks require topologically similar trajectories. Therefore, to reduce repetitive human teaching effort for new tasks, we present an adaptive imitation framework for robot manipulation. The main contribution of this work is a framework that introduces dynamic movement primitives into a hybrid trajectory and force learning framework to learn a specific class of complex contact-rich insertion tasks from the trajectory profile of a single task instance belonging to that class. Through experimental evaluations, we validate that the proposed framework is more sample efficient, safer, and better at generalizing when learning complex contact-rich insertion tasks, both in simulation and on real hardware.
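The dynamic movement primitives mentioned in the abstract follow a standard formulation; a minimal one-dimensional discrete DMP can be sketched as below. All parameter values and names here are illustrative, not the paper's actual implementation.

```python
import numpy as np

def dmp_rollout(y0, g, weights, centers, widths,
                tau=1.0, alpha=25.0, beta=6.25, alpha_x=4.0,
                dt=0.01, steps=200):
    """Integrate a 1-D discrete DMP: a critically damped spring-damper
    pulled toward goal g, shaped by a learned RBF forcing term f(x)
    driven by the canonical phase variable x."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        psi = np.exp(-widths * (x - centers) ** 2)        # RBF activations
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (g - y0)
        dz = (alpha * (beta * (g - y) - z) + f) / tau     # transformation system
        dy = z / tau
        dx = -alpha_x * x / tau                           # canonical system
        z += dz * dt; y += dy * dt; x += dx * dt
        traj.append(y)
    return np.array(traj)

# With zero weights the forcing term vanishes and the DMP reduces
# to a plain point attractor that converges on the goal g = 1.0.
path = dmp_rollout(0.0, 1.0, np.zeros(10),
                   np.linspace(0, 1, 10), np.full(10, 50.0))
```

Fitting the weights to a single demonstrated trajectory profile, then changing `y0` and `g`, is what lets a DMP generalize one task instance across a class of topologically similar tasks.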

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3409
Author(s):  
Eunjin Jung ◽  
Incheol Kim

This study proposes a novel hybrid imitation learning (HIL) framework in which behavior cloning (BC) and state cloning (SC) methods are combined in a mutually complementary manner to enhance the efficiency of robotic manipulation task learning. The proposed HIL framework efficiently combines BC and SC losses using an adaptive loss mixing method. It uses pretrained dynamics networks to enhance SC efficiency and performs stochastic state recovery to ensure stable learning of policy networks by transforming the learner’s task state into a demo state on the demo task trajectory during SC. The training efficiency and policy flexibility of the proposed HIL framework are demonstrated in a series of experiments on major robotic manipulation tasks (pick-up, pick-and-place, and stacking). In the experiments, the HIL framework showed about a 2.6 times higher performance improvement than pure BC and about a four times faster training time than pure SC imitation learning. It also showed about a 1.6 times higher performance improvement and about a 2.2 times faster training time than a hybrid method combining BC and reinforcement learning (BC + RL).
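The abstract does not give the exact form of the adaptive loss mixing, but the idea of balancing two loss terms so neither dominates can be sketched as follows. This is a hypothetical mixing rule for illustration only; the paper's actual scheme may differ.

```python
def mixed_loss(bc_loss, sc_loss, bc_ema, sc_ema, decay=0.99):
    """Hypothetical adaptive mixing of BC and SC losses: track an
    exponential moving average of each loss magnitude and weight each
    term inversely to its running scale, so the two contributions to
    the total loss stay balanced as training progresses."""
    bc_ema = decay * bc_ema + (1 - decay) * bc_loss
    sc_ema = decay * sc_ema + (1 - decay) * sc_loss
    w_bc = sc_ema / (bc_ema + sc_ema + 1e-10)   # the currently smaller
    w_sc = bc_ema / (bc_ema + sc_ema + 1e-10)   # loss gets the larger weight
    return w_bc * bc_loss + w_sc * sc_loss, bc_ema, sc_ema

# If BC loss (2.0) is four times SC loss (0.5), the weights become
# 0.2 and 0.8, equalizing the two contributions (0.4 each).
total, bc_ema, sc_ema = mixed_loss(2.0, 0.5, bc_ema=2.0, sc_ema=0.5)
```

Inverse-magnitude weighting is one common way to keep a multi-term objective from being driven by whichever loss happens to have the larger raw scale.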


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zhenyu Lu ◽  
Ning Wang

Purpose
Dynamic movement primitives (DMPs) are a general method for robotic skill learning from demonstration, but they are usually applied to a single manipulation task. For cloud-based robotic skill learning, the authors consider trajectories/skills altered by the environment, rebuild the DMP model, and propose a new DMP-based skill learning framework that removes the influence of a changing environment.

Design/methodology/approach
The authors propose methods for two obstacle-avoidance scenarios: point obstacles and non-point obstacles. For point obstacles, an accelerating term is added to the original DMP function; the unknown parameters of this term are estimated through interactive identification and fitting of the forcing function, yielding a pure skill free of the obstacles' influence. With the identified parameters, the skill can be applied to new tasks containing obstacles. For non-point obstacles, a space-matching method is proposed that builds a mapping from the universal obstacle-free space to the space compressed by the obstacles; the original trajectory then deforms along with the space transformation to yield a trajectory suited to the new environment.

Findings
The two proposed methods are verified in two experiments, one of which uses an Omni joystick to record the operator's manipulation motions. Results show that the learned skills allow robots to execute tasks, such as autonomous assembly, in a new environment.

Originality/value
This is a new approach to DMP-based cloud robotic skill learning from multi-scene tasks that generalizes skills as the environment changes.
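The accelerating term for the point-obstacle case is not specified in the abstract; a well-known coupling term of this kind is the steering acceleration of Hoffmann et al. (2009), sketched below in 2-D as an illustration. The paper identifies its parameters interactively, which is not reproduced here, and `gamma`/`beta` values are assumptions.

```python
import numpy as np

def point_obstacle_accel(y, dy, obs, gamma=300.0, beta=6.0 / np.pi):
    """Steering acceleration added to a 2-D DMP near a point obstacle,
    in the style of Hoffmann et al.: rotate the current velocity 90
    degrees and scale it by theta * exp(-beta * theta), where theta is
    the angle between the velocity and the direction to the obstacle."""
    d = obs - y
    if np.linalg.norm(d) < 1e-10 or np.linalg.norm(dy) < 1e-10:
        return np.zeros(2)
    cos_t = d @ dy / (np.linalg.norm(d) * np.linalg.norm(dy))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    R = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation matrix
    return gamma * (R @ dy) * theta * np.exp(-beta * theta)

# A velocity slightly off the obstacle axis gets pushed further away
# from the obstacle (here: obstacle on +x, velocity tilted toward +y,
# resulting acceleration points away, in +y).
a = point_obstacle_accel(np.array([0.0, 0.0]),
                         np.array([1.0, 0.2]),
                         np.array([1.0, 0.0]))
```

Because the term is added to the DMP acceleration rather than the trajectory itself, subtracting it back out recovers the obstacle-free "pure skill" the paper describes.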


2020 ◽  
Vol 53 (5) ◽  
pp. 265-270
Author(s):  
Xian Li ◽  
Chenguang Yang ◽  
Ying Feng

2021 ◽  
Author(s):  
Tiantian Wang ◽  
Liang Yan ◽  
Gang Wang ◽  
Xiaoshan Gao ◽  
Nannan Du ◽  
...  

Author(s):  
Cong Fei ◽  
Bin Wang ◽  
Yuzheng Zhuang ◽  
Zongzhang Zhang ◽  
Jianye Hao ◽  
...  

Generative adversarial imitation learning (GAIL) has shown promising results by taking advantage of generative adversarial nets, especially in the field of robot learning. However, its requirement of isolated single-modal demonstrations limits the scalability of the approach to real-world scenarios such as autonomous vehicles, which demand a proper understanding of human drivers' behavior. In this paper, we propose a novel multi-modal GAIL framework, named Triple-GAIL, that learns skill selection and imitation jointly from both expert demonstrations and continuously generated experiences, introducing an auxiliary selector for data augmentation. We provide theoretical guarantees on convergence to optima for both the generator and the selector. Experiments on real driver trajectories and real-time strategy game datasets demonstrate that Triple-GAIL fits multi-modal behaviors closer to those of the demonstrators and outperforms state-of-the-art methods.
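Triple-GAIL's auxiliary selector is specific to the paper, but the GAIL building block it extends can be summarized in one line: the discriminator's output is converted into a surrogate reward for the policy. A minimal sketch of that standard reward, with illustrative values:

```python
import math

def gail_reward(d_logit):
    """Standard GAIL surrogate reward from a discriminator logit
    D(s, a): r = -log(1 - sigmoid(logit)), which grows as the
    policy's state-action pairs become harder to distinguish from
    expert demonstrations."""
    d = 1.0 / (1.0 + math.exp(-d_logit))     # sigmoid of the logit
    return -math.log(1.0 - d + 1e-10)

# Higher logits (more "expert-like" behavior) yield larger rewards.
r_low, r_high = gail_reward(-2.0), gail_reward(2.0)
```

In a multi-modal setting such as Triple-GAIL's, a skill-conditioned discriminator would supply one such reward per mode, with the selector choosing which mode's policy to imitate; that conditioning is the paper's contribution and is not sketched here.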

