Admittance-Based Controller Design for Physical Human–Robot Interaction in the Constrained Task Space

2020 · Vol 17 (4) · pp. 1937-1949
Author(s): Wei He, Chengqian Xue, Xinbo Yu, Zhijun Li, Chenguang Yang
Robotica · 2019 · Vol 38 (10) · pp. 1807-1823
Author(s): Leon Žlajpah, Tadej Petrič

SUMMARY: In this paper, we propose a novel unified framework for virtual guides. The human–robot interaction is based on a virtual robot, which is controlled by admittance control. The unified framework combines virtual guides, control of the dynamic behavior, and path tracking. Different virtual guides and active constraints can be realized by using dead-zones in the position part of the admittance controller. The proposed algorithm can act in a changing task space and allows selection of the task space and redundant degrees of freedom during task execution. The admittance control algorithm can be implemented either at the velocity or at the acceleration level. The proposed framework has been validated by an experiment on a KUKA LWR robot performing the Buzz-Wire task.
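The dead-zone idea described in the summary can be sketched in one dimension: the spring term of the admittance law is zeroed inside a band around the reference, so the operator moves freely within the virtual guide and feels a restoring force only outside it. This is a minimal illustration under assumed gains, not the authors' implementation; `dead_zone`, the gain values, and the single-DoF setup are all invented for the sketch.

```python
import numpy as np

def dead_zone(e, width):
    """Zero inside the band |e| <= width, shifted-linear outside."""
    return np.sign(e) * np.maximum(np.abs(e) - width, 0.0)

def admittance_step(x, v, f_ext, x_ref, dt,
                    M=2.0, D=20.0, K=100.0, zone=0.05):
    """One Euler step of M*a + D*v + K*dead_zone(x - x_ref) = f_ext.

    Inside the dead-zone the stiffness term vanishes, so the virtual
    robot yields freely to the interaction force f_ext; outside it,
    the spring pulls back toward the guide corridor.
    """
    spring = K * dead_zone(x - x_ref, zone)
    a = (f_ext - D * v - spring) / M
    v = v + a * dt
    x = x + v * dt
    return x, v
```

A constant interaction force then drags the virtual robot through the corridor without resistance until the dead-zone boundary is crossed, at which point the stiffness constrains further deviation.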


2007 · Vol 24 (2) · pp. 123-134
Author(s): Eric Meisner, Volkan Isler, Jeff Trinkle

PAMM · 2018 · Vol 18 (1)
Author(s): Dominik Kaserer, Hubert Gattringer, Andreas Müller

2020 · Vol 34 (2)
Author(s): Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy

Abstract: Learning models of user behaviour is an important problem that is broadly applicable across many application domains requiring human–robot interaction. In this work, we show that it is possible to learn generative models for distinct user behavioural types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space. We use these models to differentiate between user types and to find cases with overlapping solutions. Moreover, we can alter an initially guessed solution to satisfy the preferences that constitute a particular user type by backpropagating through the learned differentiable models. An advantage of structuring generative models in this way is that we can extract causal relationships between symbols that might form part of the user's specification of the task, as manifested in the demonstrations. We further parameterize these specifications through constraint optimization in order to find a safety envelope under which motion planning can be performed. We show that the proposed method is capable of correctly distinguishing between three user types, who differ in their degree of cautiousness of motion, while performing the task of moving objects with a kinesthetically driven robot in a tabletop environment. Our method successfully identifies the correct type, within the specified time, in 99% [97.8–99.8] of cases, which outperforms an IRL baseline. We also show that our proposed method correctly changes a default trajectory to one satisfying a particular user specification, even with unseen objects. The resulting trajectory is shown to be directly implementable on a PR2 humanoid robot completing the same task.
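The step of adjusting an initial guess by backpropagating through a learned differentiable model can be illustrated with a toy gradient descent. Here a hypothetical linear decoder stands in for the learned generative model, and the latent code is pushed until the decoded solution matches a target preference; all names, shapes, and values are illustrative, not the paper's architecture.

```python
import numpy as np

# Hypothetical fixed linear "decoder" standing in for the learned
# differentiable generative model (latent code -> trajectory parameters).
W = np.array([[1.0, 0.0],
              [0.0, 2.0]])

def decode(z):
    """Map a latent code to trajectory parameters."""
    return W @ z

def adjust_latent(z0, target, lr=0.1, steps=200):
    """Gradient descent on ||decode(z) - target||^2, i.e. the analogue of
    backpropagating a preference loss through the generative model to
    pull an initial guess toward a user-type-consistent solution."""
    z = z0.copy()
    for _ in range(steps):
        grad = 2.0 * W.T @ (decode(z) - target)  # analytic gradient
        z -= lr * grad
    return z
```

In the paper's setting the decoder is a learned network and the loss encodes membership in a user-type cluster, but the mechanism is the same: differentiate the objective with respect to the latent input and descend.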

