Cooperative Motion Generation Method for Human-robot Cooperation to deal with Environmental/Task Constraints

2009 ◽  
Vol 27 (2) ◽  
pp. 221-229 ◽  
Author(s):  
Fumi Seto ◽  
Yasuhisa Hirata ◽  
Kazuhiro Kosuge

Robotics ◽ 
2019 ◽  
Vol 8 (2) ◽  
pp. 24 ◽  
Author(s):  
Hang Cui ◽  
Catherine Maguire ◽  
Amy LaViers

This paper presents a method for creating expressive aerial robots through an algorithmic procedure that generates variable motion under given task constraints. The work is informed by close study of the Laban/Bartenieff movement system, and movement observation from this discipline provides the analysis of the method, offering descriptive words and fitting contexts (a choreographic frame) for the motion styles produced. User studies that utilize this qualitative analysis then validate that the method can generate appropriate motion in in-home contexts. The accuracy of an individual descriptive word for the developed motion is up to 77%, and context accuracy is up to 83%. A capacity to discern state from a motion profile is essential for projects working toward in-home robots.
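The abstract does not detail the algorithmic procedure itself. Purely as an illustrative sketch (not the authors' method), the Python snippet below shows one way variable, "styled" motion could be generated while a fixed start/goal task constraint is preserved; the function name, the minimum-jerk base profile, and the style parameters (amplitude, oscillations) are assumptions introduced here for illustration only.

import numpy as np

def styled_trajectory(p_start, p_goal, duration=2.0, dt=0.02,
                      amplitude=0.05, oscillations=2.0):
    """Illustrative sketch: produce a family of 1-D motion profiles that all
    satisfy the same start/goal task constraint but differ in style."""
    t = np.arange(0.0, duration + dt, dt)
    s = t / duration                                  # normalized time in [0, 1]
    # Minimum-jerk base profile between the constrained endpoints.
    base = p_start + (p_goal - p_start) * (10*s**3 - 15*s**4 + 6*s**5)
    # Hypothetical style deviation that vanishes at both endpoints,
    # so the start/goal task constraint is never violated.
    deviation = amplitude * np.sin(np.pi * oscillations * s) * np.sin(np.pi * s)
    return t, base + deviation

# Two differently styled motions that share the same endpoints.
t, calm = styled_trajectory(0.0, 1.0, amplitude=0.02, oscillations=1.0)
_, lively = styled_trajectory(0.0, 1.0, amplitude=0.08, oscillations=3.0)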


Author(s):  
Christoph Batke ◽  
Tarek Tounsi ◽  
Karl-Heinz Wurst ◽  
Alexander Verl ◽  
Hans-Werner Hoffmeister

2021 ◽  
Author(s):  
Riddhiman Laha ◽  
Anjali Rao ◽  
Luis F. C. Figueredo ◽  
Qing Chang ◽  
Sami Haddadin ◽  
...  

Abstract: Despite the increasing number of collaborative robots in human-centered manufacturing, industrial robots are currently still largely preprogrammed, with very few autonomous features. In this context, it is paramount that robot planning and motion generation strategies be able to account for changes in the production line in a timely and easy-to-implement fashion. The same requirements also hold for service robotics in unstructured environments, where an explicit definition of a task and its underlying path and constraints are often hard to characterize. In this regard, this paper presents a real-time point-to-point kinematic task-space planner, based on screw interpolation, that implicitly follows the underlying geometric constraints from a user demonstration. We demonstrate through example scenarios that implicit task constraints in a single user demonstration can be captured by our approach. It is important to highlight that the proposed planner does not learn a trajectory or intend to imitate a human trajectory; rather, it extracts geometric features from a one-time guidance and extends these features as constraints in a generalized path generator. In this sense, the framework allows generalization of initial and final configurations, accommodates path disturbances, and is agnostic to the robot being used. We evaluate our approach on the 7-DOF Baxter robot on a multitude of common tasks and also show the generalization ability of our method with respect to different conditions.
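The abstract names screw interpolation as the planner's core primitive. As a minimal sketch of that building block only (not the authors' full constraint-extracting planner), the Python snippet below interpolates between two task-space poses along the constant screw connecting them, assuming NumPy and SciPy's matrix logarithm/exponential; the function and variable names are illustrative.

import numpy as np
from scipy.linalg import expm, logm

def screw_interpolate(T_start, T_goal, s):
    """Interpolate between two 4x4 homogeneous transforms along the constant
    screw (twist) that carries T_start onto T_goal; s is in [0, 1]."""
    # Relative displacement expressed in the start frame.
    T_rel = np.linalg.inv(T_start) @ T_goal
    # The matrix logarithm recovers the se(3) twist generating that displacement.
    twist = np.real(logm(T_rel))
    # Scale the twist and map back to SE(3) with the matrix exponential.
    return T_start @ expm(s * twist)

# Example: ten poses between a hypothetical demonstrated start/goal pair.
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, :3] = [[0.0, -1.0, 0.0],   # 90-degree rotation about z
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]]
T1[:3, 3] = [0.3, 0.1, 0.2]
waypoints = [screw_interpolate(T0, T1, s) for s in np.linspace(0.0, 1.0, 10)]

Constant-twist interpolation of this kind is equivalent, up to parameterization, to screw linear interpolation (ScLERP) and keeps the end-effector on a geometrically smooth screw path between the two poses.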


Author(s):  
Andrew M Gordon ◽  
Sarah R Lewis ◽  
Ann-Christin Eliasson ◽  
Susan V Duff

Author(s):  
Fumikazu MINAMIYAMA ◽  
Hidetsugu KOGA ◽  
Kentaro KOBAYASHI ◽  
Masaaki KATAYAMA
