Programming-Free Approaches for Human–Robot Collaboration in Assembly Tasks

Author(s):
Sharath Chandra Akkaladevi
Matthias Propst
Michael Hofmann
Leopold Hiesmair
Markus Ikeda
...

Author(s):
Carlos W. Morato
Krishnanand N. Kaipa
Satyandra K. Gupta

Hybrid assembly cells allow humans and robots to collaborate on assembly tasks. We consider a model of the hybrid cell in which a human and a robot asynchronously collaborate to assemble a product. The human retrieves parts from a bin and places them in the robot's workspace, while the robot picks up the placed parts and assembles them into the product. Realizing hybrid cells requires automated plan generation, system state monitoring, and contingency handling. In this paper we describe system state monitoring and present a characterization of the part-matching algorithm. Finally, we report results from human–robot collaboration experiments using a KUKA robot and a 3D-printed mockup of a simplified jet-engine assembly to illustrate our approach.


Procedia CIRP
2018
Vol 78
pp. 255-260
Author(s):
Martijn Cramer
Jeroen Cramer
Karel Kellens
Eric Demeester

Author(s):
Thomas Smith
Panorios Benardos
David Branson

The aim of this research is to develop a framework that enables efficient human–robot collaboration on manufacturing assembly tasks, based on cost functions that quantify the capabilities and performance of each element in a system and enable their efficient evaluation. A cost function format is proposed, along with the initial development of two example cost function variables, completion time and fatigue, obtained as each worker completes assembly tasks. The cost function format and example variables were tested on two example tasks using an ABB YuMi robot and a simulated human worker under various levels of fatigue. The total costs produced clearly identified the best worker for each task, and also indicated when a human worker was more or less fatigued than expected.
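The cost-based task assignment described in this abstract can be sketched in a few lines. The weighting scheme, function names, and numbers below are illustrative assumptions for exposition, not the authors' actual cost function format:

```python
# Minimal sketch (assumed form, not the paper's implementation): a scalar
# task cost combining completion time with a fatigue penalty, used to pick
# the cheaper worker (human or robot) for each assembly task.

def task_cost(completion_time_s, fatigue_level, w_time=1.0, w_fatigue=2.0):
    """Return a scalar cost; fatigue_level in [0, 1] inflates the time term."""
    return w_time * completion_time_s * (1.0 + w_fatigue * fatigue_level)

def assign_task(workers):
    """workers: dict mapping worker name -> (completion_time_s, fatigue_level).
    Returns the name of the worker with the lowest total cost."""
    return min(workers, key=lambda name: task_cost(*workers[name]))

workers = {
    "human": (30.0, 0.6),   # faster, but performance degrades with fatigue
    "robot": (45.0, 0.0),   # slower, but constant performance
}
print(assign_task(workers))  # → robot (66.0 vs 45.0 total cost)
```

Under these assumed weights, a rested human (fatigue 0.0) would win the same task at cost 30.0; the fatigue term is what flips the assignment to the robot.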


2021
Author(s):
Dmitrii Monakhov
Jyrki Latokartano
Minna Lanz
Roel Pieters
Joni-Kristian Kamarainen

2022
Vol 73
pp. 102227
Author(s):
Rong Zhang
Qibing Lv
Jie Li
Jinsong Bao
Tianyuan Liu
...  

Author(s):
Carlos Morato
Krishnanand N. Kaipa
Boxuan Zhao
Satyandra K. Gupta

We present a multiple-Kinect exteroceptive sensing framework to achieve safe human–robot collaboration during assembly tasks. Our approach is based on real-time replication of the human and robot movements inside a physics-based simulation of the work cell. This enables evaluation of the human–robot separation in 3D Euclidean space, which can be used to generate safe motion goals for the robot. For this purpose, we develop an N-Kinect system to build an explicit model of the human, and a roll-out strategy in which we forward-simulate the robot's trajectory into the near future. We then use a precollision strategy that allows a human to operate in close proximity to the robot, pausing the robot's motion whenever an imminent collision between the human model and any part of the robot is detected. Whereas most previous range-based methods analyzed the physical separation using depth data from 2D projections of the robot and human, our approach evaluates the separation in 3D space based on an explicit human model and a forward physical simulation of the robot. Real-time behavior (≈ 30 Hz) observed during experiments, in which a 5-DOF articulated robot and a human safely collaborated on an assembly task, validates our approach.
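The precollision strategy in this abstract, i.e. forward-simulating the robot a short horizon ahead and pausing when the predicted 3D separation drops below a threshold, can be sketched as follows. The threshold, point-cloud representation, and function names are assumptions for illustration, not details from the paper:

```python
import numpy as np

# Hedged sketch of the roll-out precollision check: given predicted robot
# link points over the next few simulation steps and the current human
# model points, pause if any predicted pose comes within a safety margin.
# SAFE_DIST and the simple straight-line trajectory are assumed values.

SAFE_DIST = 0.30  # metres; illustrative safety threshold

def min_separation(robot_points, human_points):
    """Smallest Euclidean distance between two (N, 3) point sets."""
    diff = robot_points[:, None, :] - human_points[None, :, :]
    return np.linalg.norm(diff, axis=2).min()

def should_pause(robot_traj, human_points, horizon=5):
    """robot_traj: sequence of (N, 3) arrays, one per future simulation step.
    Returns True if any step within the horizon violates SAFE_DIST."""
    return any(min_separation(points, human_points) < SAFE_DIST
               for points in robot_traj[:horizon])

# Toy scenario: robot end-effector approaches a stationary human hand.
human = np.array([[0.5, 0.0, 1.0]])
trajectory = [np.array([[1.5 - 0.3 * t, 0.0, 1.0]]) for t in range(6)]
print(should_pause(trajectory, human))  # → True (step 3 is only 0.1 m away)
```

Pairwise point distances are a crude stand-in for the paper's physics-based simulation, but they capture why the 3D check is stricter than comparing 2D depth-image projections, where points separated along the camera axis can appear to overlap.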


ACTA IMEKO
2021
Vol 10 (3)
pp. 72
Author(s):
Imre Paniti
János Nacsa
Péter Kovács
Dávid Szur

The performance of human–robot collaboration can be improved in some assembly tasks when a robot emulates the effective coordination behaviours observed in human teams. However, this close collaboration could cause collisions, resulting in delays to the initial schedule. Besides the commonly used acoustic or visual signals, vibrations from a mobile device can be used to communicate the intention of a collaborative robot (cobot). In this paper, the communication time of a virtual reality and depth-camera-based system is presented, in which vibration signals alert the user to a probable collision with a UR5 cobot. Preliminary tests of human reaction time and network communication time are carried out to obtain an initial picture of the collision predictor system's performance. Experimental tests are also presented for an assembly task with a three-finger gripper that functions as a flexible assembly device.


Author(s):
Asad Tirmizi
Patricia Leconte
Karel Janssen
Jean Hoyos
Maarten Witters

This chapter proposes a framework to make the programming of cobots faster, more user-friendly, and more flexible for assembly tasks. The work focuses on an industrial case of a small (10 kg) air compressor and investigates the technologies that can be used to automate this task with human–robot collaboration. To this end, the framework takes a radically different approach at the motion-stack level and integrates the cobot with a constraint-based robot programming paradigm that expands the robot's programming possibilities. Additionally, the framework takes inputs from the operator via speech recognition and computer vision to increase the intuitiveness of the programming process. An implementation was made with a focus on industrial robustness, and the results show that this framework is a promising approach to the overall goal of achieving flexible assembly in factories by making robot programming faster and more intuitive.


2007
Author(s):
Elsa Eiriksdottir
Richard Catrambone
