Gesture Recognition Using a Depth Camera for Human Robot Collaboration on Assembly Line

2015 · Vol 3 · pp. 518-525
Author(s): Eva Coupeté, Fabien Moutarde, Sotiris Manitsaris
Author(s): Matthias Scholer, Matthias Vette, Rainer Mueller

Purpose – This study aims to show how lightweight robot systems can be used to automate manual processes for higher efficiency, increased process capability and enhanced ergonomics. As a use case, a new collaborative testing system for an automated water leak test was designed using an image processing system utilized by the robot.

Design/methodology/approach – The "water leak test" in an automotive final assembly line is often a significant cost factor due to its labour-intensive nature, particularly for premium car manufacturers, as each vehicle is watered and manually inspected for leakage. This paper delivers an approach that optimizes the efficiency and capability of the test process by using a new automated in-line inspection system in which thermographic images are taken by a lightweight robot system and then processed to locate the leak. Such optimization allows the collaboration of robots and manual labour, which in turn enhances the capability of the process station.

Findings – This paper examines the development of a new application for lightweight robotic systems and provides a suitable process whereby the system was optimized regarding technical, ergonomic and safety-related aspects.

Research limitations/implications – A new automated testing process in combination with a processing algorithm was developed. A modular system suitable for the integration of human–robot collaboration into the assembly line is presented as well.

Practical implications – To optimize and validate the system, it was set up in a true-to-reality model factory and brought to prototype status. Several original equipment manufacturers have shown great interest in the system, and feasibility studies for a practical implementation are currently under way.

Social implications – The direct human–robot collaboration allows humans and robots to share the same workspace without strict separation measures, which is a great advantage over traditional industrial robots. The workers benefit from a more ergonomic workflow and are relieved of unpleasant, repetitive and burdensome tasks.

Originality/value – A lightweight robotic system was implemented in a continuous assembly line as a new area of application for these systems. The automated water leak test gives a practical example of how to enrich assembly and commissioning lines, which are currently dominated by manual labour, with new technologies. This is necessary to reach higher efficiency and process capability while maintaining greater flexibility than fully automated systems.
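The abstract does not specify the image-processing algorithm used on the thermographic images; a minimal sketch of one plausible step, assuming a leak shows up as a locally cooler spot in the thermal image (the function name, thresholds and example values are all hypothetical, not from the paper):

```python
def locate_cold_spot(thermal, threshold_delta=2.0):
    # thermal: 2-D list of temperatures in deg C from the thermographic
    # camera. Water evaporating at a leak cools the surface, so a leak
    # appears as a spot noticeably colder than the panel around it.
    flat = sorted(v for row in thermal for v in row)
    median = flat[len(flat) // 2]
    coldest = min(
        ((i, j) for i, row in enumerate(thermal) for j in range(len(row))),
        key=lambda rc: thermal[rc[0]][rc[1]],
    )
    r, c = coldest
    if median - thermal[r][c] >= threshold_delta:
        return coldest          # candidate leak location (row, col)
    return None                 # no spot cold enough: no leak found

# Hypothetical 5x5 door-panel patch at ~25 deg C with one cool spot
panel = [[25.0] * 5 for _ in range(5)]
panel[2][3] = 20.0
print(locate_cold_spot(panel))  # -> (2, 3)
```

A production system would of course segment connected regions and register the camera pose against the car body, but the core decision, "is any region sufficiently colder than its surroundings?", reduces to a comparison like this one.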


Author(s): Haodong Chen, Ming C. Leu, Wenjin Tao, Zhaozheng Yin

Abstract With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) has become a new trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on Convolutional Neural Networks (CNN) was developed. Building on that model, this study aims to design and develop a new real-time HRC system based on a multi-threading method and the CNN. This system enables real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling more than one task at the same time. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker's behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification by the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm to trigger a real-time response. A Graphical User Interface (GUI) integrating the proposed HRC system is developed to visualize the real-time motion history and the classification results of the gesture identification. A series of actual collaboration experiments are carried out between a human worker and a six-degree-of-freedom (6-DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.
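The paper's own implementation is not reproduced in the abstract; the two ingredients it names, a multi-threaded capture/recognition pipeline and frame-differencing-based motion history images, can be sketched together as follows (frame sizes, the decay constant and the motion threshold are illustrative assumptions):

```python
import queue
import threading

TAU = 10          # MHI value stamped on freshly moved pixels (assumed)
DIFF_THRESH = 15  # per-pixel intensity change treated as motion (assumed)

def update_mhi(mhi, prev, frame):
    # Frame-differencing MHI update: pixels that changed get the full
    # value TAU; all other pixels decay by 1, so recent motion appears
    # bright and older motion gradually fades to 0.
    h, w = len(frame), len(frame[0])
    return [[TAU if abs(frame[y][x] - prev[y][x]) > DIFF_THRESH
             else max(0, mhi[y][x] - 1)
             for x in range(w)] for y in range(h)]

def capture(frames, q):
    # Producer thread: in the real system this would read the camera.
    for f in frames:
        q.put(f)
    q.put(None)  # sentinel marking end of stream

def recognize(q, out):
    # Consumer thread: builds the MHI frame by frame; the finished MHI
    # would then be fed to the CNN classifier.
    prev, mhi = None, None
    while True:
        frame = q.get()
        if frame is None:
            break
        if prev is None:
            mhi = [[0] * len(frame[0]) for _ in frame]
        else:
            mhi = update_mhi(mhi, prev, frame)
        prev = frame
    out.append(mhi)

# Two synthetic 2x2 grayscale frames in which one pixel "moves"
frames = [[[0, 0], [0, 0]], [[100, 0], [0, 0]]]
q = queue.Queue(maxsize=4)
result = []
threads = [threading.Thread(target=capture, args=(frames, q)),
           threading.Thread(target=recognize, args=(q, result))]
for t in threads: t.start()
for t in threads: t.join()
print(result[0])  # -> [[10, 0], [0, 0]]
```

The bounded queue decouples the camera's frame rate from the classifier's inference time, which is the essential point of the multi-threading architecture the abstract describes.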


2021 · Vol 15
Author(s): Dimitris Papanagiotou, Gavriela Senteri, Sotiris Manitsaris

Collaborative robots are currently deployed in professional environments, in collaboration with professional human operators, helping to strike the right balance between mechanization and manual intervention in the manufacturing processes required by Industry 4.0. In this paper, the contribution of gesture recognition and pose estimation to the smooth introduction of cobots into an industrial assembly line is described, with a view to performing actions in parallel with the human operators and enabling interaction between them. The proposed active vision system uses two RGB-D cameras that record different points of view of the operator's gestures and poses, to build an external perception layer for the robot that facilitates spatiotemporal adaptation in accordance with the human's behavior. The use case of this work concerns the LCD TV assembly line of an appliance manufacturer and comprises two parts: the first part of the operation is assigned to a robot, strengthening the assembly line, and the second part is assigned to a human operator. Gesture recognition, pose estimation, physical interaction and sonic notification together create a multimodal human-robot interaction system. Five experiments are performed to test whether gesture recognition and pose estimation can reduce the cycle time and the range of motion of the operator, respectively. Physical interaction is achieved using the force sensor of the cobot. Pose estimation through a skeleton-tracking algorithm provides the cobot with human pose information and makes it spatially adjustable. Sonic notification is added for the case of unexpected incidents. A real-time gesture recognition module is implemented through a Deep Learning architecture consisting of convolutional layers, trained on an egocentric view, reducing the cycle time of the routine by almost 20%. This constitutes an added value of this work, as it affords the potential of recognizing gestures independently of anthropometric characteristics and background. Common metrics derived from the literature are used for the evaluation of the proposed system. The percentage of spatial adaptation of the cobot is proposed as a new KPI for a collaborative system, and the opinion of the human operator is measured through a questionnaire concerning the various affective states of the operator during the collaboration.
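Both headline metrics, the cycle-time reduction of almost 20% and the proposed spatial-adaptation KPI, reduce to simple ratios. A hedged sketch with purely illustrative figures (the paper's exact KPI definition is not reproduced in the abstract, so the second function is one plausible reading):

```python
def cycle_time_reduction_pct(baseline_s, with_gestures_s):
    # Relative cycle-time saving of the gesture-driven routine
    # versus the baseline routine, in percent.
    return 100.0 * (baseline_s - with_gestures_s) / baseline_s

def spatial_adaptation_pct(adapted_cycles, total_cycles):
    # Hypothetical reading of the proposed KPI: the share of work
    # cycles in which the cobot adjusted its pose to the tracked
    # human skeleton.
    return 100.0 * adapted_cycles / total_cycles

# Illustrative figures only (not taken from the paper):
print(round(cycle_time_reduction_pct(60.0, 48.5), 1))  # -> 19.2
print(spatial_adaptation_pct(17, 20))                  # -> 85.0
```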

