Design of a Real-Time Human-Robot Collaboration System Using Dynamic Gestures

2021
Author(s):  
Haodong Chen
Ming C. Leu
Wenjin Tao
Zhaozheng Yin

Abstract With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) has become a new trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on Convolutional Neural Networks (CNNs) was developed. Building on that model, this study aims to design and develop a new real-time HRC system based on a multi-threading method and the CNN. The system enables real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling more than one task at the same time. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker's behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification by the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm to trigger a real-time response. A graphical user interface (GUI) integrating the proposed HRC system is developed to visualize the real-time motion history and the classification results of the gesture identification. A series of actual collaboration experiments is carried out between a human worker and a six-degree-of-freedom (6-DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.
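As a minimal illustration of the producer-consumer threading pattern described above, the sketch below pairs a capture thread, an MHI-update thread, and a classification thread through queues, assuming OpenCV and NumPy; the decay rule, frame rate, and the classify_mhi/send_to_robot hooks are illustrative assumptions rather than the authors' implementation.

```python
import threading
import queue

import cv2
import numpy as np

FPS, MHI_DURATION = 30.0, 1.0          # assumed camera rate and MHI decay window
frame_q = queue.Queue(maxsize=8)       # frames awaiting MHI update
mhi_q = queue.Queue(maxsize=4)         # MHIs awaiting CNN classification

def capture_worker(cam_index=0):
    """Continuously grab grayscale frames from the camera."""
    cap = cv2.VideoCapture(cam_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if ok:
            frame_q.put(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

def mhi_worker():
    """Fold frame differences into a decaying motion history image."""
    prev, mhi, t = None, None, 0.0
    while True:
        gray = frame_q.get()
        if prev is None:
            prev, mhi = gray, np.zeros(gray.shape, np.float32)
            continue
        moving = cv2.absdiff(gray, prev) > 30          # simple motion mask
        t += 1.0 / FPS
        # stamp moving pixels with the current time, forget pixels older than the window
        mhi = np.where(moving, t, np.where(mhi < t - MHI_DURATION, 0.0, mhi))
        prev = gray
        mhi_q.put(mhi.copy())

def classify_worker(classify_mhi, send_to_robot):
    """Run the gesture classifier on each MHI and forward detections to the robot."""
    while True:
        label = classify_mhi(mhi_q.get())
        if label is not None:
            send_to_robot(label)

# In a full system the GUI / main loop keeps the process alive; dummy hooks are used here.
for worker, args in ((capture_worker, ()), (mhi_worker, ()),
                     (classify_worker, (lambda m: None, print))):
    threading.Thread(target=worker, args=args, daemon=True).start()
```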


2013
Vol 4 (1)
pp. 1
Author(s):  
Ednaldo Brigante Pizzolato
Mauro dos Santos Anjo
Sebastian Feuerstack

Sign languages are the natural way Deaf people communicate with others. They have their own formal semantic definitions and syntactic rules and are composed of a large set of gestures involving the hands and head. Automatic recognition of sign languages (ARSL) tries to recognize the signs and translate them into a written language. ARSL is a challenging task, as it involves background segmentation, hand and head posture modeling, recognition and tracking, temporal analysis, and syntactic and semantic interpretation. Moreover, when real-time requirements are considered, the task becomes even more challenging. In this paper, we present a study of the real-time requirements of automatic sign language recognition for small sets of static and dynamic gestures of the Brazilian Sign Language (LIBRAS). For static gesture recognition, we implemented a system that works on small subsets of the alphabet, such as A, E, I, O, U and B, C, F, L, V, and reaches very high recognition rates. For dynamic gesture recognition, we tested our system on a small set of LIBRAS words and collected the execution times. The aim was to gather knowledge about the execution time of each recognition process (segmentation, analysis, and recognition itself) in order to evaluate the feasibility of building a real-time system that recognizes small sets of both static and dynamic gestures. Our findings indicate that the bottleneck of our current architecture is the recognition phase.
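To make the timing study concrete, the sketch below shows one way per-stage execution times could be collected; the segment/analyze/recognize callables are hypothetical placeholders for the pipeline stages named in the abstract, not the authors' code.

```python
import time
from collections import defaultdict

def profile_pipeline(frames, segment, analyze, recognize):
    """Return the average latency (ms) of each recognition stage over a frame set."""
    timings = defaultdict(list)
    for frame in frames:
        data = frame
        for name, stage in (("segmentation", segment),
                            ("analysis", analyze),
                            ("recognition", recognize)):
            start = time.perf_counter()
            data = stage(data)                      # each stage feeds the next
            timings[name].append(time.perf_counter() - start)
    return {name: 1000.0 * sum(t) / len(t) for name, t in timings.items()}

# Real-time budget check at 30 fps: the summed stage averages must stay below ~33 ms.
```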


2019
Author(s):  
Rainer Mueller
Matthias Vette
Tobias Masiak
Benjamin Duppe
Albert Schulz

Sensors
2021
Vol 21 (2)
pp. 663
Author(s):  
Yuji Yamakawa
Yutaro Matsui
Masatoshi Ishikawa

In this research, we focused on human-robot collaboration. There were two goals: (1) to develop and evaluate a real-time human-robot collaborative system, and (2) to achieve concrete tasks, such as collaborative peg-in-hole, using the developed system. We proposed an algorithm for visual sensing and robot hand control to perform collaborative motion, and we analyzed the stability of the collaborative system and the so-called collaborative error caused by image processing and latency. We achieved collaborative motion with the developed system and evaluated the collaborative error on the basis of the analysis results. Moreover, we aimed to realize a collaborative peg-in-hole task, which requires a system with high speed and high accuracy. To achieve this goal, we analyzed the conditions required for performing the collaborative peg-in-hole task from the viewpoints of geometry, force, and posture. Finally, we present the experimental results and data from the collaborative peg-in-hole task and examine the effectiveness of our collaborative system.
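The following back-of-the-envelope sketch illustrates only the latency argument behind the collaborative error: a target moving at speed v drifts by roughly v times the total sensing-to-actuation delay before the robot can react. The speeds and delays below are illustrative assumptions, not the paper's measured values.

```python
def collaborative_error(target_speed_mps, image_proc_s, control_delay_s):
    """Approximate tracking error (m) caused by image-processing and control latency."""
    return target_speed_mps * (image_proc_s + control_delay_s)

# Example: a peg moved at 0.2 m/s with 2 ms image processing and 1 ms control delay
# lags the hand by about 0.6 mm, one way to gauge whether a given hole clearance is feasible.
print(collaborative_error(0.2, 0.002, 0.001))   # 0.0006 m
```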


Author(s):  
Le Wang
Shengquan Xie
Wenjun Xu
Bitao Yao
Jia Cui
...  

Abstract In a complex industrial human-robot collaboration (HRC) environment, obstacles in the shared working space can occlude the operator, and the industrial robot threatens the operator's safety if it cannot obtain the complete human spatial point cloud. This paper proposes a real-time human point cloud inpainting method based on a deep generative model. The method recovers the human point cloud occluded by obstacles in the shared working space to ensure the safety of the operator. The proposed method consists of three parts: (i) real-time obstacle detection, which detects obstacle locations in real time and generates an image of the obstacles; (ii) application of the deep generative model, a fully convolutional neural network (CNN) structure with a generative adversarial loss, which generates the missing depth data of the operator at arbitrary positions in the human depth image; and (iii) spatial mapping of the depth image, in which the depth image is mapped to a point cloud by coordinate system conversion. The effectiveness of the method is verified by filling holes in the human point cloud occluded by obstacles in an industrial HRC environment. The experimental results show that the proposed method can accurately generate the occluded human point cloud in real time and ensure the safety of the operator.
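Step (iii) is the standard pinhole back-projection from a depth image to camera-frame 3D points; the sketch below assumes generic intrinsics (fx, fy, cx, cy are placeholder values, not those of the authors' sensor).

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Map an HxW depth image (meters) to an Nx3 array of camera-frame points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels without depth
```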


Robotica
2019
Vol 38 (10)
pp. 1756-1777
Author(s):  
Gerold Huber
Dirk Wollherr

Summary With the increasing demand for humans and robots to collaborate in a joint workspace, it is essential that robots react and adapt instantaneously to unforeseen events to ensure safety. Constraining robot dynamics directly on SE(3), that is, the group of 3D translations and rotations, is essential to comply with the emerging Human-Robot Collaboration (HRC) safety standard ISO/TS 15066. We argue that limiting coordinate-independent magnitudes of physical dynamic quantities at the same time allows more intuitive constraint definitions. We present the first real-time capable online trajectory generator that constrains the magnitudes of 3D translational and 3D rotational dynamics in a singularity-free formulation. Simulations as well as experiments on a hardware platform show its utility in HRC contexts.
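As a simplified illustration of limiting coordinate-independent magnitudes (not the authors' singularity-free trajectory generator), the sketch below uniformly scales a commanded twist so that its translational and rotational speed norms stay within configured limits; the limit values are assumptions, not figures from ISO/TS 15066.

```python
import numpy as np

def limit_twist(v, w, v_max=0.25, w_max=1.0):
    """Scale linear velocity v (m/s) and angular velocity w (rad/s) to their limits."""
    scale = min(1.0,
                v_max / max(np.linalg.norm(v), 1e-12),
                w_max / max(np.linalg.norm(w), 1e-12))
    # one common scale preserves the direction of the screw motion on SE(3)
    return v * scale, w * scale

# Example: a twist that is too fast in translation is scaled down as a whole.
v_cmd, w_cmd = np.array([0.5, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
print(limit_twist(v_cmd, w_cmd))
```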


Proceedings
2019
Vol 42 (1)
pp. 48
Author(s):  
Tadele Belay Tuli
Martin Manns

Human-robot collaboration combines the extended capabilities of humans and robots to create a more inclusive and human-centered production system in the future. However, human safety is the primary concern for manufacturing industries. Therefore, real-time motion tracking is necessary to identify whether a human worker's body parts enter the restricted working space dedicated solely to the robot. Tracking these motions with decentralized and heterogeneous tracking systems requires a generic model controller and consistent motion exchange formats. In this work, we investigate a concept for unified real-time motion tracking for human-robot collaboration. A low-cost, game-based motion tracking system (e.g., HTC Vive) is used to capture human motion, which is mapped onto a digital human model in the Unity3D environment. The human model is described by a biomechanical model comprising joint segments defined by position and orientation. For robot motion tracking, a unified robot description format is used to describe the kinematic trees. Finally, an assembly operation involving snap joining is simulated to analyze the real-time capability of the system. The distributions of the joint variables in spatial space and time space are analyzed. The results suggest that real-time tracking in human-robot collaborative assembly environments can be used to maximize the safety of the human worker. However, the accuracy and reliability of the system under disturbances still need to be validated.
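A minimal sketch of the safety check that motivates this tracking concept is given below (plain Python rather than the Unity3D environment used in the paper): any tracked joint whose position falls inside an axis-aligned box reserved for the robot is flagged. The joint names and box bounds are illustrative assumptions.

```python
import numpy as np

ROBOT_ZONE_MIN = np.array([0.8, -0.4, 0.0])   # meters, lower corner of restricted space
ROBOT_ZONE_MAX = np.array([1.6,  0.4, 1.2])   # meters, upper corner of restricted space

def joints_in_robot_zone(joint_positions):
    """Return the names of joints whose positions lie inside the restricted box."""
    return [name for name, p in joint_positions.items()
            if np.all(p >= ROBOT_ZONE_MIN) and np.all(p <= ROBOT_ZONE_MAX)]

# Example with two joints from the biomechanical model: only the hand triggers a flag.
print(joints_in_robot_zone({"right_hand": np.array([1.0, 0.0, 0.9]),
                            "head":       np.array([0.3, 0.0, 1.6])}))
```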

