Design of a Real-Time Human-Robot Collaboration System Using Dynamic Gestures

Author(s):  
Haodong Chen ◽  
Ming C. Leu ◽  
Wenjin Tao ◽  
Zhaozheng Yin

Abstract With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) is emerging as a new trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on Convolutional Neural Networks (CNN) was developed. Building on that model, this study aims to design and develop a new real-time HRC system based on a multi-threading method and the CNN. The system enables real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling multiple tasks simultaneously. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker's behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification by the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm, which conducts a real-time response. A Graphical User Interface (GUI) integrating the proposed HRC system is developed to visualize the real-time motion history and the classification results of the gesture identification. A series of actual collaboration experiments are carried out between a human worker and a six-degree-of-freedom (6-DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.
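Although the abstract does not include code, a minimal Python sketch of the multi-threaded capture-and-classify loop described above may help clarify the architecture. The `classify_mhi` and `send_robot_command` stubs are hypothetical stand-ins for the authors' trained CNN and robot interface, and all thresholds are illustrative:

```python
import queue
import threading
import time

import cv2
import numpy as np

# Hypothetical stand-ins for the authors' trained CNN and robot interface.
def classify_mhi(mhi):
    """Return (gesture_label, confidence) for a motion history image."""
    return "none", 0.0  # placeholder

def send_robot_command(label):
    """Transmit the detected gesture command to the robotic arm."""
    pass  # placeholder

MHI_DURATION = 1.0                  # seconds of motion history to retain
mhi_queue = queue.Queue(maxsize=1)  # keep only the latest MHI snapshot

def capture_loop():
    """Continuously update the MHI from the camera and publish snapshots."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        now = time.time()
        moving = cv2.absdiff(gray, prev) > 30  # binary motion mask
        mhi[moving] = now                       # stamp moving pixels
        mhi[mhi < now - MHI_DURATION] = 0       # fade out old motion
        prev = gray
        if not mhi_queue.full():
            mhi_queue.put(mhi.copy())

def recognition_loop():
    """Classify MHIs as they arrive; trigger the robot on confident hits."""
    while True:
        label, conf = classify_mhi(mhi_queue.get())
        if label != "none" and conf > 0.9:
            send_robot_command(label)

threading.Thread(target=capture_loop, daemon=True).start()
threading.Thread(target=recognition_loop, daemon=True).start()
```

The single-slot queue is one simple way to realize the "synchronous" generation and identification the abstract describes: the recognition thread always sees the freshest MHI, and classification latency never stalls the capture loop.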

Author(s):  
Haodong Chen ◽  
Wenjin Tao ◽  
Ming C. Leu ◽  
Zhaozheng Yin

Abstract Human-robot collaboration (HRC) is a challenging task in modern industry, and gesture communication in HRC has attracted much interest. This paper proposes and demonstrates a dynamic gesture recognition system based on Motion History Images (MHI) and Convolutional Neural Networks (CNN). First, ten dynamic gestures are designed for a human worker to communicate with an industrial robot. Second, the MHI method is adopted to extract gesture features from video clips and generate static images of dynamic gestures as inputs to a CNN. Finally, a CNN model is constructed for gesture recognition. The experimental results show very promising classification accuracy using this method.
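As an illustration of the MHI step described above, here is a minimal sketch (not the authors' code) that collapses a gesture clip into a single grayscale motion history image suitable as CNN input; the decay length `tau`, the difference threshold, and the output size are illustrative assumptions:

```python
import cv2
import numpy as np

def video_to_mhi(path, tau=30, thresh=32, size=(64, 64)):
    """Collapse a gesture clip into one motion history image (MHI).

    Recently moving pixels are bright and older motion fades out over
    the last `tau` frames, giving a single grayscale image that a 2D
    CNN can classify. All parameter values here are illustrative.
    """
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, np.float32)
    t = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        moving = cv2.absdiff(gray, prev) >= thresh
        mhi[moving] = t              # stamp moving pixels with frame index
        mhi[mhi < t - tau] = 0       # forget motion older than tau frames
        prev = gray
    tau = max(1, min(tau, t))        # guard for clips shorter than tau
    mhi = np.clip((mhi - (t - tau)) * (255.0 / tau), 0, 255)
    return cv2.resize(mhi.astype(np.uint8), size)
```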


2021 ◽  
pp. 105219
Author(s):  
Yong-Liang Zhang ◽  
Qiang Li ◽  
Hui Zhang ◽  
Wei-Zhen Wang ◽  
Jun Han ◽  
...  

2014 ◽  
Vol 926-930 ◽  
pp. 2714-2717
Author(s):  
Quan Wei Shi

To analyze and study real-time motion capture in sports training, this paper combines Kinect technology with the development of sports training. With the Kinect somatosensory camera as the system core, the body movement and facial expression capture system achieves an optimal balance among development cost, operating results, and development efficiency. The purpose of this research is to design and implement a motion capture system based on the OGRE graphics rendering engine, using 3DSMAX and open source code, combining the Kinect somatosensory camera with 3DSMAX and OGRE for game action. This system provides important help for realizing real-time motion capture in sports training and can be applied in that field.


2013 ◽  
Vol 4 (1) ◽  
pp. 1
Author(s):  
Ednaldo Brigante Pizzolato ◽  
Mauro dos Santos Anjo ◽  
Sebastian Feuerstack

Sign languages are the natural way deaf people communicate with other people. They have their own formal semantic definitions and syntactic rules and are composed of a large set of gestures involving the hands and head. Automatic recognition of sign languages (ARSL) tries to recognize the signs and translate them into a written language. ARSL is a challenging task, as it involves background segmentation, hand and head posture modeling, recognition and tracking, temporal analysis, and syntactic and semantic interpretation. Moreover, when real-time requirements are considered, the task becomes even more challenging. In this paper, we present a study of the real-time requirements of automatic sign language recognition for small sets of static and dynamic gestures of the Brazilian Sign Language (LIBRAS). For the task of static gesture recognition, we implemented a system that works on small subsets of the alphabet - such as A, E, I, O, U and B, C, F, L, V - reaching very high recognition rates. For the task of dynamic gesture recognition, we tested our system on a small set of LIBRAS words and collected the execution times. The aim was to gather knowledge about the execution time of all the recognition processes (such as segmentation, analysis, and recognition itself) to evaluate the feasibility of building a real-time system to recognize small sets of both static and dynamic gestures. Our findings indicate that the bottleneck of our current architecture is the recognition phase.
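The per-stage timing study described above can be sketched as follows; `segment`, `analyze`, and `recognize` are hypothetical placeholders for the paper's actual pipeline stages:

```python
import time
from collections import defaultdict

# Hypothetical placeholders for the measured pipeline stages.
def segment(frame):   return frame   # background segmentation
def analyze(mask):    return mask    # posture/temporal analysis
def recognize(feats): return "A"     # classification

timings = defaultdict(list)

def timed(stage, fn, *args):
    """Run one pipeline stage and record its wall-clock cost."""
    t0 = time.perf_counter()
    result = fn(*args)
    timings[stage].append(time.perf_counter() - t0)
    return result

def process_frame(frame):
    mask = timed("segmentation", segment, frame)
    feats = timed("analysis", analyze, mask)
    return timed("recognition", recognize, feats)

# After processing a clip, the mean per-stage cost exposes the bottleneck:
# for stage, ts in timings.items():
#     print(f"{stage}: {1000 * sum(ts) / len(ts):.2f} ms/frame")
```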


Author(s):  
Dhanashree Shyam Bendarkar ◽  
Pratiksha Appasaheb Somase ◽  
Preety Kalyansingh Rebari ◽  
Renuka Ramkrishna Paturkar ◽  
Arjumand Masood Khan

Individuals with hearing impairment use sign language to exchange their thoughts, generally communicating among themselves through hand movements. But there are limitations when they communicate with people who cannot understand these hand movements, so a mechanism is needed that can act as a translator between them. It would be easier for these people to interact if a direct infrastructure existed that could convert signs to text and voice messages. Recently, many such frameworks for sign language recognition have been developed, but most of them handle either static gesture recognition or dynamic gesture recognition alone. As sentences are generated using combinations of static and dynamic gestures, it would be simpler for hearing-impaired individuals if such computerized frameworks could detect both static and dynamic motions together. We propose a design and architecture for American Sign Language (ASL) recognition with convolutional neural networks (CNN). This paper utilizes a pretrained VGG-16 architecture for static gesture recognition; for dynamic gesture recognition, spatiotemporal features are learned with a deeper architecture that contains a bidirectional convolutional Long Short-Term Memory network (ConvLSTM) and a 3D convolutional neural network (3DCNN) and is responsible for extracting the spatiotemporal features.
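A rough Keras sketch of the two branches described above: the static branch wraps a frozen, ImageNet-pretrained VGG-16, and the dynamic branch uses a bidirectional ConvLSTM. The class count, clip shape, and layer sizes are illustrative assumptions, not the authors' configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumption: one class per static ASL letter

# Static branch: frozen ImageNet-pretrained VGG-16 as feature extractor.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False
static_model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Dynamic branch: bidirectional ConvLSTM over short clips shaped
# (frames, height, width, channels); sizes are illustrative only.
dynamic_model = models.Sequential([
    layers.Input(shape=(16, 64, 64, 3)),
    layers.Bidirectional(layers.ConvLSTM2D(32, (3, 3), padding="same")),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

static_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
dynamic_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
```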


2019 ◽  
Vol 29 ◽  
pp. 02007
Author(s):  
Robert Kristof ◽  
Cristian Moldovan ◽  
Valentin Ciupe ◽  
Inocenţiu Maniu ◽  
Magdalena Banda

This paper presents our work on two different applications that use the electromyography sensors incorporated in the Thalmic Labs Myo armband. The first application concerns a Human-Machine Interface (HMI) for controlling an industrial robot, creating an environment in which the user controls the position of the robot's gripper just by moving his arm. The second application refers to the real-time control of a tracked mobile robot built with the Arduino development board. For each application, the system design and the experimental results are presented.
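As a sketch of the second application's control path (not the authors' implementation), the snippet below maps recognized EMG poses to single-byte serial commands for the Arduino-based tracked robot using pyserial; the pose names, serial port, and command protocol are assumptions:

```python
import serial  # pyserial

# Illustrative mapping from recognized EMG poses to one-byte commands;
# both the pose names and the Arduino protocol are assumptions.
COMMANDS = {
    "fist":           b"S",  # stop
    "wave_in":        b"L",  # turn left
    "wave_out":       b"R",  # turn right
    "fingers_spread": b"F",  # drive forward
}

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def on_pose(pose_name):
    """Forward a recognized pose to the tracked robot as a single byte."""
    cmd = COMMANDS.get(pose_name)
    if cmd is not None:
        arduino.write(cmd)
```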

