Safe and Efficient Human–Robot Collaboration Part II: Optimal Generalized Human-in-the-Loop Real-Time Motion Generation

2018
Vol 3 (4)
pp. 3781-3788
Author(s):
Roman Weitschat
Harald Aschemann


Author(s):
Haodong Chen
Ming C. Leu
Wenjin Tao
Zhaozheng Yin

Abstract With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) is a growing trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on Convolutional Neural Networks (CNN) was developed. Building on that model, this study aims to design and develop a new real-time HRC system based on a multi-threading method and the CNN. The system enables real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling multiple tasks concurrently. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker's behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification by the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm to trigger a real-time response. A Graphical User Interface (GUI) integrating the proposed HRC system is developed to visualize the real-time motion history and the classification results of the gesture identification. A series of actual collaboration experiments are carried out between a human worker and a six-degree-of-freedom (6-DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.
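The pipeline described above, one thread accumulating motion history images while a second thread classifies them, can be sketched in a few dozen lines of Python. The sketch below is illustrative only: the `StubGestureCNN` class, the confidence threshold, and the `send_to_robot` hook are assumptions standing in for the paper's trained model and robot interface.

```python
# Minimal sketch (not the authors' code): a two-thread MHI pipeline.
import queue
import threading

import cv2
import numpy as np

MHI_DURATION = 1.0   # seconds of motion history retained in the image
DIFF_THRESH = 32     # per-pixel intensity change treated as motion
mhi_queue = queue.Queue(maxsize=4)  # hands MHI snapshots to the classifier

class StubGestureCNN:
    """Placeholder for the paper's trained CNN; returns a dummy prediction."""
    def classify(self, mhi):
        return "gesture_0", 0.0  # (label, confidence)

def send_to_robot(label):
    print("detected gesture:", label)  # stand-in for the robot command channel

def capture_thread(camera_index=0):
    """Grab frames, update the MHI, and enqueue snapshots for classification."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("camera not available")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        moving = cv2.absdiff(gray, prev) > DIFF_THRESH
        now = cv2.getTickCount() / cv2.getTickFrequency()
        mhi = np.where(moving, now, mhi)   # stamp fresh motion
        mhi[mhi < now - MHI_DURATION] = 0  # let old motion decay
        prev = gray
        if not mhi_queue.full():
            mhi_queue.put(mhi.copy())

def classify_loop(model):
    """Consume MHIs; forward confidently detected gestures to the robot."""
    while True:
        mhi = mhi_queue.get()
        label, confidence = model.classify(mhi)
        if confidence > 0.9:
            send_to_robot(label)

threading.Thread(target=capture_thread, daemon=True).start()
classify_loop(StubGestureCNN())
```

Because the queue decouples capture from classification, a slow inference step never blocks frame grabbing, which is the point of the multi-threading architecture the abstract describes.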


2020
Vol 53 (2)
pp. 9384-9390
Author(s):
Bárbara Barros Carlos
Tommaso Sartor
Andrea Zanelli
Moritz Diehl
Giuseppe Oriolo

Proceedings
2019
Vol 42 (1)
pp. 48
Author(s):
Tadele Belay Tuli
Martin Manns

Human-robot collaboration combines the extended capabilities of humans and robots to create a more inclusive and human-centered production system for the future. However, human safety is the primary concern for manufacturing industries. Real-time motion tracking is therefore necessary to identify whether a human worker's body parts enter the restricted working space dedicated solely to the robot. Tracking these motions with decentralized, heterogeneous tracking systems requires a generic model controller and consistent motion-exchange formats. In this work, we investigate a concept for unified real-time motion tracking in human-robot collaboration. A low-cost, game-based motion tracking system, e.g., HTC Vive, is used to capture human motion by mapping it onto a digital human model in the Unity3D environment. The human model is described by a biomechanical model comprising joint segments defined by position and orientation. For robot motion tracking, a unified robot description format is used to describe the kinematic trees. Finally, an assembly operation involving snap joining is simulated to analyze the real-time capability of the system. The distribution of joint variables in spatial space and time space is analyzed. The results suggest that real-time tracking in human-robot collaborative assembly environments can help maximize the safety of the human worker. However, the accuracy and reliability of the system under disturbances still need to be verified.
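As a rough illustration of the safety check described above, the sketch below tests tracked human joint positions against an axis-aligned box standing in for the robot's restricted workspace. The joint names, coordinates, and zone bounds are invented for the example; the actual system streams poses from HTC Vive trackers onto a Unity3D biomechanical model.

```python
# Minimal sketch (assumptions, not the authors' implementation): flag human
# joints that enter an axis-aligned box approximating the robot's zone.
import numpy as np

# Restricted robot zone as an axis-aligned bounding box in world coordinates
# (meters); the bounds are made-up values for illustration.
ZONE_MIN = np.array([0.8, -0.5, 0.0])
ZONE_MAX = np.array([1.6,  0.5, 1.2])

def joints_in_restricted_zone(joints):
    """Return the names of joints whose position lies inside the robot zone.

    `joints` maps a joint name (e.g., 'right_hand') to its xyz position.
    """
    violations = []
    for name, pos in joints.items():
        p = np.asarray(pos)
        if np.all(p >= ZONE_MIN) and np.all(p <= ZONE_MAX):
            violations.append(name)
    return violations

# Example: one tracked frame of a simplified biomechanical model.
frame = {
    "head":       (0.2,  0.0, 1.7),
    "right_hand": (1.0,  0.1, 1.0),  # inside the zone -> should trigger a stop
    "left_hand":  (0.3, -0.2, 1.1),
}
hits = joints_in_restricted_zone(frame)
if hits:
    print("safety stop:", hits)
```

A production system would run this check every tracking cycle and apply a margin around the box to absorb tracker latency and noise.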


Author(s):  
Meng-Lun Lee
Sara Behdad
Xiao Liang
Minghui Zheng

Abstract Product disassembly is a labor-intensive process and is far from being automated. Typical disassembly is not robust enough to handle product variety arising from different shapes, models, and physical uncertainties due to component imperfections, damage incurred during component usage, or insufficient product information. To overcome these difficulties and to automate the disassembly procedure through human-robot collaboration without excessive computational cost, this paper proposes a real-time receding horizon sequence planner that distributes tasks between the robot and the human operator while taking real-time human motion into consideration. The sequence planner aims to address several issues on the disassembly line, such as varying part orientations, safety constraints for human operators, the uncertainty of human operation, and the computational cost of a large number of disassembly tasks. The proposed disassembly sequence planner identifies both the positions and orientations of the to-be-disassembled items, as well as the location of the human operator, and obtains an optimal disassembly sequence that follows disassembly rules and safety constraints for human operation. Experimental tests have been conducted to validate the proposed planner: the robot can locate and disassemble the components following the optimal sequence, explicitly consider the human operator's real-time motion, and collaborate with the human operator without violating safety constraints.
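A receding horizon planner of this kind re-solves the task-assignment problem every cycle as the human moves. The greedy Python sketch below conveys the idea only: the task set, precedence rules, and safety radius are invented, and the paper's planner optimizes a full sequence rather than a single greedy step.

```python
# Minimal sketch (invented task data, not the paper's planner): one
# receding-horizon step picking the next feasible disassembly task,
# respecting precedence rules and a safety radius around the human.
import math

# Each task: position of the component and its prerequisite tasks.
TASKS = {
    "cover":  {"pos": (0.4, 0.1, 0.2), "requires": set()},
    "screw1": {"pos": (0.5, 0.0, 0.2), "requires": {"cover"}},
    "board":  {"pos": (0.5, 0.1, 0.1), "requires": {"screw1"}},
}
SAFETY_RADIUS = 0.3  # meters; keep the robot this far from the human hand

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def next_task(done, robot_pos, human_pos):
    """Re-planned each cycle: cheapest feasible task outside the safety zone."""
    candidates = [
        t for t, d in TASKS.items()
        if t not in done
        and d["requires"] <= done                      # precedence satisfied
        and dist(d["pos"], human_pos) > SAFETY_RADIUS  # safe w.r.t. the human
    ]
    if not candidates:
        return None  # wait, or hand the task to the human operator
    return min(candidates, key=lambda t: dist(TASKS[t]["pos"], robot_pos))

# One planning cycle with the human hand near the screw.
print(next_task(done={"cover"}, robot_pos=(0.0, 0.0, 0.3),
                human_pos=(0.52, 0.02, 0.2)))  # -> None: screw1 is blocked
```

Re-running `next_task` every cycle lets the assignment adapt as the tracked human position changes, which is the receding-horizon behavior the abstract describes.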

