Comparison of RGB-D and IMU-based gesture recognition for human-robot interaction in remanufacturing

Author(s):  
Luis Roda-Sanchez ◽  
Celia Garrido-Hidalgo ◽  
Arturo S. García ◽  
Teresa Olivares ◽  
Antonio Fernández-Caballero

Abstract
With product life-cycles getting shorter and the limited availability of natural resources, the paradigm shift towards the circular economy is being driven forward. In this domain, the successful adoption of remanufacturing is key. However, its associated process efficiency remains limited to date, given the high flexibility requirements of product disassembly. With the emergence of Industry 4.0, natural human-robot interaction is expected to provide numerous benefits in terms of (re)manufacturing efficiency and cost. In this regard, vision-based and wearable-based approaches are the most widespread when it comes to establishing a gesture-based interaction interface. In this work, an experimental comparison of two different movement-estimation systems is addressed: (i) position data collected from Microsoft Kinect RGB-D cameras and (ii) acceleration data collected from inertial measurement units (IMUs). The results point to our IMU-based proposal, OperaBLE, having recognition accuracy rates up to 8.5 times higher than those of Microsoft Kinect, which proved to be dependent on the movement's execution plane, the subject's posture, and the focal distance.
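Neither the recognition pipeline of OperaBLE nor the Kinect baseline is detailed in this abstract, so the following is only a minimal sketch of how such a comparison is commonly set up: the same template-matching classifier (here, dynamic time warping in plain NumPy) applied once to Kinect joint-position traces and once to IMU acceleration traces. All function names, template lengths, and the random placeholder data are assumptions, not the authors' method.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two sequences of feature vectors (shape: T x D)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_gesture(sample, templates):
    """Return the label of the template closest to the sample under DTW."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

# Placeholder data standing in for recorded traces (random here, recordings in practice):
# Kinect stream: wrist position (x, y, z) per frame; IMU stream: acceleration (ax, ay, az).
kinect_templates = {"raise_arm": np.random.randn(60, 3), "wave": np.random.randn(60, 3)}
imu_templates    = {"raise_arm": np.random.randn(120, 3), "wave": np.random.randn(120, 3)}

kinect_sample = np.random.randn(55, 3)   # placeholder for a recorded Kinect trajectory
imu_sample    = np.random.randn(110, 3)  # placeholder for a recorded IMU trace

print(classify_gesture(kinect_sample, kinect_templates))
print(classify_gesture(imu_sample, imu_templates))
```

Running the same classifier over both data sources is one way to make the accuracy comparison depend only on the sensing modality rather than on the recognition algorithm.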

2021 ◽  
Author(s):  
Callum Robinson

MARVIN (Mobile Autonomous Robotic Vehicle for Indoor Navigation) was once the flagship of Victoria University's mobile robotic fleet. However, over the years MARVIN has become obsolete. This thesis continues the redevelopment of MARVIN, transforming it into a fully autonomous research platform for human-robot interaction (HRI). MARVIN utilises a Segway RMP, a self-balancing mobility platform. This provides agile locomotion, but increases sensor-processing complexity due to its dynamic pitch. MARVIN's existing sensing systems (including a laser rangefinder and ultrasonic sensors) are augmented with tactile sensors and a Microsoft Kinect v2 RGB-D camera for 3D sensing. This allows the detection of the obstacles often found in MARVIN's unmodified office-like operating environment. These sensors are processed using novel techniques to account for the Segway's dynamic pitch. A newly developed navigation stack takes the processed sensor data to facilitate localisation, obstacle detection and motion planning. MARVIN's inherited humanoid robotic torso is augmented with a touch screen and voice interface, enabling HRI. MARVIN's HRI capabilities are demonstrated by implementing it as a robotic guide. This implementation is evaluated through a usability study and found to be successful. Through evaluations of MARVIN's locomotion, sensing, localisation and motion planning systems, in addition to the usability study, MARVIN is found to be capable of both autonomous navigation and engaging HRI. These developed features open a diverse range of research directions and HRI tasks that MARVIN can be used to explore.
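The thesis's actual pitch-compensation techniques are not reproduced here; the sketch below only illustrates the general idea the abstract describes, rotating Kinect points by the Segway's measured pitch so that obstacle detection runs in a level frame. The frame convention, function name, and example values are assumptions.

```python
import numpy as np

def compensate_pitch(points_camera, pitch_rad):
    """Rotate Kinect points (N x 3) about the lateral (y) axis by the base's
    measured pitch, so obstacles are expressed in a level reference frame."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points_camera @ R.T

# Hypothetical usage: a point 2 m ahead of the camera while the base pitches forward 5 degrees.
pts = np.array([[0.0, 0.0, 2.0]])
print(compensate_pitch(pts, np.deg2rad(5.0)))
```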


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4586 ◽  
Author(s):  
Chunxu Li ◽  
Ashraf Fahmy ◽  
Johann Sienz

In this paper, the application of Augmented Reality (AR) to the control and adjustment of robots has been developed, with the aim of making interaction with and adjustment of robots easier and more accurate from a remote location. A LeapMotion sensor-based controller has been investigated to track the movement of the operator's hands. The data from the controller allow gestures and the position of the hand palm's central point to be detected and tracked. A Kinect V2 camera measures the corresponding motion velocities in the x, y, and z directions after our post-processing algorithm is applied. Unreal Engine 4 is used to create an AR environment in which the user can monitor the control process immersively. A Kalman filtering (KF) algorithm is employed to fuse the position signals from the LeapMotion sensor with the velocity signals from the Kinect camera. The fused data are sent over the User Datagram Protocol (UDP) to teleoperate a Baxter robot in real time. Several experiments have been conducted to validate the proposed method.
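The paper's exact filter design (state vector, covariances, update rates) is not given in the abstract. As a rough illustration of the fusion scheme described, the sketch below runs a per-axis constant-velocity Kalman filter whose measurement vector combines a LeapMotion position with a Kinect-derived velocity; every numeric value is a placeholder tuning guess, not taken from the paper.

```python
import numpy as np

dt = 1.0 / 30.0                        # assumed common update rate (Hz)
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
Q = np.diag([1e-4, 1e-3])              # process noise (placeholder tuning)
H = np.eye(2)                          # we observe [position, velocity] directly
R = np.diag([1e-3, 5e-3])              # LeapMotion position / Kinect velocity noise (placeholders)

x = np.zeros(2)                        # state on one axis: [position, velocity]
P = np.eye(2)

def kf_step(x, P, z):
    """One predict/update cycle fusing a position (LeapMotion) and a
    velocity (Kinect) measurement z = [p_leap, v_kinect] on one axis."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Hypothetical measurements for one axis: 0.10 m from LeapMotion, 0.02 m/s from Kinect.
x, P = kf_step(x, P, np.array([0.10, 0.02]))
print(x)   # fused estimate that would then be packed into a UDP message for the robot
```

Running one such filter per Cartesian axis keeps the matrices small; the fused position/velocity pair is what would be transmitted to the Baxter controller.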



Author(s):  
Carlos Morato ◽  
Krishnanand Kaipa ◽  
Boxuan Zhao ◽  
Satyandra K. Gupta

In this paper, we propose an exteroceptive-sensing-based framework to achieve safe human-robot interaction during shared tasks. Our approach allows a human to operate in close proximity to the robot, while pausing the robot's motion whenever a collision between the human and the robot is imminent. The human's presence is sensed by an N-range-sensor system, which consists of multiple range sensors mounted at various points on the periphery of the work cell. Each range sensor is based on a Microsoft Kinect sensor. Each sensor observes the human and outputs a 20-DOF human model. Positional data from these models are fused together to generate a refined human model. Next, the robot and the human model are approximated by dynamic bounding spheres, and the robot's motion is controlled by tracking collisions between these spheres. Whereas most previous exteroceptive methods relied on depth data from camera images, our approach is one of the first successful attempts to build an explicit human model online and use it to evaluate human-robot interference. Real-time behavior observed during experiments with a 5-DOF robot and a human safely performing shared assembly tasks validates our approach.
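The abstract does not specify how the bounding spheres are generated or how "imminent" is thresholded; the sketch below is only a minimal illustration of the sphere-to-sphere proximity test that such a pause-on-imminent-collision loop relies on. The clearance value and the example sphere sets are invented for illustration.

```python
import numpy as np

def imminent_collision(robot_spheres, human_spheres, clearance=0.10):
    """Return True if any robot sphere comes within `clearance` metres of any
    human sphere. Each sphere is a (centre_xyz, radius) pair."""
    for rc, rr in robot_spheres:
        for hc, hr in human_spheres:
            if np.linalg.norm(np.asarray(rc) - np.asarray(hc)) < rr + hr + clearance:
                return True
    return False

# Hypothetical control-loop usage: pause the robot when a collision is imminent.
robot = [((0.4, 0.0, 0.8), 0.12), ((0.6, 0.0, 0.9), 0.10)]   # spheres placed along the arm links
human = [((0.7, 0.1, 0.9), 0.15)]                            # spheres from the fused human model
if imminent_collision(robot, human):
    print("pause robot motion")      # stand-in for the actual motion-pausing command
else:
    print("continue trajectory")
```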


2019 ◽  
pp. 794-812
Author(s):  
Christos Papadopoulos ◽  
Ioannis Mariolis ◽  
Angeliki Topalidou-Kyniazopoulou ◽  
Grigorios Piperagkas ◽  
Dimosthenis Ioannidis ◽  
...  

This article introduces an advanced human-robot interaction (HRI) interface that allows teaching new assembly tasks to collaborative robotic systems. Using advanced perception and simulation technologies, the interface provides the proper tools for a non-expert user to teach a robot a new assembly task in a short amount of time. An RGB-D camera is used to allow the user to demonstrate the task, and the system extracts the information needed for the assembly to be simulated and performed by the robot, while the user guides the process. The HRI interface is integrated with the ROS framework and is built as a web application, allowing operation through portable devices such as a tablet PC. The interface is evaluated with user-experience ratings from test subjects who are asked to teach a folding assembly task to the robot.
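The abstract does not describe how the web front-end, the ROS back-end, and the RGB-D camera are wired together. Purely as an architectural illustration (not the authors' implementation), the sketch below shows a ROS 1 / rospy node that could sit on the robot side of such a system, listening to typical RGB-D topics during the demonstration phase; the node name and topic names are assumptions, not taken from the paper.

```python
# Minimal demonstration-phase listener (assumed ROS 1 / rospy environment).
import rospy
from sensor_msgs.msg import Image

def on_color(msg):
    rospy.loginfo("colour frame %dx%d", msg.width, msg.height)

def on_depth(msg):
    rospy.loginfo("depth frame %dx%d", msg.width, msg.height)

if __name__ == "__main__":
    rospy.init_node("demonstration_listener")
    # Topic names below are common camera-driver defaults, used here as placeholders.
    rospy.Subscriber("/camera/rgb/image_raw", Image, on_color)
    rospy.Subscriber("/camera/depth/image_raw", Image, on_depth)
    rospy.spin()
```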



