Tracking multiple rigid symmetric and non-symmetric objects in real-time using depth data

Author(s):  
Sharath Akkaladevi ◽  
Martin Ankerl ◽  
Christoph Heindl ◽  
Andreas Pichler
2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Meng Li ◽  
Liang Yan ◽  
Qianying Wang

This paper addresses the problem of predicting human actions in depth videos. Owing to the complex spatiotemporal structure of human actions, it is difficult to infer an ongoing action before it has been fully executed. To handle this challenge, we first propose two new depth-based features, pairwise relative joint orientations (PRJOs) and depth patch motion maps (DPMMs), to represent the relative movements between each pair of joints and human-object interactions, respectively. Both features are suitable for recognizing and predicting human actions in real time. We then propose a regression-based learning approach with a group-sparsity-inducing regularizer to learn an action predictor from the combination of PRJOs and DPMMs over a sparse set of joints. Experimental results on benchmark datasets demonstrate that the proposed approach significantly outperforms existing methods for real-time human action recognition and prediction from depth data.
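The abstract describes PRJOs only at a high level. As a rough illustration of the idea (the function name and exact normalization are assumptions, not the authors' code), pairwise relative orientations can be formed from unit direction vectors between every pair of skeleton joints:

```python
import numpy as np

def pairwise_relative_joint_orientations(joints):
    """Illustrative PRJO-style feature: unit direction vectors between
    every pair of 3-D joint positions. joints has shape (J, 3)."""
    J = joints.shape[0]
    feats = []
    for i in range(J):
        for j in range(i + 1, J):
            d = joints[j] - joints[i]
            n = np.linalg.norm(d)
            # guard against coincident joints
            feats.append(d / n if n > 1e-8 else np.zeros(3))
    return np.concatenate(feats)  # length 3 * J * (J - 1) / 2
```

Such a descriptor is invariant to the skeleton's absolute position, which is one reason relative-joint features are popular for action recognition.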


2020 ◽  
Vol 29 (16) ◽  
pp. 2050266
Author(s):  
Adnan Ramakić ◽  
Diego Sušanj ◽  
Kristijan Lenac ◽  
Zlatko Bundalo

Each person traces unique patterns during gait cycles, and this information can be extracted from a live video stream and used for subject identification. In recent years, a profusion of sensors has appeared that, in addition to RGB video images, also provide depth data in real time. In this paper, a method is proposed to enhance appearance-based gait recognition by also integrating features extracted from depth data. Two approaches are proposed that integrate simple depth features in a way suitable for real-time processing. Unlike previous works, which usually use short-range sensors such as the Microsoft Kinect, a long-range stereo camera is used here in an outdoor environment. The experimental results show that the proposed approaches improve recognition rates compared to existing popular gait recognition methods.
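The abstract does not specify the depth features used. As a deliberately simple illustration of the fusion idea, a standard gait energy image (GEI, the usual appearance-based gait descriptor) could be concatenated with a per-frame mean-depth feature; both the specific depth feature and the fusion scheme below are assumptions for illustration only:

```python
import numpy as np

def gait_energy_image(silhouettes):
    # GEI: per-pixel mean of aligned binary silhouettes over one gait cycle
    return np.mean(np.stack(silhouettes), axis=0)

def depth_feature(depth_frames, masks):
    # hypothetical simple depth cue: mean depth inside the silhouette, per frame
    return np.array([d[m > 0].mean() for d, m in zip(depth_frames, masks)])

def fused_feature(silhouettes, depth_frames):
    # concatenate the appearance descriptor with the depth cue
    gei = gait_energy_image(silhouettes).ravel()
    df = depth_feature(depth_frames, silhouettes)
    return np.concatenate([gei, df])
```

A real system would normalize and align both modalities before fusion; the sketch only shows the structure of appearance-plus-depth concatenation.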


2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879085 ◽  
Author(s):  
Yundong Guo ◽  
Shu-Chuan Chu ◽  
Zhenyu Liu ◽  
Chan Qiu ◽  
Hao Luo ◽  
...  

Reconstruction and projection mapping allow virtual worlds to be brought into real spaces, giving spectators an immersive augmented-reality experience. Based on an interactive system with an RGB-depth sensor and a projector, we present a combined hardware and software solution for surface reconstruction and dynamic projection mapping in real time. In this article, a novel and adaptable calibration scheme is proposed that estimates approximate models to correct and transform raw depth data. In addition, the system achieves smooth real-time performance through an optimization framework that includes denoising and stabilization. In the entire pipeline, markers are used only during the calibration procedure, and no priors are needed. Our approach enables real-time interaction with the target surface while maintaining correct illumination. Different applications are easy and fast to develop on top of the system, and several example cases are demonstrated.
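The calibration scheme itself is not detailed in the abstract. As a deliberately simplified stand-in for "estimating approximate models to correct raw depth data", one could fit a linear correction from calibration pairs by least squares (the model form and function names are assumptions for illustration):

```python
import numpy as np

def fit_depth_correction(raw, reference):
    """Least-squares fit of corrected = a * raw + b from calibration
    pairs of raw sensor depths and ground-truth reference depths."""
    A = np.stack([raw, np.ones_like(raw)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return a, b

def correct_depth(raw, a, b):
    # apply the fitted correction to new raw depth measurements
    return a * raw + b
```

Real depth-sensor calibration typically uses richer per-pixel or polynomial models; the sketch only conveys the fit-then-apply structure.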


2014 ◽  
Vol 989-994 ◽  
pp. 2651-2654
Author(s):  
Yan Song ◽  
Bo He

In this paper, a novel feature-based real-time visual simultaneous localization and mapping (SLAM) system is proposed. The system generates colored 3-D reconstruction models and an estimated 3-D trajectory using a Kinect-style camera. The Microsoft Kinect, a low-priced 3-D camera, is the only sensor used in the experiments. Kinect-style sensors provide RGB-D (red-green-blue depth) data, which contains a 2-D image together with per-pixel depth information. ORB (Oriented FAST and Rotated BRIEF) is used to extract image features in order to speed up the whole system. The experimental results demonstrate that the system performs robustly and effectively, both in producing detailed 3-D reconstruction models and in mapping the camera trajectory.
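A core step in a feature-based RGB-D SLAM pipeline of this kind is estimating the rigid camera motion between frames from matched 3-D points. A minimal Kabsch/SVD sketch of that step (not the paper's implementation) is:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Kabsch algorithm: best-fit rotation R and translation t such that
    dst ≈ R @ src + t, for matched 3-D point sets of shape (N, 3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In practice this estimate is wrapped in RANSAC over ORB matches to reject outlier correspondences before the final refinement.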


Proceedings ◽  
2018 ◽  
Vol 4 (1) ◽  
pp. 31
Author(s):  
Guillaume Plouffe ◽  
Pierre Payeur ◽  
Ana-Maria Cretu

In this paper, we propose a vision-based recognition approach to control the posture of a robotic arm with three degrees of freedom (DOF) using static and dynamic human hand gestures. Two different methods are investigated to intuitively control the arm posture in real time using depth data collected by a Kinect sensor. In the first method, the user’s right index fingertip position is mapped to compute the inverse kinematics (IK) on the robot. Using the Forward And Backward Reaching Inverse Kinematics (FABRIK) algorithm, the IK solutions are displayed in a graphical interface; using this interface and his left hand, the user can intuitively browse and select a desired arm posture. In the second method, the user’s left index position and direction are used to determine the end-effector position and an attraction point position, respectively; the latter enables control of the arm posture. The performance of these real-time natural human control approaches is evaluated for precision and speed against static and dynamic obstacles.
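FABRIK itself is a published iterative solver that alternates backward and forward passes over the joint chain. A compact sketch for a serial chain with a fixed base (illustrative only, not the authors' code):

```python
import numpy as np

def fabrik(joints, target, lengths, tol=1e-4, max_iter=100):
    """FABRIK IK for a serial chain. joints: (n, 3) joint positions,
    lengths: (n-1,) link lengths; the base joints[0] stays fixed."""
    joints = joints.astype(float).copy()
    base = joints[0].copy()
    if np.linalg.norm(target - base) > lengths.sum():
        # target unreachable: stretch the chain toward it
        for i in range(len(joints) - 1):
            lam = lengths[i] / np.linalg.norm(target - joints[i])
            joints[i + 1] = (1 - lam) * joints[i] + lam * target
        return joints
    for _ in range(max_iter):
        # backward pass: pin the end-effector on the target
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            lam = lengths[i] / np.linalg.norm(joints[i + 1] - joints[i])
            joints[i] = (1 - lam) * joints[i + 1] + lam * joints[i]
        # forward pass: re-anchor the base
        joints[0] = base
        for i in range(len(joints) - 1):
            lam = lengths[i] / np.linalg.norm(joints[i + 1] - joints[i])
            joints[i + 1] = (1 - lam) * joints[i] + lam * joints[i + 1]
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints
```

Each pass simply re-places joints along the line to their neighbor at the correct link length, which is what makes FABRIK fast enough for the real-time interaction described above.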


Author(s):  
Le Wang ◽  
Shengquan Xie ◽  
Wenjun Xu ◽  
Bitao Yao ◽  
Jia Cui ◽  
...  

In a complex industrial human-robot collaboration (HRC) environment, obstacles in the shared working space can occlude the operator, and the industrial robot threatens the operator’s safety if it cannot obtain the complete human spatial point cloud. This paper proposes a real-time human point cloud inpainting method based on a deep generative model. The method recovers the human point cloud occluded by obstacles in the shared working space to ensure the operator’s safety. It consists of three main parts: (i) real-time obstacle detection, which locates obstacles in real time and generates an image of the obstacles; (ii) a deep generative model, a fully convolutional neural network (CNN) trained with a generative adversarial loss, which generates the missing depth data of the operator at arbitrary positions in the human depth image; and (iii) spatial mapping of the depth image, in which the depth image is mapped to a point cloud by coordinate-system conversion. The effectiveness of the method is verified by filling holes in the human point cloud occluded by obstacles in an industrial HRC environment. The experimental results show that the proposed method can accurately generate the occluded human point cloud in real time and ensure the safety of the operator.
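Step (iii), mapping a depth image to a point cloud, follows the standard pinhole camera model; a minimal sketch (the intrinsics fx, fy, cx, cy are assumed known from camera calibration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to camera-frame 3-D points
    using the pinhole model; zero-depth (invalid) pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]       # shape (N, 3)
```

After inpainting fills the occluded depth values, the same mapping turns the completed depth image back into the full human point cloud.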


2014 ◽  
Vol 32 (11) ◽  
pp. 860-869 ◽  
Author(s):  
Nikolai Smolyanskiy ◽  
Christian Huitema ◽  
Lin Liang ◽  
Sean Eron Anderson

2018 ◽  
Vol 93 (3-4) ◽  
pp. 587-600 ◽  
Author(s):  
Somar Boubou ◽  
Hamed Jabbari Asl ◽  
Tatsuo Narikiyo ◽  
Michihiro Kawanishi