pose recovery
Recently Published Documents

TOTAL DOCUMENTS: 87 (FIVE YEARS 8)
H-INDEX: 17 (FIVE YEARS 2)

2021 · Author(s): Bin Ji, Chen Yang, Yao Shunyu, Ye Pan

2021 · Vol 14 (1), pp. 246-270 · Author(s): Jianqiao Wangni, Dahua Lin, Ji Liu, Kostas Daniilidis, Jianbo Shi

2020 · Vol 96, pp. 103898 · Author(s): Caner Sahin, Guillermo Garcia-Hernando, Juil Sock, Tae-Kyun Kim

2019 · Vol 9 (17), pp. 3613 · Author(s): Xin Min, Shouqian Sun, Honglie Wang, Xurui Zhang, Chao Li, ...

Recovering 3D human poses from video sequences is of great significance in the field of motion capture. This paper proposes a novel approach to estimating 3D human motion via end-to-end learning of a deep convolutional neural network that regresses the parameters of the Skinned Multi-Person Linear (SMPL) model. The method consists of two main stages: (1) 3D human pose estimation from a single frame. We use 2D/3D skeleton keypoint constraints, human height constraints, and generative adversarial network constraints to obtain a more accurate human-body model, and pre-train the model on open-source human pose datasets; (2) human-body pose generation from video streams. Exploiting the temporal correlation between consecutive frames, a 3D human pose recovery method based on video streams is proposed that produces smoother 3D poses. In addition, we compared the proposed 3D human pose recovery method with a commercial motion capture platform to demonstrate its effectiveness. For this comparison, we first built a motion capture platform from two Kinect (V2) devices and the iPi Soft series software to obtain depth-camera and monocular-camera video sequences, respectively. We then defined several tasks that vary the speed of the movements, the position of the subject, the orientation of the subject, and the complexity of the movements. Experimental results show that our low-cost method based on RGB video data achieves results comparable to the commercial motion capture platform that uses RGB-D video data.
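The two stages above combine per-frame keypoint and adversarial constraints with a temporal term over the video stream. The following is a minimal sketch of how such a combined training loss could look; the tensor shapes, loss weights, and function name are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def video_pose_loss(pred_theta, pred_joints3d, pred_joints2d,
                    gt_joints3d, gt_joints2d, disc_scores,
                    w2d=1.0, w3d=1.0, w_adv=0.1, w_smooth=0.5):
    """Per-frame keypoint + adversarial constraints plus a temporal smoothness term.

    pred_theta:    (T, 72)   SMPL pose parameters regressed for T video frames
    pred_joints3d: (T, J, 3) 3D joints produced by the SMPL model
    pred_joints2d: (T, J, 2) 2D projections of those joints
    gt_joints3d / gt_joints2d: ground-truth 3D / 2D keypoints
    disc_scores:   (T,)      discriminator scores for the predicted poses
    """
    loss_2d = F.mse_loss(pred_joints2d, gt_joints2d)       # 2D skeleton constraint
    loss_3d = F.mse_loss(pred_joints3d, gt_joints3d)       # 3D skeleton constraint
    loss_adv = torch.mean((disc_scores - 1.0) ** 2)        # adversarial pose prior (LSGAN-style, assumed)
    # Temporal correlation between consecutive frames -> smoother recovered motion.
    loss_smooth = torch.mean((pred_theta[1:] - pred_theta[:-1]) ** 2)
    return w2d * loss_2d + w3d * loss_3d + w_adv * loss_adv + w_smooth * loss_smooth
```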


Sensors · 2019 · Vol 19 (17), pp. 3784 · Author(s): Jameel Malik, Ahmed Elhayek, Didier Stricker

Hand shape and pose recovery is essential for many computer vision applications, such as animating a personalized hand mesh in a virtual environment. Although there are many hand pose estimation methods, only a few deep-learning-based algorithms recover 3D hand shape and pose from a single RGB or depth image. Jointly estimating hand shape and pose is very challenging because none of the existing real benchmarks provides ground-truth hand shape. For this reason, we propose a novel weakly-supervised approach for 3D hand shape and pose recovery (named WHSP-Net) from a single depth image that learns shapes from unlabeled real data and labeled synthetic data. To this end, we propose a framework consisting of three novel components. The first is a Convolutional Neural Network (CNN) based deep network that produces 3D joint positions from learned 3D bone vectors using a new layer. The second is a shape decoder that recovers a dense 3D hand mesh from the sparse joints. The third is a depth synthesizer that reconstructs a 2D depth image from the 3D hand mesh. The whole pipeline is fine-tuned in an end-to-end manner. We demonstrate that our approach recovers reasonable hand shapes from real-world datasets as well as from a live depth-camera stream in real time. Our algorithm outperforms state-of-the-art methods that output more than joint positions, and shows competitive performance on the 3D pose estimation task.
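As a rough orientation to the three-component pipeline described above (bone vectors to joints, joints to a dense mesh, mesh back to a synthesized depth image), here is a minimal structural sketch. The layer sizes, the toy kinematic chain, and the class names are assumptions for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class BoneToJointLayer(nn.Module):
    """Accumulate predicted 3D bone vectors along a kinematic chain into joint positions."""
    def __init__(self, parents):
        super().__init__()
        self.parents = parents  # parent index per joint (-1 for the root), in topological order

    def forward(self, bones):                      # bones: (B, J, 3) bone vectors
        joints = []
        for j, p in enumerate(self.parents):
            joints.append(bones[:, j] if p < 0 else joints[p] + bones[:, j])
        return torch.stack(joints, dim=1)          # (B, J, 3) joint positions

class WHSPNetSketch(nn.Module):
    """Structure only: encoder -> bone vectors -> joints -> dense mesh -> synthesized depth."""
    def __init__(self, num_joints=21, num_verts=778, parents=None):
        super().__init__()
        # Toy chain (each joint parented to the previous one), not the real hand skeleton.
        parents = parents or [-1] + list(range(num_joints - 1))
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_joints * 3))
        self.bone_to_joint = BoneToJointLayer(parents)
        self.shape_decoder = nn.Sequential(
            nn.Linear(num_joints * 3, 256), nn.ReLU(),
            nn.Linear(256, num_verts * 3))
        self.depth_synthesizer = nn.Linear(num_verts * 3, 64 * 64)

    def forward(self, depth):                      # depth: (B, 1, H, W) input depth image
        bones = self.encoder(depth).view(depth.size(0), -1, 3)
        joints = self.bone_to_joint(bones)
        mesh = self.shape_decoder(joints.flatten(1)).view(depth.size(0), -1, 3)
        synth_depth = self.depth_synthesizer(mesh.flatten(1)).view(depth.size(0), 1, 64, 64)
        return joints, mesh, synth_depth
```

The synthesized depth image is what makes weak supervision possible in such a design: on unlabeled real depth frames, a reconstruction loss between the input and the synthesized depth can supervise the shape without ground-truth meshes.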


2018 · Vol 79, pp. 63-75 · Author(s): Meysam Madadi, Sergio Escalera, Alex Carruesco, Carlos Andujar, Xavier Baró, ...

2018 · Vol 2018, pp. 1-13 · Author(s): Edmundo Guerra, Rodrigo Munguía, Yolanda Bolea, Antoni Grau

A multimodal sensor array to accurately position aerial multicopter drones with respect to pipes has been studied, and a solution exploiting both LiDAR and vision sensors has been proposed. Several challenges have been addressed, including detection of pipes and other cylindrical elements in sensor space and validation of the detected elements. A probabilistic parametric method has been applied to segment and position cylinders from LiDAR data, while several vision-based techniques have been tested to find the contours of the pipe, combined with conic estimation for cylinder pose recovery. Multiple solutions have been studied, and their results have been analyzed and evaluated. This led to an approach that combines LiDAR and vision to produce robust and accurate pipe detection. The combined solution is validated with real experimental data.
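As a hedged illustration of the LiDAR side of such a pipeline (not the authors' probabilistic parametric method), the sketch below estimates a cylinder axis and radius from an already-segmented pipe point cloud and then validates the hypothesis. It assumes the points cover the pipe surface fairly evenly; the function names and tolerance values are illustrative.

```python
import numpy as np

def estimate_pipe_cylinder(points):
    """Rough cylinder pose from LiDAR points already segmented as one pipe.

    points: (N, 3) array. Returns (axis_point, axis_direction, radius).
    Assumes fairly even coverage, so the centroid lies near the axis and the
    dominant principal direction follows the pipe.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal direction of the elongated pipe segment approximates the cylinder axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis_dir = vt[0]
    # Radius: median distance of the points from the axis line through the centroid.
    radial = centered - np.outer(centered @ axis_dir, axis_dir)
    radius = np.median(np.linalg.norm(radial, axis=1))
    return centroid, axis_dir, radius

def cylinder_inlier_ratio(points, axis_point, axis_dir, radius, tol=0.02):
    """Validation step: fraction of points within `tol` metres of the cylinder surface."""
    v = points - axis_point
    d = np.linalg.norm(v - np.outer(v @ axis_dir, axis_dir), axis=1)
    return np.mean(np.abs(d - radius) < tol)
```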

