3D Human Pose Estimation Using Spatio-Temporal Networks with Explicit Occlusion Training

2020 ◽  
Vol 34 (07) ◽  
pp. 10631-10638
Author(s):  
Yu Cheng ◽  
Bo Yang ◽  
Bo Wang ◽  
Robby T. Tan

Estimating 3D poses from a monocular video remains a challenging task, despite the significant progress made in recent years. Generally, the performance of existing methods drops when the target person is too small or large, or when the motion is too fast or slow, relative to the scale and speed of the training data. Moreover, to our knowledge, many of these methods are not explicitly designed or trained to handle severe occlusion, which compromises their performance when occlusion occurs. Addressing these problems, we introduce a spatio-temporal network for robust 3D human pose estimation. Since humans in videos appear at different scales and move at various speeds, we apply multi-scale spatial features for 2D joint (keypoint) prediction in each individual frame, and multi-stride temporal convolutional networks (TCNs) to estimate the 3D joints. Furthermore, we design a spatio-temporal discriminator based on body structures as well as limb motions to assess whether a predicted pose forms a valid pose and a valid movement. During training, we explicitly mask out some keypoints to simulate occlusion cases ranging from minor to severe, so that our network learns to be robust to various degrees of occlusion. Since 3D ground-truth data are limited, we further utilize 2D video data to inject a semi-supervised learning capability into our network. Experiments on public datasets validate the effectiveness of our method, and our ablation studies show the strengths of the network's individual submodules.
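As a concrete illustration of the occlusion-training idea described in this abstract, here is a minimal sketch of keypoint masking as an on-the-fly data augmentation. It is not the authors' implementation; the array layout, function name, and parameters are assumptions made for illustration only.

```python
import numpy as np

def mask_keypoints(keypoints, confidences, max_masked=5, rng=None):
    """Simulate occlusion by zeroing out randomly chosen 2D keypoints.

    Hypothetical layout: keypoints is (T, J, 2) over T frames and J joints,
    confidences is (T, J). Masked joints get zero position and confidence.
    """
    if rng is None:
        rng = np.random.default_rng()
    T, J, _ = keypoints.shape
    kps = keypoints.copy()
    conf = confidences.copy()
    # Draw an occlusion severity per sequence, from none to max_masked joints.
    n_masked = int(rng.integers(0, max_masked + 1))
    joints = rng.choice(J, size=n_masked, replace=False)
    # Occlude the chosen joints over a contiguous temporal window.
    start = int(rng.integers(0, T))
    length = int(rng.integers(1, max(2, T // 2 + 1)))
    kps[start:start + length, joints] = 0.0
    conf[start:start + length, joints] = 0.0
    return kps, conf
```

Applying such masking during training exposes the network to many occlusion patterns, from brief single-joint dropouts to long multi-joint gaps, without requiring occluded ground-truth data.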

Geometry-Driven Self-Supervised Method for 3D Human Pose Estimation

2020 ◽  
Vol 34 (07) ◽  
pp. 11442-11449
Author(s):  
Yang Li ◽  
Kan Li ◽  
Shuai Jiang ◽  
Ziyue Zhang ◽  
Congzhentao Huang ◽  
...  

Neural-network-based approaches for 3D human pose estimation from monocular images have attracted growing interest. However, annotating 3D poses is a labor-intensive and expensive process. In this paper, we propose a novel self-supervised approach that avoids the need for manual annotations. Unlike existing weakly/self-supervised methods that require extra unpaired 3D ground-truth data to alleviate the depth-ambiguity problem, our method trains the network relying only on geometric knowledge, without any additional 3D pose annotations. The proposed method follows the two-stage pipeline of 2D pose estimation followed by 2D-to-3D pose lifting. We design a transform re-projection loss as an effective way to exploit multi-view consistency when training the 2D-to-3D lifting network. In addition, we use the confidences of the 2D joints to combine losses from different views, alleviating the influence of noise caused by self-occlusion. Finally, we design a two-branch training architecture that helps preserve the scale information of re-projected 2D poses during training, resulting in accurate 3D pose predictions. We demonstrate the effectiveness of our method on two popular 3D human pose datasets, Human3.6M and MPI-INF-3DHP. The results show that our method significantly outperforms recent weakly/self-supervised approaches.
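To make the confidence-weighted multi-view idea concrete, below is a minimal sketch of a re-projection loss in which each view's 2D error is weighted by the detector's joint confidences. This is an illustration under assumed interfaces (cameras as projection callables, hypothetical function name), not the paper's exact transform re-projection loss.

```python
import torch

def weighted_reprojection_loss(pred_3d, poses_2d, confs, cameras):
    """Confidence-weighted multi-view re-projection loss (sketch).

    Assumed shapes: pred_3d is (B, J, 3); poses_2d and confs hold one
    (B, J, 2) / (B, J) tensor per view; cameras are callables that
    project (B, J, 3) points into each view's image plane as (B, J, 2).
    """
    total = pred_3d.new_zeros(())
    for pose_2d, conf, project in zip(poses_2d, confs, cameras):
        proj = project(pred_3d)                    # re-project into this view
        err = ((proj - pose_2d) ** 2).sum(dim=-1)  # per-joint squared error, (B, J)
        # Down-weight joints the 2D detector is unsure about,
        # e.g. those affected by self-occlusion.
        total = total + (conf * err).mean()
    return total / len(cameras)
```

Weighting by confidence means a joint the 2D detector flags as unreliable contributes little to the loss, which mitigates the noise that self-occluded views would otherwise inject into training.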


Author(s):  
Jinbao Wang ◽  
Shujie Tan ◽  
Xiantong Zhen ◽  
Shuo Xu ◽  
Feng Zheng ◽  
...  

2020 ◽  
Vol 2 (6) ◽  
pp. 471-500
Author(s):  
Xiaopeng Ji ◽  
Qi Fang ◽  
Junting Dong ◽  
Qing Shuai ◽  
Wen Jiang ◽  
...  
