Human Body Joints Estimation for Clinical Jumping Analysis

Author(s):  
Liangjia Zhu ◽  
Jehoon Lee ◽  
Peter Karasev ◽  
Ivan Kolesov ◽  
John Xerogeanes ◽  
...  

2021 ◽  
Vol 11 (9) ◽  
pp. 4241
Author(s):  
Jiahua Wu ◽  
Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into the correct person instances is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster the detected joints. To achieve this goal, we propose a novel approach called Partition Pose Representation (PPR), which associates a person instance with its body joints through joint offsets. PPR encodes a human pose using the center of the human body and the offsets between that center point and the positions of the body's joints. To strengthen the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this representation, the PCP Network detects people and their body joints simultaneously and then groups all body joints according to their joint offsets. Moreover, an improved L1 loss is designed to measure joint offsets more accurately. Evaluated on the COCO keypoints and CrowdPose datasets, the proposed method performs on par with existing state-of-the-art bottom-up methods in terms of accuracy and speed.
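To make the offset-based grouping idea concrete, the following minimal NumPy sketch encodes a single person's pose as a body center plus per-joint offsets and decodes it back; the function names, the use of the joint mean as the center, and the toy coordinates are illustrative assumptions, not the PCP Network's actual implementation.

```python
import numpy as np

def encode_offsets(joints):
    """Encode one person's pose as a body center plus per-joint offsets.

    joints: (K, 2) array of (x, y) joint coordinates for one person.
    """
    center = joints.mean(axis=0)   # body center (here simply the joint mean)
    offsets = joints - center      # offset from the center to each joint
    return center, offsets

def decode_offsets(center, offsets):
    """Recover absolute joint positions from a detected center and its offsets."""
    return center + offsets

# Toy example with a 5-joint "person"
joints = np.array([[10., 20.], [12., 35.], [8., 35.], [11., 50.], [9., 50.]])
center, offsets = encode_offsets(joints)
assert np.allclose(decode_offsets(center, offsets), joints)
```

Grouping in this setting amounts to assigning each detected joint to the person center whose predicted offset best explains it.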



1998 ◽  
Vol 3 (4) ◽  
pp. 181-184 ◽  
Author(s):  
R. Lee


2014 ◽  
Vol 556-562 ◽  
pp. 4347-4351
Author(s):  
Ning Yang ◽  
Jin Tao Li ◽  
Rong Wang

Extracting the positions of lower-limb joint points is important for gait recognition because gait features are typically derived from these positions. Since the detection of human body motion directly affects gait recognition, we propose a position extraction method for lower-limb joint points in this paper. By tracking the human body centroid and localizing the lower-limb joint points, we obtain step-cycle information. Numerous experiments demonstrate that the proposed method is feasible and easy to implement: it achieves real-time tracking, improves the positioning accuracy of the body joints, and provides feature data for human gait recognition.
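As a rough illustration of the processing described above, the sketch below computes a body centroid from a binary silhouette mask and estimates step-cycle boundaries from a one-dimensional gait signal by simple peak picking; the choice of signal and the peak-separation threshold are assumptions made for illustration, not the authors' method.

```python
import numpy as np

def body_centroid(mask):
    """Centroid (x, y) of a binary silhouette mask of shape (rows, cols)."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def step_cycle_peaks(signal, min_separation=10):
    """Rough step-cycle estimate: indices of local maxima of a 1-D gait signal,
    e.g. the horizontal distance between the two ankle points in each frame."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_separation:
                peaks.append(i)
    return peaks
```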



2019 ◽  
Vol 24 (6) ◽  
pp. 53-59
Author(s):  
Denys Volodymyrovych Soldatov ◽  
Anton Yuriiovych Varfolomieiev


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3499 ◽  
Author(s):  
Wensong Chan ◽  
Zhiqiang Tian ◽  
Yang Wu

Skeleton-based action recognition has achieved great advances with the development of graph convolutional networks (GCNs). Many existing GCN-based models use only a fixed, hand-crafted adjacency matrix to describe the connections between human body joints. This omits important implicit connections between joints, which carry discriminative information for different actions. In this paper, we propose an action-specific graph convolutional module that extracts these implicit connections and balances them appropriately for each action. In addition, to filter out useless and redundant information in the temporal dimension, we propose a simple yet effective operation named gated temporal convolution. The effectiveness of these two components is demonstrated on three large-scale public datasets, NTU-RGB+D, Kinetics, and NTU-RGB+D 120, as well as in detailed ablation studies.
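The sketch below (PyTorch) illustrates the two generic ingredients the abstract refers to: a spatial graph convolution whose hand-crafted adjacency matrix is augmented by a learnable one, and a temporal convolution modulated by a sigmoid gate. Class names, tensor layout, and kernel sizes are assumptions for illustration; this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Spatial graph convolution over body joints using a fixed skeleton adjacency A
    plus a learnable residual adjacency B that can capture implicit connections."""
    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        self.register_buffer("A", A)                # hand-crafted adjacency, shape (V, V)
        self.B = nn.Parameter(torch.zeros_like(A))  # learned, data-driven connections
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                           # x: (N, C, T, V)
        adj = self.A + self.B
        x = torch.einsum("nctv,vw->nctw", x, adj)   # aggregate features over joints
        return self.proj(x)

class GatedTemporalConv(nn.Module):
    """Temporal convolution whose output is modulated by a learned sigmoid gate,
    suppressing redundant information along the time axis."""
    def __init__(self, ch, k=9):
        super().__init__()
        self.feat = nn.Conv2d(ch, ch, kernel_size=(k, 1), padding=(k // 2, 0))
        self.gate = nn.Conv2d(ch, ch, kernel_size=(k, 1), padding=(k // 2, 0))

    def forward(self, x):                           # x: (N, C, T, V)
        return self.feat(x) * torch.sigmoid(self.gate(x))
```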



2013 ◽  
Vol 333-335 ◽  
pp. 675-679
Author(s):  
Yan Tao Zhao ◽  
Bo Zhang ◽  
Xu Guang Zhang ◽  
Xiao Li Li ◽  
Mei Ling Fu ◽  
...  

This paper presents an efficient and novel framework for human action recognition based on representing the motion of human body joints using the theory of nonlinear dynamical systems. Our work is motivated by the pictorial structures model and by advances in human pose estimation. Intuitively, a collective understanding of joint movements, quantized in the polar space, can lead to a better representation and understanding of any human action. We apply time-delay embedding to the time series formed by the evolution of the body-joint variables over time in order to reconstruct phase portraits. We then train SVM models for action recognition by comparing distances between the trajectories of the body-joint variables within the reconstructed phase portraits. The proposed framework is evaluated on the MSR-Action3D dataset, and the results are compared against several state-of-the-art methods.
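Time-delay embedding itself is a standard construction; a minimal NumPy sketch is shown below, with the embedding dimension and delay chosen arbitrarily for illustration (the paper's parameter choices and trajectory-distance computation are not reproduced here).

```python
import numpy as np

def time_delay_embedding(x, dim=3, tau=2):
    """Reconstruct a phase portrait from a scalar time series x:
    each row is [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Example: embed the vertical coordinate of one joint over time
joint_y = np.sin(np.linspace(0, 8 * np.pi, 200))  # stand-in for a body-joint variable
portrait = time_delay_embedding(joint_y, dim=3, tau=5)
print(portrait.shape)  # (190, 3)
```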



Author(s):  
R. R. Coelho ◽  
S. B. Soares ◽  
L. Landau ◽  
E. H. M. Dantas ◽  
J. L. D. Alves ◽  
...  


2020 ◽  
Vol 34 (07) ◽  
pp. 13033-13040 ◽  
Author(s):  
Lu Zhou ◽  
Yingying Chen ◽  
Jinqiao Wang ◽  
Hanqing Lu

In this paper, we propose a progressive pose grammar network learned with Bi-C3D (Bidirectional Convolutional 3D) for human pose estimation. Exploiting the dependencies among human body parts proves effective for problems such as complex articulation and occlusion. We therefore propose two articulated grammars learned with Bi-C3D to model the relationships among human joints and exploit the contextual information of the body structure. First, a local multi-scale Bi-C3D kinematics grammar promotes message passing among locally related joints; this multi-scale grammar exploits the different levels of human context learned by the network. Second, a global sequential grammar captures the long-range dependencies among body joints. The whole procedure can be regarded as a local-to-global progressive refinement process. Without bells and whistles, our method achieves competitive performance on both the MPII and LSP benchmarks compared with previous methods, confirming the feasibility and effectiveness of C3D for information interaction.
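The abstract's central mechanism, message passing among related joints, can be sketched generically as below: joint feature maps arranged along a kinematic chain are refined first parent-to-child and then child-to-parent. This sketch uses plain 2D convolutions rather than the paper's Bi-C3D units, and the class name and chain ordering are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ChainMessagePassing(nn.Module):
    """Bidirectional message passing along a kinematic chain of joint feature maps:
    each joint's map is refined by a convolution of its neighbour's map,
    first in forward (parent -> child) and then in backward (child -> parent) order."""
    def __init__(self, ch, num_joints):
        super().__init__()
        self.fwd = nn.ModuleList([nn.Conv2d(ch, ch, 3, padding=1) for _ in range(num_joints - 1)])
        self.bwd = nn.ModuleList([nn.Conv2d(ch, ch, 3, padding=1) for _ in range(num_joints - 1)])

    def forward(self, maps):                    # maps: list of (N, C, H, W), one per joint, in chain order
        maps = list(maps)
        for j in range(1, len(maps)):           # forward sweep along the chain
            maps[j] = maps[j] + self.fwd[j - 1](maps[j - 1])
        for j in range(len(maps) - 2, -1, -1):  # backward sweep
            maps[j] = maps[j] + self.bwd[j](maps[j + 1])
        return maps
```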



Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3503 ◽  
Author(s):  
Konstantinos Papadopoulos ◽  
Girum Demisse ◽  
Enjie Ghorbel ◽  
Michel Antunes ◽  
Djamila Aouada ◽  
...  

The Dense Trajectories concept is one of the most successful approaches in action recognition, suitable for scenarios involving a significant amount of motion. However, due to noise and background motion, many generated trajectories are irrelevant to the actual human activity and can potentially lead to performance degradation. In this paper, we propose Localized Trajectories as an improved version of Dense Trajectories where motion trajectories are clustered around human body joints provided by RGB-D cameras and then encoded by local Bag-of-Words. As a result, the Localized Trajectories concept provides an advanced discriminative representation of actions. Moreover, we generalize Localized Trajectories to 3D by using the depth modality. One of the main advantages of 3D Localized Trajectories is that they describe radial displacements that are perpendicular to the image plane. Extensive experiments and analysis were carried out on five different datasets.
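A schematic NumPy sketch of the grouping-and-encoding step is given below: each trajectory is assigned to its nearest body joint, and the trajectories of each joint are encoded by a local Bag-of-Words histogram. The nearest-neighbour assignment, the descriptor shapes, and the normalisation are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def localized_bow(traj_points, traj_descriptors, joints, codebooks):
    """Assign trajectories to their nearest joint and build one BoW histogram per joint.

    traj_points:      (M, 2) starting point of each trajectory
    traj_descriptors: (M, D) descriptor of each trajectory
    joints:           (K, 2) body-joint positions in the same frame
    codebooks:        list of K arrays, each (W, D): one visual vocabulary per joint
    """
    # nearest joint for every trajectory
    dists = np.linalg.norm(traj_points[:, None, :] - joints[None, :, :], axis=2)  # (M, K)
    nearest = dists.argmin(axis=1)

    histograms = []
    for k, codebook in enumerate(codebooks):
        desc_k = traj_descriptors[nearest == k]
        hist = np.zeros(len(codebook))
        if len(desc_k):
            words = np.linalg.norm(
                desc_k[:, None, :] - codebook[None, :, :], axis=2).argmin(axis=1)
            hist = np.bincount(words, minlength=len(codebook)).astype(float)
        histograms.append(hist / max(hist.sum(), 1.0))  # normalised per-joint histogram
    return np.concatenate(histograms)
```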


