Segmentation of Human Motion Sequence Based on Motion Primitives

Author(s): Yu-ting Zhao, Yao Wang, Shuang Wu, Xiao-jun Li, Hua Qin, ...
Author(s): Zi Hau Chin, Hu Ng, Timothy Tzen Vun Yap, Hau Lee Tong, Chiung Ching Ho, ...

2011, Vol 31 (3), pp. 330-345
Author(s): Dana Kulić, Christian Ott, Dongheui Lee, Junichi Ishikawa, Yoshihiko Nakamura

Author(s): Bir Bhanu, Ju Han

In this chapter the authors introduce the concepts behind mouse dynamics biometric technology, present a generic architecture of the detector used to collect and process mouse dynamics, and study the various factors used to build the user's signature. The authors also provide an updated survey of the research and industrial implementations related to the technology, and examine possible applications in computer security.

In this chapter, we investigate repetitive human activity patterns and individual recognition in thermal infrared imagery, where human motion can be easily detected from the background regardless of the lighting conditions and the colors of human clothing, surfaces, and backgrounds. We employ an efficient spatiotemporal representation for repetitive human activity and individual recognition, which represents a human motion sequence in a single image while preserving its spatiotemporal characteristics. A statistical approach is used to extract features for activity and individual recognition. Experimental results show that the proposed approach achieves good performance for both repetitive activity recognition and individual recognition.
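A minimal sketch of the kind of single-image spatiotemporal template the abstract describes (one image that accumulates an aligned silhouette sequence), with simple statistical features read off it. The function names and the NumPy-based band-averaging features are illustrative assumptions, not the authors' implementation.

```python
# Sketch: collapse a repetitive-motion silhouette sequence into one image
# and extract simple statistical features from it (assumed, illustrative).
import numpy as np

def motion_energy_image(silhouettes: np.ndarray) -> np.ndarray:
    """Average aligned binary silhouettes of shape (T, H, W) into one image.

    Pixels that are foreground in many frames get high values, so the single
    image preserves where and how often motion occurs over the cycle.
    """
    return silhouettes.astype(np.float32).mean(axis=0)

def cycle_features(energy_image: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Simple statistical features: mean intensity of horizontal bands."""
    bands = np.array_split(energy_image, n_bands, axis=0)
    return np.array([band.mean() for band in bands])

# Usage (hypothetical data): stack the binary masks of one activity cycle,
# then feed the feature vector to any standard classifier.
# silhouettes = np.stack(list_of_binary_masks)        # shape (T, H, W)
# features = cycle_features(motion_energy_image(silhouettes))
```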


2014, Vol 538, pp. 481-485
Author(s): Shu Lu Zhang, Dong Sheng Zhou, Qiang Zhang

In this paper, we propose a motion sequence segmentation method based on the LLE (Locally Linear Embedding) algorithm. The method reduces the dimensionality of the high-dimensional motion sequence to obtain a one-dimensional feature curve, which is then used to segment the motion sequence. Simulation results demonstrate that this method can segment motion sequences and greatly improves the accuracy rate compared with the traditional algorithm.
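The described pipeline can be sketched in a few lines: reduce the pose sequence to a one-dimensional feature curve with LLE, then place segment boundaries along that curve. The use of scikit-learn's LocallyLinearEmbedding and the turning-point boundary rule below are assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch: LLE dimensionality reduction to a 1-D curve, then segmentation
# at turning points of that curve (assumed boundary rule, not the paper's).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def segment_motion(frames: np.ndarray, n_neighbors: int = 12) -> list:
    """frames: (T, D) array of per-frame pose features.
    Returns frame indices treated as segment boundaries."""
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=1)
    curve = lle.fit_transform(frames).ravel()      # one value per frame

    # Boundaries where the curve changes direction (local minima/maxima).
    direction = np.sign(np.diff(curve))
    boundaries = np.where(np.diff(direction) != 0)[0] + 1
    return boundaries.tolist()

# Usage (hypothetical data):
# frames = np.load("walk_then_jump.npy")           # shape (T, D)
# print(segment_motion(frames))                    # candidate segment borders
```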


Author(s): Jogendra Nath Kundu, Maharshi Gor, R. Venkatesh Babu

Human motion prediction models have applications in various fields of computer vision. Without taking into account the inherent stochasticity in the prediction of future pose dynamics, such methods often converge to a deterministic, undesired mean of multiple probable outcomes. To address this, we propose a novel probabilistic generative approach called Bidirectional Human Motion Prediction GAN, or BiHMP-GAN. To be able to generate multiple probable human-pose sequences conditioned on a given starting sequence, we introduce a random extrinsic factor r, drawn from a predefined prior distribution. Furthermore, to enforce a direct content loss on the predicted motion sequence and to avoid mode collapse, a novel bidirectional framework is incorporated by modifying the usual discriminator architecture. The discriminator is also trained to regress this extrinsic factor r, which is used alongside the intrinsic factor (the encoded starting pose sequence) to generate a particular pose sequence. To further regularize the training, we introduce a novel recursive prediction strategy: despite the probabilistic framework, the enhanced discriminator architecture allows predictions of an intermediate part of the pose sequence to be used as conditioning for prediction of the later part of the sequence. The bidirectional setup also provides a new way to evaluate prediction quality against a given test sequence. For a fair assessment of BiHMP-GAN, we report performance of the generated motion sequences using (i) a critic model trained to discriminate between real and fake motion sequences, and (ii) an action classifier trained on real human motion dynamics. Outcomes of both qualitative and quantitative evaluations of the model's probabilistic generations demonstrate the superiority of BiHMP-GAN over previously available methods.
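A minimal PyTorch sketch of the core idea described above: the generator mixes the encoded starting sequence (intrinsic factor) with a sampled extrinsic factor r and predicts poses recursively, while the discriminator both scores realism and regresses r back from the sequence. Module names, layer choices, and dimensions are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a BiHMP-GAN-style generator/discriminator pair (assumed layout).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, pose_dim=63, hidden=256, r_dim=32):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(pose_dim, hidden, batch_first=True)
        self.mix = nn.Linear(hidden + r_dim, hidden)   # fuse intrinsic + r
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, past_poses, r, future_len):
        _, h = self.encoder(past_poses)                # intrinsic factor
        h = torch.tanh(self.mix(torch.cat([h[-1], r], dim=-1))).unsqueeze(0)
        pose = past_poses[:, -1:]                      # last observed pose
        preds = []
        for _ in range(future_len):                    # recursive prediction
            out, h = self.decoder(pose, h)
            pose = self.out(out)
            preds.append(pose)
        return torch.cat(preds, dim=1)                 # (B, future_len, pose_dim)

class Discriminator(nn.Module):
    def __init__(self, pose_dim=63, hidden=256, r_dim=32):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden, batch_first=True)
        self.real_fake = nn.Linear(hidden, 1)          # adversarial score
        self.r_head = nn.Linear(hidden, r_dim)         # regress r (content signal)

    def forward(self, full_sequence):
        _, h = self.rnn(full_sequence)
        return self.real_fake(h[-1]), self.r_head(h[-1])
```

In this reading, the regression head on the discriminator is what ties each generated sequence back to the extrinsic factor that produced it, giving a direct content loss while the adversarial head discourages mode collapse.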

